{"id":"H-14","title":"Studying the Use of Popular Destinations to Enhance Web Search Interaction","abstract":"We present a novel Web search interaction feature which, for a given query, provides links to websites frequently visited by other users with similar information needs. These popular destinations complement traditional search results, allowing direct navigation to authoritative resources for the query topic. Destinations are identified using the history of search and browsing behavior of many users over an extended time period, whose collective behavior provides a basis for computing source authority. We describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries, as well as with traditional, unaided Web search. Results show that search enhanced by destination suggestions outperforms other systems for exploratory tasks, with best performance obtained from mining past user behavior at query-level granularity.","lvl-1":"Studying the Use of Popular Destinations to Enhance Web Search Interaction Ryen W. 
White Microsoft Research One Microsoft Way Redmond, WA 98052 ryenw@microsoft.com Mikhail Bilenko Microsoft Research One Microsoft Way Redmond, WA 98052 mbilenko@microsoft.com Silviu Cucerzan Microsoft Research One Microsoft Way Redmond, WA 98052 silviu@microsoft.com ABSTRACT We present a novel Web search interaction feature which, for a given query, provides links to websites frequently visited by other users with similar information needs.\nThese popular destinations complement traditional search results, allowing direct navigation to authoritative resources for the query topic.\nDestinations are identified using the history of search and browsing behavior of many users over an extended time period, whose collective behavior provides a basis for computing source authority.\nWe describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries, as well as with traditional, unaided Web search.\nResults show that search enhanced by destination suggestions outperforms other systems for exploratory tasks, with best performance obtained from mining past user behavior at query-level granularity.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - search process.\nGeneral Terms Human Factors, Experimentation.\n1.\nINTRODUCTION The problem of improving queries sent to Information Retrieval (IR) systems has been studied extensively in IR research [4][11].\nAlternative query formulations, known as query suggestions, can be offered to users following an initial query, allowing them to modify the specification of their needs provided to the system, leading to improved retrieval performance.\nRecent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [10].\nLeveraging the decision-making processes of many users for query 
reformulation has its roots in adaptive indexing [8].\nIn recent years, applying such techniques has become possible at a much larger scale and in a different context than what was proposed in early work.\nHowever, interaction-based approaches to query suggestion may be less potent when the information need is exploratory, since a large proportion of user activity for such information needs may occur beyond search engine interactions.\nIn cases where directed searching is only a fraction of users' information-seeking behavior, the utility of other users' clicks over the space of top-ranked results may be limited, as it does not cover the subsequent browsing behavior.\nAt the same time, user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users, which may be particularly valuable for exploratory search tasks.\nThus, we propose exploiting a combination of past searching and browsing user behavior to enhance users' Web search interactions.\nBrowser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions.\nIn previous work, such data have been used to improve search result ranking by Agichtein et al. 
[1].\nHowever, this approach only considers page visitation statistics independently of each other, not taking into account the pages' relative positions on post-query browsing paths.\nRadlinski and Joachims [13] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations, yet their approach does not consider users' interactions beyond the search result page.\nIn this paper, we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages, referred to as destinations henceforth, in addition to the regular search results.\nThe destinations may not be among the top-ranked results, may not contain the queried terms, or may not even be indexed by the search engine.\nInstead, they are pages at which other users end up frequently after submitting the same or similar queries and then browsing away from initially clicked search results.\nWe conjecture that destinations popular across a large number of users can capture the collective user experience for information needs, and our results support this hypothesis.\nIn prior work, O'Day and Jeffries [12] identified teleportation as an information-seeking strategy employed by users jumping to their previously-visited information targets, while Anderson et al. 
[2] applied similar principles to support the rapid navigation of Web sites on mobile devices.\nIn [19], Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users.\nHowever, we are not aware of such principles being applied to Web search.\nResearch in the area of recommender systems has also addressed similar issues, but in areas such as question-answering [9] and relatively small online communities [16].\nPerhaps the nearest instantiation of teleportation is search engines' offering of several within-domain shortcuts below the title of a search result.\nWhile these may be based on user behavior and possibly site structure, the user saves at most one click from this feature.\nIn contrast, our proposed approach can transport users to locations many clicks beyond the search result, saving time and giving them a broader perspective on the available related information.\nThe user study we conducted investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages.\nWe compare two variants of this approach against the suggestion of related queries and unaided Web search, and seek answers to questions on: (i) user preference and search effectiveness for known-item and exploratory search tasks, and (ii) the preferred distance between query and destination used to identify popular destinations from past behavior logs.\nThe results indicate that suggesting popular destinations to users attempting exploratory tasks provides best results in key aspects of the information-seeking experience, while providing query refinement suggestions is most desirable for known-item tasks.\nThe remainder of the paper is structured as follows.\nIn Section 2 we describe the extraction of search and browsing trails from user activity logs, and their use in identifying top destinations for new queries.\nSection 3 describes the design of the user study, while Sections 4
and 5 present the study findings and their discussion, respectively.\nWe conclude in Section 6 with a summary.\n2.\nSEARCH TRAILS AND DESTINATIONS We used Web activity logs containing searching and browsing activity collected with permission from hundreds of thousands of users over a five-month period between December 2005 and April 2006.\nEach log entry included an anonymous user identifier, a timestamp, a unique browser window identifier, and the URL of a visited Web page.\nThis information was sufficient to reconstruct temporally ordered sequences of viewed pages that we refer to as trails.\nIn this section, we summarize the extraction of trails, their features, and destinations (trail end-points).\nIn-depth description and analysis of trail extraction are presented in [20].\n2.1 Trail Extraction For each user, interaction logs were grouped based on browser identifier information.\nWithin each browser instance, participant navigation was summarized as a path known as a browser trail, from the first to the last Web page visited in that browser.\nLocated within some of these trails were search trails that originated with a query submission to a commercial search engine such as Google, Yahoo!, Windows Live Search, and Ask.\nIt is these search trails that we use to identify popular destinations.\nAfter originating with a query submission to a search engine, trails proceed until a point of termination where it is assumed that the user has completed their information-seeking activity.\nTrails must contain pages that are either: search result pages, search engine homepages, or pages connected to a search result page via a sequence of clicked hyperlinks.\nExtracting search trails using this methodology also goes some way toward handling multi-tasking, where users run multiple searches concurrently.\nSince users may open a new browser window (or tab) for each task [18], each task has its own browser trail, and a corresponding distinct search trail.\nTo reduce the amount 
of noise from pages unrelated to the active search task that may pollute our data, search trails are terminated when one of the following events occurs: (1) a user returns to their homepage, checks e-mail, logs in to an online service (e.g., MySpace or del.icio.us), types a URL or visits a bookmarked page; (2) a page is viewed for more than 30 minutes with no activity; (3) the user closes the active browser window.\nIf a page (at step i) meets any of these criteria, the trail is assumed to terminate on the previous page (i.e., step i - 1).\nThere are two types of search trails we consider: session trails and query trails.\nSession trails transcend multiple queries and terminate only when one of the three termination criteria above is satisfied.\nQuery trails use the same termination criteria as session trails, but also terminate upon submission of a new query to a search engine.\nApproximately 14 million query trails and 4 million session trails were extracted from the logs.\nWe now describe some trail features.\n2.2 Trail and Destination Analysis Table 1 presents summary statistics for the query and session trails.\nDifferences in user interaction between the last domain on the trail (Domain n) and all domains visited earlier (Domains 1 to (n - 1)) are particularly important, because they highlight the wealth of user behavior data not captured by logs of search engine interactions.\nStatistics are averages for all trails with two or more steps (i.e., those trails where at least one search result was clicked).\nTable 1.\nSummary statistics (mean averages) for search trails.\nMeasure | Query trails | Session trails\nNumber of unique domains | 2.0 | 4.3\nTotal page views - All domains | 4.8 | 16.2\nTotal page views - Domains 1 to (n - 1) | 1.4 | 10.1\nTotal page views - Domain n (destination) | 3.4 | 6.2\nTotal time spent (secs) - All domains | 172.6 | 621.8\nTotal time spent (secs) - Domains 1 to (n - 1) | 70.4 | 397.6\nTotal time spent (secs) - Domain n (destination) | 102.3 | 224.1\nThe statistics suggest that users generally browse far from the search results page (i.e., around 5 steps), and
visit a range of domains during the course of their search.\nOn average, users visit 2 unique (non-search-engine) domains per query trail, and just over 4 unique domains per session trail.\nThis suggests that users often do not find all the information they seek on the first domain they visit.\nFor query trails, users also visit more pages, and spend significantly longer, on the last domain in the trail compared to all previous domains combined.1 These distinctions of the last domains in the trails may indicate user interest, page utility, or page relevance.2
1 Independent measures t-test: t(~60M) = 3.89, p < .001
2 The topical relevance of the destinations was tested for a subset of around ten thousand queries for which we had human judgments.\nThe average rating of most of the destinations lay between good and excellent.\nVisual inspection of those that did not lie in this range revealed that many were either relevant but had no judgments, or were related but had indirect query association (e.g., petfooddirect.com for query [dogs]).
2.3 Destination Prediction For frequent queries, the most popular destinations identified from Web activity logs could be simply stored for future lookup at search time.\nHowever, we have found that over the five-month period covered by our dataset, 56.9% of queries are unique, and 97% of queries occur 10 or fewer times, accounting for 19.8% and 66.3% of all searches respectively (these numbers are comparable to those reported in previous studies of search engine query logs [15,17]).\nTherefore, a lookup-based approach would prevent us from reliably suggesting destinations for a large fraction of searches.\nTo overcome this problem, we utilize a simple term-based prediction model.\nAs discussed above, we extract two types of destinations: query destinations and session destinations.\nFor both destination types, we obtain a corpus of query-destination pairs and use it to construct a term-vector representation of destinations that is analogous to the classic tf.idf document representation in traditional IR [14].\nThen, given a new query q consisting of k terms t1...tk, we identify highest-scoring destinations using the following similarity function:
sim(q, d) = \u2211 i=1..k wq(ti) \u00b7 wd(ti)
where the query and destination term weights, wq(ti) and wd(ti), are computed using standard tf.idf weighting and query- and user-session-normalized smoothed tf.idf weighting, respectively.\nWhile exploring alternative algorithms for the destination prediction task remains an interesting challenge for future work, results of the user study described in subsequent sections demonstrate that this simple approach provides robust, effective results.
3.\nSTUDY To examine the usefulness of destinations, we conducted a user study investigating the perceptions and performance of 36 subjects on four Web search systems, two with destination suggestions.
3.1 Systems Four systems were used in this study: a baseline Web search system with no explicit support for query refinement (Baseline), a search system with a query suggestion method that recommends additional queries (QuerySuggestion), and two systems that augment baseline Web search with destination suggestions using either end-points of query trails (QueryDestination), or end-points of session trails (SessionDestination).
3.1.1 System 1: Baseline To establish baseline performance against which other systems can be compared, we developed a masked interface to a popular search engine without additional support in formulating queries.\nThis system presented the user-constructed query to the search engine and returned the ten top-ranking documents retrieved by the engine.\nTo remove potential bias that may have been caused by subjects' prior perceptions, we removed all identifying information such as search engine logos and distinguishing interface features.
3.1.2 System 2: QuerySuggestion In addition to the basic search functionality offered by Baseline, QuerySuggestion provides suggestions about further query refinements that searchers can make following an initial query submission.\nThese suggestions are computed using the search engine query log over the timeframe used for trail generation.\nFor each target query, we retrieve two sets of candidate suggestions that contain the target query as a substring.\nOne set is composed of the 100 most frequent such queries, while the second set contains the 100 most frequent queries that followed the target query in query logs.\nEach candidate query is then scored by multiplying its smoothed overall frequency by its smoothed frequency of following the target query in past search sessions, using Laplacian smoothing.\nBased on these scores, six top-ranked query suggestions are returned.\nIf fewer than six suggestions are found, iterative backoff is performed using progressively longer suffixes of the target query; a similar strategy is described in [10].\nSuggestions were offered in a box positioned on the top-right of the result page, adjacent to the search results.\nFigure 1a shows the position of the suggestions on the page.\nFigure 1b shows a zoomed view of the portion of the results page containing the suggestions offered for the query [hubble telescope].\nTo the left of each query suggestion is an icon similar to a progress bar that encodes its normalized popularity.\nClicking a suggestion retrieves new search results for that query.
Figure 1.\nQuery suggestion presentation in QuerySuggestion: (a) position of suggestions; (b) zoomed suggestions.
3.1.3 System 3: QueryDestination QueryDestination uses an interface similar to QuerySuggestion.\nHowever, instead of showing query refinements for the submitted query, QueryDestination suggests up to six destinations frequently visited by other users who submitted queries similar to the current one, computed as described in the previous section.3 Figure 2a shows the position of the destination suggestions on the search results page.\nFigure 2b shows a zoomed view of the portion of the results page with destinations suggested for the query [hubble telescope].
Figure 2.\nDestination presentation in QueryDestination: (a) position of destinations; (b) zoomed destinations.
To keep the interface uncluttered, the page title of each destination is shown on hover over the page URL (shown in Figure 2b).\nNext to the destination name, there is a clickable icon that allows the user to execute a search for the current query within the destination domain displayed.\nWe show destinations as a separate list, rather than increasing their search result rank, since they may topically deviate from the original query (e.g., those focusing on related topics or not containing the original query terms).
3.1.4 System 4: SessionDestination The interface functionality in SessionDestination is analogous to QueryDestination.\nThe only difference between the two systems is the definition of trail end-points for queries used in computing top destinations.\nQueryDestination directs users to the domains others end up at for the active or similar queries.\nIn contrast, SessionDestination directs users to the domains other users visit at the end of the search session that follows the active or similar queries.\nThis downgrades the effect of multiple query iterations (i.e., we only care where users end up after submitting all queries), rather than directing searchers to potentially irrelevant domains that may precede a query reformulation.
3.2 Research Questions We were interested in determining the value of popular destinations.\nTo do this we attempt to answer the following research questions:
3 To improve reliability, in a similar way to QuerySuggestion, destinations are only shown if their popularity exceeds a frequency threshold.
RQ1: Are popular destinations preferable and more effective than query refinement suggestions and unaided Web search for: a. 
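The term-based destination prediction of Section 2.3 can be sketched as a weighted dot product over query terms. This is a hedged illustration, not the authors' exact model: the paper's smoothed, query-/session-normalized destination weights are replaced here by plain log-scaled tf.idf, and the binary query-term weight is an assumption.

```python
import math
from collections import Counter, defaultdict


def build_destination_vectors(query_destination_pairs):
    """Build tf.idf-style term vectors for destinations from a corpus of
    (query, destination) pairs, analogous to document vectors in IR.
    The exact weighting here is an illustrative assumption."""
    term_freq = defaultdict(Counter)   # destination -> term counts
    doc_freq = Counter()               # term -> number of destinations with it
    for query, dest in query_destination_pairs:
        for term in query.lower().split():
            if term_freq[dest][term] == 0:
                doc_freq[term] += 1
            term_freq[dest][term] += 1
    n = len(term_freq)
    return {dest: {t: (1 + math.log(c)) * math.log(n / doc_freq[t])
                   for t, c in counts.items()}
            for dest, counts in term_freq.items()}


def top_destinations(query, vectors, k=6):
    """Score each destination by sum over query terms of wq(t) * wd(t);
    wq(t) is binary in this sketch. Returns up to k destinations."""
    q_terms = set(query.lower().split())
    scores = {d: sum(w for t, w in vec.items() if t in q_terms)
              for d, vec in vectors.items()}
    return sorted((d for d in scores if scores[d] > 0),
                  key=scores.get, reverse=True)[:k]
```

A lookup table would serve frequent queries directly; this term-vector fallback is what lets the system suggest destinations for the long tail of rare and unseen queries.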
Searches that are well-defined (known-item tasks)?\nb. Searches that are ill-defined (exploratory tasks)?\nRQ2: Should popular destinations be taken from the end of query trails or the end of session trails?\n3.3 Subjects 36 subjects (26 males and 10 females) participated in our study.\nThey were recruited through an email announcement within our organization where they hold a range of positions in different divisions.\nThe average age of subjects was 34.9 years (max=62, min=27, SD=6.2).\nAll are familiar with Web search, and conduct 7.5 searches per day on average (SD=4.1).\nThirty-one subjects (86.1%) reported general awareness of the query refinements offered by commercial Web search engines.\n3.4 Tasks Since the search task may influence information-seeking behavior [4], we made task type an independent variable in the study.\nWe constructed six known-item tasks and six open-ended, exploratory tasks that were rotated between systems and subjects as described in the next section.\nFigure 3 shows examples of the two task types.\nKnown-item task Identify three tropical storms (hurricanes and typhoons) that have caused property damage and\/or loss of life.\nExploratory task You are considering purchasing a Voice Over Internet Protocol (VoIP) telephone.\nYou want to learn more about VoIP technology and providers that offer the service, and select the provider and telephone that best suits you.\nFigure 3.\nExamples of known-item and exploratory tasks.\nExploratory tasks were phrased as simulated work task situations [5], i.e., short search scenarios that were designed to reflect real-life information needs.\nThese tasks generally required subjects to gather background information on a topic or gather sufficient information to make an informed decision.\nThe known-item search tasks required search for particular items of information (e.g., activities, discoveries, names) for which the target was well-defined.\nA similar task classification has been used successfully in
previous work [21].\nTasks were taken and adapted from the Text Retrieval Conference (TREC) Interactive Track [7], and questions posed on question-answering communities (Yahoo! Answers, Google Answers, and Windows Live QnA).\nTo motivate the subjects during their searches, we allowed them to select two known-item and two exploratory tasks at the beginning of the experiment from the six possibilities for each category, before seeing any of the systems or having the study described to them.\nPrior to the experiment all tasks were pilot tested with a small number of different subjects to help ensure that they were comparable in difficulty and selectability (i.e., the likelihood that a task would be chosen given the alternatives).\nPost-hoc analysis of the distribution of tasks selected by subjects during the full study showed no preference for any task in either category.\n3.5 Design and Methodology The study used a within-subjects experimental design.\nSystem had four levels (corresponding to the four experimental systems) and search tasks had two levels (corresponding to the two task types).\nSystem and task-type order were counterbalanced according to a Graeco-Latin square design.\nSubjects were tested independently and each experimental session lasted for up to one hour.\nWe adhered to the following procedure: 1.\nUpon arrival, subjects were asked to select two known-item and two exploratory tasks from the six tasks of each type.\n2.\nSubjects were given an overview of the study in written form that was read aloud to them by the experimenter.\n3.\nSubjects completed a demographic questionnaire focusing on aspects of search experience.\n4.\nFor each of the four interface conditions: a. Subjects were given an explanation of interface functionality lasting around 2 minutes.\nb. Subjects were instructed to attempt the task on the assigned system searching the Web, and were allotted up to 10 minutes to do so.\nc. 
Upon completion of the task, subjects were asked to complete a post-search questionnaire.\n5.\nAfter completing the tasks on the four systems, subjects answered a final questionnaire comparing their experiences on the systems.\n6.\nSubjects were thanked and compensated.\nIn the next section we present the findings of this study.\n4.\nFINDINGS In this section we use the data derived from the experiment to address our hypotheses about query suggestions and destinations, providing information on the effect of task type and topic familiarity where appropriate.\nParametric statistical testing is used in this analysis and the level of significance is set to p < .05, unless otherwise stated.\nAll Likert scales and semantic differentials used a 5-point scale where a rating closer to one signifies more agreement with the attitude statement.\n4.1 Subject Perceptions In this section we present findings on how subjects perceived the systems that they used.\nResponses to post-search (per-system) and final questionnaires are used as the basis for our analysis.\n4.1.1 Search Process To address the first research question, we wanted insight into subjects' perceptions of the search experience on each of the four systems.\nIn the post-search questionnaires, we asked subjects to complete four 5-point semantic differentials indicating their responses to the attitude statement: The search we asked you to perform was: The paired stimuli offered as responses were: relaxing\/stressful, interesting\/boring, restful\/tiring, and easy\/difficult.\nThe average obtained differential values are shown in Table 2 for each system and each task type.\nThe value corresponding to the differential All represents the mean of all four differentials, providing an overall measure of subjects' feelings.
Table 2.\nPerceptions of search process (lower = better).
Differential | Known-item: B QS QD SD | Exploratory: B QS QD SD
Easy | 2.6 1.6 1.7 2.3 | 2.5 2.6 1.9 2.9
Restful | 2.8 2.3 2.4 2.6 | 2.8 2.8 2.4 2.8
Interesting | 2.4 2.2 1.7 2.2 | 2.2 1.8 1.8 2.0
Relaxing | 2.6 1.9 2.0 2.2 | 2.5 2.8 2.3 2.9
All | 2.6 2.0 1.9 2.3 | 2.5 2.5 2.1 2.7
Each cell in Table 2 summarizes subject responses for 18 task-system pairs (18 subjects who ran a known-item task on Baseline (B), 18 subjects who ran an exploratory task on QuerySuggestion (QS), etc.).\nThe most positive response across all systems for each differential-task pair is shown in bold.\nWe applied two-way analysis of variance (ANOVA) to each differential across all four systems and two task types.\nSubjects found the search easier on QuerySuggestion and QueryDestination than the other systems for known-item tasks.4 For exploratory tasks, only searches conducted on QueryDestination were easier than on the other systems.5 Subjects indicated that exploratory tasks on the three non-baseline systems were more stressful (i.e., less relaxing) than the known-item tasks.6 As we will discuss in more detail in Section 4.1.3, subjects regarded the familiarity of Baseline as a strength, and may have struggled to attempt a more complex task while learning a new interface feature such as query or destination suggestions.\n4.1.2 Interface Support We solicited subjects' opinions on the search support offered by QuerySuggestion, QueryDestination, and SessionDestination.\nThe following Likert scales and semantic differentials were used: \u2022 Likert scale A: Using this system enhances my effectiveness in finding relevant information.\n(Effectiveness)7 \u2022 Likert scale B: The queries\/destinations suggested helped me get closer to my information goal.\n(CloseToGoal) \u2022 Likert scale C: I would re-use the queries\/destinations suggested if I encountered a similar task in the future.\n(Re-use) \u2022 Semantic differential A: The queries\/destinations suggested by the system were: relevant\/irrelevant, useful\/useless, appropriate\/inappropriate.\nWe did not include these in the post-search questionnaire when subjects used the Baseline system as they refer to interface
support options that Baseline did not offer.\nTable 3 presents the average responses for each of these scales and differentials, using the labels after each of the first three Likert scales in the bulleted list above.\nThe values for the three semantic differentials are included at the bottom of the table, as is their overall average under All.
Table 3.\nPerceptions of system support (lower = better).
Scale \/ Differential | Known-item: QS QD SD | Exploratory: QS QD SD
Effectiveness | 2.7 2.5 2.6 | 2.8 2.3 2.8
CloseToGoal | 2.9 2.7 2.8 | 2.7 2.2 3.1
Re-use | 2.9 3.0 2.4 | 2.5 2.5 3.2
1 Relevant | 2.6 2.5 2.8 | 2.4 2.0 3.1
2 Useful | 2.6 2.7 2.8 | 2.7 2.1 3.1
3 Appropriate | 2.6 2.4 2.5 | 2.4 2.4 2.6
All {1,2,3} | 2.6 2.6 2.6 | 2.6 2.3 2.9
The results show that all three experimental systems improved subjects' perceptions of their search effectiveness over Baseline, although only QueryDestination did so significantly.8 Further examination of the effect size (measured using Cohen's d) revealed that QueryDestination affects search effectiveness most positively.9 QueryDestination also appears to get subjects closer to their information goal (CloseToGoal) than QuerySuggestion or SessionDestination, although only for exploratory search tasks.10
4 easy: F(3,136) = 4.71, p = .0037; Tukey post-hoc tests: all p \u2264 .008
5 easy: F(3,136) = 3.93, p = .01; Tukey post-hoc tests: all p \u2264 .012
6 relaxing: F(1,136) = 6.47, p = .011
7 This question was conditioned on subjects' use of Baseline and their previous Web search experiences.
8 F(3,136) = 4.07, p = .008; Tukey post-hoc tests: all p \u2264 .002
9 QS: d(K,E) = (.26, .52); QD: d(K,E) = (.77, 1.50); SD: d(K,E) = (.48, .28)
Additional comments on QuerySuggestion conveyed that subjects saw it as a convenience (to save them typing a reformulation) rather than a way to dramatically influence the outcome of their search.\nFor exploratory searches, users benefited more from being pointed to alternative information sources than from suggestions for iterative
refinements of their queries.\nOur findings also show that our subjects felt that QueryDestination produced more relevant and useful suggestions for exploratory tasks than the other systems.11 All other observed differences between the systems were not statistically significant.12 The difference between the performance of QueryDestination and SessionDestination is explained by the approach used to generate destinations (described in Section 2).\nSessionDestination's recommendations came from the end of users' session trails that often transcend multiple queries.\nThis increases the likelihood that topic shifts adversely affect their relevance.\n4.1.3 System Ranking In the final questionnaire that followed completion of all tasks on all systems, subjects were asked to rank the four systems in descending order based on their preferences.\nTable 4 presents the mean average rank assigned to each of the systems.
Table 4.\nRelative ranking of systems (lower = better).
Systems | Baseline | QSuggest | QDest | SDest
Ranking | 2.47 | 2.14 | 1.92 | 2.31
These results indicate that subjects preferred QuerySuggestion and QueryDestination overall.\nHowever, none of the differences between systems' ratings are significant.13 One possible explanation for these systems being rated higher could be that although the popular destination systems performed well for exploratory searches while QuerySuggestion performed well for known-item searches, an overall ranking merges these two performances.\nThis relative ranking reflects subjects' overall perceptions, but does not separate them for each task category.\nOver all tasks there appeared to be a slight preference for QueryDestination, but as other results show, the effect of task type on subjects' perceptions is significant.\nThe final questionnaire also included open-ended questions that asked subjects to explain their system ranking, and describe what they liked and disliked about each system: Baseline: Subjects who preferred Baseline commented on
the familiarity of the system (e.g., was familiar and I didn't end up using suggestions (S36)).\nThose who did not prefer this system disliked the lack of support for query formulation (Can be difficult if you don't pick good search terms (S20)) and difficulty locating relevant documents (e.g., Difficult to find what I was looking for (S13); Clunky current technology (S30)).\nQuerySuggestion: Subjects who rated QuerySuggestion highest commented on rapid support for query formulation (e.g., was useful in (1) saving typing (2) coming up with new ideas for query expansion (S12); helps me better phrase the search term (S24); made my next query easier (S21)).\nThose who did not prefer this system criticized suggestion quality (e.g., Not relevant (S11); Popular queries weren't what I was looking for (S18)) and the quality of results they led to (e.g., Results (after clicking on suggestions) were of low quality (S35); Ultimately unhelpful (S1)).
10 F(2,102) = 5.00, p = .009; Tukey post-hoc tests: all p \u2264 .012
11 F(2,102) = 4.01, p = .01; \u03b1 = .0167
12 Tukey post-hoc tests: all p \u2265 .143
13 One-way repeated measures ANOVA: F(3,105) = 1.50, p = .22
QueryDestination: Subjects who preferred this system commented mainly on support for accessing new information sources (e.g., provided potentially helpful and new areas \/ domains to look at (S27)) and bypassing the need to browse to these pages (Useful to try to 'cut to the chase' and go where others may have found answers to the topic (S3)).\nThose who did not prefer this system commented on the lack of specificity in the suggested domains (Should just link to site-specific query, not site itself (S16); Sites were not very specific (S24); Too general\/vague (S28)14), and the quality of the suggestions (Not relevant (S11); Irrelevant (S6)).\nSessionDestination: Subjects who preferred this system commented on the utility of the suggested domains (suggestions make an awful lot of sense in providing search
assistance, and seemed to help very nicely (S5)). However, more subjects commented on the irrelevance of the suggestions (e.g., did not seem reliable, not much help (S30); Irrelevant, not my style (S21)) and on the related need to include explanations of why the suggestions were offered (e.g., Low-quality results, not enough information presented (S35)).

These comments demonstrate a diverse range of perspectives on different aspects of the experimental systems. Work is clearly needed to improve the quality of the suggestions in all systems, but subjects seemed able to distinguish the settings in which each system may be useful. Even though all systems can at times offer irrelevant suggestions, subjects appeared to prefer having them rather than not (e.g., one subject remarked that suggestions were helpful in some cases and harmless in all (S15)).

4.1.4 Summary
The findings obtained from our study of subjects' perceptions of the four systems indicate that subjects tend to prefer QueryDestination for the exploratory tasks and QuerySuggestion for the known-item searches. Suggestions to incrementally refine the current query may be preferred by searchers on known-item tasks, when they may have just missed their information target. However, when the task is more demanding, searchers appreciate suggestions that have the potential to dramatically influence the direction of a search or to greatly improve topic coverage.

4.2 Search Tasks
To gain a better understanding of how subjects performed during the study, we analyzed data captured on their perceptions of task completeness and on the time it took them to complete each task.

4.2.1 Subject Perceptions
In the post-search questionnaire, subjects were asked to indicate on a 5-point Likert scale the extent to which they agreed with the following attitude statement: I believe I have succeeded in my performance of this task (Success). In addition, they were asked to complete three 5-point semantic differentials
indicating their response to the attitude statement: The task we asked you to perform was: The paired stimuli offered as possible responses were clear/unclear, simple/complex, and familiar/unfamiliar. Table 4 presents the mean response to these statements for each system and task type.

¹⁴ Although the destination systems provided support for search within a domain, subjects mainly chose to ignore it.

Table 4. Perceptions of task and task success (lower = better).

                   Known-item           Exploratory
  Scale          B    QS   QD   SD    B    QS   QD   SD
  Success       2.0  1.3  1.4  1.4   2.8  2.3  1.4  2.6
  1 Clear       1.2  1.1  1.1  1.1   1.6  1.5  1.5  1.6
  2 Simple      1.9  1.4  1.8  1.8   2.4  2.9  2.4  3.0
  3 Familiar    2.2  1.9  2.0  2.2   2.6  2.5  2.7  2.7
  All {1,2,3}   1.8  1.4  1.6  1.8   2.2  2.2  2.2  2.3

Subject responses demonstrate that subjects felt their searches had been more successful with QueryDestination for exploratory tasks than with the other three systems (i.e., there was a two-way interaction between these two variables).¹⁵ In addition, subjects perceived a significantly greater sense of completion on known-item tasks than on exploratory tasks.¹⁶ Subjects also found the known-item tasks more simple, clear, and familiar.¹⁷ These responses confirm the differences in the nature of the two task types that we had envisaged when planning the study. As illustrated by the examples in Figure 3, the known-item tasks required subjects to retrieve a finite set of answers (e.g., find three interesting things to do during a weekend visit to Kyoto, Japan). In contrast, the exploratory tasks were multi-faceted, requiring subjects to find out more about a topic or to gather sufficient information to make a decision. The end-point of such tasks was less well-defined and may have affected subjects' perceptions of when they had completed the task. Given that there was no difference in the tasks attempted on each system, the perception of the tasks' simplicity, clarity, and familiarity should theoretically have been the same for all
systems. However, we observe a clear interaction effect between the system and subjects' perceptions of the actual tasks.

4.2.2 Task Completion Time
In addition to asking subjects to indicate the extent to which they felt the task was complete, we also monitored the time it took them to indicate to the experimenter that they had finished. The elapsed time from when the subject began issuing their first query until they indicated that they were done was measured with a stopwatch and recorded for later analysis. A stopwatch, rather than system logging, was used because we wanted to record the time regardless of system interactions. Figure 4 shows the average task completion time for each system and each task type.

Figure 4. Mean average task completion time (± SEM). [Bar chart; mean times in seconds: known-item: Baseline 348.8, QSuggest 272.3, QDestination 232.3, SDestination 359.8; exploratory: Baseline 513.7, QSuggest 467.8, QDestination 474.2, SDestination 472.2.]

¹⁵ F(3,136) = 6.34, p = .001
¹⁶ F(1,136) = 18.95, p < .001
¹⁷ F(1,136) = 6.82, p = .028; known-item tasks were also more simple on QS (F(3,136) = 3.93, p = .01; Tukey post-hoc test: p = .01); α = .167

As can be seen in the figure, the task completion times for the known-item tasks differ greatly between systems.¹⁸ Subjects attempting these tasks on QueryDestination and QuerySuggestion completed them in less time than subjects on Baseline and SessionDestination.¹⁹ As discussed in the previous section, subjects were more familiar with the known-item tasks, and felt they were simpler and clearer. Baseline may have taken longer than the other systems because users had no additional support and had to formulate their own queries. Subjects generally felt that the recommendations offered by SessionDestination were of low relevance and usefulness; consequently, completion time on that system increased slightly, perhaps because subjects assessed the value of the proposed suggestions
but reaped little benefit from them. The task completion times for the exploratory tasks were approximately equal on all four systems,²⁰ although the time on Baseline was slightly higher. Since these tasks had no clearly defined termination criteria (i.e., the subject decided when they had gathered sufficient information), subjects generally spent longer searching, and consulted a broader range of information sources, than in the known-item tasks.

4.2.3 Summary
Analysis of subjects' perceptions of the search tasks and of task completion shows that the QuerySuggestion system made subjects feel more successful (and the task more simple, clear, and familiar) for the known-item tasks. On the other hand, QueryDestination led to heightened perceptions of search success and task ease, clarity, and familiarity for the exploratory tasks. Task completion times on both systems were significantly lower than on the other systems for known-item tasks.

4.3 Subject Interaction
We now focus our analysis on the observed interactions between searchers and systems. As well as eliciting feedback on each system from our subjects, we recorded several aspects of their interaction with each system in log files. In this section, we analyze three aspects of interaction: query iterations, search-result clicks, and subject engagement with the additional interface features offered by the three non-baseline systems.

4.3.1 Queries and Result Clicks
Searchers typically interact with search systems by submitting queries and clicking on search results. Although our systems offer additional interface affordances, we begin by analyzing the querying and clickthrough behavior of our subjects to better understand how they conducted these core search activities. Table 5 shows the average number of query iterations and search-result clicks for each system-task pair. The average value in each cell is computed over 18 subjects per task type and system. Table
5. Average query iterations and result clicks (per task).

                   Known-item           Exploratory
  Measure        B    QS   QD   SD    B    QS   QD   SD
  Queries       1.9  4.2  1.5  2.4   3.1  5.7  2.7  3.5
  Result clicks 2.6  2.0  1.7  2.4   3.4  4.3  2.3  5.1

Subjects submitted fewer queries and clicked on fewer search results in QueryDestination than in any of the other systems.²¹ As discussed in the previous section, subjects using this system felt more successful in their searches, yet they exhibited less of the traditional query and result-click interaction required for success on traditional search systems. It may be that subjects' queries on this system were more effective, but it is more likely that they interacted less with the system through these means and elected to use the popular destinations instead. Overall, subjects submitted the most queries in QuerySuggestion, which is not surprising, as this system actively encourages searchers to iteratively re-submit refined queries. Subjects interacted similarly with the Baseline and SessionDestination systems, perhaps due to the low quality of the popular destinations in the latter. To investigate this and related issues, we next analyze usage of the suggestions in the three non-baseline systems.

¹⁸ F(3,136) = 4.56, p = .004
¹⁹ Tukey post-hoc tests: all p ≤ .021
²⁰ F(3,136) = 1.06, p = .37
²¹ Queries: F(3,443) = 3.99, p = .008; Tukey post-hoc tests: all p ≤ .004; Result clicks: F(3,431) = 3.63, p = .013; Tukey post-hoc tests: all p ≤ .011

4.3.2 Suggestion Usage
To determine whether subjects found the additional features useful, we measured the extent to which they were used when provided. Suggestion usage is defined as the proportion of submitted queries for which suggestions were offered and at least one suggestion was clicked. Table 6 shows the average usage for each system and task category.

Table 6. Suggestion uptake (values are percentages).

           Known-item         Exploratory
  Measure  QS    QD    SD     QS    QD    SD
  Usage    35.7  33.5  23.4   30.0  35.2  25.3
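The usage measure reported in Table 6 can be computed directly from interaction logs. The sketch below is illustrative only: the QueryEvent record and its field names are our own assumptions, not the study's actual logging schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QueryEvent:
    """One submitted query and its suggestion interactions (hypothetical schema)."""
    query: str
    suggestions_offered: bool  # were suggestions shown for this query?
    suggestion_clicks: int     # number of suggested links clicked for this query

def suggestion_usage(events: List[QueryEvent]) -> float:
    """Proportion of submitted queries for which suggestions were
    offered and at least one suggestion was clicked."""
    if not events:
        return 0.0
    used = sum(1 for e in events
               if e.suggestions_offered and e.suggestion_clicks > 0)
    return used / len(events)

# Toy log: four queries; suggestions shown for three, clicked for two.
log = [
    QueryEvent("hubble telescope", True, 1),
    QueryEvent("hubble repairs", True, 0),
    QueryEvent("servicing mission", False, 0),
    QueryEvent("shuttle launch", True, 2),
]
print(round(100 * suggestion_usage(log), 1))  # → 50.0 (a percentage, as in Table 6)
```

In the study, this proportion would be computed per subject for each system and task category and then averaged over the 18 subjects per cell to yield the percentages in Table 6.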
Results indicate that QuerySuggestion was used more for known-item tasks than SessionDestination,²² and QueryDestination was used more than all other systems for the exploratory tasks.²³ For the well-specified targets of known-item search, subjects appeared to use query refinement most heavily. In contrast, when subjects were exploring, they seemed to benefit most from the recommendation of additional information sources. Subjects also selected almost twice as many destinations per query when using QueryDestination as when using SessionDestination.²⁴ As discussed earlier, this may be explained by the lower perceived relevance and usefulness of the destinations recommended by SessionDestination.

²² F(2,355) = 4.67, p = .01; Tukey post-hoc tests: p = .006
²³ Tukey post-hoc tests: all p ≤ .027
²⁴ QD: M_K = 1.8, M_E = 2.1; SD: M_K = 1.1, M_E = 1.2; F(1,231) = 5.49, p = .02; Tukey post-hoc tests: all p ≤ .003 (M_K and M_E denote mean values for the known-item and exploratory tasks, respectively)

4.3.3 Summary
Analysis of the log interaction data gathered during the study indicates that although subjects submitted fewer queries and clicked fewer search results on QueryDestination, their engagement with suggestions was highest on this system, particularly for the exploratory search tasks. The refined queries proposed by QuerySuggestion were used most for the known-item tasks. There appears to be a clear division between the systems: QuerySuggestion was preferred for known-item tasks, while QueryDestination provided the most-used support for exploratory tasks.

5. DISCUSSION AND IMPLICATIONS
The promising findings of our study suggest that systems offering popular destinations lead to more successful and efficient searching than query suggestion and unaided Web search. Subjects seemed to prefer QuerySuggestion for the known-item tasks, where the information-seeking goal was well-defined. If the initial query does not retrieve relevant information, subjects appreciate support in deciding what refinements to make to the
query. From an examination of the queries that subjects entered for the known-item searches across all systems, they appeared to use the initial query as a starting point and to add or remove individual terms depending on the search results. The post-search questionnaire asked subjects to select from a list of proposed explanations (or to offer their own) as to why they used the recommended query refinements. For both the known-item and the exploratory tasks, around 40% of subjects indicated that they selected a query suggestion because they wanted to save time typing a query, while less than 10% of subjects did so because the suggestions represented new ideas. Thus, subjects seemed to view QuerySuggestion as a time-saving convenience rather than as a way to dramatically impact search effectiveness.

The two variants of destination recommendation that we considered, QueryDestination and SessionDestination, offered suggestions that differed in their temporal proximity to the current query. The quality of the destinations appeared to affect subjects' perceptions of them and their task performance. As discussed earlier, domains residing at the end of a complete search session (as in SessionDestination) are more likely to be unrelated to the current query, and thus less likely to constitute valuable suggestions. The destination systems, in particular QueryDestination, performed best for the exploratory search tasks, where subjects may have benefited from exposure to additional information sources whose topical relevance to the search query is indirect. As with QuerySuggestion, subjects were asked to explain why they selected destinations. Over both task types, they indicated that destinations were clicked because they grabbed their attention (40%), represented new ideas (25%), or because subjects couldn't find what they were looking for (20%). The least popular responses were wanted to save time typing the address (7%) and the destination was popular
(3%).

The positive response to destination suggestions from the study subjects suggests interesting directions for design refinements. We were surprised to learn that subjects did not find the popularity bars useful and hardly used the within-site search functionality, inviting a re-design of these components. Subjects also remarked that they would like to see query-based summaries for each suggested destination, to support more informed selection, as well as a categorization of destinations with the capability to drill down within each category. Since QuerySuggestion and QueryDestination perform well in distinct task scenarios, integrating both in a single system is an interesting future direction. We hope to deploy some of these ideas at Web scale in future systems, which will allow log-based evaluation across large user pools.

6. CONCLUSIONS
We presented a novel approach for enhancing users' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs. A user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and with unaided Web search. Results of our study revealed that: (i) systems suggesting query refinements were preferred for known-item tasks, (ii) systems offering popular destinations were preferred for exploratory search tasks, and (iii) destinations should be mined from the end of query trails, not session trails. Overall, popular destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches, offering a new way to resolve information problems and enhancing the information-seeking experience for many Web searchers.

7. REFERENCES
[1] Agichtein, E., Brill, E. & Dumais, S. (2006). Improving Web search ranking by incorporating user behavior information. In Proc. SIGIR, 19-26.
[2] Anderson, C. et al.
(2001). Adaptive Web navigation for wireless devices. In Proc. IJCAI, 879-884.
[3] Anick, P. (2003). Using terminological feedback for Web search refinement: A log-based study. In Proc. SIGIR, 88-95.
[4] Beaulieu, M. (1997). Experiments with interfaces to support query expansion. J. Doc. 53, 1, 8-19.
[5] Borlund, P. (2000). Experimental components for the evaluation of interactive information retrieval systems. J. Doc. 56, 1, 71-90.
[6] Downey, D. et al. (2007). Models of searching and browsing: languages, studies and applications. In Proc. IJCAI, 1465-1472.
[7] Dumais, S.T. & Belkin, N.J. (2005). The TREC interactive tracks: putting the user into search. In Voorhees, E.M. & Harman, D.K. (eds.), TREC: Experiment and Evaluation in Information Retrieval. Cambridge, MA: MIT Press, 123-153.
[8] Furnas, G.W. (1985). Experience with an adaptive indexing scheme. In Proc. CHI, 131-135.
[9] Hickl, A. et al. (2006). FERRET: Interactive question-answering for real-world environments. In Proc. COLING/ACL, 25-28.
[10] Jones, R. et al. (2006). Generating query substitutions. In Proc. WWW, 387-396.
[11] Koenemann, J. & Belkin, N. (1996). A case for interaction: a study of interactive information retrieval behavior and effectiveness. In Proc. CHI, 205-212.
[12] O'Day, V. & Jeffries, R. (1993). Orienteering in an information landscape: how information seekers get from here to there. In Proc. CHI, 438-445.
[13] Radlinski, F. & Joachims, T. (2005). Query chains: Learning to rank from implicit feedback. In Proc. KDD, 239-248.
[14] Salton, G. & Buckley, C. (1988). Term-weighting approaches in automatic text retrieval. Inf. Proc. Manage. 24, 513-523.
[15] Silverstein, C. et al. (1999). Analysis of a very large Web search engine query log. SIGIR Forum 33, 1, 6-12.
[16] Smyth, B. et al.
(2004). Exploiting query repetition and regularity in an adaptive community-based Web search engine. User Mod. User-Adapt. Inter. 14, 5, 382-423.
[17] Spink, A. et al. (2002). U.S. versus European Web searching trends. SIGIR Forum 36, 2, 32-38.
[18] Spink, A. et al. (2006). Multitasking during Web search sessions. Inf. Proc. Manage. 42, 1, 264-275.
[19] Wexelblat, A. & Maes, P. (1999). Footprints: history-rich tools for information foraging. In Proc. CHI, 270-277.
[20] White, R.W. & Drucker, S.M. (2007). Investigating behavioral variability in Web search. In Proc. WWW, 21-30.
[21] White, R.W. & Marchionini, G. (2007). Examining the effectiveness of real-time query expansion. Inf. Proc. Manage. 43, 685-704.
allowing them to modify the specification of their needs provided to the system, leading to improved retrieval performance.\nRecent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [10].\nLeveraging the decision-making processes of many users for query reformulation has its roots in adaptive indexing [8].\nIn recent years, applying such techniques has become possible at a much larger scale and in a different context than what was proposed in early work.\nHowever, interaction-based approaches to query suggestion may be less potent when the information need is exploratory, since a large proportion of user activity for such information needs may\noccur beyond search engine interactions.\nIn cases where directed searching is only a fraction of users' information-seeking behavior, the utility of other users' clicks over the space of top-ranked results may be limited, as it does not cover the subsequent browsing behavior.\nAt the same time, user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users, which may be particularly valuable for exploratory search tasks.\nThus, we propose exploiting a combination of past searching and browsing user behavior to enhance users' Web search interactions.\nBrowser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions.\nIn previous work, such data have been used to improve search result ranking by Agichtein et al. 
[1].\nHowever, this approach only considers page visitation statistics independently of each other, not taking into account the pages' relative positions on post-query browsing paths.\nRadlinski and Joachims [13] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations, yet their approach does not consider users' interactions beyond the search result page.\nIn this paper, we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages, referred to as destinations henceforth, in addition to the regular search results.\nThe destinations may not be among the topranked results, may not contain the queried terms, or may not even be indexed by the search engine.\nInstead, they are pages at which other users end up frequently after submitting same or similar queries and then browsing away from initially clicked search results.\nWe conjecture that destinations popular across a large number of users can capture the collective user experience for information needs, and our results support this hypothesis.\nIn prior work, O'Day and Jeffries [12] identified \"teleportation\" as an information-seeking strategy employed by users jumping to their previously-visited information targets, while Anderson et al. 
[2] applied similar principles to support the rapid navigation of Web sites on mobile devices.\nIn [19], Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users.\nHowever, we are not aware of such principles being applied to Web search.\nResearch in the area of recommender systems has also addressed similar issues, but in areas such as question-answering [9] and relatively small online communities [16].\nPerhaps the nearest instantiation of teleportation is search engines' offering of several within-domain shortcuts below the title of a search result.\nWhile these may be based on user behavior and possibly site structure, the user saves at most one click from this feature.\nIn contrast, our proposed approach can transport users to locations many clicks beyond the search result, saving time and giving them a broader perspective on the available related information.\nThe conducted user study investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages.\nWe compare two variants of this approach against the suggestion of related queries and unaided Web search, and seek answers to questions on: (i) user preference and search effectiveness for known-item and exploratory search tasks, and (ii) the preferred distance between query and destination used to identify popular destinations from past behavior logs.\nThe results indicate that suggesting popular destinations to users attempting exploratory tasks provides best results in key aspects of the information-seeking experience, while providing query refinement suggestions is most desirable for known-item tasks.\nThe remainder of the paper is structured as follows.\nIn Section 2 we describe the extraction of search and browsing trails from user activity logs, and their use in identifying top destinations for new queries.\nSection 3 describes the design of the user study, while Sections 4 
and 5 present the study findings and their discussion, respectively.\nWe conclude in Section 6 with a summary.\n2.\nSEARCH TRAILS AND DESTINATIONS\n2.1 Trail Extraction\n2.2 Trail and Destination Analysis\n2.3 Destination Prediction\n1 Independent measures t-test: t (~ 60M) = 3.89, p <.001\n3.\nSTUDY\n3.1 Systems\n3.1.1 System 1: Baseline\n3.1.2 System 2: QuerySuggestion\n3.1.3 System 3: QueryDestination\n3.1.4 System 4: SessionDestination\n3.2 Research Questions\n3.3 Subjects\n3.4 Tasks\n3.5 Design and Methodology\n4.\nFINDINGS\n4.1 Subject Perceptions\n4.1.1 Search Process\n4.1.2 Interface Support\n4.1.3 System Ranking\n4.1.4 Summary\n4.2 Search Tasks\n4.2.1 Subject Perceptions\n4.2.2 Task Completion Time\n4.2.3 Summary\n4.3 Subject Interaction\n4.3.1 Queries and Result Clicks\n4.3.2 Suggestion Usage\n4.3.3 Summary\n6.\nCONCLUSIONS\nWe presented a novel approach for enhancing users' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs.\nA user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search.\nResults of our study revealed that: (i) systems suggesting query refinements were preferred for known-item tasks, (ii) systems offering popular destinations were preferred for exploratory search tasks, and (iii) destinations should be mined from the end of query trails, not session trails.\nOverall, popular destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches by offering a new way to resolve information problems, and enhance the informationseeking experience for many Web searchers.","lvl-4":"Studying the Use of Popular Destinations to Enhance Web Search Interaction\nABSTRACT\nWe present a novel Web search interaction feature which, for a given query, provides links to websites frequently visited by other users with similar information needs.\nThese 
popular destinations complement traditional search results, allowing direct navigation to authoritative resources for the query topic.\nDestinations are identified using the history of search and browsing behavior of many users over an extended time period, whose collective behavior provides a basis for computing source authority.\nWe describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries, as well as with traditional, unaided Web search.\nResults show that search enhanced by destination suggestions outperforms other systems for exploratory tasks, with best performance obtained from mining past user behavior at query-level granularity.\n1.\nINTRODUCTION\nThe problem of improving queries sent to Information Retrieval (IR) systems has been studied extensively in IR research [4] [11].\nAlternative query formulations, known as query suggestions, can be offered to users following an initial query, allowing them to modify the specification of their needs provided to the system, leading to improved retrieval performance.\nRecent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [10].\nLeveraging the decision-making processes of many users for query reformulation has its roots in adaptive indexing [8].\nHowever, interaction-based approaches to query suggestion may be less potent when the information need is exploratory, since a large proportion of user activity for such information needs may\noccur beyond search engine interactions.\nIn cases where directed searching is only a fraction of users' information-seeking behavior, the utility of other users' clicks over the space of top-ranked results may be limited, as it does not cover the subsequent browsing behavior.\nAt the same time, user navigation that follows search engine interactions provides implicit 
endorsement of Web resources preferred by users, which may be particularly valuable for exploratory search tasks. Thus, we propose exploiting a combination of past searching and browsing user behavior to enhance users' Web search interactions. Browser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions. In previous work, such data have been used to improve search result ranking by Agichtein et al. [1]. Radlinski and Joachims [13] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations, yet their approach does not consider users' interactions beyond the search result page.

In this paper, we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages, referred to as destinations henceforth, in addition to the regular search results. The destinations may not be among the top-ranked results, may not contain the queried terms, or may not even be indexed by the search engine. Instead, they are pages at which other users end up frequently after submitting the same or similar queries and then browsing away from initially clicked search results. We conjecture that destinations popular across a large number of users can capture the collective user experience for information needs, and our results support this hypothesis. In [19], Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users. However, we are not aware of such principles being applied to Web search. Perhaps the nearest instantiation of teleportation is search engines' offering of several within-domain shortcuts below the title of a search result. While these may be based on user behavior and possibly site structure, the user saves at most one click from this feature. In contrast, our proposed approach can transport users to locations many clicks beyond the search result, saving time and giving them a broader perspective on the available related information.

The user study we conducted investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages. We compare two variants of this approach against the suggestion of related queries and unaided Web search, and seek answers to questions on: (i) user preference and search effectiveness for known-item and exploratory search tasks, and (ii) the preferred distance between query and destination used to identify popular destinations from past behavior logs. The results indicate that suggesting popular destinations to users attempting exploratory tasks provides the best results in key aspects of the information-seeking experience, while providing query refinement suggestions is most desirable for known-item tasks. In Section 2 we describe the extraction of search and browsing trails from user activity logs, and their use in identifying top destinations for new queries. Section 3 describes the design of the user study, while Sections 4 and 5 present the study findings and their discussion, respectively.

6. CONCLUSIONS

We presented a novel approach for enhancing users' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs. A user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search. The results of our study revealed that: (i) systems suggesting query refinements were preferred for known-item tasks, (ii) systems offering popular destinations were preferred for exploratory search tasks, and (iii) destinations should be mined from the end of query trails, not session trails. Overall, popular destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches, offering a new way to resolve information problems and enhancing the information-seeking experience for many Web searchers.

Studying the Use of Popular Destinations to Enhance Web Search Interaction

ABSTRACT

We present a novel Web search interaction feature which, for a given query, provides links to websites frequently visited by other users with similar information needs. These popular destinations complement traditional search results, allowing direct navigation to authoritative resources for the query topic. Destinations are identified using the history of search and browsing behavior of many users over an extended time period, whose collective behavior provides a basis for computing source authority. We describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries, as well as with traditional, unaided Web search. Results show that search enhanced by destination suggestions outperforms other systems for exploratory tasks, with best performance obtained from mining past user behavior at query-level granularity.

1. INTRODUCTION

The problem of improving queries sent to Information Retrieval (IR) systems has been studied extensively in IR research [4][11]. Alternative query formulations, known as query suggestions, can be offered to users following an initial query, allowing them to modify the specification of their needs provided to the system, leading to improved retrieval performance. The recent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [10]. Leveraging the decision-making processes of many users for query reformulation has its roots in adaptive indexing [8]. In recent years, applying such techniques has become possible at a much larger scale and in a different context than what was proposed in early work. However, interaction-based approaches to query
suggestion may be less potent when the information need is exploratory, since a large proportion of user activity for such information needs may occur beyond search engine interactions. In cases where directed searching is only a fraction of users' information-seeking behavior, the utility of other users' clicks over the space of top-ranked results may be limited, as it does not cover the subsequent browsing behavior. At the same time, user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users, which may be particularly valuable for exploratory search tasks. Thus, we propose exploiting a combination of past searching and browsing user behavior to enhance users' Web search interactions.

Browser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions. In previous work, such data have been used to improve search result ranking by Agichtein et al. [1]. However, this approach only considers page visitation statistics independently of each other, not taking into account the pages' relative positions on post-query browsing paths. Radlinski and Joachims [13] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations, yet their approach does not consider users' interactions beyond the search result page.

In this paper, we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages, referred to as destinations henceforth, in addition to the regular search results. The destinations may not be among the top-ranked results, may not contain the queried terms, or may not even be indexed by the search engine. Instead, they are pages at which other users end up frequently after submitting the same or similar queries and then browsing away from initially clicked search results. We conjecture that destinations
popular across a large number of users can capture the collective user experience for information needs, and our results support this hypothesis.

In prior work, O'Day and Jeffries [12] identified "teleportation" as an information-seeking strategy employed by users jumping to their previously-visited information targets, while Anderson et al. [2] applied similar principles to support the rapid navigation of Web sites on mobile devices. In [19], Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users. However, we are not aware of such principles being applied to Web search. Research in the area of recommender systems has also addressed similar issues, but in areas such as question-answering [9] and relatively small online communities [16]. Perhaps the nearest instantiation of teleportation is search engines' offering of several within-domain shortcuts below the title of a search result. While these may be based on user behavior and possibly site structure, the user saves at most one click from this feature. In contrast, our proposed approach can transport users to locations many clicks beyond the search result, saving time and giving them a broader perspective on the available related information.

The user study we conducted investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages. We compare two variants of this approach against the suggestion of related queries and unaided Web search, and seek answers to questions on: (i) user preference and search effectiveness for known-item and exploratory search tasks, and (ii) the preferred distance between query and destination used to identify popular destinations from past behavior logs. The results indicate that suggesting popular destinations to users attempting exploratory tasks provides the best results in key aspects of the information-seeking experience, while providing
query refinement suggestions is most desirable for known-item tasks.

The remainder of the paper is structured as follows. In Section 2 we describe the extraction of search and browsing trails from user activity logs, and their use in identifying top destinations for new queries. Section 3 describes the design of the user study, while Sections 4 and 5 present the study findings and their discussion, respectively. We conclude in Section 6 with a summary.

2. SEARCH TRAILS AND DESTINATIONS

We used Web activity logs containing searching and browsing activity collected with permission from hundreds of thousands of users over a five-month period between December 2005 and April 2006. Each log entry included an anonymous user identifier, a timestamp, a unique browser window identifier, and the URL of a visited Web page. This information was sufficient to reconstruct temporally ordered sequences of viewed pages that we refer to as "trails". In this section, we summarize the extraction of trails, their features, and destinations (trail end-points). An in-depth description and analysis of trail extraction are presented in [20].

2.1 Trail Extraction

For each user, interaction logs were grouped based on browser identifier information. Within each browser instance, participant navigation was summarized as a path known as a browser trail, from the first to the last Web page visited in that browser. Located within some of these trails were search trails that originated with a query submission to a commercial search engine such as Google, Yahoo!, Windows Live Search, or Ask. It is these search trails that we use to identify popular destinations.

After originating with a query submission to a search engine, trails proceed until a point of termination where it is assumed that the user has completed their information-seeking activity. Trails must contain pages that are either: search result pages, search engine homepages, or pages connected to a search result page via
a sequence of clicked hyperlinks. Extracting search trails using this methodology also goes some way toward handling multi-tasking, where users run multiple searches concurrently. Since users may open a new browser window (or tab) for each task [18], each task has its own browser trail, and a corresponding distinct search trail.

To reduce the amount of "noise" from pages unrelated to the active search task that may pollute our data, search trails are terminated when one of the following events occurs: (1) a user returns to their homepage, checks e-mail, logs in to an online service (e.g., MySpace or del.icio.us), types a URL, or visits a bookmarked page; (2) a page is viewed for more than 30 minutes with no activity; (3) the user closes the active browser window. If a page (at step i) meets any of these criteria, the trail is assumed to terminate on the previous page (i.e., step i-1).

There are two types of search trails we consider: session trails and query trails. Session trails transcend multiple queries and terminate only when one of the three termination criteria above is satisfied. Query trails use the same termination criteria as session trails, but also terminate upon submission of a new query to a search engine. Approximately 14 million query trails and 4 million session trails were extracted from the logs. We now describe some trail features.

2.2 Trail and Destination Analysis

Table 1 presents summary statistics for the query and session trails. Differences in user interaction between the last domain on the trail (Domain n) and all domains visited earlier (Domains 1 to (n-1)) are particularly important, because they highlight the wealth of user behavior data not captured by logs of search engine interactions. Statistics are averages for all trails with two or more steps (i.e., those trails where at least one search result was clicked).

Table 1. Summary statistics (mean averages) for search trails.

The statistics suggest that users
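The trail termination rules above lend themselves to a direct implementation. The following is a minimal sketch, not the authors' code: the PageView record and its is_query/is_terminator flags are assumptions standing in for whatever signals the actual activity logs provide.

```python
# Hypothetical sketch of query-trail segmentation from browsing logs.
from dataclasses import dataclass

INACTIVITY_LIMIT_S = 30 * 60  # criterion (2): 30 minutes with no activity


@dataclass
class PageView:
    timestamp: float
    url: str
    is_query: bool       # a new query submitted to a search engine
    is_terminator: bool  # criterion (1): homepage, e-mail, login, typed URL, bookmark


def query_trails(views):
    """Split one browser instance's page views into query trails.

    A trail starts at a query submission and ends when a termination
    criterion fires (on the previous page) or a new query is submitted.
    """
    trails, current = [], []
    for view in views:
        timed_out = bool(current) and \
            view.timestamp - current[-1].timestamp > INACTIVITY_LIMIT_S
        if view.is_query:                      # new query closes the previous trail
            if current:
                trails.append(current)
            current = [view]
        elif not current:
            continue                           # browsing before any query is ignored
        elif view.is_terminator or timed_out:  # trail ends on the *previous* page
            trails.append(current)
            current = []
        else:
            current.append(view)
    if current:
        trails.append(current)
    return trails
```

Session trails would use the same loop without the new-query split.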
generally browse far from the search results page (i.e., around 5 steps), and visit a range of domains during the course of their search. On average, users visit 2 unique (non-search-engine) domains per query trail, and just over 4 unique domains per session trail. This suggests that users often do not find all the information they seek on the first domain they visit. For query trails, users also visit more pages, and spend significantly longer, on the last domain in the trail compared to all previous domains combined.¹ These distinctions of the last domains in the trails may indicate user interest, page utility, or page relevance.²

2.3 Destination Prediction

For frequent queries, the most popular destinations identified from Web activity logs could simply be stored for future lookup at search time. However, we have found that over the six-month period covered by our dataset, 56.9% of queries are unique, and 97% of queries occur 10 or fewer times, accounting for 19.8% and 66.3% of all searches respectively (these numbers are comparable to those reported in previous studies of search engine query logs [15][17]). Therefore, a lookup-based approach would prevent us from reliably suggesting destinations for a large fraction of searches. To overcome this problem, we utilize a simple term-based prediction model. As discussed above, we extract two types of destinations: query destinations and session destinations. For both destination types, we obtain a corpus of query-destination pairs and use it to construct a term-vector representation of destinations that is analogous to the classic tf.idf document representation in traditional IR [14]. Then, given a new query q consisting of k terms t1...tk, we identify the highest-scoring destinations using the following similarity function:

sim(q, d) = Σ_{i=1..k} wq(ti) · wd(ti)

¹ Independent-measures t-test: t(~60M) = 3.89, p < .001.

² The topical relevance of the destinations was tested for a subset of around ten thousand queries for which we had human
judgments. The average rating of most of the destinations lay between "good" and "excellent". Visual inspection of those that did not lie in this range revealed that many were either relevant but had no judgments, or were related but had an indirect query association (e.g., "petfooddirect.com" for the query [dogs]).

Here, the query and destination term weights, wq(ti) and wd(ti), are computed using standard tf.idf weighting and query- and user-session-normalized smoothed tf.idf weighting, respectively. While exploring alternative algorithms for the destination prediction task remains an interesting challenge for future work, results of the user study described in subsequent sections demonstrate that this simple approach provides robust, effective results.

Figure 1. Query suggestion presentation in QuerySuggestion: (a) position of suggestions; (b) zoomed suggestions.

3. STUDY

To examine the usefulness of destinations, we conducted a user study investigating the perceptions and performance of 36 subjects on four Web search systems, two with destination suggestions.

3.1 Systems

Four systems were used in this study: a baseline Web search system with no explicit support for query refinement (Baseline), a search system with a query suggestion method that recommends additional queries (QuerySuggestion), and two systems that augment baseline Web search with destination suggestions using either end-points of query trails (QueryDestination) or end-points of session trails (SessionDestination).

3.1.1 System 1: Baseline

To establish baseline performance against which the other systems can be compared, we developed a masked interface to a popular search engine without additional support in formulating queries. This system presented the user-constructed query to the search engine and returned the ten top-ranking documents retrieved by the engine. To remove potential bias that may have been caused by subjects' prior perceptions, we removed all identifying information such
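To make the destination scoring of Section 2.3 concrete, here is a minimal sketch of a term-based destination model. The paper does not fully specify its smoothing and normalization, so this illustration uses a simple smoothed tf.idf weight for the destination vectors and raw term frequency for the query weights; all function and variable names are invented for the example.

```python
# Illustrative sketch (not the paper's implementation) of matching a new
# query against tf.idf-style term vectors built from query-destination pairs.
import math
from collections import Counter, defaultdict


def build_destination_vectors(query_destination_pairs):
    """query_destination_pairs: iterable of (query_string, destination) tuples."""
    term_counts = defaultdict(Counter)  # destination -> term frequencies
    doc_freq = Counter()                # term -> number of destinations with it
    for query, dest in query_destination_pairs:
        term_counts[dest].update(query.lower().split())
    for counts in term_counts.values():
        doc_freq.update(counts.keys())
    n_dests = len(term_counts)
    vectors = {}
    for dest, counts in term_counts.items():
        total = sum(counts.values())
        vectors[dest] = {  # smoothed tf.idf weight standing in for wd(t)
            term: (tf / total) * math.log((1 + n_dests) / (1 + doc_freq[term]))
            for term, tf in counts.items()
        }
    return vectors


def top_destinations(query, vectors, k=6):
    """Score destinations by the dot product sim(q, d) over the query's terms."""
    q_weights = Counter(query.lower().split())  # raw tf standing in for wq(t)
    scores = {
        dest: sum(q_weights[t] * vec.get(t, 0.0) for t in q_weights)
        for dest, vec in vectors.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [dest for dest, score in ranked[:k] if score > 0]
```

In this sketch, as in the paper, a destination can be suggested even for a query string never seen before, as long as it shares terms with queries that previously led there.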
as search engine logos and distinguishing interface features.

3.1.2 System 2: QuerySuggestion

In addition to the basic search functionality offered by Baseline, QuerySuggestion provides suggestions about further query refinements that searchers can make following an initial query submission. These suggestions are computed using the search engine query log over the timeframe used for trail generation. For each target query, we retrieve two sets of candidate suggestions that contain the target query as a substring. One set is composed of the 100 most frequent such queries, while the second set contains the 100 most frequent queries that followed the target query in the query logs. Each candidate query is then scored by multiplying its smoothed overall frequency by its smoothed frequency of following the target query in past search sessions, using Laplacian smoothing. Based on these scores, the six top-ranked query suggestions are returned. If fewer than six suggestions are found, iterative backoff is performed using progressively longer suffixes of the target query; a similar strategy is described in [10].

Suggestions were offered in a box positioned on the top-right of the result page, adjacent to the search results. Figure 1a shows the position of the suggestions on the page. Figure 1b shows a zoomed view of the portion of the results page containing the suggestions offered for the query [hubble telescope]. To the left of each query suggestion is an icon similar to a progress bar that encodes its normalized popularity. Clicking a suggestion retrieves new search results for that query.

3.1.3 System 3: QueryDestination

QueryDestination uses an interface similar to QuerySuggestion. However, instead of showing query refinements for the submitted query, QueryDestination suggests up to six destinations frequently visited by other users who submitted queries similar to the current one, computed as described in the previous section.³ Figure 2a shows the position of
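The candidate ranking used by QuerySuggestion in Section 3.1.2 (smoothed overall frequency multiplied by smoothed following frequency) can be sketched as follows. The data structures are assumptions, add-one (Laplace) smoothing is used as the paper describes, and the suffix-backoff step is omitted for brevity.

```python
# Hedged sketch of QuerySuggestion's candidate scoring; not the authors' code.
def rank_suggestions(target, overall_freq, followed_freq, k=6):
    """
    overall_freq:  query -> total count in the query log
    followed_freq: query -> count of times it followed `target` in a session
    Candidates contain `target` as a substring, or followed it in the log.
    """
    candidates = {q for q in overall_freq if target in q and q != target}
    candidates |= set(followed_freq)

    def score(q):
        # Laplace-smoothed overall frequency times Laplace-smoothed
        # following frequency, as described for QuerySuggestion.
        return (overall_freq.get(q, 0) + 1) * (followed_freq.get(q, 0) + 1)

    return sorted(candidates, key=score, reverse=True)[:k]
```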
the destination suggestions on the search results page. Figure 2b shows a zoomed view of the portion of the results page containing the destinations suggested for the query [hubble telescope].

Figure 2. Destination presentation in QueryDestination: (a) position of destinations; (b) zoomed destinations.

To keep the interface uncluttered, the page title of each destination is shown on hover over the page URL (shown in Figure 2b). Next to the destination name, there is a clickable icon that allows the user to execute a search for the current query within the destination domain displayed. We show destinations as a separate list, rather than increasing their search result rank, since they may topically deviate from the original query (e.g., those focusing on related topics or not containing the original query terms).

3.1.4 System 4: SessionDestination

The interface functionality in SessionDestination is analogous to QueryDestination. The only difference between the two systems is the definition of trail end-points for queries used in computing top destinations. QueryDestination directs users to the domains others end up at for the active or similar queries. In contrast, SessionDestination directs users to the domains other users visit at the end of the search session that follows the active or similar queries. This downgrades the effect of multiple query iterations (i.e., we only care where users end up after submitting all queries), rather than directing searchers to potentially irrelevant domains that may precede a query reformulation.

3.2 Research Questions

We were interested in determining the value of popular destinations. To do this we attempt to answer the following research questions:

³ To improve reliability, in a similar way to QuerySuggestion, destinations are only shown if their popularity exceeds a frequency threshold.

RQ1: Are popular destinations preferable and more effective than query refinement suggestions and unaided Web search for:
a.
Searches that are well-defined ("known-item" tasks)?
b. Searches that are ill-defined ("exploratory" tasks)?

RQ2: Should popular destinations be taken from the end of query trails or the end of session trails?

3.3 Subjects

Thirty-six subjects (26 males and 10 females) participated in our study. They were recruited through an email announcement within our organization, where they hold a range of positions in different divisions. The average age of subjects was 34.9 years (max = 62, min = 27, SD = 6.2). All are familiar with Web search, and conduct 7.5 searches per day on average (SD = 4.1). Thirty-one subjects (86.1%) reported general awareness of the query refinements offered by commercial Web search engines.

3.4 Tasks

Since the search task may influence information-seeking behavior [4], we made task type an independent variable in the study. We constructed six known-item tasks and six open-ended, exploratory tasks that were rotated between systems and subjects as described in the next section. Figure 3 shows examples of the two task types:

"You are considering purchasing a Voice over Internet Protocol (VoIP) telephone. You want to learn more about VoIP technology and providers that offer the service, and select the provider and telephone that best suits you."

Figure 3. Examples of known-item and exploratory tasks.

Exploratory tasks were phrased as simulated work task situations [5], i.e., short search scenarios that were designed to reflect real-life information needs. These tasks generally required subjects to gather background information on a topic or gather sufficient information to make an informed decision. The known-item search tasks required search for particular items of information (e.g., activities, discoveries, names) for which the target was well-defined. A similar task classification has been used successfully in previous work [21]. Tasks were taken and adapted from the Text Retrieval Conference (TREC) Interactive Track [7], and
questions posed on question-answering communities (Yahoo! Answers, Google Answers, and Windows Live QnA). To motivate the subjects during their searches, we allowed them to select two known-item and two exploratory tasks at the beginning of the experiment from the six possibilities for each category, before seeing any of the systems or having the study described to them. Prior to the experiment, all tasks were pilot tested with a small number of different subjects to help ensure that they were comparable in difficulty and "selectability" (i.e., the likelihood that a task would be chosen given the alternatives). Post-hoc analysis of the distribution of tasks selected by subjects during the full study showed no preference for any task in either category.

3.5 Design and Methodology

The study used a within-subjects experimental design. System had four levels (corresponding to the four experimental systems) and search tasks had two levels (corresponding to the two task types). System and task-type order were counterbalanced according to a Graeco-Latin square design. Subjects were tested independently and each experimental session lasted for up to one hour. We adhered to the following procedure:

1. Upon arrival, subjects were asked to select two known-item and two exploratory tasks from the six tasks of each type.
2. Subjects were given an overview of the study in written form that was read aloud to them by the experimenter.
3. Subjects completed a demographic questionnaire focusing on aspects of search experience.
4. For each of the four interface conditions:
a. Subjects were given an explanation of interface functionality lasting around 2 minutes.
b. Subjects were instructed to attempt the task on the assigned system searching the Web, and were allotted up to 10 minutes to do so.
c.
Upon completion of the task, subjects were asked to complete a post-search questionnaire.
5. After completing the tasks on the four systems, subjects answered a final questionnaire comparing their experiences on the systems.
6. Subjects were thanked and compensated.

In the next section we present the findings of this study.

4. FINDINGS

In this section we use the data derived from the experiment to address our hypotheses about query suggestions and destinations, providing information on the effect of task type and topic familiarity where appropriate. Parametric statistical testing is used in this analysis, and the level of significance is set to α < .05, unless otherwise stated. All Likert scales and semantic differentials used a 5-point scale where a rating closer to one signifies more agreement with the attitude statement.

4.1 Subject Perceptions

In this section we present findings on how subjects perceived the systems that they used. Responses to the post-search (per-system) and final questionnaires are used as the basis for our analysis.

4.1.1 Search Process

To address the first research question, we wanted insight into subjects' perceptions of the search experience on each of the four systems. In the post-search questionnaires, we asked subjects to complete four 5-point semantic differentials indicating their responses to the attitude statement: "The search we asked you to perform was:". The paired stimuli offered as responses were: "relaxing" / "stressful", "interesting" / "boring", "restful" / "tiring", and "easy" / "difficult". The average obtained differential values are shown in Table 2 for each system and each task type. The value corresponding to the differential "All" represents the mean of all four differentials, providing an overall measure of subjects' feelings.

Table 2. Perceptions of search process (lower = better). System names are abbreviated (e.g., QuerySuggestion (QS)).

The most positive response across all systems for each differential-task pair
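The Graeco-Latin square counterbalancing described in Section 3.5 can be illustrated with a small sketch. This is an assumed construction, not the authors' actual assignment: the Latin values give each subject group's system order and the Greek values its task-type order, using two hard-coded orthogonal 4×4 Latin squares so that every system appears in every position exactly once across groups.

```python
# Hypothetical illustration of Graeco-Latin square counterbalancing.
SYSTEMS = ["Baseline", "QuerySuggestion", "QueryDestination", "SessionDestination"]
TASK_TYPES = ["known-item", "exploratory"]

# A pair of orthogonal 4x4 Latin squares (each row is one subject group).
LATIN = [[0, 1, 2, 3], [1, 0, 3, 2], [2, 3, 0, 1], [3, 2, 1, 0]]
GREEK = [[0, 1, 2, 3], [2, 3, 0, 1], [3, 2, 1, 0], [1, 0, 3, 2]]


def schedule(subject_index):
    """Return the (system, task-type) sequence for one subject."""
    row = subject_index % 4
    # The four Greek symbols are folded onto the two task types (two symbols
    # each), so each subject attempts two tasks of each type.
    return [(SYSTEMS[LATIN[row][col]], TASK_TYPES[GREEK[row][col] % 2])
            for col in range(4)]
```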
is shown in bold. We applied two-way analysis of variance (ANOVA) to each differential across all four systems and two task types. Subjects found the search easier on QuerySuggestion and QueryDestination than on the other systems for known-item tasks.⁴ For exploratory tasks, only searches conducted on QueryDestination were easier than on the other systems.⁵ Subjects indicated that exploratory tasks on the three non-baseline systems were more stressful (i.e., less "relaxing") than the known-item tasks.⁶ As we will discuss in more detail in Section 4.1.3, subjects regarded the familiarity of Baseline as a strength, and may have struggled to attempt a more complex task while learning a new interface feature such as query or destination suggestions.

4.1.2 Interface Support

We solicited subjects' opinions on the search support offered by QuerySuggestion, QueryDestination, and SessionDestination. The following Likert scales and semantic differentials were used:

• Likert scale A: "Using this system enhances my effectiveness in finding relevant information." (Effectiveness)⁷
• Likert scale B: "The queries/destinations suggested helped me get closer to my information goal." (CloseToGoal)
• Likert scale C: "I would re-use the queries/destinations suggested if I encountered a similar task in the future." (Re-use)
• Semantic differential A: "The queries/destinations suggested by the system were": "relevant" / "irrelevant", "useful" / "useless", "appropriate" / "inappropriate".

We did not include these in the post-search questionnaire when subjects used the Baseline system, as they refer to interface support options that Baseline did not offer. Table 3 presents the average responses for each of these scales and differentials, using the labels after each of the first three Likert scales in the bulleted list above. The values for the three semantic differentials are included at the bottom of the table, as is their overall
average under "All".

Table 3. Perceptions of system support (lower = better).

The results show that all three experimental systems improved subjects' perceptions of their search effectiveness over Baseline, although only QueryDestination did so significantly.⁸ Further examination of the effect size (measured using Cohen's d) revealed that QueryDestination affects search effectiveness most positively.⁹ QueryDestination also appears to get subjects closer to their information goal (CloseToGoal) than QuerySuggestion or SessionDestination, although only for exploratory search tasks.¹⁰ Additional comments on QuerySuggestion conveyed that subjects saw it as a convenience (to save them typing a reformulation) rather than a way to dramatically influence the outcome of their search. For exploratory searches, users benefited more from being pointed to alternative information sources than from suggestions for iterative refinements of their queries. Our findings also show that our subjects felt that QueryDestination produced more "relevant" and "useful" suggestions for exploratory tasks than the other systems.¹¹ All other observed differences between the systems were not statistically significant.¹² The difference between the performance of QueryDestination and SessionDestination is explained by the approach used to generate destinations (described in Section 2). SessionDestination's recommendations came from the end of users' session trails, which often transcend multiple queries. This increases the likelihood that topic shifts adversely affect their relevance.

4.1.3 System Ranking

In the final questionnaire that followed completion of all tasks on all systems, subjects were asked to rank the four systems in descending order based on their preferences. Table 4 presents the mean average rank assigned to each of the systems.

Table 4. Relative ranking of systems (lower = better).

These results indicate that subjects preferred QuerySuggestion and
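For reference, the effect-size measure cited above (Cohen's d) can be computed as follows. This is a generic textbook formulation with the pooled standard deviation for two independent samples, not the authors' analysis script.

```python
# Minimal sketch of Cohen's d for two independent groups of ratings.
import math


def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference with pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd
```

By convention, |d| around 0.2, 0.5, and 0.8 is read as a small, medium, and large effect, respectively.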
QueryDestination overall. However, none of the differences between the systems' ratings is significant.¹³ One possible explanation for these systems being rated higher could be that although the popular destination systems performed well for exploratory searches while QuerySuggestion performed well for known-item searches, an overall ranking merges these two performances. This relative ranking reflects subjects' overall perceptions, but does not separate them for each task category. Over all tasks there appeared to be a slight preference for QueryDestination, but as other results show, the effect of task type on subjects' perceptions is significant.

The final questionnaire also included open-ended questions that asked subjects to explain their system ranking, and to describe what they liked and disliked about each system:

Baseline: Subjects who preferred Baseline commented on the familiarity of the system (e.g., "was familiar and I didn't end up using suggestions" (S36)). Those who did not prefer this system disliked the lack of support for query formulation ("Can be difficult if you don't pick good search terms" (S20)) and the difficulty of locating relevant documents (e.g., "Difficult to find what I was looking for" (S13); "Clunky current technology" (S30)).

QuerySuggestion: Subjects who rated QuerySuggestion highest commented on its rapid support for query formulation (e.g., "was useful in (1) saving typing (2) coming up with new ideas for query expansion" (S12); "helps me better phrase the search term" (S24); "made my next query easier" (S21)). Those who did not prefer this system criticized suggestion quality (e.g., "Not relevant" (S11); "Popular queries weren't what I was looking for" (S18)) and the quality of the results they led to (e.g., "Results (after clicking on suggestions) were of low quality" (S35); "Ultimately unhelpful" (S1)).

QueryDestination: Subjects who preferred this system commented mainly on its support for accessing new information
sources (e.g., "provided potentially helpful and new areas / domains to look at" (S27)) and bypassing the need to browse to these pages ("Useful to try to 'cut to the chase' and go where others may have found answers to the topic" (S3)). Those who did not prefer this system commented on the lack of specificity in the suggested domains ("Should just link to site-specific query, not site itself" (S16); "Sites were not very specific" (S24); "Too general/vague" (S28)¹⁴) and the quality of the suggestions ("Not relevant" (S11); "Irrelevant" (S6)).

SessionDestination: Subjects who preferred this system commented on the utility of the suggested domains ("suggestions make an awful lot of sense in providing search assistance, and seemed to help very nicely" (S5)). However, more subjects commented on the irrelevance of the suggestions (e.g., "did not seem reliable, not much help" (S30); "Irrelevant, not my style" (S21)) and the related need to include explanations about why the suggestions were offered (e.g., "Low-quality results, not enough information presented" (S35)).

These comments demonstrate a diverse range of perspectives on different aspects of the experimental systems. Work is obviously needed on improving the quality of the suggestions in all systems, but subjects seemed to distinguish the settings in which each of these systems may be useful. Even though all systems can at times offer irrelevant suggestions, subjects appeared to prefer having them rather than not (e.g., one subject remarked "suggestions were helpful in some cases and harmless in all" (S15)).

4.1.4 Summary

The findings obtained from our study of subjects' perceptions of the four systems indicate that subjects tend to prefer QueryDestination for exploratory tasks and QuerySuggestion for known-item searches. Suggestions to incrementally refine the current query may be preferred by searchers on known-item tasks, when they may have just missed their information
target. However, when the task is more demanding, searchers appreciate suggestions that have the potential to dramatically influence the direction of a search or greatly improve topic coverage.

4.2 Search Tasks

To gain a better understanding of how subjects performed during the study, we analyze data captured on their perceptions of task completeness and the time that it took them to complete each task.

4.2.1 Subject Perceptions

In the post-search questionnaire, subjects were asked to indicate on a 5-point Likert scale the extent to which they agreed with the following attitude statement: "I believe I have succeeded in my performance of this task" (Success). In addition, they were asked to complete three 5-point semantic differentials indicating their response to the attitude statement: "The task we asked you to perform was:". The paired stimuli offered as possible responses were "clear" / "unclear", "simple" / "complex", and "familiar" / "unfamiliar". Table 5 presents the mean average response to these statements for each system and task type.

Table 5. Perceptions of task and task success (lower = better).

Subject responses demonstrate that users felt that their searches had been more successful using QueryDestination for exploratory tasks than with the other three systems (i.e., there was a two-way interaction between these two variables).¹⁵ In addition, subjects perceived a significantly greater sense of completion with known-item tasks than with exploratory tasks.¹⁶ Subjects also found known-item tasks to be more "simple", "clear", and "familiar".¹⁷ These responses confirm differences in the nature of the tasks that we had envisaged when planning the study. As illustrated by the examples in Figure 3, the known-item tasks required subjects to retrieve a finite set of answers (e.g., "find three interesting things to do during a weekend visit to Kyoto, Japan"). In contrast, the exploratory tasks were multi-faceted, and required
subjects to find out more about a topic or to find sufficient information to make a decision.\nThe end-point in such tasks was less well-defined and may have affected subjects' perceptions of when they had completed the task.\nGiven that there was no difference in the tasks attempted on each system, theoretically the perception of the tasks' simplicity, clarity, and familiarity should have been the same for all systems.\nHowever, we observe a clear interaction effect between the system and subjects' perception of the actual tasks.\n4.2.2 Task Completion Time\nIn addition to asking subjects to indicate the extent to which they felt the task was completed, we also monitored the time that it took them to indicate to the experimenter that they had finished.\nThe elapsed time from when the subject began issuing their first query until when they indicated that they were done was monitored using a stopwatch and recorded for later analysis.\nA stopwatch rather than system logging was used for this since we wanted to record the time regardless of system interactions.\nFigure 4 shows the average task completion time for each system and each task type.\nFigure 4.\nMean average task completion time (\u00b1 SEM).\nAs can be seen in the figure above, the task completion times for the known-item tasks differ greatly between systems. Subjects attempting these tasks on QueryDestination and QuerySuggestion completed them in less time than subjects on Baseline and SessionDestination. As discussed in the previous section, subjects were more familiar with the known-item tasks, and felt they were simpler and clearer.\nBaseline may have taken longer than the other systems since users had no additional support and had to formulate their own queries.\nSubjects generally felt that the recommendations offered by SessionDestination were of low relevance and usefulness.\nConsequently, the completion time increased slightly between these two systems, perhaps as the subjects assessed the
value of the proposed suggestions, but reaped little benefit from them.\nThe task completion times for the exploratory tasks were approximately equal on all four systems, although the time on Baseline was slightly higher.\nSince these tasks had no clearly defined termination criteria (i.e., the subject decided when they had gathered sufficient information), subjects generally spent longer searching, and consulted a broader range of information sources than in the known-item tasks.\n4.2.3 Summary\nAnalysis of subjects' perception of the search tasks and aspects of task completion shows that the QuerySuggestion system made subjects feel more successful (and the task more \"simple\", \"clear\", and \"familiar\") for the known-item tasks.\nOn the other hand, QueryDestination was shown to lead to heightened perceptions of search success and task ease, clarity, and familiarity for the exploratory tasks.\nTask completion times on both systems were significantly lower than on the other systems for known-item tasks.\n4.3 Subject Interaction\nWe now focus our analysis on the observed interactions between searchers and systems.\nAs well as eliciting feedback on each system from our subjects, we also recorded several aspects of their interaction with each system in log files.\nIn this section, we analyze three interaction aspects: query iterations, search-result clicks, and subject engagement with the additional interface features offered by the three non-baseline systems.\n4.3.1 Queries and Result Clicks\nSearchers typically interact with search systems by submitting queries and clicking on search results.\nAlthough our system offers additional interface affordances, we begin this section by analyzing querying and clickthrough behavior of our subjects to better understand how they conducted core search activities.\nTable 5 shows the average number of query iterations and search results clicked for each system-task pair.\nThe average value in each cell is computed for 18
subjects on each task type and system.\nTable 5.\nAverage query iterations and result clicks (per task).\nAs discussed in the previous section, subjects using QueryDestination felt more successful in their searches, yet they exhibited fewer of the traditional query and result-click interactions required for search success on traditional search systems.\nIt may be the case that subjects' queries on this system were more effective, but it is more likely that they interacted less with the system through these means and elected to use the popular destinations instead.\nOverall, subjects submitted the most queries in QuerySuggestion, which is not surprising as this system actively encourages searchers to iteratively re-submit refined queries.\nSubjects interacted similarly with the Baseline and SessionDestination systems, perhaps due to the low quality of the popular destinations in the latter.\nTo investigate this and related issues, we will next analyze usage of the suggestions on the three non-baseline systems.\n4.3.2 Suggestion Usage\nTo determine whether subjects found the additional features useful, we measure the extent to which they were used when they were provided.\nSuggestion usage is defined as the proportion of submitted queries for which suggestions were offered and at least one suggestion was clicked.\nTable 6 shows the average usage for each system and task category.\nTable 6.\nSuggestion uptake (values are percentages).\nResults indicate that QuerySuggestion was used more for known-item tasks than SessionDestination, and QueryDestination was used more than all other systems for the exploratory tasks. For well-specified targets in known-item search, subjects appeared to use query refinement most heavily.\nIn contrast, when subjects were exploring, they seemed to benefit most from the recommendation of additional information sources.\nSubjects selected almost twice as many destinations per query when using QueryDestination compared to SessionDestination. As discussed
earlier, this may be explained by the lower perceived relevance and usefulness of destinations recommended by SessionDestination.\n4.3.3 Summary\nAnalysis of log interaction data gathered during the study indicates that although subjects submitted fewer queries and clicked fewer search results on QueryDestination, their engagement with suggestions was highest on this system, particularly for exploratory search tasks.\nThe refined queries proposed by QuerySuggestion were used the most for the known-item tasks.\nThere appears to be a clear division between the systems: QuerySuggestion was preferred for known-item tasks, while QueryDestination provided the most-used support for exploratory tasks.\n6.\nCONCLUSIONS\nWe presented a novel approach for enhancing users' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs.\nA user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search.\nResults of our study revealed that: (i) systems suggesting query refinements were preferred for known-item tasks, (ii) systems offering popular destinations were preferred for exploratory search tasks, and (iii) destinations should be mined from the end of query trails, not session trails.\nOverall, popular destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches, offering a new way to resolve information problems and enhancing the information-seeking experience for many Web searchers.","keyphrases":["popular destin","enhanc web search","web search interact","user studi","relat queri","improv queri","retriev perform","inform-seek experi","queri trail","session trail","lookup-base approach","log-base evalu","search destin"],"prmu":["P","P","P","P","P","M","M","U","M","U","U","U","R"]} {"id":"J-11","title":"Trading Networks with Price-Setting Agents","abstract":"In a wide range of
markets, individual buyers and sellers often trade through intermediaries, who determine prices via strategic considerations. Typically, not all buyers and sellers have access to the same intermediaries, and they trade at correspondingly different prices that reflect their relative amounts of power in the market. We model this phenomenon using a game in which buyers, sellers, and traders engage in trade on a graph that represents the access each buyer and seller has to the traders. In this model, traders set prices strategically, and then buyers and sellers react to the prices they are offered. We show that the resulting game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. socially optimal) allocation of goods. We extend these results to a more general type of matching market, such as one finds in the matching of job applicants and employers. Finally, we consider how the profits obtained by the traders depend on the underlying graph -- roughly, a trader can command a positive profit if and only if it has an essential connection in the network structure, thus providing a graph-theoretic basis for quantifying the amount of competition among traders. Our work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system, rather than studying prices set via competitive equilibrium or by a truthful mechanism.","lvl-1":"Trading Networks with Price-Setting Agents Larry Blume Dept. of Economics Cornell University, Ithaca NY lb19@cs.cornell.edu David Easley Dept. of Economics Cornell University, Ithaca NY dae3@cs.cornell.edu Jon Kleinberg Dept. of Computer Science Cornell University, Ithaca NY kleinber@cs.cornell.edu \u00c9va Tardos Dept.
of Computer Science Cornell University, Ithaca NY eva@cs.cornell.edu ABSTRACT In a wide range of markets, individual buyers and sellers often trade through intermediaries, who determine prices via strategic considerations.\nTypically, not all buyers and sellers have access to the same intermediaries, and they trade at correspondingly different prices that reflect their relative amounts of power in the market.\nWe model this phenomenon using a game in which buyers, sellers, and traders engage in trade on a graph that represents the access each buyer and seller has to the traders.\nIn this model, traders set prices strategically, and then buyers and sellers react to the prices they are offered.\nWe show that the resulting game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. socially optimal) allocation of goods.\nWe extend these results to a more general type of matching market, such as one finds in the matching of job applicants and employers.\nFinally, we consider how the profits obtained by the traders depend on the underlying graph - roughly, a trader can command a positive profit if and only if it has an essential connection in the network structure, thus providing a graph-theoretic basis for quantifying the amount of competition among traders.\nOur work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system, rather than studying prices set via competitive equilibrium or by a truthful mechanism.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics, Theory 1.\nINTRODUCTION In a range of settings where markets mediate the interactions of buyers and sellers, one observes several recurring properties: Individual buyers and sellers often trade through intermediaries, not all buyers and sellers have access to the same intermediaries, and 
not all buyers and sellers trade at the same price.\nOne example of this setting is the trade of agricultural goods in developing countries.\nGiven inadequate transportation networks, and poor farmers' limited access to capital, many farmers have no alternative to trading with middlemen in inefficient local markets.\nA developing country may have many such partially overlapping markets existing alongside modern efficient markets [2].\nFinancial markets provide a different example of a setting with these general characteristics.\nIn these markets much of the trade between buyers and sellers is intermediated by a variety of agents ranging from brokers to market makers to electronic trading systems.\nFor many assets there is no one market; trade in a single asset may occur simultaneously on the floor of an exchange, on crossing networks, on electronic exchanges, and in markets in other countries.\nSome buyers and sellers have access to many or all of these trading venues; others have access to only one or a few of them.\nThe price at which the asset trades may differ across these trading venues.\nIn fact, there is no single price, as different traders pay or receive different prices.\nIn many settings there is also a gap between the price a buyer pays for an asset, the ask price, and the price a seller receives for the asset, the bid price.\nOne of the most striking examples of this phenomenon occurs in the market for foreign exchange, where there is an interbank market with restricted access and a retail market with much more open access.\nSpreads, defined as the difference between bid and ask prices, differ significantly across these markets, even though the same asset is being traded in the two markets.\nIn this paper, we develop a framework in which such phenomena emerge from a game-theoretic model of trade, with buyers, sellers, and traders interacting on a network.\nThe edges of the network connect traders to buyers and sellers, and thus represent the access that
different market participants have to one another.\nThe traders serve as intermediaries in a two-stage trading game: they strategically choose bid and ask prices to offer to the sellers and buyers they are connected to; the sellers and buyers then react to the prices they face.\nThus, the network encodes the relative power in the structural positions of the market participants, including the implicit levels of competition among traders.\nWe show that this game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. socially optimal) allocation of goods.\nWe also analyze how trader profits depend on the network structure, essentially characterizing in graph-theoretic terms how a trader's payoff is determined by the amount of competition it experiences with other traders.\nOur work here is connected to several lines of research in economics, finance, and algorithmic game theory, and we discuss these connections in more detail later in the introduction.\nAt a general level, our approach can be viewed as synthesizing two important strands of work: one that treats buyer-seller interaction using network structures, but without attempting to model the processes by which prices are actually formed [1, 4, 5, 6, 8, 9, 10, 13]; and another strand in the literature on market microstructure that incorporates price-setting intermediaries, but without network-type constraints on who can trade with whom [12].\nBy developing a network model that explicitly includes traders as price-setting agents, in a system together with buyers and sellers, we are able to capture price formation in a network setting as a strategic process carried out by intermediaries, rather than as the result of a centrally controlled or exogenous mechanism.\nThe Basic Model: Indistinguishable Goods.\nOur goal in formulating the model is to express the process of price-setting in markets such as those discussed above, where the participants do not all have uniform
access to one another.\nWe are given a set B of buyers, a set S of sellers, and a set T of traders.\nThere is an undirected graph G that indicates who is able to trade with whom.\nAll edges have one end in B \u222a S and the other in T; that is, each edge has the form (i, t) for i \u2208 S and t \u2208 T, or (j, t) for j \u2208 B and t \u2208 T.\nThis reflects the constraint that all buyer-seller transactions go through traders as intermediaries.\nIn the most basic version of the model, we consider identical goods, one copy of which is initially held by each seller.\nBuyers and sellers each have a value for one copy of the good, and we assume that these values are common knowledge.\nWe will subsequently generalize this to a setting in which goods are distinguishable, buyers can value different goods differently, and potentially sellers can value transactions with different buyers differently as well.\nHaving different buyer valuations captures settings like house purchases; adding different seller valuations as well captures matching markets - for example, sellers as job applicants and buyers as employers, with both caring about who ends up with which good (and with traders acting as services that broker the job search).\nThus, to start with the basic model, there is a single type of good; the good comes in indivisible units; and each seller initially holds one unit of the good.\nAll three types of agents value money at the same rate; and each i \u2208 B \u222a S additionally values one copy of the good at \u03b8i units of money.\nNo agent wants more than one copy of the good, so additional copies are valued at 0.\nEach agent has an initial endowment of money that is larger than any individual valuation \u03b8i; the effect of this is to guarantee that any buyer who ends up without a copy of the good has been priced out of the market due to its valuation and network position, not a lack of funds.\nWe picture each good that is sold flowing along a sequence of two
edges: from a seller to a trader, and then from the trader to a buyer.\nThe particular way in which goods flow is determined by the following game.\nFirst, each trader offers a bid price to each seller it is connected to, and an ask price to each buyer it is connected to.\nSellers and buyers then choose from among the offers presented to them by traders.\nIf multiple traders propose the same price to a seller or buyer, then there is no strict best response for the seller or buyer.\nIn this case a selection must be made, and, as is standard (see for example [10]), we (the modelers) choose among the best offers.\nFinally, each trader buys a copy of the good from each seller that accepts its offer, and it sells a copy of the good to each buyer that accepts its offer.\nIf a particular trader t finds that more buyers than sellers accept its offers, then it has committed to provide more copies of the good than it has received, and we will say that this results in a large penalty to the trader for defaulting; the effect of this is that in equilibrium, no trader will choose bid and ask prices that result in a default.\nMore precisely, a strategy for each trader t is a specification of a bid price \u03b2ti for each seller i to which t is connected, and an ask price \u03b1tj for each buyer j to which t is connected.\n(We can also handle a model in which a trader may choose not to make an offer to certain of its adjacent sellers or buyers.)\nEach seller or buyer then chooses at most one incident edge, indicating the trader with whom they will transact, at the indicated price.\n(The choice of a single edge reflects the facts that (a) sellers each initially have only one copy of the good, and (b) buyers each only want one copy of the good.)\nThe payoffs are as follows: For each seller i, the payoff from selecting trader t is \u03b2ti, while the payoff from selecting no trader is \u03b8i.\n(In the former case, the seller receives \u03b2ti units of money, while in the latter it 
keeps its copy of the good, which it values at \u03b8i.)\nFor each buyer j, the payoff from selecting trader t is \u03b8j \u2212 \u03b1tj, while the payoff from selecting no trader is 0.\n(In the former case, the buyer receives the good but gives up \u03b1tj units of money.)\nFor each trader t, with accepted offers from sellers i1, ... , is and buyers j1, ... , jb, the payoff is \u2211r \u03b1tjr \u2212 \u2211r \u03b2tir, minus a penalty \u03c0 if b > s.\nThe penalty is chosen to be large enough that a trader will never incur it in equilibrium, and hence we will generally not be concerned with the penalty.\nThis defines the basic elements of the game.\nThe equilibrium concept we use is subgame perfect Nash equilibrium.\nSome Examples.\nTo help with thinking about the model, we now describe three illustrative examples, depicted in Figure 1.\nTo keep the figures from getting too cluttered, we adopt the following conventions: sellers are drawn as circles in the leftmost column and will be named i1, i2, ... from top to bottom; traders are drawn as squares in the middle column and will be named t1, t2, ... from top to bottom; and buyers are drawn as circles in the rightmost column and will be named j1, j2, ...
from top to bottom.\nAll sellers in the examples will have valuations for the good equal to 0; the valuation of each buyer is drawn inside its circle; and the bid or ask price on each edge is drawn on top of the edge.\nIn Figure 1(a), we show how a standard second-price auction arises naturally from our model.\nSuppose the buyer valuations from top to bottom are w > x > y > z.\nThe bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1, and no other buyer accepts the offer of its adjacent trader: thus, trader t1 receives the good with a bid price of x, and makes w \u2212 x by selling the good to buyer j1 for w.\nIn this way, we can consider this particular instance as an auction for a single good in which the traders act as proxies for their adjacent buyers.\nThe buyer with the highest valuation for the good ends up with it, and the surplus is divided between the seller and the associated trader.\nNote that one can construct a k-unit auction with > k buyers just as easily, by building a complete bipartite graph on k sellers and traders, and then attaching each trader to a single distinct buyer.\nFigure 1: (a) An auction, mediated by traders, in which the buyer with the highest valuation for the good ends up with it.\n(b) A network in which the middle seller and buyer benefit from perfect competition between the traders, while the other sellers and buyers have no power due to their position in the network.\n(c) A form of implicit perfect competition: all bid\/ask spreads will be zero in equilibrium, even though no trader directly competes with any other trader for the same buyer-seller pair.\nIn Figure 1(b), we show how nodes with different positions in the network topology can achieve different payoffs, even when all buyer valuations are the same
numerically.\nSpecifically, seller i2 and buyer j2 occupy powerful positions, because the two traders are competing for their business; on the other hand, the other sellers and buyers are in weak positions, because they each have only one option.\nAnd indeed, in every equilibrium, there is a real number x \u2208 [0, 1] such that both traders offer bid and ask prices of x to i2 and j2 respectively, while they offer bids of 0 and asks of 1 to the other sellers and buyers.\nThus, this example illustrates a few crucial ingredients that we will identify at a more general level shortly.\nSpecifically, i2 and j2 experience the benefits of perfect competition, in that the two traders drive the bid-ask spreads to 0 in competing for their business.\nOn the other hand, the other sellers and buyers experience the downsides of monopoly - they receive 0 payoff since they have only a single option for trade, and the corresponding trader makes all the profit.\nNote further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents - capturing the fact that there is no one fixed price in the kinds of markets that motivate the model, but rather different prices reflecting the relative power of the different agents involved.\nThe previous example shows perhaps the most natural way in which a trader's profit on a particular transaction can drop to 0: when there is another trader who can replicate its function precisely.\n(In that example, two traders each had the ability to move a copy of the good from i2 to j2.)\nBut as our subsequent results will show, traders make zero profit more generally due to global, graph-theoretic reasons.\nThe example in Figure 1(c) gives an initial indication of this: one can show that for every equilibrium, there is a y \u2208 [0, 1] such that every bid and every ask price is equal to y.\nIn other words, all traders make zero profit, whether or not a copy of the good passes through them - and yet,
no two traders have any seller-buyer paths in common.\nThe price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents; this is an example of implicit perfect competition determined by the network topology.\nExtending the Model to Distinguishable Goods.\nWe extend the basic model to a setting with distinguishable goods, as follows.\nInstead of having each agent i \u2208 B \u222a S have a single numerical valuation \u03b8i, we index valuations by pairs of buyers and sellers: if buyer j obtains the good initially held by seller i, it gets a utility of \u03b8ji, and if seller i sells its good to buyer j, it experiences a loss of utility of \u03b8ij.\nThis generalizes the case of indistinguishable goods, since we can always have these pairwise valuations depend only on one of the indices.\nA strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer, and offering an ask to each buyer that specifies both a price and a seller.\n(We can also handle a model in which a trader offers bids (respectively, asks) in the form of vectors, essentially specifying a menu with a price attached to each buyer (resp.\nseller).)\nEach buyer and seller selects an offer from an adjacent trader, and the payoffs to all agents are determined as before.\nThis general framework captures matching markets [10, 13]: for example, a job market that is mediated by agents or employment search services (as in hiring for corporate executives, or sports or entertainment figures).\nHere the sellers are job applicants, buyers are employers, and traders are the agents that mediate the job market.\nOf course, if one specifies pairwise valuations on buyers but just single valuations for sellers, we model a setting where buyers can distinguish among the goods, but sellers don't care whom they sell to - this (roughly) captures settings like housing markets.\nOur Results.\nOur results will identify general forms
of some of the principles noted in the examples discussed above - including the question of which buyers end up with the good; the question of how payoffs are differently realized by sellers, traders, and buyers; and the question of what structural properties of the network determine whether the traders will make positive profits.\nTo make these precise, we introduce the following notation.\nAny outcome of the game determines a final allocation of goods to some of the agents; this can be specified by a collection M of triples (ie, te, je), where ie \u2208 S, te \u2208 T, and je \u2208 B; moreover, each seller and each buyer appears in at most one triple.\nThe meaning is that for each e \u2208 M, the good initially held by ie moves to je through te.\n(Sellers appearing in no triple keep their copy of the good.)\nWe say that the value of the allocation is equal to \u2211e\u2208M (\u03b8jeie \u2212 \u03b8ieje).\nLet \u03b8\u2217 denote the maximum value of any allocation M that is feasible given the network.\nWe show that every instance of our game has an equilibrium, and that in every such equilibrium, the allocation has value \u03b8\u2217; in other words, it achieves the best value possible.\nThus, equilibria in this model are always efficient, in that the market enables the right set of people to get the good, subject to the network constraints.\nWe establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network; the dual of this linear program contains enough information to extract equilibrium prices.\nBy the definition of the game, the value of the equilibrium allocation is divided up as payoffs to the agents, and it is interesting to ask how this value is distributed - in particular, how much profit a trader is able to make based on its position in the network.\nWe find that, although all equilibria have the same value, a given trader's payoff can vary across different equilibria.\nHowever, we are
able to characterize the maximum and minimum amounts that a given trader is able to make, where these maxima and minima are taken over all equilibria, and we give an efficient algorithm to compute this.\nIn particular, our results here imply a clean combinatorial characterization of when a given trader t can achieve non-zero payoff: this occurs if and only if there is some edge e incident to t that is essential, in the sense that deleting e reduces the value of the optimal allocation \u03b8\u2217.\nWe also obtain results for the sum of all trader profits.\nRelated Work.\nThe standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model in which anonymous buyers and sellers trade a good at a single market-clearing price.\nThis reduced form of trade, built on the idealization of a market price, is a powerful model which has led to many insights.\nBut it is not a good model to use to examine where prices come from or exactly how buyers and sellers trade with each other.\nThe difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other.\nIn fact there is no market, in the everyday sense of that word, in the Walrasian model.\nThat is, there is no physical or virtual place where buyers and sellers interact to trade and set prices.\nThus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries.\nThere are several literatures in economics and finance which examine how prices are set rather than just determining equilibrium prices.\nThe literature on imperfect competition is perhaps the oldest of these.\nHere a monopolist, or a group of oligopolists, chooses prices in order to maximize profits (see [14] for the standard textbook treatment of these markets).\nA monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates.\nOligopolists
play a game in which their payoffs depend on market demand and the actions of their competitors.\nIn this literature there are agents who set prices, but the fiction of a single market is maintained.\nIn the equilibrium search literature, firms set prices and consumers search over them (see [3]).\nConsumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries.\nIn the general equilibrium literature there have been various attempts to introduce price determination.\nA standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand.\nThe Walrasian auctioneer is often introduced as a device to explain how this process works, but this is fundamentally a metaphor for an iterative price-updating algorithm, not for the internals of an actual market.\nMore sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them.\nBut again there are no price-setting agents here.\nIn the finance literature the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory).\nWork in information economics has identified similar phenomena (see e.g. [7]).\nBut there is little research in these literatures examining the effect of restrictions on who can trade with whom.\nThere have been several approaches to studying how network structure determines prices.\nThese have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms.\nIn briefly reviewing this work, we will note the contrast with our approach, in that we model prices as arising from the strategic behavior of agents in the system.\nIn recent work, Kakade et al.
[8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11]. Even-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium. Leonard [10], Babaioff et al. [1], and Chu and Shen [4] consider an approach based on mechanism design: buyers and sellers reside at different nodes in a graph, and they incur a given transportation cost to trade with one another. Leonard studies VCG prices in this setting; Babaioff et al. and Chu and Shen additionally provide a budget-balanced mechanism. Since the concern here is with truthful mechanisms that operate on private valuations, there is an inherent trade-off between the efficiency of the allocation and the budget-balance condition. In contrast, our model has known valuations and prices arising from the strategic behavior of traders. Thus, the assumptions behind our model are in a sense not directly comparable to those underlying the mechanism design approach: while we assume known valuations, we do not require a centralized authority to impose a mechanism. Rather, price-setting is part of the strategic outcome, as in the real markets that motivate our work, and our equilibria are simultaneously budget-balanced and efficient, something not possible in the mechanism design frameworks that have been used.

Demange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design. Kranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices. Their auction has desirable equilibrium properties, but as Kranton and Minehart note, it is an abstraction of how goods are allocated and
prices are determined that is similar in spirit to the Walrasian auctioneer abstraction. In fact, we can show how the basic model of Kranton and Minehart can be encoded as an instance of our game, with traders producing prices at equilibrium matching the prices produced by their auction mechanism.¹

(¹ Kranton and Minehart, however, can also analyze a more general setting in which buyers' values are private, and thus buyers and sellers play a game of incomplete information. We deal only with complete information.)

Finally, the classic results of Shapley and Shubik [13] on the assignment game can be viewed as studying the result of trade on a bipartite graph in terms of the core. They study the dual of a linear program based on the matching problem, similar to what we use for a reduced version of our model in the next section, but their focus is different, as they do not consider agents that seek to set prices.

2. MARKETS WITH PAIR-TRADERS

To understand the ideas behind the analysis of the general model, it is very useful to first consider a special case with a restricted form of traders that we refer to as pair-traders. In this case, each trader is connected to just one buyer and one seller. (Thus, it essentially serves as a trade route between the two.) The techniques we develop to handle this case will form a useful basis for reasoning about the case of traders that may be connected arbitrarily to the sellers and buyers. We will relate profits in a subgame perfect Nash equilibrium to optimal solutions of a certain linear program, use this relation to show that all equilibria result in efficient allocation of the goods, and show that a pure equilibrium always exists.

First, we consider the simplest model, where sellers have indistinguishable items and each buyer is interested in getting one item. Then we extend the results to the more general case of a matching market, as discussed in the previous section, where valuations depend on the identity of
the seller and buyer. We then characterize the minimum and maximum profits traders can make. In the next section, we extend the results to traders that may be connected to any subset of sellers and buyers.

Given that we are working with pair-traders in this section, we can represent the problem using a bipartite graph G whose node set is B ∪ S, and where each trader t, connecting seller i and buyer j, appears as an edge t = (i, j) in G. Note, however, that we allow multiple traders to connect the same pair of agents. For each buyer and seller i, we will use adj(i) to denote the set of traders who can trade with i.

2.1 Indistinguishable Goods

The socially optimal trade for the case of indistinguishable goods is the solution of the transportation problem: sending goods along the edges representing the traders. The edges along which trade occurs correspond to a matching in this bipartite graph, and the optimal trade is described by the following linear program:

  max SV(x) = Σ_{t=(i,j)∈T} x_t (θ_j − θ_i)
  subject to
    x_t ≥ 0                for all t ∈ T
    Σ_{t∈adj(i)} x_t ≤ 1   for all i ∈ S
    Σ_{t∈adj(j)} x_t ≤ 1   for all j ∈ B

Next we consider an equilibrium. Each trader t = (i, j) must offer a bid β_t and an ask α_t. (We omit the subscript denoting the seller and buyer here, since we are dealing with pair-traders.) Given the bid and ask price, the agents react to these prices, as described earlier. Instead of focusing on prices, we will focus on profits. If a seller i sells to a trader t ∈ adj(i) with bid β_t, then his profit is p_i = β_t − θ_i. Similarly, if a buyer j buys from a trader t ∈ adj(j) with ask α_t, then his profit is p_j = θ_j − α_t. Finally, if a trader t trades with ask α_t and bid β_t, then his profit is y_t = α_t − β_t. All agents not involved in trade make 0 profit. We will show that the profits at equilibrium are an optimal solution to the following
linear program:

  min sum(p, y) = Σ_{i∈B∪S} p_i + Σ_{t∈T} y_t
  subject to
    y_t ≥ 0                              for all t ∈ T
    p_i ≥ 0                              for all i ∈ S ∪ B
    y_t ≥ (θ_j − p_j) − (θ_i + p_i)      for all t = (i, j) ∈ T

LEMMA 2.1. At equilibrium the profits must satisfy the above inequalities.

Proof. Clearly all profits are nonnegative, as trading is optional for all agents. To see why the last set of inequalities holds, consider two cases separately. For a trader t who conducted trade, we get equality by definition. For any other trader t = (i, j), the value p_i + θ_i is the price that seller i sold for (or θ_i if seller i decided to keep the good). Offering a bid β_t > p_i + θ_i would get the seller to sell to trader t. Similarly, θ_j − p_j is the price that buyer j bought for (or θ_j if he didn't buy), and for any ask α_t < θ_j − p_j, the buyer will buy from trader t. So unless θ_j − p_j ≤ θ_i + p_i, the trader has a profitable deviation.

Now we are ready to prove our first theorem.

THEOREM 2.2. In any equilibrium the trade is efficient.

Proof. Let x be a flow of goods resulting in an equilibrium, and let the variables p and y be the profits. Consider the linear program describing the socially optimal trade. We will also add a set of additional constraints x_t ≤ 1 for all traders t ∈ T; this can be added to the description, as it is implied by the other constraints. Now we claim that the two linear programs are duals of each other. The variables p_i for agents B ∪ S correspond to the inequalities Σ_{t∈adj(i)} x_t ≤ 1. The additional dual variable y_t corresponds to the additional inequality x_t ≤ 1. The optimality of the social value of the trade will follow from the claim that the solutions of these two linear programs derived from an equilibrium satisfy the complementary slackness conditions for this pair of linear programs, and hence both x and (p, y) are optimal solutions to the
corresponding linear programs. There are three different complementary slackness conditions we need to consider, corresponding to the three sets of variables x, y and p. Any agent can only make profit if he transacts, so p_i > 0 implies Σ_{t∈adj(i)} x_t = 1, and similarly, y_t > 0 implies that x_t = 1 as well. Finally, consider a trader t with x_t > 0 that trades between seller i and buyer j, and recall that we have seen above that the inequality y_t ≥ (θ_j − p_j) − (θ_i + p_i) is satisfied with equality for those who trade.

Next we argue that equilibria always exist.

THEOREM 2.3. For any efficient trade between buyers and sellers there is a pure equilibrium of bid-ask values that supports this trade.

Proof. Consider an efficient trade; let x_t = 1 if t trades and 0 otherwise, and consider an optimal solution (p, y) to the dual linear program. We would like to claim that all dual solutions correspond to equilibrium prices, but unfortunately this is not exactly true. Before we can convert a dual solution to equilibrium prices, we may need to modify the solution slightly, as follows. Consider any agent i that is connected to only a single trader t. Because the agent is connected to only a single trader, the variables y_t and p_i are dual variables corresponding to the same primal inequality x_t ≤ 1; they always appear together as y_t + p_i in all inequalities and also in the objective function. Thus there is an optimal solution in which p_i = 0 for all agents i connected to only a single trader.

Assume (p, y) is a dual solution in which agents connected to only one trader have p_i = 0. For a seller i, let β_t = θ_i + p_i be the bid for all traders t adjacent to i.
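As a concrete illustration of the primal-dual pair used in these proofs, the following Python sketch solves both linear programs for a small instance with scipy and checks that the optimal social value equals the minimum total profit (strong duality). The valuations and trader edges are invented for illustration:

```python
from scipy.optimize import linprog

# Hypothetical instance: seller valuations, buyer valuations, pair-traders as edges.
theta_s = [0.0, 0.2]
theta_b = [1.0, 0.8]
edges = [(0, 0), (0, 1), (1, 1)]   # trader t = (seller i, buyer j)
ns, nb, nt = len(theta_s), len(theta_b), len(edges)

# Primal: max sum_t x_t (theta_j - theta_i) subject to unit capacities
# (linprog minimizes, so we negate the objective).
c_p = [-(theta_b[j] - theta_s[i]) for (i, j) in edges]
A_p = [[1.0 if e[0] == i else 0.0 for e in edges] for i in range(ns)] + \
      [[1.0 if e[1] == j else 0.0 for e in edges] for j in range(nb)]
primal = linprog(c_p, A_ub=A_p, b_ub=[1.0] * (ns + nb),
                 bounds=[(0, None)] * nt)

# Dual: min sum_i p_i + sum_t y_t  s.t.  p_i + p_j + y_t >= theta_j - theta_i,
# which is the constraint y_t >= (theta_j - p_j) - (theta_i + p_i) rearranged.
c_d = [1.0] * (ns + nb + nt)
A_d, b_d = [], []
for t, (i, j) in enumerate(edges):
    row = [0.0] * (ns + nb + nt)
    row[i] = row[ns + j] = row[ns + nb + t] = -1.0
    A_d.append(row)
    b_d.append(-(theta_b[j] - theta_s[i]))
dual = linprog(c_d, A_ub=A_d, b_ub=b_d, bounds=[(0, None)] * (ns + nb + nt))

# Strong duality: optimal social value equals minimum total profit.
assert abs(-primal.fun - dual.fun) < 1e-6
```

Here the optimal trade matches seller 0 with buyer 0 and seller 1 with buyer 1, for a social value of 1.6, and the dual optimum coincides.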
Similarly, for each buyer j, let α_t = θ_j − p_j be the ask for all traders t adjacent to j. We claim that this set of bids and asks, together with the trade x, forms an equilibrium. To see why, note that all traders t adjacent to a seller or buyer i offer the same ask or bid, and so trading with any trader is equally good for agent i. Also, if i is not trading in the solution x, then by complementary slackness p_i = 0, and hence not trading is also equally good for i. This shows that sellers and buyers don't have an incentive to deviate.

We need to show that traders have no incentive to deviate either. When a trader t is trading with seller i and buyer j, a profitable deviation would involve increasing α_t or decreasing β_t. But by our construction (and the assumption about monopolized agents), all sellers and buyers have multiple identical ask/bid offers, or trade is occurring at valuation. In either case such a deviation cannot be successful. Finally, consider a trader t = (i, j) who doesn't trade. A deviation for t would involve offering a higher bid to seller i and a lower ask to buyer j than their current trades. However, y_t = 0 by complementary slackness, and hence p_i + θ_i ≥ θ_j − p_j, so i sells for a price at least as high as the price at which j buys, and trader t cannot create profitable trade.

Note that a seller or buyer i connected to a single trader t cannot have profit at equilibrium, so possible equilibrium profits are in one-to-one correspondence with dual solutions for which p_i = 0 whenever i is monopolized by one trader.

A disappointing feature of the equilibrium created by this proof is that some traders t may have to create ask-bid pairs with β_t > α_t, offering to buy for more than the price at which they are willing to sell. Agents that make such crossing bid-ask pairs never actually perform a trade, so it does not result in negative profit for the agent, but such pairs are
unnatural. Crossing bid-ask pairs are weakly dominated by the strategy of offering a low bid β = 0 and an extremely high ask, guaranteeing that neither is accepted. To formulate a way of avoiding such crossing pairs, we say an equilibrium is cross-free if α_t ≥ β_t for all traders t. We now show there is always a cross-free equilibrium.

THEOREM 2.4. For any efficient trade between buyers and sellers there is a pure cross-free equilibrium.

Proof. Consider an optimal solution to the dual linear program. To get an equilibrium without crossing bids, we need to perform a more general modification than just assuming that p_i = 0 for all sellers and buyers connected to only a single trader. Let E be the set of edges t = (i, j) that are tight, in the sense that we have the equality y_t = (θ_j − p_j) − (θ_i + p_i). This set E contains all the edges where trade occurs, and possibly some more edges. We want to make sure that p_i = 0 for all sellers and buyers that have degree at most 1 in E.
Consider a seller i with p_i > 0. We must have i involved in a trade, and the edge t = (i, j) along which the trade occurs must be tight. Suppose this is the only tight edge adjacent to agent i; then we can decrease p_i and increase y_t until one of the following happens: either p_i = 0, or the constraint of some other trader t' ∈ adj(i) becomes tight. This change only increases the set of tight edges E, keeps the solution feasible, and does not change the objective function value. So after doing this for all sellers, and analogously changing y_t and p_j for all buyers, we get an optimal solution in which every seller and buyer i either has p_i = 0 or has at least two adjacent tight edges.

Now we can set asks and bids to form a cross-free equilibrium. For each trader t = (i, j) associated with an edge t ∈ E we set α_t and β_t as before: the bid β_t = p_i + θ_i and the ask α_t = θ_j − p_j. For a trader t = (i, j) ∉ E we have p_i + θ_i > θ_j − p_j, and we set α_t = β_t to be any value in the range [θ_j − p_j, p_i + θ_i]. This guarantees that for each seller or buyer the best sell or buy offer is along the edge where trade occurs in the solution. The ask-bid values along the tight edges guarantee that traders who trade cannot increase their spread. Traders t = (i, j) who do not trade cannot make profit, due to the constraint p_i + θ_i ≥ θ_j − p_j.

[Figure 2: Left (a): an equilibrium with crossing bids where traders make no money. Right (b): an equilibrium without crossing bids, for any value x ∈ [0, 1]; total trader profit ranges between 1 and 2.]

2.2 Distinguishable Goods

We now consider the case of distinguishable goods. As in the previous section, we can write a transshipment linear program for the socially optimal trade, the only change being in the objective function:

  max SV(x) = Σ_{t=(i,j)∈T} x_t (θ_ji − θ_ij)

We can show that the dual of this linear program corresponds to trader profits. Recall that we needed to add the constraints x_t ≤ 1 for all traders. The dual is then:

  min sum(p, y) = Σ_{i∈B∪S} p_i + Σ_{t∈T} y_t
  subject to
    y_t ≥ 0                               for all t ∈ T
    p_i ≥ 0                               for all i ∈ S ∪ B
    y_t ≥ (θ_ji − p_j) − (θ_ij + p_i)     for all t = (i, j) ∈ T

It is not hard to extend the proofs of Theorems 2.2-2.4 to this case. Profits in an equilibrium satisfy the dual constraints, and profits and trade satisfy complementary slackness. This shows that trade is socially optimal. Taking an optimal dual solution in which p_i = 0 for all agents that are monopolized, we can convert it to an equilibrium, and with a bit more care, we can also create an equilibrium with no crossing bid-ask pairs.

THEOREM 2.5. All equilibria for the case of pair-traders with distinguishable goods result in socially optimal trade. Pure non-crossing equilibria exist.

2.3 Trader Profits

We have seen that all equilibria are efficient. However, it turns out that equilibria may differ in how the value of the allocation is spread between the sellers, buyers and traders. Figure 2 depicts a simple example of this phenomenon. Our goal is to understand how a trader's profit is affected by its position in the network; we will use the characterization we obtained to work out the range of profits a trader can make. To maximize the profit of a trader t (or a subset of traders T'), all we need to do is find an optimal solution to the dual linear program maximizing the value of y_t (or the sum Σ_{t∈T'} y_t). Such dual solutions will then correspond to equilibria with non-crossing prices.

THEOREM 2.6. For any trader t or subset of traders T', the maximum total profit they can make in any equilibrium can be computed in polynomial time. This maximum profit can be obtained by a non-crossing equilibrium.

One way to think
about the profit of a trader t = (i, j) is as a subtraction from the value of the corresponding edge (i, j). The value of the edge is the social value θ_ji − θ_ij if the trader makes no profit, and decreases to θ_ji − θ_ij − y_t if the trader t insists on making a profit of y_t. Trader t gets profit y_t in equilibrium if, after this decrease in the value of the edge, the edge is still included in the optimal transshipment.

THEOREM 2.7. A trader t can make profit in an equilibrium if and only if t is essential for the social welfare, that is, if deleting agent t decreases the social welfare. The maximum profit he can make is exactly his value to society, that is, the increase his presence causes in the social welfare.

If we allow crossing equilibria, then we can also find the minimum possible profit. Recall that in the proof of Theorem 2.3, traders only made money off of sellers or buyers that they have a monopoly over. Allowing such equilibria with crossing bids, we can find the minimum profit a trader or set of traders can make by minimizing the value y_t (or the sum Σ_{t∈T'} y_t) over all optimal solutions that satisfy p_i = 0 whenever i is connected to only a single trader.

THEOREM 2.8. For any trader t or subset of traders T', the minimum total profit they can make in any equilibrium can be computed in polynomial time.

3. GENERAL TRADERS

Next we extend the results to a model where traders may be connected to an arbitrary number of sellers and buyers. For a trader t ∈ T we will use S(t) and B(t) to denote the sets of sellers and buyers connected to trader t. In this section we focus on the general case where goods are distinguishable (i.e.
both buyers and sellers have valuations that are sensitive to the identity of the agent they are paired with in the allocation). In the full version of the paper we also discuss the special case of indistinguishable goods in more detail.

To get the optimal trade, we consider the bipartite graph G = (S ∪ B, E) connecting sellers and buyers, where an edge e = (i, j) connects a seller i and a buyer j if there is a trader adjacent to both: E = {(i, j) : adj(i) ∩ adj(j) ≠ ∅}. On this graph, we then solve the instance of the assignment problem that was also used in Section 2.2, with the value of edge (i, j) equal to θ_ji − θ_ij (since the value of trading between i and j is independent of which trader conducts the trade). We will also use the dual of this linear program:

  min val(z) = Σ_{i∈B∪S} z_i
  subject to
    z_i ≥ 0                    for all i ∈ S ∪ B
    z_i + z_j ≥ θ_ji − θ_ij    for all i ∈ S, j ∈ B with adj(i) ∩ adj(j) ≠ ∅

3.1 Bids and Asks and Trader Optimization

First we need to understand what bidding model we will use. Even when goods are indistinguishable, a trader may want to price-discriminate, and offer different bid and ask values to different sellers and buyers. In the case of distinguishable goods, we have to deal with a further complication: the trader has to name the good she is proposing to sell or buy, and can possibly offer multiple different products. There are two variants of our model, depending on whether a trader makes a single bid or ask to each seller or buyer, or offers a menu of options.

(i) A trader t can offer a buyer j a menu of asks α_tji, a vector of values for all the products that she is connected to, where α_tji is the ask for the product of seller i. Symmetrically, a trader t can offer each seller i a menu of bids β_tij for selling to different buyers j.
(ii) Alternatively, we can require that each trader t make at most one ask to each seller and at most one bid to each buyer, where an ask has to name the product sold, and a bid has to name the particular buyer to sell to.

Our results hold in either model. For notational simplicity we will use the menu option here.

Next we need to understand the optimization problem of a trader t. Suppose we have bid and ask values for all other traders t' ∈ T, t' ≠ t. What are the best bid and ask offers trader t can make as a best response to the current set of bids and asks? For each seller i let p_i be the maximum profit seller i can make using bids by other traders, and symmetrically let p_j be the maximum profit buyer j can make using asks by other traders (let p_i = 0 for any seller or buyer i who cannot make profit). Now consider a seller-buyer pair (i, j) that trader t can connect. Trader t will have to make a bid of at least β_tij = θ_ij + p_i to seller i and an ask of at most α_tji = θ_ji − p_j to buyer j to get this trade, so the maximum profit she can make on this trade is v_tij = α_tji − β_tij = θ_ji − p_j − (θ_ij + p_i). The optimal trade for trader t is obtained by solving a matching problem: find the matching between the sellers S(t) and buyers B(t) that maximizes the total value v_tij for trader t.

We will need the dual of the linear program for finding the trade of maximum profit for the trader t. We will use q_ti as the dual variable associated with the constraint of seller or buyer i. The dual is then the following problem:

  min val(q_t) = Σ_{i∈B(t)∪S(t)} q_ti
  subject to
    q_ti ≥ 0              for all i ∈ S(t) ∪ B(t)
    q_ti + q_tj ≥ v_tij   for all i ∈ S(t), j ∈ B(t)

We view q_ti as the profit made by t from trading with seller or buyer i.
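The trader's best-response computation just described is a maximum-weight matching between S(t) and B(t). A minimal sketch with scipy, on an invented instance (the valuations θ_ij, θ_ji and outside-option profits p below are hypothetical): pairs with non-positive margin are clipped to value 0, which is equivalent to the trader declining that trade.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical instance for one trader t with two sellers and two buyers.
theta_ij = np.array([[0.0, 0.1],   # seller i's valuation when selling to buyer j
                     [0.3, 0.2]])
theta_ji = np.array([[1.0, 0.7],   # buyer j's valuation for seller i's good,
                     [0.9, 0.4]])  # indexed [i, j] for convenience
p_seller = np.array([0.1, 0.0])    # best profits available from other traders
p_buyer = np.array([0.2, 0.1])

# v_tij = (theta_ji - p_j) - (theta_ij + p_i); clip at 0, since the trader
# can always skip a pair, which is equivalent to matching it at value 0.
v = theta_ji - p_buyer[None, :] - (theta_ij + p_seller[:, None])
v = np.clip(v, 0.0, None)

rows, cols = linear_sum_assignment(v, maximize=True)
best_profit = v[rows, cols].sum()
```

For this instance the margin matrix is [[0.7, 0.4], [0.4, 0.1]], and the trader's best response earns a profit of 0.8.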
Theorem 3.1 summarizes the above discussion.

THEOREM 3.1. For a trader t, given the lowest bids β_tij and highest asks α_tji that can be accepted for sellers i ∈ S(t) and buyers j ∈ B(t), the best trade t can make is the maximum-value matching between S(t) and B(t) with value v_tij = α_tji − β_tij on the edge (i, j). This maximum value is equal to the minimum of the dual linear program above.

3.2 Efficient Trade and Equilibrium

Now we can prove that trade at equilibrium is always efficient.

THEOREM 3.2. Every equilibrium results in an efficient allocation of the goods.

Proof. Consider an equilibrium, with x_e = 1 if and only if trade occurs along edge e = (i, j). The trade is a solution to the transshipment linear program used in Section 2.2. Let p_i denote the profit of seller or buyer i. Each trader t currently has the best solution to his own optimization problem. A trader t finds his optimal trade (given the bids and asks of all other traders) by solving a matching problem. Let q_ti for i ∈ B(t) ∪ S(t) denote the optimal dual solution to this matching problem, as described by Theorem 3.1.

When setting up the optimization problem for a trader t above, we used p_i to denote the maximum profit i can make without the offer of trader t. Note that this p_i is exactly the same p_i we use here, the profit of agent i. This is clearly true for all traders t that are not trading with i in the equilibrium. To see why it is true for the trader t that i is trading with, we use the fact that the current set of bid-ask values is an equilibrium: if for any agent i the bid or ask of trader t were the unique best option, then t could extract more profit by offering a slightly larger ask or a slightly smaller bid, a contradiction.

We show the trade x is optimal by considering the dual solution z_i = p_i + Σ_t q_ti for all agents i ∈ B ∪ S. We claim z is a dual solution, and it satisfies complementary slackness with trade x.
To see this we need to establish a few facts. We need that z_i > 0 implies that i trades. If z_i > 0, then either p_i > 0 or q_ti > 0 for some trader t. Agent i can only make profit p_i > 0 if he is involved in a trade. If q_ti > 0 for some t, then trader t must trade with i: his solution is optimal, and by complementary slackness for the dual solution, q_ti > 0 implies that t trades with i.

For an edge (i, j) associated with a trader t, we need to show the dual solution is feasible, that is, z_i + z_j ≥ θ_ji − θ_ij. Recall v_tij = θ_ji − p_j − (θ_ij + p_i), and the dual constraint of the trader's optimization problem requires q_ti + q_tj ≥ v_tij. Putting these together, we have z_i + z_j ≥ p_i + q_ti + p_j + q_tj ≥ v_tij + p_i + p_j = θ_ji − θ_ij.

Finally, we need to show that the trade variables x also satisfy the complementary slackness constraint: when x_e > 0 for an edge e = (i, j), the corresponding dual constraint is tight. Let t be the trader involved in the trade. By complementary slackness of t's optimization problem we have q_ti + q_tj = v_tij. To see that z satisfies complementary slackness, we need to argue that for all other traders t' ≠ t we have both q_t'i = 0 and q_t'j = 0. This is true because q_t'i > 0 would imply, by complementary slackness of t''s optimization problem, that t' must trade with i at its optimum, while it is t ≠ t' that is trading with i.

Next we want to show that a non-crossing equilibrium always exists. We call an equilibrium non-crossing if the bid-ask offers a trader t makes for a seller-buyer pair (i, j) never cross, that is, β_tij ≤ α_tji for all t, i, j.
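The essentiality theme running through these profit results (Theorem 2.7 for pair-traders, and Theorem 3.5 below for general traders) can be illustrated numerically: a pair-trader's maximum equilibrium profit equals the welfare decrease caused by deleting it. The following sketch reuses the transportation LP on an invented instance; the valuations and edges are hypothetical.

```python
from scipy.optimize import linprog

# Hypothetical pair-trader instance: seller/buyer valuations and trader edges.
theta_s = [0.0, 0.2]
theta_b = [1.0, 0.8]
edges = [(0, 0), (0, 1), (1, 1)]   # trader t = (seller i, buyer j)

def welfare(es):
    # Optimal social value of trade restricted to the trader edges in es.
    c = [-(theta_b[j] - theta_s[i]) for (i, j) in es]
    A = [[1.0 if e[0] == i else 0.0 for e in es] for i in range(len(theta_s))]
    A += [[1.0 if e[1] == j else 0.0 for e in es] for j in range(len(theta_b))]
    res = linprog(c, A_ub=A, b_ub=[1.0] * len(A), bounds=[(0, None)] * len(es))
    return -res.fun

def max_profit(t):
    # Theorem 2.7: a pair-trader's maximum equilibrium profit equals the
    # welfare decrease caused by deleting it from the network.
    return welfare(edges) - welfare(edges[:t] + edges[t + 1:])
```

In this instance trader (0, 0) is essential (deleting it drops welfare from 1.6 to 0.8, so it can earn up to 0.8), while trader (0, 1) is inessential and earns nothing in any equilibrium.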
THEOREM 3.3. There exists a non-crossing equilibrium supporting any socially optimal trade.

Proof. Consider an optimal trade x and a dual solution z as before. To find a non-crossing equilibrium we need to divide the profit z_i between i and the trader t trading with i. We will use q_ti as trader t's profit associated with agent i, for any i ∈ S(t) ∪ B(t). We will need to guarantee the following properties:

Trader t trades with agent i whenever q_ti > 0. This is one of the complementary slackness conditions ensuring that the current trade is optimal for trader t.

For all seller-buyer pairs (i, j) that a trader t can trade with, we have

  p_i + q_ti + p_j + q_tj ≥ θ_ji − θ_ij,    (1)

which makes sure that q_t is a feasible dual solution for the optimization problem faced by trader t.

We need equality in (1) when trader t is trading between i and j. This is one of the complementary slackness conditions for trader t, and ensures that the trade of t is optimal for the trader.

Finally, we want to arrange that each agent i with p_i > 0 has multiple offers for making profit p_i, and that the trade occurs at one of his best offers. To guarantee this in the corresponding bids and asks, we need to make sure that whenever p_i > 0 there are multiple traders t ∈ adj(i) that achieve equality in constraint (1).

We start by setting p_i = z_i for all i ∈ S ∪ B, and q_ti = 0 for all i ∈ S ∪ B and traders t ∈ adj(i). This guarantees all invariants except the last property, about multiple traders t ∈ adj(i) having equality in (1). We will modify p and q to gradually enforce the last condition, while maintaining the others. Consider a seller with p_i > 0. By optimality of the trade and the dual solution z, seller i must trade with some trader t, and that trader will have equality in (1) for the buyer j that he matches with i. If this is the only trader t that has a tight constraint in (1) involving seller i, then we increase q_ti
and decrease p_i until either p_i = 0 or another trader t' ≠ t achieves equality in (1) for some edge adjacent to i (possibly with a different buyer j'). This change maintains all invariants, and increases the set of sellers that also satisfy the last constraint. We can make a similar change for a buyer j that has p_j > 0 and only one trader t with a tight constraint (1) adjacent to j. After possibly repeating this for all sellers and buyers, we get profits satisfying all constraints.

Now we obtain equilibrium bid and ask values as follows. For a trader t that has equality for the seller-buyer pair (i, j) in (1), we offer α_tji = θ_ji − p_j and β_tij = θ_ij + p_i. For all other traders t and seller-buyer pairs (i, j) we have the invariant (1), and using this we know we can pick a value γ in the range θ_ij + p_i + q_ti ≥ γ ≥ θ_ji − (p_j + q_tj). We offer bid and ask values β_tij = α_tji = γ. Neither the bid nor the ask will be the unique best offer for the buyer or seller, and hence the trade x remains an equilibrium.

3.3 Trader Profits

Finally we turn to the goal of understanding, in the case of general traders, how a trader's profit is affected by its position in the network. First, we show how to maximize the total profit of a set of traders. The profit of trader t in an equilibrium is Σ_i q_ti. To find the maximum possible profit for a trader t or a set of traders T', we need to do the following: find profits p_i ≥ 0 and q_ti ≥ 0 so that z_i = p_i + Σ_{t∈adj(i)} q_ti is an optimal dual solution, and also satisfies the constraints (1) for any seller i and buyer j connected through a trader t ∈ T.
Now, subject to all these conditions, we maximize the sum Σ_{t∈T'} Σ_{i∈S(t)∪B(t)} q_ti. Note that this maximization is a secondary objective, subordinate to the primary objective that z be an optimal dual solution. The proof of Theorem 3.3 then shows how to turn this into an equilibrium.

THEOREM 3.4. The maximum value of Σ_{t∈T'} Σ_i q_ti above is the maximum profit the set T' of traders can make.

Proof. By the proof of Theorem 3.2, the profits of each trader t can be written in this form, so the set of traders T' cannot make more profit than claimed in this theorem. To see that T' can indeed make this much profit, we use the proof of Theorem 3.3. We modify that proof to start with the profit vectors p and q_t for t ∈ T', and set q_t = 0 for all traders t ∉ T'. We verify that this starting solution satisfies the first three of the four required properties, and then we can follow the proof to make the fourth property true. We omit the details in the present version.

In Section 2.3 we showed that in the case of pair-traders, a trader t can make money if he is essential for efficient trade. This is not true for the type of more general traders we consider here, as shown by the example in Figure 3. However, we still get a characterization of when a trader t can make a positive profit.

[Figure 3: The top trader is essential for social welfare. Yet the only equilibrium is to have bid and ask values equal to 0, and the trader makes no profit.]

THEOREM 3.5. A trader t can make profit in an equilibrium if and only if there is a seller or buyer i adjacent to t such that the connection of trader t to agent i is essential for social welfare, that is, if deleting t from adj(i) decreases the value of the optimal allocation.

Proof. First we show that if a trader t can make money, there must be an agent i such that t's connection to i is essential to social welfare. Let p, q be the profits in an equilibrium where t makes
money, as described by Theorem 3.2, with Σ_{i∈S(t)∪B(t)} q_ti > 0. So we have some agent i with q_ti > 0. We claim that the connection between agent i and trader t must be essential; in particular, we claim that social welfare must decrease by at least q_ti if we delete t from adj(i). To see why, note that decreasing the value of all edges of the form (i, j) associated with trader t by q_ti keeps the same trade optimal, as we get a matching dual solution by simply resetting q_ti to zero.

For the opposite direction, assume that deleting t from adj(i) decreases social welfare by some value γ. Assume i is a seller (the case of buyers is symmetric), and decrease by γ the social value of each edge (i, j) for any buyer j such that t is the only trader connecting i and j. By assumption the trade is still optimal, and we let z be the dual solution for this matching. Now we use the same process as in the proof of Theorem 3.3 to create a non-crossing equilibrium, starting with p_i = z_i for all i ∈ S ∪ B, q_ti = γ, and all other q values 0. This creates an equilibrium with non-crossing bids where t makes at least γ profit (due to trade with seller i).

Finally, if we allow crossing equilibria, then we can find the minimum possible profit by simply finding a dual solution minimizing the dual variables associated with agents monopolized by some trader.

THEOREM 3.6. For any trader t or subset of traders T', the minimum total profit they can make in any equilibrium can be computed in polynomial time.

4. REFERENCES

[1] M. Babaioff, N. Nisan, E. Pavlov. Mechanisms for a Spatially Distributed Market. ACM EC Conference, 2005.
[2] C. Barrett, E. Mutambatsere. Agricultural markets in developing countries. The New Palgrave Dictionary of Economics, 2nd edition, forthcoming.
[3] K. Burdett, K. Judd. Equilibrium Price Dispersion. Econometrica, 51(4):955-969, July 1983.
[4] L.
Chu, Z.-J. Shen. Agent Competition Double Auction Mechanism. Management Science, 52/8, 2006.
[5] G. Demange, D. Gale, M. Sotomayor. Multi-item auctions. J. Political Econ., 94, 1986.
[6] E. Even-Dar, M. Kearns, S. Suri. A Network Formation Game for Bipartite Exchange Economies. ACM-SIAM Symp. on Discrete Algorithms (SODA), 2007.
[7] J. Kephart, J. Hanson, A. Greenwald. Dynamic Pricing by Software Agents. Computer Networks, 2000.
[8] S. Kakade, M. Kearns, L. Ortiz, R. Pemantle, S. Suri. Economic Properties of Social Networks. NIPS 2004.
[9] R. Kranton, D. Minehart. A Theory of Buyer-Seller Networks. American Economic Review, 91(3), June 2001.
[10] H. Leonard. Elicitation of Honest Preferences for the Assignment of Individuals to Positions. J. Political Econ., 1983.
[11] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45:167-256, 2003.
[12] M. O'Hara. Market Microstructure Theory. Blackwell Publishers, Cambridge, MA, 1995.
[13] L. Shapley, M. Shubik. The Assignment Game I: The Core. Intl. J.
Game Theory, 1/2, 111-130, 1972.
[14] J. Tirole. The Theory of Industrial Organization. The MIT Press, Cambridge, MA, 1988.

Trading Networks with Price-Setting Agents

ABSTRACT
In a wide range of markets, individual buyers and sellers often trade through intermediaries, who determine prices via strategic considerations. Typically, not all buyers and sellers have access to the same intermediaries, and they trade at correspondingly different prices that reflect their relative amounts of power in the market. We model this phenomenon using a game in which buyers, sellers, and traders engage in trade on a graph that represents the access each buyer and seller has to the traders. In this model, traders set prices strategically, and then buyers and sellers react to the prices they are offered. We show that the resulting game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. socially optimal) allocation of goods. We extend these results to a more general type of matching market, such as one finds in the matching of job applicants and employers. Finally, we consider how the profits obtained by the traders depend on the underlying graph--roughly, a trader can command a positive profit if and only if it has an "essential" connection in the network structure, thus providing a graph-theoretic basis for quantifying the amount of competition among traders. Our work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system, rather than studying prices set via competitive equilibrium or by a truthful mechanism.

1. INTRODUCTION
In a range of settings where markets mediate the interactions of buyers and sellers, one observes several recurring properties: individual buyers and sellers often trade through intermediaries, not all buyers and sellers have access to the same
intermediaries, and not all buyers and sellers trade at the same price. One example of this setting is the trade of agricultural goods in developing countries. Given inadequate transportation networks, and poor farmers' limited access to capital, many farmers have no alternative to trading with middlemen in inefficient local markets. A developing country may have many such partially overlapping markets existing alongside modern efficient markets [2].

Financial markets provide a different example of a setting with these general characteristics. In these markets much of the trade between buyers and sellers is intermediated by a variety of agents ranging from brokers to market makers to electronic trading systems. For many assets there is no one market; trade in a single asset may occur simultaneously on the floor of an exchange, on crossing networks, on electronic exchanges, and in markets in other countries. Some buyers and sellers have access to many or all of these trading venues; others have access to only one or a few of them. The price at which the asset trades may differ across these trading venues. In fact, there is no "price" as different traders pay or receive different prices. In many settings there is also a gap between the price a buyer pays for an asset, the ask price, and the price a seller receives for the asset, the bid price. One of the most striking examples of this phenomenon occurs in the market for foreign exchange, where there is an interbank market with restricted access and a retail market with much more open access. Spreads, defined as the difference between bid and ask prices, differ significantly across these markets, even though the same asset is being traded in the two markets.

In this paper, we develop a framework in which such phenomena emerge from a game-theoretic model of trade, with buyers, sellers, and traders interacting on a network. The edges of the network connect traders to buyers and sellers, and thus
represent the access that different market participants have to one another. The traders serve as intermediaries in a two-stage trading game: they strategically choose bid and ask prices to offer to the sellers and buyers they are connected to; the sellers and buyers then react to the prices they face. Thus, the network encodes the relative power in the structural positions of the market participants, including the implicit levels of competition among traders. We show that this game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. socially optimal) allocation of goods. We also analyze how trader profits depend on the network structure, essentially characterizing in graph-theoretic terms how a trader's payoff is determined by the amount of competition it experiences with other traders.

Our work here is connected to several lines of research in economics, finance, and algorithmic game theory, and we discuss these connections in more detail later in the introduction. At a general level, our approach can be viewed as synthesizing two important strands of work: one that treats buyer-seller interaction using network structures, but without attempting to model the processes by which prices are actually formed [1, 4, 5, 6, 8, 9, 10, 13]; and another strand in the literature on market microstructure that incorporates price-setting intermediaries, but without network-type constraints on who can trade with whom [12]. By developing a network model that explicitly includes traders as price-setting agents, in a system together with buyers and sellers, we are able to capture price formation in a network setting as a strategic process carried out by intermediaries, rather than as the result of a centrally controlled or exogenous mechanism.

The Basic Model: Indistinguishable Goods.
Our goal in formulating the model is to express the process of price-setting in markets such as those discussed above, where the participants
do not all have uniform access to one another. We are given a set B of buyers, a set S of sellers, and a set T of traders. There is an undirected graph G that indicates who is able to trade with whom. All edges have one end in B ∪ S and the other in T; that is, each edge has the form (i, t) for i ∈ S and t ∈ T, or (j, t) for j ∈ B and t ∈ T. This reflects the constraint that all buyer-seller transactions go through traders as intermediaries.

In the most basic version of the model, we consider identical goods, one copy of which is initially held by each seller. Buyers and sellers each have a value for one copy of the good, and we assume that these values are common knowledge. We will subsequently generalize this to a setting in which goods are distinguishable, buyers can value different goods differently, and potentially sellers can value transactions with different buyers differently as well. Having different buyer valuations captures settings like house purchases; adding different seller valuations as well captures matching markets--for example, sellers as job applicants and buyers as employers, with both caring about who ends up with which "good" (and with traders acting as services that broker the job search).

Thus, to start with the basic model, there is a single type of good; the good comes in indivisible units; and each seller initially holds one unit of the good. All three types of agents value money at the same rate; and each i ∈ B ∪ S additionally values one copy of the good at θi units of money. No agent wants more than one copy of the good, so additional copies are valued at 0. Each agent has an initial endowment of money that is larger than any individual valuation θi; the effect of this is to guarantee that any buyer who ends up without a copy of the good has been priced out of the market due to its valuation and network position, not a lack of funds. We picture each good that is sold flowing along a sequence of two edges:
from a seller to a trader, and then from the trader to a buyer. The particular way in which goods flow is determined by the following game. First, each trader offers a bid price to each seller it is connected to, and an ask price to each buyer it is connected to. Sellers and buyers then choose from among the offers presented to them by traders. If multiple traders propose the same price to a seller or buyer, then there is no strict best response for the seller or buyer. In this case a selection must be made, and, as is standard (see for example [10]), we (the modelers) choose among the best offers. Finally, each trader buys a copy of the good from each seller that accepts its offer, and it sells a copy of the good to each buyer that accepts its offer. If a particular trader t finds that more buyers than sellers accept its offers, then it has committed to provide more copies of the good than it has received, and we will say that this results in a large penalty to the trader for defaulting; the effect of this is that in equilibrium, no trader will choose bid and ask prices that result in a default.

More precisely, a strategy for each trader t is a specification of a bid price βti for each seller i to which t is connected, and an ask price αtj for each buyer j to which t is connected. (We can also handle a model in which a trader may choose not to make an offer to certain of its adjacent sellers or buyers.) Each seller or buyer then chooses at most one incident edge, indicating the trader with whom they will transact, at the indicated price. (The choice of a single edge reflects the facts that (a) sellers each initially have only one copy of the good, and (b) buyers each only want one copy of the good.) The payoffs are as follows: For each seller i, the payoff from selecting trader t is βti, while the payoff from selecting no trader is θi. (In the former case, the seller receives βti units of money, while in the latter it keeps its copy of the
good, which it values at θi.) For each buyer j, the payoff from selecting trader t is θj − αtj, while the payoff from selecting no trader is 0. (In the former case, the buyer receives the good but gives up αtj units of money.) For each trader t, with accepted offers from sellers i1, ..., is and buyers j1, ..., jb, the payoff is ∑r αtjr − ∑r βtir, minus a penalty π if b > s. The penalty is chosen to be large enough that a trader will never incur it in equilibrium, and hence we will generally not be concerned with the penalty. This defines the basic elements of the game. The equilibrium concept we use is subgame perfect Nash equilibrium.

Some Examples.
To help with thinking about the model, we now describe three illustrative examples, depicted in Figure 1. To keep the figures from getting too cluttered, we adopt the following conventions: sellers are drawn as circles in the leftmost column and will be named i1, i2, ... from top to bottom; traders are drawn as squares in the middle column and will be named t1, t2, ... from top to bottom; and buyers are drawn as circles in the rightmost column and will be named j1, j2, ... from top to bottom. All sellers in the examples will have valuations for the good equal to 0; the valuation of each buyer is drawn inside its circle; and the bid or ask price on each edge is drawn on top of the edge.

In Figure 1(a), we show how a standard second-price auction arises naturally from our model. Suppose the buyer valuations from top to bottom are w > x > y > z. The bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1, and no other buyer accepts the offer of its adjacent trader: thus, trader t1 receives the good with a bid price of x, and makes w − x by selling the good to buyer j1 for w. In this way, we can consider this particular instance as an auction for a single good in which the traders act as "proxies" for their adjacent buyers. The buyer with
the highest valuation for the good ends up with it, and the surplus is divided between the seller and the associated trader. Note that one can construct a k-unit auction with f > k buyers just as easily, by building a complete bipartite graph on k sellers and f traders, and then attaching each trader to a single distinct buyer.

In Figure 1(b), we show how nodes with different positions in the network topology can achieve different payoffs, even when all buyer valuations are the same numerically.

Figure 1: (a) An auction, mediated by traders, in which the buyer with the highest valuation for the good ends up with it. (b) A network in which the middle seller and buyer benefit from perfect competition between the traders, while the other sellers and buyers have no power due to their position in the network. (c) A form of implicit perfect competition: all bid/ask spreads will be zero in equilibrium, even though no trader directly "competes" with any other trader for the same buyer-seller pair.

Specifically, seller i2 and buyer j2 occupy powerful positions, because the two traders are competing for their business; on the other hand, the other sellers and buyers are in weak positions, because they each have only one option. And indeed, in every equilibrium, there is a real number x ∈ [0, 1] such that both traders offer bid and ask prices of x to i2 and j2 respectively, while they offer bids of 0 and asks of 1 to the other sellers and buyers. Thus, this example illustrates a few crucial ingredients that we will identify at a more general level shortly. Specifically, i2 and j2 experience the benefits of perfect competition, in that the two traders drive the bid-ask spreads to 0 in competing for their business. On the other hand, the other sellers and buyers experience the downsides of monopoly--they receive 0 payoff since they have only a single option for trade, and the corresponding trader makes all the profit. Note further how this natural behavior
emerges from the fact that traders are able to offer different prices to different agents--capturing the fact that there is no one fixed "price" in the kinds of markets that motivate the model, but rather different prices reflecting the relative power of the different agents involved.

The previous example shows perhaps the most natural way in which a trader's profit on a particular transaction can drop to 0: when there is another trader who can replicate its function precisely. (In that example, two traders each had the ability to move a copy of the good from i2 to j2.) But as our subsequent results will show, traders make zero profit more generally due to global, graph-theoretic reasons. The example in Figure 1(c) gives an initial indication of this: one can show that for every equilibrium, there is a y ∈ [0, 1] such that every bid and every ask price is equal to y. In other words, all traders make zero profit, whether or not a copy of the good passes through them--and yet, no two traders have any seller-buyer paths in common. The price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents; this is an example of implicit perfect competition determined by the network topology.

Extending the Model to Distinguishable Goods.
We extend the basic model to a setting with distinguishable goods, as follows. Instead of having each agent i ∈ B ∪ S have a single numerical valuation θi, we index valuations by pairs of buyers and sellers: if buyer j obtains the good initially held by seller i, it gets a utility of θji, and if seller i sells its good to buyer j, it experiences a loss of utility of θij. This generalizes the case of indistinguishable goods, since we can always have these pairwise valuations depend only on one of the indices. A strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer, and offering an ask to each buyer that specifies
both a price and a seller. (We can also handle a model in which a trader offers bids (respectively, asks) in the form of vectors, essentially specifying a "menu" with a price attached to each buyer (resp. seller).) Each buyer and seller selects an offer from an adjacent trader, and the payoffs to all agents are determined as before. This general framework captures matching markets [10, 13]: for example, a job market that is mediated by agents or employment search services (as in hiring for corporate executives, or sports or entertainment figures). Here the sellers are job applicants, buyers are employers, and traders are the agents that mediate the job market. Of course, if one specifies pairwise valuations on buyers but just single valuations for sellers, we model a setting where buyers can distinguish among the goods, but sellers don't care whom they sell to--this (roughly) captures settings like housing markets.

Our Results.
Our results will identify general forms of some of the principles noted in the examples discussed above--including the question of which buyers end up with the good; the question of how payoffs are differently realized by sellers, traders, and buyers; and the question of what structural properties of the network determine whether the traders will make positive profits. To make these precise, we introduce the following notation. Any outcome of the game determines a final allocation of goods to some of the agents; this can be specified by a collection M of triples (ie, te, je), where ie ∈ S, te ∈ T, and je ∈ B; moreover, each seller and each buyer appears in at most one triple. The meaning is that for each e ∈ M, the good initially held by ie moves to je through te. (Sellers appearing in no triple keep their copy of the good.) We say that the value of the allocation is equal to ∑e∈M (θjeie − θieje). Let θ∗ denote the maximum value of any allocation M that is feasible given the network. We show that every
instance of our game has an equilibrium, and that in every such equilibrium, the allocation has value θ∗; in other words, it achieves the best value possible. Thus, equilibria in this model are always efficient, in that the market enables the "right" set of people to get the good, subject to the network constraints. We establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network; the dual of this linear program contains enough information to extract equilibrium prices.

By the definition of the game, the value of the equilibrium allocation is divided up as payoffs to the agents, and it is interesting to ask how this value is distributed--in particular how much profit a trader is able to make based on its position in the network. We find that, although all equilibria have the same value, a given trader's payoff can vary across different equilibria. However, we are able to characterize the maximum and minimum amounts that a given trader is able to make, where these maxima and minima are taken over all equilibria, and we give an efficient algorithm to compute this. In particular, our results here imply a clean combinatorial characterization of when a given trader t can achieve non-zero payoff: this occurs if and only if there is some edge e incident to t that is essential, in the sense that deleting e reduces the value of the optimal allocation θ∗. We also obtain results for the sum of all trader profits.

Related Work.
The standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model in which anonymous buyers and sellers trade a good at a single market clearing price. This reduced form of trade, built on the idealization of a market price, is a powerful model which has led to many insights. But it is not a good model to use to examine where prices come from or exactly how buyers and sellers trade with each other. The
difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other. In fact there is no market, in the everyday sense of that word, in the Walrasian model. That is, there is no physical or virtual place where buyers and sellers interact to trade and set prices. Thus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries.

There are several literatures in economics and finance which examine how prices are set rather than just determining equilibrium prices. The literature on imperfect competition is perhaps the oldest of these. Here a monopolist, or a group of oligopolists, choose prices in order to maximize their profits (see [14] for the standard textbook treatment of these markets). A monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates. Oligopolists play a game in which their payoffs depend on market demand and the actions of their competitors. In this literature there are agents who set prices, but the fiction of a single market is maintained. In the equilibrium search literature, firms set prices and consumers search over them (see [3]). Consumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries.

In the general equilibrium literature there have been various attempts to introduce price determination. A standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand. The Walrasian auctioneer is often introduced as a device to explain how this process works, but this is fundamentally a metaphor for an iterative price-updating algorithm, not for the internals of an actual market. More sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute
them. But again there are no price-setting agents here.

In the finance literature the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory). Work in information economics has identified similar phenomena (see e.g. [7]). But there is little research in these literatures examining the effect of restrictions on who can trade with whom.

There have been several approaches to studying how network structure determines prices. These have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms. In briefly reviewing this work, we will note the contrast with our approach, in that we model prices as arising from the strategic behavior of agents in the system. In recent work, Kakade et al. [8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11]. Even-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium. Leonard [10], Babaioff et al. [1], and Chu and Shen [4] consider an approach based on mechanism design: buyers and sellers reside at different nodes in a graph, and they incur a given transportation cost to trade with one another. Leonard studies VCG prices in this setting; Babaioff et al.
and Chu and Shen additionally provide a budget-balanced mechanism. Since the concern here is with truthful mechanisms that operate on private valuations, there is an inherent trade-off between the efficiency of the allocation and the budget-balance condition. In contrast, our model has known valuations and prices arising from the strategic behavior of traders. Thus, the assumptions behind our model are in a sense not directly comparable to those underlying the mechanism design approach: while we assume known valuations, we do not require a centralized authority to impose a mechanism. Rather, price-setting is part of the strategic outcome, as in the real markets that motivate our work, and our equilibria are simultaneously budget-balanced and efficient--something not possible in the mechanism design frameworks that have been used.

Demange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design. Kranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices. Their auction has desirable equilibrium properties, but, as Kranton and Minehart note, it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction. In fact, we can show how the basic model of Kranton and Minehart can be encoded as an instance of our game, with traders producing prices at equilibrium matching the prices produced by their auction mechanism.1 Finally, the classic results of Shapley and Shubik [13] on the assignment game can be viewed as studying the result of trade on a bipartite graph in terms of the core. They study the dual of a linear program based on the matching problem, similar to what we use for a reduced version of our model in the next section, but their focus is
different as they do not consider agents that seek to set prices.

2. MARKETS WITH PAIR-TRADERS
2.1 Indistinguishable Goods
THEOREM 2.2. In any equilibrium the trade is efficient.
2.2 Distinguishable Goods
2.3 Trader Profits

3. GENERAL TRADERS
3.1 Bids and Asks and Trader Optimization
3.2 Efficient Trade and Equilibrium
3.3 Trader Profits
as an auction for a single good in which the traders act as \"proxies\" for their adjacent buyers.\nThe buyer with the highest valuation for the good ends up with it, and the surplus is divided between the seller and the associated trader.\nNote that one can construct a k-unit auction with f> k buyers just as easily, by building a complete bipartite graph on k sellers and f traders, and then attaching each trader to a single distinct buyer.\nIn Figure 1 (b), we show how nodes with different positions in the network topology can achieve different payoffs, even when all\nFigure 1: (a) An auction, mediated by traders, in which the buyer with the highest valuation for the good ends up with it.\n(b)\nA network in which the middle seller and buyer benefit from perfect competition between the traders, while the other sellers and buyers have no power due to their position in the network.\n(c) A form of implicit perfect competition: all bid\/ask spreads will be zero in equilibrium, even though no trader directly \"competes\" with any other trader for the same buyer-seller pair.\nbuyer valuations are the same numerically.\nSpecifically, seller i2 and buyer j2 occupy powerful positions, because the two traders are competing for their business; on the other hand, the other sellers and buyers are in weak positions, because they each have only one option.\nAnd indeed, in every equilibrium, there is a real number x E [0, 1] such that both traders offer bid and ask prices of x to i2 and j2 respectively, while they offer bids of 0 and asks of 1 to the other sellers and buyers.\nThus, this example illustrates a few crucial ingredients that we will identify at a more general level shortly.\nSpecifically, i2 and j2 experience the benefits of perfect competition, in that the two traders drive the bid-ask spreads to 0 in competing for their business.\nOn the other hand, the other sellers and buyers experience the downsides of monopoly--they receive 0 payoff since they have only a single 
option for trade, and the corresponding trader makes all the profit.\nNote further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents--capturing the fact that there is no one fixed \"price\" in the kinds of markets that motivate the model, but rather different prices reflecting the relative power of the different agents involved.\nThe previous example shows perhaps the most natural way in which a trader's profit on a particular transaction can drop to 0: when there is another trader who can replicate its function precisely.\n(In that example, two traders each had the ability to move a copy of the good from i2 to j2.)\nBut as our subsequent results will show, traders make zero profit more generally due to global, graph-theoretic reasons.\nThe example in Figure 1 (c) gives an initial indication of this: one can show that for every equilibrium, there is a y E [0, 1] such that every bid and every ask price is equal to y.\nIn other words, all traders make zero profit, whether or not a copy of the good passes through them--and yet, no two traders have any seller-buyer paths in common.\nThe price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents; this is an example of implicit perfect competition determined by the network topology.\nExtending the Model to Distinguishable Goods.\nWe extend the basic model to a setting with distinguishable goods, as follows.\nA strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer, and offering an ask to each buyer that specifies both a price and a seller.\n(We can also handle a model in which a trader offers bids (respectively, asks) in the form of vectors, essentially specifying a \"menu\" with a price attached to each buyer (resp.\nseller).)\nEach buyer and seller selects an offer from an adjacent trader, and the payoffs to all agents are determined as before.\nHere 
the sellers are job applicants, buyers are employers, and traders are the agents that mediate the job market.\nOf course, if one specifies pairwise valuations on buyers but just single valuations for sellers, we model a setting where buyers can distinguish among the goods, but sellers don't care whom they sell to--this (roughly) captures settings like housing markets.\nOur Results.\nTo make these precise, we introduce the following notation.\n(Sellers appearing in no triple keep their copy of the good.)\nWe say that the value of the allocation is equal to Pe \u2208 M \u03b8jeie--\u03b8ieje.\nLet \u03b8 \u2217 denote the maximum value of any allocation M that is feasible given the network.\nWe show that every instance of our game has an equilibrium, and that in every such equilibrium, the allocation has value \u03b8 \u2217 --\nin other words, it achieves the best value possible.\nThus, equilibria in this model are always efficient, in that the market enables the \"right\" set of people to get the good, subject to the network constraints.\nWe establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network; the dual of this linear program contains enough information to extract equilibrium prices.\nBy the definition of the game, the value of the equilibrium allocation is divided up as payoffs to the agents, and it is interesting to ask how this value is distributed--in particular how much profit a trader is able to make based on its position in the network.\nWe find that, although all equilibria have the same value, a given trader's payoff can vary across different equilibria.\nWe also obtain results for the sum of all trader profits.\nRelated Work.\nThe standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model in which anonymous buyers and sellers trade a good at a single market clearing price.\nThis reduced form of trade, built on the idealization of a 
market price, is a powerful model which has led to many insights.\nBut it is not a good model to use to examine where prices come from or exactly how buyers and sellers and trade with each other.\nThe difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other.\nIn fact there is no market, in the everyday sense of that word, in the Walrasian model.\nThat is, there is no physical or virtual place where buyers and sellers interact to trade and set prices.\nThus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries.\nThere are several literatures in economics and finance which examine how prices are set rather than just determining equilibrium prices.\nThe literature on imperfect competition is perhaps the oldest of these.\nHere a monopolist, or a group of oliogopolists, choose prices in order to maximize their profits (see [14] for the standard textbook treatment of these markets).\nA monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates.\nOliogopolists play a game in which their payoffs depend on market demand and the actions of their competitors.\nIn this literature there are agents who set prices, but the fiction of a single market is maintained.\nIn the equilibrium search literature, firms set prices and consumers search over them (see [3]).\nConsumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries.\nIn the general equilibrium literature there have been various attempts to introduce price determination.\nA standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand.\nMore sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them.\nBut 
again there are no price-setting agents here.\nIn the finance literature the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory).\nWork in information economics has identified similar phenomena (see e.g. [7]).\nBut there is little research in these literatures examining the effect of restrictions on who can trade with whom.\nThere have been several approaches to studying how network structure determines prices.\nThese have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms.\nIn briefly reviewing this work, we will note the contrast with our approach, in that we model prices as arising from the strategic behavior of agents in the system.\nIn recent work, Kakade et al. [8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11].\nEven-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium.\nLeonard studies VCG prices in this setting; Babaioff et al. 
and Chu and Shen additionally provide a a budget-balanced mechanism.\nIn contrast, our model has known valuations and prices arising from the strategic behavior of traders.\nDemange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design.\nKranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices.\nTheir auction has desirable equilibrium properties but as Kranton and Minehart note it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction.","lvl-2":"Trading Networks with Price-Setting Agents\nABSTRACT\nIn a wide range of markets, individual buyers and sellers often trade through intermediaries, who determine prices via strategic considerations.\nTypically, not all buyers and sellers have access to the same intermediaries, and they trade at correspondingly different prices that reflect their relative amounts of power in the market.\nWe model this phenomenon using a game in which buyers, sellers, and traders engage in trade on a graph that represents the access each buyer and seller has to the traders.\nIn this model, traders set prices strategically, and then buyers and sellers react to the prices they are offered.\nWe show that the resulting game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. 
socially optimal) allocation of goods. We extend these results to a more general type of matching market, such as one finds in the matching of job applicants and employers. Finally, we consider how the profits obtained by the traders depend on the underlying graph: roughly, a trader can command a positive profit if and only if it has an "essential" connection in the network structure, thus providing a graph-theoretic basis for quantifying the amount of competition among traders. Our work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system, rather than studying prices set via competitive equilibrium or by a truthful mechanism.

1. INTRODUCTION

In a range of settings where markets mediate the interactions of buyers and sellers, one observes several recurring properties: individual buyers and sellers often trade through intermediaries, not all buyers and sellers have access to the same intermediaries, and not all buyers and sellers trade at the same price. One example of this setting is the trade of agricultural goods in developing countries. Given inadequate transportation networks, and poor farmers' limited access to capital, many farmers have no alternative to trading with middlemen in inefficient local markets. A developing country may have many such partially overlapping markets existing alongside modern efficient markets [2]. Financial markets provide a different example of a setting with these general characteristics. In these markets much of the trade between buyers and sellers is intermediated by a variety of agents ranging from brokers to market makers to electronic trading systems. For many assets there is no one market; trade in a single asset may occur simultaneously on the floor of an exchange, on crossing networks, on electronic exchanges, and in markets in other countries. Some buyers and sellers have access
to many or all of these trading venues; others have access to only one or a few of them. The price at which the asset trades may differ across these trading venues. In fact, there is no "price", as different traders pay or receive different prices. In many settings there is also a gap between the price a buyer pays for an asset, the ask price, and the price a seller receives for the asset, the bid price. One of the most striking examples of this phenomenon occurs in the market for foreign exchange, where there is an interbank market with restricted access and a retail market with much more open access. Spreads, defined as the difference between bid and ask prices, differ significantly across these markets, even though the same asset is being traded in the two markets.

In this paper, we develop a framework in which such phenomena emerge from a game-theoretic model of trade, with buyers, sellers, and traders interacting on a network. The edges of the network connect traders to buyers and sellers, and thus represent the access that different market participants have to one another. The traders serve as intermediaries in a two-stage trading game: they strategically choose bid and ask prices to offer to the sellers and buyers they are connected to; the sellers and buyers then react to the prices they face. Thus, the network encodes the relative power in the structural positions of the market participants, including the implicit levels of competition among traders. We show that this game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e.
socially optimal) allocation of goods. We also analyze how trader profits depend on the network structure, essentially characterizing in graph-theoretic terms how a trader's payoff is determined by the amount of competition it experiences with other traders. Our work here is connected to several lines of research in economics, finance, and algorithmic game theory, and we discuss these connections in more detail later in the introduction. At a general level, our approach can be viewed as synthesizing two important strands of work: one that treats buyer-seller interaction using network structures, but without attempting to model the processes by which prices are actually formed [1, 4, 5, 6, 8, 9, 10, 13]; and another strand in the literature on market microstructure that incorporates price-setting intermediaries, but without network-type constraints on who can trade with whom [12]. By developing a network model that explicitly includes traders as price-setting agents, in a system together with buyers and sellers, we are able to capture price formation in a network setting as a strategic process carried out by intermediaries, rather than as the result of a centrally controlled or exogenous mechanism.

The Basic Model: Indistinguishable Goods. Our goal in formulating the model is to express the process of price-setting in markets such as those discussed above, where the participants do not all have uniform access to one another. We are given a set B of buyers, a set S of sellers, and a set T of traders. There is an undirected graph G that indicates who is able to trade with whom. All edges have one end in B ∪ S and the other in T; that is, each edge has the form (i, t) for i ∈ S and t ∈ T, or (j, t) for j ∈ B and t ∈ T. This reflects the constraint that all buyer-seller transactions go through traders as intermediaries. In the most basic version of the model, we consider identical goods, one copy of which is initially held by each seller. Buyers and
sellers each have a value for one copy of the good, and we assume that these values are common knowledge. We will subsequently generalize this to a setting in which goods are distinguishable, buyers can value different goods differently, and potentially sellers can value transactions with different buyers differently as well. Having different buyer valuations captures settings like house purchases; adding different seller valuations as well captures matching markets: for example, sellers as job applicants and buyers as employers, with both caring about who ends up with which "good" (and with traders acting as services that broker the job search). Thus, to start with the basic model, there is a single type of good; the good comes in indivisible units; and each seller initially holds one unit of the good. All three types of agents value money at the same rate, and each i ∈ B ∪ S additionally values one copy of the good at θ_i units of money. No agent wants more than one copy of the good, so additional copies are valued at 0. Each agent has an initial endowment of money that is larger than any individual valuation θ_i; the effect of this is to guarantee that any buyer who ends up without a copy of the good has been priced out of the market due to its valuation and network position, not a lack of funds.

We picture each good that is sold flowing along a sequence of two edges: from a seller to a trader, and then from the trader to a buyer. The particular way in which goods flow is determined by the following game. First, each trader offers a bid price to each seller it is connected to, and an ask price to each buyer it is connected to. Sellers and buyers then choose from among the offers presented to them by traders. If multiple traders propose the same price to a seller or buyer, then there is no strict best response for the seller or buyer. In this case a selection must be made, and, as is standard (see for example [10]), we (the modelers)
choose among the best offers. Finally, each trader buys a copy of the good from each seller that accepts its offer, and it sells a copy of the good to each buyer that accepts its offer. If a particular trader t finds that more buyers than sellers accept its offers, then it has committed to provide more copies of the good than it has received, and we will say that this results in a large penalty to the trader for defaulting; the effect of this is that in equilibrium, no trader will choose bid and ask prices that result in a default.

More precisely, a strategy for each trader t is a specification of a bid price β_ti for each seller i to which t is connected, and an ask price α_tj for each buyer j to which t is connected. (We can also handle a model in which a trader may choose not to make an offer to certain of its adjacent sellers or buyers.) Each seller or buyer then chooses at most one incident edge, indicating the trader with whom they will transact, at the indicated price. (The choice of a single edge reflects the facts that (a) sellers each initially have only one copy of the good, and (b) buyers each only want one copy of the good.) The payoffs are as follows: For each seller i, the payoff from selecting trader t is β_ti, while the payoff from selecting no trader is θ_i. (In the former case, the seller receives β_ti units of money, while in the latter it keeps its copy of the good, which it values at θ_i.) For each buyer j, the payoff from selecting trader t is θ_j − α_tj, while the payoff from selecting no trader is 0. (In the former case, the buyer receives the good but gives up α_tj units of money.) For each trader t, with accepted offers from sellers i_1, ..., i_s and buyers j_1, ..., j_b, the payoff is Σ_r α_tj_r − Σ_r β_ti_r, minus a penalty π if b > s. The penalty is chosen to be large enough that a trader will never incur it in equilibrium, and hence we will generally not be concerned with the penalty. This defines the basic
elements of the game. The equilibrium concept we use is subgame perfect Nash equilibrium.

Some Examples. To help with thinking about the model, we now describe three illustrative examples, depicted in Figure 1. To keep the figures from getting too cluttered, we adopt the following conventions: sellers are drawn as circles in the leftmost column and will be named i1, i2, ... from top to bottom; traders are drawn as squares in the middle column and will be named t1, t2, ... from top to bottom; and buyers are drawn as circles in the rightmost column and will be named j1, j2, ... from top to bottom. All sellers in the examples will have valuations for the good equal to 0; the valuation of each buyer is drawn inside its circle; and the bid or ask price on each edge is drawn on top of the edge.

In Figure 1(a), we show how a standard second-price auction arises naturally from our model. Suppose the buyer valuations from top to bottom are w > x > y > z. The bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1, and no other buyer accepts the offer of its adjacent trader: thus, trader t1 receives the good with a bid price of x, and makes w − x by selling the good to buyer j1 for w. In this way, we can consider this particular instance as an auction for a single good in which the traders act as "proxies" for their adjacent buyers. The buyer with the highest valuation for the good ends up with it, and the surplus is divided between the seller and the associated trader. Note that one can construct a k-unit auction with ℓ > k buyers just as easily, by building a complete bipartite graph on k sellers and ℓ traders, and then attaching each trader to a single distinct buyer.

Figure 1: (a) An auction, mediated by traders, in which the buyer with the highest valuation for the good ends up with it. (b) A network in which the middle seller and buyer benefit from perfect competition between the traders, while the other sellers and buyers have no power due to their position in the network. (c) A form of implicit perfect competition: all bid/ask spreads will be zero in equilibrium, even though no trader directly "competes" with any other trader for the same buyer-seller pair.

In Figure 1(b), we show how nodes with different positions in the network topology can achieve different payoffs, even when all buyer valuations are the same numerically. Specifically, seller i2 and buyer j2 occupy powerful positions, because the two traders are competing for their business; on the other hand, the other sellers and buyers are in weak positions, because they each have only one option. And indeed, in every equilibrium, there is a real number x ∈ [0, 1] such that both traders offer bid and ask prices of x to i2 and j2 respectively, while they offer bids of 0 and asks of 1 to the other sellers and buyers. Thus, this example illustrates a few crucial ingredients that we will identify at a more general level shortly. Specifically, i2 and j2 experience the benefits of perfect competition, in that the two traders drive the bid-ask spreads to 0 in competing for their business. On the other hand, the other sellers and buyers experience the downsides of monopoly: they receive 0 payoff since they have only a single option for trade, and the corresponding trader makes all the profit. Note further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents, capturing the fact that there is no one fixed "price" in the kinds of markets that motivate the model, but rather different prices reflecting the relative power of the different agents involved.

The previous example shows perhaps the most natural way in which a trader's profit on a particular transaction can drop to 0: when there is another trader who can replicate its function precisely. (In that example, two traders each had the ability to move a copy of the good from i2 to j2.) But
as our subsequent results will show, traders make zero profit more generally due to global, graph-theoretic reasons. The example in Figure 1(c) gives an initial indication of this: one can show that for every equilibrium, there is a y ∈ [0, 1] such that every bid and every ask price is equal to y. In other words, all traders make zero profit, whether or not a copy of the good passes through them; and yet, no two traders have any seller-buyer paths in common. The price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents; this is an example of implicit perfect competition determined by the network topology.

Extending the Model to Distinguishable Goods. We extend the basic model to a setting with distinguishable goods, as follows. Instead of having each agent i ∈ B ∪ S have a single numerical valuation θ_i, we index valuations by pairs of buyers and sellers: if buyer j obtains the good initially held by seller i, it gets a utility of θ_ji, and if seller i sells its good to buyer j, it experiences a loss of utility of θ_ij. This generalizes the case of indistinguishable goods, since we can always have these pairwise valuations depend only on one of the indices. A strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer, and offering an ask to each buyer that specifies both a price and a seller. (We can also handle a model in which a trader offers bids (respectively, asks) in the form of vectors, essentially specifying a "menu" with a price attached to each buyer (resp. seller).) Each buyer and seller selects an offer from an adjacent trader, and the payoffs to all agents are determined as before. This general framework captures matching markets [10, 13]: for example, a job market that is mediated by agents or employment search services (as in hiring for corporate executives, or sports or entertainment figures). Here the sellers are job
applicants, buyers are employers, and traders are the agents that mediate the job market. Of course, if one specifies pairwise valuations on buyers but just single valuations for sellers, we model a setting where buyers can distinguish among the goods, but sellers don't care whom they sell to; this (roughly) captures settings like housing markets.

Our Results. Our results will identify general forms of some of the principles noted in the examples discussed above, including the question of which buyers end up with the good; the question of how payoffs are differently realized by sellers, traders, and buyers; and the question of what structural properties of the network determine whether the traders will make positive profits. To make these precise, we introduce the following notation. Any outcome of the game determines a final allocation of goods to some of the agents; this can be specified by a collection M of triples (i_e, t_e, j_e), where i_e ∈ S, t_e ∈ T, and j_e ∈ B; moreover, each seller and each buyer appears in at most one triple. The meaning is that for each e ∈ M, the good initially held by i_e moves to j_e through t_e. (Sellers appearing in no triple keep their copy of the good.) We say that the value of the allocation is equal to Σ_{e ∈ M} (θ_{j_e,i_e} − θ_{i_e,j_e}). Let θ* denote the maximum value of any allocation M that is feasible given the network.

We show that every instance of our game has an equilibrium, and that in every such equilibrium, the allocation has value θ*; in other words, it achieves the best value possible. Thus, equilibria in this model are always efficient, in that the market enables the "right" set of people to get the good, subject to the network constraints. We establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network; the dual of this linear program contains enough information to extract equilibrium prices. By the definition of
the game, the value of the equilibrium allocation is divided up as payoffs to the agents, and it is interesting to ask how this value is distributed--in particular, how much profit a trader is able to make based on its position in the network. We find that, although all equilibria have the same value, a given trader's payoff can vary across different equilibria. However, we are able to characterize the maximum and minimum amounts that a given trader is able to make, where these maxima and minima are taken over all equilibria, and we give an efficient algorithm to compute them. In particular, our results here imply a clean combinatorial characterization of when a given trader t can achieve non-zero payoff: this occurs if and only if there is some edge e incident to t that is essential, in the sense that deleting e reduces the value of the optimal allocation θ*. We also obtain results for the sum of all trader profits.

Related Work. The standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model, in which anonymous buyers and sellers trade a good at a single market-clearing price. This reduced form of trade, built on the idealization of a market price, is a powerful model which has led to many insights. But it is not a good model to use to examine where prices come from or exactly how buyers and sellers trade with each other. The difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other. In fact there is no market, in the everyday sense of that word, in the Walrasian model. That is, there is no physical or virtual place where buyers and sellers interact to trade and set prices. Thus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries. There are several literatures in economics and finance which examine how prices are set rather than just determining
equilibrium prices. The literature on imperfect competition is perhaps the oldest of these. Here a monopolist, or a group of oligopolists, chooses prices in order to maximize their profits (see [14] for the standard textbook treatment of these markets). A monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates. Oligopolists play a game in which their payoffs depend on market demand and the actions of their competitors. In this literature there are agents who set prices, but the fiction of a single market is maintained. In the equilibrium search literature, firms set prices and consumers search over them (see [3]). Consumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries. In the general equilibrium literature there have been various attempts to introduce price determination. A standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand. The Walrasian auctioneer is often introduced as a device to explain how this process works, but this is fundamentally a metaphor for an iterative price-updating algorithm, not for the internals of an actual market. More sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them. But again there are no price-setting agents here. In the finance literature, the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory). Work in information economics has identified similar phenomena (see e.g.
[7]). But there is little research in these literatures examining the effect of restrictions on who can trade with whom. There have been several approaches to studying how network structure determines prices. These have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms. In briefly reviewing this work, we will note the contrast with our approach, in which we model prices as arising from the strategic behavior of agents in the system. In recent work, Kakade et al. [8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11]. Even-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium. Leonard [10], Babaioff et al. [1], and Chu and Shen [4] consider an approach based on mechanism design: buyers and sellers reside at different nodes in a graph, and they incur a given transportation cost to trade with one another. Leonard studies VCG prices in this setting; Babaioff et al.
and Chu and Shen additionally provide a budget-balanced mechanism. Since the concern here is with truthful mechanisms that operate on private valuations, there is an inherent trade-off between the efficiency of the allocation and the budget-balance condition. In contrast, our model has known valuations and prices arising from the strategic behavior of traders. Thus, the assumptions behind our model are in a sense not directly comparable to those underlying the mechanism design approach: while we assume known valuations, we do not require a centralized authority to impose a mechanism. Rather, price-setting is part of the strategic outcome, as in the real markets that motivate our work, and our equilibria are simultaneously budget-balanced and efficient--something not possible in the mechanism design frameworks that have been used. Demange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design. Kranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices. Their auction has desirable equilibrium properties, but as Kranton and Minehart note, it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction. In fact, we can show how the basic model of Kranton and Minehart can be encoded as an instance of our game, with traders producing prices at equilibrium matching the prices produced by their auction mechanism.[1] Finally, the classic results of Shapley and Shubik [13] on the assignment game can be viewed as studying the result of trade on a bipartite graph in terms of the core. They study the dual of a linear program based on the matching problem, similar to what we use for a reduced version of our model in the next section, but their focus is
different as they do not consider agents that seek to set prices.

[1] Kranton and Minehart, however, can also analyze a more general setting in which buyers' values are private, and thus buyers and sellers play a game of incomplete information. We deal only with complete information.

2. MARKETS WITH PAIR-TRADERS

For understanding the ideas behind the analysis of the general model, it is very useful to first consider a special case with a restricted form of traders that we refer to as pair-traders. In this case, each trader is connected to just one buyer and one seller. (Thus, it essentially serves as a "trade route" between the two.) The techniques we develop to handle this case will form a useful basis for reasoning about the case of traders that may be connected arbitrarily to the sellers and buyers. We will relate profits in a subgame perfect Nash equilibrium to optimal solutions of a certain linear program, use this relation to show that all equilibria result in efficient allocation of the goods, and show that a pure equilibrium always exists. First, we consider the simplest model, where sellers have indistinguishable items and each buyer is interested in getting one item. Then we extend the results to the more general case of a matching market, as discussed in the previous section, where valuations depend on the identity of the seller and buyer. We then characterize the minimum and maximum profits traders can make. In the next section, we extend the results to traders that may be connected to any subset of sellers and buyers. Given that we are working with pair-traders in this section, we can represent the problem using a bipartite graph G whose node set is B ∪ S, and where each trader t, connecting seller i and buyer j, appears as an edge t = (i, j) in G.
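To make the pair-trader setup concrete, here is a small brute-force sketch (the instance, names, and values are illustrative, not from the paper) that represents traders as edges of the bipartite graph G and computes the value of the best feasible trade for indistinguishable goods, where trading along t = (i, j) creates value θ_j − θ_i:

```python
from itertools import combinations

# Illustrative pair-trader instance (indistinguishable goods):
# theta_s[i] = seller i's value for keeping the good,
# theta_b[j] = buyer j's value for obtaining one,
# traders    = list of (seller, buyer) edges of the bipartite graph G.
theta_s = {"s1": 0, "s2": 0}
theta_b = {"b1": 3, "b2": 2}
traders = [("s1", "b1"), ("s1", "b2"), ("s2", "b2")]

def optimal_trade_value(theta_s, theta_b, traders):
    """Value of the socially optimal trade: the max-weight set of trader
    edges in which each seller and each buyer appears at most once."""
    best = 0
    for k in range(1, len(traders) + 1):
        for chosen in combinations(traders, k):
            sellers = {i for i, _ in chosen}
            buyers = {j for _, j in chosen}
            if len(sellers) < k or len(buyers) < k:
                continue  # a seller or buyer repeats: not a matching
            best = max(best, sum(theta_b[j] - theta_s[i] for i, j in chosen))
    return best

print(optimal_trade_value(theta_s, theta_b, traders))  # -> 5, via (s1,b1) and (s2,b2)
```

The same quantity is what the transportation linear program of Section 2.1 computes; brute force is used here only to keep the sketch self-contained.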
Note, however, that we allow multiple traders to connect the same pair of agents. For each buyer and seller i, we will use adj(i) to denote the set of traders who can trade with i.

2.1 Indistinguishable Goods

The socially optimal trade for the case of indistinguishable goods is the solution of the transportation problem: sending goods along the edges representing the traders. The edges along which trade occurs correspond to a matching in this bipartite graph, and the optimal trade is described by the following linear program:

maximize Σ_{t=(i,j)∈T} x_t (θ_j − θ_i)
subject to Σ_{t∈adj(i)} x_t ≤ 1 for each i ∈ B ∪ S, and x_t ≥ 0 for each t ∈ T.

Proof. Clearly all profits are nonnegative, as trading is optional for all agents. To see why the last set of inequalities holds, consider two cases separately. For a trader t who conducted trade, we get equality by definition. For other traders t = (i, j), the value p_i + θ_i is the price that seller i sold for (or θ_i if seller i decided to keep the good). Offering a bid β_t > p_i + θ_i would get the seller to sell to trader t. Similarly, θ_j − p_j is the price that buyer j bought for (or θ_j if he didn't buy), and for any ask α_t < θ_j − p_j, the buyer will buy from trader t. So unless θ_j − p_j ≤ θ_i + p_i, the trader has a profitable deviation.

Now we are ready to prove our first theorem:

THEOREM 2.2. In any equilibrium the trade is efficient.

Proof. Let x be the flow of goods resulting in an equilibrium, and let the variables p and y be the profits. Consider the linear program describing the socially optimal trade. We will also add the set of additional constraints x_t ≤ 1 for all traders t ∈ T; this can be added to the description, as it is implied by the other constraints. Now we claim that the two linear programs are duals of each other. The variables p_i for agents i ∈ B ∪ S correspond to the inequalities Σ_{t∈adj(i)} x_t ≤ 1. The additional dual variable y_t corresponds to the additional inequality x_t ≤ 1. The optimality of the social value of the trade will follow from the claim that the solutions of these
two linear programs derived from an equilibrium satisfy the complementary slackness conditions for this pair of linear programs, and hence both x and (p, y) are optimal solutions to the corresponding linear programs. There are three different complementary slackness conditions we need to consider, corresponding to the three sets of variables x, y, and p. Any agent can only make profit if he transacts, so p_i > 0 implies Σ_{t∈adj(i)} x_t = 1, and similarly, y_t > 0 implies that x_t = 1 also. Finally, consider a trader t with x_t > 0 that trades between seller i and buyer j, and recall that we have seen above that the inequality y_t ≥ (θ_j − p_j) − (θ_i + p_i) is satisfied with equality for those who trade.

Next we consider an equilibrium. Each trader t = (i, j) must offer a bid β_t and an ask α_t. (We omit the subscripts denoting the seller and buyer here, since we are dealing with pair-traders.) Given the bid and ask prices, the agents react to these prices, as described earlier. Instead of focusing on prices, we will focus on profits. If a seller i sells to a trader t ∈ adj(i) with bid β_t, then his profit is p_i = β_t − θ_i. Similarly, if a buyer j buys from a trader t ∈ adj(j) with ask α_t, then his profit is p_j = θ_j − α_t. Finally, if a trader t trades with ask α_t and bid β_t, then his profit is y_t = α_t − β_t. All agents not involved in trade make 0 profit. We will show that the profits at equilibrium are an optimal solution to the following linear program:

minimize Σ_{i∈B∪S} p_i + Σ_{t∈T} y_t
subject to p_i + p_j + y_t ≥ θ_j − θ_i for each trader t = (i, j), with all p_i ≥ 0 and y_t ≥ 0.

Proof. Consider an efficient trade; let x_t = 1 if t trades and 0 otherwise; and consider an optimal solution (p, y) to the dual linear program. We would like to claim that all dual solutions correspond to equilibrium prices, but unfortunately this is not exactly true. Before we can convert a dual solution to equilibrium prices, we may need to modify the solution slightly, as follows. Consider any agent i that is only connected to a single trader t.
Because the agent is only connected to a single trader, the variables y_t and p_i are dual variables corresponding to the same primal inequality x_t ≤ 1, and they always appear together as y_t + p_i in all inequalities, and also in the objective function. Thus there is an optimal solution in which p_i = 0 for all agents i connected only to a single trader. Assume (p, y) is a dual solution where agents connected only to one trader have p_i = 0. For a seller i, let β_t = θ_i + p_i be the bid for all traders t adjacent to i. Similarly, for each buyer j, let α_t = θ_j − p_j be the ask for all traders t adjacent to j. We claim that this set of bids and asks, together with the trade x, forms an equilibrium. To see why, note that all traders t adjacent to a seller or buyer i offer the same ask or bid, and so trading with any trader is equally good for agent i. Also, if i is not trading in the solution x, then by complementary slackness p_i = 0, and hence not trading is also equally good for i. This shows that sellers and buyers don't have an incentive to deviate. We need to show that traders have no incentive to deviate either. When a trader t is trading with seller i and buyer j, profitable deviations would involve increasing α_t or decreasing β_t. But by our construction (and the assumption about monopolized agents), all sellers and buyers either have multiple identical ask/bid offers, or trade is occurring at valuation. In either case such a deviation cannot be successful. Finally, consider a trader t = (i, j) who doesn't trade. A deviation for t would involve offering a higher bid to seller i and a lower ask to buyer j than the prices at which they currently trade. However, y_t = 0 by complementary slackness, and hence p_i + θ_i ≥ θ_j − p_j, so i sells for a price at least as high as the price at which j buys, and trader t cannot create a profitable trade. Note that a seller or buyer i connected to a single trader t cannot have profit at equilibrium, so possible
equilibrium profits are in one-to-one correspondence with dual solutions for which p_i = 0 whenever i is monopolized by one trader.

A disappointing feature of the equilibrium created by this proof is that some agents t may have to create ask-bid pairs where β_t > α_t, offering to buy for more than the price at which they are willing to sell. Agents that make such crossing bid-ask pairs never actually perform a trade, so the pairs do not result in negative profit for the agent, but they are unnatural. Crossing bid-ask pairs are weakly dominated by the strategy of offering a low bid β = 0 and an extremely high ask, guaranteeing that neither is accepted. To formulate a way of avoiding such crossing pairs, we say an equilibrium is cross-free if α_t ≥ β_t for all traders t. We now show there is always a cross-free equilibrium.

Proof. Consider an optimal solution to the dual linear program. To get an equilibrium without crossing bids, we need a more general modification than just assuming that p_i = 0 for all sellers and buyers connected to only a single trader. Let E be the set of edges t = (i, j) that are tight, in the sense that we have the equality y_t = (θ_j − p_j) − (θ_i + p_i). This set E contains all the edges where trade occurs, and possibly some more edges. We want to make sure that p_i = 0 for all sellers and buyers that have degree at most 1 in E.
Consider a seller i that has p_i > 0. We must have i involved in a trade, and the edge t = (i, j) along which the trade occurs must be tight. Suppose this is the only tight edge adjacent to agent i; then we can decrease p_i and increase y_t until one of the following happens: either p_i = 0 or the constraint of some other trader t′ ∈ adj(i) becomes tight. This change only increases the set of tight edges E, keeps the solution feasible, and does not change the objective function value. So after doing this for all sellers, and analogously changing y_t and p_j for all buyers, we get an optimal solution where all sellers and buyers i either have p_i = 0 or have at least two adjacent tight edges. Now we can set asks and bids to form a cross-free equilibrium. For all traders t = (i, j) associated with an edge t ∈ E we set α_t and β_t as before: we set the bid β_t = p_i + θ_i and the ask α_t = θ_j − p_j. For a trader t = (i, j) ∉ E we have that p_i + θ_i ≥ θ_j − p_j, and we set α_t = β_t to be any value in the range [θ_j − p_j, p_i + θ_i]. This guarantees that for each seller or buyer the best sell or buy offer is along the edge where trade occurs in the solution. The ask-bid values along the tight edges guarantee that traders who trade cannot increase their spread. Traders t = (i, j) who do not trade cannot make profit due to the constraint p_i + θ_i ≥ θ_j − p_j.

Figure 2: Left: an equilibrium with crossing bids where traders make no money. Right: an equilibrium without crossing bids for any value x ∈ [0, 1]. Total trader profit ranges between 1 and 2.

2.2 Distinguishable Goods

We now consider the case of distinguishable goods. As in the previous section, we can write a transshipment linear program for the socially optimal trade, with the only change being in the objective function. We can show that the dual of this linear program corresponds to trader profits. Recall that we needed to add the constraints x_t ≤ 1 for all
traders. The dual is then:

minimize Σ_{i∈B∪S} p_i + Σ_{t∈T} y_t
subject to p_i + p_j + y_t ≥ θ_ji − θ_ij for each trader t = (i, j), with all p_i ≥ 0 and y_t ≥ 0.

It is not hard to extend the proofs of Theorems 2.2--2.4 to this case. Profits in an equilibrium satisfy the dual constraints, and profits and trade satisfy complementary slackness. This shows that trade is socially optimal. Taking an optimal dual solution where p_i = 0 for all agents that are monopolized, we can convert it to an equilibrium, and with a bit more care, we can also create an equilibrium with no crossing bid-ask pairs.

THEOREM 2.5. All equilibria for the case of pair-traders with distinguishable goods result in socially optimal trade. Pure non-crossing equilibria exist.

2.3 Trader Profits

We have seen that all equilibria are efficient. However, it turns out that equilibria may differ in how the value of the allocation is spread among the sellers, buyers, and traders. Figure 2 depicts a simple example of this phenomenon. Our goal is to understand how a trader's profit is affected by its position in the network; we will use the characterization we obtained to work out the range of profits a trader can make. To maximize the profit of a trader t (or a subset of traders T′), all we need to do is find an optimal solution to the dual linear program maximizing the value of y_t (or the sum Σ_{t∈T′} y_t). Such dual solutions will then correspond to equilibria with non-crossing prices.

THEOREM 2.6. For any trader t or subset of traders T′, the maximum total profit they can make in any equilibrium can be computed in polynomial time. This maximum profit can be obtained by a non-crossing equilibrium.

One way to think about the profit of a trader t = (i, j) is as a subtraction from the value of the corresponding edge (i, j). The value of the edge is the social value θ_ji − θ_ij if the trader makes no profit, and decreases to θ_ji − θ_ij − y_t if the trader t insists on making y_t profit. Trader t gets y_t profit in equilibrium if, after this decrease in the value of the edge, the edge is still included in the
optimal transshipment.

THEOREM 2.7. A trader t can make profit in an equilibrium if and only if t is essential for the social welfare, that is, if deleting agent t decreases the social welfare. The maximum profit he can make is exactly his value to society, that is, the increase his presence causes in the social welfare.

If we allow crossing equilibria, then we can also find the minimum possible profit. Recall that in the proof of Theorem 2.3, traders only made money off of sellers or buyers that they have a monopoly over. Allowing such equilibria with crossing bids, we can find the minimum profit a trader or set of traders can make by minimizing the value y_t (or the sum Σ_{t∈T′} y_t) over all optimal solutions that satisfy p_i = 0 whenever i is connected to only a single trader.

3. GENERAL TRADERS

Next we extend the results to a model where traders may be connected to an arbitrary number of sellers and buyers. For a trader t ∈ T we will use S(t) and B(t) to denote the sets of sellers and buyers connected to trader t. In this section we focus on the general case where goods are distinguishable (i.e.
both buyers and sellers have valuations that are sensitive to the identity of the agent they are paired with in the allocation). In the full version of the paper we also discuss the special case of indistinguishable goods in more detail. To get the optimal trade, we consider the bipartite graph G = (S ∪ B, E) connecting sellers and buyers, where an edge e = (i, j) connects a seller i and a buyer j if there is a trader adjacent to both: E = {(i, j) : adj(i) ∩ adj(j) ≠ ∅}. On this graph, we then solve the instance of the assignment problem that was also used in Section 2.2, with the value of edge (i, j) equal to θ_ji − θ_ij (since the value of trading between i and j is independent of which trader conducts the trade). We will also use the dual of this linear program:

minimize val(z) = Σ_{i∈S∪B} z_i
subject to z_i ≥ 0 for each i ∈ S ∪ B, and z_i + z_j ≥ θ_ji − θ_ij for each edge (i, j) ∈ E.

3.1 Bids and Asks and Trader Optimization

First we need to understand what bidding model we will use. Even when goods are indistinguishable, a trader may want to price-discriminate, and offer different bid and ask values to different sellers and buyers. In the case of distinguishable goods, we have to deal with a further complication: the trader has to name the good she is proposing to sell or buy, and can possibly offer multiple different products. There are two variants of our model, depending on whether a trader makes a single bid or ask to a seller or buyer, or offers a menu of options. (i) A trader t can offer a buyer j a menu of asks α_tji, a vector of values for all the products that she is connected to, where α_tji is the ask for the product of seller i. Symmetrically, a trader t can offer to each seller i a menu of bids β_tij for selling to different buyers j.
(ii) Alternatively, we can require that each trader t make at most one bid to each seller and one ask to each buyer, where an ask has to name the product sold, and a bid has to name the particular buyer the good will be sold to. Our results hold in either model. For notational simplicity we will use the menu option here. Next we need to understand the optimization problem of a trader t. Suppose we have bid and ask values for all other traders t′ ∈ T, t′ ≠ t. What are the best bid and ask offers trader t can make as a best response to the current set of bids and asks? For each seller i, let p_i be the maximum profit seller i can make using bids by other traders, and symmetrically let p_j be the maximum profit buyer j can make using asks by other traders (let p_i = 0 for any seller or buyer i who cannot make a profit). Now consider a seller-buyer pair (i, j) that trader t can connect. Trader t will have to make a bid of at least β_tij = θ_ij + p_i to seller i and an ask of at most α_tji = θ_ji − p_j to buyer j to get this trade, so the maximum profit she can make on this trade is v_tij = α_tji − β_tij = θ_ji − p_j − (θ_ij + p_i). The optimal trade for trader t is obtained by solving a matching problem: find the matching between the sellers S(t) and buyers B(t) that maximizes the total value v_tij for trader t. We will need the dual of the linear program of finding the trade of maximum profit for trader t. We will use q_ti as the dual variable associated with the constraint of seller or buyer i. The dual is then the following problem:

minimize Σ_{i∈S(t)∪B(t)} q_ti
subject to q_ti ≥ 0 for each i ∈ S(t) ∪ B(t), and q_ti + q_tj ≥ v_tij for each pair (i, j) that t can connect.

We view q_ti as the profit made by t from trading with seller or buyer i.
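As an illustration of the trader's best-response problem just described, the following sketch (hypothetical instance; a brute-force stand-in for the matching linear program) computes the maximum profit a trader t can secure, given the outside-option profits p_i that sellers and buyers can obtain from other traders:

```python
from itertools import combinations, permutations

def best_response_value(sellers, buyers, v):
    """Brute-force max-weight matching between S(t) and B(t);
    v(i, j) is trader t's profit on the (i, j) trade. A matching that
    contains a negative-value edge is dominated by a sub-matching,
    which is also enumerated, so such edges need not be excluded."""
    best = 0
    for k in range(1, min(len(sellers), len(buyers)) + 1):
        for ss in combinations(sellers, k):
            for bs in permutations(buyers, k):
                best = max(best, sum(v(i, j) for i, j in zip(ss, bs)))
    return best

# Illustrative data: theta_ji[(j, i)] is buyer j's value for seller i's
# good; sellers' values theta_ij are taken to be 0 here; p[i] is agent
# i's best profit from other traders' offers.
theta_ji = {("b1", "s1"): 4, ("b1", "s2"): 3, ("b2", "s1"): 2, ("b2", "s2"): 5}
p = {"s1": 0, "s2": 1, "b1": 0, "b2": 0}
v = lambda i, j: theta_ji[(j, i)] - p[j] - (0 + p[i])  # v_tij with theta_ij = 0

print(best_response_value(["s1", "s2"], ["b1", "b2"], v))  # -> 8
```

In a real implementation the inner matching would be solved in polynomial time (e.g. by the Hungarian algorithm) rather than by enumeration.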
Theorem 3.1 summarizes the above discussion.

3.2 Efficient Trade and Equilibrium

Now we can prove that trade at equilibrium is always efficient.

THEOREM 3.2. Every equilibrium results in an efficient allocation of the goods.

Proof. Consider an equilibrium, with x_e = 1 if and only if trade occurs along edge e = (i, j). The trade is a solution to the transshipment linear program used in Section 2.2. Let p_i denote the profit of seller or buyer i. Each trader t currently has the best solution to his own optimization problem. A trader t finds his optimal trade (given the bids and asks of all other traders) by solving a matching problem. Let q_ti for i ∈ B(t) ∪ S(t) denote the optimal dual solution to this matching problem, as described by Theorem 3.1. When setting up the optimization problem for a trader t above, we used p_i to denote the maximum profit i can make without the offer of trader t. Note that this p_i is exactly the same p_i we use here, the profit of agent i. This is clearly true for all traders t′ that are not trading with i in the equilibrium. To see why it is true for the trader t that i is trading with, we use the fact that the current set of bid-ask values is an equilibrium. If for any agent i the bid or ask of trader t were the unique best option, then t could extract more profit by offering a slightly higher ask or a slightly lower bid, a contradiction. We show the trade x is optimal by considering the dual solution z_i = p_i + Σ_{t∈adj(i)} q_ti for all agents i ∈ B ∪ S. We claim z is a dual solution, and that it satisfies complementary slackness with the trade x. To see this we need to show a few facts. We need that z_i > 0 implies that i trades. If z_i > 0 then either p_i > 0 or q_ti > 0 for some trader t. Agent i can only make profit p_i > 0 if he is involved in a trade. If q_ti > 0 for some t, then trader t must trade with i, as his solution is optimal, and by complementary slackness for the dual solution, q_ti > 0 implies that t trades with i.
For an edge (i, j) associated with a trader t we need to show the dual solution is feasible, that is, z_i + z_j ≥ θ_ji − θ_ij. Recall that v_tij = θ_ji − p_j − (θ_ij + p_i), and the dual constraint of the trader's optimization problem requires q_ti + q_tj ≥ v_tij. Putting these together, we have z_i + z_j ≥ p_i + q_ti + p_j + q_tj ≥ v_tij + p_i + p_j = θ_ji − θ_ij. Finally, we need to show that the trade variables x also satisfy the complementary slackness constraint: when x_e > 0 for an edge e = (i, j), the corresponding dual constraint is tight. Let t be the trader involved in the trade. By complementary slackness of t's optimization problem we have q_ti + q_tj = v_tij. To see that z satisfies complementary slackness, we need to argue that for all other traders t′ ≠ t we have both q_t′i = 0 and q_t′j = 0. This is true since q_t′i > 0 would imply, by complementary slackness of t′'s optimization problem, that t′ must trade with i at optimum, whereas it is t ≠ t′ that trades with i.

Next we want to show that a non-crossing equilibrium always exists. We call an equilibrium non-crossing if the bid-ask offers a trader t makes for a seller-buyer pair (i, j) never cross, that is, β_tij ≤ α_tji for all t, i, j.

THEOREM 3.3. There exists a non-crossing equilibrium supporting any socially optimal trade.

Proof. Consider an optimal trade x and a dual solution z as before. To find a non-crossing equilibrium, we need to divide the profit z_i between i and the trader t trading with i. We will use q_ti as trader t's profit associated with agent i, for any i ∈ S(t) ∪ B(t). We will need to guarantee the following properties: Trader t trades with agent i whenever q_ti > 0. This is one of the complementary slackness conditions to make sure the current trade is optimal for trader t.
For all seller-buyer pairs (i, j) that a trader t can trade with, we have

(p_i + q_ti) + (p_j + q_tj) ≥ θ_ji − θ_ij,    (1)

which will make sure that q_t is a feasible dual solution for the optimization problem faced by trader t. We need to have equality in (1) when trader t is trading between i and j. This is one of the complementary slackness conditions for trader t, and will ensure that the trade of t is optimal for the trader. Finally, we want to arrange that each agent i with p_i > 0 has multiple offers for making profit p_i, and that trade occurs at one of his best offers. To guarantee this in the corresponding bids and asks, we need to make sure that whenever p_i > 0 there are multiple t ∈ adj(i) that have equality in the above constraint (1). We start by setting p_i = z_i for all i ∈ S ∪ B and q_ti = 0 for all i ∈ S ∪ B and traders t ∈ adj(i). This guarantees all invariants except the last property about multiple t ∈ adj(i) having equality in (1). We will modify p and q to gradually enforce the last condition, while maintaining the others. Consider a seller with p_i > 0. By optimality of the trade and the dual solution z, seller i must trade with some trader t, and that trader will have equality in (1) for the buyer j that he matches with i. If this is the only trader t that has a tight constraint in (1) involving seller i, then we increase q_ti and decrease p_i until either p_i = 0 or another trader t′ ≠ t achieves equality in (1) for some edge adjacent to i (possibly with a different buyer j′). This change maintains all invariants, and increases the set of sellers that also satisfy the last constraint. We can do a similar change for a buyer j that has p_j > 0 and has only one trader t with a tight constraint (1) adjacent to j. After possibly repeating this for all sellers and buyers, we get profits satisfying all constraints. Now we obtain equilibrium bid and ask values as follows. For a trader t that has equality for the seller-buyer pair (i, j) in (1), we offer α_tji = θ_ji − p_j and β_tij =
θ_ij + p_i. For all other traders t and seller-buyer pairs (i, j) we have the invariant (1), and using this we know we can pick a value γ in the range θ_ij + p_i + q_ti ≥ γ ≥ θ_ji − (p_j + q_tj). We offer bid and ask values β_tij = α_tji = γ. Neither the bid nor the ask will be the unique best offer for the agent it is made to, and hence the trade x remains an equilibrium.

3.3 Trader Profits

Finally, we turn to the goal of understanding, in the case of general traders, how a trader's profit is affected by its position in the network. The profit of trader t in an equilibrium is Σ_i q_ti. First, we show how to maximize the total profit of a set of traders. To find the maximum possible profit for a trader t or a set of traders T′, we need to do the following: find profits p_i ≥ 0 and q_ti ≥ 0 so that z_i = p_i + Σ_{t∈adj(i)} q_ti is an optimal dual solution, and also satisfies the constraints (1) for any seller i and buyer j connected through a trader t ∈ T. Now, subject to all these conditions, we maximize the sum Σ_{t∈T′} Σ_{i∈S(t)∪B(t)} q_ti. Note that this maximization is a secondary objective function to the primary objective that z is an optimal dual solution. Then the proof of Theorem 3.3 shows how to turn this into an equilibrium.

THEOREM 3.4. The maximum value of Σ_{t∈T′} Σ_i q_ti above is the maximum profit the set T′ of traders can make.

Proof. By the proof of Theorem 3.2, the profits of trader t can be written in this form, so the set of traders T′ cannot make more profit than claimed in this theorem. To see that T′ can indeed make this much profit, we use the proof of Theorem 3.3. We modify that proof to start with the profit vectors p and q_t for t ∈ T′, and set q_t = 0 for all traders t ∉ T′. We verify that this starting solution satisfies the first three of the four required properties, and then we can follow the proof to make the fourth property true. We omit the details in the present version. In Section 2.3 we showed that in the case
of pair traders, a trader t can make money if he is essential for efficient trade.\nThis is not true for the type of more general traders we consider here, as shown by the example in Figure 3.\nFigure 3: The top trader is essential for social welfare.\nYet the only equilibrium is to have bid and ask values equal to 0, and the trader makes no profit.\nHowever, we still get a characterization for when a trader t can make a positive profit.\nTHEOREM 3.5.\nA trader t can make profit in an equilibrium if and only if there is a seller or buyer i adjacent to t such that the connection of trader t to agent i is essential for social welfare--that is, if deleting agent t from adj (i) decreases the value of the optimal allocation.\nProof.\nFirst we show the direction that if a trader t can make money there must be an agent i so that t's connection to i is essential to social welfare.\nLet p, q be the profits in an equilibrium where t makes money, as described by Theorem 3.2, with \u2211i\u2208S(t)\u222aB(t) qti > 0.\nSo we have some agent i with qti > 0.\nWe claim that the connection between agent i and trader t must be essential; in particular, we claim that social welfare must decrease by at least qti if we delete t from adj (i).\nTo see why, note that decreasing the value of all edges of the form (i, j) associated with trader t by qti keeps the same trade optimal, as we get a matching dual solution by simply resetting qti to zero.\nTo see the opposite, assume deleting t from adj (i) decreases social welfare by some value \u03b3. Assume i is a seller (the case of buyers is symmetric), and decrease by \u03b3 the social value of each edge (i, j) for any buyer j such that t is the only agent connecting i and j.
By assumption the trade is still optimal, and we let z be the dual solution for this matching.\nNow we use the same process as in the proof of Theorem 3.3 to create a non-crossing equilibrium starting with pi = zi for all i \u2208 S \u222a B, and qti = \u03b3, and all other q values 0.\nThis creates an equilibrium with non-crossing bids where t makes at least \u03b3 profit (due to trade with seller i).\nFinally, if we allow crossing equilibria, then we can find the minimum possible profit by simply finding a dual solution minimizing the dual variables associated with agents monopolized by some trader.\nTHEOREM 3.6.\nFor any trader t or subset of traders T', the minimum total profit they can make in any equilibrium can be computed in polynomial time.","keyphrases":["trade network","trade network","market","algorithm game theori","buyer and seller interact","initi endow of monei","bid price","perfect competit","benefit","maximum and minimum amount","econom and financ","strateg behavior of trader","complementari slack","monopoli"],"prmu":["P","P","P","M","M","M","M","R","U","M","M","M","U","U"]} {"id":"C-20","title":"Live Data Center Migration across WANs: A Robust Cooperative Context Aware Approach","abstract":"A significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned. In this paper we advocate a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner. We specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities. We make use of server virtualization technologies to enable the replication and migration of server functions. 
We propose new network functions to enable server migration and replication across wide area networks (e.g., the Internet), and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.","lvl-1":"Live Data Center Migration across WANs: A Robust Cooperative Context Aware Approach K.K. Ramakrishnan, Prashant Shenoy, Jacobus Van der Merwe AT&T Labs-Research \/ University of Massachusetts ABSTRACT A significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocate a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities.\nWe make use of server virtualization technologies to enable the replication and migration of server functions.\nWe propose new network functions to enable server migration and replication across wide area networks (e.g., the Internet), and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems General Terms Design, Reliability 1.\nINTRODUCTION A significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nThese concerns are exacerbated by the increased use of the Internet for mission critical business and real-time entertainment applications.\nA relatively minor outage can disrupt and inconvenience a large number of users.\nToday these
services are almost exclusively hosted in data centers.\nRecent advances in server virtualization technologies [8, 14, 22] allow for the live migration of services within a local area network (LAN) environment.\nIn the LAN environment, these technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion.\nNot only can they support planned maintenance events [8], but they can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [22].\nWhen using these technologies in a LAN environment, services execute in a virtual server, and the migration services provided by the underlying virtualization framework allow for a virtual server to be migrated from one physical server to another, without any significant downtime for the service or application.\nIn particular, since the virtual server retains the same network address as before, any ongoing network level interactions are not disrupted.\nSimilarly, in a LAN environment, storage requirements are normally met either via network attached storage (NAS) or via a storage area network (SAN), which is still reachable from the new physical server location to allow for continued storage access.\nUnfortunately, in a wide area environment (WAN), live server migration is not as easily achievable for two reasons: First, live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original.\nWhile this is fairly easily achieved in a shared LAN environment, no current mechanisms are available to efficiently achieve the same feat in a WAN environment.\nSecond, while fairly sophisticated remote replication mechanisms have been developed in the context of disaster recovery [20, 7, 11], these mechanisms are ill suited to live data center migration, because in general the available technologies are unaware of
application\/service level semantics.\nIn this paper we outline a design for live service migration across WANs.\nOur design makes use of existing server virtualization technologies and proposes network and storage mechanisms to facilitate migration across a WAN.\nThe essence of our approach is cooperative, context aware migration, where a migration management system orchestrates the data center migration across all three subsystems involved, namely the server platforms, the wide area network and the disk storage system.\nWhile conceptually similar in nature to the LAN based work described above, using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved.\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling data center services.\nWe describe new mechanisms as well as extensions to existing technologies to enable this, and outline the cooperative, context aware functionality needed across the different subsystems.\n2.\nLIVE DATA CENTER MIGRATION ACROSS WANS Three essential subsystems are involved with hosting services in a data center: First, the servers host the application or service logic.\nSecond, services are normally hosted in a data center to provide shared access through a network, either the Internet or virtual private networks (VPNs).\nFinally, most applications require disk storage for storing data, and the amount of disk space and the frequency of access vary greatly between different services\/applications.\nDisruptions, failures, or, in general, outages of any kind of any of these components will cause service disruption.\nFor this reason, prior work and current practices have addressed the robustness of individual components.\nFor example, data centers typically have multiple network connections and redundant LAN devices to ensure redundancy at the networking level.\nSimilarly, physical
servers are being designed with redundant hot-swappable components (disks, processor blades, power supplies, etc.).\nFinally, redundancy at the storage level can be provided through sophisticated data mirroring technologies.\nThe focus of our work, however, is on the case where such local redundancy mechanisms are not sufficient.\nSpecifically, we are interested in providing service availability when the data center as a whole becomes unavailable, for example because of data center wide maintenance operations, or because of catastrophic events.\nAs such, our basic approach is to migrate services between data centers across the wide area network (WAN).\nBy necessity, moving or migrating services from one data center to another needs to consider all three of these components.\nHistorically, such migration has been disruptive in nature, requiring downtime of the actual services involved, or requiring heavyweight replication techniques.\nIn the latter case, concurrently running replicas of a service can be made available, thus allowing a subset of the service to be migrated or maintained without impacting the service as a whole.\nWe argue that these existing mechanisms are inadequate to meet the needs of network-based services, including real-time services, in terms of continuous availability and operation.\nInstead, we advocate an approach where server, network and storage subsystems cooperate and coordinate actions, in a manner that is cognizant of the service context in order to realize seamless migration across wide area networks.\nIn this section we briefly describe the technical building blocks that would enable our approach.\nAs outlined below, some of these building blocks exist, or exist in part, while in other cases we use the desire for high availability of services as the driver for the changes we are proposing.\n2.1 Live Virtual Server Migration The main enabler for our approach is the live server migration capabilities that have been developed in the context
of server virtualization in recent years [5, 8].\nIn this approach, an entire running operating system (including any active applications) executing as a virtual server is transferred from one physical machine to another.\nSince the virtual server is migrated in its entirety, both application and kernel level state gets migrated, including any state associated with ongoing network connections.\nAssuming that network level reachability to the virtual server's network addresses is maintained after the migration, the implication is that applications executing in the virtual server experience very little downtime (on the order of tens to hundreds of milliseconds) and ongoing network connections remain intact.\nIn order to maintain network level reachability, the IP address(es) associated with the virtual server have to be reachable at the physical server where the virtual server is migrated to.\nIn a LAN environment this is achieved either by issuing an unsolicited ARP reply to establish the binding between the new MAC address and the IP address, or by relying on layer-two technologies to allow the virtual server to reuse its (old) MAC address [8].\nBecause of the difficulty of moving network level addresses (i.e., IP addresses) in a routed non-LAN environment, use of live server migration as a management tool has been limited to LAN environments [22].\nHowever, virtual server migration across the wide area would also be an attractive tool, specifically to deal with outages, and we therefore propose networking mechanisms to enable it.\nIf disk storage needs are being met with network attached storage (NAS), the storage becomes just another network based application and can therefore be addressed in the same way as with LAN based migration [8].\nModern virtualization environments also include support for other forms of (local) storage, including storage area networks (SANs) [23].\nHowever, since we propose to use WAN server migration as a means to deal with complete data
center outages, these mechanisms are inadequate for our purposes, and below we propose extensions to remote replication technologies that can work in concert with server migration to minimize service downtime.\n2.2 Networking Requirements From the discussion above, a key requirement for live server migration across a WAN is the ability to have the IP address(es) of the virtual server be reachable at the new data center location immediately after the migration has completed.\nThis presents a significant challenge for a number of reasons.\nFirst, despite decades of work in this area, IP address mobility remains an unresolved problem that is typically only addressed at manual configuration time scales.\nThe second challenge comes from the fact that current routing protocols are well known to have convergence issues, which are ill suited to the time constraints imposed by live migration.\nThird, in today's WAN networking environment, connectivity changes are typically initiated, and controlled, by network operators or network management systems.\nAgain, this is poorly suited to WAN server migration, where it is essential that the migration software, which is closely monitoring the status of the server migration process, initiate this change at the appropriate time.\nOur approach to addressing the networking requirements for live WAN migration builds on the observations that not all networking changes in this approach are time critical, and further that instantaneous changes are best achieved in a localized manner.\nSpecifically, in our solution, described in detail in Section 3, we allow the migration software to initiate the necessary networking changes as soon as the need for migration has been identified.\nWe make use of tunneling technologies during this initial phase to preemptively establish connectivity between the data centers involved.\nOnce server migration is complete, the migration software initiates a local change to direct traffic towards the new data center
via the tunnel.\nSlower time scale network changes then phase out this local network connectivity change for a more optimal network wide path to the new data center.\n2.3 Storage Replication Requirements Data availability is typically addressed by replicating business data on a local\/primary storage system, to some remote location from where it can be accessed.\nFrom a business\/usability point of view, such remote replication is driven by two metrics [9].\nFirst is the recovery-point-objective, which is the consistent data point to which data can be restored after a disaster.\nSecond is the recovery-time-objective, which is the time it takes to recover to that consistent data point after a disaster [13].\nRemote replication can be broadly classified into the following two categories: \u2022 Synchronous replication: every data block written to a local storage system is replicated to the remote location before the local write operation returns.\n\u2022 Asynchronous replication: in this case, the local and remote storage systems are allowed to diverge.\nThe amount of divergence between the local and remote copies is typically bounded by either a certain amount of data, or by a certain amount of time.\nSynchronous replication is normally recommended for applications, such as financial databases, where consistency between local and remote storage systems is a high priority.\nHowever, these desirable properties come at a price.\nFirst, because every data block needs to be replicated remotely, synchronous replication systems cannot benefit from any local write coalescing of data if the same data blocks are written repeatedly [16].\nSecond, because data have to be copied to the remote location before the write operation returns, synchronous replication has a direct performance impact on the application, since both lower throughput and increased latency of the path between the primary and the remote systems are reflected in the time it takes for the local disk write to
complete.\nAn alternative is to use asynchronous replication.\nHowever, because the local and remote systems are allowed to diverge, asynchronous replication always involves some data loss in the event of a failure of the primary system.\nBut, because write operations can be batched and pipelined, asynchronous replication systems can move data across the network in a much more efficient manner than synchronous replication systems.\nFor WAN live server migration, we seek a more flexible replication system where the mode can be dictated by the migration semantics.\nSpecifically, to support live server migration, we propose a remote replication system where the initial transfer of data between the data centers is performed via asynchronous replication to benefit from the efficiency of that mode of operation.\nWhen the bulk of the data have been transferred in this manner, replication switches to synchronous replication in anticipation of the completion of the server migration step.\nThe final server migration step triggers a simultaneous switch-over to the storage system at the new data center.\nIn this manner, when the virtual server starts executing in the new data center, storage requirements can be locally met.\n3.\nWAN MIGRATION SCENARIOS In this section we illustrate how our cooperative, context aware approach can combine the technical building blocks described in the previous section to realize live server migration across a wide area network.\nWe demonstrate how the coordination of server virtualization and migration technologies, the storage replication subsystem and the network can achieve live migration of the entire data center across the WAN.\nWe utilize different scenarios to demonstrate our approach.\nIn Section 3.1 we outline how our approach can be used to achieve the safe live migration of a data center when planned maintenance events are handled.\nIn Section 3.2 we show the use of live server migration to mitigate the effects of unplanned outages or
failures.\n3.1 Maintenance Outages We deal with maintenance outages in two parts.\nFirst, we consider the case where the service has no (or very limited) storage requirements.\nThis might, for example, be the case with a network element such as a voice-over-IP (VoIP) gateway.\nSecond, we deal with the more general case where the service also requires the migration of data storage to the new data center.\nWithout Requiring Storage to be Migrated: Without storage to be replicated, the primary components that we need to coordinate are the server migration and network mobility.\nFigure 1 shows the environment where the application running in a virtual server VS has to be moved from a physical server in data center A to a physical server in data center B. Prior to the maintenance event, the coordinating migration management system (MMS) would signal to both the server management system and the network that a migration is imminent.\nThe server management system would initiate the migration of the virtual server from physical server a (P_a) to physical server b (P_b).\nAfter an initial bulk state transfer as preparation for migration, the server management system will mirror any state changes between the two virtual servers.\nSimilarly, for the network part, based on the signal received from the MMS, the service provider edge (PE_A) router will initiate a number of steps to prepare for the migration.\nSpecifically, as shown in Figure 1(b), the migration system will cause the network to create a tunnel between PE_A and PE_B, which will be used subsequently to transfer data destined to VS to data center B.\nWhen the MMS determines a convenient point to quiesce the VS, another signal is sent to both the server management system and the network.\nFor the server management system, this signal will indicate the final migration of the VS from data center A to data center B, i.e., after this the VS will
become active in data center B. For the network, this second signal enables the network data path to switch over locally at PE_A to the remote data center.\nSpecifically, from this point in time, any traffic destined for the virtual server address that arrives at PE_A will be switched onto the tunnel to PE_B for delivery to data center B. Note that at this point, from a server perspective, the migration is complete as the VS is now active in data center B. However, traffic is sub-optimally flowing first to PE_A and then across the tunnel to PE_B.\nTo rectify this situation, another networking step is involved.\nSpecifically, PE_B starts to advertise a more preferred route to reach VS than the route currently being advertised by PE_A.\nIn this manner, as ingress PEs to the network (PE_1 to PE_n in Figure 1) receive the more preferred route, traffic will start to flow to PE_B directly, and the tunnel between PE_A and PE_B can be torn down, leading to the final state shown in Figure 1(c).\nRequiring Storage Migration: When storage also has to be replicated, it is critical that we achieve the right balance between performance (impact on the application) and the recovery point or data loss when the switchover occurs to the remote data center.\nTo achieve this, we allow the storage to be replicated asynchronously, prior to any initiation of the maintenance event, or, assuming the amount of data to be transferred is relatively small, asynchronous replication can be started in anticipation of a migration that is expected to happen shortly.\nAsynchronous replication during this initial phase allows for the application to see no performance impact.\nHowever, when the maintenance event is imminent, the MMS would
signal to the replication system to switch from asynchronous replication to synchronous replication to ensure that there is no loss of data during migration.\nWhen data is being replicated synchronously, there will be a performance impact on the application.\nFigure 1: Live server migration across a WAN This requires us to keep the amount of time we replicate on a synchronous basis to a minimum.\nWhen the MMS signals to the storage system the requirement to switch to synchronous replication, the storage system completes all the pending asynchronous operations and then proceeds to perform all the subsequent writes by synchronously replicating them to the remote data center.\nThus, between the server migration and synchronous replication, both the application state and all the storage operations are mirrored at the two environments in the two data centers.\nWhen all the pending write operations are copied over, then, as in the previous case, we quiesce the application and the network is signaled to switch traffic over to the remote data center.\nFrom this point on, both storage and server migration operations are complete and activated in data center B.\nAs above, the network state still needs to be updated to ensure optimal data flow directly to data center B. 
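The planned-maintenance sequence just described (asynchronous bulk replication and tunnel setup in advance, a switch to synchronous replication as the event nears, then quiescing the virtual server and switching storage and traffic over together) can be sketched as a simple orchestration driver. The sketch below is illustrative only: the paper describes signals between subsystems, not an API, so every class and method name here (MMS, Storage.switch_to_sync_replication, and so on) is a hypothetical stand-in.

```python
# Illustrative sketch of the cooperative, context aware migration sequence
# for a planned maintenance event with storage (Section 3.1). All names are
# hypothetical; the real subsystems are a server manager, a storage
# replication system, and the provider edge routers (PE_A, PE_B).

class Storage:
    def __init__(self, log): self.log = log
    def start_async_replication(self): self.log.append("storage:async")
    def switch_to_sync_replication(self):
        # Drain pending asynchronous writes, then replicate synchronously so
        # no data is lost at switchover (tight recovery point objective).
        self.log.append("storage:sync")
    def activate_remote(self): self.log.append("storage:remote-active")

class Server:
    def __init__(self, log): self.log = log
    def start_vs_migration(self): self.log.append("server:bulk-transfer")
    def quiesce_and_activate_remote(self): self.log.append("server:remote-active")

class Network:
    def __init__(self, log): self.log = log
    def establish_tunnel(self): self.log.append("network:tunnel-up")    # PE_A -> PE_B
    def local_switchover(self): self.log.append("network:switchover")   # traffic via tunnel
    def advertise_preferred_route(self): self.log.append("network:route-update")

class MMS:
    """Migration management system: coordinates all three subsystems."""
    def __init__(self, storage, server, network):
        self.storage, self.server, self.network = storage, server, network
    def migrate(self):
        # Phase 1: preparation (not time critical).
        self.storage.start_async_replication()
        self.network.establish_tunnel()
        self.server.start_vs_migration()
        # Phase 2: maintenance event imminent; bound divergence to zero.
        self.storage.switch_to_sync_replication()
        # Phase 3: quiesce, then activate server and storage remotely and
        # switch traffic over locally at PE_A.
        self.server.quiesce_and_activate_remote()
        self.storage.activate_remote()
        self.network.local_switchover()
        # Phase 4: slower time scale route optimization; tunnel torn down later.
        self.network.advertise_preferred_route()
        return self.storage.log

log = []
events = MMS(Storage(log), Server(log), Network(log)).migrate()
```

The essential invariants this ordering captures are that synchronous replication begins before the virtual server is quiesced, and that the local network switchover happens only after both server and storage are active at the new data center.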
Note that while we have described the live server migration process as involving the service provider for the networking part, it is possible for a data center provider to perform a similar set of functions without involving the service provider.\nSpecifically, by creating a tunnel between the customer edge (CE) routers in the data centers, and performing local switching on the appropriate CE, rather than on the PE, the data center provider can realize the same functionality.\n3.2 Unplanned Outages We propose to also use cooperative, context aware migration to deal with unplanned data center outages.\nThere are multiple considerations that go into managing data center operations to plan for and overcome failures through migration.\nSome of these are: (1) amount of overhead under normal operation to overcome anticipated failures; (2) amount of data loss affordable (recovery point objective - RPO); (3) amount of state that has to be migrated; and (4) time available from anticipated failure to occurrence of event.\nAt one extreme, one might incur the overhead of completely mirroring the application at the remote site.\nThis has the consequence of both incurring processing and network overhead under normal operation as well as impacting application performance (latency and throughput) throughout.\nThe other extreme is to only ensure data recovery and to start a new copy of the application at the remote site after an outage.\nIn this case, application memory state, such as ongoing sessions, is lost, but data stored on disk is replicated and available in a consistent state.\nNeither the hot standby nor the cold standby approach described is desirable, due to the overhead or the loss of application memory state.\nAn intermediate approach is to recover control and essential state of the application, in addition to data stored on disk, to further minimize disruptions to users.\nA spectrum of approaches is possible.\nIn a VoIP server, for instance, session-based information
can be mirrored without mirroring the data flowing through each session.\nMore generally, this points to the need to checkpoint some application state in addition to mirroring data on disk.\nCheckpointing application state involves storing application state either periodically or in an application-aware manner, as databases do, and then copying it to the remote site.\nOf course, this has the consequence that the application can be restarted remotely at the checkpoint boundary only.\nSimilarly, for storage, one may use asynchronous replication with a periodic snapshot, ensuring all writes are up-to-date at the remote site at the time of checkpointing.\nSome data loss may occur upon an unanticipated, catastrophic failure, but the recovery point may be fairly small, depending on the frequency of checkpointing application and storage state.\nCoordination between the checkpointing of the application state and the snapshot of storage is key to successful migration while meeting the desired RPOs.\nIncremental checkpointing of application and storage is key to efficiency, and we see existing techniques to achieve this [4, 3, 11].\nFor instance, rather than full application mirroring, a virtualized replica can be maintained as a warm standby, in a dormant or hibernating state, enabling a quick switch-over to the previously checkpointed state.\nTo make the switch-over seamless, in addition to replicating data and recovering state, network support is needed.\nSpecifically, on detecting the unavailability of the primary site, the secondary site is made active, and the same mechanism described in Section 3.1 is used to switch traffic over to reach the secondary site via the pre-established tunnel.\nNote that for simplicity of exposition we assume here that the PE that performs the local switchover is not affected by the failure.\nThe approach can, however, easily be extended to make use of a switchover at a router deeper in the network.\nThe amount of state and storage that has
to be migrated may vary widely from application to application.\nThere may be many situations where, in principle, the server can be stateless.\nFor example, a SIP proxy server may not have any persistent state, and the communication between the clients and the proxy server may be using UDP.\nIn such a case, the primary activity to be performed is in the network, to move the communication over to the new data center site.\nLittle or no overhead is incurred under normal operation to enable the migration to a new data center.\nFailure recovery involves no data loss, and we can deal with near instantaneous, catastrophic failures.\nAs more and more state is involved with the server, more overhead is incurred to checkpoint application state and potentially to take storage snapshots, either periodically or upon application prompting.\nIt also means that the RPO is a function of the interval between checkpoints when we have to deal with instantaneous failures.\nThe more advance information we have of an impending failure, the more effective we can be in having the state migrated over to the new data center, so that we can still have a tighter RPO when operations are resumed at the new site.\n4.\nRELATED WORK Prior work on this topic falls into several categories: virtual machine migration, storage replication and network support.\nAt the core of our technique is the ability to encapsulate applications within virtual machines that can be migrated without application downtimes [15].\nMost virtual machine software, such as Xen [8] and VMWare [14], supports live migration of VMs with extremely short downtimes ranging from tens of milliseconds to a second; details of Xen's live migration techniques are discussed in [8].\nAs indicated earlier, these techniques assume that migration is being done on a LAN.\nVM migration has also been studied in the Shirako system [10] and for grid environments [17, 19].\nCurrent virtual machine software supports a suspend and resume feature
that can be used to support WAN migration, but with downtimes [18, 12].\nRecently, live WAN migration using IP tunnels was demonstrated in [21], where an IP tunnel is set up from the source to the destination server to transparently forward packets to and from the application; we advocate an alternate approach that assumes edge router support.\nIn the context of storage, there exist numerous commercial products that perform replication, such as IBM Extended Remote Copy, HP Continuous Access XP, and EMC RepliStor.\nAn excellent description of these and others, as well as a detailed taxonomy of the different approaches for replication, can be found in [11].\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposes supporting data-type specific selections of fault models and encoding schemes for replication [1].\nRecently, we proposed the notion of semantic-aware replication [13], where the system supports both synchronous and asynchronous replication concurrently and uses signals from the file system to determine whether to replicate a particular write synchronously or asynchronously.\nIn the context of network support, our work is related to the RouterFarm approach [2], which makes use of orchestrated network changes to realize near hitless maintenance on provider edge routers.\nIn addition to being in a different application area, our approach differs from the RouterFarm work in two regards.\nFirst, we propose to have the required network changes be triggered by functionality outside of the network (as opposed to network management functions inside the network).\nSecond, due to the stringent timing requirements of live migration, we expect that our approach would require new router functionality (as opposed to being realizable via the existing configuration interfaces).\nFinally, the recovery oriented computing (ROC) work emphasizes recovery from failures rather than failure avoidance [6].\nIn a similar spirit to ROC, we
advocate using mechanisms ranging from live VM migration to storage replication to support planned and unplanned outages in data centers (rather than full replication to mask such failures).

5. CONCLUSION

A significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned. In this paper we advocated a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner. We sought to achieve high availability of data center services in the face of both planned and unplanned outages of data center facilities. We advocated using server virtualization technologies to enable the replication and migration of server functions. We proposed new network functions to enable server migration and replication across wide area networks (such as the Internet or a geographically distributed virtual private network), and finally showed the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.

6. REFERENCES

[1] M. Abd-El-Malek, W. V. Courtright II, C. Cranor, G. R. Ganger, J. Hendricks, A. J. Klosterman, M. Mesnier, M. Prasad, B. Salmon, R. R. Sambasivan, S. Sinnamohideen, J. D. Strunk, E. Thereska, M. Wachs, and J. J. Wylie. Ursa Minor: versatile cluster-based storage. USENIX Conference on File and Storage Technologies, December 2005.
[2] Mukesh Agrawal, Susan Bailey, Albert Greenberg, Jorge Pastor, Panagiotis Sebos, Srinivasan Seshan, Kobus van der Merwe, and Jennifer Yates. RouterFarm: Towards a dynamic, manageable network edge. SIGCOMM Workshop on Internet Network Management (INM), September 2006.
[3] L. Alvisi. Understanding the Message Logging Paradigm for Masking Process Crashes. PhD thesis, Cornell University, January 1996.
[4] L. Alvisi and K.
Marzullo. Message logging: Pessimistic, optimistic, and causal. In Proceedings of the 15th International Conference on Distributed Computing Systems, pages 229-236. IEEE Computer Society, June 1995.
[5] Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield. Xen and the art of virtualization. In Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), October 2003.
[6] A. Brown and D. A. Patterson. Embracing failure: A case for recovery-oriented computing (ROC). 2001 High Performance Transaction Processing Symposium, October 2001.
[7] K. Brown, J. Katcher, R. Walters, and A. Watson. SnapMirror and SnapRestore: Advances in snapshot technology. Network Appliance Technical Report TR3043. www.netapp.com/tech_library/3043.html.
[8] C. Clark, K. Fraser, S. Hand, J. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield. Live migration of virtual machines. In Proceedings of NSDI, May 2005.
[9] Disaster Recovery Journal. Business continuity glossary. http://www.drj.com/glossary/drjglossary.html.
[10] Laura Grit, David Irwin, Aydan Yumerefendi, and Jeff Chase. Virtual machine hosting for networked clusters: Building the foundations for autonomic orchestration. In the First International Workshop on Virtualization Technology in Distributed Computing (VTDC), November 2006.
[11] M. Ji, A. Veitch, and J. Wilkes. Seneca: Remote mirroring done write. USENIX 2003 Annual Technical Conference, June 2003.
[12] M. Kozuch and M. Satyanarayanan. Internet suspend and resume. In Proceedings of the Fourth IEEE Workshop on Mobile Computing Systems and Applications, Callicoon, NY, June 2002.
[13] Xiaotao Liu, Gal Niv, K. K.
Ramakrishnan, Prashant Shenoy, and Jacobus Van der Merwe. The case for semantic aware remote replication. In Proc. 2nd International Workshop on Storage Security and Survivability (StorageSS 2006), Alexandria, VA, October 2006.
[14] Michael Nelson, Beng-Hong Lim, and Greg Hutchins. Fast Transparent Migration for Virtual Machines. In USENIX Annual Technical Conference, 2005.
[15] Mendel Rosenblum and Tal Garfinkel. Virtual machine monitors: Current technology and future trends. Computer, 38(5):39-47, 2005.
[16] C. Ruemmler and J. Wilkes. Unix disk access patterns. Proceedings of Winter 1993 USENIX, January 1993.
[17] Paul Ruth, Junghwan Rhee, Dongyan Xu, Rick Kennell, and Sebastien Goasguen. Autonomic Live Adaptation of Virtual Computational Environments in a Multi-Domain Infrastructure. In IEEE International Conference on Autonomic Computing (ICAC), June 2006.
[18] Constantine P. Sapuntzakis, Ramesh Chandra, Ben Pfaff, Jim Chow, Monica S. Lam, and Mendel Rosenblum. Optimizing the migration of virtual computers. In Proceedings of the 5th Symposium on Operating Systems Design and Implementation, December 2002.
[19] A. Sundararaj, A. Gupta, and P. Dinda. Increasing Application Performance in Virtual Environments through Run-time Inference and Adaptation. In Fourteenth International Symposium on High Performance Distributed Computing (HPDC), July 2005.
[20] Symantec Corporation. Veritas Volume Replicator Administrator's Guide. http://ftp.support.veritas.com/pub/support/products/Volume_Replicator/2%83842.pdf, 5.0 edition, 2006.
[21] F. Travostino, P. Daspit, L. Gommans, C. Jog, C. de Laat, J. Mambretti, I. Monga, B. van Oudenaarde, S. Raghunath, and P. Wang. Seamless live migration of virtual machines over the MAN/WAN. Elsevier Future Generation Computer Systems, 2006.
[22] T. Wood, P. Shenoy, A. Venkataramani, and M.
Yousif.\nBlack-box and gray-box strategies for virtual machine migration.\nIn Proceedings of the Usenix Symposium on Networked System Design and Implementation (NSDI), Cambridge, MA, April 2007.\n[23] A xen way to iscsi virtualization?\nhttp:\/\/www.internetnews.com\/dev-news\/article.php\/3669246, April 2007.\n267","lvl-3":"Live Data Center Migration across WANs: A Robust Cooperative Context Aware Approach\nABSTRACT\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocate a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities.\nWe make use of server virtualization technologies to enable the replication and migration of server functions.\nWe propose new network functions to enable server migration and replication across wide area networks (e.g., the Internet), and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.\n1.\nINTRODUCTION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nThese concerns are exacerbated by the increased use of the Internet for mission critical business and real-time entertainment applications.\nA relatively minor outage can disrupt and inconvenience a large number of users.\nToday these services are almost exclusively hosted in data centers.\nRecent advances in server virtualization technologies [8, 14, 22] allow for the live migration of services within a local area network\n(LAN) environment.\nIn the LAN environment, these 
technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion.\nNot only can it support planned maintenance events [8], but it can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [22].\nWhen using these technologies in a LAN environment, services execute in a virtual server, and the migration services provided by the underlying virtualization framework allows for a virtual server to be migrated from one physical server to another, without any significant downtime for the service or application.\nIn particular, since the virtual server retains the same network address as before, any ongoing network level interactions are not disrupted.\nSimilarly, in a LAN environment, storage requirements are normally met via either network attached storage (NAS) or via a storage area network (SAN) which is still reachable from the new physical server location to allow for continued storage access.\nUnfortunately in a wide area environment (WAN), live server migration is not as easily achievable for two reasons: First, live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original.\nWhile this is fairly easily achieved in a shared LAN environment, no current mechanisms are available to efficiently achieve the same feat in a WAN environment.\nSecond, while fairly sophisticated remote replication mechanisms have been developed in the context of disaster recovery [20, 7, 11], these mechanisms are ill suited to live data center migration, because in general the available technologies are unaware of application\/service level semantics.\nIn this paper we outline a design for live service migration across WANs.\nOur design makes use of existing server virtualization technologies and propose network and storage mechanisms to facilitate migration across 
a WAN.\nThe essence of our approach is cooperative, context aware migration, where a migration management system orchestrates the data center migration across all three subsystems involved, namely the server platforms, the wide area network and the disk storage system.\nWhile conceptually similar in nature to the LAN based work described above, using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved.\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling data center services.\nWe describe new mechanisms as well as extensions to existing technologies to enable this and outline the cooperative, context aware functionality needed across the different subsystems to enable this.\n2.\nLIVE DATA CENTER MIGRATION ACROSS WANS\n2.1 Live Virtual Server Migration\n2.2 Networking Requirements\n2.3 Storage Replication Requirements\n3.\nWAN MIGRATION SCENARIOS\n3.1 Maintenance Outages\n3.2 Unplanned Outages\n4.\nRELATED WORK\nPrior work on this topic falls into several categories: virtual machine migration, storage replication and network support.\nAt the core of our technique is the ability of encapsulate applications within virtual machines that can be migrated without application downtimes [15].\nMost virtual machine software, such as Xen [8] and VMWare [14] support \"live\" migration of VMs that involve extremely short downtimes ranging from tens of milliseconds to a second; details of Xen's live migration techniques are discussed in [8].\nAs indicated earlier, these techniques assume that migration is being done on a LAN.\nVM migration has also been studied in the Shirako system [10] and for grid environments [17, 19].\nCurrent virtual machine software support a suspend and resume feature that can be used to support WAN migration, but with downtimes [18, 12].\nRecently live WAN migration using IP tunnels was demonstrated in 
[21], where an IP tunnel is set up from the source to destination server to transparently forward packets to and from the application; we advocate an alternate approach that assumes edge router support.\nIn the context of storage, there exist numerous commercial products that perform replication, such as IBM Extended Remote Copy, HP Continuous Access XP, and EMC RepliStor.\nAn excellent description of these and others, as well as a detailed taxonomy of the different approaches for replication can be found in [11].\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposed supporting data-type specific selections of fault models and encoding schemes for replication [1].\nRecently, we proposed the notion of semantic-aware replication [13] where the system supports both synchronous and asynchronous replication concurrently and use \"signals\" from the file system to determine whether to replicate a particular write synchronously and asynchronously.\nIn the context of network support, our work is related to the RouterFarm approach [2], which makes use of orchestrated network changes to realize near hitless maintenance on provider edge routers.\nIn addition to being in a different application area, our approach differs from the RouterFarm work in two regards.\nFirst, we propose to have the required network changes be triggered by functionality outside of the network (as opposed to network management functions inside the network).\nSecond, due to the stringent timing requirements of live migration, we expect that our approach would require new router functionality (as opposed to being realizable via the existing configuration interfaces).\nFinally, the recovery oriented computing (ROC) work emphasizes recovery from failures rather than failure avoidance [6].\nIn a similar spirit to ROC, we advocate using mechanisms from live VM migration to storage replication to support planned and unplanned outages in data centers (rather than 
full replication to mask such failures).\n5.\nCONCLUSION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocated a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe sought to achieve high availability of data center services in the face of both planned and incidental outages of data center facilities.\nWe advocated using server virtualization technologies to enable the replication and migration of server functions.\nWe proposed new network functions to enable server migration and replication across wide area networks (such as the Internet or a geographically distributed virtual private network), and finally showed the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.","lvl-4":"Live Data Center Migration across WANs: A Robust Cooperative Context Aware Approach\nABSTRACT\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocate a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities.\nWe make use of server virtualization technologies to enable the replication and migration of server functions.\nWe propose new network functions to enable server migration and replication across wide area networks (e.g., the Internet), and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very 
tight recovery point objectives.\n1.\nINTRODUCTION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nA relatively minor outage can disrupt and inconvenience a large number of users.\nToday these services are almost exclusively hosted in data centers.\nRecent advances in server virtualization technologies [8, 14, 22] allow for the live migration of services within a local area network\n(LAN) environment.\nIn the LAN environment, these technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion.\nNot only can it support planned maintenance events [8], but it can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [22].\nWhen using these technologies in a LAN environment, services execute in a virtual server, and the migration services provided by the underlying virtualization framework allows for a virtual server to be migrated from one physical server to another, without any significant downtime for the service or application.\nIn particular, since the virtual server retains the same network address as before, any ongoing network level interactions are not disrupted.\nSimilarly, in a LAN environment, storage requirements are normally met via either network attached storage (NAS) or via a storage area network (SAN) which is still reachable from the new physical server location to allow for continued storage access.\nUnfortunately in a wide area environment (WAN), live server migration is not as easily achievable for two reasons: First, live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original.\nSecond, while fairly sophisticated remote replication mechanisms have been developed in the context of disaster 
recovery [20, 7, 11], these mechanisms are ill suited to live data center migration, because in general the available technologies are unaware of application\/service level semantics.\nIn this paper we outline a design for live service migration across WANs.\nOur design makes use of existing server virtualization technologies and propose network and storage mechanisms to facilitate migration across a WAN.\nThe essence of our approach is cooperative, context aware migration, where a migration management system orchestrates the data center migration across all three subsystems involved, namely the server platforms, the wide area network and the disk storage system.\nWhile conceptually similar in nature to the LAN based work described above, using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved.\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling data center services.\nWe describe new mechanisms as well as extensions to existing technologies to enable this and outline the cooperative, context aware functionality needed across the different subsystems to enable this.\n4.\nRELATED WORK\nPrior work on this topic falls into several categories: virtual machine migration, storage replication and network support.\nAt the core of our technique is the ability of encapsulate applications within virtual machines that can be migrated without application downtimes [15].\nAs indicated earlier, these techniques assume that migration is being done on a LAN.\nVM migration has also been studied in the Shirako system [10] and for grid environments [17, 19].\nCurrent virtual machine software support a suspend and resume feature that can be used to support WAN migration, but with downtimes [18, 12].\nRecently live WAN migration using IP tunnels was demonstrated in [21], where an IP tunnel is set up from the source to destination server 
to transparently forward packets to and from the application; we advocate an alternate approach that assumes edge router support.\nAn excellent description of these and others, as well as a detailed taxonomy of the different approaches for replication can be found in [11].\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposed supporting data-type specific selections of fault models and encoding schemes for replication [1].\nIn the context of network support, our work is related to the RouterFarm approach [2], which makes use of orchestrated network changes to realize near hitless maintenance on provider edge routers.\nIn addition to being in a different application area, our approach differs from the RouterFarm work in two regards.\nSecond, due to the stringent timing requirements of live migration, we expect that our approach would require new router functionality (as opposed to being realizable via the existing configuration interfaces).\nIn a similar spirit to ROC, we advocate using mechanisms from live VM migration to storage replication to support planned and unplanned outages in data centers (rather than full replication to mask such failures).\n5.\nCONCLUSION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocated a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe sought to achieve high availability of data center services in the face of both planned and incidental outages of data center facilities.\nWe advocated using server virtualization technologies to enable the replication and migration of server functions.\nWe proposed new network functions to enable server migration and replication across wide area networks (such as the Internet or a geographically distributed virtual private network), 
and finally showed the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.","lvl-2":"Live Data Center Migration across WANs: A Robust Cooperative Context Aware Approach\nABSTRACT\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocate a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe specifically seek to achieve high availability of data center services in the face of both planned and unanticipated outages of data center facilities.\nWe make use of server virtualization technologies to enable the replication and migration of server functions.\nWe propose new network functions to enable server migration and replication across wide area networks (e.g., the Internet), and finally show the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.\n1.\nINTRODUCTION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nThese concerns are exacerbated by the increased use of the Internet for mission critical business and real-time entertainment applications.\nA relatively minor outage can disrupt and inconvenience a large number of users.\nToday these services are almost exclusively hosted in data centers.\nRecent advances in server virtualization technologies [8, 14, 22] allow for the live migration of services within a local area network\n(LAN) environment.\nIn the LAN environment, these technologies have proven to be a very effective tool to enable data center management in a non-disruptive fashion.\nNot 
only can it support planned maintenance events [8], but it can also be used in a more dynamic fashion to automatically balance load between the physical servers in a data center [22].\nWhen using these technologies in a LAN environment, services execute in a virtual server, and the migration services provided by the underlying virtualization framework allows for a virtual server to be migrated from one physical server to another, without any significant downtime for the service or application.\nIn particular, since the virtual server retains the same network address as before, any ongoing network level interactions are not disrupted.\nSimilarly, in a LAN environment, storage requirements are normally met via either network attached storage (NAS) or via a storage area network (SAN) which is still reachable from the new physical server location to allow for continued storage access.\nUnfortunately in a wide area environment (WAN), live server migration is not as easily achievable for two reasons: First, live migration requires the virtual server to maintain the same network address so that from a network connectivity viewpoint the migrated server is indistinguishable from the original.\nWhile this is fairly easily achieved in a shared LAN environment, no current mechanisms are available to efficiently achieve the same feat in a WAN environment.\nSecond, while fairly sophisticated remote replication mechanisms have been developed in the context of disaster recovery [20, 7, 11], these mechanisms are ill suited to live data center migration, because in general the available technologies are unaware of application\/service level semantics.\nIn this paper we outline a design for live service migration across WANs.\nOur design makes use of existing server virtualization technologies and propose network and storage mechanisms to facilitate migration across a WAN.\nThe essence of our approach is cooperative, context aware migration, where a migration management system 
orchestrates the data center migration across all three subsystems involved, namely the server platforms, the wide area network and the disk storage system.\nWhile conceptually similar in nature to the LAN based work described above, using migration technologies across a wide area network presents unique challenges and has to our knowledge not been achieved.\nOur main contribution is the design of a framework that will allow the migration across a WAN of all subsystems involved with enabling data center services.\nWe describe new mechanisms as well as extensions to existing technologies to enable this and outline the cooperative, context aware functionality needed across the different subsystems to enable this.\n2.\nLIVE DATA CENTER MIGRATION ACROSS WANS\nThree essential subsystems are involved with hosting services in a data center: First, the servers host the application or service logic.\nSecond, services are normally hosted in a data center to provide shared access through a network, either the Internet or virtual private networks (VPNs).\nFinally, most applications require disk storage for storing data and the amount of disk space and the frequency of access varies greatly between different services\/applications.\nDisruptions, failures, or in general, outages of any kind of any of these components will cause service disruption.\nFor this reason, prior work and current practices have addressed the robustness of individual components.\nFor example, data centers typically have multiple network connections and redundant LAN devices to ensure redundancy at the networking level.\nSimilarly, physical servers are being designed with redundant hot-swappable components (disks, processor blades, power supplies etc).\nFinally, redundancy at the storage level can be provided through sophisticated data mirroring technologies.\nThe focus of our work, however, is on the case where such local redundancy mechanisms are not sufficient.\nSpecifically, we are interested in 
providing service availability when the data center as a whole becomes unavailable, for example because of data center wide maintenance operations, or because of catastrophic events.\nAs such, our basic approach is to migrate services between data centers across the wide are network (WAN).\nBy necessity, moving or migrating services from one data center to another needs to consider all three of these components.\nHistorically, such migration has been disruptive in nature, requiring downtime of the actual services involved, or requiring heavy weight replication techniques.\nIn the latter case concurrently running replicas of a service can be made available thus allowing a subset of the service to be migrated or maintained without impacting the service as a whole.\nWe argue that these existing mechanisms are inadequate to meet the needs of network-based services, including real-time services, in terms of continuous availability and operation.\nInstead, we advocate an approach where server, network and storage subsystems cooperate and coordinate actions, in a manner that is cognizant of the service context in order to realize seamless migration across wide area networks.\nIn this section we briefly describe the technical building blocks that would enable our approach.\nAs outlined below, some of these building blocks exist, or exist in part, while in other cases we use the desire for high availability of services as the driver for the changes we are proposing.\n2.1 Live Virtual Server Migration\nThe main enabler for our approach is the live server migration capabilities that have been developed in the context of server virtualization in recent years [5, 8].\nIn this approach an entire running operating system (including any active applications) executing as a virtual server is being transfered from one physical machine to another.\nSince the virtual server is migrated in its entirety, both application and kernel level state gets migrated, including any state 
associated with ongoing network connections.\nAssuming that network level reachability to the virtual server's network addresses are maintained after the migration, the implication is that applications executing in the virtual server experience very little downtime (in the order of tens to hundreds of milliseconds) and ongoing network connections remain intact.\nIn order to maintain network level reachability, the IP address (es) associated with the virtual server has to be reachable at the physical server where the virtual server is migrated to.\nIn a LAN environment this is achieved either by issuing an unsolicited ARP reply to establish the binding between the new MAC address and the IP address, or by relying on layer-two technologies to allow the virtual server to reuse its (old) MAC address [8].\nBecause of the difficulty of moving network level (i.e., IP addresses) in a routed non-LAN environment, use of live server migration as a management tool has been limited to the LAN environments [22].\nHowever, virtual server migration across the wide area will also be an attractive tool, specifically to deal with outages, and therefore propose networking mechanisms to enable this.\nIf disk storage needs are being met with network attached storage (NAS), the storage becomes just another network based application and can therefore be addressed in the same way with LAN based migration [8].\nModern virtualization environments also include support for other forms of (local) storage including storage area networks (SANs) [23].\nHowever, since we propose to use WAN server migration as a means to deal with complete data center outages, these mechanisms are inadequate for our purposes and below we propose extension to remote replication technologies which can work in concert with server migration to minimize service downtime.\n2.2 Networking Requirements\nFrom the discussion above, a key requirement for live server migration across a WAN is the ability to have the IP address 
(es) of the virtual server be reachable at the new data center location immediately after the migration has completed.\nThis presents a significant challenge for a number of reasons.\nFirst, despite decades of work in this area, IP address mobility remains an unresolved problem that is typically only addressed at manual configuration time scales.\nThe second challenge comes from the fact that current routing protocols are well known to have convergence issues which is ill suited to the time constraints imposed by live migration.\nThird, in today's WAN networking environment connectivity changes are typically initiated, and controlled, by network operators or network management systems.\nAgain this is poorly suited to WAN server migration where it is essential that the migration software, which is closely monitoring the status of the server migration process, initiate this change at the appropriate time.\nOur approach to addressing the networking requirements for live WAN migration builds on the observations that not all networking changes in this approach are time critical and further that instantaneous changes are best achieved in a localized manner.\nSpecifically, in our solution, described in detail in Section 3, we allow the migration software to initiate the necessary networking changes as soon as the need for migration has been identified.\nWe make use of tunneling technologies during this initial phase to preemptively establish connectivity between the data centers involved.\nOnce server migration is complete, the migration software initiates a local change to direct traffic towards the new data center via the tunnel.\nSlower time scale network changes then phase out this local network connectivity change for a more optimal network wide path to the new data center.\n2.3 Storage Replication Requirements\nData availability is typically addressed by replicating business data on a local\/primary storage system, to some remote location from where it can be 
accessed.\nFrom a business\/usability point of view, such remote replication is driven by two metrics [9].\nFirst is the recovery-point-objective, which is the consistent data point to which data can be restored after a disaster.\nSecond is the recovery-time-objective, which is the time it takes to recover to that consistent data point after a disaster [13].\nRemote replication can be broadly classified into the following two categories: Synchronous replication: every data block written to a local storage system is replicated to the remote location before the local write operation returns.\nAsynchronous replication: in this case the local and remote storage systems are allowed to diverge.\nThe amount of divergence between the local and remote copies is typically bounded by either a certain amount of data, or by a certain amount of time.\nSynchronous replication is normally recommended for applications, such as financial databases, where consistency between local and remote storage systems is a high priority.\nHowever, these desirable properties come at a price.\nFirst, because every data block needs to be replicated remotely, synchronous replication systems cannot benefit from any local write coalescing of data if the same data blocks are written repeatedly [16].\nSecond, because data have to be copied to the remote location before the write operation returns, synchronous replication has a direct performance impact on the application, since both lower throughput and increased latency of the path between the primary and the remote systems are reflected in the time it takes for the local disk write to complete.\nAn alternative is to use asynchronous replication.\nHowever, because the local and remote systems are allowed to diverge, asynchronous replication always involves some data loss in the event of a failure of the primary system.\nBut, because write operations can be batched and pipelined, asynchronous replication systems can move data across the network in a much 
more efficient manner than synchronous replication systems.\nFor WAN live server migration we seek a more flexible replication system where the mode can be dictated by the migration semantics.\nSpecifically, to support live server migration we propose a remote replication system where the initial transfer of data between the data centers is performed via asynchronous replication to benefit from the efficiency of that mode of operation.\nWhen the bulk of the data has been transferred in this manner, replication switches to synchronous replication in anticipation of the completion of the server migration step.\nThe final server migration step triggers a simultaneous switch-over to the storage system at the new data center.\nIn this manner, when the virtual server starts executing in the new data center, storage requirements can be met locally.\n3.\nWAN MIGRATION SCENARIOS\nIn this section we illustrate how our cooperative, context-aware approach can combine the technical building blocks described in the previous section to realize live server migration across a wide area network.\nWe demonstrate how the coordination of server virtualization and migration technologies, the storage replication subsystem and the network can achieve live migration of an entire data center across the WAN.\nWe utilize different scenarios to demonstrate our approach.\nIn Section 3.1 we outline how our approach can be used to achieve the safe live migration of a data center when handling planned maintenance events.\nIn Section 3.2 we show the use of live server migration to mitigate the effects of unplanned outages or failures.\n3.1 Maintenance Outages\nWe deal with maintenance outages in two parts.\nFirst, we consider the case where the service has no (or very limited) storage requirements.\nThis might, for example, be the case with a network element such as a voice-over-IP (VoIP) gateway.\nSecond, we deal with the more general case where the service also requires the migration of data 
storage to the new data center.\nWithout Requiring Storage to be Migrated: With no storage to be replicated, the primary components that we need to coordinate are the server migration and network \"mobility\".\nFigure 1 shows the environment, where the application running in a virtual server \"VS\" has to be moved from a physical server in data center A to a physical server in data center B. Prior to the maintenance event, the coordinating \"migration management system\" (MMS) would signal to both the server management system and the network that a migration is imminent.\nThe server management system would initiate the migration of the virtual server from physical server \"a\" () to physical server \"b\" ().\nAfter an initial bulk state transfer as \"preparation for migration\", the server management system will mirror any state changes between the two virtual servers.\nSimilarly, for the network part, based on the signal received from the MMS, the service provider edge () router will initiate a number of steps to prepare for the migration.\nSpecifically, as shown in Figure 1 (b), the migration system will cause the network to create a tunnel between and which will be used subsequently to transfer data destined to VS to data center B.\nWhen the MMS determines a convenient point to quiesce the VS, another signal is sent to both the server management system and the network.\nFor the server management system, this signal will indicate the final migration of the VS from data center A to data center B, i.e., after this the VS will become active in data center B. For the network, this second signal enables the network data path to switch over locally at to the remote data center.\nSpecifically, from this point in time, any traffic destined for the virtual server address that arrives at will be switched onto the tunnel to for delivery to data center B. 
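The two-signal sequence just described (first signal: start the pre-copy and pre-establish the tunnel; second signal: quiesce the VS, activate it remotely, and locally switch traffic onto the tunnel, with route re-advertisement and tunnel teardown following at slower time scales) can be sketched as a toy coordination loop. All class and method names below are illustrative assumptions of ours, not interfaces from the paper:

```python
from enum import Enum, auto


class Phase(Enum):
    """Phases of the coordinated WAN migration sequence."""
    IDLE = auto()
    PREPARING = auto()       # bulk state transfer + tunnel pre-established
    SWITCHING_OVER = auto()  # VS quiesced, traffic redirected via tunnel
    COMPLETE = auto()        # VS active in data center B, tunnel torn down


class MigrationManagementSystem:
    """Toy MMS: signals the server management system and the network
    in two rounds, mirroring the two signals described in the text."""

    def __init__(self, server_mgr, network):
        self.server_mgr = server_mgr
        self.network = network
        self.phase = Phase.IDLE
        self.log = []

    def prepare(self):
        # First signal: a migration is imminent.
        self.log.append(self.server_mgr.start_precopy())   # bulk transfer, then mirroring
        self.log.append(self.network.create_tunnel())      # tunnel between the two PEs
        self.phase = Phase.PREPARING

    def switch_over(self):
        # Second signal: quiesce the VS, activate it in data center B,
        # and locally redirect traffic onto the pre-established tunnel.
        self.log.append(self.server_mgr.finalize_migration())
        self.log.append(self.network.redirect_via_tunnel())
        self.phase = Phase.SWITCHING_OVER
        # Slower, non-time-critical steps: advertise the preferred route
        # from data center B, then tear the now-unneeded tunnel down.
        self.log.append(self.network.advertise_new_route())
        self.log.append(self.network.teardown_tunnel())
        self.phase = Phase.COMPLETE


class StubServerManager:
    def start_precopy(self):      return "precopy-started"
    def finalize_migration(self): return "vs-active-in-B"


class StubNetwork:
    def create_tunnel(self):       return "tunnel-up"
    def redirect_via_tunnel(self): return "local-switchover"
    def advertise_new_route(self): return "route-advertised"
    def teardown_tunnel(self):     return "tunnel-down"
```

Running `prepare()` followed by `switch_over()` on stub subsystems walks the phases IDLE, PREPARING, SWITCHING_OVER, COMPLETE in order; the point of the sketch is only the ordering of the signals, not any real server or router interface.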
Note that at this point, from a server perspective, the migration is complete as the VS is now active in data center B. However, traffic is sub-optimally flowing first to and then across the tunnel to.\nTo rectify this situation, another networking step is involved.\nSpecifically, starts to advertise a more preferred route to reach VS than the route currently being advertised by.\nIn this manner, as ingress PEs to the network (to in Figure 1) receive the more preferred route, traffic will start to flow to directly, and the tunnel between and can be torn down, leading to the final state shown in Figure 1 (c).\nRequiring Storage Migration: When storage also has to be replicated, it is critical that we achieve the right balance between performance (impact on the application) and the recovery point or data loss when the switchover occurs to the remote data center.\nTo achieve this, we allow the storage to be replicated asynchronously prior to any initiation of the maintenance event, or, assuming the amount of data to be transferred is relatively small, asynchronous replication can be started in anticipation of a migration that is expected to happen shortly.\nAsynchronous replication during this initial phase means that the application sees no performance impact.\nHowever, when the maintenance event is imminent, the MMS would signal to the replication system to switch from asynchronous replication to synchronous replication to ensure that there is no loss of data during migration.\nWhen data is being replicated synchronously, there will be a performance impact on the application.\nFigure 1: Live server migration across a WAN\nThis requires us to keep the amount of time we replicate synchronously to a minimum.\nWhen the MMS signals to the storage system the requirement to switch to synchronous replication, the storage system completes all the pending asynchronous operations and then proceeds to perform all the subsequent writes by synchronously 
replicating them to the remote data center.\nThus, between the server migration and synchronous replication, both the application state and all the storage operations are mirrored across the two data centers.\nWhen all the pending write operations have been copied over, then, as in the previous case, we quiesce the application and the network is signaled to switch traffic over to the remote data center.\nFrom this point on, both the storage and server migration operations are complete, and the service is active in data center B.\nAs above, the network state still needs to be updated to ensure optimal data flow directly to data center B. Note that while we have described the live server migration process as involving the service provider for the networking part, it is possible for a data center provider to perform a similar set of functions without involving the service provider.\nSpecifically, by creating a tunnel between the customer edge (CE) routers in the data centers, and performing local switching on the appropriate CE, rather than on the PE, the data center provider can realize the same functionality.\n3.2 Unplanned Outages\nWe propose to also use cooperative, context-aware migration to deal with unplanned data center outages.\nThere are multiple considerations that go into managing data center operations to plan for and overcome failures through migration.\nSome of these are: (1) the amount of overhead under normal operation to overcome anticipated failures; (2) the amount of data loss that is affordable (recovery point objective, RPO); (3) the amount of state that has to be migrated; and (4) the time available from an anticipated failure to the occurrence of the event.\nAt one extreme, one might incur the overhead of completely mirroring the application at the remote site.\nThis has the consequence of both incurring processing and network overhead under normal operation and impacting application performance (latency and throughput) throughout.\nThe other extreme is to only ensure data 
recovery and to start a new copy of the application at the remote site after an outage.\nIn this case, application memory state such as ongoing sessions is lost, but data stored on disk is replicated and available in a consistent state.\nNeither this hot standby nor the cold standby approach described is desirable, due to the overhead or the loss of application memory state.\nAn intermediate approach is to recover control and essential state of the application, in addition to data stored on disk, to further minimize disruptions to users.\nA spectrum of approaches is possible.\nIn a VoIP server, for instance, session-based information can be mirrored without mirroring the data flowing through each session.\nMore generally, this points to the need to checkpoint some application state in addition to mirroring data on disk.\nCheckpointing application state involves storing application state either periodically or in an application-aware manner, as databases do, and then copying it to the remote site.\nOf course, this has the consequence that the application can be restarted remotely only at the checkpoint boundary.\nSimilarly, for storage one may use asynchronous replication with a periodic snapshot, ensuring all writes are up-to-date at the remote site at the time of checkpointing.\nSome data loss may occur upon an unanticipated, catastrophic failure, but the recovery point may be fairly small, depending on the frequency of checkpointing application and storage state.\nCoordination between the checkpointing of the application state and the snapshot of storage is key to successful migration while meeting the desired RPOs.\nIncremental checkpointing of application and storage is key to efficiency, and existing techniques can achieve this [4, 3, 11].\nFor instance, rather than full application mirroring, a virtualized replica can be maintained as a \"warm standby\"--in dormant or hibernating state--enabling a quick switch-over to the previously checkpointed 
state.\nTo make the switch-over seamless, in addition to replicating data and recovering state, network support is needed.\nSpecifically, on detecting the unavailability of the primary site, the secondary site is made active, and the same mechanism described in Section 3.1 is used to switch traffic over to reach the secondary site via the pre-established tunnel.\nNote that for simplicity of exposition we assume here that the PE that performs the local switchover is not affected by the failure.\nThe approach can, however, easily be extended to make use of a switchover at a router \"deeper\" in the network.\nThe amount of state and storage that has to be migrated may vary widely from application to application.\nThere may be many situations where, in principle, the server can be stateless.\nFor example, a SIP proxy server may not have any persistent state, and the communication between the clients and the proxy server may be using UDP.\nIn such a case, the primary activity to be performed is in the network, to move the communication over to the new data center site.\nLittle or no overhead is incurred under normal operation to enable the migration to a new data center.\nFailure recovery involves no data loss and we can deal with near instantaneous, catastrophic failures.\nAs more and more state is involved with the server, more overhead is incurred to checkpoint application state and potentially to take storage snapshots, either periodically or upon application prompting.\nIt also means that the RPO is a function of the interval between checkpoints when we have to deal with instantaneous failures.\nThe more advance information we have of an impending failure, the more effective we can be in having the state migrated over to the new data center, so that we can still have a tighter RPO when operations are resumed at the new site.\n4.\nRELATED WORK\nPrior work on this topic falls into several categories: virtual machine migration, storage replication and network 
support.\nAt the core of our technique is the ability to encapsulate applications within virtual machines that can be migrated without application downtime [15].\nMost virtual machine software, such as Xen [8] and VMware [14], supports \"live\" migration of VMs with extremely short downtimes ranging from tens of milliseconds to a second; details of Xen's live migration techniques are discussed in [8].\nAs indicated earlier, these techniques assume that migration is being done on a LAN.\nVM migration has also been studied in the Shirako system [10] and for grid environments [17, 19].\nCurrent virtual machine software supports a suspend-and-resume feature that can be used to support WAN migration, but with downtimes [18, 12].\nRecently, live WAN migration using IP tunnels was demonstrated in [21], where an IP tunnel is set up from the source to the destination server to transparently forward packets to and from the application; we advocate an alternate approach that assumes edge router support.\nIn the context of storage, there exist numerous commercial products that perform replication, such as IBM Extended Remote Copy, HP Continuous Access XP, and EMC RepliStor.\nAn excellent description of these and others, as well as a detailed taxonomy of the different approaches for replication, can be found in [11].\nThe Ursa Minor system argues that no single fault model is optimal for all applications and proposes supporting data-type-specific selection of fault models and encoding schemes for replication [1].\nRecently, we proposed the notion of semantic-aware replication [13], where the system supports both synchronous and asynchronous replication concurrently and uses \"signals\" from the file system to determine whether to replicate a particular write synchronously or asynchronously.\nIn the context of network support, our work is related to the RouterFarm approach [2], which makes use of orchestrated network changes to realize near hitless maintenance on provider edge 
routers.\nIn addition to being in a different application area, our approach differs from the RouterFarm work in two regards.\nFirst, we propose to have the required network changes be triggered by functionality outside of the network (as opposed to network management functions inside the network).\nSecond, due to the stringent timing requirements of live migration, we expect that our approach would require new router functionality (as opposed to being realizable via the existing configuration interfaces).\nFinally, the recovery oriented computing (ROC) work emphasizes recovery from failures rather than failure avoidance [6].\nIn a similar spirit to ROC, we advocate using mechanisms from live VM migration to storage replication to support planned and unplanned outages in data centers (rather than full replication to mask such failures).\n5.\nCONCLUSION\nA significant concern for Internet-based service providers is the continued operation and availability of services in the face of outages, whether planned or unplanned.\nIn this paper we advocated a cooperative, context-aware approach to data center migration across WANs to deal with outages in a non-disruptive manner.\nWe sought to achieve high availability of data center services in the face of both planned and incidental outages of data center facilities.\nWe advocated using server virtualization technologies to enable the replication and migration of server functions.\nWe proposed new network functions to enable server migration and replication across wide area networks (such as the Internet or a geographically distributed virtual private network), and finally showed the utility of intelligent and dynamic storage replication technology to ensure applications have access to data in the face of outages with very tight recovery point objectives.","keyphrases":["data center migrat","wan","storag replic","storag","internet-base servic","lan","virtual server","synchron replic","asynchron replic","network 
support","voic-over-ip","voip","databas"],"prmu":["P","P","P","P","M","U","R","M","M","M","U","U","U"]} {"id":"C-34","title":"Researches on Scheme of Pairwise Key Establishment for Distributed Sensor Networks","abstract":"Security schemes of pairwise key establishment, which enable sensors to communicate with each other securely, play a fundamental role in research on security issues in wireless sensor networks. A new kind of cluster deployed sensor networks distribution model is presented, based on which an innovative Hierarchical Hypercube model -- H(k,u,m,v,n) and the mapping relationship between cluster deployed sensor networks and the H(k,u,m,v,n) are proposed. By utilizing nice properties of the H(k,u,m,v,n) model, a new general framework for pairwise key predistribution and a new pairwise key establishment algorithm are designed, which combine the idea of KDC (Key Distribution Center) and polynomial pool schemes. Furthermore, the working performance of the newly proposed pairwise key establishment algorithm is examined in detail. 
Theoretical analysis and experimental figures show that the new algorithm has better performance and provides higher probabilities for sensors to establish pairwise keys, compared with previous related works.","lvl-1":"Researches on Scheme of Pairwise Key Establishment for Distributed Sensor Networks Wang Lei Fujian University Technology Fuzhou, Fujian, PR.China (+)86-591-8755-9001, 350014 wanglei_hn@hn165.com Chen Zhi-ping Fujian University Technology Fuzhou, Fujian, PR.China (+)86-591-8755-9001, 350014 jt_zpchen@hnu.cn Jiang Xin-hua Fujian University Technology Fuzhou, Fujian, PR.China (+)86-591-8755-9001, 350014 xhj@csu.edu.cn ABSTRACT Security schemes of pairwise key establishment, which enable sensors to communicate with each other securely, play a fundamental role in research on security issues in wireless sensor networks.\nA new kind of cluster deployed sensor networks distribution model is presented, based on which an innovative Hierarchical Hypercube model - H(k,u,m,v,n) and the mapping relationship between cluster deployed sensor networks and the H(k,u,m,v,n) are proposed.\nBy utilizing nice properties of the H(k,u,m,v,n) model, a new general framework for pairwise key predistribution and a new pairwise key establishment algorithm are designed, which combine the idea of KDC (Key Distribution Center) and polynomial pool schemes.\nFurthermore, the working performance of the newly proposed pairwise key establishment algorithm is examined in detail.\nTheoretical analysis and experimental figures show that the new algorithm has better performance and provides higher probabilities for sensors to establish pairwise keys, compared with previous related works.\nCategories and Subject Descriptors C.2.4 [Computer-Communication-Networks]: Distributed Systems-Distributed applications.\nGeneral Terms: Security.\n1.\nINTRODUCTION Secure communication is an important requirement in many sensor network applications, so shared secret keys are used between communicating nodes to 
encrypt data.\nAs one of the most fundamental security services, pairwise key establishment enables the sensor nodes to communicate securely with each other using cryptographic techniques.\nHowever, due to the sensor nodes' limited computational capabilities, battery energy, and available memory, it is not feasible for them to use traditional pairwise key establishment techniques such as public key cryptography and key distribution center (KDC).\nSeveral alternative approaches have been developed recently to perform pairwise key establishment on resource-constrained sensor networks without involving the use of traditional cryptography [14].\nEschenauer and Gligor proposed a basic probabilistic key predistribution scheme for pairwise key establishment [1].\nIn the scheme, each sensor node randomly picks a set of keys from a key pool before deployment, so that any two of the sensor nodes have a certain probability of sharing at least one common key.\nChan et al. further extended this idea and presented two key predistribution schemes: a q-composite key pre-distribution scheme and a random pairwise keys scheme.\nThe q-composite scheme requires that any two sensors share at least q pre-distributed keys.\nThe random scheme randomly picks pairs of sensors and assigns each pair a unique random key [2].\nInspired by the studies above and the polynomial-based key pre-distribution protocol [3], Liu et al. 
further developed the idea addressed in the previous works and proposed a general framework of polynomial pool-based key predistribution [4].\nThe basic idea can be considered as the combination of the polynomial-based key pre-distribution and the key pool idea used in [1] and [2].\nBased on such a framework, they presented two pairwise key pre-distribution schemes: a random subset assignment scheme and a grid-based scheme.\nA polynomial pool is used in those schemes, instead of the key pool used in the previous techniques.\nThe random subset assignment scheme assigns each sensor node the secrets generated from a random subset of polynomials in the polynomial pool.\nThe grid-based scheme associates polynomials with the rows and the columns of an artificial grid, assigns each sensor node to a unique coordinate in the grid, and gives the node the secrets generated from the corresponding row and column polynomials.\nBased on this grid, each sensor node can then identify whether it can directly establish a pairwise key with another node, and if not, what intermediate nodes it can contact to indirectly establish the pairwise key.\nA similar approach to the schemes described by Liu et al. was independently developed by Du et al. 
[5].\nRather than Blundo's scheme, their approach is based on Blom's scheme [6].\nIn some cases, it is essentially equivalent to the one in [4].\nAll of the schemes above improve the security over the basic probabilistic key pre-distribution scheme.\nHowever, the pairwise key establishment problem in sensor networks is still not well solved.\nFor the basic probabilistic and the q-composite key predistribution schemes, as the number of compromised nodes increases, the fraction of affected pairwise keys increases quickly.\nAs a result, a small number of compromised nodes may affect a large fraction of pairwise keys [3].\nThough the random pairwise keys scheme does not suffer from the above security problem, it incurs a high memory overhead, which increases linearly with the number of nodes in the network if the level of security is kept constant [2][4].\nThe random subset assignment scheme suffers from higher communication and computation overheads.\nIn 2004, Liu proposed a new hypercube-based pairwise key predistribution scheme [7], which extends the grid-based scheme from a two-dimensional grid to a multi-dimensional hypercube.\nThe analysis shows that the hypercube-based scheme keeps some attractive properties of the grid-based scheme, including the guarantee of establishing pairwise keys and the resilience to node compromises.\nAlso, when perfect security against node compromise is required, the hypercube-based scheme can support a larger network by adding more dimensions instead of increasing the storage overhead on sensor nodes.\nThough the hypercube-based scheme (we consider the grid-based scheme a special case of the hypercube-based scheme) has many attractive properties, it requires that any two nodes in the sensor network can communicate directly with each other.\nThis strong assumption is impractical in most actual applications of sensor networks.\nIn this paper, we present a new kind of cluster-based distribution model of sensor networks, and for 
which we propose a new pairwise key pre-distribution scheme.\nThe main contributions of this paper are as follows: Combining the deployment knowledge of sensor networks and the polynomial pool-based key pre-distribution, we set up a cluster-based topology that is practical for real deployments of sensor networks.\nBased on this topology, we propose a novel cluster-distribution-based hierarchical hypercube model to establish pairwise keys.\nThe key contribution is that our scheme does not require the assumption that all nodes can directly communicate with each other, as the previous schemes do, and it still maintains a high probability of key establishment, low memory overhead and good security performance.\nWe develop a new pairwise key establishment algorithm based on our hierarchical hypercube model.\nThe structure of this paper is arranged as follows: In section 3, a new distribution model of cluster deployed sensor networks is presented.\nIn section 4, a new Hierarchical Hypercube model is proposed.\nIn section 5, the mapping relationship between the clusters deployed sensor network and the Hierarchical Hypercube model is discussed.\nIn sections 6 and 7, a new pairwise key establishment algorithm is designed based on the Hierarchical Hypercube model, and detailed analyses are described.\nFinally, section 8 presents a conclusion.\n2.\nPRELIMINARY Definition 1 (Key Predistribution): The procedure used to encode the corresponding encryption and decryption algorithms in sensor nodes before distribution is called Key Predistribution.\nDefinition 2 (Pairwise Key): For any two nodes A and B, if they have a common key E, then the key E is called a pairwise key between them.\nDefinition 3 (Key Path): For any two nodes A0 and Ak, when there is no pairwise key between them, if there exists a path A0,A1,A2,...,Ak-1,Ak such that there exists a pairwise key between the nodes Ai and Ai+1 for every 0\u2264i\u2264k-1, then the path 
consisting of A0,A1,A2,...,Ak-1,Ak is called a Key Path between A0 and Ak.\nDefinition 4 (n-dimensional Hypercube): An n-dimensional Hypercube (or n\u2212cube) H(v,n) is a topology with the following properties: (1) It consists of n\u00b7v^(n-1) edges, (2) Each node can be coded as a string with n positions such as b1b2...bn, where 0\u2264b1,b2,...,bn\u2264v-1, (3) Any two nodes are called neighbors, which means that there is an edge between them, iff their node codes differ in exactly one position.\n3.\nMODEL OF CLUSTERS DEPLOYED SENSOR NETWORKS In some actual applications of sensor networks, sensors can be deployed from airplanes.\nSupposing that sensors are deployed in k rounds, and the communication radius of any sensor is r, the sensors deployed in the same round can be regarded as belonging to the same Cluster.\nWe assign a unique cluster number l (1 \u2264 l \u2264 k) to each cluster.\nSupposing that the sensors in any cluster form a connected graph after deployment, Figure 1 presents an actual model of clusters deployed sensor networks.\nFigure 1. An actual model of clusters deployed sensor networks.\nFrom Figure 1, it is easy to see that, for a given node A, there exist many nodes in the same cluster as A which can communicate directly with A, since the nodes in a cluster are deployed densely.\nBut there exist far fewer nodes in a neighboring cluster which can communicate directly with A, 
since the two clusters are not deployed at the same time.\n4.\nHIERARCHICAL HYPERCUBE MODEL Definition 5 (k-levels Hierarchical Hypercube): Let there be N nodes in total; then a k-levels Hierarchical Hypercube named H(k,u,m,v,n) can be constructed as follows: 1) The N nodes are divided evenly into k clusters, and the [N\/k] nodes in any cluster are connected into an n-dimensional Hypercube: in the n-dimensional Hypercube, any node is encoded as i1i2...in, which is called its In-Cluster-Hypercube-Node-Code, where 0 \u2264 i1,i2,...,in \u2264 v-1, v=[(N\/k)^(1\/n)], and [j] equals the smallest integer not less than j.\nSo we can obtain k such different hypercubes.\n2) The k different hypercubes obtained above are encoded as j1j2...jm, which are called Out-Cluster-Hypercube-Node-Codes, where 0 \u2264 j1,j2,...,jm \u2264 u-1, u=[k^(1\/m)].\nAnd the nodes in the k different hypercubes are connected into m-dimensional hypercubes according to the following rule: the nodes with the same In-Cluster-Hypercube-Node-Codes and different Out-Cluster-Hypercube-Node-Codes are connected into an m-dimensional hypercube.\n(The graph constructed through the above steps is called a k-levels Hierarchical Hypercube, abbreviated as H(k,u,m,v,n).)\n3) Any node A in H(k,u,m,v,n) can be encoded as (i, j), where i (i=i1i2...in, 0 \u2264 i1,i2,...,in \u2264 v-1) is the In-Cluster-Hypercube-Node-Code of node A, and j (j=j1j2...jm, 0 \u2264 j1,j2,...,jm \u2264 u-1) is the Out-Cluster-Hypercube-Node-Code of node A. Obviously, the H(k,u,m,v,n) model has the following good properties: Property 1: The diameter of the H(k,u,m,v,n) model is m+n. 
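To make the encoding concrete, here is a minimal sketch (function names are ours, not from the paper) that represents a node as the pair (i, j) of an in-cluster code and an out-cluster code, and computes the distance of Property 2 as the sum of the two Hamming distances:

```python
def hamming(code_a, code_b):
    """Hamming distance between two equal-length code tuples."""
    return sum(x != y for x, y in zip(code_a, code_b))


def h_distance(node_a, node_b):
    """Property 2: for A=(i1, j1) and B=(i2, j2) in H(k,u,m,v,n),
    d(A, B) = dh(i1, i2) + dh(j1, j2)."""
    (in_a, out_a), (in_b, out_b) = node_a, node_b
    return hamming(in_a, in_b) + hamming(out_a, out_b)


def diameter(n, m):
    """Property 1: the diameter is m + n (all positions may differ)."""
    return m + n


# Example nodes with n=3, m=2, v=2, u=2: the in-cluster codes differ in
# one position and the out-cluster codes differ in one position.
A = ((0, 1, 0), (1, 0))
B = ((0, 1, 1), (0, 0))
```

Here `h_distance(A, B)` evaluates to 2, and no pair of nodes can exceed `diameter(3, 2)` = 5, in line with the two properties.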
Proof: Since the diameter of an n-dimensional hypercube is n, and the diameter of an m-dimensional hypercube is m, it is easy to see from definition 5 that the diameter of the H(k,u,m,v,n) model is m+n.\nProperty 2: The distance between any two nodes A(i1, j1) and B(i2, j2) in the H(k,u,m,v,n) model is d(A,B)= dh(i1, i2)+dh(j1, j2), where dh represents the Hamming distance.\nProof: Since the distance between any two nodes in a hypercube equals the Hamming distance between them, the conclusion of Property 2 follows from definition 5.\n5.\nMAPPING CLUSTERS DEPLOYED SENSOR NETWORKS TO H(K,U,M,V,N) Obviously, from the descriptions in sections 3 and 4, we can see that a clusters deployed sensor network can be mapped into a k-levels hierarchical hypercube model as follows: At first, the k clusters in the sensor network can be mapped into the k different levels (or hypercubes) of the k-levels hierarchical hypercube model.\nThen, the sensor nodes in each cluster can be encoded with In-Cluster-Hypercube-Node-Codes, and the sensor nodes in the k different clusters with the same In-Cluster-Hypercube-Node-Codes can be encoded with Out-Cluster-Hypercube-Node-Codes, according to definition 5 respectively.\nConsequently, the whole sensor network is mapped into a k-levels hierarchical hypercube model.\n6.\nH(K,U,M,V,N) MODEL-BASED PAIRWISE KEY PREDISTRIBUTION ALGORITHM FOR SENSOR NETWORKS In order to overcome the drawbacks of polynomial-based and polynomial pool-based key predistribution algorithms, this paper proposes an innovative H(k,u,m,v,n) model-based key predistribution scheme and pairwise key establishment algorithm, which combines the advantages of polynomial-based and key pool-based encryption schemes, and is based on the KDC and polynomial pool-based key predistribution models.\nThe new H(k,u,m,v,n) model-based pairwise key establishment algorithm includes three main steps: (1) Generation of the polynomials pool and key predistribution, (2) 
Direct pairwise key discovery, (3) Path key discovery.
6.1 Generation of Polynomials Pool and Key Predistribution
Suppose the sensor network includes N nodes and is deployed through k different rounds. Keys can then be predistributed to each sensor node on the basis of the H(k,u,m,v,n) model as follows:
Step 1: The key setup server randomly generates a bivariate polynomials pool F = { f^i_{<l, i1, ..., i(n-1)>}(x,y), f^j_{<i1...in, j1, ..., j(m-1)>}(x,y) | 0 ≤ i1, ..., i(n-1) ≤ v-1, 1 ≤ i ≤ n, 1 ≤ l ≤ k; 0 ≤ j1, ..., j(m-1) ≤ u-1, 1 ≤ j ≤ m } containing v^n·m·u^(m-1) + ⌈N/v^n⌉·n·v^(n-1) distinct t-degree bivariate polynomials over a finite field Fq, and then assigns a unique polynomial ID to each bivariate polynomial in F.
Step 2: In each round, the key setup server assigns a unique node ID (i1i2...in, j1j2...jm) to each sensor node in increasing order, where 0 ≤ i1, i2, ..., in ≤ v-1 and 0 ≤ j1, j2, ..., jm ≤ u-1.
Step 3: The key setup server assigns a unique cluster ID l to all the sensor nodes deployed in the same round, where 1 ≤ l ≤ k.
Step 4: The key setup server predistributes the m+n bivariate polynomial shares { f^1_{<l, i2, ..., in>}(i1,y), ..., f^n_{<l, i1, ..., i(n-1)>}(in,y); f^1_{<i1...in, j2, ..., jm>}(j1,y), ..., f^m_{<i1...in, j1, ..., j(m-1)>}(jm,y) } and the corresponding polynomial IDs to the sensor node deployed in the l-th round with ID (i1i2...in, j1j2...jm).
6.2 Direct Pairwise Key Discovery
If node A(i1i2...in, j1j2...jm) in the sensor network wants to establish a pairwise key with node B(i'1i'2...i'n, j'1j'2...j'm), it proceeds as follows. First, node A computes the distance between itself and node B: d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). If d = 1, then node A obtains the direct pairwise key between itself and node B according to the following theorem:
Theorem 1: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, suppose the distance between nodes A and B is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). If d = 1, then there exists a direct pairwise key between nodes A and B.
Proof: Since d = 1, either d1 = 1 and d2 = 0, or d1 = 0 and d2 = 1.
1) If d1 = 1, d2 = 0: From d2 = 0, nodes A and B belong to the same cluster, say cluster l. From d1 = 1, there is exactly one position in which i1i2...in and i'1i'2...i'n differ. Without loss of generality, let it = i't for 1 ≤ t ≤ n-1 and in ≠ i'n. Then f^n_{<l, i1, ..., i(n-1)>}(in, i'n) = f^n_{<l, i'1, ..., i'(n-1)>}(i'n, in), so f^n_{<l, i1, ..., i(n-1)>}(in, i'n) is a direct pairwise key between nodes A and B.
2) If d1 = 0, d2 = 1: From d2 = 1, there is exactly one position in which j1j2...jm and j'1j'2...j'm differ.
Without loss of generality, let jt = j't for 1 ≤ t ≤ m-1 and jm ≠ j'm. Since d1 = 0, i1i2...in equals i'1i'2...i'n, so f^m_{<i1...in, j1, ..., j(m-1)>}(jm, j'm) = f^m_{<i'1...i'n, j'1, ..., j'(m-1)>}(j'm, jm). Hence f^m_{<i1...in, j1, ..., j(m-1)>}(jm, j'm) is a direct pairwise key between nodes A and B.
According to Theorem 1, the direct pairwise key discovery algorithm proceeds as follows:
Step 1: Obtain the node IDs and cluster IDs of the source node A and the destination node B.
Step 2: Compute the distance between nodes A and B: d = d1 + d2.
Step 3: If d1 = 1 and d2 = 0, select a common polynomial share of nodes A and B from { f^1_{<l, i2, ..., in>}, ..., f^n_{<l, i1, ..., i(n-1)>} } to establish the direct pairwise key.
Step 4: If d1 = 0 and d2 = 1, select a common polynomial share of nodes A and B from { f^1_{<i1...in, j2, ..., jm>}, ..., f^m_{<i1...in, j1, ..., j(m-1)>} } to establish the direct pairwise key.
Step 5: Otherwise, there exists no direct pairwise key between nodes A and B; turn to the following path key discovery process.
6.3 Path Key Discovery
If d > 1, then node A can establish a path key with node B according to the following theorem:
Theorem 2: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, suppose the distance between nodes A and B is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). If d > 1, then there exists a path key between nodes A and B.
Proof: Let d1 = a and d2 = b. Without loss of generality, assume it ≠ i't for 1 ≤ t ≤ a and it = i't for t > a; and jt ≠ j't for 1 ≤ t ≤ b and jt = j't for t > b.
Obviously, the nodes A(i1i2...in, j1j2...jm), (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm) all belong to the same cluster. By the assumption that the nodes in the same cluster form a connected graph, there is a route through these nodes; moreover, the distance between any two neighboring nodes in this series is 1, so by Theorem 1 there exists a direct pairwise key between any two neighboring nodes in the series.
The nodes (i'1i'2...i'n, j1j2...jm), (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...jm), ..., (i'1i'2...i'n, j'1j'2...j'(m-1)jm) share the same In-Cluster-Hypercube-Node-Code with node B(i'1i'2...i'n, j'1j'2...j'm), so these nodes and node B belong to the same logical hypercube. By the assumption that the whole sensor network forms a connected graph, there is a route through these nodes; again, the distance between any two neighboring nodes in this series is 1, so by Theorem 1 there exists a direct pairwise key between any two neighboring nodes in the series. Hence there exists a path key between nodes A and B.
According to Theorem 2, the path key discovery algorithm proceeds as follows:
Step 1: Compute the intermediate nodes (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm) and (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...jm), ..., (i'1i'2...i'n, j'1j'2...j'(m-1)jm) from the source node A(i1i2...in, j1j2...jm) and the destination node B(i'1i'2...i'n, j'1j'2...j'm).
Step 2: In the node series A(i1i2...in, j1j2...jm), (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm), (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...jm), ...,
(i'1i'2...i'n, j'1j'2...j'(m-1)jm), B(i'1i'2...i'n, j'1j'2...j'm), each pair of neighboring nodes selects their common polynomial share to establish a direct pairwise key.
From Theorem 2 it follows that any source node A can compute a key path P to the destination node B according to the above algorithm, provided there are no compromised nodes in the sensor network. Once the key path P is computed, node A can send messages to B along the path P to establish an indirect pairwise key with node B. Figure 2 presents an example of key path establishment.
Figure 2. Key path establishment example.
For example, in Figure 2, node A((012),(1234)) can establish a pairwise key with node B((121),(2334)) through the following key path: A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → F((121),(2234)) → B((121),(2334)), where node F must route through nodes G, H, I, J to establish a direct pairwise key with node B.
According to the properties of the H(k,u,m,v,n) model, and by combining the proof of Theorem 2, we can prove the following theorem:
Theorem 3: Suppose there exist no compromised nodes in the sensor network and the distance between nodes A and B is t. Then there logically exists a shortest key path of length t between nodes A and B; that is, node A can establish an indirect pairwise key with node B through t-1 intermediate nodes.
Proof: Suppose the distance between node A(i1i2...in, j1j2...jm) and node B(i'1i'2...i'n, j'1j'2...j'm) is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). Since d = t, by the construction properties of H(k,u,m,v,n) there exist t-1 intermediate nodes I1, ..., I(t-1) in the logical space H(k,u,m,v,n) such that the distance between any two neighboring nodes in the series A, I1, ..., I(t-1), B equals 1. So by Theorem 1, the nodes A, I1, ..., I(t-1), B form a correct key path between
node A and B. If any two neighboring nodes in the series A, I1, ..., I(t-1), B can communicate directly, then node A can establish an indirect pairwise key with node B through those t-1 intermediate nodes.
6.4 Dynamic Path Key Discovery
The path key discovery algorithm proposed in the previous section establishes a key path correctly only when there are no compromised nodes in the whole sensor network, since the key path is computed beforehand. The algorithm cannot find an alternative key path when some intermediate nodes are compromised or outside the communication radius, even if other alternative key paths exist in the sensor network. As the following example shows, there are many parallel paths in the H(k,u,m,v,n) model for any given pair of source and destination nodes, since the H(k,u,m,v,n) model is highly fault-tolerant [9,10].
Figure 3. Alternative key path establishment example.
For example, consider the key path establishment example of Figure 2: A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → F((121),(2234)) → B((121),(2334)), and suppose node F((121),(2234)) has been compromised. Then, from Figure 3, there exists an alternative key path A((012),(1234)) → C((112),(1234)) → D((122),(1234)) → E((121),(1234)) → M((121),(1334)) → B((121),(2334)), which can be used to establish the indirect pairwise key between nodes A and B, where node E must route through nodes D and K to establish a direct pairwise key with node M, and node M must route through nodes N, O, G, H, I, J to establish a direct pairwise key with node B.
Since sensors are resource-limited, they easily die or fall outside the communication radius, so the algorithm proposed in the previous section cannot guarantee to establish a correct key path efficiently. In this section, we propose a dynamic path key
discovery algorithm, which can effectively improve the probability of successful key path establishment:
Algorithm I: Dynamic key path establishment algorithm based on the H(k,u,m,v,n) model for cluster deployed sensor networks.
Input: A sub-sensor network H(k,u,m,v,n), which may contain some compromised/faulty sensors and faulty links, and two reachable nodes A(a1...an, a'1...a'm) and B(b1...bn, b'1...b'm) in H(k,u,m,v,n), where a't ≠ b't for t ∈ [1,s] and a't = b't for t > s.
Output: A correct key path from node A to B in H(k,u,m,v,n).
Step 1: Obtain the code strings of nodes A and B: A ← (a1...an, a'1...a'm), B ← (b1...bn, b'1...b'm), where aj, bj ∈ [0, v-1] and a'j, b'j ∈ [0, u-1].
Step 2: If a'1...a'm = b'1...b'm, then node A can find a route to B according to the routing algorithms of the hypercube [9-10].
Step 3: Otherwise, node A first finds a route to C(b1...bn, a'1...a'm), which lies in the same cluster as B's In-Cluster code, according to the routing algorithms of the hypercube [9-10]. Then let I0 = C(b1...bn, a'1...a'm), I1 = (b1...bn, b'1a'2...a'm), ..., Is = B(b1...bn, b'1b'2...b'sa'(s+1)...a'm), and each node It in this series finds a route to its neighboring node I(t+1) on the basis of location information (detailed routing algorithms based on location information can be found in [11-14]).
Step 4: The algorithm exits. If such a correct key path exists, node A can establish an indirect pairwise key with node B through it; otherwise, node A fails to establish an indirect pairwise key with node B.
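To make the fault-tolerance argument concrete, the fallback search that Algorithm I relies on can be sketched as a breadth-first search over the logical H(k,u,m,v,n) node space that simply refuses to pass through compromised nodes. This is an illustrative model rather than the paper's exact procedure: node codes are digit strings, and the radixes v and u, the helper names, and the compromised set below are our own choices for the example.

```python
from collections import deque

def neighbors(node, v, u):
    """All logical H(k,u,m,v,n) nodes at distance 1 from node = (i_code, j_code)."""
    i_code, j_code = node
    out = []
    for pos in range(len(i_code)):                 # flip one in-cluster digit
        for d in range(v):
            if str(d) != i_code[pos]:
                out.append((i_code[:pos] + str(d) + i_code[pos + 1:], j_code))
    for pos in range(len(j_code)):                 # flip one out-cluster digit
        for d in range(u):
            if str(d) != j_code[pos]:
                out.append((i_code, j_code[:pos] + str(d) + j_code[pos + 1:]))
    return out

def alt_key_path(a, b, v, u, compromised):
    """Shortest logical key path from a to b avoiding compromised nodes, or None."""
    queue, seen = deque([(a, [a])]), {a}
    while queue:
        node, path = queue.popleft()
        if node == b:
            return path
        for nxt in neighbors(node, v, u):
            if nxt not in seen and nxt not in compromised:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# The Figure 3 situation: node F((121),(2234)) is compromised, yet a 5-hop
# alternative from A((012),(1234)) to B((121),(2334)) still exists.
path = alt_key_path(("012", "1234"), ("121", "2334"), v=3, u=5,
                    compromised={("121", "2234")})
print(len(path) - 1)  # prints 5
```

Because every hop in the returned path has distance 1, Theorem 1 guarantees a direct pairwise key per hop, so the path is a valid key path.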
Node A will then try again to establish an indirect pairwise key with node B some time later.
7. ALGORITHM ANALYSES
7.1 Practical Analyses
According to the preceding description and analyses, the newly proposed algorithm has the following properties:
Property 3: When there exist no faulty or compromised nodes, under the new pairwise key predistribution scheme based on the H(k,u,m,v,n) model, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1))/(N-1), where N is the total number of nodes in the sensor network and N = u^m · v^n.
Proof: In the newly proposed algorithm, the pairwise key material predistributed to any node A is FA = { f^1_{<l, i2, ..., in>}(i1,y), ..., f^n_{<l, i1, ..., i(n-1)>}(in,y); f^1_{<i1...in, j2, ..., jm>}(j1,y), ..., f^m_{<i1...in, j1, ..., j(m-1)>}(jm,y) }. In the logical hypercube formed by the nodes in the same cluster as node A, there are n(v-1) nodes that have a direct pairwise key with node A; and in the logical hypercube formed by the nodes in clusters different from that of node A, there are m(u-1) nodes that have a direct pairwise key with node A. Therefore, there are in total m(u-1) + n(v-1) nodes that have a direct pairwise key with node A.
So, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1))/(N-1), since the whole sensor network has N sensor nodes in all.
Figure 4. Probability of direct pairwise key establishment between any two nodes versus the dimension n, for sensor networks of different total sizes (N = 8000, 10000, 20000, 30000) using the new pairwise key predistribution scheme based on the H(8,2,3,v,n) model.
From Figure 4, it can be seen that under the new pairwise key predistribution scheme based on the H(k,u,m,v,n) model, the probability of direct pairwise key establishment between any two nodes decreases as the scale of the sensor network increases; moreover, for a fixed network scale, this probability decreases as the dimension n increases.
Theorem 4: Suppose the sensor network has N sensors in total. Then when u ≥ v^2, the probability of direct pairwise key establishment between any two nodes under the key distribution scheme based on the hypercube model H(v,p) is smaller than that under the key distribution scheme based on the H(k,u,m,v,n) model.
Proof: Since u ≥ v^2, we can let u = v^t, where t ≥ 2. The total number of nodes in H(v,p) is v^p = N, and the total number of nodes in H(k,u,m,v,n) is u^m · v^n = N.
Let p = x + n. Then u^m · v^n = v^x · v^n ⇒ u^m = v^x ⇒ x = tm. From Property 3, the probability of direct pairwise key establishment between any two nodes under the H(k,u,m,v,n) model can be estimated as P = (m(u-1) + n(v-1))/(N-1). According to the description in [7], the probability of direct pairwise key establishment between any two nodes under the H(v,p) model can be estimated as P' = p(v-1)/(N-1) = (x(v-1) + n(v-1))/(N-1).
Next, we prove that m(u-1) ≥ x(v-1). We have m(u-1) = m(v^t - 1) and x(v-1) = tm(v-1). Construct the function f(t) = v^t - 1 - t(v-1), where t ≥ 2. When t = 2, f(2) = v^2 - 2v + 1 = (v-1)^2 ≥ 0, and f'(t) = t·v^(t-1) - (v-1) ≥ 2v - (v-1) = v + 1 > 0. So f(t) ≥ 0 ⇒ v^t - 1 ≥ t(v-1) ⇒ m(v^t - 1) ≥ tm(v-1) ⇒ m(u-1) ≥ x(v-1). Therefore, the conclusion of the theorem stands.
We give an example to illustrate the conclusion of Theorem 4. Suppose the total number of nodes in the sensor network is N = 2^14, with H(k,u,m,v,n) = H(16,4,2,2,10) and H(v,p) = H(2,14). Then the probability of direct pairwise key establishment between any two nodes based on the H(k,u,m,v,n) model is P = (m(u-1) + n(v-1))/(N-1) = (2(4-1) + 10(2-1))/(2^14 - 1) = 16/(2^14 - 1), whereas the probability based on the H(v,p) model is P' = p(v-1)/(N-1) = 14(2-1)/(2^14 - 1) = 14/(2^14 - 1).
Suppose the total number of nodes in the sensor network is N. Figure 5 illustrates the comparison between the probability of direct pairwise key establishment between any two nodes based on the H(k,u,m,v,n) model and that based on the H(v,p) model, when u = 4 and v = 2.
Figure 5. Comparison of the probability of direct pairwise key establishment between the H(v,p) and
H(k,u,m,v,n) models.
From Figure 5, it can be seen that Theorem 4 stands.
Theorem 5: Suppose the sensor network has N sensors in total. Then the pairwise key distribution scheme based on the hypercube model H(v,p) is only a special case of the pairwise key distribution scheme based on the H(k,u,m,v,n) model.
Proof: In the pairwise key distribution scheme based on the H(k,u,m,v,n) model, let k = 1 (u = 1, m = 0), which means that the whole sensor network consists of only one cluster. Then the H(k,u,m,v,n) model obviously degrades into the H(v,n) model. By the preceding analyses in this paper and the definition of the pairwise key distribution scheme based on the hypercube model H(v,p) in [7], the conclusion of the theorem stands.
7.2 Security Analyses
Against the pairwise key establishment algorithm based on the H(k,u,m,v,n) model, intruders can launch two kinds of attacks: 1) attackers may target the pairwise key between two particular sensor nodes, in order to compromise the pairwise key between them or to prevent them from establishing a pairwise key; 2) attackers may attack the whole sensor network, in order to decrease the probability of pairwise key establishment or to increase its cost.
Attacks against a pair of sensor nodes
1. Suppose the intruders want to attack two particular sensor nodes u and v, where neither u nor v is compromised, and the intruders want to compromise the pairwise key between them.
1) If u and v can establish a direct pairwise key, then the only way to compromise the key is to compromise the common bivariate polynomial f(x,y) between u and v. Since the degree of the bivariate polynomial f(x,y) is t, the intruders need to compromise at least t+1 sensor nodes that hold a share of the bivariate polynomial f(x,y).
2) If u and v can establish an indirect pairwise key through intermediate nodes, then the intruders need to compromise at least
one intermediate node, or compromise the common bivariate polynomial f(x,y) between two neighboring intermediate nodes. But even if the intruders succeed, nodes u and v can still re-establish an indirect pairwise key through alternative intermediate nodes.
2. Suppose the intruders want to attack two particular sensor nodes u and v, where neither u nor v is compromised, and the intruders want to prevent them from establishing a pairwise key. Then the intruders need to compromise all of the m+n bivariate polynomials of node u or of node v. Since the degree of each bivariate polynomial f(x,y) is t, for each bivariate polynomial the intruders need to compromise at least t+1 sensor nodes that hold a share of it. Therefore, the intruders need to compromise (m+n)(t+1) sensor nodes altogether to prevent u and v from establishing a pairwise key.
Attacks against the sensor network
Suppose the attackers know the distribution of the polynomials over sensor nodes; they may then systematically attack the network by compromising the polynomials in F one by one, in order to compromise the entire network. Assume the fraction of compromised polynomials is pc. Then there are up to N' = pc × (v^n · m · u^(m-1) · u + ⌈N/v^n⌉ · n · v^(n-1) · v) = pc · N · (m+n) sensor nodes that hold at least one compromised polynomial share. None of the remaining N - N' sensor nodes holds a compromised polynomial share, so the remaining N - N' sensor nodes can establish direct pairwise keys using any of their polynomial shares. However, the indirect pairwise keys among the remaining N - N' sensor nodes may be affected, and they may need to re-establish new indirect pairwise keys by selecting alternative intermediate nodes that do not belong to the N' compromised nodes.
Suppose the scale of the sensor network is N = 10000. Figure 6 presents the comparison between pc and the number of sensor nodes with at least one
compromised polynomial share in sensor networks based on different H(k,u,m,v,n) distribution models. From Figure 6, it can be seen that, for a fixed network scale, the number of affected sensor nodes increases with the fraction of compromised polynomials.
Figure 6. Comparison between pc (the fraction of compromised bivariate polynomials) and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on the H(1,0,0,100,2), H(2,2,1,71,2), H(4,2,2,50,2) and H(8,2,3,36,2) distribution models.
Theorem 6: Suppose the sensor network has N sensors in total and the fraction of compromised polynomials is pc. Then when u > v, the number of affected nodes under the H(v,p) model-based key predistribution scheme is larger than that under the H(k,u,m,v,n) model-based key predistribution scheme.
Proof: The number of affected nodes under the H(k,u,m,v,n) model-based key predistribution scheme is pc · N · (m+n), and it is proved in [7] that the number of affected nodes under the H(v,p) model-based key predistribution scheme is pc · N · p. Let p = x + n; then u^m · v^n = v^x · v^n ⇒ u^m = v^x. Since u > v ⇒ x > m ⇒ pc · N · (m+n) < pc · N · (x+n) = pc · N · p.
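The bound in Theorem 6 can be checked numerically. The sketch below uses parameter values of our own choosing, mirroring the N = 10000 setting of Figures 6 and 7, to compare the at most pc·N·(m+n) affected nodes under H(k,u,m,v,n) against the pc·N·p affected nodes under the plain hypercube model H(v,p) of [7].

```python
def affected_hier(pc, N, m, n):
    """Upper bound on nodes holding a compromised share, H(k,u,m,v,n) model."""
    return pc * N * (m + n)

def affected_hyper(pc, N, p):
    """Upper bound on nodes holding a compromised share, H(v,p) model of [7]."""
    return pc * N * p

N, pc = 10000, 0.05          # network size, fraction of compromised polynomials
m, n = 2, 2                  # e.g. an H(9,3,2,34,2)-style deployment: m+n = 4
p = 14                       # the H(2,14) hypercube: p = 14 shares per node

print(affected_hier(pc, N, m, n))   # 2000.0
print(affected_hyper(pc, N, p))     # 7000.0
```

Whenever a node stores fewer shares (m + n < p), proportionally fewer nodes are touched by a given fraction of compromised polynomials, which is exactly Theorem 6's claim.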
Suppose the scale of the sensor network is N = 10000. Figure 7 presents the comparison between pc and the number of sensor nodes with at least one compromised polynomial share in sensor networks based on various H(k,u,m,v,n) distribution models and the H(2,14) model. From Figure 7, it can be seen that the conclusion of Theorem 6 is correct, and that the number of affected sensor nodes increases with the fraction of compromised polynomials when the scale of the sensor network is fixed.
Figure 7. Comparison between pc (the fraction of compromised bivariate polynomials) and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on the H(9,3,2,34,2), H(16,4,2,25,2), H(225,15,2,7,2), H(1296,36,2,3,2) and H(2,14) distribution models.
8. CONCLUSION
A new hierarchical hypercube model named H(k,u,m,v,n) is proposed, which can be used for pairwise key predistribution in cluster deployed sensor networks. Based on the H(k,u,m,v,n) model, an innovative pairwise key predistribution scheme and establishment algorithm are designed, combining the good properties of the polynomial key and key pool encryption schemes. The new algorithm exploits the good characteristics of node codes and the high fault tolerance of the H(k,u,m,v,n) model to route and predistribute pairwise keys, and it does not require nodes to be able to communicate with each other directly, as the algorithms proposed in [7] do. Thus, the traditional pairwise key predistribution algorithm based on the hypercube model [7] is only a special case of the new algorithm proposed in this paper. Theoretical and experimental analyses show that the newly proposed algorithm is an efficient pairwise key establishment algorithm that is suitable for cluster deployed sensor
networks.
9. ACKNOWLEDGMENTS
Our thanks to ACM SIGCHI for allowing us to modify templates they had developed, and to the Nature Science Fund of Fujian Province of P.R. China under grant No. A0510024.
10. REFERENCES
[1] L. Eschenauer and V. Gligor. A key-management scheme for distributed sensor networks. In Proceedings of the 9th ACM Conference on Computer and Communications Security. ACM Press, Washington, DC, USA, 2002, 41-47.
[2] H. Chan, A. Perrig, and D. Song. Random key predistribution schemes for sensor networks. In IEEE Symposium on Security and Privacy. IEEE Computer Society, California, USA, 2003, 197-213.
[3] C. Blundo, A. D. Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung. Perfectly-secure key distribution for dynamic conferences. Lecture Notes in Computer Science. 1993, 740, 471-486.
[4] D. Liu and P. Ning. Establishing pairwise keys in distributed sensor networks. In Proceedings of the 10th ACM Conference on Computer and Communications Security. ACM Press, Washington, DC, USA, 2003, 52-61.
[5] W. Du, J. Deng, Y. Han, and P. Varshney. A pairwise key pre-distribution scheme for wireless sensor networks. In Proceedings of the 10th ACM Conference on Computer and Communications Security. Washington, DC, USA, 2003, 42-51.
[6] R. Blom. An optimal class of symmetric key generation systems. Advances in Cryptology: Proceedings of EUROCRYPT 84. Lecture Notes in Computer Science. 1985, 209, 335-338.
[7] D. Liu, P. Ning, and R. Li. Establishing pairwise keys in distributed sensor networks. ACM Journal Name, 2004, 20, 1-35.
[8] L. Fang, W. Du, and N.
Peng. A Beacon-Less Location Discovery Scheme for Wireless Sensor Networks. In Proceedings of IEEE INFOCOM, 2005.
[9] Wang Lei and Lin Ya-ping. Maximum safety path matrix based fault-tolerant routing algorithm for hypercube interconnection network. Journal of Software. 2004, 15(7), 994-1004.
[10] Wang Lei and Lin Ya-ping. Maximum safety path vector based fault-tolerant routing algorithm for hypercube interconnection network. Journal of China Institute of Communications. 2004, 16(4), 130-137.
[11] Lin Ya-ping and Wang Lei. Location information based hierarchical data congregation routing algorithm for sensor networks. Chinese Journal of Electronics. 2004, 32(11), 1801-1805.
[12] W. Heinzelman, J. Kulik, and H. Balakrishnan. Negotiation-based protocols for disseminating information in wireless sensor networks. ACM Wireless Networks. 2002, 8, 169-185.
[13] A. Manjeshwar and D. P. Agrawal. TEEN: a routing protocol for enhanced efficiency in wireless sensor networks. In Proceedings of the 15th Parallel and Distributed Processing Symposium. IEEE Computer Society, San Francisco, USA, 2001, 2009-2015.
[14] B. Krishnamachari, D. Estrin, and S.
Wicker.\nModelling Data-Centric Routing in Wireless Sensor Networks.\nIn Proceedings of IEEE Infocom, 2002.\n61","lvl-3":"Researches on Scheme of Pairwise Key Establishment for Distributed Sensor Networks\nABSTRACT\nSecurity schemes of pairwise key establishment, which enable sensors to communicate with each other securely, play a fundamental role in research on security issue in wireless sensor networks.\nA new kind of cluster deployed sensor networks distribution model is presented, and based on which, an innovative Hierarchical Hypercube model - H (k, u, m, v, n) and the mapping relationship between cluster deployed sensor networks and the H (k, u, m, v, n) are proposed.\nBy utilizing nice properties of H (k, u, m, v, n) model, a new general framework for pairwise key predistribution and a new pairwise key establishment algorithm are designed, which combines the idea of KDC (Key Distribution Center) and polynomial pool schemes.\nFurthermore, the working performance of the newly proposed pairwise key establishment algorithm is seriously inspected.\nTheoretic analysis and experimental figures show that the new algorithm has better performance and provides higher possibilities for sensor to establish pairwise key, compared with previous related works.\n1.\nINTRODUCTION\nSecurity communication is an important requirement in many sensor network applications, so shared secret keys are used between communicating nodes to encrypt data.\nAs one of the most fundamental security services, pairwise key establishment enables the sensor nodes to communicate securely with each other using cryptographic techniques.\nHowever, due to the sensor nodes' limited computational capabilities, battery energy, and available memory, it is not feasible for them to use traditional pairwise key establishment techniques such as public key cryptography and key distribution center (KDC).\nSeveral\nalternative approaches have been developed recently to perform pairwise key establishment on 
resource-constrained sensor networks without involving the use of traditional cryptography [14].\nEschenauer and Gligor proposed a basic probabilistic key predistribution scheme for pairwise key establishment [1].\nIn the scheme, each sensor node randomly picks a set of keys from a key pool before the deployment so that any two of the sensor nodes have a certain probability to share at least one common key.\nChan et al. further extended this idea and presented two key predistribution schemes: a q-composite key pre-distribution scheme and a random pairwise keys scheme.\nThe q-composite scheme requires any two sensors share at least q pre-distributed keys.\nThe random scheme randomly picks pair of sensors and assigns each pair a unique random key [2].\nInspired by the studies above and the polynomial-based key pre-distribution protocol [3], Liu et al. further developed the idea addressed in the previous works and proposed a general framework of polynomial pool-based key predistribution [4].\nThe basic idea can be considered as the combination of the polynomial-based key pre-distribution and the key pool idea used in [1]] and [2].\nBased on such a framework, they presented two pairwise key pre-distribution schemes: a random subset assignment scheme and a grid-based scheme.\nA polynomial pool is used in those schemes, instead of using a key pool in the previous techniques.\nThe random subset assignment scheme assigns each sensor node the secrets generated from a random subset of polynomials in the polynomial pool.\nThe gridbased scheme associates polynomials with the rows and the columns of an artificial grid, assigns each sensor node to a unique coordinate in the grid, and gives the node the secrets generated from the corresponding row and column polynomials.\nBased on this grid, each sensor node can then identify whether it can directly establish a pairwise key with another node, and if not, what intermediate nodes it can contact to indirectly establish the pairwise 
key.\nA similar approach to those schemes described by Liu et al was independently developed by Du et a. [5].\nRather than on Blundo's scheme their approach is based on Blom's scheme [6].\nIn some cases, it is essentially equivalent to the one in [4].\nAll of those schemes above improve the security over the basic probabilistic key pre-distribution scheme.\nHowever, the pairwise key establishment problem in sensor networks is still not well solved.\nFor the basic probabilistic and the q-composite key predistribution schemes, as the number of compromised nodes increases, the fraction of affected pairwise keys increases quickly.\nAs a result, a small number of compromised nodes may affect a\nlarge fraction of pairwise keys [3].\nThough the random pairwise keys scheme doses not suffer from the above security problem, it incurs a high memory overhead, which increases linearly with the number of nodes in the network if the level of security is kept constant [2] [4].\nFor the random subset assignment scheme, it suffers higher communication and computation overheads.\nIn 2004, Liu proposed a new hypercube-based pairwise key predistribution scheme [7], which extends the grid-based scheme from a two dimensional grid to a multi-dimensional hypercube.\nThe analysis shows that hypercube-based scheme keeps some attractive properties of the grid-based scheme, including the guarantee of establishing pairwise keys and the resilience to node compromises.\nAlso, when perfect security against node compromise is required, the hypercube-based scheme can support a larger network by adding more dimensions instead of increasing the storage overhead on sensor nodes.\nThough hypercube-based scheme (we consider the grid-based scheme is a special case of hypercube-based scheme) has many attractive properties, it requires any two nodes in sensor networks can communication directly with each other.\nThis strong assumption is impractical in most of the actual applications of the sensor 
networks.
In this paper, we present a new cluster-based distribution model of sensor networks, and for it we propose a new pairwise key predistribution scheme. The main contributions of this paper are as follows. Combining the deployment knowledge of sensor networks with polynomial pool-based key predistribution, we set up a cluster-based topology that matches the real deployment of sensor networks. Based on this topology, we propose a novel cluster-distribution-based hierarchical hypercube model to establish pairwise keys. The key contribution is that our scheme does not require the assumption, made by the previous schemes, that all nodes can communicate directly with each other, while still maintaining a high probability of key establishment, low memory overhead, and good security performance. We also develop a new pairwise key establishment algorithm on top of our hierarchical hypercube model.
The structure of this paper is arranged as follows. In section 3, a new distribution model of cluster deployed sensor networks is presented. In section 4, a new Hierarchical Hypercube model is proposed. In section 5, the mapping relationship between the cluster deployed sensor network and the Hierarchical Hypercube model is discussed. In sections 6 and 7, a new pairwise key establishment algorithm is designed based on the Hierarchical Hypercube model, and detailed analyses are described. Finally, section 8 presents a conclusion.
2. PRELIMINARY
Definition 1 (Key Predistribution): The procedure of encoding the corresponding encryption and decryption algorithms in sensor nodes before deployment is called Key Predistribution.
Definition 2 (Pairwise Key): For any two nodes A and B, if they have a common key E, then the key E is called a pairwise key between them.
Definition 3 (Key Path): For any two nodes A0 and Ak that do not share a pairwise key, if there exists a path A0, A1, A2, ..., Ak-1, Ak, and there exists at
least one pairwise key between every pair of neighboring nodes Ai and Ai+1 for 0 ≤ i ≤ k-1, then the path A0, A1, A2, ..., Ak-1, Ak is called a Key Path between A0 and Ak.
Definition 4 (n-dimensional Hypercube): An n-dimensional Hypercube (or n-cube) H(v, n) is a topology with the following properties: (1) it consists of n · v^(n-1) edges; (2) each node is coded as a string of n positions b1b2...bn, where 0 ≤ b1, b2, ..., bn ≤ v-1; (3) any two nodes are neighbors, meaning that there is an edge between them, iff their node codes differ in exactly one position.
3. MODEL OF CLUSTERS DEPLOYED SENSOR NETWORKS
In some actual applications of sensor networks, sensors can be deployed through airplanes. Supposing that the sensors are deployed in k rounds and the communication radius of every sensor is r, the sensors deployed in the same round can be regarded as belonging to the same Cluster. We assign a unique cluster number l (1 ≤ l ≤ k) to each cluster. Supposing that the sensors in each cluster form a connected graph after deployment, Figure 1 presents an actual model of clusters deployed sensor networks.
Figure 1. An actual model of clusters deployed sensor networks.
From Figure 1 it is easy to see that, for a given node A, many nodes in A's own cluster can communicate directly with A, since the nodes within a cluster are densely deployed, but far fewer nodes in a neighboring cluster can communicate directly with A,
since the two clusters are not deployed at the same time.
4. HIERARCHICAL HYPERCUBE MODEL
Definition 5 (k-levels Hierarchical Hypercube): Given N nodes in total, a k-levels Hierarchical Hypercube named H(k, u, m, v, n) can be constructed as follows:
1) The nodes of each cluster are connected into an n-dimensional hypercube and coded as i1i2...in; these codes are called In-Cluster-Hypercube-Node-Codes, where 0 ≤ i1, i2, ..., in ≤ v-1 and v = ⌈(N/k)^(1/n)⌉ (here ⌈j⌉ denotes the smallest integer not less than j). We thus obtain k such hypercubes.
2) The k hypercubes obtained above are encoded as j1j2...jm; these codes are called Out-Cluster-Hypercube-Node-Codes, where 0 ≤ j1, j2, ..., jm ≤ u-1 and u = ⌈k^(1/m)⌉. The nodes of the k hypercubes are then connected into m-dimensional hypercubes according to the following rule: nodes with the same In-Cluster-Hypercube-Node-Code and different Out-Cluster-Hypercube-Node-Codes are connected into an m-dimensional hypercube. (The graph constructed through the above steps is called a k-levels Hierarchical Hypercube, abbreviated H(k, u, m, v, n).)
3) Any node A in H(k, u, m, v, n) can therefore be coded as (i, j), where i (i = i1i2...in, 0 ≤ i1, i2, ..., in ≤ v-1) is the In-Cluster-Hypercube-Node-Code of node A, and j (j = j1j2...jm, 0 ≤ j1, j2, ..., jm ≤ u-1) is the Out-Cluster-Hypercube-Node-Code of node A.
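To make the coding concrete, the following small sketch (our illustration, not code from the paper) represents a node as a pair (in-cluster code, out-cluster code) and computes the distance used throughout the later sections; all function and variable names are ours.

```python
# Sketch of node codes in the H(k, u, m, v, n) model.
# A node is a pair (i, j): i = in-cluster code (n digits, base v),
# j = out-cluster code (m digits, base u).

def hamming(a, b):
    """Number of differing positions between two equal-length codes."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def distance(node_a, node_b):
    """d(A, B) = dh(i1, i2) + dh(j1, j2)."""
    (i1, j1), (i2, j2) = node_a, node_b
    return hamming(i1, i2) + hamming(j1, j2)

def are_neighbors(node_a, node_b):
    """Neighbors iff exactly one position of the combined code differs."""
    return distance(node_a, node_b) == 1

A = ((0, 1, 2), (1, 2, 3, 4))   # node codes from the key-path example below
C = ((1, 1, 2), (1, 2, 3, 4))
print(distance(A, C))            # 1 -> A and C are neighbors
```

For the nodes A((012), (1234)) and B((121), (2334)) of the key path establishment example given later, this yields d = 3 + 2 = 5, matching the five-hop logical path shown there.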
Obviously, the H(k, u, m, v, n) model has the following useful properties:
Property 1: The diameter of the H(k, u, m, v, n) model is m + n.
Proof: Since the diameter of an n-dimensional hypercube is n and the diameter of an m-dimensional hypercube is m, it follows directly from Definition 5 that the diameter of the H(k, u, m, v, n) model is m + n.
Property 2: The distance between any two nodes A(i1, j1) and B(i2, j2) in the H(k, u, m, v, n) model is d(A, B) = dh(i1, i2) + dh(j1, j2), where dh denotes the Hamming distance.
Proof: Since the distance between any two nodes in a hypercube equals the Hamming distance between them, the conclusion follows immediately from Definition 5.
5. MAPPING CLUSTERS DEPLOYED SENSOR NETWORKS TO H(K, U, M, V, N)
From the descriptions in sections 3 and 4, the clusters deployed sensor network can be mapped into a k-levels hierarchical hypercube model as follows. First, the k clusters in the sensor network are mapped into the k different levels (hypercubes) of the k-levels hierarchical hypercube model. Then, the sensor nodes in each cluster are encoded with In-Cluster-Hypercube-Node-Codes, and the sensor nodes in the k different clusters that have the same In-Cluster-Hypercube-Node-Code are encoded with Out-Cluster-Hypercube-Node-Codes, according to Definition 5. Consequently, the whole sensor network is mapped into a k-levels hierarchical hypercube model.
6. H(K, U, M, V, N) MODEL-BASED PAIRWISE KEY PREDISTRIBUTION ALGORITHM FOR SENSOR NETWORKS
To overcome the drawbacks of the polynomial-based and polynomial pool-based key predistribution algorithms, this paper proposes an innovative H(k, u, m, v, n) model-based key predistribution scheme and pairwise key establishment algorithm, which combines the advantages of the polynomial-based and key pool-based encryption schemes and is based on the KDC and polynomial pool-based key
predistribution models.
The new H(k, u, m, v, n) model-based pairwise key establishment algorithm includes three main steps: (1) generation of the polynomial pool and key predistribution, (2) direct pairwise key discovery, and (3) path key discovery.
6.1 Generation of Polynomial Pool and Key Predistribution
Suppose the sensor network includes N nodes and is deployed in k different rounds. We can then predistribute keys to each sensor node on the basis of the H(k, u, m, v, n) model as follows.
Step 1: The key setup server randomly generates a pool F of distinct t-degree bivariate polynomials over a finite field Fq — one for each dimension line of the in-cluster hypercubes (⌈N/v^n⌉ · n · v^(n-1) polynomials) plus those for the out-cluster hypercubes — and assigns a unique polynomial ID to each bivariate polynomial in F.
Step 2: In each round, the key setup server assigns a unique node ID (i1i2...in, j1j2...jm) to each sensor node in increasing order, where 0 ≤ i1, i2, ..., in ≤ v-1 and 0 ≤ j1, j2, ..., jm ≤ u-1.
Step 3: The key setup server assigns a unique cluster ID l to all the sensor nodes deployed in the same round, where 1 ≤ l ≤ k.
Step 4: The key setup server predistributes to the sensor node deployed in the l-th round with ID (i1i2...in, j1j2...jm) the m + n bivariate polynomials associated with its n in-cluster dimensions and m out-cluster dimensions, together with the corresponding polynomial IDs.
6.2 Direct Pairwise Key Discovery
If node A(i1i2...in, j1j2...jm) in the sensor network wants to establish a pairwise key with a node B(i'1i'2...i'n, j'1j'2...j'm), node A proceeds as follows. First, node A computes the distance between itself and node B: d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). If d = 1, then node A obtains the direct pairwise key between itself and node B according to the following Theorem 1.
Theorem 1: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, suppose the distance between nodes A and B is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). If d = 1, then there exists a direct pairwise key between nodes A and B.
Proof: Since d = 1, either d1 = 1 and d2 = 0, or d1 = 0 and d2 = 1.
1) If d1 = 1 and d2 = 0: from d2 = 0, nodes A and B belong to the same cluster; suppose it is cluster l. From d1 = 1, there is exactly one differing position between i1i2...in and i'1i'2...i'n; without loss of generality, let it = i't for 1 ≤ t ≤ n-1 and in ≠ i'n. Then A and B hold shares of the same in-cluster bivariate polynomial f for that dimension, and f(in, i'n) = f(i'n, in), so there exists a direct pairwise key f(in, i'n) between nodes A and B.
2) If d1 = 0 and d2 = 1: from d2 = 1, there is exactly one differing position between j1j2...jm and j'1j'2...j'm; without loss of generality, let jt = j't for 1 ≤ t ≤ m-1 and jm ≠ j'm. Since d1 = 0, i1i2...in equals i'1i'2...i'n, so A and B hold shares of the same out-cluster polynomial f, and f(jm, j'm) = f(j'm, jm). Thus there exists a direct pairwise key f(jm, j'm) between nodes A and B.
According to Theorem 1, the detailed direct pairwise key discovery algorithm is as follows:
Step 1: Obtain the node IDs and cluster IDs of the source node A and the destination node B.
Step 2: Compute the distance between nodes A and B: d = d1 + d2.
Step 3: If d1 = 1 and d2 = 0, select a common polynomial share of nodes A and B from their n in-cluster polynomials to establish a direct pairwise key.
Step 4: If d1 = 0 and d2 = 1, select a common polynomial share of nodes A and B from their m out-cluster polynomials to establish a direct pairwise key.
Step 5: Otherwise, there exists no direct pairwise key between nodes A and B.
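A minimal sketch of this direct-discovery step, under our own illustrative assumptions: the scheme uses one distinct t-degree symmetric polynomial per dimension line, whereas this toy collapses them into a single polynomial `f` over a small field, so only the symmetry argument of Theorem 1 is shown.

```python
# Toy symmetric bivariate polynomial over a small field F_q (illustrative
# parameters, not the paper's): f(x, y) = f(y, x), so both endpoints of an
# edge derive the same pairwise key from their shares.
Q = 97  # toy field size; a real deployment would use a large prime

def f(x, y):
    return (3 + 5 * (x + y) + 7 * x * y) % Q  # symmetric by construction

def direct_key(node_a, node_b):
    """Return a direct pairwise key if d(A, B) = 1, else None (Steps 1-5)."""
    (i1, j1), (i2, j2) = node_a, node_b
    d1 = sum(x != y for x, y in zip(i1, i2))
    d2 = sum(x != y for x, y in zip(j1, j2))
    if d1 == 1 and d2 == 0:          # same cluster: use an in-cluster polynomial
        t = next(p for p in range(len(i1)) if i1[p] != i2[p])
        return f(i1[t], i2[t])
    if d1 == 0 and d2 == 1:          # same in-cluster code: out-cluster polynomial
        t = next(p for p in range(len(j1)) if j1[p] != j2[p])
        return f(j1[t], j2[t])
    return None                      # d > 1: fall back to path key discovery

A = ((0, 1, 2), (1, 2, 3, 4))
C = ((1, 1, 2), (1, 2, 3, 4))        # differs from A in one in-cluster digit
assert direct_key(A, C) == direct_key(C, A)  # both sides derive the same key
```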
The nodes then turn to the following path key discovery process.
6.3 Path Key Discovery
If d > 1, node A can establish a path key with node B according to the following Theorem 2.
Theorem 2: For any two sensor nodes A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) in the sensor network, suppose the distance between nodes A and B is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). If d > 1, then there exists a path key between nodes A and B.
Proof: Let d1 = a and d2 = b. Without loss of generality, assume it ≠ i't for 1 ≤ t ≤ a and it = i't for t > a, and jt ≠ j't for 1 ≤ t ≤ b and jt = j't for t > b. Obviously, the nodes A(i1i2...in, j1j2...jm), (i'1i2i3...in, j1j2...jm), (i'1i'2i3...in, j1j2...jm), ..., (i'1i'2...i'n, j1j2...jm) belong to the same cluster. By the assumption that the nodes in the same cluster form a connected graph, there is a route among those nodes; moreover, the distance between any two neighboring nodes in this sequence is 1, so by Theorem 1 a direct pairwise key exists between any two neighboring nodes in it. Further, the nodes (i'1i'2...i'n, j1j2...jm), (i'1i'2...i'n, j'1j2j3...jm), (i'1i'2...i'n, j'1j'2j3...jm), ..., (i'1i'2...i'n, j'1j'2...j'm-1jm) have the same In-Cluster-Hypercube-Node-Code as node B(i'1i'2...i'n, j'1j'2...j'm), so these nodes and node B belong to the same logical hypercube. By the assumption that the whole sensor network forms a connected graph, there is a route among those nodes, and since the distance between any two neighboring nodes in this sequence is 1, it again follows from Theorem 1 that there exists a direct pairwise key
between any two neighboring nodes among those nodes. Hence there exists a path key between nodes A and B.
According to Theorem 2, the path key discovery algorithm computes such a key path, and each pair of neighboring nodes on the path uses their common polynomial share to establish a direct pairwise key. From Theorem 2 it follows that any source node A can compute a key path P to the destination node B according to the above algorithm when there are no compromised nodes in the sensor network. Once the key path P is computed, node A can send messages to B along the path P to establish an indirect pairwise key with node B. Figure 2 presents an example of key path establishment.
Figure 2. Key path establishment example.
For example, in Figure 2, node A((012), (1234)) can establish a pairwise key with node B((121), (2334)) through the following key path: A((012), (1234)) → C((112), (1234)) → D((122), (1234)) → E((121), (1234)) → F((121), (2234)) → B((121), (2334)), where node F routes through nodes G, H, I, J to establish a direct pairwise key with node B.
According to the properties of the H(k, u, m, v, n) model, and combining the proof of Theorem 2, we can prove the following theorem:
Theorem 3: Suppose there exist no compromised nodes in the sensor network and the distance between nodes A and B is t. Then there logically exists a shortest key path of length t between nodes A and B; that is, node A can establish an indirect pairwise key with node B through t-1 intermediate nodes.
Proof: Suppose the distance between node A(i1i2...in, j1j2...jm) and B(i'1i'2...i'n, j'1j'2...j'm) is d = d1 + d2, where d1 = dh(i1i2...in, i'1i'2...i'n) and d2 = dh(j1j2...jm, j'1j'2...j'm). Since d = t, it follows from the construction of H(k, u, m, v, n) that there exist t-1 intermediate nodes I1, ..., It-1 in the logical space H(k, u, m, v, n) which satisfy that the
distance between any two neighboring nodes in the node series A, I1, ..., It-1, B equals 1. So, according to Theorem 1, the nodes A, I1, ..., It-1, B form a correct key path between nodes A and B. If any two neighboring nodes in the series A, I1, ..., It-1, B can communicate directly, then node A can establish an indirect pairwise key with node B through those t-1 intermediate nodes.

6.4 Dynamic Path Key Discovery

The path key discovery algorithm proposed in the previous section establishes a key path correctly only when there exist no compromised nodes in the whole sensor network, since the key path is computed beforehand. The algorithm cannot find an alternative key path when some nodes are compromised or some intermediate nodes are out of communication radius, even though other key paths may exist in the sensor network. As the following example shows, there are many parallel paths in the H(k, u, m, v, n) model for any given source and destination nodes, since the H(k, u, m, v, n) model is highly fault-tolerant [9,10].

Figure 3: Alternative key path establishment example.

For example, consider the key path establishment example of the previous section, based on Figure 2, and suppose that node F ((121), (2234)) has been compromised. From Figure 3, there exists an alternative key path A ((012), (1234)) → C ((112), (1234)) → D ((122), (1234)) → E ((121), (1234)) → M ((121), (1334)) → B ((121), (2334)), which can be used to establish the indirect pairwise key between nodes A and B, where node E must route through nodes D and K to establish a direct pairwise key with node M, and node M must route through nodes N, O, G, H, I, J to establish a direct pairwise key with node B.

Since sensors are resource-limited, they easily fail or move out of communication radius; therefore the algorithm proposed in the previous section cannot
guarantee the establishment of a correct key path efficiently. In this section, we propose a dynamic path key discovery algorithm, which can effectively improve the probability of finding a key path:

Algorithm I: Dynamic key path establishment algorithm based on the H(k, u, m, v, n) model for cluster-deployed sensor networks.

Input: A sub-sensor network H(k, u, m, v, n), which has some compromised/faulty sensors and faulty links, and two reachable nodes A = (a1 ... an, a'1 ... a'm) and B = (b1 ... bn, b'1 ... b'm) in H(k, u, m, v, n), where a't ≠ b't for t ∈ [1, s] and a't = b't for t > s.

Output: A correct key path from node A to B in H(k, u, m, v, n).

Step 1: Obtain the code strings of nodes A and B: A = (a1 ... an, a'1 ... a'm), B = (b1 ... bn, b'1 ... b'm), where aj, bj ∈ [0, u-1] and a'j, b'j ∈ [0, v-1].

Step 2: If a'1 ... a'm = b'1 ... b'm, then node A can find a route to B according to the routing algorithms of the hypercube [9-10].

Step 3: Otherwise, node A first finds a route to C = (b1 ... bn, a'1 ... a'm) according to Algorithm I or Algorithm II. Then let I0 = C = (b1 ... bn, a'1 ... a'm), I1 = (b1 ... bn, b'1 a'2 ... a'm), ..., Is = B = (b1 ... bn, b'1 b'2 ... b's a's+1 ... a'm), and each node It in this series finds a route to its neighboring node It+1 on the basis of location information (detailed routing algorithms based on location information can be found in [11-14]).

Step 4: The algorithm exits. If such a correct key path exists, node A can establish an indirect pairwise key with node B through it. Otherwise, node A fails to establish an indirect pairwise key with node B.
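The digit-by-digit correction that underlies Theorem 2 and the step structure of Algorithm I can be sketched as follows. This is our own illustration, not code from the paper: node codes are modeled as a pair of digit tuples, the first code is corrected one digit at a time and then the second, so that consecutive nodes differ in exactly one digit and can, by Theorem 1, establish a direct pairwise key.

```python
def key_path(a, b):
    """Return the logical key path from node a to node b.

    a, b: pairs (first_code, second_code) of digit tuples. Consecutive
    nodes in the returned path differ in exactly one digit.
    """
    first, second = list(a[0]), list(a[1])
    path = [(tuple(first), tuple(second))]
    # Phase 1: correct the first code digit by digit (route towards C).
    for i, digit in enumerate(b[0]):
        if first[i] != digit:
            first[i] = digit
            path.append((tuple(first), tuple(second)))
    # Phase 2: correct the second code digit by digit (C towards B).
    for j, digit in enumerate(b[1]):
        if second[j] != digit:
            second[j] = digit
            path.append((tuple(first), tuple(second)))
    return path

# Reproduces the Figure 2 example: A((012),(1234)) -> ... -> B((121),(2334))
example = key_path(((0, 1, 2), (1, 2, 3, 4)), ((1, 2, 1), (2, 3, 3, 4)))
```

Running this on the Figure 2 example yields the six-node path A → C → D → E → F → B given above; the physical routing between logical neighbors (e.g., F routing through G, H, I, J) is outside this sketch.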
Node A will then try again to establish an indirect pairwise key with node B some time later.

7. ALGORITHM ANALYSES

7.1 Practical Analyses

From the preceding description and analyses, the newly proposed algorithm has the following properties:

Property 3: When there exist no faulty or compromised nodes, under the new pairwise key predistribution scheme based on the H(k, u, m, v, n) model, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1)) / (N-1), where N is the total number of nodes in the sensor network and N = u^m * v^n.

Proof: In the newly proposed scheme, the polynomial shares predistributed to any node A are {f(i1, y), ..., f(in, y), f(j1, y), ..., f(jm, y)}. Obviously, in the logical hypercube formed by the nodes in the same cluster as node A, there are n(v-1) nodes that have a direct pairwise key with node A; and in the logical hypercube formed by the nodes in clusters different from that of node A, there are m(u-1) nodes that have a direct pairwise key with node A. Therefore, there are in total m(u-1) + n(v-1) nodes that have a direct pairwise key with node A.
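This counting can be checked numerically. The sketch below is our illustration, not code from the paper, with N = u^m * v^n as in Property 3:

```python
def direct_key_probability(u, m, v, n):
    """Estimated probability that a random pair of nodes shares a
    direct pairwise key, per Property 3: (m(u-1) + n(v-1)) / (N-1)."""
    total_nodes = u**m * v**n                    # N
    direct_neighbors = m * (u - 1) + n * (v - 1)
    return direct_neighbors / (total_nodes - 1)
```

For instance, H(16, 4, 2, 2, 10) has N = 4^2 * 2^10 = 2^14 nodes and gives P = 16 / (2^14 - 1), the value used in the worked example accompanying Theorem 4.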
So, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1)) / (N-1), since the whole sensor network has N sensor nodes in total.

Figure 4: Probability of direct pairwise key establishment between any two nodes versus the dimension n, for different total numbers of nodes, under the new pairwise key predistribution scheme based on the H(8, 2, 3, v, n) model.

From Figure 4, it is easy to see that under the new pairwise key predistribution scheme based on the H(k, u, m, v, n) model, the probability of direct pairwise key establishment between any two nodes decreases as the scale of the sensor network increases; in addition, for a fixed network scale, it decreases as the dimension n increases.

Theorem 4: Suppose the sensor network has N sensors in total. When u ≥ v^2, the probability of direct pairwise key establishment between any two nodes under the key distribution scheme based on the hypercube model H(v, p) is smaller than under the scheme based on the H(k, u, m, v, n) model.

Proof: Since u ≥ v^2, we can write u = v^t, where t ≥ 2. The total number of nodes in H(v, p) is v^p = N, and the total number of nodes in H(k, u, m, v, n) is u^m * v^n = N.
Let p = x + n. Then u^m * v^n = v^x * v^n, so u^m = v^x and hence x = tm. From Property 3, the probability of direct pairwise key establishment between any two nodes can be estimated as P = (m(u-1) + n(v-1)) / (N-1). According to the description in [7], the corresponding probability under H(v, p) can be estimated as P' = p(v-1) / (N-1) = (x(v-1) + n(v-1)) / (N-1). Next, we prove that m(u-1) ≥ x(v-1): since u - 1 = v^t - 1 = (v-1)(v^(t-1) + ... + v + 1) ≥ t(v-1), we have m(u-1) ≥ mt(v-1) = x(v-1). Therefore, the conclusion of the theorem stands.

To illustrate the conclusion of Theorem 4, suppose the total number of nodes in the sensor network is N = 2^14, with H(k, u, m, v, n) = H(16, 4, 2, 2, 10) and H(v, p) = H(2, 14). Then the probability of direct pairwise key establishment between any two nodes based on the H(k, u, m, v, n) model is P = (m(u-1) + n(v-1)) / (N-1) = (2(4-1) + 10(2-1)) / (2^14 - 1) = 16 / (2^14 - 1), while the probability based on the H(v, p) model is P' = p(v-1) / (N-1) = 14(2-1) / (2^14 - 1) = 14 / (2^14 - 1).

Supposing that the total number of nodes in the sensor network is N, Figure 5 illustrates the comparison between the probabilities of direct pairwise key establishment based on the H(k, u, m, v, n) and H(v, p) models, for u = 4 and v = 2.

Figure 5: Comparison of the probability of direct pairwise key establishment under the H(v, n) and H(k, u, m, v, n) models.

From Figure 5, it is easy to see that Theorem 4 holds.

Theorem 5: Suppose the sensor network has N sensors in total. Then the pairwise key distribution scheme based on the hypercube model H(v, p) is only a special case of the pairwise key distribution scheme based on the H(k, u, m, v, n) model.

Proof: As for
the pairwise key distribution scheme based on the H(k, u, m, v, n) model, let k = 1 (u = 1, m = 0), meaning that the whole sensor network consists of a single cluster. Then, obviously, the H(k, u, m, v, n) model degrades into the H(v, n) model. From the preceding analyses in this paper and the definition of the pairwise key distribution scheme based on the hypercube model H(v, p) in [7], the conclusion of the theorem stands.

7.2 Security Analyses

Against the pairwise key establishment algorithm based on the H(k, u, m, v, n) model, intruders can launch two kinds of attacks: 1) The attackers may target the pairwise key between two particular sensor nodes, in order to compromise the pairwise key between them or to prevent them from establishing one. 2) The attackers may attack the whole sensor network, in order to decrease the probability of pairwise key establishment or to increase its cost.

Attacks against a pair of sensor nodes

1. Suppose the intruders want to attack two particular sensor nodes u and v, neither of which is compromised, with the goal of compromising the pairwise key between them.

1) If u and v can establish a direct pairwise key, the only way to compromise the key is to compromise the common bivariate polynomial f(x, y) between u and v. Since the degree of the bivariate polynomial f(x, y) is t, the intruders need to compromise at least t+1 sensor nodes that hold a share of f(x, y).

2) If u and v can establish an indirect pairwise key through intermediate nodes, the intruders need to compromise at least one intermediate node, or compromise the common bivariate polynomial f(x, y) between two neighboring intermediate nodes. But even if the intruders succeed in doing so, nodes u and v can still re-establish an indirect pairwise key through alternative intermediate nodes.

2. Suppose that the intruders want to
attack two particular sensor nodes u and v, neither of which is compromised, in order to prevent them from establishing a pairwise key. Then the intruders need to compromise all of the m + n bivariate polynomials of node u or of node v. Since the degree of each bivariate polynomial f(x, y) is t, for each bivariate polynomial the intruders need to compromise at least t+1 sensor nodes that hold a share of it. Therefore, the intruders need to compromise (m + n)(t + 1) sensor nodes altogether to prevent u and v from establishing a pairwise key.

Attacks against the sensor network

Suppose the attackers know the distribution of the polynomials over sensor nodes; they may then systematically attack the network by compromising the polynomials in F one by one, in order to compromise the entire network. Assume the fraction of compromised polynomials is pc. Then there are up to N' = pc * N * (m + n) sensor nodes that have at least one compromised polynomial share. Among the remaining N - N' sensor nodes, none includes a compromised polynomial share, so the remaining N - N' sensor nodes can establish direct pairwise keys using any of their polynomial shares. However, the indirect pairwise keys among the remaining N - N' sensor nodes may be affected, and they may need to re-establish new indirect pairwise keys by selecting alternative intermediate nodes that do not belong to the N' compromised nodes.

Supposing that the scale of the sensor network is N = 10000, Figure 6 presents the comparison between pc and the number of sensor nodes with at least one compromised polynomial share, for sensor networks based on different H(k, u, m, v, n) distribution models.

Figure 6: Comparison between pc and the number of sensor nodes with at least one compromised polynomial share in sensor networks based on different H(k, u, m, v,
n) distribution models.

Theorem 6: Suppose the sensor network has N sensors in total and the fraction of compromised polynomials is pc. Then, when u > v, the number of affected nodes under the H(v, p) model based key predistribution scheme is larger than under the H(k, u, m, v, n) model based scheme.

Proof: The number of affected nodes under the H(k, u, m, v, n) model based key predistribution scheme is pc * N * (m + n), and it is proved in [7] that the number of affected nodes under the H(v, p) model based scheme is pc * N * p. Let p = x + n; then u^m * v^n = v^x * v^n, so u^m = v^x. Since u > v, we have x > m, and therefore pc * N * (m + n) < pc * N * (x + n) = pc * N * p.

Rhigh_f = {ri_f | ef(i) ≥ θ}   Rlow_f = {ri_f | ef(i) < θ}

These sets are specific to each (hotel, feature) pair, and in our experiments we took θ = 4. This rather high value is close to the average rating across all features and all hotels, and is justified by the fact that our data set contains mostly high-quality hotels. For each city, we take all hotels and compute the average ratings in the sets Rhigh_f and Rlow_f (see Table 3). The average rating among reviews following low prior expectations is significantly higher than the average rating following high expectations. As further evidence, we consider all hotels for which the function eV(i) (the expectation for the feature Value) has a high value (greater than 4) for some i, and a low value (less than 4) for some other i.
Intuitively, these are the hotels that exhibit a minimal degree of variation in the time sequence of reviews: i.e., the cumulative average of ratings was at some point high and afterwards became low, or vice versa. Such variations are observed for about half of all hotels in each city. Figure 3 plots the median (across the considered hotels) rating rV when ef(i) is not more than x but greater than x - 0.5.

Figure 3: The ratings tend to decrease as the expectation increases.

There are two ways to interpret the function ef(i):

• The expected value for feature f obtained by user i before his experience with the service, acquired by reading the reports submitted by past users. In this case, an overly high value of ef(i) would drive the user to submit a negative report (or vice versa), stemming from the difference between the actual value of the service and the inflated expectation of this value acquired before his experience.

• The expected value of feature f for all subsequent visitors of the site, if user i were not to submit a report. In this case, the motivation for a negative report following an overly high value of ef is different: user i seeks to correct the expectation of future visitors to the site. Unlike the interpretation above, this does not require the user to derive an a priori expectation for the value of f.
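Throughout this section, ef(i) is simply the cumulative average of the ratings submitted before review i. A toy sketch (function names are ours, not the paper's) of this computation and of the Rhigh/Rlow split with threshold θ = 4:

```python
def expectations(ratings):
    """Map i -> e_f(i): the mean of ratings[0..i-1], defined for i >= 1."""
    exp, running = {}, 0.0
    for i, r in enumerate(ratings):
        if i > 0:
            exp[i] = running / i
        running += r
    return exp

def split_by_expectation(ratings, theta=4.0):
    """Ratings whose prior cumulative average is at least / below theta."""
    exp = expectations(ratings)
    r_high = [ratings[i] for i in sorted(exp) if exp[i] >= theta]
    r_low = [ratings[i] for i in sorted(exp) if exp[i] < theta]
    return r_high, r_low
```

For a rating sequence 5, 5, 3, 2, 5 the expectations at i = 1..4 are 5, 5, 4.33, 3.75, so the final 5 follows a below-threshold expectation and falls into Rlow, matching the pattern that ratings after low expectations tend to be higher.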
Note that neither interpretation implies that the average up to report i is inversely related to the rating at report i. There might exist a measure of influence exerted by past reports that pushes the user behind report i to submit ratings which to some extent conform to past reports: a low value of ef(i) can influence user i to submit a low rating for feature f because, for example, he fears that submitting a high rating will make him out to be a person with low standards^5. This, at first, appears to contradict Hypothesis 2. However, this conformity rating cannot continue indefinitely: once the set of reports projects a sufficiently deflated estimate of vf, future reviewers with comparatively positive impressions will seek to correct this misconception.

4.2 Impact of textual comments on quality expectation

Further insight into the rating behavior of TripAdvisor users can be obtained by analyzing the relationship between the weights wf and the values ef(i). In particular, we examine the following hypothesis:

Hypothesis 3. When a large proportion of the text of a review discusses a certain feature, the difference between the rating for that feature and the average rating up to that point tends to be large.

The intuition behind this claim is that when the user is adamant about voicing his opinion on a certain feature, his opinion differs from the collective opinion of previous postings. This relies on the characteristic of reputation systems as feedback forums, where a user is interested in projecting his opinion, with particular strength if this opinion differs from what he perceives to be the general opinion. To test Hypothesis 3 we measure the average absolute difference between the expectation ef(i) and the rating ri_f when the weight wi_f is high or low, respectively. Weights are classified as high or low by comparing them with certain cutoff values: wi_f is low if smaller than 0.1, while wi_f is high if greater than θf. Different
cutoff values were used for different features: θR = 0.4, θS = 0.4, θC = 0.2, and θV = 0.7. Cleanliness has a lower cutoff since it is a feature rarely discussed; Value has a high cutoff for the opposite reason. Results are presented in Table 4.

^5 The idea that negative reports can encourage further negative reporting has been suggested before [14].

Table 4: Average of |ri_f - ef(i)| when weights are high (first value in the cell) and low (second value in the cell), with P-values for the difference in square brackets.

City        R               S               C               V
Boston      1.058 / 0.701   1.208 / 0.838   1.728 / 0.760   1.356 / 0.917
            [0.022]         [0.063]         [0.000]         [0.218]
Sydney      1.048 / 0.752   1.351 / 0.759   1.218 / 0.767   1.318 / 0.908
            [0.179]         [0.009]         [0.165]         [0.495]
Las Vegas   1.184 / 0.772   1.378 / 0.834   1.472 / 0.808   1.642 / 1.043
            [0.071]         [0.020]         [0.006]         [0.076]

This demonstrates that when weights are unusually high, users tend to express an opinion that does not conform to the net average of previous ratings. As we might expect, for a feature that rarely carries a high weight in the discussion (e.g., Cleanliness), the difference is particularly large. Even though the difference for the feature Value is quite large for Sydney, the P-value is high; this is because only a few reviews discussed Value heavily. The reason could be cultural, or there may simply have been less reason to discuss this feature.

4.3 Reporting Incentives

Previous models suggest that users who are not highly opinionated will not choose to voice their opinions [12]. In this section, we extend this model to account for the influence of expectations. The motivation for submitting feedback is due not only to extreme opinions, but also to the difference between the current reputation (i.e., the prior expectation of the user) and the actual experience. Such a rating model produces ratings that most of the time deviate from the current average rating; the ratings that confirm the prior expectation will rarely be submitted. We test on our data set the
proportion of ratings that attempt to correct the current estimate. We define a deviant rating as one that deviates from the current expectation by at least some threshold θ, i.e., |ri_f - ef(i)| ≥ θ. For each of the three considered cities, the following tables show the proportion of deviant ratings for θ = 0.5 and θ = 1.

Table 5: Proportion of deviant ratings with θ = 0.5

City        O      R      S      C      V
Boston      0.696  0.619  0.676  0.604  0.684
Sydney      0.645  0.615  0.672  0.614  0.675
Las Vegas   0.721  0.641  0.694  0.662  0.724

Table 6: Proportion of deviant ratings with θ = 1

City        O      R      S      C      V
Boston      0.420  0.397  0.429  0.317  0.446
Sydney      0.360  0.367  0.442  0.336  0.489
Las Vegas   0.510  0.421  0.483  0.390  0.472

These results suggest that a large proportion of users (close to one half, even for the high threshold value θ = 1) deviate from the prior average. This reinforces the idea that users are more likely to submit a report when they believe they have something distinctive to add to the current stream of opinions for some feature. Such conclusions are in total agreement with prior evidence that the distribution of reports often follows bi-modal, U-shaped distributions.

5. MODELLING THE BEHAVIOR OF RATERS

To account for the observations described in the previous sections, we propose a model for the behavior of users when submitting online reviews. For a given hotel, we make the assumption that the quality experienced by the users is normally distributed around some value vf, which represents the objective quality offered by the hotel on feature f. The rating submitted by user i on feature f is:

r̂i_f = δf * vi_f + (1 - δf) * sign(vi_f - ef(i)) * [c + d(vi_f, ef(i) | wi_f)]   (2)

where:

• vi_f is the (unknown) quality actually experienced by the user; vi_f is assumed normally distributed around some value vf;

• δf ∈ [0, 1] can be seen as a measure of the bias when reporting
feedback. High values reflect the fact that users rate objectively, without being influenced by prior expectations. The value of δf may depend on various factors; we fix one value for each feature f;

• c is a constant between 1 and 5;

• wi_f is the weight of feature f in the textual comment of review i, computed according to Eq. (1);

• d(vi_f, ef(i) | wi_f) is a distance function between the expectation and the observation of user i. The distance function satisfies the following properties:
- d(y, z|w) ≥ 0 for all y, z ∈ [0, 5], w ∈ [0, 1];
- |d(y, z|w)| < |d(z, x|w)| if |y - z| < |z - x|;
- |d(y, z|w1)| < |d(y, z|w2)| if w1 < w2;
- c + d(vf, ef(i) | wi_f) ∈ [1, 5].

The second term of Eq. (2) encodes the bias of the rating: the larger the distance between the true observation vi_f and the expectation ef(i), the larger the bias.

5.1 Model Validation

We use the data set of TripAdvisor reviews to validate the behavior model presented above. For convenience, we split the rating values into three ranges: bad (B = {1, 2}), indifferent (I = {3, 4}), and good (G = {5}), and perform the following two tests:

• First, we use our model to predict the ratings that have extremal values. For every hotel, we take the sequence of reports, and whenever we encounter a rating that is either good or bad (but not indifferent), we try to predict it using Eq. (2).

• Second, instead of predicting the value of extremal ratings, we try to classify them as either good or bad. For every hotel, we take the sequence of reports and classify each report (regardless of its value) as good or bad.

However, to perform these tests, we need to estimate the objective value vf, that is, the average of the true quality observations vi_f. The algorithm we use is based on the intuition that the amount of conformity rating is minimized. In other words, the value vf should be such that, as often as possible, bad ratings
follow expectations above vf and good ratings follow expectations below vf. Formally, we define the sets:

Γ1 = {i | ef(i) < vf and ri_f ∈ B};   Γ2 = {i | ef(i) > vf and ri_f ∈ G};

which correspond to irregularities where, even though the expectation at point i is lower than the delivered value, the rating is poor, and vice versa. We define vf as the value that minimizes the union of the two sets:

vf = arg min |Γ1 ∪ Γ2|   (3)

In Eq. (2) we replace vi_f by the value vf computed from Eq. (3), and use the following distance function:

d(vf, ef(i) | wi_f) = (|vf - ef(i)| / (vf - ef(i))) * |vf^2 - ef(i)^2| * (1 + 2 wi_f);

The constant c ∈ I was set to min{max{ef(i), 3}, 4}. The values of δf were fixed at {0.7, 0.7, 0.8, 0.7, 0.6} for the features {Overall, Rooms, Service, Cleanliness, Value}, respectively. The weights are computed as described in Section 3.

As a first experiment, we take the sets of extremal ratings {ri_f | ri_f ∉ I} for each hotel and feature. For every such rating ri_f, we estimate it by computing r̂i_f using Eq. (2). We compare this estimator with the one obtained by simply averaging the ratings over all hotels and features, i.e., r̄f = (Σ_{j: rj_f ≠ 0} rj_f) / (Σ_{j: rj_f ≠ 0} 1). Table 7 presents the ratio between the root mean square errors (RMSE) obtained when using r̂i_f and r̄f to estimate the actual ratings. In all cases the estimate produced by our model is better than the simple average.

Table 7: Average of RMSE(r̂f) / RMSE(r̄f)

City        O      R      S      C      V
Boston      0.987  0.849  0.879  0.776  0.913
Sydney      0.927  0.817  0.826  0.720  0.681
Las Vegas   0.952  0.870  0.881  0.947  0.904

As a second experiment, we try to distinguish the sets Bf = {i | ri_f ∈ B} and Gf = {i | ri_f ∈ G} of bad, respectively good, ratings on the feature f.
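A compact sketch of this model, under our reading of the validation choices above. The grouping of terms in Eq. (2) is partly ambiguous in the extracted text, so the form of the bias term below, and the final clamp to the 1-5 rating scale, are our assumptions:

```python
def predict_rating(v_f, e_fi, w, delta):
    """Predicted rating for true quality v_f, prior expectation e_fi,
    feature weight w, and objectivity parameter delta (Eq. (2) sketch)."""
    gap = v_f - e_fi
    sign = (gap > 0) - (gap < 0)          # sign(0) = 0 drops the bias term
    c = min(max(e_fi, 3.0), 4.0)          # the constant c used in validation
    d = abs(v_f**2 - e_fi**2) * (1.0 + 2.0 * w)
    raw = delta * v_f + (1.0 - delta) * sign * (c + d)
    return min(5.0, max(1.0, raw))        # clamp to the rating scale

def estimate_v(ratings, exps, grid=None):
    """Eq. (3) sketch: choose v_f minimizing |Gamma1 union Gamma2| -- the
    number of bad ratings after low expectations plus good ratings after
    high expectations -- over a grid of candidate values."""
    if grid is None:
        grid = [x / 10.0 for x in range(10, 51)]   # candidates 1.0 .. 5.0
    def irregularities(v):
        gamma1 = sum(1 for r, e in zip(ratings, exps) if e < v and r <= 2)
        gamma2 = sum(1 for r, e in zip(ratings, exps) if e > v and r == 5)
        return gamma1 + gamma2
    return min(grid, key=irregularities)
```

For example, a true quality of 4 against an expectation of 3 with δ = 0.7 pushes the predicted rating to the top of the scale, while the reverse gap pushes it to the bottom.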
For example, we compute the set Bf using the following classifier (called σ):

ri_f ∈ Bf (σf(i) = 1) ⇔ r̂i_f ≤ 4;

Tables 8, 9 and 10 present the Precision (p), Recall (r) and s = 2pr/(p+r) for classifier σ, and compare it with a naive majority classifier τ, where τf(i) = 1 ⇔ |Bf| ≥ |Gf|. We see that recall is always higher for σ, while precision is usually slightly worse. For the s metric, σ tends to add a 1-20% improvement over τ, much higher in some cases for hotels in Sydney. This is likely because Sydney reviews are more positive than those of the American cities, and cases where the number of bad reviews exceeds the number of good ones are rare. Replacing the test algorithm with one that outputs 1 with probability equal to the proportion of bad reviews improves its results for this city, but it is still outperformed by around 80%.

Table 8: Precision (p), Recall (r), s = 2pr/(p+r) when spotting poor ratings for Boston

           O      R      S      C      V
σ   p      0.678  0.670  0.573  0.545  0.610
    r      0.626  0.659  0.619  0.612  0.694
    s      0.651  0.665  0.595  0.577  0.609
τ   p      0.684  0.706  0.647  0.611  0.633
    r      0.597  0.541  0.410  0.383  0.562
    s      0.638  0.613  0.502  0.471  0.595

Table 9: Precision (p), Recall (r), s = 2pr/(p+r) when spotting poor ratings for Las Vegas

           O      R      S      C      V
σ   p      0.654  0.748  0.592  0.712  0.583
    r      0.608  0.536  0.791  0.474  0.610
    s      0.630  0.624  0.677  0.569  0.596
τ   p      0.685  0.761  0.621  0.748  0.606
    r      0.542  0.505  0.767  0.445  0.441
    s      0.605  0.607  0.670  0.558  0.511

6. SUMMARY OF RESULTS AND CONCLUSION

The goal of this paper is to explore the factors that drive a user to submit a particular rating, rather than the incentives that encouraged him to submit a report in the first place. For that we use two additional sources of information besides the vector of numerical ratings: first, we look at the textual comments that accompany the reviews; second, we consider the reports that have been previously submitted by other users. Using
simple natural language processing algorithms, we were able to establish a correlation between the weight of a certain feature in the textual comment accompanying a review and the noise present in the numerical rating. Specifically, it seems that users who amply discuss a certain feature are likely to agree on a common rating. This observation allows the construction of feature-by-feature estimators of quality that have lower variance and are hopefully less noisy. Nevertheless, further evidence is required to support the intuition that ratings corresponding to high weights are expert opinions that deserve to be given higher priority when computing estimates of quality.

Second, we emphasize the dependence of ratings on previous reports. Previous reports create an expectation of quality which affects the subjective perception of the user. We validate two facts about the hotel reviews we collected from TripAdvisor. First, the ratings following low expectations (where the expectation is computed as the average of the previous reports) are likely to be higher than the ratings following high expectations. Intuitively, the perception of quality (and consequently the rating) depends on how well the actual experience of the user meets her expectation. Second, we include evidence from the textual comments, and find that when users devote a large fraction of the text to discussing a certain feature, they are likely to motivate a divergent rating (i.e., a rating that does not conform to the prior expectation). Intuitively, this supports the hypothesis that review forums act as discussion groups where users are keen on presenting and motivating their own opinions.

Table 10: Precision (p), Recall (r), s = 2pr/(p+r) when spotting poor ratings for Sydney

           O      R      S      C      V
σ   p      0.650  0.463  0.544  0.550  0.580
    r      0.234  0.378  0.571  0.169  0.592
    s      0.343  0.452  0.557  0.259  0.586
τ   p      0.562  0.615  0.600  0.500  0.600
    r      0.054  0.098  0.101  0.015  0.175
    s      0.098  0.168  0.172  0.030  0.271

We
have captured the empirical evidence in a behavior model that predicts the ratings submitted by the users. The final rating depends, as expected, on the true observation, and on the gap between the observation and the expectation. The gap tends to have a bigger influence when an important fraction of the textual comment is dedicated to discussing a certain feature. The proposed model was validated on the empirical data and provides better estimates of the ratings actually submitted.

One assumption that we make is the existence of an objective quality value vf for the feature f. This is rarely true, especially over large spans of time. Other explanations might account for the correlation of ratings with past reports. For example, if ef(i) reflects the true value of f at a point in time, the difference in the ratings following high and low expectations can be explained by hotel revenue models that are maximized when the value is modified accordingly. However, the idea that variation in ratings is not primarily a function of variation in value turns out to be a useful one. Our approach to approximating this elusive "objective value" is by no means perfect, but conforms neatly to the idea behind the model.

A natural direction for future work is to examine concrete applications of our results. Significant improvements of quality estimates are likely to be obtained by incorporating all empirical evidence about rating behavior. Exactly how different factors affect the decisions of the users is not clear; the answer might depend on the particular application, context and culture.

7. REFERENCES

[1] A. Admati and P. Pfleiderer. Noisytalk.com: Broadcasting opinions in a noisy environment. Working Paper 1670R, Stanford University, 2000.
[2] B. Pang, L. Lee, and S.
Vaithyanathan. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP-02, the Conference on Empirical Methods in Natural Language Processing, 2002.
[3] H. Cui, V. Mittal, and M. Datar. Comparative Experiments on Sentiment Classification for Online Product Reviews. In Proceedings of AAAI, 2006.
[4] K. Dave, S. Lawrence, and D. Pennock. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of the 12th International Conference on the World Wide Web (WWW03), 2003.
[5] C. Dellarocas, N. Awad, and X. Zhang. Exploring the Value of Online Product Ratings in Revenue Forecasting: The Case of Motion Pictures. Working paper, 2006.
[6] C. Forman, A. Ghose, and B. Wiesenfeld. A Multi-Level Examination of the Impact of Social Identities on Economic Transactions in Electronic Markets. Available at SSRN: http://ssrn.com/abstract=918978, July 2006.
[7] A. Ghose, P. Ipeirotis, and A. Sundararajan. Reputation Premiums in Electronic Peer-to-Peer Markets: Analyzing Textual Feedback and Network Structure. In Third Workshop on Economics of Peer-to-Peer Systems (P2PECON), 2005.
[8] A. Ghose, P. Ipeirotis, and A. Sundararajan. The Dimensions of Reputation in Electronic Markets. Working Paper CeDER-06-02, New York University, 2006.
[9] A. Harmon. Amazon Glitch Unmasks War of Reviewers. The New York Times, February 14, 2004.
[10] D. Houser and J. Wooders. Reputation in Auctions: Theory and Evidence from eBay. Journal of Economics and Management Strategy, 15:353-369, 2006.
[11] M. Hu and B. Liu. Mining and summarizing customer reviews. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD04), 2004.
[12] N. Hu, P. Pavlou, and J. Zhang. Can Online Reviews Reveal a Product's True Quality? In Proceedings of the ACM Conference on Electronic Commerce (EC 06), 2006.
[13] K. Kalyanam and S.
McIntyre.\nReturn on reputation in the online auction market.\nWorking Paper 02\/03-10-WP, Leavey School of Business, Santa Clara University, 2001.\n[14] L. Khopkar and P. Resnick.\nSelf-Selection, Slipping, Salvaging, Slacking, and Stoning: the Impacts of Negative Feedback at eBay.\nIn Proceedings of the ACM Conference on Electronic Commerce (EC 05), 2005.\n[15] M. Melnik and J. Alm.\nDoes a seller's reputation matter?\nEvidence from eBay auctions.\nJournal of Industrial Economics, 50(3):337-350, 2002.\n[16] R. Olshavsky and J. Miller.\nConsumer Expectations, Product Performance and Perceived Product Quality.\nJournal of Marketing Research, 9:19-21, February 1972.\n[17] A. Parasuraman, V. Zeithaml, and L. Berry.\nA Conceptual Model of Service Quality and Its Implications for Future Research.\nJournal of Marketing, 49:41-50, 1985.\n[18] A. Parasuraman, V. Zeithaml, and L. Berry.\nSERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality.\nJournal of Retailing, 64:12-40, 1988.\n[19] P. Pavlou and A. Dimoka.\nThe Nature and Role of Feedback Text Comments in Online Marketplaces: Implications for Trust Building, Price Premiums, and Seller Differentiation.\nInformation Systems Research, 17(4):392-414, 2006.\n[20] A. Popescu and O. Etzioni.\nExtracting product features and opinions from reviews.\nIn Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, 2005.\n[21] R. Teas.\nExpectations, Performance Evaluation, and Consumers' Perceptions of Quality.\nJournal of Marketing, 57:18-34, 1993.\n[22] E. White.\nChatting a Singer Up the Pop Charts.\nThe Wall Street Journal, October 15, 1999.\nAPPENDIX A. 
LIST OF WORDS, LR, ASSOCIATED TO THE FEATURE ROOMS All words serve as prefixes: room, space, interior, decor, ambiance, atmosphere, comfort, bath, toilet, bed, building, wall, window, private, temperature, sheet, linen, pillow, hot water, cold water, shower, lobby, furniture, carpet, air, condition, mattress, layout, design, mirror, ceiling, lighting, lamp, sofa, chair, dresser, wardrobe, closet","lvl-3":"Understanding User Behavior in Online Feedback Reporting\nABSTRACT\nOnline reviews have become increasingly popular as a way to judge the quality of various products and services.\nPrevious work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult.\nIn this paper, we investigate underlying factors that influence user behavior when reporting feedback.\nWe look at two sources of information besides numerical ratings: linguistic evidence from the textual comment accompanying a review, and patterns in the time sequence of reports.\nWe first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature.\nSecond, we show that a user's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews.\nBoth give us a less noisy way to produce rating estimates and reveal the reasons behind user bias.\nOur hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website.\n1.\nMOTIVATIONS\nThe spread of the internet has made it possible for online feedback forums (or reputation mechanisms) to become an important channel for word-of-mouth regarding products, services or other types of commercial interactions.\nNumerous empirical studies [10, 15, 13, 5] show that buyers seriously consider online feedback when making purchasing decisions, and are willing to pay reputation premiums for products or services that have a good reputation.\nRecent 
analysis, however, raises important questions regarding the ability of existing forums to reflect the real quality of a product.\nIn the absence of clear incentives, users with a moderate outlook will not bother to voice their opinions, which leads to an unrepresentative sample of reviews.\nFor example, [12, 1] show that Amazon's ratings of books or CDs are very likely to follow bi-modal, U-shaped distributions where most of the ratings are either very good, or very bad.\nControlled experiments, on the other hand, reveal opinions on the same items that are normally distributed.\nUnder these circumstances, using the arithmetic mean to predict quality (as most forums actually do) gives the typical user a high-variance estimator that is often misleading.\nImproving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users.\nHu et al. [12] propose the \"Brag-and-Moan Model\" where users rate only if their utility of the product (drawn from a normal distribution) falls outside a median interval.\nThe authors conclude that the model explains the empirical distribution of reports, and offers insights into smarter ways of estimating the true quality of the product.\nIn the present paper we extend this line of research, and attempt to explain further facts about the behavior of users when reporting online feedback.\nUsing actual hotel reviews from the TripAdvisor website, we consider two additional sources of information besides the basic numerical ratings submitted by users.\nThe first is simple linguistic evidence from the textual review that usually accompanies the numerical ratings.\nWe use text-mining techniques similar to [7] and [3]; however, we are only interested in identifying what aspects of the service the user is discussing, without computing the semantic orientation of the text.\nWe find that users who comment more on the same feature are more likely to 
agree on a common numerical rating for that particular feature.\nIntuitively, lengthy comments reveal the importance of the feature to the user.\nSince people tend to be more knowledgeable in the aspects they consider important, users who discuss a given feature in more detail might be assumed to have more authority in evaluating that feature.\nSecond, we investigate the relationship between a review and the reviews that preceded it.\nFigure 1: The TripAdvisor page displaying reviews for a popular Boston hotel.\nThe name of the hotel and advertisements were deliberately erased.\nA perusal of online reviews shows that ratings are often part of discussion threads, where one post is not necessarily independent of other posts.\nOne may see, for example, users who make an effort to contradict, or vehemently agree with, the remarks of previous users.\nBy analyzing the time sequence of reports, we conclude that past reviews influence future reports, as they create some prior expectation regarding the quality of service.\nThe subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [17, 18, 16, 21], and this gap is later reflected in the user's rating.\nWe propose a model that captures the dependence of ratings on prior expectations, and validate it using the empirical data we collected.\nBoth results can be used to improve the way reputation mechanisms aggregate the information from individual reviews.\nOur first result can be used to determine a feature-by-feature estimate of quality, where for each feature, a different subset of reviews (i.e., those with lengthy comments on that feature) is considered.\nThe second leads to an algorithm that outputs a more precise estimate of the real quality.\n2.\nTHE DATA SET\n2.1 Formal notation\n3.\nEVIDENCE FROM TEXTUAL COMMENTS\n4.\nTHE INFLUENCE OF PAST RATINGS\n4.1 Prior Expectations\n4.2 Impact of textual comments on quality expectation\n4.3 Reporting 
Incentives\n5.\nMODELLING THE BEHAVIOR OF RATERS\n5.1 Model Validation\n6.\nSUMMARY OF RESULTS AND CONCLUSION\n7.\nREFERENCES\nAPPENDIX A. LIST OF WORDS, LR, ASSOCIATED TO THE FEATURE ROOMS","lvl-4":"Understanding User Behavior in Online Feedback Reporting\nABSTRACT\nOnline reviews have become increasingly popular as a way to judge the quality of various products and services.\nPrevious work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult.\nIn this paper, we investigate underlying factors that influence user behavior when reporting feedback.\nWe look at two sources of information besides numerical ratings: linguistic evidence from the textual comment accompanying a review, and patterns in the time sequence of reports.\nWe first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature.\nSecond, we show that a user's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews.\nBoth give us a less noisy way to produce rating estimates and reveal the reasons behind user bias.\nOur hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website.\n1.\nMOTIVATIONS\nBuyers seriously consider online feedback when making purchasing decisions, and are willing to pay reputation premiums for products or services that have a good reputation.\nRecent analysis, however, raises important questions regarding the ability of existing forums to reflect the real quality of a product.\nIn the absence of clear incentives, users with a moderate outlook will not bother to voice their opinions, which leads to an unrepresentative sample of reviews.\nUnder these circumstances, using the arithmetic mean to predict quality (as most forums actually do) gives the typical user a high-variance estimator that is often misleading.\nImproving the way we aggregate the 
information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users.\nHu et al. [12] propose the \"Brag-and-Moan Model\" where users rate only if their utility of the product (drawn from a normal distribution) falls outside a median interval.\nThe authors conclude that the model explains the empirical distribution of reports, and offers insights into smarter ways of estimating the true quality of the product.\nIn the present paper we extend this line of research, and attempt to explain further facts about the behavior of users when reporting online feedback.\nUsing actual hotel reviews from the TripAdvisor website, we consider two additional sources of information besides the basic numerical ratings submitted by users.\nThe first is simple linguistic evidence from the textual review that usually accompanies the numerical ratings.\nWe find that users who comment more on the same feature are more likely to agree on a common numerical rating for that particular feature.\nIntuitively, lengthy comments reveal the importance of the feature to the user.\nSince people tend to be more knowledgeable in the aspects they consider important, users who discuss a given feature in more detail might be assumed to have more authority in evaluating that feature.\nSecond, we investigate the relationship between a review and the reviews that preceded it.\nFigure 1: The TripAdvisor page displaying reviews for a popular Boston hotel.\nThe name of the hotel and advertisements were deliberately erased.\nA perusal of online reviews shows that ratings are often part of discussion threads, where one post is not necessarily independent of other posts.\nOne may see, for example, users who make an effort to contradict, or vehemently agree with, the remarks of previous users.\nBy analyzing the time sequence of reports, we conclude that past reviews influence future reports, as they create some prior expectation regarding the 
quality of service.\nThe subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [17, 18, 16, 21], and this gap is later reflected in the user's rating.\nWe propose a model that captures the dependence of ratings on prior expectations, and validate it using the empirical data we collected.\nBoth results can be used to improve the way reputation mechanisms aggregate the information from individual reviews.\nOur first result can be used to determine a feature-by-feature estimate of quality, where for each feature, a different subset of reviews (i.e., those with lengthy comments on that feature) is considered.\nThe second leads to an algorithm that outputs a more precise estimate of the real quality.","lvl-2":"Understanding User Behavior in Online Feedback Reporting\nABSTRACT\nOnline reviews have become increasingly popular as a way to judge the quality of various products and services.\nPrevious work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult.\nIn this paper, we investigate underlying factors that influence user behavior when reporting feedback.\nWe look at two sources of information besides numerical ratings: linguistic evidence from the textual comment accompanying a review, and patterns in the time sequence of reports.\nWe first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature.\nSecond, we show that a user's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews.\nBoth give us a less noisy way to produce rating estimates and reveal the reasons behind user bias.\nOur hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website.\n1.\nMOTIVATIONS\nThe spread of the internet has made it possible for online feedback forums (or reputation 
mechanisms) to become an important channel for word-of-mouth regarding products, services or other types of commercial interactions.\nNumerous empirical studies [10, 15, 13, 5] show that buyers seriously consider online feedback when making purchasing decisions, and are willing to pay reputation premiums for products or services that have a good reputation.\nRecent analysis, however, raises important questions regarding the ability of existing forums to reflect the real quality of a product.\nIn the absence of clear incentives, users with a moderate outlook will not bother to voice their opinions, which leads to an unrepresentative sample of reviews.\nFor example, [12, 1] show that Amazon's ratings of books or CDs are very likely to follow bi-modal, U-shaped distributions where most of the ratings are either very good, or very bad.\nControlled experiments, on the other hand, reveal opinions on the same items that are normally distributed.\nUnder these circumstances, using the arithmetic mean to predict quality (as most forums actually do) gives the typical user a high-variance estimator that is often misleading.\nImproving the way we aggregate the information available from online reviews requires a deep understanding of the underlying factors that bias the rating behavior of users.\nHu et al. 
[12] propose the \"Brag-and-Moan Model\" where users rate only if their utility of the product (drawn from a normal distribution) falls outside a median interval.\nThe authors conclude that the model explains the empirical distribution of reports, and offers insights into smarter ways of estimating the true quality of the product.\nIn the present paper we extend this line of research, and attempt to explain further facts about the behavior of users when reporting online feedback.\nUsing actual hotel reviews from the TripAdvisor website, we consider two additional sources of information besides the basic numerical ratings submitted by users.\nThe first is simple linguistic evidence from the textual review that usually accompanies the numerical ratings.\nWe use text-mining techniques similar to [7] and [3]; however, we are only interested in identifying what aspects of the service the user is discussing, without computing the semantic orientation of the text.\nWe find that users who comment more on the same feature are more likely to agree on a common numerical rating for that particular feature.\nIntuitively, lengthy comments reveal the importance of the feature to the user.\nSince people tend to be more knowledgeable in the aspects they consider important, users who discuss a given feature in more detail might be assumed to have more authority in evaluating that feature.\nSecond, we investigate the relationship between a review and the reviews that preceded it.\nFigure 1: The TripAdvisor page displaying reviews for a popular Boston hotel.\nThe name of the hotel and advertisements were deliberately erased.\nA perusal of online reviews shows that ratings are often part of discussion threads, where one post is not necessarily independent of other posts.\nOne may see, for example, users who make an effort to contradict, or vehemently agree with, the remarks of previous users.\nBy analyzing the time sequence of reports, we conclude that past reviews influence future 
reports, as they create some prior expectation regarding the quality of service.\nThe subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [17, 18, 16, 21], and this gap is later reflected in the user's rating.\nWe propose a model that captures the dependence of ratings on prior expectations, and validate it using the empirical data we collected.\nBoth results can be used to improve the way reputation mechanisms aggregate the information from individual reviews.\nOur first result can be used to determine a feature-by-feature estimate of quality, where for each feature, a different subset of reviews (i.e., those with lengthy comments on that feature) is considered.\nThe second leads to an algorithm that outputs a more precise estimate of the real quality.\n2.\nTHE DATA SET\nIn this paper we use real hotel reviews collected from the popular travel site TripAdvisor.\nTripAdvisor indexes hotels from cities across the world, along with reviews written by travelers.\nUsers can search the site by giving the hotel's name and location (optional).\nThe reviews for a given hotel are displayed as a list (ordered from the most recent to the oldest), with 5 reviews per page.\nThe reviews contain:\n\u2022 information about the author of the review (e.g., dates of stay, username of the reviewer, location of the reviewer); \u2022 the overall rating (from 1, lowest, to 5, highest); \u2022 a textual review containing a title for the review, free comments, and the main things the reviewer liked and disliked; \u2022 numerical ratings (from 1, lowest, to 5, highest) for different features (e.g., cleanliness, service, location, etc.).\nBelow the name of the hotel, TripAdvisor displays the address of the hotel, general information (number of rooms, number of stars, short description, etc.), the average overall rating, the TripAdvisor ranking, and an average rating for each feature.\nFigure 1 shows the page for a popular Boston hotel whose name (along with advertisements) was deliberately erased.\nWe selected three cities for this study: Boston, Sydney and Las Vegas.\nFor each city we considered all hotels that had at least 10 reviews, and recorded all reviews.\nTable 1 presents the number of hotels considered in each city, the total number of reviews recorded for each city, and the distribution of hotels with respect to the star-rating (as available on the TripAdvisor site).\nNote that not all hotels have a star-rating.\nTable 1: A summary of the data set.\nFor each review we recorded the overall rating, the textual review (title and body of the review) and the numerical rating on 7 features: Rooms (R), Service (S), Cleanliness (C), Value (V), Food (F), Location (L) and Noise (N).\nTripAdvisor does not require users to submit anything other than the overall rating, hence a typical review rates few additional features, regardless of the discussion in the textual comment.\nOnly the features Rooms (R), Service (S), Cleanliness (C) and Value (V) are rated by a significant number of users.\nHowever, we also selected the features Food (F), Location (L) and Noise (N) because they are referred to in a significant number of textual comments.\nFor each feature we record the numerical rating given by the user, or 0 when the rating is missing.\nThe typical length of the textual comment amounts to approximately 200 words.\nAll data was collected by crawling the TripAdvisor site in September 2006.\n2.1 Formal notation\nWe will formally refer to a review by a tuple (r, T) where:\n\u2022 r = (rf) is a vector containing the ratings rf \u2208 {0, 1, ..., 5} for the features f \u2208 F = {O, R, S, C, V, F, L, N}; 
note that, with a slight abuse of notation, the overall rating, rO, is recorded as the rating for the feature Overall (O);\n\u2022 T is the textual comment that accompanies the review.\nReviews are indexed according to the variable i, such that (ri, Ti) is the ith review in our database.\nSince we don't record the username of the reviewer, we will also say that the ith review in our data set was submitted by user i.\nWhen we need to consider only the reviews of a given hotel, h, we will use (ri(h), Ti(h)) to denote the ith review about the hotel h.\n3.\nEVIDENCE FROM TEXTUAL COMMENTS\nThe free textual comments associated with online reviews are a valuable source of information for understanding the reasons behind the numerical ratings left by the reviewers.\nThe text may, for example, reveal concrete examples of aspects that the user liked or disliked, thus justifying some of the high or low ratings, respectively, for certain features.\nThe text may also offer guidelines for understanding the preferences of the reviewer, and the weights of different features when computing an overall rating.\nThe problem, however, is that free textual comments are difficult to read.\nUsers are required to scroll through many reviews and read mostly repetitive information.\nSignificant improvements would be obtained if the reviews were automatically interpreted and aggregated.\nUnfortunately, this seems a difficult task for computers since human users often use witty language, abbreviations, culture-specific phrases, and figurative language.\nNevertheless, several important results use the textual comments of online reviews in an automated way.\nUsing well-established natural language techniques, reviews or parts of reviews can be classified as having a positive or negative semantic orientation.\nPang et al. 
[2] classify movie reviews into positive\/negative by training three different classifiers (Naive Bayes, Maximum Entropy and SVM) using classification features based on unigrams, bigrams or part-of-speech tags.\nDave et al. [4] analyze reviews from CNet and Amazon, and surprisingly show that classification features based on unigrams or bigrams perform better than higher-order n-grams.\nThis result is challenged by Cui et al. [3] who look at large collections of reviews crawled from the web.\nThey show that the size of the data set is important, and that bigger training sets allow classifiers to successfully use more complex classification features based on n-grams.\nHu and Liu [11] also crawl the web for product reviews and automatically identify product attributes that have been discussed by reviewers.\nThey use WordNet to compute the semantic orientation of product evaluations and summarize user reviews by extracting positive and negative evaluations of different product features.\nPopescu and Etzioni [20] analyze a similar setting, but use search engine hit-counts to identify product attributes; the semantic orientation is assigned through the relaxation labeling technique.\nGhose et al. [7, 8] analyze seller reviews from the Amazon secondary market to identify the different dimensions (e.g., delivery, packaging, customer support, etc.) 
of reputation.\nThey parse the text, and tag the part-of-speech for each word.\nFrequent nouns, noun phrases and verbal phrases are identified as dimensions of reputation, while the corresponding modifiers (i.e., adjectives and adverbs) are used to derive numerical scores for each dimension.\nThe enhanced reputation measure correlates better with the pricing information observed in the market.\nPavlou and Dimoka [19] analyze eBay reviews and find that textual comments have an important impact on reputation premiums.\nOur approach is similar to the previously mentioned works, in the sense that we identify the aspects (i.e., hotel features) discussed by the users in the textual reviews.\nHowever, we do not compute the semantic orientation of the text, nor attempt to infer missing ratings.\nWe define the weight, wif, of feature f \u2208 F in the text Ti associated with the review (ri, Ti), as the fraction of Ti dedicated to discussing aspects (both positive and negative) related to feature f.\nWe propose an elementary method to approximate the values of these weights.\nFor each feature we manually construct the word list Lf containing approximately 50 words that are most commonly associated with the feature f.\nThe initial words were selected from reading some of the reviews, and seeing what words coincide with discussion of which features.\nThe list was then extended by adding all thesaurus entries that were related to the initial words.\nFinally, we brainstormed for missing words that would normally be associated with each of the features.\nLet Lf \u2229 Ti be the list of terms common to both Lf and Ti.\nEach term of Lf is counted the number of times it appears in Ti, with two exceptions:\n\u2022 in cases where the user submits a title to the review, we account for the title text by appending it three times to the review text Ti.\nThe intuitive assumption is that the user's opinion is more strongly reflected in the title, rather than in the body of the review.\nFor example, many reviews are accurately summarized by titles such as \"Excellent service, terrible location\" or \"Bad value for money\"; \u2022 certain words that occur only once in the text are counted multiple times if their relevance to that feature is particularly strong.\nThese were 'root' words for each feature (e.g., 'staff' is a root word for the feature Service), and were weighted either 2 or 3.\nEach feature was assigned up to 3 such root words, so almost all words are counted only once.\nThe list of words for the feature Rooms is given for reference in Appendix A.\nThe weight wif is computed as:\nwif = |Lf \u2229 Ti| \/ \u03a3f'\u2208F |Lf' \u2229 Ti|, where |Lf \u2229 Ti| is the number of terms common to Lf and Ti.\nThe weight for the feature Overall was set to min {|Ti| \/ 5000, 1}, where |Ti| is the number of characters in Ti.\nThe following is a TripAdvisor review for a Boston hotel (the name of the hotel is omitted): \"I'll start by saying that [...]\"\nThe numerical ratings associated with this review are rO = 3, rR = 3, rS = 3, rC = 4, rV = 2 for the features Overall (O), Rooms (R), Service (S), Cleanliness (C) and Value (V) respectively.\nThe ratings for the features Food (F), Location (L) and Noise (N) are absent (i.e., rF = rL = rN = 0).\nThe weights wf are computed from the following lists of common terms:\nThe root words 'Staff' and 'Center' were tripled and doubled respectively.\nThe overall weight of the textual review is wO = 0.197.\nThese values account reasonably well for the weights of different features in the discussion of the reviewer.\nOne point to note is that some terms in the lists Lf possess an inherent semantic orientation.\nFor example, the word 'grime' (belonging to the list LC) would be used most often to assert the presence, not the absence, of grime.\nThis is unavoidable, but care was taken to ensure words from both sides of the spectrum were used.\nFor this reason, some lists such as LR contain only nouns of objects that one would typically describe in a room (see Appendix A).\nThe 
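As a rough illustration, the weighting scheme described above can be sketched as follows. This is a minimal sketch: the miniature word lists, the root-word factors, and the exact prefix-matching rule are illustrative assumptions, not the paper's actual 50-word lists.

```python
from collections import Counter
import re

# Hypothetical miniature word lists L_f; the paper's real lists hold
# roughly 50 prefix terms per feature (see Appendix A for Rooms).
WORD_LISTS = {
    "Rooms": ["room", "bed", "bath", "pillow", "mattress"],
    "Service": ["staff", "service", "reception", "friendly"],
    "Location": ["location", "center", "walk", "distance"],
}
# Root words counted with extra weight (the paper uses factors of 2 or 3).
ROOT_WEIGHTS = {"staff": 3, "center": 2}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def feature_weights(title, body):
    """w_f = weighted matches of L_f in T_i, normalized over all features;
    the title is appended three times, as in the paper."""
    tokens = tokenize(" ".join([title] * 3 + [body]))
    counts = Counter(tokens)
    raw = {}
    for feature, words in WORD_LISTS.items():
        total = 0
        for w in words:
            # Terms are matched as prefixes (e.g., "bath" matches "bathroom").
            hits = sum(c for t, c in counts.items() if t.startswith(w))
            total += hits * ROOT_WEIGHTS.get(w, 1)
        raw[feature] = total
    norm = sum(raw.values()) or 1
    return {f: v / norm for f, v in raw.items()}
```

For a review titled "Great staff" whose body mostly mentions the room and the bed, the sketch assigns the largest weight to Service (the tripled title plus the root-word factor) and a smaller weight to Rooms.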
goal of this section is to analyze the influence of the weights wif on the numerical ratings rif.\nIntuitively, users who spent a lot of their time discussing a feature f (i.e., wif is high) had something to say about their experience with regard to this feature.\nObviously, feature f is important for user i.\nSince people tend to be more knowledgeable in the aspects they consider important, our hypothesis is that the ratings rif (corresponding to high weights wif) constitute a subset of \"expert\" ratings for feature f. Figure 2 plots the distribution of the ratings riC(h) with respect to the weights wiC(h) for the cleanliness of a Las Vegas hotel, h. For this hotel, high ratings occur only in reviews that discuss cleanliness very little.\nWhenever cleanliness appears in the discussion, the ratings are low.\nMany hotels exhibit similar rating patterns for various features.\nRatings corresponding to low weights span the whole spectrum from 1 to 5, while the ratings corresponding to high weights are more grouped together (either around good or bad ratings).\nWe therefore make the following hypothesis: HYPOTHESIS 1.\nThe ratings rif corresponding to the reviews where wif is high are more similar to each other than to the overall collection of ratings.\nTo test the hypothesis, we take the entire set of reviews, and feature by feature, we compute the standard deviation of the ratings with high weights, and the standard deviation of the entire set of ratings.\nHigh weights were defined as those belonging to the upper 20% of the weight range for the corresponding feature.\nIf Hypothesis 1 were true, the standard deviation of all ratings should be higher than the standard deviation of the ratings with high weights.\nFigure 2: The distribution of ratings against the weight of the cleanliness feature.\nWe use a standard T-test to measure the significance of the results.\nCity by city and feature by feature, Table 2 presents the average standard deviation of all ratings, 
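The comparison at the core of the Hypothesis 1 test can be sketched as follows. This is a minimal sketch with an illustrative function name and example numbers; the paper additionally averages these deviations per city and feature and applies a T-test.

```python
from statistics import pstdev

def std_all_vs_high(ratings, weights, top_frac=0.2):
    """Standard deviation of all ratings for a feature vs. the standard
    deviation of the ratings whose weight lies in the upper 20% of the
    weight range (the paper's definition of a 'high' weight)."""
    lo, hi = min(weights), max(weights)
    cutoff = hi - top_frac * (hi - lo)
    high = [r for r, w in zip(ratings, weights) if w >= cutoff]
    return pstdev(ratings), pstdev(high)
```

Under Hypothesis 1, the first value (spread of all ratings) should exceed the second (spread of the high-weight ratings), mirroring the pattern reported in Table 2.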
and the average standard deviation of ratings with high weights.\nIndeed, the ratings with high weights have lower standard deviation, and the results are significant at the standard 0.05 significance threshold (although for certain cities taken independently there does not seem to be a significant difference, the results are significant for the entire data set).\nNote that only the features O, R, S, C and V were considered, since for the others (F, L, and N) we didn't have enough ratings.\nTable 2: Average standard deviation for all ratings, and average standard deviation for ratings with high weights.\nIn square brackets, the corresponding p-values for a positive difference between the two.\nHypothesis 1 not only provides some basic understanding regarding the rating behavior of online users, it also suggests some ways of computing better quality estimates.\nWe can, for example, construct a feature-by-feature quality estimate with much lower variance: for each feature we take the subset of reviews that amply discuss that feature, and output as a quality estimate the average rating for this subset.\nInitial experiments suggest that the average feature-by-feature ratings computed in this way are different from the average ratings computed on the whole data set.\nGiven that, indeed, high weights are indicators of \"expert\" opinions, the estimates obtained in this way are more accurate than the current ones.\nNevertheless, the validation of this underlying assumption requires further controlled experiments.\n4.\nTHE INFLUENCE OF PAST RATINGS\nTwo important assumptions are generally made about reviews submitted to online forums.\nThe first is that ratings truthfully reflect the quality observed by the users; the second is that reviews are independent of one another.\nWhile anecdotal evidence [9, 22] challenges the first assumption, in this section, we address the second.\nA perusal of online reviews shows that reviews are often part of discussion threads, 
where users make an effort to contradict, or vehemently agree with the remarks of previous users.\nConsider, for example, the following review: \"I don't understand the negative reviews ... the hotel was a little dark, but that was the style.\nIt was very artsy.\nYes, it was close to the freeway, but in my opinion the sound of an occasional loud car is better than hearing the \"ding ding\" of slot machines all night!\nThe staff on-hand is FABULOUS.\nThe waitresses are great (and *** does not deserve the bad review she got, she was 100% attentive to us!), the bartenders are friendly and professional at the same time ...\" Here, the user was disturbed by previous negative reports, addressed these concerns, and set about trying to correct them.\nNot surprisingly, his ratings were considerably higher than the average ratings up to this point.\nIt seems that TripAdvisor users regularly read the reports submitted by previous users before booking a hotel, or before writing a review.\nPast reviews create some prior expectation regarding the quality of service, and this expectation has an influence on the submitted review.\nWe believe this observation holds for most online forums.\nThe subjective perception of quality is directly proportional to how well the actual experience meets the prior expectation, a fact confirmed by an important line of econometric and marketing research [17, 18, 16, 21].\nThe correlation between the reviews has also been confirmed by recent research on the dynamics of online review forums [6].\n4.1 Prior Expectations\nWe define the prior expectation of user i regarding the feature f as the average of the previously available ratings on the feature f:\nef (i) = \u03a3 {j < i : rjf \u2260 0} rjf \/ |{j < i : rjf \u2260 0}|\nAs a first hypothesis, we assert that the rating rif is a function of the prior expectation ef (i): the higher the expectation ef (i), the lower the rating rif tends to be (Hypothesis 2).\nWe define high and low expectations as those that are above, respectively below, a certain cutoff value \u03b8.\nThe sets of reviews preceded by high, respectively low, expectations\nTable 3: Average ratings 
for reviews preceded by low (first value in the cell) and high (second value in the cell) expectations.\nThe P-values for a positive difference are given square brackets.\nare defined as follows:\nThese sets are specific for each (hotel, feature) pair, and in our experiments we took 0 = 4.\nThis rather high value is close to the average rating across all features across all hotels, and is justified by the fact that our data set contains mostly high quality hotels.\nFor each city, we take all hotels and compute the average ratings in the sets Rhighf and Rlow f (see Table 3).\nThe average rating amongst reviews following low prior expectations is significantly higher than the average rating following high expectations.\nAs further evidence, we consider all hotels for which the function eV (i) (the expectation for the feature Value) has a high value (greater than 4) for some i, and a low value (less than 4) for some other i. Intuitively, these are the hotels for which there is a minimal degree of variation in the timely sequence of reviews: i.e., the cumulative average of ratings was at some point high and afterwards became low, or vice-versa.\nSuch variations are observed for about half of all hotels in each city.\nFigure 3 plots the median (across considered hotels) rating, rV, when ef (i) is not more than x but greater than x \u2212 0.5.\nFigure 3: The ratings tend to decrease as the expectation increases.\nThere are two ways to interpret the function ef (i): \u2022 The expected value for feature f obtained by user i before his experience with the service, acquired by reading reports submitted by past users.\nIn this case, an overly high value for ef (i) would drive the user to submit a negative report (or vice versa), stemming from the difference between the actual value of the service, and the inflated expectation of this value acquired before his experience.\n\u2022 The expected value of feature f for all subsequent visitors of the site, if user i were not to 
submit a report. In this case, the motivation for a negative report following an overly high value of ef is different: user i seeks to correct the expectation of future visitors to the site. Unlike the interpretation above, this does not require the user to derive an a priori expectation for the value of f.
Note that neither interpretation implies that the average up to report i is inversely related to the rating at report i. There might exist a measure of influence exerted by past reports that pushes the user behind report i to submit ratings that to some extent conform with past reports: a low value of ef(i) can influence user i to submit a low rating for feature f because, for example, he fears that submitting a high rating will make him out to be a person with low standards. This, at first, appears to contradict Hypothesis 2. However, such conforming ratings cannot continue indefinitely: once the set of reports projects a sufficiently deflated estimate of vf, future reviewers with comparatively positive impressions will seek to correct the misconception.

4.2 Impact of textual comments on quality expectation
Further insight into the rating behavior of TripAdvisor users can be obtained by analyzing the relationship between the weights wf and the values ef(i). In particular, we examine the following hypothesis:

Hypothesis 3. When a large proportion of the text of a review discusses a certain feature, the difference between the rating for that feature and the average rating up to that point tends to be large.

The intuition behind this claim is that when a user is adamant about voicing his opinion on a certain feature, his opinion differs from the collective opinion of previous postings. This relies on the characteristic of reputation systems as feedback forums in which a user is interested in projecting his opinion, with particular strength when this opinion differs from what he perceives to be the general opinion. To test Hypothesis 3 we 
measure the average absolute difference between the expectation ef(i) and the rating rif when the weight wif is high, respectively low. Weights are classified as high or low by comparing them with certain cutoff values: wif is low if it is smaller than 0.1, while wif is high if it is greater than θf. Different cutoff values were used for different features: θR = 0.4, θS = 0.4, θC = 0.2, and θV = 0.7. Cleanliness has a lower cutoff since it is a feature rarely discussed; Value has a high cutoff for the opposite reason. Results are presented in Table 4.

Table 4: Average of |rif − ef(i)| when weights are high

This demonstrates that when weights are unusually high, users tend to express an opinion that does not conform to the average of previous ratings. As we might expect, for a feature that rarely carries a high weight in the discussion (e.g., Cleanliness), the difference is particularly large. Even though the difference for the feature Value is quite large for Sydney, the P-value is high. This is because only a few reviews discussed Value heavily; the reason could be cultural, or there may simply have been less of a reason to discuss this feature.

4.3 Reporting Incentives
Previous models suggest that users who are not highly opinionated will not choose to voice their opinions [12]. In this section, we extend this model to account for the influence of expectations. The motivation for submitting feedback is due not only to extreme opinions, but also to the difference between the current reputation (i.e., the prior expectation of the user) and the actual experience. Such a rating model produces ratings that most of the time deviate from the current average rating; ratings that confirm the prior expectation will rarely be submitted. We test on our data set the proportion of ratings that attempt to "correct" the current estimate. We define a deviant rating as one that deviates from the current expectation by at least some threshold θ, i.e., |rif − ef(i)| ≥ θ. For each of the three considered cities, the following tables show the proportion of deviant ratings for θ = 0.5 and θ = 1.

Table 5: Proportion of deviant ratings with θ = 0.5
Table 6: Proportion of deviant ratings with θ = 1

The above results suggest that a large proportion of users (close to one half, even for the high threshold value θ = 1) deviate from the prior average. This reinforces the idea that users are more likely to submit a report when they believe they have something distinctive to add to the current stream of opinions for some feature. Such conclusions are in total agreement with prior evidence that the distribution of reports often follows bi-modal, U-shaped distributions.

5. MODELLING THE BEHAVIOR OF RATERS
To account for the observations described in the previous sections, we propose a model for the behavior of users when submitting online reviews. For a given hotel, we make the assumption that the quality experienced by the users is normally distributed around some value vf, which represents the "objective" quality offered by the hotel on the feature f. The rating submitted by user i on feature f is a function of the following quantities:
• vif, the (unknown) quality actually experienced by the user; vif is assumed normally distributed around the value vf;
• δf ∈ [0, 1], which can be seen as a measure of the bias when reporting feedback. High values reflect the fact that users rate objectively, without being influenced by prior expectations. The value of δf may depend on various factors; we fix one value for each feature f;
• c, a constant between 1 and 5;
• wif, the weight of feature f in the textual comment of review i, computed according to Eq. (1);
• d(vif, ef(i) | wif), a distance function between the expectation and the observation of user i. The distance function satisfies the following properties:
  - d(y, z | w) ≥ 0 for all y, z ∈ [0, 5], w ∈ [0, 1];
  - |d(y, z | w)| < |d(z, x | w)| if |y − z| < |z − x|;
  - |d(y, z | w1)| < |d(y, z | w2)| if w1 < w2.

(4) Keep Di in the inverted lists if pr(Di) > τp
Figure 7: Global document pruning based on pr.

... then P(computer) = 0.1. The cost of including I(ti) in the p-index is its size |I(ti)|. Thus, in our greedy approach in Figure 6, we include the I(ti)'s in decreasing order of P(ti)/|I(ti)| as long as |IP| ≤ s · |IF|. Later, in our experiment section, we evaluate what fraction of queries can be handled by IP when we employ this greedy keyword-pruning policy.

4.3 Document pruning
At a high level, document pruning takes advantage of the observation that most users are mainly interested in viewing only the top few answers to a query. Given this, it is unnecessary to keep all postings in an inverted list I(ti), because users will not look at most of the documents in the list anyway. We depict the conceptual diagram of the document pruning policy in Figure 4. In the figure, we vertically prune the postings corresponding to D4, D5 and D6 of t1 and D8 of t3, assuming that these documents are unlikely to be part of the top-k answers to user queries. Again, our goal is to develop a pruning policy such that (1) we can compute the correctness indicator function C from IP alone and (2) we can handle the largest fraction of queries with IP. In the next few sections, we discuss a few alternative approaches to document pruning.

4.3.1 Global PR-based pruning
We first investigate the pruning policy that is commonly used by existing search engines. The basic idea of this pruning policy is that the query-independent quality score pr(D) is a very important factor in computing the final ranking of the document (e.g. 
PageRank is known to be one of the most important factors determining the overall ranking in the search results), so we build the p-index by keeping only those documents whose pr values are high (i.e., pr(D) > τp for a threshold value τp). The hope is that most of the top-ranked results are likely to have high pr(D) values, so the answer computed from this p-index is likely to be similar to the answer computed from the full index. Figure 7 describes this pruning policy more formally: we sort all documents Di by their respective pr(Di) values and keep a Di in the p-index when its pr(Di) value is higher than the global threshold value τp. We refer to this pruning policy as global PR-based pruning (GPR). Variations of this pruning policy are possible. For example, we may adjust the threshold value τp locally for each inverted list I(ti), so that we maintain at least a certain number of postings in each inverted list I(ti). This policy is shown in Figure 8. We refer to it as local PR-based pruning (LPR).

Algorithm 4.4 Local document pruning
Input N: maximum size of a single posting list
Procedure
(1) Foreach I(ti) ∈ IF
(2)   Sort the Di's in I(ti) based on pr(Di)
(3)   If |I(ti)| ≤ N Then keep all Di's
(4)   Else keep the top-N Di's with the highest pr(Di)
Figure 8: Local document pruning based on pr.

Algorithm 4.5 Extended keyword-specific document pruning
Procedure
(1) For each I(ti)
(2)   Keep D ∈ I(ti) if pr(D) > τpi or tr(D, ti) > τti
Figure 9: Extended keyword-specific document pruning based on pr and tr.

Unfortunately, the biggest shortcoming of this policy is that we can prove that we cannot compute the correctness function C from IP alone when IP is constructed this way.

Theorem 3. No PR-based document pruning can provide the result guarantee. □

Proof. Assume we create IP based on the GPR policy (generalizing the proof to LPR is straightforward) and that every document D with pr(D) > τp is included in IP. Assume that the kth entry in the top-k results has a ranking score of r(Dk, q) = fr(tr(Dk, q), pr(Dk)). Now consider another document Dj that was pruned from IP because pr(Dj) < τp. Even so, it is still possible that the document's tr(Dj, q) value is so high that r(Dj, q) = fr(tr(Dj, q), pr(Dj)) > r(Dk, q). Therefore, under a PR-based pruning policy, the quality of the answer computed from IP can be significantly worse than that from IF, and it is not possible to detect this degradation without computing the answer from IF.

In the next section, we propose simple yet essential changes to this pruning policy that allow us to compute the correctness function C from IP alone.

4.3.2 Extended keyword-specific pruning
The main problem with global PR-based document pruning policies is that we do not know the term-relevance score tr(D, ti) of the pruned documents, so a document not in IP may have a higher ranking score than the ones returned from IP because of a high tr score. Here, we propose a new pruning policy, called extended keyword-specific document pruning (EKS), which avoids this problem by pruning based not just on the query-independent pr(D) score but also on the term-relevance score tr(D, ti). That is, for every inverted list I(ti), we pick two threshold values, τpi for pr and τti for tr, such that if a document D ∈ I(ti) satisfies pr(D) > τpi or tr(D, ti) > τti, we include it in I(ti) of IP. Otherwise, we prune it from IP. Figure 9 formally describes this algorithm. The threshold values τpi and τti may be selected in a number of different ways. For example, if pr and tr have equal weight in the final ranking and we want to keep at most N postings in each inverted list I(ti), we may set the two threshold values equal to τi (τpi = τti = τi) and adjust τi such that N postings remain in I(ti). This new pruning policy, when 
combined with a monotonic scoring function, enables us to compute the correctness indicator function C from the pruned index. We use the following example to explain how we may compute C.

Example 4. Consider the query q = {t1, t2} and a monotonic ranking function f(pr(D), tr(D, t1), tr(D, t2)). There are three possible scenarios for how a document D appears in the pruned index IP.
1. D appears in both I(t1) and I(t2) of IP: Since complete information on D appears in IP, we can compute the exact score of D based on the pr(D), tr(D, t1) and tr(D, t2) values in IP: f(pr(D), tr(D, t1), tr(D, t2)).
2. D appears only in I(t1) but not in I(t2): Since D does not appear in I(t2), we do not know tr(D, t2), so we cannot compute its exact ranking score. However, from our pruning criteria, we know that tr(D, t2) cannot be larger than the threshold value τt2. Therefore, from the monotonicity of f (Definition 2), we know that the ranking score of D, f(pr(D), tr(D, t1), tr(D, t2)), cannot be larger than f(pr(D), tr(D, t1), τt2).
3. D does not appear in any list: Since D does not appear at all in IP, we do not know any of the pr(D), tr(D, t1), tr(D, t2) values. However, from our pruning criteria, we know that pr(D) ≤ τp1 and pr(D) ≤ τp2, and that tr(D, t1) ≤ τt1 and tr(D, t2) ≤ τt2. Therefore, from the monotonicity of f, we know that the ranking score of D cannot be larger than f(min(τp1, τp2), τt1, τt2). □

Algorithm 4.6 Computing Answer from IP
Input Query q = {t1, ..., tw}
Output A: top-k result, C: correctness indicator function
Procedure
(1) For each Di ∈ I(t1) ∪ · · · ∪ I(tw)
(2)   For each tm ∈ q
(3)     If Di ∈ I(tm)
(4)       tr∗(Di, tm) = tr(Di, tm)
(5)     Else
(6)       tr∗(Di, tm) = τtm
(7)   f(Di) = f(pr(Di), tr∗(Di, t1), ..., tr∗(Di, tw))
(8) A = top-k Di's with highest f(Di) values
(9) C = 1 if all Di ∈ A appear in all I(ti), ti ∈ q; 0 otherwise
Figure 10: Ranking based on the thresholds τti and τpi.

The above example shows that when a document does not appear in one of the inverted lists I(ti) with ti ∈ q, we cannot compute its exact ranking score, but we can still compute an upper bound on its score by using the threshold value τti for the missing values. This suggests the algorithm in Figure 10, which computes the top-k result A from IP together with the correctness indicator function C. In the algorithm, the correctness indicator function C is set to one only if all documents in the top-k result A appear in all inverted lists I(ti) with ti ∈ q, so that we know their exact scores. In this case, because these documents have scores higher than the upper-bound scores of any other documents, we know that no other documents can appear in the top-k. The following theorem formally proves the correctness of the algorithm. In [11], Fagin et al. provide a similar proof in the context of multimedia middleware.

Theorem 4. Given an inverted index IP pruned by the algorithm in Figure 9, a query q = {t1, ..., tw} and a monotonic ranking function, the top-k result from IP computed by Algorithm 4.6 is the same as the top-k result from IF if C = 1. □

Proof. Let us assume Dk is the kth-ranked document computed from IP according to Algorithm 4.6. For every document Di ∈ IF that is not in the top-k result from IP, there are two possible scenarios. First, Di is not in the final answer because it was pruned from all inverted lists I(tj), 1 ≤ j ≤ w, in IP. In this case, we know that pr(Di) ≤ min1≤j≤w τpj < pr(Dk) and that tr(Di, tj) ≤ τtj < tr(Dk, tj), 1 ≤ j ≤ w. From the monotonicity assumption, it follows that the ranking score of Di is r(Di) < r(Dk); that is, Di's score can never be larger than that of Dk. Second, Di is not in the answer because Di was pruned from some inverted lists, say, I(t1), ... 
, I(tm), in IP. Let us assume r̄(Di) = f(pr(Di), τt1, ..., τtm, tr(Di, tm+1), ..., tr(Di, tw)). Then, from tr(Di, tj) ≤ τtj (1 ≤ j ≤ m) and the monotonicity assumption, we know that r(Di) ≤ r̄(Di). Also, Algorithm 4.6 sets C = 1 only when the top-k documents have scores larger than r̄(Di). Therefore, r(Di) cannot be larger than r(Dk).

Figure 11: Fraction of guaranteed queries f(s) answered in a keyword-pruned p-index of size s.

5. EXPERIMENTAL EVALUATION
In order to perform realistic tests of our pruning policies, we implemented a search engine prototype. For the experiments in this paper, our search engine indexed about 130 million pages, crawled from the Web during March of 2004. The crawl started from the Open Directory's [10] homepage and proceeded in a breadth-first manner. Overall, the total uncompressed size of our crawled Web pages is approximately 1.9 TB, yielding a full inverted index IF of approximately 1.2 TB. For the experiments reported in this section we used a real set of queries issued to Looksmart [22] on a daily basis during April of 2003. After keeping only the queries containing keywords that were present in our inverted index, we were left with a set of about 462 million queries. Within our query set, the average number of terms per query is 2, and 98% of the queries contain at most 5 terms. Some experiments require us to use a particular ranking function. For these, we use a ranking function similar to the one used in [20]. More precisely, our ranking function r(D, q) is

r(D, q) = prnorm(D) + trnorm(D, q)   (3)

where prnorm(D) is the normalized PageRank of D computed from the downloaded pages and trnorm(D, q) is the normalized 
TF.IDF cosine distance of D to q. This function is clearly simpler than the actual functions employed by commercial search engines, but we believe it is adequate for our evaluation, because we are studying the effectiveness not of a ranking function but of the pruning policies.

5.1 Keyword pruning
In our first experiment we study the performance of keyword pruning, described in Section 4.2. More specifically, we apply the algorithm HS of Figure 6 to our full index IF and create a keyword-pruned p-index IP of size s. For the construction of our keyword-pruned p-index we used the query frequencies observed during the first 10 days of our data set. Then, using the remaining 20-day query load, we measured f(s), the fraction of queries handled by IP. According to the algorithm of Figure 5, a query can be handled by IP (i.e., C = 1) if IP includes the inverted lists for all of the query's keywords. We repeated the experiment for varying values of s, picking the keywords greedily as discussed in Section 4.2. The result is shown in Figure 11. The horizontal axis denotes the size s of the p-index as a fraction of the size of IF; the vertical axis shows the fraction f(s) of the queries that the p-index of size s can answer. The results of Figure 11 are very encouraging: we can answer a significant fraction of the queries with a small fraction of the original index. For example, approximately 73% of the queries can be answered using 30% of the original index. We also find that, when we use the keyword pruning policy only, the optimal index size is s = 0.17.

Figure 12: Fraction of guaranteed queries f(s) answered in a document-pruned p-index of size s.
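The greedy keyword-pruning policy of Section 4.2 can be sketched as follows. This is a minimal sketch, not the authors' implementation: queries are represented as tuples of keywords, the posting-list sizes are hypothetical counts, and ties in the greedy order are broken arbitrarily.

```python
from collections import Counter

def keyword_prune(list_sizes, queries, s):
    """Greedy keyword pruning (sketch of algorithm HS): include inverted
    lists I(t) in decreasing order of P(t)/|I(t)| as long as the pruned
    index stays within s * |IF|.  Returns the set of kept keywords."""
    n_queries = len(queries)
    # P(t): fraction of queries that contain keyword t
    freq = Counter(t for q in queries for t in set(q))
    budget = s * sum(list_sizes.values())  # s * |IF|
    kept, used = set(), 0
    for t in sorted(list_sizes,
                    key=lambda t: (freq[t] / n_queries) / list_sizes[t],
                    reverse=True):
        if used + list_sizes[t] > budget:
            break
        kept.add(t)
        used += list_sizes[t]
    return kept

def fraction_guaranteed(kept, queries):
    """f(s): a query is handled by the p-index (C = 1) only if the
    inverted lists for ALL of its keywords survived pruning."""
    return sum(all(t in kept for t in q) for q in queries) / len(queries)
```

The benefit/cost ratio P(t)/|I(t)| favors keywords that occur in many queries but have short posting lists, which is why a small fraction of the index can cover a large fraction of the query load.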
Figure 13: Fraction of queries answered in a document-pruned p-index of size s.

5.2 Document pruning
We continue our experimental evaluation by studying the performance of the various document pruning policies described in Section 4.3. For the document pruning experiments reported here we worked with a 5.5% sample of the whole query set. The reason is merely practical: since we have far fewer machines than a commercial search engine, it would take us about a year of computation to process all 462 million queries. For our first experiment, we generated a document-pruned p-index of size s by using the extended keyword-specific pruning (EKS) of Section 4. Within the p-index we measured the fraction of queries that can be guaranteed (according to Theorem 4) to be correct. We performed the experiment for varying index sizes s, and the result is shown in Figure 12. Based on this figure, we can see that our document pruning algorithm performs well across the scale of index sizes s: for all index sizes larger than 40%, we can guarantee the correct answer for about 70% of the queries. This implies that our EKS algorithm can successfully identify the postings necessary for calculating the top-20 results for 70% of the queries using at least 40% of the full index size. From the figure, we can also see that the optimal index size is s = 0.20 when we use EKS as our pruning policy. We can compare the two pruning schemes, namely keyword pruning and EKS, by contrasting Figures 11 and 12. Our observation is that, if we had to pick one of the two pruning policies, the two seem more or less equivalent for p-index sizes s ≤ 20%. For p-index sizes s > 20%, keyword pruning does a much better job, as it provides a higher 
number of guarantees at any given index size. Later, in Section 5.3, we discuss the combination of the two policies. In our next experiment, we are interested in comparing EKS with the PR-based pruning policies described in Section 4.3. To this end, apart from EKS, we also generated document-pruned p-indexes for the global PR-based pruning (GPR) and the local PR-based pruning (LPR) policies. For each of the policies we created document-pruned p-indexes of varying sizes s. Since GPR and LPR cannot provide a correctness guarantee, we compare the fraction of queries from each policy whose results are identical (i.e., the same results in the same order) to the top-k results calculated from the full index. Here, we report our results for k = 20; the results are similar for other values of k. The results are shown in Figure 13. The horizontal axis shows the size s of the p-index; the vertical axis shows the fraction f(s) of the queries whose top-20 results are identical to the top-20 results of the full index, for a given size s.

Figure 14: Average fraction of the top-20 results of a p-index of size s contained in the top-20 results of the full index.

Figure 15: Combining keyword and document pruning.
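The guarantee check that produces these fractions follows Algorithm 4.6 (Figure 10). The sketch below assumes the simple additive, and hence monotonic, ranking function of Eq. (3); the postings, thresholds, and scores are illustrative values, not data from the paper.

```python
def topk_with_guarantee(index, tau_t, pr, query, k):
    """Sketch of Algorithm 4.6: compute the top-k answer A from a pruned
    index IP together with the correctness indicator C.
    `index` maps term -> {doc: tr score} for the surviving postings,
    `tau_t` maps term -> the tr threshold used by EKS pruning, and
    `pr` maps doc -> its query-independent quality score.
    Ranking function (monotonic): f = pr + sum of per-term tr scores."""
    # Line (1): consider only documents surviving in at least one list
    candidates = {d for t in query for d in index[t]}

    def upper_bound(d):
        # Lines (2)-(7): use the real tr when the posting survives,
        # otherwise the threshold tau_t bounds the missing score
        return pr[d] + sum(index[t].get(d, tau_t[t]) for t in query)

    ranked = sorted(candidates, key=upper_bound, reverse=True)[:k]
    # Lines (8)-(9): C = 1 only if every top-k document appears in every
    # query term's list, i.e., its score is exact and dominates the
    # upper bounds of all other candidates
    C = int(all(d in index[t] for d in ranked for t in query))
    return ranked, C
```

When C = 0, the engine falls back to the full index; counting the queries with C = 1 over a query load yields exactly the guaranteed fraction f(s) plotted in Figure 12.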
By observing Figure 13, we can see that GPR performs the worst of the three policies. On the other hand, EKS picks up early, answering a great fraction of the queries (about 62%) correctly with only 10% of the index size. The fraction of queries that LPR can answer remains below that of EKS until about s = 37%; for any index size larger than 37%, LPR performs best. In the experiment of Figure 13, we applied the strict requirement that the results of the p-index be in the same order as those of the full index. However, in a practical scenario it may be acceptable to have some of the results out of order. Therefore, in our next experiment we measure the fraction of the results coming from a p-index that are contained within the results of the full index. The result of the experiment is shown in Figure 14. The horizontal axis is, again, the size s of the p-index; the vertical axis shows the average fraction of the top-20 results common with the top-20 results from the full index. Overall, Figure 14 shows that EKS and LPR identify the same high (≈ 96%) fraction of results on average for any size s ≥ 30%, with GPR not too far behind.

5.3 Combining keyword and document pruning
In Sections 5.1 and 5.2 we studied the individual performance of our keyword and document pruning schemes. One interesting question, however, is how these policies perform in combination. What fraction of queries can we guarantee if we apply both keyword and document pruning to our full index IF? To answer this question, we performed the following experiment. We started with the full index IF and applied keyword pruning to create an index IhP of size sh · 100% of IF. After that, we further applied document pruning to IhP, creating our final p-index IP of size sv · 100% of IhP. We then calculated the fraction of guaranteed queries in IP. We repeated the experiment for different values of sh and sv. The result is shown in Figure 
15. The x-axis shows the index size sh after applying keyword pruning; the y-axis shows the index size sv after applying document pruning; the z-axis shows the fraction of guaranteed queries after the two prunings. For example, the point (0.2, 0.3, 0.4) means that if we apply keyword pruning and keep 20% of IF, and subsequently apply document pruning to the resulting index keeping 30% of it (thus creating a p-index of size 20% · 30% = 6% of IF), we can guarantee 40% of the queries. By observing Figure 15, we can see that for p-index sizes smaller than 50% our combined pruning does relatively well. For example, by performing 40% keyword and 40% document pruning (which translates to a pruned index with s = 0.16) we can provide a guarantee for about 60% of the queries. In Figure 15, we also observe a plateau for sh > 0.5 and sv > 0.5. For this combined pruning policy, the optimal index size is s = 0.13, with sh = 0.46 and sv = 0.29.

6. RELATED WORK
[3, 30] provide a good overview of inverted indexing in Web search engines and IR systems. Experimental studies and analyses of various partitioning schemes for an inverted index are presented in [6, 23, 33]. The pruning algorithms that we have presented in this paper are independent of the partitioning scheme used. The works in [1, 5, 7, 20, 27] are the most closely related to ours, as they describe pruning techniques based on the idea of keeping the postings that contribute the most to the final ranking. However, [1, 5, 7, 27] do not consider any query-independent quality measure (such as PageRank) in the ranking function. [32] presents a generic framework for computing approximate top-k answers with some probabilistic bounds on the quality of the results. Our work essentially extends [1, 2, 4, 7, 20, 27, 31] by proposing mechanisms for providing a correctness guarantee for the computed top-k results. Search engines use various methods of caching as a means of reducing the cost associated with queries [18, 19, 21, 31]. This thread of work is also orthogonal to ours, because a caching scheme may operate on top of our p-index in order to minimize the answer computation cost. The exact ranking functions employed by current search engines are closely guarded secrets. In general, however, the rankings are based on query-dependent relevance and query-independent document quality. Query-dependent relevance can be calculated in a variety of ways (see [3, 30]). Similarly, there are a number of works that measure the quality of documents, typically as captured through link-based analysis [17, 28, 26]. Since our work does not assume a particular form of ranking function, it is complementary to this body of work. There has been a great body of work on top-k result calculation. The main idea is to either stop the traversal of the inverted lists early, or to shrink the lists by pruning postings from them [14, 4, 11, 8]. Our proof for the correctness indicator function was primarily inspired by [12].

7. CONCLUDING REMARKS
Web search engines typically prune their large-scale inverted indexes in order to scale to enormous query loads. While this approach may improve performance, computing the top results from a pruned index can cause a significant degradation in result quality. In this paper, we provided a framework for new pruning techniques and answer computation algorithms that guarantee that the top matching pages are always placed at the top of search results in the correct order. We studied two pruning techniques, namely keyword-based and document-based pruning, as well as their combination. Our experimental results demonstrated that our algorithms can effectively prune an inverted index without degradation in the quality of results. In particular, a keyword-pruned index can guarantee 73% of the queries with a size of 30% of the full index, while a document-pruned index can guarantee 68% of the queries with the same size. When we combine the 
two pruning algorithms we can guarantee 60% of the queries with an index size of 16%.\nIt is our hope that our work will help search engines develop better, faster and more efficient indexes and thus provide for a better user search experience on the Web.\n8.\nREFERENCES [1] V. N. Anh, O. de Kretser, and A. Moffat.\nVector-space ranking with effective early termination.\nIn SIGIR, 2001.\n[2] V. N. Anh and A. Moffat.\nPruning strategies for mixed-mode querying.\nIn CIKM, 2006.\n[3] R. A. Baeza-Yates and B. A. Ribeiro-Neto.\nModern Information Retrieval.\nACM Press \/ Addison-Wesley, 1999.\n[4] N. Bruno, L. Gravano, and A. Marian.\nEvaluating top-k queries over web-accessible databases.\nIn ICDE, 2002.\n[5] S. B\u00a8uttcher and C. L. A. Clarke.\nA document-centric approach to static index pruning in text retrieval systems.\nIn CIKM, 2006.\n[6] B. Cahoon, K. S. McKinley, and Z. Lu.\nEvaluating the performance of distributed architectures for information retrieval using a variety of workloads.\nACM TOIS, 18(1), 2000.\n[7] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y. Maarek, and A. Soffer.\nStatic index pruning for information retrieval systems.\nIn SIGIR, 2001.\n[8] S. Chaudhuri and L. Gravano.\nOptimizing queries over multimedia repositories.\nIn SIGMOD, 1996.\n[9] T. H. Cormen, C. E. Leiserson, and R. L. Rivest.\nIntroduction to Algorithms, 2nd Edition.\nMIT Press\/McGraw Hill, 2001.\n[10] Open directory.\nhttp:\/\/www.dmoz.org.\n[11] R. Fagin.\nCombining fuzzy information: an overview.\nIn SIGMOD Record, 31(2), 2002.\n[12] R. Fagin, A. Lotem, and M. Naor.\nOptimal aggregation algorithms for middleware.\nIn PODS, 2001.\n[13] A. Gulli and A. Signorini.\nThe indexable web is more than 11.5 billion pages.\nIn WWW, 2005.\n[14] U. Guntzer, G. Balke, and W. Kiessling.\nTowards efficient multi-feature queries in heterogeneous environments.\nIn ITCC, 2001.\n[15] Z. Gy\u00a8ongyi, H. Garcia-Molina, and J. 
Pedersen. Combating web spam with TrustRank. In VLDB, 2004.
[16] B. J. Jansen and A. Spink. An analysis of web documents retrieved and viewed. In International Conference on Internet Computing, 2003.
[17] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, September 1999.
[18] R. Lempel and S. Moran. Predictive caching and prefetching of query results in search engines. In WWW, 2003.
[19] R. Lempel and S. Moran. Optimizing result prefetching in web search engines with segmented indices. ACM Trans. Inter. Tech., 4(1), 2004.
[20] X. Long and T. Suel. Optimized query execution in large search engines with global page ordering. In VLDB, 2003.
[21] X. Long and T. Suel. Three-level caching for efficient query processing in large web search engines. In WWW, 2005.
[22] LookSmart Inc. http://www.looksmart.com.
[23] S. Melnik, S. Raghavan, B. Yang, and H. Garcia-Molina. Building a distributed full-text index for the web. ACM TOIS, 19(3):217-241, 2001.
[24] A. Ntoulas, J. Cho, and C. Olston. What's new on the web? The evolution of the web from a search engine perspective. In WWW, 2004.
[25] A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly. Detecting spam web pages through content analysis. In WWW, 2006.
[26] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford University.
[27] M. Persin, J. Zobel, and R. Sacks-Davis. Filtered document retrieval with frequency-sorted indexes. Journal of the American Society for Information Science, 47(10), 1996.
[28] M. Richardson and P. Domingos. The intelligent surfer: Probabilistic combination of link and content information in PageRank. In Advances in Neural Information Processing Systems, 2002.
[29] S. Robertson and K. Spärck-Jones. Relevance weighting of search terms. Journal of the American Society for Information Science, 27:129-146, 1976.
[30] G. Salton and M. J.
McGill. Introduction to modern information retrieval. McGraw-Hill, first edition, 1983.
[31] P. C. Saraiva, E. S. de Moura, N. Ziviani, W. Meira, R. Fonseca, and B. Ribeiro-Neto. Rank-preserving two-level caching for scalable search engines. In SIGIR, 2001.
[32] M. Theobald, G. Weikum, and R. Schenkel. Top-k query evaluation with probabilistic guarantees. In VLDB, 2004.
[33] A. Tomasic and H. Garcia-Molina. Performance of inverted indices in shared-nothing distributed text document information retrieval systems. In Parallel and Distributed Information Systems, 1993.

Pruning Policies for Two-Tiered Inverted Index with Correctness Guarantee

ABSTRACT
Web search engines maintain large-scale inverted indexes which are queried thousands of times per second by users eager for information. In order to cope with the vast query loads, search engines prune their index to keep the documents that are likely to be returned as top results, and use this pruned index to compute the first batches of results. While this approach can improve performance by reducing the size of the index, if we compute the top results only from the pruned index we may notice a significant degradation in the result quality: if a document should be in the top results but was not included in the pruned index, it will be placed behind the results computed from the pruned index. Given the fierce competition in the online search market, this phenomenon is clearly undesirable. In this paper, we study how we can avoid any degradation of result quality due to the pruning-based performance optimization, while still realizing most of its benefit. Our contribution is a number of modifications to the pruning techniques for creating the pruned index and a new result computation algorithm that guarantees that the top-matching pages are always placed at the top of the search results, even though we compute the first batch from the pruned index most of the time. We also show how to determine the optimal size of a pruned index, and we experimentally evaluate our algorithms on a collection of 130 million Web pages.

1. INTRODUCTION
The amount of information on the Web is growing at a prodigious rate [24]. According to a recent study [13], it is estimated that the Web currently consists of more than 11 billion pages.

∗ Work done while author was at UCLA Computer Science Department.
† This work is partially supported by NSF grants IIS-0534784, IIS-0347993, and CNS-0626702. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding institutions.

Due to this immense amount of available information, users are becoming more and more dependent on Web search engines for locating relevant information on the Web. Typically, Web search engines, similar to other information retrieval applications, utilize a data structure called an inverted index. An inverted index provides for the efficient retrieval of the documents (or Web pages) that contain a particular keyword. In most cases, a query that the user issues may have thousands or even millions of matching documents. In order to avoid overwhelming the users with a huge number of results, search engines present the results in batches of 10 to 20 relevant documents. The user then looks through the first batch of results and, if she doesn't find the answer she is looking for, she may request to view the next batch or decide to issue a new query. A recent study [16] indicated that approximately 80% of the users examine at most the first 3 batches of the results. That is, 80% of the users typically view at most 30 to 60 results for every query that they issue to a search engine. At the same time, given the size of the Web, the inverted index that the search engines maintain can grow very large. Since the users are interested in a small number of results (and thus are viewing a small portion of the index for every query that they issue), using an index that is capable of returning all the results for a query may constitute a significant waste in terms of time, storage space and computational resources, which is bound to get worse as the Web grows larger over time [24]. One natural solution to this problem is to create a small index on a subset of the
documents that are likely to be returned as the top results (by using, for example, the pruning techniques in [7, 20]) and compute the first batch of answers using the pruned index. While this approach has been shown to give a significant improvement in performance, it also leads to a noticeable degradation in the quality of the search results, because the top answers are computed only from the pruned index [7, 20]. That is, even if a page should be placed as the top-matching page according to a search engine's ranking metric, the page may be placed behind the ones contained in the pruned index if it did not become part of the pruned index for various reasons [7, 20]. Given the fierce competition among search engines today, this degradation is clearly undesirable and needs to be addressed if possible. In this paper, we study how we can avoid any degradation of search quality due to the above performance optimization while still realizing most of its benefit. That is, we present a number of simple (yet important) changes in the pruning techniques for creating the pruned index. Our main contribution is a new answer computation algorithm that guarantees that the top-matching pages (according to the search engine's ranking metric) are always placed at the top of the search results, even though we compute the first batch of answers from the pruned index most of the time. These enhanced pruning techniques and answer-computation algorithms are explored in the context of the cluster architecture commonly employed by today's search engines. Finally, we study and present how search engines can minimize the operational cost of answering queries while providing high-quality search results.

Figure 1: (a) Search engine replicates its full index IF to increase query-answering capacity. (b) In the 1st tier, small p-indexes IP handle most of the queries. When IP cannot answer a query, it is redirected to the 2nd tier, where the full index IF is used to compute the
answer.

2. CLUSTER ARCHITECTURE AND COST SAVINGS FROM A PRUNED INDEX
Typically, a search engine downloads documents from the Web and maintains a local inverted index that is used to answer queries quickly.

Inverted indexes. Assume that we have collected a set of documents D = {D1, ..., DM} and that we have extracted all the terms T = {t1, ..., tn} from the documents. For every term ti ∈ T we maintain a list I(ti) of the IDs of the documents that contain ti. Every entry in I(ti) is called a posting and can be extended to include additional information, such as how many times ti appears in a document, the positions of ti in the document, whether ti is bold/italic, etc. The set of all the lists I = {I(t1), ..., I(tn)} is our inverted index.

2.1 Two-tier index architecture
Search engines accept an enormous number of queries every day from eager users searching for relevant information. For example, Google is estimated to answer more than 250 million user queries per day. In order to cope with this huge query load, search engines typically replicate their index across a large cluster of machines, as the following example illustrates:

Example 1 Consider a search engine that maintains a cluster of machines as in Figure 1(a). The size of its full inverted index IF is larger than what can be stored in a single machine, so each copy of IF is stored across four different machines. We also suppose that one copy of IF can handle a query load of 1000 queries/sec. Assuming that the search engine gets 5000 queries/sec, it needs to replicate IF five times to handle the load. Overall, the search engine needs to maintain 4 × 5 = 20 machines in its cluster. □

While fully replicating the entire index IF multiple times is a straightforward way to scale to a large number of queries, typical query loads at search engines exhibit certain localities, allowing for a significant reduction in cost by replicating only a small portion of the full
index. In principle, this is typically done by pruning a full index IF to create a smaller, pruned index (or p-index) IP, which contains a subset of the documents that are likely to be returned as top results. Given the p-index, search engines operate by employing a two-tier index architecture as we show in Figure 1(b): all incoming queries are first directed to one of the p-indexes kept in the 1st tier. In the cases where a p-index cannot compute the answer (e.g., it was unable to find enough documents to return to the user), the query is answered by redirecting it to the 2nd tier, where we maintain a full index IF.

(1) (A, C) = ComputeAnswer(q, IP)
(2) If (C = 1) Then
(3)   Return A
(4) Else
(5)   A = ComputeAnswer(q, IF)
(6)   Return A

Figure 2: Computing the answer under the two-tier architecture with the result correctness guarantee.

The following example illustrates the potential reduction in the query-processing cost from employing this two-tier index architecture.

Example 2 Assume the same parameter settings as in Example 1. That is, the search engine gets a query load of 5000 queries/sec, and every copy of an index (both the full index IF and the p-index IP) can handle up to 1000 queries/sec. Also assume that the size of IP is one fourth of IF and thus can be stored on a single machine. Finally, suppose that the p-indexes can handle 80% of the user queries by themselves and only forward the remaining 20% of queries to IF. Under this setting, since all 5000 queries/sec first go to a p-index, five copies of IP are needed in the 1st tier. For the 2nd tier, since 20% (or 1000 queries/sec) are forwarded, we need to maintain one copy of IF to handle the load. Overall we need a total of 9 machines (five machines for the five copies of IP and four machines for one copy of IF). Compared to Example 1, this is a more than 50% reduction in the number of machines. □

The above example demonstrates the potential cost saving achieved by using a p-index. However, the two-tier architecture may
have a significant drawback in terms of its result quality compared to the full replication of IF; given that the p-index contains only a subset of the data of the full index, it is possible that, for some queries, the p-index does not contain the top-ranked document according to the particular ranking criteria used by the search engine and fails to return it as the top page, leading to noticeable quality degradation in search results. Given the fierce competition in the online search market, search engine operators desperately try to avoid any reduction in search quality in order to maximize user satisfaction.

2.2 Correctness guarantee under two-tier architecture
How can we avoid the potential degradation of search quality under the two-tier architecture? Our basic idea is straightforward: we use the top-k result from the p-index only if we know for sure that the result is the same as the top-k result from the full index. The algorithm in Figure 2 formalizes this idea. In the algorithm, when we compute the result from IP (Step 1), we compute not only the top-k result A, but also the correctness indicator function C, defined as follows:

Definition 1 (Correctness indicator function) Given a query q, the p-index IP returns the answer A together with a correctness indicator function C. C is set to 1 if A is guaranteed to be identical (i.e.
same results in the same order) to the result computed from the full index IF. If it is possible that A is different, C is set to 0. □

Note that the algorithm returns the result from IP (Step 3) only when it is identical to the result from IF (condition C = 1 in Step 2). Otherwise, the algorithm recomputes and returns the result from the full index IF (Step 5). Therefore, the algorithm is guaranteed to return the same result as the full replication of IF all the time. Now, the real challenge is to find out (1) how we can compute the correctness indicator function C and (2) how we should prune the index to make sure that the majority of queries are handled by IP alone. A straightforward way to calculate C is to compute the top-k answer both from IP and IF and compare them. This naive solution, however, incurs a cost even higher than the full replication of IF, because the answers are computed twice: once from IP and once from IF.

Question 1 Is there any way to compute the correctness indicator function C only from IP, without computing the answer from IF?

Question 2 How should we prune IF to IP to realize the maximum cost saving?

The effectiveness of Algorithm 2.1 critically depends on how often the correctness indicator function C is evaluated to be 1. If C = 0 for all queries, for example, the answers to all queries will be computed twice, once from IP (Step 1) and once from IF (Step 5), so the performance will be worse than the full replication of IF. What will be the optimal way to prune IF to IP, such that C = 1 for a large fraction of queries? In the next few sections, we try to address these questions.

3. OPTIMAL SIZE OF THE P-INDEX
Intuitively, there exists a clear tradeoff between the size of IP and the fraction of queries that IP can handle: when IP is large and has more information, it will be able to handle more queries, but the cost of maintaining and looking up IP will be higher. When IP is small, on the other hand, the cost for IP will be
smaller, but more queries will be forwarded to IF, requiring us to maintain more copies of IF.
Given this tradeoff, how should we determine the optimal size of IP in order to maximize the cost saving?
To find the answer, we start with a simple example.
Example 3 Again, consider a scenario similar to Example 1, where the query load is 5000 queries/sec, each copy of an index can handle 1000 queries/sec, and the full index spans across 4 machines.
But now, suppose that if we prune IF by 75% to IP1 (i.e., the size of IP1 is 25% of IF), IP1 can handle 40% of the queries (i.e., C = 1 for 40% of the queries).
Also suppose that if IF is pruned by 50% to IP2, IP2 can handle 80% of the queries.
Which one of IP1 and IP2 is preferable for the 1st-tier index?
To find out the answer, we first compute the number of machines needed when we use IP1 for the 1st tier.
At the 1st tier, we need 5 copies of IP1 to handle the query load of 5000 queries/sec.
Since the size of IP1 is 25% of IF (that requires 4 machines), one copy of IP1 requires one machine.
Therefore, the total number of machines required for the 1st tier is 5 × 1 = 5 (5 copies of IP1 with 1 machine per copy).
Also, since IP1 can handle 40% of the queries, the 2nd tier has to handle 3000 queries/sec (60% of the 5000 queries/sec), so we need a total of 3 × 4 = 12 machines for the 2nd tier (3 copies of IF with 4 machines per copy).
Overall, when we use IP1 for the 1st tier, we need 5 + 12 = 17 machines to handle the load.
We can do a similar analysis when we use IP2 and see that a total of 14 machines are needed when IP2 is used.
Given this result, we can conclude that using IP2 is preferable. ❑
The above example shows that the cost of the two-tier architecture depends on two important parameters: the size of the p-index and the fraction of the queries that can be handled by the 1st-tier index alone.
We use s to denote the size of the p-index relative to IF (i.e., if s = 0.2, for
example, the p-index is 20% of the size of IF).
We use f (s) to denote the fraction of the queries that a p-index of size s can handle (i.e., if f (s) = 0.3, 30% of the queries return the value C = 1 from IP).
In general, we can expect that f (s) will increase as s gets larger because IP can handle more queries as its size grows.
In Figure 3, we show an example graph of f (s) over s.
Figure 3: Example function showing the fraction of guaranteed queries f (s) at a given size s of the p-index.
Given this notation, we can state the problem of p-index-size optimization as follows.
In formulating the problem, we assume that the number of machines required to operate a two-tier architecture is roughly proportional to the total size of the indexes necessary to handle the query load.
Problem 1 (Optimal index size) Given a query load Q and the function f (s), find the optimal p-index size s that minimizes the total size of the indexes necessary to handle the load Q. ❑
The following theorem shows how we can determine the optimal index size.
Theorem 1 The cost for handling the query load Q is minimal when the size of the p-index, s, satisfies df (s)/ds = 1. ❑
Proof The proof of this and the following theorems is omitted due to space constraints.
This theorem shows that the optimal point is where the slope of the f (s) curve is 1.
For example, in Figure 3, the optimal size is s = 0.16.
Note that the exact shape of the f (s) graph may vary depending on the query load and the pruning policy.
For example, even for the same p-index, if the query load changes significantly, fewer (or more) queries may be handled by the p-index, decreasing (or increasing) f (s).
Similarly, if we use an effective pruning policy, more queries will be handled by IP than when we use an ineffective pruning policy, increasing f (s).
Therefore, the function f (s) and the optimal index size may change significantly depending on the query load and the pruning policy.
In our later experiments, however, we find that even though the shape of the f (s) graph changes noticeably between experiments, the optimal index size consistently lies between 10% and 30% in most experiments.
4. PRUNING POLICIES
In this section, we show how we should prune the full index IF to IP, so that (1) we can compute the correctness indicator function C from IP itself and (2) we can handle a large fraction of queries by IP.
In designing the pruning policies, we note the following two localities in the users' search behavior:
1. Keyword locality: Although there are many different words in the document collection that the search engine indexes, a few popular keywords constitute the majority of the query loads.
This keyword locality implies that the search engine will be able to answer a significant fraction of user queries even if it can handle only these few popular keywords.
2. Document locality: Even if a query has millions of matching documents, users typically look at only the first few results [16].
Thus, as long as search engines can compute the first few top-k answers correctly, users often will not notice that the search engine actually has not computed the correct answer for the remaining results (unless the users explicitly request them).
Based on the above two localities, we now investigate two different types of pruning policies: (1) a keyword pruning policy, which takes advantage of the keyword locality by pruning the whole inverted list I (ti) for unpopular keywords ti, and (2) a document pruning policy, which takes advantage of the document locality by keeping in each list I (ti) only the few postings that are likely to be included in the top-k results.
As we discussed before, we need to be able to compute the correctness indicator function from the pruned index alone in order to provide the correctness guarantee.
Since the computation of the correctness indicator function may critically depend on the particular ranking function used by a
search engine, we first clarify our assumptions on the ranking function.
4.1 Assumptions on ranking function
Consider a query q = {t1, t2, ..., tw} that contains a subset of the index terms.
The goal of the search engine is to return the documents that are most relevant to query q.
This is done in two steps: first, we use the inverted index to find all the documents that contain the terms in the query.
Second, once we have the relevant documents, we calculate the rank (or score) of each of the documents with respect to the query, and we return to the user the documents that rank the highest.
Most of the major search engines today return documents containing all query terms (i.e. they use AND-semantics).
In order to make our discussion more concise, we will also assume the popular AND-semantics while answering a query.
It is straightforward to extend our results to OR-semantics as well.
The exact ranking function that search engines employ is a closely guarded secret.
What is known, however, is that the factors determining the document ranking can be roughly categorized into two classes:
Query-dependent relevance. This factor captures how relevant the query is to every document.
At a high level, given a document D, for every term ti a search engine assigns a term relevance score tr (D, ti) to D.
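The two-step answering process just described (AND-semantics candidate retrieval over the inverted index, followed by scoring) can be sketched as follows; the toy index, the stand-in document scores and all identifiers are illustrative assumptions rather than the paper's.

```python
# Minimal sketch of answering a query under AND-semantics with an
# inverted index: step 1 intersects the inverted lists to find candidate
# documents; step 2 scores and ranks them.  Toy data throughout.

inverted_index = {              # term -> set of document ids
    'apple': {1, 2, 3},
    'pie':   {2, 3, 4},
}
doc_scores = {1: 0.9, 2: 0.5, 3: 0.8, 4: 0.1}   # stand-in relevance scores

def answer(query_terms, k):
    # Step 1: AND-semantics, so a document must contain every query term.
    lists = [inverted_index.get(t, set()) for t in query_terms]
    candidates = set.intersection(*lists) if lists else set()
    # Step 2: rank the candidates and return the top-k.
    ranked = sorted(candidates, key=lambda d: doc_scores[d], reverse=True)
    return ranked[:k]

top2 = answer(['apple', 'pie'], k=2)
```

Under OR-semantics, step 1 would take a union of the lists instead; the rest of the sketch is unchanged.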
Given the tr (D, ti) scores for every ti, the query-dependent relevance of D to the query, denoted tr (D, q), can be computed by combining the individual term relevance values.
One popular way of calculating the query-dependent relevance is to represent both the document D and the query q using the TF.IDF vector space model [29] and employ a cosine distance metric.
Since the exact form of tr (D, ti) and tr (D, q) differs depending on the search engine, we will not restrict ourselves to any particular form; instead, in order to make our work applicable in the general case, we will make the generic assumption that the query-dependent relevance is computed as a function of the individual term relevance values in the query:
tr (D, q) = ftr (tr (D, t1), ..., tr (D, tw)) (1)
Query-independent document quality. This is a factor that measures the overall "quality" of a document D, independent of the particular query issued by the user.
Popular techniques that compute the general quality of a page include PageRank [26], HITS [17] and the likelihood that the page is a "spam" page [25, 15].
Here, we will use pr (D) to denote this query-independent part of the final ranking function for document D.
The final ranking score r (D, q) of a document will depend on both the query-dependent and query-independent parts of the ranking function.
The exact combination of these parts may be done in a variety of ways.
In general, we can assume that the final ranking score of a document is a function of its query-dependent and query-independent relevance scores.
More formally:
r (D, q) = fr (tr (D, q), pr (D)) (2)
For example, fr (tr (D, q), pr (D)) may take the form fr (tr (D, q), pr (D)) = α · tr (D, q) + (1 − α) · pr (D), thus giving weight α to the query-dependent part and weight 1 − α to the query-independent part.
In Equations 1 and 2 the exact form of fr and ftr can vary depending on the search engine.
Therefore, to make our discussion applicable independent of the particular ranking function used by search engines, in this
paper, we will make only the generic assumption that the ranking function r (D, q) is monotonic on its parameters tr (D, t1), ..., tr (D, tw) and pr (D).
Figure 4: Keyword and document pruning.
Figure 5: Result guarantee in keyword pruning.
Definition 2 A function f (α, β, ..., ω) is monotonic if for all α1 ≥ α2, β1 ≥ β2, ..., ω1 ≥ ω2 it holds that f (α1, β1, ..., ω1) ≥ f (α2, β2, ..., ω2). ❑
Roughly, the monotonicity of the ranking function implies that, between two documents D1 and D2, if D1 has higher query-dependent relevance than D2 and also a higher query-independent score than D2, then D1 should be ranked higher than D2, which we believe is a reasonable assumption in most practical settings.
4.2 Keyword pruning
Given our assumptions on the ranking function, we now investigate the "keyword pruning" policy, which prunes the inverted index IF "horizontally" by removing the whole I (ti)'s corresponding to the least frequent terms.
In Figure 4 we show a graphical representation of keyword pruning, where we remove the inverted lists for t3 and t5, assuming that they do not appear often in the query load.
Note that after keyword pruning, if all keywords {t1, ..., tn} in the query q appear in IP, the p-index has the same information as IF as far as q is concerned.
In other words, if all keywords in q appear in IP, the answer computed from IP is guaranteed to be the same as the answer computed from IF.
Figure 5 formalizes this observation and computes the correctness indicator function C for a keyword-pruned index IP.
It is straightforward to prove that the answer from IP is identical to that from IF if C = 1 in the above algorithm.
We now consider the issue of optimizing IP such that it can handle the largest fraction of queries.
This problem can be formally stated as follows: Problem 2 (Optimal keyword pruning) Given the query load Q and a goal index size s · |IF| for the pruned index, select the inverted lists IP = {I (t1), ..., I (th)} such that |IP|
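The keyword-pruning guarantee formalized in Figure 5, combined with the two-tier answering of Figure 2, can be sketched as follows; the toy index, the choice of popular terms and all identifiers are illustrative assumptions, not the paper's data.

```python
# Sketch of keyword pruning with a correctness guarantee: the p-index
# keeps only the inverted lists of popular terms, and C = 1 exactly when
# every query term still has its list in the p-index (Figure 5).  The
# two-tier algorithm (Figure 2) falls back to the full index when C = 0.

full_index = {
    'apple':  {1, 2, 3},
    'pie':    {2, 3},
    'quince': {3, 4},       # assume 'quince' is rare in the query load
}

def keyword_prune(index, popular_terms):
    # Keep whole inverted lists only for the popular keywords.
    return {t: docs for t, docs in index.items() if t in popular_terms}

def search(index, terms):
    # AND-semantics candidate retrieval (ranking omitted for brevity).
    lists = [index.get(t, set()) for t in terms]
    return set.intersection(*lists) if lists else set()

def p_index_answer(p_index, terms):
    # Correctness indicator: C = 1 iff all query terms appear in IP.
    c = int(all(t in p_index for t in terms))
    return search(p_index, terms), c

def two_tier_answer(p_index, terms):
    answer, c = p_index_answer(p_index, terms)        # Step 1
    if c == 1:                                        # Step 2: guaranteed
        return answer                                 # Step 3
    return search(full_index, terms)                  # Step 5: recompute

p_index = keyword_prune(full_index, popular_terms={'apple', 'pie'})

from_tier1 = two_tier_answer(p_index, ['apple', 'pie'])     # C = 1
from_tier2 = two_tier_answer(p_index, ['apple', 'quince'])  # C = 0, fallback
```

Note that the second query is answered correctly despite the pruning, at the cost of recomputing it from the full index.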
τp for a threshold value τp).
The hope is that most of the top-ranked results are likely to have high pr (D) values, so the answer computed from this p-index is likely to be similar to the answer computed from the full index.
Figure 7 describes this pruning policy more formally, where we sort all documents Di by their respective pr (Di) values and keep a Di in the p-index when its pr (Di) value exceeds the threshold: (1) for each I (ti) ∈ IF, (2) sort the Di's in I (ti) based on pr (Di), and (3) a Di with pr (Di) > τp is included in IP.
Assume that the kth entry in the top-k results has a ranking score of r (Dk, q) = fr (tr (Dk, q), pr (Dk)).
Now consider another document Dj that was pruned from IP because pr (Dj) < τp.
Even so, it is still possible that the document's tr (Dj, q) value is very high, such that r (Dj, q) = fr (tr (Dj, q), pr (Dj)) > r (Dk, q). ■
Therefore, under a PR-based pruning policy, the quality of the answer computed from IP can be significantly worse than that from IF, and it is not possible to detect this degradation without computing the answer from IF.
In the next section, we propose simple yet essential changes to this pruning policy that allow us to compute the correctness function C from IP alone.
4.3.2 Extended keyword-specific pruning
The main problem of global PR-based document pruning policies is that we do not know the term-relevance score tr (D, ti) of the pruned documents, so a document not in IP may have a higher ranking score than the ones returned from IP because of a high tr score.
Here, we propose a new pruning policy, called extended keyword-specific document pruning (EKS), which avoids this problem by pruning not just based on the query-independent pr (D) score but also based on the term-relevance tr (D, ti) score.
That is, for every inverted list I (ti), we pick two threshold values, τpi for pr and τti for tr, such that if a document D ∈ I (ti) satisfies pr (D) > τpi or tr (D, ti) > τti, we include it in I (ti) of
IP.
Otherwise, we prune it from IP.
Figure 9 formally describes this algorithm.
The threshold values, τpi and τti, may be selected in a number of different ways.
For example, if pr and tr have equal weight in the final ranking and if we want to keep at most N postings in each inverted list I (ti), we may want to set the two threshold values equal to τi (τpi = τti = τi) and adjust τi such that N postings remain in I (ti).
This new pruning policy, when combined with a monotonic scoring function, enables us to compute the correctness indicator function C from the pruned index.
We use the following example to explain how we may compute C.
Example 4 Consider the query q = {t1, t2} and a monotonic ranking function, f (pr (D), tr (D, t1), tr (D, t2)).
There are three possible scenarios on how a document D appears in the pruned index IP.
1. D appears in both I (t1) and I (t2) of IP: Since complete information of D appears in IP, we can compute the exact score of D based on the pr (D), tr (D, t1) and tr (D, t2) values in IP: f (pr (D), tr (D, t1), tr (D, t2)).
Figure 10: Ranking based on thresholds trτ (ti) and prτ (ti).
2. D appears only in I (t1) but not in I (t2): Since D does not appear in I (t2), we do not know tr (D, t2), so we cannot compute its exact ranking score.
However, from our pruning criteria, we know that tr (D, t2) cannot be larger than the threshold value τt2.
Therefore, from the monotonicity of f (Definition 2), we know that the ranking score of D, f (pr (D), tr (D, t1), tr (D, t2)), cannot be larger than f (pr (D), tr (D, t1), τt2).
3. D does not appear in any list: Since D does not appear at all in IP, we do not know any of the pr (D), tr (D, t1), tr (D, t2) values.
However, from our pruning criteria, we know that pr (D) < τp1 and pr (D) < τp2, and that tr (D, t1) < τt1 and tr (D, t2) < τt2.
Therefore, from the monotonicity of f, we know that the ranking score of D cannot be
larger than f (min (τp1, τp2), τt1, τt2).
The above example shows that when a document does not appear in one of the inverted lists I (ti) with ti ∈ q, we cannot compute its exact ranking score, but we can still compute its upper-bound score by using the threshold value τti for the missing values.
This suggests the algorithm in Figure 10, which computes the top-k result A from IP together with the correctness indicator function C.
In the algorithm, the correctness indicator function C is set to one only if all documents in the top-k result A appear in all inverted lists I (ti) with ti ∈ q, so we know their exact scores.
In this case, because these documents have scores higher than the upper-bound scores of any other documents, we know that no other documents can appear in the top-k.
The following theorem formally proves the correctness of the algorithm.
In [11], Fagin et al. provide a similar proof in the context of multimedia middleware.
Theorem 4 Given an inverted index IP pruned by the algorithm in Figure 9, a query q = {t1, ..., tw} and a monotonic ranking function, the top-k result from IP computed by Algorithm 4.6 is the same as the top-k result from IF if C = 1. ❑
Proof Let us assume Dk is the kth ranked document computed from IP according to Algorithm 4.6.
For every document Di ∈ IF that is not in the top-k result from IP, there are two possible scenarios: First, Di is not in the final answer because it was pruned from all inverted lists I (tj), 1 ≤ j ≤ w ... 20%, keyword pruning does a much better job as it provides a higher number of guarantees at any given index size.
Later in Section 5.3, we discuss the combination of the two policies.
In our next experiment, we are interested in comparing EKS with the PR-based pruning policies described in Section 4.3.
To this end, apart from EKS, we also generated document-pruned p-indexes for the Global pr-based pruning (GPR) and the Local pr-based pruning (LPR) policies.
For each of the policies we created document-pruned p-indexes of
varying sizes s.
Since GPR and LPR cannot provide a correctness guarantee, we will compare the fraction of queries from each policy that are identical (i.e. the same results in the same order) to the top-k results calculated from the full index.
Here, we will report our results for k = 20; the results are similar for other values of k.
The results are shown in Figure 13.
Figure 14: Average fraction of the top-20 results of a p-index with size s contained in the top-20 results of the full index.
Figure 15: Combining keyword and document pruning (fraction of queries guaranteed for top-20 per fraction of index, using keyword and document pruning).
The horizontal axis shows the size s of the p-index; the vertical axis shows the fraction f (s) of the queries whose top-20 results are identical to the top-20 results of the full index, for a given size s.
By observing Figure 13, we can see that GPR performs the worst of the three policies.
On the other hand, EKS picks up early, answering a large fraction of queries (about 62%) correctly with only 10% of the index size.
The fraction of queries that LPR can answer remains below that of EKS until about s = 37%.
For any index size larger than 37%, LPR performs the best.
In the experiment of Figure 13, we applied the strict definition that the results of the p-index have to be in the same order as those of the full index.
However, in a practical scenario, it may be acceptable to have some of the results out of order.
Therefore, in our next experiment we measure the fraction of the results coming from a p-index that are contained within the results of the full index.
The result of the experiment is shown in Figure 14.
The horizontal axis is, again, the size s of the p-index; the vertical axis shows the average fraction of the top-20 results common with the top-20 results from the full index.
Overall, Figure 14 shows that EKS and LPR identify the same high (≈ 96%) fraction of results on average for any size s
≥ 30%, with GPR not too far behind.
5.3 Combining keyword and document pruning
In Sections 5.1 and 5.2 we studied the individual performance of our keyword and document pruning schemes.
One interesting question, however, is how these policies perform in combination.
What fraction of queries can we guarantee if we apply both keyword and document pruning to our full index IF?
To answer this question, we performed the following experiment.
We started with the full index IF and applied keyword pruning to create an index IhP of size sh · 100% of IF.
After that, we further applied document pruning to IhP, creating our final p-index IP of size sv · 100% of IhP.
We then calculated the fraction of guaranteed queries in IP.
We repeated the experiment for different values of sh and sv.
The result is shown in Figure 15.
The x-axis shows the index size sh after applying keyword pruning; the y-axis shows the index size sv after applying document pruning; the z-axis shows the fraction of guaranteed queries after the two prunings.
For example, the point (0.2, 0.3, 0.4) means that if we apply keyword pruning and keep 20% of IF, and subsequently apply document pruning to the resulting index keeping 30% of it (thus creating a p-index of size 20% · 30% = 6% of IF), we can guarantee 40% of the queries.
By observing Figure 15, we can see that for p-index sizes smaller than 50%, our combined pruning does relatively well.
For example, by performing 40% keyword and 40% document pruning (which translates to a pruned index with s = 0.16) we can provide a guarantee for about 60% of the queries.
In Figure 15, we also observe a "plateau" for sh > 0.5 and sv > 0.5.
For this combined pruning policy, the optimal index size is at s = 0.13, with sh = 0.46 and sv = 0.29.
6. RELATED WORK
[3, 30] provide a good overview of inverted indexing in Web search engines and IR systems.
Experimental studies and analyses of various partitioning schemes for an inverted
index are presented in [6, 23, 33].
The pruning algorithms that we have presented in this paper are independent of the partitioning scheme used.
The works in [1, 5, 7, 20, 27] are the most related to ours, as they describe pruning techniques based on the idea of keeping the postings that contribute the most to the final ranking.
However, [1, 5, 7, 27] do not consider any query-independent quality (such as PageRank) in the ranking function.
[32] presents a generic framework for computing approximate top-k answers with some probabilistic bounds on the quality of results.
Our work essentially extends [1, 2, 4, 7, 20, 27, 31] by proposing mechanisms for providing the correctness guarantee to the computed top-k results.
Search engines use various methods of caching as a means of reducing the cost associated with queries [18, 19, 21, 31].
This thread of work is also orthogonal to ours because a caching scheme may operate on top of our p-index in order to minimize the answer-computation cost.
The exact ranking functions employed by current search engines are closely guarded secrets.
In general, however, the rankings are based on query-dependent relevance and query-independent document "quality".
Query-dependent relevance can be calculated in a variety of ways (see [3, 30]).
Similarly, there are a number of works that measure the "quality" of documents, typically as captured through link-based analysis [17, 28, 26].
Since our work does not assume a particular form of ranking function, it is complementary to this body of work.
There has been a great body of work on top-k result calculation.
The main idea is to either stop the traversal of the inverted lists early, or to shrink the lists by pruning postings from the lists [14, 4, 11, 8].
Our proof for the correctness indicator function was primarily inspired by [12].
7. CONCLUDING REMARKS
Web search engines typically prune their large-scale inverted indexes in order to scale to enormous query
loads.
While this approach may improve performance, computing the top results from a pruned index may noticeably degrade result quality.
In this paper, we provided a framework for new pruning techniques and answer computation algorithms that guarantee that the top matching pages are always placed at the top of search results in the correct order.
We studied two pruning techniques, namely keyword-based and document-based pruning, as well as their combination.
Our experimental results demonstrated that our algorithms can effectively be used to prune an inverted index without degradation in the quality of results.
In particular, a keyword-pruned index can guarantee 73% of the queries with a size of 30% of the full index, while a document-pruned index can guarantee 68% of the queries with the same size.
When we combine the two pruning algorithms we can guarantee 60% of the queries with an index size of 16%.
It is our hope that our work will help search engines develop better, faster and more efficient indexes and thus provide for a better user search experience on the Web.

Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications
Hendrik Gani, School of Computer Science and Information Technology, RMIT University, Melbourne, Australia, hgani@cs.rmit.edu.au
Caspar Ryan, School of Computer Science and Information Technology, RMIT University, Melbourne, Australia, caspar@cs.rmit.edu.au
Pablo Rossi, School of Computer Science and Information Technology, RMIT University, Melbourne, Australia, pablo@cs.rmit.edu.au
ABSTRACT
This paper proposes, implements, and evaluates, in terms of worst-case performance, an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware.
The solution is based upon an abstract representation of the mobile object system, which holds containers aggregating metrics for each specific component, including host managers, runtimes and mobile objects.
A key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of metrics through the system.
The MobJeX platform was used as the basis for implementation and testing, with a number of laboratory tests conducted to measure scalability, efficiency and the application of simple measurement and propagation criteria to reduce collection overhead.
Categories and Subject Descriptors
C.2.4 [Distributed Systems]; D.2.8 [Metrics]
General Terms
Measurement, Performance.
1. INTRODUCTION
The different capabilities of mobile devices, plus the varying speed, error rate and disconnection characteristics of mobile networks [1], make it difficult to predict in advance the exact execution environment of mobile applications.
One solution which is receiving increasing attention in the research community is application adaptation [2-7], in which applications adjust their behaviour in response to factors such as network, processor, or memory usage.
Effective adaptation requires detailed and up-to-date information about both the system and the software itself.
Metrics related to system-wide information (e.g. processor, memory and network load) are referred to as environmental metrics [5], while metrics representing application behaviour are referred to as software metrics [8].
Furthermore, the type of metrics required for performing adaptation is dependent upon the type of adaptation required.
For example, service-based adaptation, in which service quality or service behaviour is modified in response to changes in the runtime environment, generally requires detailed environmental metrics but only simple software metrics [4].
On the other hand, adaptation via object mobility [6] also requires detailed software metrics [9], since object placement is dependent on the execution characteristics of the mobile objects themselves.
With the exception of MobJeX [6], existing mobile object systems such as Voyager [10], FarGo [11, 12], and JavaParty [13] do not provide automated adaptation, and therefore lack the metrics collection process required to support it.
In the case of MobJeX, although an adaptation engine has been implemented [5], preliminary testing was done using synthetic pre-scripted metrics, since there is little prior work on the dynamic collection of software metrics in mobile object frameworks, and no existing means of automatically
collecting them.
Consequently, the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications.
This problem is non-trivial since typical mobile object frameworks consist of multiple application and middleware components, and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine.
Furthermore, in some cases the location where each metric should be collected is not fixed (i.e. it could be done in several places) and thus a decision must be made based on the efficiency of the chosen solution (see Section 3).
The rest of this paper is organised as follows: Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection, propagation and delivery of metrics as described in Section 3.
Section 4 describes some initial testing and results, and Section 5 closes with a summary, conclusions and discussion of future work.
2. BACKGROUND
In general, an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain.
Mobile object frameworks allow some of these objects to be tagged as mobile objects, providing middleware support for such objects to be moved at runtime to other hosts.
At a minimum, a mobile object framework with at least one running mobile application consists of the following components: runtimes, mobile objects, and proxies [14], although the terminology used by individual frameworks can differ [6, 10-13].
A runtime is a container process for the management of mobile objects.
For example, in FarGo [15] this component is known as a core, and in most systems separate runtimes are required to allow different applications to run independently, although this is not the case with MobJeX, which can run multiple applications in a single runtime using
threads.
The applications themselves comprise mobile objects, which interact with each other through proxies [14].
Proxies, which have the same method interface as the object itself but add remote communication and object-tracking functionality, are required for each target object that a source object communicates with.
Upon migration, proxy objects move with the source object.
The Java-based system MobJeX, which is used as the implementation platform for the metrics collection solution described in this paper, adds a number of additional middleware components.
Firstly, a host manager (known as a service in MobJeX) provides a central point of communication by running on a known port on a per-host basis, thus facilitating the enumeration or lookup of components such as runtimes or mobile objects.
Secondly, MobJeX has a per-application mobile object container called a transport manager (TM).
As such, the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case.
Finally, depending on the adaptation mode, MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system-wide optimisation.
3. METRICS COLLECTION
This section discusses the design and derivation of a solution for collecting metrics in order to support the adaptation of applications via object migration.
The solution, although implemented within the MobJeX framework, is for the most part discussed in generic terms, except where explicitly stated to be MobJeX specific.
3.1 Metrics Selection
The metrics of Ryan and Rossi [9] have been chosen as the basis for this solution, since they are specifically intended for mobile application adaptation as well as having been derived from a series of mathematical models and empirically validated.
Furthermore, the metrics were empirically shown to improve application performance in a real adaptation scenario following a change in the execution
environment. It would, however, be beyond the scope of this paper to implement and test the full suite of metrics listed in [9]; thus, in order to provide a useful non-random subset, we chose to implement the minimum set of metrics necessary to support local and global adaptation [9] and thereby satisfy a range of real adaptation scenarios. As such, the solution presented in this section is discussed primarily in terms of these metrics, although the structure of the solution is intended to support the implementation of the remaining metrics, as well as other unspecified metrics such as those related to quality and resource utilisation. This subset is listed below, categorised according to metric type. Note that some additional metrics were used for implementation purposes in order to derive core metrics or assist the evaluation, and as such are defined in context where appropriate.

1. Software metrics
- Number of Invocations (NI): the frequency of invocations on methods of a class.

2. Performance metrics
- Method Execution Time (ET): the time taken to execute a method body (ms).
- Method Invocation Time (IT): the time taken to invoke a method, excluding the method execution time (ms).

3. Resource utilisation metrics
- Memory Usage (MU): the memory usage of a process (in bytes).
- Processor Usage (PU): the percentage CPU load of a host.
- Network Usage (NU): the network bandwidth between two hosts (in bytes/sec).

The following brief examples demonstrate the use of a number of these metrics in an adaptation scenario. As Processor Usage (PU) on a certain host increases, the Execution Time (ET) of a given method executed on that host also increases [9], thus facilitating the decision of whether to move an object with high ET to another host with low PU. Invocation Time (IT) shows the overhead of invoking a certain method, with the invocation overhead of marshalling parameters and transmitting remote data for a remote call being orders of magnitude higher than the cost of pushing and popping data from the method call stack.
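To make these roles concrete, the following is a simplified, purely illustrative sketch (not MobJeX code; all names are ours) of how an adaptation engine might combine ET, IT and an NI-based call prediction when deciding whether migration is worthwhile:

```java
// Illustrative only: a simplified migration decision combining the metrics
// above. Moving an object pays off when the expected per-call saving in
// execution plus invocation time on the candidate host, weighted by the
// predicted number of future invocations (derived from NI), outweighs the
// one-off cost of migrating the object.
public class MigrationDecision {

    /**
     * @param localEtMs       measured ET on the current host (ms)
     * @param remoteEtMs      predicted ET on the candidate host (ms)
     * @param localItMs       IT for a local call (ms)
     * @param remoteItMs      IT for a remote call (ms)
     * @param expectedCalls   NI-based prediction of future invocations
     * @param migrationCostMs one-off cost of moving the object (ms)
     */
    public static boolean shouldMigrate(double localEtMs, double remoteEtMs,
                                        double localItMs, double remoteItMs,
                                        long expectedCalls, double migrationCostMs) {
        // Per-call saving: current total call cost minus remote call cost.
        double savingPerCall = (localEtMs + localItMs) - (remoteEtMs + remoteItMs);
        // NI acts as the weighting factor over the object's remaining lifetime.
        return savingPerCall * expectedCalls > migrationCostMs;
    }
}
```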
Remote method invocation is thus expensive and should be avoided unless the gains made by moving an object to a host with more processing power (thereby reducing ET) outweigh the higher IT of the remote call. Finally, Number of Invocations (NI) is used primarily as a weighting factor or multiplier, enabling the adaptation engine to predict the value over time of a particular adaptation decision.

3.2 Metrics Measurement

This subsection discusses how each of the metrics in the subset under investigation can be obtained, through either direct measurement or derivation, and where in the mobile object framework such metrics should actually be measured. Of the environmental resource metrics, Processor Usage (PU) and Network Usage (NU) both relate to an individual machine, and thus can be measured directly through the resource monitoring subsystem that is instantiated as part of the MobJeX service. However, Memory Usage (MU), which represents the memory state of a running process rather than the memory usage of a host, should instead be collected within an individual runtime.

The measurement of the Number of Invocations (NI) and Execution Time (ET) metrics can also be performed via direct measurement, in this case within the mobile object implementation (mobject) itself. NI involves simply incrementing a counter at either the start or end of a method call, depending upon the desired semantics with regard to thrown exceptions, while ET can be measured by starting a timer at the beginning of the method and stopping it at the end of the method, then retrieving the duration recorded by the timer. In contrast, collecting Invocation Time (IT) is not as straightforward, because the time taken to invoke a method can only be measured after the method finishes its execution and returns to the caller.
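The direct measurement of NI and ET just described can be sketched as follows (a minimal illustration with hypothetical names, not the MobJeX API):

```java
// Minimal sketch of in-mobject measurement of Number of Invocations (NI)
// and Execution Time (ET): NI is a counter incremented at the start of the
// call, ET is obtained from a monotonic timer around the method body.
public class MobjectMethodMetrics {
    private long invocationCount = 0;    // NI
    private long lastExecutionNanos = 0; // ET of the most recent call

    // Wraps a method body: increments NI, then times the body to obtain ET.
    public <T> T measure(java.util.function.Supplier<T> methodBody) {
        invocationCount++; // counted at the start, so NI includes calls that throw
        long start = System.nanoTime();
        try {
            return methodBody.get();
        } finally {
            lastExecutionNanos = System.nanoTime() - start;
        }
    }

    public long numberOfInvocations() { return invocationCount; }

    public long lastExecutionTimeMillis() { return lastExecutionNanos / 1_000_000; }
}
```

Counting at the start of the call gives the "count even on exception" semantics mentioned above; moving the increment into the finally block would count completed calls instead.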
In order to collect IT metrics, an additional metric is needed. Ryan and Rossi [9] define the metric Response Time (RT) as the total time taken for a method call to complete, which is the sum of IT and ET. Response Time can be measured directly using the same timer-based technique used to measure ET, although at the start and end of the proxy call rather than within the method implementation. Once the Response Time (RT) is known, IT can be derived by subtracting ET from RT. Although this derivation appears simple, in practice it is complicated by the fact that the RT and ET values from which IT is derived are, by necessity, measured using timer code in different locations: RT in the proxy, ET in the method body of the object implementation. In addition, the proxies are by definition not part of the MobJeX containment hierarchy, since although proxies have a reference to their target object, it is not efficient for a mobile object (mobject) to hold backward references to all of the many proxies which reference it (one per source object). Fortunately, this problem can be solved using the push-based propagation mechanism described in section 3.5, in which the RT metric is pushed to the mobject so that IT can be derived from the ET value stored there. The derived value of IT is then stored and propagated further as necessary according to the criteria of section 3.6, the structural relationship of which is shown in Figure 1.

3.3 Measurement Initiation

The polling approach was identified as the most appropriate method for collecting resource utilisation metrics, such as Processor Usage (PU), Network Usage (NU) and Memory Usage (MU), since they are not part of, or related to, the direct flow of the application. To measure PU or NU, the resource monitor polls the operating system for the current CPU or network load respectively. In the case of Memory Usage (MU), the Java Virtual Machine (JVM) [16] is polled for the current memory load. Note that in order to minimise the
impact on application response time, the polling action should be performed asynchronously in a separate thread. Metrics that are suitable for application-initiated collection (i.e. as part of a normal method call) are the software and performance related metrics, such as Number of Invocations (NI), Execution Time (ET), and Invocation Time (IT), which are explicitly related to the normal invocation of a method, and thus can be measured directly at this time.

3.4 Metrics Aggregation

In the solution presented in this paper, all metrics collected in the same location are aggregated in a MetricsContainer, with individual containers corresponding to functional components in the mobile object framework. The primary advantage of aggregating metrics in containers is that it allows them to be propagated easily as a cohesive unit through the components of the mobility framework, so that they can be delivered to the adaptation engine, as discussed in the following subsection.

Note that this containment captures the different granularity of measurement attributes and their corresponding metrics. Consider the case of measuring memory consumption. At a coarse level of granularity this could be measured for an entire application or even a system, but it could also be measured at the level of an individual object, or, at an even finer level of granularity, for the execution of a specific method. As an example of the level of granularity required for mobility-based adaptation, the local adaptation algorithm proposed by Ryan and Rossi [9] requires metrics representing both the duration of a method execution and the overhead of a method invocation. The use of metrics containers facilitates the collection of metrics at levels of granularity ranging from a single machine down to the individual method level. Note that some metrics containers do not contain any Metric objects, since, as previously described, the sample implementation uses only a subset of the adaptation metrics from [9].
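The container aggregation described above, including the nesting needed for method-level granularity, might be sketched as follows (class names are illustrative and not necessarily those of the MobJeX API):

```java
// Illustrative sketch of metrics aggregation: a MetricsContainer groups
// Metric objects so they can be propagated as one cohesive unit, at
// whatever granularity the container represents (host, runtime, object,
// or individual method).
import java.util.LinkedHashMap;
import java.util.Map;

class Metric {
    final String name; // e.g. "ET", "NI", "MU"
    double value;
    Metric(String name, double value) { this.name = name; this.value = value; }
}

class MetricsContainer {
    private final Map<String, Metric> metrics = new LinkedHashMap<>();
    void put(Metric m) { metrics.put(m.name, m); }
    Metric get(String name) { return metrics.get(name); }
    int size() { return metrics.size(); }
}

// A mobject-level container nests per-method containers, capturing the
// method-level granularity required by the local adaptation algorithm [9].
class MobjectMetricsContainer extends MetricsContainer {
    private final Map<String, MetricsContainer> perMethod = new LinkedHashMap<>();
    MetricsContainer forMethod(String methodName) {
        return perMethod.computeIfAbsent(methodName, k -> new MetricsContainer());
    }
}
```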
However, for the sake of consistency, and to promote flexibility in terms of adding new metrics in the future, these containers are retained in the present design for completeness and future work.

3.5 Propagation and Delivery of Metrics

The solution in this paper identifies two stages in the metrics collection and delivery process: firstly, the propagation of metrics through the components of the mobility framework, and secondly, the delivery of those metrics from the host manager/service (or runtime, if the host manager is not present) to the adaptation engine. Regarding propagation, in brief, it is proposed that when a lower level system component (e.g. a mobile object) detects the arrival of a new metric update, the metric is pushed (possibly along with other relevant metrics) to the next level component (i.e. the runtime or transport manager containing the mobile object), which at some later stage, again determined by a configurable criterion (for example, when there is a sufficient number of changed mobjects), pushes it to the next level component (i.e.
the host manager or the adaptation engine).

A further incentive for treating propagation separately from delivery is the distinction between local and global adaptation [9]. Local adaptation is performed by an engine running on the local host (in MobJeX this occurs within the service), and thus in this case the delivery phase is a local inter-process call. Conversely, global adaptation is handled by a centralised adaptation engine running on a remote host, so the delivery of metrics is via a remote call; in the case where multiple runtimes exist without a separate host manager, the delivery process is even more expensive. Therefore, due to network communication latency, it is important for the host manager to pass as many metrics as possible to the adaptation engine in one invocation, implying the need to gather these metrics in the host manager, through some form of push or propagation, before sending them to the adaptation engine.

Consequently, an abstract representation or model [17] of the system needs to be maintained. Such a model contains model entities, corresponding to each of the main system components, connected in a tree-like hierarchy which precisely reflects the structure and containment hierarchy of the actual system. Attaching metrics containers to model entities allows a model entity representing a host manager to be delivered to the adaptation engine, enabling it to access all metrics in that component and any of its children (i.e. runtimes and mobile objects).
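A minimal sketch of such a model, assuming our own illustrative names rather than the MobJeX API, shows how delivering the entity for a host manager transitively exposes the metrics of all of its children:

```java
// Illustrative sketch of the abstract system model: model entities mirror
// the containment hierarchy of the running system (service -> runtime ->
// mobject), and each entity carries its own metrics, so one delivery of a
// host manager's entity gives the adaptation engine the whole subtree.
import java.util.ArrayList;
import java.util.List;

class ModelEntity {
    final String name;                              // e.g. "service", "runtime-1"
    final List<ModelEntity> children = new ArrayList<>();
    final List<String> metrics = new ArrayList<>(); // stand-in for a MetricsContainer

    ModelEntity(String name) { this.name = name; }

    ModelEntity addChild(ModelEntity child) {
        children.add(child);
        return child;
    }

    // Collects the metrics of this entity and all descendants, as the
    // adaptation engine would see them after a single delivery.
    List<String> collectAll() {
        List<String> all = new ArrayList<>(metrics);
        for (ModelEntity c : children) all.addAll(c.collectAll());
        return all;
    }
}
```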
Furthermore, it would generally be expected that an adaptation engine or system controller already maintains a model of the system, which can not only be reused for propagation but also provides an effective means of delivering metrics information from the host manager to the adaptation engine. The relationship between model entities and metrics containers is captured in Figure 1.

3.6 Propagation and Delivery Criteria

This subsection proposes flexible criteria that allow each component to decide when it should propagate its metrics to the next component in line (Figure 1), in order to reduce the overhead incurred when metrics are unnecessarily propagated through the components of the mobility framework and delivered to the adaptation engine. This paper proposes four different types of criteria, executed at various stages of the measurement and propagation process, to determine whether the next action should be taken. The approach is designed such that whenever a single criterion is not satisfied, the subsequent criteria are not tested. The four criteria are described in the following subsections.

Measure Metric Criterion - This criterion is attached to individual Metric objects to decide whether a new metric value should be measured. It is most useful where it is expensive to measure a particular metric. Furthermore, this criterion can be used as a mechanism for limiting storage requirements and manipulation overhead in the case where a metric history is maintained. Simple examples would be either time or frequency based, whereas more complex criteria could be domain specific for a particular metric, or based upon information stored in the metrics history.

Notify Metrics Container Criterion - This criterion is also attached to individual Metric objects and is used to determine the circumstances under which the Metric object should notify its MetricsContainer. This is based on
the assumption that there may be cases where it is desirable to measure and store a metric in the history for the analysis of temporal behaviour, even though the change is not yet significant enough to notify the MetricsContainer for further processing. A simple example of this criterion would be threshold-based, in which the newest metric value is compared with the previously stored value to determine whether the difference is significant enough to be of interest to the MetricsContainer. A more complex criterion could involve analysing the history to determine whether a pattern of recent changes is significant enough to warrant further processing and possible metrics delivery.

Notify Model Entity Criterion - Unlike the previous two criteria, this criterion is associated with a MetricsContainer. Since a MetricsContainer can have multiple Metric objects, of which it has explicit domain knowledge, it is able to determine if, when, and how many of these metrics should be propagated to the ModelEntity and thus become candidates for the hierarchical ModelEntity push process described below. This decision making is facilitated by the notifications received from individual Metric objects as described above. A simple implementation would wait for a certain number of updates before sending a notification to the model entity. For example, since the MobjectMetricsContainer object contains three metrics, a possible criterion would be to check whether two or more of the metrics have changed. A slightly more advanced implementation could give each metric a weight indicating its significance in the adaptation decision making process.

Push Criterion - The push criterion applies to all of the ModelEntities that are containers, that is, the TransportManagerModelEntity, RuntimeModelEntity and ServiceModelEntity, as well as the special case of the ProxyMetricsContainer. The purpose of this criterion is twofold. For the TransportManagerModelEntity this
serves as a criterion to determine notification since, as with the previously described criteria, a local reference is involved. For the other model entities, it provides an opportunity to determine both when and which metrics should be pushed to the parent container, where in the case of the ServiceModelEntity the parent is the adaptation engine itself, and in the case of the ProxyMetricsContainer the target of the push is the MobjectMetricsContainer. Furthermore, this criterion is evaluated using information from two sources: firstly, it responds to the notifications received from its own MetricsContainer, but more importantly it keeps track of notifications from its child ModelEntities so as to determine when and which metrics information should be pushed to its parent or target. In the specialised case of the push criterion for the proxy, the decision making is based on both the ProxyMetricsContainer itself and the information accumulated from the individual ProxyMethodMetricsContainers. Note that a push criterion is not required for a mobject, since it does not have any containment or aggregating responsibilities; these are already handled by the MobjectMetricsContainer and its individual MobjectMethodMetricsContainers.

Figure 1. Structural overview of the hierarchical and criteria-based notification relationships between Metrics, Metrics Containers, and Model Entities.

Although it is always important to reduce the number of pushes, this is especially so from a service to a centralised global adaptation engine, or from a proxy to a mobject. This is because these relationships involve a remote call [18], which is expensive due to connection setup and data marshalling and unmarshalling overhead; it is thus more efficient to send a given amount of data in aggregate form rather than sending smaller chunks multiple times. A simple implementation for reducing the number of pushes uses the concept of a process period [19], in which case the model entity accumulates pushes from its child entities until the process period expires, at which time it pushes the accumulated metrics to its parent. Alternatively, it could be based on frequency, using domain knowledge about the type of children, for example when a significant number of mobjects in a particular application (i.e. TransportManager) have undergone substantial changes.
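A process-period push criterion of the kind just described might look like the following sketch (our own names, not the MobJeX API):

```java
// Illustrative sketch of a process-period push criterion: a model entity
// accumulates child notifications and only signals a push to its parent
// once the configured period has elapsed, reducing remote-call overhead.
public class ProcessPeriodPushCriterion {
    private final long periodMillis;
    private long lastPushMillis;
    private int pendingNotifications = 0;

    public ProcessPeriodPushCriterion(long periodMillis, long nowMillis) {
        this.periodMillis = periodMillis;
        this.lastPushMillis = nowMillis;
    }

    // Called on each child notification; returns true when the accumulated
    // metrics should be pushed to the parent entity.
    public boolean onNotification(long nowMillis) {
        pendingNotifications++;
        if (nowMillis - lastPushMillis >= periodMillis) {
            lastPushMillis = nowMillis;
            pendingNotifications = 0; // accumulated metrics are pushed now
            return true;
        }
        return false; // keep accumulating until the period expires
    }

    public int pending() { return pendingNotifications; }
}
```

The time stamps are passed in explicitly here only to keep the sketch testable; a real implementation would read a clock internally.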
For reducing the size of pushed data, two types of push were considered: shallow push and deep push. With a shallow push, a list of metrics containers that contain updated metrics is pushed. In a deep push, the model entity itself is pushed, along with its metrics container and its child entities, which also have references to metrics containers but possibly unchanged metrics. In the case of the proxy, a deep push involves pushing the ProxyMetricsContainer and all of the ProxyMethodMetricsContainers, whereas a shallow push includes only those ProxyMethodMetricsContainers that meet a certain criterion.

4. EVALUATION

The preliminary tests presented in this section aim to analyse the performance and scalability of the solution and to evaluate the impact on application execution in terms of metrics collection overhead. All tests were executed using two Pentium 4 3.0 GHz PCs with 1,024 MB of RAM, running Java 1.4.2_08. The two machines were connected to a router, with a third computer acting as a file server and hosting the external adaptation engine implemented within the MobJeX system controller, thereby simulating a global adaptation scenario.

Since only a limited number of tests could be executed, this evaluation measured the worst case scenario, in which all metrics collection is initiated in mobjects, where the propagation cost is higher than for any other metrics collected in the system. In addition, since exhaustive testing of criteria is beyond the scope of this paper, two different types of criteria were used in the tests. The measure metric criterion was chosen, since it represents the starting point of the measurement process and can control under what circumstances, and how frequently, metrics are measured. In addition, the push criterion was implemented on the service, in order to provide an evaluation of controlling the frequency of metrics delivery to the adaptation engine. All other
(update and push) criteria were set to always, meaning that they always evaluated to true and thus a notification was posted.

Figure 2 shows the metric collection overhead in the mobject (MMCO) for different numbers of mobjects and methods, when all criteria are set to always in order to provide the maximum measurement and propagation of metrics, and thus an absolute worst case performance scenario. It can be seen that increasing the number of mobjects and the number of methods independently each produces linear growth. Although combining the two produces growth that is approximately quadratic (n-squared), the initial results are not discouraging, since delivering all of the metrics associated with 20 mobjects, each having 20 methods (which constitutes quite a large application, given that mobjects typically represent coarse grained object clusters), takes approximately 400ms, which could reasonably be expected to be offset by adaptation gains. Note that in contrast, the proxy metrics collection overhead (PMCO) was relatively small and constant at < 5ms, since in the absence of a proxy push criterion (this was only implemented on the service) the response time (RT) data for a single method is pushed during every invocation.

Figure 2. Worst case performance characteristics: MMCO (ms) against the number of mobjects, the number of methods, and both combined.

The next step was to determine the percentage metrics collection overhead compared with execution time, in order to characterise the execution profiles of objects that would be suitable for adaptation using this metrics collection approach. Clearly, it is not practical to measure metrics and perform adaptation on objects with short execution times, which cannot benefit from remote execution on hosts with greater processing power and thereby offset the IT overhead of remote execution, the cost of object migration, and the metrics
collection process itself. In addition, to demonstrate the effect of using simple frequency based criteria, the MMCO results as a percentage of method execution time were plotted as a three-dimensional graph in Figure 3, with the z-axis representing the frequency used in both the measure metric criterion and the service-to-adaptation-engine push criterion. This means that for a frequency value of 5 (n=5), metrics are only measured on every fifth method call, which then results in a notification through the model entity hierarchy to the service on this same fifth invocation. Furthermore, the value of n=5 was also applied to the service push criterion, so that metrics were only pushed to the adaptation engine after five such notifications, that is, after for example five different mobjects had updated their metrics.

These results are encouraging since, even for the worst case scenario of n=1, the metric collection overhead is an acceptable 20% for a method of 1500ms duration (which is relatively short for a component or service level object in a distributed enterprise class application), with previous work on adaptation showing that such an overhead could easily be recovered by the efficiency gains made through adaptation [5]. Furthermore, the measurement time includes delivering the results synchronously via a remote call to the adaptation engine on a different host, which would normally be done asynchronously, further reducing the impact on method execution performance. The graph also demonstrates that even modest criteria, which reduce metrics measurement to more realistic levels, rapidly improve the collection overhead, which falls to 20% at an ET of only 500ms.

Figure 3. Performance characteristics with simple criteria: MMCO (%) against ET (milliseconds) and the criterion interval n.

5. SUMMARY AND CONCLUSIONS

Given the challenges of developing mobile applications that run in dynamic/heterogeneous environments, and the subsequent
interest in application adaptation, this paper has proposed and implemented an online metrics collection strategy to assist such adaptation, using a mobile object framework and supporting middleware. Controlled lab studies were conducted to determine worst case performance, as well as to show the reduction in collection overhead when applying simple collection criteria. In addition, further testing provided an initial indication of the characteristics of application objects (based on method execution time) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy.

A key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system, thereby reducing collection overhead. While the potential efficacy of this approach was tested using simple criteria, given the flexibility of the approach we believe there are many opportunities to significantly reduce collection overhead through the use of more sophisticated criteria. One such approach could be based on maintaining a metrics history in order to determine the temporal behaviour of metrics, and thus make more intelligent and conservative decisions about whether a change in a particular metric is likely to be of interest to the adaptation engine and should therefore serve as a basis for notification and inclusion in the next metrics push. Such a temporal history could also facilitate intelligent decisions regarding the collection of metrics, since, for example, a metric that is known to be largely constant need not be frequently measured.

Future work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework, to quantify the gains that can be made via adaptation through object mobility and thus demonstrate in practice the efficacy of the solution described in this paper. Finally, the authors wish to explore applying the metrics
collection concepts described in this paper to a more general and reusable context management system [20].

6. REFERENCES

1. Katz, R.H. Adaptation and Mobility in Wireless Information Systems. IEEE Personal Communications, 1994. 1: pp. 6-17.
2. Hirschfeld, R. and Kawamura, K. Dynamic Service Adaptation. In ICDCS Workshops '04, 2004.
3. Lemlouma, T. and Layaida, N. Context-Aware Adaptation for Mobile Devices. In Proceedings of the IEEE International Conference on Mobile Data Management, 2004.
4. Noble, B.D., et al. Agile Application-Aware Adaptation for Mobility. In Proc. of the 16th ACM Symposium on Operating Systems Principles (SOSP), 1997. Saint-Malo, France.
5. Rossi, P. and Ryan, C. An Empirical Evaluation of Dynamic Local Adaptation for Distributed Mobile Applications. In Proc. of the 2005 International Symposium on Distributed Objects and Applications (DOA 2005), 2005. Larnaca, Cyprus: Springer-Verlag.
6. Ryan, C. and Westhorpe, C. Application Adaptation through Transparent and Portable Object Mobility in Java. In International Symposium on Distributed Objects and Applications (DOA 2004), 2004. Larnaca, Cyprus: Springer-Verlag.
7. da Silva e Silva, F.J., Endler, M., and Kon, F. Developing Adaptive Distributed Applications: A Framework Overview and Experimental Results. In On The Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE (LNCS 2888), 2003.
8. Rossi, P. and Fernandez, G. Definition and Validation of Design Metrics for Distributed Applications. In Ninth International Software Metrics Symposium, 2003. Sydney: IEEE.
9. Ryan, C. and Rossi, P. Software, Performance and Resource Utilisation Metrics for Context Aware Mobile Applications. In Proceedings of the International Software Metrics Symposium (IEEE Metrics 2005), 2005. Como, Italy.
10. Recursion Software Inc. Voyager. URL: http://www.recursionsw.com/voyager.htm, 2005.
11. Holder, O., Ben-Shaul, I., and Gazit, H. System Support for Dynamic Layout of Distributed Applications. 1998, Technion - Israel Institute of Technology. pp. 163-173.
12. Holder, O., Ben-Shaul, I., and Gazit, H. Dynamic Layout of Distributed Applications in FarGo. In 21st Int'l Conf. on Software Engineering (ICSE'99), 1999: ACM Press.
13. Philippsen, M. and Zenger, M. JavaParty - Transparent Remote Objects in Java. Concurrency: Practice and Experience, 1997. 9(11): pp. 1225-1242.
14. Shapiro, M. Structure and Encapsulation in Distributed Systems: the Proxy Principle. In Proc. of the 6th Intl. Conference on Distributed Computing Systems, 1986. Cambridge, Mass., USA: IEEE.
15. Gazit, H., Ben-Shaul, I., and Holder, O. Monitoring-Based Dynamic Relocation of Components in FarGo. In Proceedings of the Second International Symposium on Agent Systems and Applications and Fourth International Symposium on Mobile Agents, 2000.
16. Lindholm, T. and Yellin, F. The Java Virtual Machine Specification, 2nd Edition. 1999: Addison-Wesley.
17. Randell, L.G., Holst, L.G., and Bolmsjö, G.S. Incremental System Development of Large Discrete-Event Simulation Models. In Proceedings of the 31st Conference on Winter Simulation, 1999. Phoenix, Arizona.
18. Waldo, J. Remote Procedure Calls and Java Remote Method Invocation. IEEE Concurrency, 1998. 6(3): pp. 5-7.
19. Rolia, J. and Lin, B. Consistency Issues in Distributed Application Performance Metrics. In Proceedings of the 1994 Conference of the Centre for Advanced Studies on Collaborative Research, 1994. Toronto, Canada.
20. Henricksen, K.
and Indulska, J. A Software Engineering Framework for Context-Aware Pervasive Computing. In Proceedings of the 2nd IEEE Conference on Pervasive Computing and Communications (PerCom), 2004. Orlando.

Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications

ABSTRACT

This paper proposes, implements, and evaluates, in terms of worst case performance, an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware. The solution is based upon an abstract representation of the mobile object system, which holds containers aggregating metrics for each specific component, including host managers, runtimes and mobile objects. A key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of metrics through the system. The MobJeX platform was used as the basis for implementation and testing, with a number of laboratory tests conducted to measure scalability, efficiency, and the application of simple measurement and propagation criteria to reduce collection overhead.

1. INTRODUCTION

The different capabilities of mobile devices, plus the varying speed, error rate and disconnection characteristics of mobile networks [1], make it difficult to predict in advance the exact execution environment of mobile applications. One solution which is receiving increasing attention in the research community is application adaptation [2-7], in which applications adjust their behaviour in response to factors such as network, processor, or memory usage. Effective adaptation requires detailed and up to date information about both the system and the software itself. Metrics related to system wide information (e.g.
processor, memory and network load) are referred to as environmental metrics [5], while metrics representing application behaviour are referred to as software metrics [8]. Furthermore, the type of metrics required for performing adaptation depends upon the type of adaptation required. For example, service-based adaptation, in which service quality or service behaviour is modified in response to changes in the runtime environment, generally requires detailed environmental metrics but only simple software metrics [4]. On the other hand, adaptation via object mobility [6] also requires detailed software metrics [9], since object placement is dependent on the execution characteristics of the mobile objects themselves.

With the exception of MobJeX [6], existing mobile object systems such as Voyager [10], FarGo [11, 12], and JavaParty [13] do not provide automated adaptation, and therefore lack the metrics collection process required to support it. In the case of MobJeX, although an adaptation engine has been implemented [5], preliminary testing was done using synthetic pre-scripted metrics, since there is little prior work on the dynamic collection of software metrics in mobile object frameworks, and no existing means of automatically collecting them. Consequently, the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications. This problem is non-trivial, since typical mobile object frameworks consist of multiple application and middleware components, and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine. Furthermore, in some cases the location where each metric should be collected is not fixed (i.e.
it could be done in several places) and thus a decision must be made based on the efficiency of the chosen solution (see section 3).\nThe rest of this paper is organised as follows: Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection, propagation and delivery of metrics as described in section 3.\nSection 4 describes some initial testing and results and section 5 closes with a summary, conclusions and discussion of future work.\n2.\nBACKGROUND\nIn general, an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain.\nMobile object frameworks allow some of these objects to be tagged as mobile objects, providing middleware support for such objects to be moved at runtime to other hosts.\nAt a minimum, a mobile object framework with at least one running mobile application consists of the following components: runtimes, mobile objects, and proxies [14], although the terminology used by individual frameworks can differ [6, 10-13].\nA runtime is a container process for the management of mobile objects.\nFor example, in FarGo [15] this component is known as a core and in most systems separate runtimes are required to allow different applications to run independently, although this is not the case with MobJeX, which can run multiple applications in a single runtime using threads.\nThe applications themselves comprise mobile objects, which interact with each other through proxies [14].\nProxies, which have the same method interface as the object itself but add remote communication and object tracking functionality, are required for each target object that a source object communicates with.\nUpon migration, proxy objects move with the source object.\nThe Java based system MobJeX, which is used as the implementation platform for the metrics collection solution described in this paper, adds a number of 
additional middleware components.\nFirstly, a host manager (known as a service in MobJeX) provides a central point of communication by running on a known port on a per host basis, thus facilitating the enumeration or lookup of components such as runtimes or mobile objects.\nSecondly, MobJeX has a per-application mobile object container called a transport manager (TM).\nAs such the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case.\nFinally, depending on adaptation mode, MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system wide optimisation.\n3.\nMETRICS COLLECTION\n3.1 Metrics Selection\n3.2 Metrics Measurement\n3.3 Measurement Initiation\n3.4 Metrics Aggregation\n3.5 Propagation and Delivery of Metrics\n3.6 Propagation and Delivery Criteria\nNotify Metrics Container Criterion\nNotify Model Entity Criterion\n4.\nEVALUATION\n5.\nSUMMARY AND CONCLUSIONS\nGiven the challenges of developing mobile applications that run in dynamic\/heterogeneous environments, and the subsequent interest in application adaptation, this paper has proposed and implemented an online metrics collection strategy to assist such adaptation using a mobile object framework and supporting middleware.\nControlled lab studies were conducted to determine worst case performance, as well as show the reduction in collection overhead when applying simple collection criteria.\nIn addition, further testing provided an initial indication of the characteristics of application objects (based on method execution time) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy.\nA key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system, thereby 
reducing collection overhead.\nWhile the potential efficacy of this approach was tested using simple criteria, given the flexibility of the approach we believe there are many opportunities to significantly reduce collection overhead through the use of more sophisticated criteria.\nOne such approach could be based on maintaining metrics history in order to determine the temporal behaviour of metrics and thus make more intelligent and conservative decisions regarding whether a change in a particular metric is likely to be of interest to the adaptation engine and should thus serve as a basis for notification for inclusion in the next metrics push.\nFurthermore, such a temporal history could also facilitate intelligent decisions regarding the collection of metrics since, for example, a metric that is known to be largely constant need not be frequently measured.\nFuture work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework to quantify the gains that can be made via adaptation through object mobility and thus demonstrate in practice the efficacy of the solution described in this paper.\nFinally, the authors wish to explore applying the metrics collection concepts described in this paper to a more general and reusable context management system [20].","lvl-4":"Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications\nABSTRACT\nThis paper proposes, implements, and evaluates in terms of worst case performance, an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware.\nThe solution is based upon an abstract representation of the mobile object system, which holds containers aggregating metrics for each specific component including host managers, runtimes and mobile objects.\nA key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of 
metrics through the system.\nThe MobJeX platform was used as the basis for implementation and testing with a number of laboratory tests conducted to measure scalability, efficiency and the application of simple measurement and propagation criteria to reduce collection overhead.\n1.\nINTRODUCTION\nEffective adaptation requires detailed and up to date information about both the system and the software itself.\nMetrics related to system wide information (e.g. processor, memory and network load) are referred to as environmental metrics [5], while metrics representing application behaviour are referred to as software metrics [8].\nFurthermore, the type of metrics required for performing adaptation is dependent upon the type of adaptation required.\nFor example, service-based adaptation, in which service quality or service behaviour is modified in response to changes in the runtime environment, generally requires detailed environmental metrics but only simple software metrics [4].\nOn the other hand, adaptation via object mobility [6] also requires detailed software metrics [9] since object placement is dependent on the execution characteristics of the mobile objects themselves.\nWith the exception of MobJeX [6], existing mobile object systems such as Voyager [10], FarGo [11, 12], and JavaParty [13] do not provide automated adaptation, and therefore lack the metrics collection required to support this process.\nIn the case of MobJeX, although an adaptation engine has been implemented [5], preliminary testing was done using synthetic pre-scripted metrics since there is little prior work on the dynamic collection of software metrics in mobile object frameworks, and no existing means of automatically collecting them.\nConsequently, the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications.\nThis problem is non-trivial since typical mobile object frameworks consist of multiple 
application and middleware components, and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine.\nThe rest of this paper is organised as follows: Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection, propagation and delivery of metrics as described in section 3.\nSection 4 describes some initial testing and results and section 5 closes with a summary, conclusions and discussion of future work.\n2.\nBACKGROUND\nIn general, an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain.\nMobile object frameworks allow some of these objects to be tagged as mobile objects, providing middleware support for such objects to be moved at runtime to other hosts.\nAt a minimum, a mobile object framework with at least one running mobile application consists of the following components: runtimes, mobile objects, and proxies [14], although the terminology used by individual frameworks can differ [6, 10-13].\nA runtime is a container process for the management of mobile objects.\nFor example, in FarGo [15] this component is known as a core and in most systems separate runtimes are required to allow different applications to run independently, although this is not the case with MobJeX, which can run multiple applications in a single runtime using threads.\nThe applications themselves comprise mobile objects, which interact with each other through proxies [14].\nUpon migration, proxy objects move with the source object.\nThe Java based system MobJeX, which is used as the implementation platform for the metrics collection solution described in this paper, adds a number of additional middleware components.\nFirstly, a host manager (known as a service in MobJeX) provides a central point of communication by running on a known port on a per host 
basis, thus facilitating the enumeration or lookup of components such as runtimes or mobile objects.\nSecondly, MobJeX has a per-application mobile object container called a transport manager (TM).\nAs such the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case.\nFinally, depending on adaptation mode, MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system wide optimisation.\n5.\nSUMMARY AND CONCLUSIONS\nGiven the challenges of developing mobile applications that run in dynamic\/heterogeneous environments, and the subsequent interest in application adaptation, this paper has proposed and implemented an online metrics collection strategy to assist such adaptation using a mobile object framework and supporting middleware.\nControlled lab studies were conducted to determine worst case performance, as well as show the reduction in collection overhead when applying simple collection criteria.\nIn addition, further testing provided an initial indication of the characteristics of application objects (based on method execution time) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy.\nA key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system, thereby reducing collection overhead.\nFurthermore, such a temporal history could also facilitate intelligent decisions regarding the collection of metrics since, for example, a metric that is known to be largely constant need not be frequently measured.\nFuture work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework to quantify the gains that can be made via adaptation through object mobility and thus demonstrate in practice the efficacy of the solution described in this paper.\nFinally, the authors 
wish to explore applying the metrics collection concepts described in this paper to a more general and reusable context management system [20].","lvl-2":"Runtime Metrics Collection for Middleware Supported Adaptation of Mobile Applications\nABSTRACT\nThis paper proposes, implements, and evaluates in terms of worst case performance, an online metrics collection strategy to facilitate application adaptation via object mobility using a mobile object framework and supporting middleware.\nThe solution is based upon an abstract representation of the mobile object system, which holds containers aggregating metrics for each specific component including host managers, runtimes and mobile objects.\nA key feature of the solution is the specification of multiple configurable criteria to control the measurement and propagation of metrics through the system.\nThe MobJeX platform was used as the basis for implementation and testing with a number of laboratory tests conducted to measure scalability, efficiency and the application of simple measurement and propagation criteria to reduce collection overhead.\n1.\nINTRODUCTION\nThe different capabilities of mobile devices, plus the varying speed, error rate and disconnection characteristics of mobile networks [1], make it difficult to predict in advance the exact execution environment of mobile applications.\nOne solution which is receiving increasing attention in the research community is application adaptation [2-7], in which applications adjust their behaviour in response to factors such as network, processor, or memory usage.\nEffective adaptation requires detailed and up to date information about both the system and the software itself.\nMetrics related to system wide information (e.g. 
processor, memory and network load) are referred to as environmental metrics [5], while metrics representing application behaviour are referred to as software metrics [8].\nFurthermore, the type of metrics required for performing adaptation is dependent upon the type of adaptation required.\nFor example, service-based adaptation, in which service quality or service behaviour is modified in response to changes in the runtime environment, generally requires detailed environmental metrics but only simple software metrics [4].\nOn the other hand, adaptation via object mobility [6] also requires detailed software metrics [9] since object placement is dependent on the execution characteristics of the mobile objects themselves.\nWith the exception of MobJeX [6], existing mobile object systems such as Voyager [10], FarGo [11, 12], and JavaParty [13] do not provide automated adaptation, and therefore lack the metrics collection required to support this process.\nIn the case of MobJeX, although an adaptation engine has been implemented [5], preliminary testing was done using synthetic pre-scripted metrics since there is little prior work on the dynamic collection of software metrics in mobile object frameworks, and no existing means of automatically collecting them.\nConsequently, the main contribution of this paper is a solution for dynamic metrics collection to support adaptation via object mobility for mobile applications.\nThis problem is non-trivial since typical mobile object frameworks consist of multiple application and middleware components, and thus metrics collection must be performed at different locations and the results efficiently propagated to the adaptation engine.\nFurthermore, in some cases the location where each metric should be collected is not fixed (i.e. 
it could be done in several places) and thus a decision must be made based on the efficiency of the chosen solution (see section 3).\nThe rest of this paper is organised as follows: Section 2 describes the general structure and implementation of mobile object frameworks in order to understand the challenges related to the collection, propagation and delivery of metrics as described in section 3.\nSection 4 describes some initial testing and results and section 5 closes with a summary, conclusions and discussion of future work.\n2.\nBACKGROUND\nIn general, an object-oriented application consists of objects collaborating to provide the functionality required by a given problem domain.\nMobile object frameworks allow some of these objects to be tagged as mobile objects, providing middleware support for such objects to be moved at runtime to other hosts.\nAt a minimum, a mobile object framework with at least one running mobile application consists of the following components: runtimes, mobile objects, and proxies [14], although the terminology used by individual frameworks can differ [6, 10-13].\nA runtime is a container process for the management of mobile objects.\nFor example, in FarGo [15] this component is known as a core and in most systems separate runtimes are required to allow different applications to run independently, although this is not the case with MobJeX, which can run multiple applications in a single runtime using threads.\nThe applications themselves comprise mobile objects, which interact with each other through proxies [14].\nProxies, which have the same method interface as the object itself but add remote communication and object tracking functionality, are required for each target object that a source object communicates with.\nUpon migration, proxy objects move with the source object.\nThe Java based system MobJeX, which is used as the implementation platform for the metrics collection solution described in this paper, adds a number of 
additional middleware components.\nFirstly, a host manager (known as a service in MobJeX) provides a central point of communication by running on a known port on a per host basis, thus facilitating the enumeration or lookup of components such as runtimes or mobile objects.\nSecondly, MobJeX has a per-application mobile object container called a transport manager (TM).\nAs such the host and transport managers are considered in the solution provided in the next section but could be omitted in the general case.\nFinally, depending on adaptation mode, MobJeX can have a centralised system controller incorporating a global adaptation engine for performing system wide optimisation.\n3.\nMETRICS COLLECTION\nThis section discusses the design and derivation of a solution for collecting metrics in order to support the adaptation of applications via object migration.\nThe solution, although implemented within the MobJeX framework, is for the most part discussed in generic terms, except where explicitly stated to be MobJeX specific.\n3.1 Metrics Selection\nThe metrics of Ryan and Rossi [9] have been chosen as the basis for this solution, since they are specifically intended for mobile application adaptation as well as having been derived from a series of mathematical models and empirically validated.\nFurthermore, the metrics were empirically shown to improve the application performance in a real adaptation scenario following a change in the execution environment.\nIt would however be beyond the scope of this paper to implement and test the full suite of metrics listed in [9], and thus in order to provide a useful non-random subset, we chose to implement the minimum set of metrics necessary to implement local and global adaptation [9] and thereby satisfy a range of real adaptation scenarios.\nAs such the solution presented in this section is discussed primarily in terms of these metrics, although the structure of the solution is intended to support the implementation of the 
remaining metrics, as well as other unspecified metrics such as those related to quality and resource utilisation.\nThis subset is listed below and categorised according to metric type.\nNote that some additional metrics were used for implementation purposes in order to derive core metrics or assist the evaluation, and as such are defined in context where appropriate.\n1.\nSoftware metrics--Number of Invocations (NI), the frequency of invocations on methods of a class.\n2.\nPerformance metrics--Method Execution Time (ET), the time taken to execute a method body (ms).\n-- Method Invocation Time (IT), the time taken to invoke a method, excluding the method execution time (ms).\n3.\nResource utilization metrics\n-- Memory Usage (MU), the memory usage of a process (in bytes).\n-- Processor Usage (PU), the percentage of the CPU load of a host.\n-- Network Usage (NU), the network bandwidth between two hosts (in bytes\/sec).\nFollowing are brief examples of a number of these metrics in order to demonstrate their usage in an adaptation scenario.\nAs Processor Usage (PU) on a certain host increases, the Execution Time (ET) of a given method executed on that host also increases [9], thus facilitating the decision of whether to move an object with high ET to another host with low PU.\nInvocation Time (IT) shows the overhead of invoking a certain method, with the invocation overhead of marshalling parameters and transmitting remote data for a remote call being orders of magnitude higher than the cost of pushing and popping data from the method call stack.\nIn other words, remote method invocation is expensive and thus should be avoided unless the gains made by moving an object to a host with more processing power (thereby reducing ET) outweigh the higher IT of the remote call.\nFinally, Number of Invocations (NI) is used primarily as a weighting factor or multiplier in order to enable the adaptation engine to predict the value over time of a particular adaptation 
decision.\n3.2 Metrics Measurement\nThis subsection discusses how each of the metrics in the subset under investigation can be obtained in terms of either direct measurement or derivation, and where in the mobile object framework such metrics should actually be measured.\nOf the environmental resource metrics, Processor Usage (PU) and Network Usage (NU) both relate to an individual machine, and thus can be directly measured through the resource monitoring subsystem that is instantiated as part of the MobJeX service.\nHowever, Memory Usage (MU), which represents the memory state of a running process rather than the memory usage of a host, should instead be collected within an individual runtime.\nThe measurement of Number of Invocations (NI) and Execution Time (ET) metrics can also be performed via direct measurement, however in this case within the mobile object implementation (mobject) itself.\nNI involves simply incrementing a counter value at either the start or end of a method call, depending upon the desired semantics with regard to thrown exceptions, while ET can be measured by starting a timer at the beginning of the method and stopping it at the end of the method, then retrieving the duration recorded by the timer.\nIn contrast, collecting Invocation Time (IT) is not as straightforward because the time taken to invoke a method can only be measured after the method finishes its execution and returns to the caller.\nIn order to collect IT metrics, an additional metric is needed.\nRyan and Rossi [9] define the metric Response Time (RT) as the total time taken for a method call to finish, which is the sum of IT and ET.\nThe Response Time can be measured directly using the same timer based technique used to measure ET, although at the start and end of the proxy call rather than the method implementation.\nOnce the Response Time (RT) is known, IT can be derived by subtracting ET from RT.\nAlthough this derivation appears simple, in practice it is 
complicated by the fact that the RT and ET values from which the IT is derived are by necessity measured using timer code in different locations i.e. RT measured in the proxy, ET measured in the method body of the object implementation.\nIn addition, the proxies are by definition not part of the MobJeX containment hierarchy, since although proxies have a reference to their target object, it is not efficient for a mobile object (mobject) to have backward references to all of the many proxies which reference it (one per source object).\nFortunately, this problem can be solved using the push based propagation mechanism described in section 3.5 in which the RT metric is pushed to the mobject so that IT can be derived from the ET value stored there.\nThe derived value of IT is then stored and propagated further as necessary according to the criteria of section 3.6, the structural relationship of which is shown in Figure 1.\n3.3 Measurement Initiation\nThe polling approach was identified as the most appropriate method for collecting resource utilisation metrics, such as Processor Usage (PU), Network Usage (NU) and Memory Usage (MU), since they are not part of, or related to, the direct flow of the application.\nTo measure PU or NU, the resource monitor polls the Operating System for the current CPU or network load respectively.\nIn the case of Memory Usage (MU), the Java Virtual Machine (JVM) [16] is polled for the current memory load.\nNote that in order to minimise the impact on application response time, the polling action should be done asynchronously in a separate thread.\nMetrics that are suitable for application initiated collection (i.e. 
as part of a normal method call) are software and performance related metrics, such as Number of Invocations (NI), Execution Time (ET), and Invocation Time (IT), which are explicitly related to the normal invocation of a method, and thus can be measured directly at this time.\n3.4 Metrics Aggregation\nIn the solution presented in this paper, all metrics collected in the same location are aggregated in a MetricsContainer with individual containers corresponding to functional components in the mobile object framework.\nThe primary advantage of aggregating metrics in containers is that it allows them to be propagated easily as a cohesive unit through the components of the mobility framework so that they can be delivered to the adaptation engine, as discussed in the following subsection.\nNote that this containment captures the different granularity of measurement attributes and their corresponding metrics.\nConsider the case of measuring memory consumption.\nAt a coarse level of granularity this could be measured for an entire application or even a system, but could also be measured at the level of an individual object; or for an even finer level of granularity, the memory consumption during the execution of a specific method.\nAs an example of the level of granularity required for mobility based adaptation, the local adaptation algorithm proposed by Ryan and Rossi [9] requires metrics representing both the duration of a method execution and the overhead of a method invocation.\nThe use of metrics containers facilitates the collection of metrics at levels of granularity ranging from a single machine down to the individual method level.\nNote that some metrics containers do not contain any Metric objects, since as previously described, the sample implementation uses only a subset of the adaptation metrics from [9].\nHowever, for the sake of consistency and to promote flexibility in terms of adding new metrics in the future, these containers are still considered in the 
present design for completeness and for future work.\n3.5 Propagation and Delivery of Metrics\nThe solution in this paper identifies two stages in the metrics collection and delivery process.\nFirstly, the propagation of metrics through the components of the mobility framework and secondly, the delivery of those metrics from the host manager\/service (or runtime if the host manager is not present) to the adaptation engine.\nRegarding propagation, in brief, it is proposed that when a lower level system component (e.g. a mobile object) detects the arrival of a new metric update, the metric is pushed (possibly along with other relevant metrics) to the next level component (i.e. the runtime or transport manager containing the mobile object), which at some later stage, again determined by configurable criteria (for example, when a sufficient number of mobjects have changed), will be pushed to the next level component (i.e. the host manager or the adaptation engine).\nA further incentive for treating propagation separately from delivery is due to the distinction between local and global adaptation [9].\nLocal adaptation is performed by an engine running on the local host (for example in MobJeX this would occur within the service) and thus in this case the delivery phase would be a local inter-process call.\nConversely, global adaptation is handled by a centralised adaptation engine running on a remote host and thus the delivery of metrics is via a remote call, and in the case where multiple runtimes exist without a separate host manager the delivery process would be even more expensive.\nTherefore, due to the presence of network communication latency, it is important for the host manager to pass as many metrics as possible to the adaptation engine in one invocation, implying the need to gather these metrics in the host manager, through some form of push or propagation, before sending them to the adaptation engine.\nConsequently, an abstract representation or model [17] 
of the system needs to be maintained.\nSuch a model would contain model entities, corresponding to each of the main system components, connected in a tree like hierarchy, which precisely reflects the structure and containment hierarchy of the actual system.\nAttaching metrics containers to model entities allows a model entity representing a host manager to be delivered to the adaptation engine, enabling it to access all metrics in that component and any of its children (i.e. runtimes and mobile objects).\nFurthermore, it would generally be expected that an adaptation engine or system controller would already maintain a model of the system that can not only be reused for propagation but also provide an effective means of delivering metrics information from the host manager to the adaptation engine.\nThe relationship between model entities and metrics containers is captured in Figure 1.\n3.6 Propagation and Delivery Criteria\nThis subsection proposes flexible criteria to allow each component to decide when it should propagate its metrics to the next component in line (Figure 1), in order to reduce the overhead incurred when metrics are unnecessarily propagated through the components of the mobility framework and delivered to the adaptation engine.\nThis paper proposes four different types of criteria that are executed at various stages of the measurement and propagation process in order to determine whether the next action should be taken or not.\nThis approach was designed such that whenever a single criterion is not satisfied, the subsequent criteria are not tested.\nThese four criteria are described in the following subsections.\nFigure 1.\nStructural overview of the hierarchical and criteria-based notification relationships between Metrics, Metrics Containers, and Model Entities\nMeasure Metric Criterion - This criterion is attached to individual Metric objects to decide whether a new metric value should be measured or not.\nThis is most useful in the case where 
it is expensive to measure a particular metric.\nFurthermore, this criterion can be used as a mechanism for limiting storage requirements and manipulation overhead in the case where metric history is maintained.\nSimple examples would be either time or frequency based, whereas more complex criteria could be domain specific for a particular metric, or based upon information stored in the metrics history.\nNotify Metrics Container Criterion - This criterion is also attached to individual Metric objects and is used to determine the circumstances under which the Metric object should notify its MetricsContainer.\nThis is based on the assumption that there may be cases where it is desirable to measure and store a metric in the history for the analysis of temporal behaviour, but the change is not yet significant enough to notify the MetricsContainer for further processing.\nA simple example of this criterion would be threshold based, in which the newest metric value is compared with the previously stored value to determine whether the difference is significant enough to be of any interest to the MetricsContainer.\nA more complex criterion could involve analysis of the history to determine whether a pattern of recent changes is significant enough to warrant further processing and possible metrics delivery.\nNotify Model Entity Criterion - Unlike the previous two criteria, this criterion is associated with a MetricsContainer.\nSince a MetricsContainer can have multiple Metric objects, of which it has explicit domain knowledge, it is able to determine if, when, and how many of these metrics should be propagated to the ModelEntity and thus become candidates for being part of the hierarchical ModelEntity push process as described below.\nThis decision making is facilitated by the notifications received from individual Metric objects as described above.\nA simple implementation would be waiting for a certain number of updates before sending a notification to the model entity.\nFor 
example, since the MobjectMetricsContainer object contains three metrics, a possible criterion would be to check if two or more of the metrics have changed.\nA slightly more advanced implementation could give each metric a weight to indicate how significant it is in the adaptation decision-making process.\nPush Criterion - The push criterion applies to all of the ModelEntities which are containers, that is the TransportManagerModelEntity, RuntimeModelEntity and ServiceModelEntity, as well as the special case of the ProxyMetricsContainer.\nThe purpose of this criterion is twofold.\nFor the TransportManagerModelEntity, this serves as a criterion to determine notification since, as with the previously described criteria, a local reference is involved.\nFor the other model entities, this serves as an opportunity to determine both when and what metrics should be pushed to the parent container, where in the case of the ServiceModelEntity the parent is the adaptation engine itself, and in the case of the ProxyMetricsContainer the target of the push is the MobjectMetricsContainer.\nFurthermore, this criterion is evaluated using information from two sources.\nFirstly, it responds to the notification received from its own MetricsContainer, but more importantly it serves to keep track of notifications from its child ModelEntities so as to determine when and what metrics information should be pushed to its parent or target.\nIn the specialised case of the push criterion for the proxy, the decision making is based on both the ProxyMetricsContainer itself, as well as the information accumulated from the individual ProxyMethodMetricsContainers.\nNote that a push criterion is not required for a mobject since it does not have any containment or aggregating responsibilities, as these are already handled by the MobjectMetricsContainer and its individual MobjectMethodMetricsContainers.\nAlthough it is always important to reduce the number of pushes, this is especially so from a 
service to a centralised global adaptation engine, or from a proxy to a mobject.\nThis is because these relationships involve a remote call [18], which is expensive due to connection setup and data marshalling and unmarshalling overhead, and thus it is more efficient to send a given amount of data in aggregate form rather than sending smaller chunks multiple times.\nA simple implementation for reducing the number of pushes can use the concept of a process period [19], in which case the model entity accumulates pushes from its child entities until the process period expires, at which time it pushes the accumulated metrics to its parent.\nAlternatively, it could be based on frequency, using domain knowledge about the type of children, for example when a significant number of mobjects in a particular application (i.e. TransportManager) have undergone substantial changes.\nFor reducing the size of pushed data, two types of pushes were considered: shallow push and deep push.\nWith shallow push, a list of metrics containers that contain updated metrics is pushed.\nIn a deep push, the model entity itself is pushed, along with its metrics container and its child entities, which also have references to metrics containers but possibly unchanged metrics.\nIn the case of the proxy, a deep push involves pushing the ProxyMetricsContainer and all of the ProxyMethodMetricsContainers, whereas a shallow push includes only the ProxyMethodMetricsContainers that meet a certain criterion.\n4.\nEVALUATION\nThe preliminary tests presented in this section aim to analyse the performance and scalability of the solution and evaluate the impact on application execution in terms of metrics collection overhead.\nAll tests were executed using two Pentium 4 3.0 GHz PCs with 1,024 MB of RAM, running Java 1.4.2_08.\nThe two machines were connected to a router with a third computer acting as a file server and hosting the external adaptation engine implemented within the MobJeX system controller, 
thereby simulating a global adaptation scenario.\nSince only a limited number of tests could be executed, this evaluation chose to measure the worst case scenario in which all metrics collection was initiated in mobjects, wherein the propagation cost is higher than for any other metrics collected in the system.\nIn addition, since exhaustive testing of criteria is beyond the scope of this paper, two different types of criteria were used in the tests.\nThe measure metrics criterion was chosen, since this represents the starting point of the measurement process and can control under what circumstances and how frequently metrics are measured.\nIn addition, the push criterion was also implemented on the service, in order to provide an evaluation of controlling the frequency of metrics delivery to the adaptation engine.\nAll other (update and push) criteria were set to \"always\", meaning that they always evaluated to true and thus a notification was posted.\nFigure 2 shows the metric collection overhead in the mobject (MMCO), for different numbers of mobjects and methods when all criteria are set to always, to provide the maximum measurement and propagation of metrics and thus an absolute worst case performance scenario.\nIt can be seen that the effect of increasing the number of mobjects or the number of methods independently is linear.\nAlthough combining the two produces approximately quadratic (n-squared) growth, the initial results are not discouraging since delivering all of the metrics associated with 20 mobjects, each having 20 methods (which constitutes quite a large application given that mobjects typically represent coarse-grained object clusters), takes approximately 400ms, which could reasonably be expected to be offset with adaptation gains.\nNote that in contrast, the proxy metrics collection overhead (PMCO) was relatively small and constant at <5ms, since in the absence of a proxy push criterion (this was only implemented on the service) 
the response time (RT) data for a single method is pushed during every invocation.\nFigure 2.\nWorst case performance characteristics\nThe next step was to determine the percentage metrics collection overhead compared with execution time in order to provide information about the execution characteristics of objects that would be suitable for adaptation using this metric collection approach.\nClearly, it is not practical to measure metrics and perform adaptation on objects with short execution times that cannot benefit from remote execution on hosts with greater processing power, thereby offsetting the overhead of remote compared with local execution as well as the cost of object migration and the metrics collection process itself.\nIn addition, to demonstrate the effect of using simple frequency-based criteria, the MMCO results as a percentage of method execution time were plotted as a 3-dimensional graph in Figure 3, with the z-axis representing the frequency used in both the measure metrics criterion and the service to adaptation engine push criterion.\nThis means that for a frequency value of 5 (n = 5), metrics are only measured on every fifth method call, which then results in a notification through the model entity hierarchy to the service, on this same fifth invocation.\nFurthermore, the value of n = 5 was also applied to the service push criterion so that metrics were only pushed to the adaptation engine after five such notifications, that is, for example, after five different mobjects had updated their metrics.\nThese results are encouraging since even for the worst case scenario of n = 1, the metric collection overhead is an acceptable 20% for a method of 1500ms duration (which is relatively short for a component or service level object in a distributed enterprise class application), with previous work on adaptation showing that such an overhead could easily be recovered by the efficiency gains made by adaptation [5].\nFurthermore, the measurement time includes 
delivering the results synchronously via a remote call to the adaptation engine on a different host, which would normally be done asynchronously, thus further reducing the impact on method execution performance.\nThe graph also demonstrates that even using modest criteria to reduce the metrics measurement to more realistic levels rapidly improves collection overhead, which drops to 20% at 500ms of execution time.\nFigure 3.\nPerformance characteristics with simple criteria\n5.\nSUMMARY AND CONCLUSIONS\nGiven the challenges of developing mobile applications that run in dynamic\/heterogeneous environments, and the subsequent interest in application adaptation, this paper has proposed and implemented an online metrics collection strategy to assist such adaptation using a mobile object framework and supporting middleware.\nControlled lab studies were conducted to determine worst case performance, as well as show the reduction in collection overhead when applying simple collection criteria.\nIn addition, further testing provided an initial indication of the characteristics of application objects (based on method execution time) that would be good candidates for adaptation using the worst case implementation of the proposed metrics collection strategy.\nA key feature of the solution was the specification of multiple configurable criteria to control the propagation of metrics through the system, thereby reducing collection overhead.\nWhile the potential efficacy of this approach was tested using simple criteria, given the flexibility of the approach we believe there are many opportunities to significantly reduce collection overhead through the use of more sophisticated criteria.\nOne such approach could be based on maintaining metrics history in order to determine the temporal behaviour of metrics and thus make more intelligent and conservative decisions regarding whether a change in a particular metric is likely to be of interest to the adaptation engine and should thus serve as a 
basis for notification for inclusion in the next metrics push.\nFurthermore, such a temporal history could also facilitate intelligent decisions regarding the collection of metrics since, for example, a metric that is known to be largely constant need not be frequently measured.\nFuture work will also involve the evaluation of a broad range of adaptation scenarios on the MobJeX framework to quantify the gains that can be made via adaptation through object mobility and thus demonstrate in practice the efficacy of the solution described in this paper.\nFinally, the authors wish to explore applying the metrics collection concepts described in this paper to a more general and reusable context management system [20].","keyphrases":["metric collect","adapt","mobil object framework","mobil object","framework","measur","mobjex","data","object-orient applic","java","metricscontain","proxi","perform and scalabl","propag and deliveri"],"prmu":["P","P","P","P","P","P","P","U","M","U","U","U","R","M"]} {"id":"C-36","title":"Encryption-Enforced Access Control in Dynamic Multi-Domain Publish\/Subscribe Networks","abstract":"Publish\/subscribe systems provide an efficient, event-based, wide-area distributed communications infrastructure. Large scale publish\/subscribe systems are likely to employ components of the event transport network owned by cooperating, but independent organisations. As the number of participants in the network increases, security becomes an increasing concern. This paper extends previous work to present and evaluate a secure multi-domain publish\/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types. Key refresh allows us to ensure forward and backward security when event brokers join and leave the network. We demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques, and by the use of caching to decrease unnecessary decryptions. 
We show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish\/subscribe networks.","lvl-1":"Encryption-Enforced Access Control in Dynamic Multi-Domain Publish\/Subscribe Networks Lauri I.W. Pesonen University of Cambridge, Computer Laboratory JJ Thomson Avenue, Cambridge, CB3 0FD, UK {first.last}@cl.cam.ac.uk David M. Eyers University of Cambridge, Computer Laboratory JJ Thomson Avenue, Cambridge, CB3 0FD, UK {first.last}@cl.cam.ac.uk Jean Bacon University of Cambridge, Computer Laboratory JJ Thomson Avenue, Cambridge, CB3 0FD, UK {first.last}@cl.cam.ac.uk ABSTRACT Publish\/subscribe systems provide an efficient, event-based, wide-area distributed communications infrastructure.\nLarge scale publish\/subscribe systems are likely to employ components of the event transport network owned by cooperating, but independent organisations.\nAs the number of participants in the network increases, security becomes an increasing concern.\nThis paper extends previous work to present and evaluate a secure multi-domain publish\/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types.\nKey refresh allows us to ensure forward and backward security when event brokers join and leave the network.\nWe demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques, and by the use of caching to decrease unnecessary decryptions.\nWe show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish\/subscribe networks.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Distributed applications General Terms Security, Performance 1.\nINTRODUCTION Publish\/subscribe is well suited as a communication mechanism for building 
Internet-scale distributed event-driven applications.\nMuch of its capacity for scale in the number of participants comes from its decoupling of publishers and subscribers by placing an asynchronous event delivery service between them.\nIn truly Internet-scale publish\/subscribe systems, the event delivery service will include a large set of interconnected broker nodes spanning a wide geographic (and thus network) area.\nHowever, publish\/subscribe systems that do span a wide geographic area are likely to also span multiple administrative domains, be they independent administrative domains inside a single organisation, multiple independent organisations, or a combination of the two.\nWhile the communication capabilities of publish\/subscribe systems are well proved, spanning multiple administrative domains is likely to require addressing security considerations.\nAs security and access control are almost the antithesis of decoupling, relatively little publish\/subscribe research has focused on security so far.\nOur overall research aim is to develop Internet-scale publish\/subscribe networks that provide secure, efficient delivery of events, fault-tolerance and self-healing in the delivery infrastructure, and a convenient event interface.\nIn [12] Pesonen et al. 
propose a multi-domain, capability-based access control architecture for publish\/subscribe systems.\nThe architecture provides a mechanism for authorising event clients to publish and subscribe to event types.\nThe privileges of the client are checked by the local broker that the client connects to in order to access the publish\/subscribe system.\nThe approach implements access control at the edge of the broker network and assumes that all brokers can be trusted to enforce the access control policies correctly.\nAny malicious, compromised or unauthorised broker is free to read and write any events that pass through it on their way from the publishers to the subscribers.\nThis might be acceptable in a relatively small system deployed inside a single organisation, but it is not appropriate in a multi-domain environment in which organisations share a common infrastructure.\nWe propose enforcing access control within the broker network by encrypting event content, and having policy dictate control over the necessary encryption keys.\nWith encrypted event content only those brokers that are authorised to access the encryption keys are able to access the event content (i.e. 
publish, subscribe to, or filter).\nWe effectively move the enforcement of access control from the brokers to the encryption key managers.\nWe expect that access control would need to be enforced in a multi-domain publish\/subscribe system when multiple organisations form a shared publish\/subscribe system yet run multiple independent applications.\nAccess control might also be needed when a single organisation consists of multiple sub-domains that deliver confidential data over the organisation-wide publish\/subscribe system.\nBoth cases require access control because event delivery in a dynamic publish\/subscribe infrastructure based on a shared broker network may well lead to events being routed through unauthorised domains along their paths from publishers to subscribers.\nThere are two particular benefits to sharing the publish\/subscribe infrastructure, both of which relate to the broker network.\nFirst, sharing brokers will create a physically larger network that will provide greater geographic reach.\nSecond, increasing the inter-connectivity of brokers will allow the publish\/subscribe system to provide higher fault-tolerance.\nFigure 1 shows the multi-domain publish\/subscribe network we use as an example throughout this paper.\nIt is based on the United Kingdom Police Forces, and we show three particular sub-domains: Metropolitan Police Domain.\nThis domain contains a set of CCTV cameras that publish information about the movements of vehicles around the London area.\nWe have included Detective Smith as a subscriber in this domain.\nCongestion Charge Service Domain.\nThe charges that are levied on the vehicles that have passed through the London Congestion Charge zone each day are issued by systems within this domain.\nThe source numberplate recognition data comes from the cameras in the Metropolitan Police Domain.\nThe fact that the CCS is only authorised to read a subset of the vehicle event data will exercise some of the key features of the 
enforceable publish\/subscribe system access control presented in this paper.\nPITO Domain.\nThe Police Information Technology Organisation (PITO) is the centre from which Police data standards are managed.\nIt is the event type owner in this particular scenario.\nEncryption protects the confidentiality of events should they be transported through unauthorised domains.\nHowever encrypting whole events means unauthorised brokers cannot make efficient routing decisions.\nOur approach is to apply encryption to the individual attributes of events.\nThis way our multi-domain access control policy works at a finer granularity - publishers and subscribers may be authorised access to a subset of the available attributes.\nIn cases where non-encrypted events are used for routing, we can reduce the total number of events sent through the system without revealing the values of sensitive attributes.\nIn our example scenario, the Congestion Charge Service would only be authorised to read the numberplate field of vehicle sightings - the location attribute would not be decrypted.\nWe thus preserve the privacy of motorists while still allowing the CCS to do its job using the shared publish\/subscribe infrastructure.\nLet us assume that a Metropolitan Police Service detective is investigating a crime and she is interested in sightings of a specific vehicle.\nThe detective gets a court order that authorises her to subscribe to numberplate events of the specific numberplate related to her case.\nCurrent publish\/subscribe access control systems enforce security at the edge of the broker network where clients connect to it.\nHowever this approach will often not be acceptable in Internet-scale systems.\nWe propose enforcing security within the broker network as well as at the edges that event clients connect to, by encrypting event content.\nPublications will be encrypted with their event type specific encryption keys.\nBy controlling access to the encryption keys, we can control 
access to the event types.\nThe proposed approach allows event brokers to route events even when they have access only to a subset of the potential encryption keys.\nWe introduce decentralised publish\/subscribe systems and relevant cryptography in Section 2.\nIn Section 3 we present our model for encrypting event content on both the event and the attribute level.\nSection 4 discusses managing encryption keys in multi-domain publish\/subscribe systems.\nWe analytically evaluate the performance of our proposal in Section 5.\nFinally Section 6 discusses related work in securing publish\/subscribe systems and Section 7 provides concluding remarks.\n2.\nBACKGROUND In this section we provide a brief introduction to decentralised publish\/subscribe systems.\nWe indicate our assumptions about multi-domain publish\/subscribe systems, and describe how these assumptions influence the developments we have made from our previously published work.\n2.1 Decentralised Publish\/Subscribe Systems A publish\/subscribe system includes publishers, subscribers, and an event service.\nPublishers publish events, subscribers subscribe to events of interest to them, and the event service is responsible for delivering published events to all subscribers whose interests match the given event.\nThe event service in a decentralised publish\/subscribe system is distributed over a number of broker nodes.\nTogether these brokers form a network that is responsible for maintaining the necessary routing paths from publishers to subscribers.\nClients (publishers and subscribers) connect to a local broker, which is fully trusted by the client.\nIn our discussion we refer to the client hosting brokers as publisher hosting brokers (PHB) or subscriber hosting brokers (SHB) depending on whether the connected client is a publisher or a subscriber, respectively.\nFigure 1: An overall view of our multi-domain publish\/subscribe deployment (key: Sub = subscriber, SHB = subscriber hosting broker, Pub = publisher, PHB = publisher hosting broker, TO = type owner, IB = intermediate broker; the figure spans the Metropolitan Police, Congestion Charge Service, and PITO domains, with Detective Smith, Cameras 1 and 2, and the Billing and Statistics Offices as clients)\nA local broker is usually either part of the same domain as the client, or it is owned by a service provider trusted by the client.\nA broker network can have a static topology (e.g. Siena [3] and Gryphon [14]) or a dynamic topology (e.g. Scribe [4] and Hermes [13]).\nOur proposed approach will work in both cases.\nA static topology enables the system administrator to build trusted domains and in that way improve the efficiency of routing by avoiding unnecessary encryptions (see Sect.\n3.4), which is very difficult with a dynamic topology.\nOn the other hand, a dynamic topology allows the broker network to dynamically re-balance itself when brokers join or leave the network either in a controlled fashion or as a result of a network or node failure.\nOur work is based on the Hermes system.\nHermes is a content-based publish\/subscribe middleware that includes strong event type support.\nIn other words, each publication is an instance of a particular predefined event type.\nPublications are type checked at the local broker of each publisher.\nOur attribute level encryption scheme assumes that events are typed.\nHermes uses a structured overlay network as a transport and therefore has a dynamic topology.\nA Hermes publication consists of an event type identifier and a set of attribute value pairs.\nThe type identifier is the SHA-1 hash of the name of the event type.\nIt is used to route the publication through the event broker network.\nIt conveniently hides the type of the publication, i.e. 
brokers are prevented from seeing which events are flowing through them unless they are aware of the specific event type name and identifier.\n2.2 Secure Event Types Pesonen et al. introduced secure event types in [11], which can have their integrity and authenticity confirmed by checking their digital signatures.\nA useful side effect of secure event types is their globally unique event type and attribute names.\nThese names can be referred to by access control policies.\nIn this paper we use the secure name of the event type or attribute to refer to the encryption key used to encrypt the event or attribute.\n2.3 Capability-Based Access Control Pesonen et al. proposed a capability-based access control architecture for multi-domain publish\/subscribe systems in [12].\nThe model treats event types as resources that publishers, subscribers, and event brokers want to access.\nThe event type owner is responsible for managing access control for an event type by issuing Simple Public Key Infrastructure (SPKI) authorisation certificates that grant the holder access to the specified event type.\nFor example, authorised publishers will have been issued an authorisation certificate that specifies that the publisher, identified by public key, is authorised to publish instances of the event type specified in the certificate.\nWe leverage the above mentioned access control mechanism in this paper by controlling access to encryption keys using the same authorisation certificates.\nThat is, a publisher who is authorised to publish a given event type is also authorised to access the encryption keys used to protect events of that type.\nWe discuss this in more detail in Sect.\n4.\n2.4 Threat model The goal of the proposed mechanism is to enforce access control for authorised participants in the system.\nIn our case the first level of access control is applied when the participant tries to join the publish\/subscribe network.\nUnauthorised event brokers are not allowed to join 
the broker network.\nSimilarly unauthorised event clients are not allowed to connect to an event broker.\nAll the connections in the broker network between event brokers and event clients utilise Transport Layer Security (TLS) [5] in order to prevent unauthorised access on the transport layer.\nThe architecture of the publish\/subscribe system means that event clients must connect to event brokers in order to be able to access the publish\/subscribe system.\nThus we assume that these clients are not a threat.\nThe event client relies completely on the local event broker for access to the broker network.\nTherefore the event client is unable to access any events without the assistance of the local broker.\nThe brokers on the other hand are able to analyse all events in the system that pass through them.\nA broker can analyse both the event traffic as well as the number and names of attributes that are populated in an event (in the case of attribute level encryption).\nThere are viable approaches to preventing traffic analysis by inserting random events into the event stream in order to produce a uniform traffic pattern.\nSimilarly attribute content can be padded to a standard length in order to avoid leaking information to the adversary.\nWhile traffic analysis is an important concern we have not addressed it further in this paper.\n3.\nENCRYPTING EVENT CONTENT We propose enforcing access control in a decentralised broker network by encrypting the contents of published events and controlling access to the encryption keys.\nEffectively we move the responsibility for access control from the broker network to the key managers.\nIt is assumed that all clients have access to a broker that they can trust and that is authorised to access the event content required by the client.\nThis allows us to implement the event content encryption within the broker network without involving the clients.\nBy delegating the encryption tasks to the brokers, we lower the number of nodes 
required to have access to a given encryption key.\nThe benefits are three-fold: i) fewer nodes handle the confidential encryption key, so there is a smaller chance of the key being disclosed; ii) key refreshes involve fewer nodes, which means that the key management algorithm will incur smaller communication and processing overheads on the publish\/subscribe system; and iii) the local broker will decrypt an event once and deliver it to all subscribers, instead of each subscriber having to decrypt the same event.\n(The encryption keys are changed over time in response to brokers joining or leaving the network, and periodically to reduce the amount of time any single key is used.\nThis is discussed in Sect.\n4.2.)\nDelegating encryption tasks to the local broker is appropriate, because encryption is a middleware feature used to enforce access control within the middleware system.\nIf applications need to handle encrypted data in the application layer, they are free to publish encrypted data over the publish\/subscribe system.\nWe can implement encryption either at the event level or the attribute level.\nEvent encryption is simpler, requires fewer keys, fewer independent cryptographic operations, and thus is usually faster.\nAttribute encryption enables access control at the attribute level, which means that we have a more expressive and powerful access control mechanism, while usually incurring a larger performance penalty.\nIn this section we discuss encrypting event content both at the event level and the attribute level; avoiding leaking information to unauthorised brokers by encrypting subscription filters; avoiding unnecessary encryptions between authorised brokers; and finally, how event content encryption was implemented in our prototype.\nNote that since no publish\/subscribe client is ever given access to encryption keys, any encryption performed by the brokers is necessarily completely transparent to all clients.\n3.1 Event Encryption In event encryption 
all the event attributes are encrypted as a single block of plaintext.\nThe event type identifier is left intact (i.e. in plaintext) in order to facilitate event routing in the broker network.\nThe globally unique event type identifier specifies the encryption key used to encrypt the event content.\nEach event type in the system will have its own individual encryption key.\nKeys are refreshed, as discussed in Sect.\n4.2.\nWhile in transit the event will consist of a tuple containing the type identifier, a publication timestamp, ciphertext, and a message authentication tag.\nEvent brokers that are authorised to access the event, and thus have access to the encryption key, can decrypt the event and implement content-based routing.\nEvent brokers that do not have access to the encryption key will be forced to route the event based only on its type.\nThat is, they will not be able to make intelligent decisions about whether events need to be transmitted down their outgoing links.\nEvent encryption results in one encryption at the publisher hosting broker, and one decryption at each filtering intermediate broker and subscriber hosting broker that the event passes through, regardless of the number of attributes.\nThis results in a significant performance advantage compared to attribute encryption.\n3.2 Attribute Encryption In attribute encryption each attribute value in an event is encrypted separately with its own encryption key.\nThe encryption key is identified by the attribute's globally unique identifier (the globally unique event identifier defines a namespace inside which the attribute identifier is a fully qualified name).\nThe event type identifier is left intact to facilitate event routing for unauthorised brokers.\nThe attribute identifiers are also left intact to allow authorised brokers to decrypt the attribute values with the correct keys.\nBrokers that are authorised to access some of the attributes in an event can implement content-based routing 
over the attributes that are accessible to them. An attribute-encrypted event in transit consists of the event type identifier, a publication timestamp, and a set of attribute tuples: ⟨type, timestamp, {attribute tuples}⟩. Each attribute tuple consists of an attribute identifier, the ciphertext, and a message authentication tag: ⟨id, ciphertext, tag⟩. The attribute identifier is the SHA-1 hash of the attribute name used in the event type definition. Using the attribute identifier in the published event instead of the attribute name prevents unauthorised parties from learning which attributes are included in the publication. Compared with event encryption, attribute encryption usually results in larger processing overheads, because each attribute is encrypted separately. In the encryption process the initialisation of the encryption algorithm takes a significant portion of the total running time; once the algorithm is initialised, increasing the amount of data to be encrypted does not affect the running time very much. This disparity is emphasised in attribute encryption, where the encryption algorithm must be initialised for each attribute separately and the amount of data encrypted is relatively small. As a result, attribute encryption incurs larger processing overheads than event encryption, which can be seen clearly in the performance results in Sect. 5. The advantage of attribute encryption is that the type owner is able to control access to the event type at the attribute level. The event type owner can therefore allow clients to have different levels of access to the same event type. Attribute-level encryption also enables content-based routing in cases where an intermediate broker has access to only some of the attributes of the event, thus reducing the overall impact of event delivery on the broker network. Therefore the choice between event and attribute encryption is a trade-off between expressiveness and performance, and depends on the requirements of the distributed
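The attribute-identifier construction above (SHA-1 of the attribute name) is straightforward to sketch. The helper names below are hypothetical, and `encrypt` stands in for a per-attribute AEAD call:

```python
import hashlib

def attribute_id(name: str) -> str:
    """SHA-1 hash of the attribute name, published in place of the name."""
    return hashlib.sha1(name.encode("utf-8")).hexdigest()

def encrypt_attributes(attributes, keys, encrypt):
    """One <id, ciphertext, tag> tuple per attribute, each under its own key."""
    tuples = []
    for name, value in attributes.items():
        aid = attribute_id(name)
        ciphertext, tag = encrypt(keys[aid], value)  # hypothetical AEAD call
        tuples.append((aid, ciphertext, tag))
    return tuples
```

Because the hash is deterministic, authorised brokers can map identifiers back to the attributes named in the event type definition, while eavesdroppers only see opaque digests.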
application. The expressiveness provided by attribute encryption can be emulated by introducing a new event type for each group of subscribers with the same authorisation. The publisher would then publish an instance of each of these types instead of publishing just a combined event. For example, in our London police network, the congestion control cameras would have to publish one event for the CCS and another for the detective. This approach could become difficult to manage if the attributes have a variety of security properties, since a large number of event types would be required and policies and subscriptions may change dynamically. This approach also creates a large number of extra events that must be routed through the network, as is shown in Sect. 5.3.

3.3 Encrypting Subscriptions

In order to fully protect the confidentiality of event content we must also encrypt subscriptions. Encrypted subscriptions guarantee: i) that only authorised brokers are able to submit subscriptions to the broker network, and ii) that unauthorised brokers do not gain information about event content by monitoring which subscriptions a given event matches. Regarding the first case, an unauthorised broker could create subscriptions with appropriately chosen filters, route them towards the root of the event dissemination tree, and monitor which events were delivered to it as matching the subscription. The fact that an event matched the subscription would leak information to the broker about the event content even if the event itself was still encrypted. In the second case, even if an unauthorised broker was unable to create subscriptions itself, it could still look at subscriptions that were routed through it, take note of the filters on those subscriptions, and monitor which events are delivered to it by upstream brokers as matching the subscription filters. This would again reveal information about the event content to the unauthorised broker. In the case of encrypting
complete events, we also encrypt the complete subscription filter. The event type identifier in the subscription must be left intact to allow brokers to route events based on their topic when they are not authorised to access the filter. In such cases the unauthorised broker is required to assume that events of that type match all filter expressions. With attribute encryption, each attribute filter is encrypted individually, much as when encrypting a publication. In addition to the event type identifier, the attribute identifiers are left intact to allow authorised brokers to decrypt the filters they have access to and route the event based on whether it matches the decrypted filters.

3.4 Avoiding Unnecessary Cryptographic Operations

Encrypting the event content is not necessary if the current broker and the next broker down the event dissemination tree have the same credentials with respect to the event type at hand. For example, one can assume that all brokers inside an organisation share the same credentials; therefore, as long as the next broker is a member of the same domain, the event can be routed to it in plaintext. With attribute encryption it is possible that the neighbouring broker is authorised to access only a subset of the decrypted attributes, in which case the attributes that the broker is not authorised to access are passed to it encrypted. In order to know when it is safe to pass the event in plaintext form, the brokers exchange credentials as part of a handshake when they connect to each other. When the brokers are able to verify each other's credentials, they add them to the routing table for future reference. If a broker acquires new credentials after the initial handshake, it presents these new credentials to its neighbours while in session. Regardless of its neighbouring brokers, the PHB will always encrypt the event content, because it is cheaper to encrypt the event once at the root of the event dissemination tree. In
Hermes the rendezvous node for each event type is selected uniformly at random: the event type name is hashed with the SHA-1 hash algorithm to produce the event type identifier, and the identifier is then used to select the rendezvous node in the structured overlay network.

[Figure 2: Node addressing is evenly distributed across the network, thus rendezvous nodes may lie outside the domain that owns an event type.]

[Figure 3: Caching decrypted data to increase efficiency when delivering to peers with equivalent security privileges.]

Therefore it is probable that the rendezvous node will reside outside the current domain. This situation is illustrated in the event dissemination tree in Fig. 2. So even with domain-internal applications, where the event could be routed from the publisher to all subscribers in plaintext form, the event content will in most cases have to be encrypted for it to be routed to the rendezvous node. To avoid unnecessary decryptions, we attach a plaintext content cache to encrypted events. A broker fills the cache with content that it has decrypted, for example in order to filter on the content. The cache is accessed by the broker when it delivers an event to a local subscriber after first checking whether the event matches the subscription filter, but the broker also sends the cache to the next broker along with the encrypted event. The next broker can look an attribute up in the cache instead of having to decrypt it. If the event is being sent to an unauthorised broker, the cache is discarded before the event is sent. Obviously sending the cache with the encrypted event adds to the communication cost, but this is outweighed by the saving in encryption/decryption processing. In Fig.
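The plaintext-cache handling just described might look like the following sketch. All names are hypothetical, and `decrypt` stands in for the EAX decryption call:

```python
def filter_with_cache(event, cache, aid, key, decrypt, predicate):
    """Use the plaintext cache if possible; decrypt and fill it otherwise."""
    if aid not in cache:
        cache[aid] = decrypt(key, event[aid])
    return predicate(cache[aid])

def outgoing_cache(cache, next_broker_authorised):
    """Never forward decrypted content to an unauthorised broker."""
    return dict(cache) if next_broker_authorised else {}
```

The key property is that each attribute is decrypted at most once along a chain of equally-privileged brokers, and the cache is dropped at the boundary to an unauthorised broker.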
3 we see two separate cached plaintext streams accompanying an event, depending on the inter-broker relationships in two different domains. We show in Sect. 5.2 that sending encrypted messages with a full plaintext cache incurs almost no overhead compared to sending plaintext messages.

3.5 Implementation

In our implementation we have used the EAX mode [2] of operation when encrypting events, attributes, and subscription filters. EAX is a mode of operation for block ciphers, also called an Authenticated Encryption with Associated Data (AEAD) algorithm, that simultaneously provides both data confidentiality and integrity protection. The algorithm implements a two-pass scheme: during the first pass the plaintext is encrypted, and on the second pass a message authentication code (MAC) is generated for the encrypted data. The EAX mode is compatible with any block cipher. We decided to use the Advanced Encryption Standard (AES) [9] algorithm in our implementation because of its standard status and the fact that the algorithm has gone through thorough cryptanalysis during its existence and no serious vulnerabilities have been found thus far. In addition to providing both confidentiality and integrity protection, the EAX mode uses the underlying block cipher in counter mode (CTR mode) [21]. A block cipher in counter mode produces a stream of key bits that are then XORed with the plaintext; effectively, CTR mode transforms a block cipher into a stream cipher. The advantage of stream ciphers is that the ciphertext is the same length as the plaintext, whereas with block ciphers the plaintext must be padded to a multiple of the block cipher's block length (e.g.
the AES block size is 128 bits). Avoiding padding is very important in attribute encryption, because the padding might increase the size of the attribute disproportionately. For example, a single integer might be 32 bits in length, which would be padded to 128 bits if we used a block cipher directly. With event encryption the message expansion is less significant, since the length of padding required to reach the next 16-byte multiple will probably be a small proportion of the overall plaintext length. In encryption mode the EAX algorithm takes as input a nonce (a number used once), an encryption key, and the plaintext, and it returns the ciphertext and an authentication tag. In decryption mode the algorithm takes as input the encryption key, the ciphertext, and the authentication tag, and it returns either the plaintext or an error if the authentication check failed. The nonce is expanded to the block length of the underlying block cipher by passing it through an OMAC construct (see [7]). It is important that particular nonce values are not reused, as otherwise the block cipher in CTR mode would produce an identical key stream. In our implementation we used the PHB-defined event timestamp (a 64-bit value counting the milliseconds since January 1, 1970 UTC) appended with the PHB's identity (i.e.
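The nonce construction described above (64-bit timestamp followed by the PHB's identity) can be sketched directly; the function name is ours, and the identity is any byte encoding of the broker's public key:

```python
import struct

def make_nonce(timestamp_ms: int, broker_identity: bytes) -> bytes:
    """64-bit big-endian publication timestamp followed by the PHB's identity.

    Nonces stay unique as long as each broker's timestamps increase
    monotonically, because the identity part disambiguates between brokers.
    """
    return struct.pack(">Q", timestamp_ms) + broker_identity
```

Two brokers can safely publish at the same millisecond, since their identities differ; the broker itself must never reuse a timestamp.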
public key) as the nonce. The broker is responsible for ensuring that the timestamps increase monotonically. The authentication tag is appended to the produced ciphertext to create a two-tuple. With event encryption a single tag is created for the encrypted event. With attribute encryption each attribute is encrypted and authenticated separately, and each has its own individual tag. The tag length is configurable in EAX without restrictions, which allows the user to make a trade-off between the authenticity guarantees provided by EAX and the added communication overhead. We used a tag length of 16 bytes in our implementation, but one could make the tag length a publisher/subscriber-defined parameter for each publication/subscription, or include it in the event type definition to make it a type-specific parameter. EAX also supports including unencrypted associated data in the tag calculation. The integrity of this data is protected, but it is still readable by everyone. This feature could be used with event encryption in cases where some of the event content is public and thus would be useful for content-based routing: the integrity of the data would still be protected against changes, but unauthorised brokers would be able to apply filters. We have included the event type identifier as associated data in order to protect its integrity. Other AEAD algorithms include the offset codebook mode (OCB) [17] and the counter with CBC-MAC mode (CCM) [22]. In contrast to the EAX mode, the OCB mode requires only one pass over the plaintext, which makes it roughly twice as fast as EAX. Unfortunately the OCB mode has a patent application in place in the USA, which restricts its use. The CCM mode is the predecessor of the EAX mode; it was developed in order to provide a free alternative to OCB, and EAX was developed later to address some issues with CCM [18]. Similarly to EAX, CCM is also a two-pass mode.

4. KEY MANAGEMENT

In both encryption approaches
the encrypted event content has a globally unique identifier (i.e. the event type or the attribute identifier). That identifier is used to determine the encryption key to use when encrypting or decrypting the content. Each event type, in event encryption, and each attribute, in attribute encryption, has its own individual encryption key. By controlling access to the encryption key we effectively control access to the encrypted event content. In order to control access to the encryption keys we form a key group of brokers for each individual encryption key. The key group is used to refresh the key when necessary and to deliver the new key to all current members of the key group. The key group manager is responsible for verifying that a new member requesting to join the key group is authorised to do so. Therefore the key group manager must be trusted by the type owner to enforce the access control policy. We assume that the key group manager is either a trusted third party or, alternatively, a member of the type owner's domain. In [12] Pesonen et al. proposed a capability-based access control architecture for multi-domain publish/subscribe systems. The approach uses capabilities to decentralise the access control policy amongst the publish/subscribe nodes (i.e.
clients and brokers): each node holds a set of capabilities that define the authority granted to that node. Authority to access a given event type is granted by the owner of that type issuing a capability to a node. The capability defines the event type, the action, and the attributes that the node is authorised to access. For example, a tuple ⟨Numberplate, subscribe, all attributes⟩ would authorise the holder to subscribe to Numberplate events with access to all attributes in the published events.

[Figure 4: The steps involved for a broker to successfully join a key group: 1. the type owner grants authorisation for the Numberplate key; 2. the broker requests to join the Numberplate key group; 3. the key manager may check the broker's credentials at the Access Control Service; 4. the key manager may check that the type owner permits access; 5. if the broker satisfies all checks, it will begin receiving the appropriate keys.]

The sequence of events required for a broker to successfully join a key group is shown in Fig. 4. Both the client hosting broker and the client must be authorised to make the client's request. That is, if the client makes a subscription request for Numberplate events, both the client and the local broker must be authorised to subscribe to Numberplate events. This is because, from the perspective of the broker network, the local broker acts as a proxy for the client. We use the same capabilities to authorise membership in a key group that are used to authorise publish/subscribe requests. Not doing so could lead to the inconsistent situation where a SHB is authorised to make a subscription on behalf of its clients, but is not able to decrypt incoming event content for them. In the Numberplate example above, the local broker holding the above capability is authorised to join the Numberplate key group as well as the key groups for all the attributes in the Numberplate event type.

4.1 Secure Group Communication

Event content encryption in a decentralised multi-domain
publish/subscribe system can be seen as a sub-category of secure group communication. In both cases the key management system must scale well with the number of clients, clients might be spread over large geographic areas, there might be high rates of churn in group membership, and all members must be synchronised with each other in time in order to use the same encryption key at the same time. There are a number of scalable key management protocols for secure group communication [15]. We have implemented the One-Way Function Tree (OFT) [8] protocol as a proof of concept. We chose to implement OFT because of its relative simplicity and good performance. Our implementation uses the same structured overlay network used by the broker network as a transport. The OFT protocol is based on a binary tree where the participants are at the leaves of the tree. It scales as log2 n in processing and communication costs, as well as in the size of the state stored at each participant, which we have verified in our simulations.

4.2 Key Refreshing

Traditionally in group key management schemes the encryption key is refreshed when a new member joins the group, an existing member leaves the group, or a timer expires. Refreshing the key when a new member joins provides backward secrecy, i.e. the new member is prevented from accessing old messages. Similarly, refreshing the key when an existing member leaves provides forward secrecy, i.e.
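The log2 n scaling claimed for OFT above can be illustrated with a toy cost function. This is a simplification under the assumption of a balanced binary tree: it counts only the keys on a member's path to the root, which is what each participant must store.

```python
import math

def oft_state_per_member(members: int) -> int:
    """Keys on a member's path to the root of a balanced one-way function
    tree: ceil(log2 n), with a floor of one key for a singleton group."""
    return max(1, math.ceil(math.log2(members)))
```

Doubling the key group from 1024 to 2048 brokers therefore adds only one key of state, message cost, and processing per member.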
the old member is prevented from accessing future messages. Timer-triggered refreshes are issued periodically in order to limit the damage caused by the current key being compromised. Even though state-of-the-art key management protocols are efficient, refreshing the key unnecessarily introduces extra traffic and processing amongst the key group members. In our case key group membership is based on the broker holding a capability that authorises it to join the key group. The capability has a set of validity conditions that in their simplest form define a time period when the certificate is valid, and in more complex cases involve on-line checks back towards the issuer. In order to avoid unnecessary key refreshes, the key manager looks at the certificate validity conditions of the joining or leaving member. In the case of a joining member, if the manager can ascertain that the certificate was valid at the time of the previous key refresh, a new key refresh can be avoided. Similarly, instead of refreshing the key immediately when a member leaves the key group, the key manager can cache their credentials and refresh the key only when the credentials expire. These situations are both illustrated in Fig. 5. It can be assumed that the credentials granted to brokers are relatively static, i.e.
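The two refresh-avoidance rules just described reduce to simple timestamp comparisons. The following is a sketch with hypothetical names; times are abstract integers:

```python
def refresh_on_join(valid_from, last_refresh):
    """Refresh only if the joiner's capability became valid after the
    previous refresh; if it was already valid then, the joiner could
    legitimately have held the current key, so no refresh is needed."""
    return valid_from > last_refresh

def refresh_time_on_leave(valid_until, now):
    """Defer the refresh until the leaver's cached credentials expire,
    rather than refreshing immediately on departure."""
    return max(now, valid_until)
```

Both rules trade a small window of key exposure, bounded by the credential validity period, for substantially fewer group-wide rekey operations.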
once a domain is authorised to access an event type, the authority will be delegated to all brokers of that domain, and they will have the authority for the foreseeable future. More fine-grained and dynamic access control would be implemented at the edge of the broker network, between the clients and the client hosting brokers. When an encryption key is refreshed the new key is tagged with a timestamp. The encryption key to use for a given event is selected based on the event's publication timestamp. The old keys are kept for a reasonable amount of time in order to allow for some clock drift. Setting this value is part of the key management protocol; exactly how long this time should be will depend on the nature of the application and possibly the size of the network, and it can be configured independently per key group if necessary.

5. EVALUATION

In order to evaluate the performance of event content encryption we have implemented both encryption approaches running over our implementation of the Hermes publish/subscribe middleware. The implementation supports three modes in a single publish/subscribe system: plaintext content, event encryption, and attribute encryption. We ran three performance tests in a discrete event simulator. The simulator was run on an Intel P4 3.2GHz workstation with 1GB of main memory. We decided to run the tests in an event simulator instead of on an actual deployed system in order to be able to measure the aggregate time it takes to handle all messages in the system. The following sections describe the specific test setups and the results in more detail.

5.1 End-to-End Overhead

The end-to-end overhead test shows how much the overall message throughput of the simulator was affected by event content encryption. We formed a broker network with two brokers, attached a publisher to one of them and a subscriber to the other. The subscriber subscribed to the advertised event type without any filters, i.e.
each publication matched the subscriber's subscription and thus was delivered to the subscriber. The test measures the combined time it takes to publish and deliver 100,000 events. If the content is encrypted, this includes both encrypting the content at the PHB and decrypting it at the SHB. In the test the number of attributes in the event type is increased from 1 to 25 (the x-axis). Each attribute is set to a 30-character string. For each number of attributes in the event type the publisher publishes 100,000 events, and the elapsed time is measured to derive the message throughput. The test was repeated five times for each number of attributes and we use the average of all iterations in the graph; the results were highly consistent, so the standard deviation is not shown. The same tests were run with no content encryption, event encryption, and attribute encryption. As can be seen in Fig. 6, event content encryption introduces a large overhead compared to not using encryption. The throughput when using attribute encryption with an event type with one attribute is 46% of the throughput achieved when events are sent in plaintext. When the number of attributes increases, the performance gap widens as well: with ten attributes the throughput with attribute encryption has decreased to 11.7% of plaintext performance. Event encryption fares better because of fewer encryption operations: the increase in the amount of encrypted data does not affect the performance as much as the number of individual encryption operations does. The difference in performance between event encryption and attribute encryption with only one attribute is caused by the Java object serialisation mechanism: in the event encryption case the whole attribute structure is serialised, which results in more objects than serialising a single attribute value. A more efficient implementation would provide its own marshalling mechanism. Note that the EAX implementation we use runs the
nonce (i.e. initialisation vector) through an OMAC construct to increase its randomness. Since the nonce is not required to be kept secret (just unique), there is a potential time/space trade-off, which we have not yet investigated, in attaching extra nonce attributes that have already had this OMAC construct applied to them.

5.2 Domain Internal Events

We explained in Sect. 3.4 that event content decryption and encryption can be avoided if both brokers are authorised to access the event content. This test was designed to show that the use of the encrypted event content mechanism between two authorised brokers incurs only a small performance overhead. In this test we again form a broker network with two brokers.

[Figure 5: How the key refresh schedule is affected by brokers joining and leaving key groups.]

[Figure 6: Throughput of Events in a Simulator.]

Both brokers are configured with the same credentials. The publisher is attached to one of the brokers and the subscriber to the other, and again the subscriber does not specify any filters in its subscription. The publisher publishes 100,000 events and the test measures the elapsed time in order to derive the system's message throughput. The event content is encrypted outside the timing measurement, i.e. the encryption cost is not included in the measurements. The goal is to model an environment where a broker has received a message from another authorised broker and routes the event to a third authorised broker. In this scenario the middle broker needs neither to decrypt nor to encrypt the event content. As shown in Fig.
7, the elapsed time was measured as the number of attributes in the published event was increased from 1 to 25. The attribute values in each case are 30-character strings. Each test is repeated five times, and we use the average of all iterations in the graph. The same test was then repeated with no encryption, event encryption, and attribute encryption turned on. The encrypted modes follow each other very closely. Predictably, the plaintext mode performs a little better for all attribute counts. The difference can be explained partially by the encrypted events being larger in size, because in this test they include both the plaintext cache and the encrypted content. The difference in performance is 3.7% with one attribute and 2.5% with 25 attributes. We believe that the roughness of the graphs can be explained by the Java garbage collector interfering with the simulation; the fact that all three graphs show the same irregularities supports this theory.

[Figure 7: Throughput of Domain Internal Events.]

5.3 Communication Overhead

Through the definition of multiple event types, it is possible to emulate the expressiveness of attribute encryption using only event content encryption. The last test we ran shows the communication overhead caused by this emulation technique, compared to using real attribute encryption. In the test we form a broker network of 2000 brokers. We attach one publisher to one of the brokers, and an increasing number of subscribers to the remaining brokers. Each subscriber simulates a group of subscribers that all have the same access rights to the published event. Each subscriber group has its own event type in the test. The outcome of this test is shown in Fig.
8. The number of subscriber groups is increased from 1 to 50 (the x-axis). For each n subscriber groups the publisher publishes one event, representing the use of attribute encryption, and n events, representing the per-group events needed under event encryption. We count the number of hops each publication makes through the broker network (the y-axis). Note that Fig. 8 shows workloads beyond what we would expect in common usage, in which many event types are likely to contain fewer than ten attributes. The subscriber groups used in this test represent disjoint permission sets over such event attributes. The number of these sets can be determined from the particular access control policy in use, but will be a value less than or equal to the factorial of the number of attributes in a given event type. The graphs indicate that attribute encryption performs better than event encryption even for small numbers of subscriber groups. Indeed, with only two subscriber groups (e.g. the case with Numberplate events) the hop count increases from 7.2 hops for attribute encryption to 16.6 hops for event encryption. With 10 subscriber groups the corresponding numbers are 24.2 and 251.0, i.e. an order of magnitude difference.

6. RELATED WORK

Wang et al. [20] have categorised the security issues that need to be addressed in publish/subscribe systems in the future. The paper is a comprehensive overview of security issues in publish/subscribe systems and as such aims to draw attention to the issues rather than to provide solutions. Bacon et al.
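The communication overhead of the emulation technique comes down to a simple count: attribute encryption publishes one combined event per logical publication, while the event-type-per-group emulation publishes one event per subscriber group. A trivial helper (ours, for illustration; actual hop counts additionally depend on the broker topology):

```python
def publications_required(subscriber_groups: int, attribute_encryption: bool) -> int:
    """Events the publisher must emit per logical publication: one combined
    event under attribute encryption, one per group under the emulation."""
    return 1 if attribute_encryption else subscriber_groups
```

Each extra publication is then routed independently through the broker network, which is why the measured hop counts diverge so quickly.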
in [1] examine the use of role-based access control in multi-domain, distributed publish/subscribe systems. Their work is complementary to this paper: distributed RBAC is one potential policy formalism that might use the enforcement mechanisms we have presented. Opyrchal and Prakash address the problem of event confidentiality at the last link between the subscriber and the SHB in [10]. They correctly state that a secure group communication approach is infeasible in an environment like publish/subscribe that has highly dynamic group memberships. As a solution they propose a scheme utilising key caching and subscriber grouping in order to minimise the number of required encryptions when delivering a publication from a SHB to a set of matching subscribers. We assume in our work that the SHB is powerful enough to manage a TLS-secured connection for each local subscriber.

[Figure 8: Hop Counts When Emulating Attribute Encryption.]

Both Srivatsa et al. [19] and Raiciu et al. [16] present mechanisms for protecting the confidentiality of messages in decentralised publish/subscribe infrastructures. Compared to our work, both papers aim to provide the means for protecting the integrity and confidentiality of messages, whereas the goal of our work is to enforce access control inside the broker network. Raiciu et al. assume in their work that none of the brokers in the network are trusted, and therefore all events are encrypted from publisher to subscriber and all matching is based on encrypted events. In contrast, we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching. We also assume that the publisher and subscriber hosting brokers are always trusted to access the publication. The contributions of Srivatsa et al.
and Raiciu et al. are complementary to the contributions in this paper. Finally, Fiege et al. address the related topic of event visibility in [6]. While that work concentrated on using scopes as a mechanism for structuring large-scale event-based systems, the notion of event visibility does resonate with access control to some extent.

7. CONCLUSIONS

Event content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish/subscribe system. Encryption causes an overhead, but i) there may be no alternative when access control is required, and ii) the performance penalty can be lessened with implementation optimisations, such as passing cached plaintext content alongside encrypted content between brokers with identical security credentials. This is particularly appropriate if broker-to-broker connections are secured by default, so that wire-sniffing is not an issue. Attribute-level encryption can be implemented in order to enforce fine-grained access control policies. In addition to providing attribute-level access control, attribute encryption enables partially authorised brokers to implement content-based routing based on the attributes that are accessible to them. Our experiments show i) that by caching plaintext and ciphertext content when possible, we are able to deliver performance comparable to plaintext events, and ii) that attribute encryption within an event incurs far less overhead than defining separate event types for the attributes that need different levels of protection. In environments comprising multiple domains, where event brokers have different security credentials, we have quantified how a trade-off can be made between performance and expressiveness.

Acknowledgements

We would like to thank the anonymous reviewers for their very helpful comments. Lauri Pesonen is supported by EPSRC (GR/T28164) and the Nokia Foundation. David Eyers is supported by EPSRC
(GR\/S94919).\n8.\nREFERENCES [1] J. Bacon, D. M. Eyers, K. Moody, and L. I. W. Pesonen.\nSecuring publish\/subscribe for multi-domain systems.\nIn G. Alonso, editor, Middleware, volume 3790 of Lecture Notes in Computer Science, pages 1-20.\nSpringer, 2005.\n[2] M. Bellare, P. Rogaway, and D. Wagner.\nEAX: A conventional authenticated-encryption mode.\nCryptology ePrint Archive, Report 2003\/069, 2003.\nhttp:\/\/eprint.iacr.org\/.\n[3] A. Carzaniga, D. S. Rosenblum, and A. L. Wolf.\nDesign and evaluation of a wide-area event notification service.\nACM Transactions on Computer Systems, 19(3):332-383, Aug. 2001.\n[4] M. Castro, P. Druschel, A. Kermarrec, and A. Rowstron.\nSCRIBE: A large-scale and decentralized application-level multicast infrastructure.\nIEEE Journal on Selected Areas in Communications (JSAC), 20(8):1489-1499, Oct. 2002.\n[5] T. Dierks and C. Allen.\nThe TLS protocol, version 1.0.\nRFC 2246, Internet Engineering Task Force, Jan. 1999.\n[6] L. Fiege, M. Mezini, G. Mühl, and A. P. Buchmann.\nEngineering event-based systems with scopes.\nIn ECOOP '02: Proceedings of the 16th European Conference on Object-Oriented Programming, pages 309-333, London, UK, 2002.\nSpringer-Verlag.\n[7] T. Iwata and K. Kurosawa.\nOMAC: One-key CBC MAC, Jan. 14 2002.\n[8] D. A. McGrew and A. T. Sherman.\nKey establishment in large dynamic groups using one-way function trees.\nTechnical Report 0755, TIS Labs at Network Associates, Inc., Glenwood, MD, May 1998.\n[9] National Institute of Standards and Technology (NIST).\nAdvanced Encryption Standard (AES).\nFederal Information Processing Standards Publication (FIPS PUB) 197, Nov. 2001.\n[10] L. Opyrchal and A. Prakash.\nSecure distribution of events in content-based publish subscribe systems.\nIn Proc. of the 10th USENIX Security Symposium.\nUSENIX, Aug. 2001.\n[11] L. I. W. Pesonen and J. 
Bacon.\nSecure event types in content-based, multi-domain publish\/subscribe systems.\nIn SEM '05: Proceedings of the 5th International Workshop on Software Engineering and Middleware, pages 98-105, New York, NY, USA, Sept. 2005.\nACM Press.\n[12] L. I. W. Pesonen, D. M. Eyers, and J. Bacon.\nA capabilities-based access control architecture for multi-domain publish\/subscribe systems.\nIn Proceedings of the Symposium on Applications and the Internet (SAINT 2006), pages 222-228, Phoenix, AZ, Jan. 2006.\nIEEE.\n[13] P. R. Pietzuch and J. M. Bacon.\nHermes: A distributed event-based middleware architecture.\nIn Proc. of the 1st International Workshop on Distributed Event-Based Systems (DEBS '02), pages 611-618, Vienna, Austria, July 2002.\nIEEE.\n[14] P. R. Pietzuch and S. Bhola.\nCongestion control in a reliable scalable message-oriented middleware.\nIn M. Endler and D. Schmidt, editors, Proc. of the 4th Int. Conf. on Middleware (Middleware '03), pages 202-221, Rio de Janeiro, Brazil, June 2003.\nSpringer.\n[15] S. Rafaeli and D. Hutchison.\nA survey of key management for secure group communication.\nACM Computing Surveys, 35(3):309-329, 2003.\n[16] C. Raiciu and D. S. Rosenblum.\nEnabling confidentiality in content-based publish\/subscribe infrastructures.\nIn SecureComm '06: Proceedings of the Second IEEE\/CreateNet International Conference on Security and Privacy in Communication Networks, 2006.\n[17] P. Rogaway, M. Bellare, J. Black, and T. Krovetz.\nOCB: A block-cipher mode of operation for efficient authenticated encryption.\nIn ACM Conference on Computer and Communications Security, pages 196-205, 2001.\n[18] P. Rogaway and D. Wagner.\nA critique of CCM, Feb. 2003.\n[19] M. Srivatsa and L. Liu.\nSecuring publish-subscribe overlay services with EventGuard.\nIn CCS '05: Proceedings of the 12th ACM Conference on Computer and Communications Security, pages 289-298, New York, NY, USA, 2005.\nACM Press.\n[20] C. Wang, A. Carzaniga, D. Evans, and A. L. 
Wolf.\nSecurity issues and requirements in internet-scale publish-subscribe systems.\nIn Proc. of the 35th Annual Hawaii International Conference on System Sciences (HICSS '02), Big Island, HI, USA, 2002.\nIEEE.\n[21] W. Diffie and M. Hellman.\nPrivacy and authentication: An introduction to cryptography.\nIn Proceedings of the IEEE, volume 67, pages 397-427, 1979.\n[22] D. Whiting, R. Housley, and N. Ferguson.\nCounter with CBC-MAC (CCM).\nRFC 3610, Internet Engineering Task Force, Sept. 2003.\nEncryption-Enforced Access Control in Dynamic Multi-Domain Publish\/Subscribe Networks\nABSTRACT\nPublish\/subscribe systems provide an efficient, event-based, wide-area distributed communications infrastructure.\nLarge-scale publish\/subscribe systems are likely to employ components of the event transport network owned by cooperating, but independent organisations.\nAs the number of participants in the network increases, security becomes an increasing concern.\nThis paper extends previous work to present and evaluate a secure multi-domain publish\/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types.\nKey refresh allows us to ensure forward and backward security when event brokers join and leave the network.\nWe demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques, and by the use of caching to decrease unnecessary decryptions.\nWe show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish\/subscribe networks.\n1.\nINTRODUCTION\nPublish\/subscribe is well suited as a communication mechanism for building Internet-scale distributed event-driven applications.\nMuch of its capacity for scale in the number of participants comes from its decoupling of publishers and subscribers by placing an asynchronous event delivery service 
between them.\nIn truly Internet-scale publish\/subscribe systems, the event delivery service will include a large set of interconnected broker nodes spanning a wide geographic (and thus network) area.\nHowever, publish\/subscribe systems that do span a wide geographic area are likely to also span multiple administrative domains, be they independent administrative domains inside a single organisation, multiple independent organisations, or a combination of the two.\nWhile the communication capabilities of publish\/subscribe systems are well proved, spanning multiple administrative domains is likely to require addressing security considerations.\nAs security and access control are almost the antithesis of decoupling, relatively little publish\/subscribe research has focused on security so far.\nOur overall research aim is to develop Internet-scale publish\/subscribe networks that provide secure, efficient delivery of events, fault-tolerance and self-healing in the delivery infrastructure, and a convenient event interface.\nIn [12] Pesonen et al. 
propose a multi-domain, capability-based access control architecture for publish\/subscribe systems.\nThe architecture provides a mechanism for authorising event clients to publish and subscribe to event types.\nThe privileges of the client are checked by the local broker that the client connects to in order to access the publish\/subscribe system.\nThe approach implements access control at the edge of the broker network and assumes that all brokers can be trusted to enforce the access control policies correctly.\nAny malicious, compromised or unauthorised broker is free to read and write any events that pass through it on their way from the publishers to the subscribers.\nThis might be acceptable in a relatively small system deployed inside a single organisation, but it is not appropriate in a multi-domain environment in which organisations share a common infrastructure.\nWe propose enforcing access control within the broker network by encrypting event content, and letting policy dictate controls over the necessary encryption keys.\nWith encrypted event content, only those brokers that are authorised to access the encryption keys are able to access the event content (i.e. 
publish, subscribe to, or filter).\nWe effectively move the enforcement of access control from the brokers to the encryption key managers.\nWe expect that access control would need to be enforced in a multi-domain publish\/subscribe system when multiple organisations form a shared publish\/subscribe system yet run multiple independent applications.\nAccess control might also be needed when a single organisation consists of multiple sub-domains that deliver confidential data over the organisation-wide publish\/subscribe system.\nBoth cases require access control because event delivery in a dynamic publish\/subscribe infrastructure based on a shared broker network may well lead to events being routed through unauthorised domains along their paths from publishers to subscribers.\nThere are two particular benefits to sharing the publish\/subscribe infrastructure, both of which relate to the broker network.\nFirst, sharing brokers will create a physically larger network that will provide greater geographic reach.\nSecond, increasing the inter-connectivity of brokers will allow the publish\/subscribe system to provide higher fault-tolerance.\nFigure 1 shows the multi-domain publish\/subscribe network we use as an example throughout this paper.\nIt is based on the United Kingdom Police Forces, and we show three particular sub-domains: Metropolitan Police Domain.\nThis domain contains a set of CCTV cameras that publish information about the movements of vehicles around the London area.\nWe have included Detective Smith as a subscriber in this domain.\nCongestion Charge Service Domain.\nThe charges that are levied on the vehicles that have passed through the London Congestion Charge zone each day are issued by systems within this domain.\nThe source numberplate recognition data comes from the cameras in the Metropolitan Police Domain.\nThe fact that the CCS are only authorised to read a subset of the vehicle event data will exercise some of the key features of the 
enforceable publish\/subscribe system access control presented in this paper.\nPITO Domain.\nThe Police Information Technology Organisation (PITO) is the centre from which Police data standards are managed.\nIt is the event type owner in this particular scenario.\nEncryption protects the confidentiality of events should they be transported through unauthorised domains.\nHowever encrypting whole events means unauthorised brokers cannot make efficient routing decisions.\nOur approach is to apply encryption to the individual attributes of events.\nThis way our multi-domain access control policy works at a finer granularity--publishers and subscribers may be authorised access to a subset of the available attributes.\nIn cases where non-encrypted events are used for routing, we can reduce the total number of events sent through the system without revealing the values of sensitive attributes.\nIn our example scenario, the Congestion Charge Service would only be authorised to read the numberplate field of vehicle sightings--the location attribute would not be decrypted.\nWe thus preserve the privacy of motorists while still allowing the CCS to do its job using the shared publish\/subscribe infrastructure.\nLet us assume that a Metropolitan Police Service detective is investigating a crime and she is interested in sightings of a specific vehicle.\nThe detective gets a court order that authorises her to subscribe to numberplate events of the specific numberplate related to her case.\nCurrent publish\/subscribe access control systems enforce security at the edge of the broker network where clients connect to it.\nHowever this approach will often not be acceptable in Internet-scale systems.\nWe propose enforcing security within the broker network as well as at the edges that event clients connect to, by encrypting event content.\nPublications will be encrypted with their event type specific encryption keys.\nBy controlling access to the encryption keys, we can control access 
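One concrete way to realise this attribute-level granularity is to hold an independent key per attribute, derived from the event type's key. The sketch below uses an HMAC-based derivation, which is an assumption for illustration rather than the paper's actual key management scheme (Section 4); the key value and attribute names are hypothetical.

```python
import hmac
import hashlib

def attribute_key(type_key: bytes, attribute_name: str) -> bytes:
    # Derive an independent key per attribute from the event type's key.
    # A broker holding only some attribute keys can decrypt (and hence
    # filter on) just those attributes.
    return hmac.new(type_key, attribute_name.encode("utf-8"),
                    hashlib.sha256).digest()

type_key = b"\x00" * 32  # stand-in for the key issued to authorised parties

# The Congestion Charge Service broker receives only the 'numberplate'
# key; the 'location' key is withheld, preserving motorists' privacy.
ccs_keys = {"numberplate": attribute_key(type_key, "numberplate")}

assert "location" not in ccs_keys
assert attribute_key(type_key, "numberplate") != attribute_key(type_key, "location")
```

Under this arrangement, withholding a single derived key is all that is needed to hide one attribute while leaving the others routable.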
to the event types.\nThe proposed approach allows event brokers to route events even when they have access only to a subset of the potential encryption keys.\nWe introduce decentralised publish\/subscribe systems and relevant cryptography in Section 2.\nIn Section 3 we present our model for encrypting event content on both the event and the attribute level.\nSection 4 discusses managing encryption keys in multi-domain publish\/subscribe systems.\nWe analytically evaluate the performance of our proposal in Section 5.\nFinally Section 6 discusses related work in securing publish\/subscribe systems and Section 7 provides concluding remarks.\n2.\nBACKGROUND\nIn this section we provide a brief introduction to decentralised publish\/subscribe systems.\nWe indicate our assumptions about multi-domain publish\/subscribe systems, and describe how these assumptions influence the developments we have made from our previously published work.\n2.1 Decentralised Publish\/Subscribe Systems\nA publish\/subscribe system includes publishers, subscribers, and an event service.\nPublishers publish events, subscribers subscribe to events of interest to them, and the event service is responsible for delivering published events to all subscribers whose interests match the given event.\nThe event service in a decentralised publish\/subscribe system is distributed over a number of broker nodes.\nTogether these brokers form a network that is responsible for maintaining the necessary routing paths from publishers to subscribers.\nClients (publishers and subscribers) connect to a local broker, which is fully trusted by the client.\nIn our discussion we refer to the client hosting brokers as publisher hosting brokers (PHB) or subscriber hosting brokers (SHB) depending on whether the connected client is a publisher or a subscriber, respectively.\nFigure 1: An overall view of our multi-domain publish\/subscribe deployment\nA local broker is usually either part of the same domain as the client, 
or it is owned by a service provider trusted by the client.\nA broker network can have a static topology (e.g. Siena [3] and Gryphon [14]) or a dynamic topology (e.g. Scribe [4] and Hermes [13]).\nOur proposed approach will work in both cases.\nA static topology enables the system administrator to build trusted domains and in that way improve the efficiency of routing by avoiding unnecessary encryptions (see Sect. 3.4), which is very difficult with a dynamic topology.\nOn the other hand, a dynamic topology allows the broker network to dynamically re-balance itself when brokers join or leave the network either in a controlled fashion or as a result of a network or node failure.\nOur work is based on the Hermes system.\nHermes is a content-based publish\/subscribe middleware that includes strong event type support.\nIn other words, each publication is an instance of a particular predefined event type.\nPublications are type checked at the local broker of each publisher.\nOur attribute-level encryption scheme assumes that events are typed.\nHermes uses a structured overlay network as a transport and therefore has a dynamic topology.\nA Hermes publication consists of an event type identifier and a set of attribute-value pairs.\nThe type identifier is the SHA-1 hash of the name of the event type.\nIt is used to route the publication through the event broker network.\nIt conveniently hides the type of the publication, i.e. brokers are prevented from seeing which events are flowing through them unless they are aware of the specific event type name and identifier.\n2.2 Secure Event Types\nPesonen et al. 
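The type-identifier scheme can be illustrated directly; the SHA-1 hashing of the type name follows the Hermes description above, while the event type name and attribute values below are invented for the example.

```python
import hashlib

def type_identifier(event_type_name: str) -> str:
    # The routing identifier is the SHA-1 hash of the event type name,
    # so brokers route on an opaque identifier without learning the
    # type name unless they already know it.
    return hashlib.sha1(event_type_name.encode("utf-8")).hexdigest()

# A publication pairs the type identifier with attribute-value pairs
# (the type name and attributes here are illustrative only).
publication = {
    "type_id": type_identifier("uk.police.vehicle_sighting"),
    "attributes": {"numberplate": "AB51 XYZ", "location": "camera-17"},
}

# Any broker that knows the type name derives the same identifier ...
assert publication["type_id"] == type_identifier("uk.police.vehicle_sighting")
# ... but the digest itself reveals nothing about unknown type names.
assert len(publication["type_id"]) == 40
```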
introduced secure event types in [11], which can have their integrity and authenticity confirmed by checking their digital signatures.\nA useful side effect of secure event types is their globally unique event type and attribute names.\nThese names can be referred to by access control policies.\nIn this paper we use the secure name of the event type or attribute to refer to the encryption key used to encrypt the event or attribute.\n2.3 Capability-Based Access Control\nPesonen et al. proposed a capability-based access control architecture for multi-domain publish\/subscribe systems in [12].\nThe model treats event types as resources that publishers, subscribers, and event brokers want to access.\nThe event type owner is responsible for managing access control for an event type by issuing Simple Public Key Infrastructure (SPKI) authorisation certificates that grant the holder access to the specified event type.\nFor example, authorised publishers will have been issued an authorisation certificate that specifies that the publisher, identified by public key, is authorised to publish instances of the event type specified in the certificate.\nWe leverage the above-mentioned access control mechanism in this paper by controlling access to encryption keys using the same authorisation certificates.\nThat is, a publisher who is authorised to publish a given event type is also authorised to access the encryption keys used to protect events of that type.\nWe discuss this in more detail in Sect. 4.\n2.4 Threat model\nThe goal of the proposed mechanism is to enforce access control for authorised participants in the system.\nIn our case the first level of access control is applied when the participant tries to join the publish\/subscribe network.\nUnauthorised event brokers are not allowed to join the broker network.\nSimilarly unauthorised event clients are not allowed to connect to an event broker.\nAll the connections in the broker network between event brokers and event 
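A minimal sketch of how capability checks can gate key release follows; SPKI certificate verification is reduced here to a set lookup, and all identifiers are hypothetical.

```python
class TypeKeyManager:
    """Toy key manager: an encryption key is released only to principals
    holding a capability for the event type. Real SPKI authorisation
    certificates carry signatures and delegation chains; here the check
    is reduced to a set lookup for illustration."""

    def __init__(self):
        self._keys = {}       # event type name -> key bytes
        self._grants = set()  # (principal, event type, action)

    def register_type(self, event_type, key):
        self._keys[event_type] = key

    def grant(self, principal, event_type, action):
        self._grants.add((principal, event_type, action))

    def request_key(self, principal, event_type, action):
        if (principal, event_type, action) not in self._grants:
            raise PermissionError("no capability for this event type")
        return self._keys[event_type]

mgr = TypeKeyManager()
mgr.register_type("vehicle_sighting", b"\x2a" * 32)
mgr.grant("ccs_broker", "vehicle_sighting", "subscribe")

# An authorised broker obtains the key; an unauthorised one is refused.
assert mgr.request_key("ccs_broker", "vehicle_sighting", "subscribe")
denied = False
try:
    mgr.request_key("rogue_broker", "vehicle_sighting", "subscribe")
except PermissionError:
    denied = True
assert denied
```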
clients utilise Transport Layer Security (TLS) [5] in order to prevent unauthorised access on the transport layer.\nThe architecture of the publish\/subscribe system means that event clients must connect to event brokers in order to be able to access the publish\/subscribe system.\nThus we assume that these clients are not a threat.\nThe event client relies completely on the local event broker for access to the broker network.\nTherefore the event client is unable to access any events without the assistance of the local broker.\nThe brokers on the other hand are able to analyse all events in the system that pass through them.\nA broker can analyse both the event traffic as well as the number and names of attributes that are populated in an event (in the case of attribute level encryption).\nThere are viable approaches to preventing traffic analysis by inserting random events into the event stream in order to produce a uniform traffic pattern.\nSimilarly attribute content can be padded to a standard length in order to avoid leaking information to the adversary.\nWhile traffic analysis is an important concern we have not addressed it further in this paper.\n3.\nENCRYPTING EVENT CONTENT\n3.1 Event Encryption\n3.2 Attribute Encryption\n3.3 Encrypting Subscriptions\n3.4 Avoiding Unnecessary Cryptographic Operations\n3.5 Implementation\n4.\nKEY MANAGEMENT\n4.1 Secure Group Communication\n4.2 Key Refreshing\n5.\nEVALUATION\n5.1 End-to-End Overhead\n5.2 Domain Internal Events\n5.3 Communication Overhead\n6.\nRELATED WORK\nWang et al. have categorised the various security issues that need to be addressed in publish\/subscribe systems in the future in [20].\nThe paper is a comprehensive overview of security issues in publish\/subscribe systems and as such tries to draw attention to the issues rather than providing solutions.\nBacon et al. 
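The key refreshing named in Section 4.2 can be sketched as a toy key manager that issues fresh key material on every membership change; this illustrates only the forward/backward security goal and is not the scalable one-way function tree scheme [8] that a real deployment would build on.

```python
import os

class GroupKeyManager:
    """Toy rekeying: every membership change replaces the group key with
    fresh random material distributed only to current members. Departed
    brokers cannot read future events (forward security) and new brokers
    cannot read past ones (backward security). Illustrative only."""

    def __init__(self):
        self.members = set()
        self.current_key = os.urandom(32)
        self.retired_keys = []

    def _refresh(self):
        self.retired_keys.append(self.current_key)
        self.current_key = os.urandom(32)

    def join(self, broker):
        self.members.add(broker)
        self._refresh()  # new member must not learn earlier keys

    def leave(self, broker):
        self.members.discard(broker)
        self._refresh()  # departed member must not learn later keys

group = GroupKeyManager()
group.join("broker-A")
group.join("broker-B")
key_while_b_present = group.current_key
group.leave("broker-B")

# Events published after the leave use a key broker-B never received.
assert group.current_key != key_while_b_present
```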
in [1] examine the use of role-based access control in multi-domain, distributed publish\/subscribe systems.\nTheir work is complementary to this paper: distributed RBAC is one potential policy formalism that might use the enforcement mechanisms we have presented.\nOpyrchal and Prakash address the problem of event confidentiality at the last link between the subscriber and the SHB in [10].\nThey correctly state that a secure group communication approach is infeasible in an environment like publish\/subscribe that has highly dynamic group memberships.\nAs a solution they propose a scheme utilising key caching and subscriber grouping in order to minimise the number of required encryptions when delivering a publication from a SHB to a set of matching subscribers.\nWe assume in our work that the SHB is powerful enough to manage a TLS-secured connection for each local subscriber.\nFigure 8: Hop Counts When Emulating Attribute Encryption\nBoth Srivatsa et al. [19] and Raiciu et al. [16] present mechanisms for protecting the confidentiality of messages in decentralised publish\/subscribe infrastructures.\nCompared to our work, both papers aim to provide the means for protecting the integrity and confidentiality of messages, whereas the goal of our work is to enforce access control inside the broker network.\nRaiciu et al. assume in their work that none of the brokers in the network are trusted; therefore all events are encrypted from publisher to subscriber and all matching is based on encrypted events.\nIn contrast, we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching.\nWe also assume that the publisher and subscriber hosting brokers are always trusted to access the publication.\nThe contributions of Srivatsa et al. and Raiciu et al. are complementary to the contributions in this paper.\nFinally, Fiege et al. 
address the related topic of event visibility in [6].\nWhile that work concentrated on using scopes as a mechanism for structuring large-scale event-based systems, the notion of event visibility does resonate with access control to some extent.\n7.\nCONCLUSIONS\nEvent content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish\/subscribe system.\nEncryption causes an overhead, but i) there may be no alternative when access control is required, and ii) the performance penalty can be lessened with implementation optimisations, such as passing cached plaintext content alongside encrypted content between brokers with identical security credentials.\nThis is particularly appropriate if broker-to-broker connections are secured by default so that wire-sniffing is not an issue.\nAttribute-level encryption can be implemented in order to enforce fine-grained access control policies.\nIn addition to providing attribute-level access control, attribute encryption enables partially authorised brokers to implement content-based routing based on the attributes that are accessible to them.\nOur experiments show that i) by caching plaintext and ciphertext content when possible, we are able to deliver performance comparable to plaintext events, and ii) attribute encryption within an event incurs far less overhead than defining separate event types for the attributes that need different levels of protection.\nIn environments comprising multiple domains, where event brokers have different security credentials, we have quantified how a trade-off can be made between performance and expressiveness.
[16] present mechanisms for protecting the confidentiality of messages in decentralised publish\/subscribe infrastructures.\nCompared to our work both papers aim to provide the means for protecting the integrity and confidentiality of messages whereas the goal for our work is to enforce access control inside the broker network.\nRaiciu et al. assume in their work that none of the brokers in the network are trusted and therefore all events are encrypted from publisher to subscriber and that all matching is based on encrypted events.\nIn contrast, we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching.\nWe also assume that the publisher and subscriber hosting brokers are always trusted to access the publication.\nFinally, Fiege et al. address the related topic of event visibility in [6].\nWhile the work concentrated on using scopes as mechanism for structuring large-scale event-based systems, the notion of event visibility does resonate with access control to some extent.\n7.\nCONCLUSIONS\nEvent content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish\/subscribe system.\nAttribute level encryption can be implemented in order to enforce fine-grained access control policies.\nIn addition to providing attribute-level access control, attribute encryption enables partially authorised brokers to implement contentbased routing based on the attributes that are accessible to them.","lvl-2":"Encryption-Enforced Access Control in Dynamic Multi-Domain Publish\/Subscribe Networks\nABSTRACT\nPublish\/subscribe systems provide an efficient, event-based, wide-area distributed communications infrastructure.\nLarge scale publish\/subscribe systems are likely to employ components of the event transport network owned by cooperating, but independent organisations.\nAs the number of participants in the 
network increases, security becomes an increasing concern. This paper extends previous work to present and evaluate a secure multi-domain publish/subscribe infrastructure that supports and enforces fine-grained access control over the individual attributes of event types. Key refresh allows us to ensure forward and backward security when event brokers join and leave the network. We demonstrate that the time and space overheads can be minimised by careful consideration of encryption techniques, and by the use of caching to decrease unnecessary decryptions. We show that our approach has a smaller overall communication overhead than existing approaches for achieving the same degree of control over security in publish/subscribe networks.

1. INTRODUCTION
Publish/subscribe is well suited as a communication mechanism for building Internet-scale distributed event-driven applications. Much of its capacity for scale in the number of participants comes from its decoupling of publishers and subscribers by placing an asynchronous event delivery service between them. In truly Internet-scale publish/subscribe systems, the event delivery service will include a large set of interconnected broker nodes spanning a wide geographic (and thus network) area.

However, publish/subscribe systems that span a wide geographic area are likely to also span multiple administrative domains, be they independent administrative domains inside a single organisation, multiple independent organisations, or a combination of the two. While the communication capabilities of publish/subscribe systems are well proved, spanning multiple administrative domains is likely to require addressing security considerations. As security and access control are almost the antithesis of decoupling, relatively little publish/subscribe research has focused on security so far. Our overall research aim is to develop Internet-scale publish/subscribe networks that provide secure, efficient delivery of events, fault-tolerance and self-healing in the delivery infrastructure, and a convenient event interface.

In [12] Pesonen et al. propose a multi-domain, capability-based access control architecture for publish/subscribe systems. The architecture provides a mechanism for authorising event clients to publish and subscribe to event types. The privileges of the client are checked by the local broker that the client connects to in order to access the publish/subscribe system. The approach implements access control at the edge of the broker network and assumes that all brokers can be trusted to enforce the access control policies correctly. Any malicious, compromised, or unauthorised broker is free to read and write any events that pass through it on their way from the publishers to the subscribers. This might be acceptable in a relatively small system deployed inside a single organisation, but it is not appropriate in a multi-domain environment in which organisations share a common infrastructure.

We propose enforcing access control within the broker network by encrypting event content, with policy dictating control over the necessary encryption keys. With encrypted event content, only those brokers that are authorised to access the encryption keys are able to access the event content (i.e. publish, subscribe to, or filter). We effectively move the enforcement of access control from the brokers to the encryption key managers.

We expect that access control would need to be enforced in a multi-domain publish/subscribe system when multiple organisations form a shared publish/subscribe system yet run multiple independent applications. Access control might also be needed when a single organisation consists of multiple sub-domains that deliver confidential data over the organisation-wide publish/subscribe system. Both cases require access control, because event delivery in a dynamic publish/subscribe infrastructure based on a shared broker network may well lead to events being routed through unauthorised domains along their paths from publishers to subscribers. There are two particular benefits to sharing the publish/subscribe infrastructure, both of which relate to the broker network. First, sharing brokers will create a physically larger network that will provide greater geographic reach. Second, increasing the inter-connectivity of brokers will allow the publish/subscribe system to provide higher fault-tolerance.

Figure 1 shows the multi-domain publish/subscribe network we use as an example throughout this paper. It is based on the United Kingdom Police Forces, and we show three particular sub-domains:

Metropolitan Police Domain. This domain contains a set of CCTV cameras that publish information about the movements of vehicles around the London area. We have included Detective Smith as a subscriber in this domain.

Congestion Charge Service Domain. The charges that are levied on the vehicles that have passed through the London Congestion Charge zone each day are issued by systems within this domain. The source numberplate recognition data comes from the cameras in the Metropolitan Police Domain. The fact that the CCS is authorised to read only a subset of the vehicle event data exercises some of the key features of the enforceable publish/subscribe access control presented in this paper.

PITO Domain. The Police Information Technology Organisation (PITO) is the centre from which Police data standards are managed. It is the event type owner in this particular scenario.

Encryption protects the confidentiality of events should they be transported through unauthorised domains. However, encrypting whole events means unauthorised brokers cannot make efficient routing decisions. Our approach is to apply encryption to the individual attributes of events. This way our multi-domain access control policy works at a finer granularity: publishers and subscribers may be authorised to access a subset of the available attributes. In cases where non-encrypted events are used for routing, we can reduce the total number of events sent through the system without revealing the values of sensitive attributes. In our example scenario, the Congestion Charge Service would only be authorised to read the numberplate field of vehicle sightings; the location attribute would not be decrypted. We thus preserve the privacy of motorists while still allowing the CCS to do its job using the shared publish/subscribe infrastructure. Let us assume that a Metropolitan Police Service detective is investigating a crime and is interested in sightings of a specific vehicle. The detective gets a court order that authorises her to subscribe to numberplate events for the specific numberplate related to her case.

Current publish/subscribe access control systems enforce security at the edge of the broker network, where clients connect to it. However, this approach will often not be acceptable in Internet-scale systems. We propose enforcing security within the broker network as well as at the edges that event clients connect to, by encrypting event content. Publications will be encrypted with their event-type-specific encryption keys. By controlling access to the encryption keys, we can control access
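The division of labour described here, in which key managers rather than brokers enforce the policy, can be sketched in a few lines. This is a toy, in-memory illustration only: the `KeyManager` class, the principal names, and the event type name are all hypothetical, and a real deployment would check SPKI authorisation certificates rather than a simple grant set.

```python
import os

class KeyManager:
    """Toy in-memory key manager: holds one symmetric key per event type
    and releases it only to principals holding a matching grant."""

    def __init__(self):
        self._keys = {}       # event type name -> 16-byte symmetric key
        self._grants = set()  # (principal, event_type) pairs

    def register_type(self, event_type):
        self._keys[event_type] = os.urandom(16)

    def grant(self, principal, event_type):
        self._grants.add((principal, event_type))

    def key_for(self, principal, event_type):
        # Access control is enforced here, at the key manager,
        # not inside the broker network: unauthorised principals
        # never see the key and so can never read event content.
        if (principal, event_type) not in self._grants:
            raise PermissionError(f"{principal} may not access {event_type}")
        return self._keys[event_type]

km = KeyManager()
km.register_type("Vehicle Sighting")
km.grant("broker.met.police.uk", "Vehicle Sighting")

key = km.key_for("broker.met.police.uk", "Vehicle Sighting")

denied = False
try:
    km.key_for("broker.ccs.gov.uk", "Vehicle Sighting")
except PermissionError:
    denied = True
```

An unauthorised broker can still forward opaque ciphertext by event type, but without the key it can neither read nor filter on the content.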
to the event types. The proposed approach allows event brokers to route events even when they have access only to a subset of the potential encryption keys.

We introduce decentralised publish/subscribe systems and relevant cryptography in Section 2. In Section 3 we present our model for encrypting event content at both the event and the attribute level. Section 4 discusses managing encryption keys in multi-domain publish/subscribe systems. We analytically evaluate the performance of our proposal in Section 5. Finally, Section 6 discusses related work in securing publish/subscribe systems and Section 7 provides concluding remarks.

2. BACKGROUND
In this section we provide a brief introduction to decentralised publish/subscribe systems. We indicate our assumptions about multi-domain publish/subscribe systems, and describe how these assumptions influence the developments we have made from our previously published work.

2.1 Decentralised Publish/Subscribe Systems
A publish/subscribe system includes publishers, subscribers, and an event service. Publishers publish events, subscribers subscribe to events of interest to them, and the event service is responsible for delivering published events to all subscribers whose interests match the given event. The event service in a decentralised publish/subscribe system is distributed over a number of broker nodes. Together these brokers form a network that is responsible for maintaining the necessary routing paths from publishers to subscribers. Clients (publishers and subscribers) connect to a local broker, which is fully trusted by the client. In our discussion we refer to the client hosting brokers as publisher hosting brokers (PHB) or subscriber hosting brokers (SHB), depending on whether the connected client is a publisher or a subscriber, respectively. A local broker is usually either part of the same domain as the client, or it is owned by a service provider trusted by the client.

Figure 1: An overall view of our multi-domain publish/subscribe deployment

A broker network can have a static topology (e.g. Siena [3] and Gryphon [14]) or a dynamic topology (e.g. Scribe [4] and Hermes [13]). Our proposed approach will work in both cases. A static topology enables the system administrator to build trusted domains and in that way improve the efficiency of routing by avoiding unnecessary encryptions (see Sect. 3.4), which is very difficult with a dynamic topology. On the other hand, a dynamic topology allows the broker network to dynamically re-balance itself when brokers join or leave the network, either in a controlled fashion or as a result of a network or node failure.

Our work is based on the Hermes system. Hermes is a content-based publish/subscribe middleware that includes strong event type support. In other words, each publication is an instance of a particular predefined event type. Publications are type checked at the local broker of each publisher. Our attribute-level encryption scheme assumes that events are typed. Hermes uses a structured overlay network as a transport and therefore has a dynamic topology. A Hermes publication consists of an event type identifier and a set of attribute-value pairs. The type identifier is the SHA-1 hash of the name of the event type. It is used to route the publication through the event broker network. It conveniently hides the type of the publication, i.e. brokers are prevented from seeing which events are flowing through them unless they are aware of the specific event type name and identifier.

2.2 Secure Event Types
Pesonen et al. introduced secure event types in [11], which can have their integrity and authenticity confirmed by checking their digital signatures. A useful side effect of secure event types is their globally unique event type and attribute names. These names can be referred to by access control policies. In this paper we use the secure name of the event type or attribute to refer to the encryption key used to encrypt the event or attribute.

2.3 Capability-Based Access Control
Pesonen et al. proposed a capability-based access control architecture for multi-domain publish/subscribe systems in [12]. The model treats event types as resources that publishers, subscribers, and event brokers want to access. The event type owner is responsible for managing access control for an event type by issuing Simple Public Key Infrastructure (SPKI) authorisation certificates that grant the holder access to the specified event type. For example, authorised publishers will have been issued an authorisation certificate that specifies that the publisher, identified by public key, is authorised to publish instances of the event type specified in the certificate. We leverage the above-mentioned access control mechanism in this paper by controlling access to encryption keys using the same authorisation certificates. That is, a publisher who is authorised to publish a given event type is also authorised to access the encryption keys used to protect events of that type. We discuss this in more detail in Sect. 4.

2.4 Threat model
The goal of the proposed mechanism is to enforce access control for authorised participants in the system. In our case the first level of access control is applied when a participant tries to join the publish/subscribe network. Unauthorised event brokers are not allowed to join the broker network. Similarly, unauthorised event clients are not allowed to connect to an event broker. All the connections in the broker network between event brokers and event clients utilise Transport Layer Security (TLS) [5] in order to prevent unauthorised access on the transport layer.

The architecture of the publish/subscribe system means that event clients must connect to event brokers in order to be able to access the publish/subscribe system. The event client relies completely on the local event broker for access to the broker network, and is therefore unable to access any events without the assistance of the local broker; thus we assume that these clients are not a threat. The brokers, on the other hand, are able to analyse all events in the system that pass through them. A broker can analyse both the event traffic and the number and names of attributes that are populated in an event (in the case of attribute-level encryption). There are viable approaches to preventing traffic analysis by inserting random events into the event stream in order to produce a uniform traffic pattern. Similarly, attribute content can be padded to a standard length in order to avoid leaking information to the adversary. While traffic analysis is an important concern, we have not addressed it further in this paper.

3. ENCRYPTING EVENT CONTENT
We propose enforcing access control in a decentralised broker network by encrypting the contents of published events and controlling access to the encryption keys. Effectively we move the responsibility for access control from the broker network to the key managers. It is assumed that all clients have access to a broker that they can trust and that is authorised to access the event content required by the client. This allows us to implement the event content encryption within the broker network without involving the clients. By delegating the encryption tasks to the brokers, we lower the number of nodes required to have access to a given encryption key. The benefits are three-fold: i) fewer nodes handle the confidential encryption key so there is a smaller chance of the key
being disclosed; ii) key refreshes involve fewer nodes, which means that the key management algorithm imposes smaller communication and processing overheads on the publish/subscribe system; and iii) the local broker decrypts an event once and delivers it to all subscribers, instead of each subscriber having to decrypt the same event. (The encryption keys are changed over time in response to brokers joining or leaving the network, and periodically in order to reduce the amount of time any single key is used; this is discussed in Sect. 4.2.)

Delegating encryption tasks to the local broker is appropriate, because encryption is a middleware feature used to enforce access control within the middleware system. If applications need to handle encrypted data in the application layer, they are free to publish encrypted data over the publish/subscribe system.

We can implement encryption either at the event level or the attribute level. Event encryption is simpler, requires fewer keys and fewer independent cryptographic operations, and is thus usually faster. Attribute encryption enables access control at the attribute level, which gives us a more expressive and powerful access control mechanism, while usually incurring a larger performance penalty. In this section we discuss encrypting event content at both the event level and the attribute level; avoiding leaking information to unauthorised brokers by encrypting subscription filters; avoiding unnecessary encryptions between authorised brokers; and finally, how event content encryption was implemented in our prototype. Note that since no publish/subscribe client is ever given access to encryption keys, any encryption performed by the brokers is necessarily completely transparent to all clients.

3.1 Event Encryption
In event encryption all the event attributes are encrypted as a single block of plaintext. The event type identifier is left intact (i.e. in plaintext) in order to facilitate event routing in the broker network. The globally unique event type identifier specifies the encryption key used to encrypt the event content. Each event type in the system has its own individual encryption key. Keys are refreshed, as discussed in Sect. 4.2. While in transit the event consists of a tuple containing the type identifier, a publication timestamp, the ciphertext, and a message authentication tag: <type identifier, timestamp, ciphertext, tag>.

Event brokers that are authorised to access the event, and thus have access to the encryption key, can decrypt the event and implement content-based routing. Event brokers that do not have access to the encryption key are forced to route the event based only on its type. That is, they are unable to determine when an event need not be transmitted down a given outgoing link. Event encryption results in one encryption at the publisher hosting broker, and one decryption at each filtering intermediate broker and subscriber hosting broker that the event passes through, regardless of the number of attributes. This results in a significant performance advantage compared to attribute encryption.

3.2 Attribute Encryption
In attribute encryption each attribute value in an event is encrypted separately with its own encryption key. The encryption key is identified by the attribute's globally unique identifier (the globally unique event type identifier defines a namespace inside which the attribute identifier is a fully qualified name). The event type identifier is left intact to facilitate event routing for unauthorised brokers. The attribute identifiers are also left intact to allow authorised brokers to decrypt the attribute values with the correct keys. Brokers that are authorised to access some of the attributes in an event can implement content-based routing over the attributes that are accessible to them. An attribute-encrypted event in transit consists of the event type identifier, a publication timestamp, and a set of attribute tuples: <type identifier, timestamp, {attribute tuples}>. Attribute tuples consist of an attribute identifier, ciphertext, and a message authentication tag: <attribute identifier, ciphertext, tag>. The attribute identifier is the SHA-1 hash of the attribute name used in the event type definition. Using the attribute identifier in the published event instead of the attribute name prevents unauthorised parties from learning which attributes are included in the publication.

Compared with event encryption, attribute encryption usually results in larger processing overheads, because each attribute is encrypted separately. In the encryption process, the initialisation of the encryption algorithm takes a significant portion of the total running time of the algorithm. Once the algorithm is initialised, increasing the amount of data to be encrypted does not affect the running time very much. This disparity is emphasised in attribute encryption, where an encryption algorithm must be initialised for each attribute separately, and the amount of data encrypted is relatively small. As a result, attribute encryption incurs larger processing overheads when compared with event encryption, as can be clearly seen from the performance results in Sect. 5.

The advantage of attribute encryption is that the type owner is able to control access to the event type at the attribute level. The event type owner can therefore allow clients to have different levels of access to the same event type. Also, attribute-level encryption enables content-based routing in cases where an intermediate broker has access only to some of the attributes of the event, thus reducing the overall impact of event delivery on the broker network. Therefore the choice between event and attribute encryption is a trade-off between expressiveness and performance, and depends on the requirements of the distributed application.

The expressiveness provided by attribute encryption can be emulated by introducing a new event type for each
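The attribute-level wire format described in Sect. 3.2 can be sketched as follows. This is an illustrative stand-in, not the prototype's code: the `seal` function merely fakes a same-length ciphertext while computing a real HMAC tag (the prototype uses EAX, Sect. 3.5), and qualifying attribute names by the event type name is our assumption about how the namespace is realised.

```python
import hashlib
import hmac
import os
import time

def type_id(event_type_name):
    # Hermes-style type identifier: the SHA-1 hash of the event type name.
    return hashlib.sha1(event_type_name.encode()).hexdigest()

def attr_id(event_type_name, attr_name):
    # Attribute identifiers are fully qualified names inside the event
    # type's namespace; the "type/attr" qualification is an assumption.
    return hashlib.sha1(f"{event_type_name}/{attr_name}".encode()).hexdigest()

def seal(key, plaintext):
    # Placeholder for per-attribute AEAD encryption (NOT real EAX):
    # the "ciphertext" is fake, but the HMAC tag computation is real.
    ciphertext = plaintext[::-1]
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

def publish(event_type_name, attrs, keys):
    tuples = []
    for name, value in attrs.items():
        aid = attr_id(event_type_name, name)
        ciphertext, tag = seal(keys[aid], value.encode())
        tuples.append((aid, ciphertext, tag))
    # Wire format: <type identifier, timestamp, {<attr id, ciphertext, tag>}>
    return (type_id(event_type_name), int(time.time() * 1000), tuples)

etype = "Numberplate Event"  # hypothetical event type name
keys = {attr_id(etype, a): os.urandom(16) for a in ("numberplate", "location")}
event = publish(etype, {"numberplate": "LD51 ABC", "location": "Camera 7"}, keys)
```

A broker holding only the numberplate key would decrypt and filter on that one tuple, routing the location tuple onwards opaquely.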
group of subscribers with the same authorisation. The publisher would then publish an instance of each of these types instead of publishing just a single combined event. For example, in our London police network, the congestion control cameras would have to publish one event for the CCS and another for the detective. This approach could become difficult to manage if the attributes have a variety of security properties, since a large number of event types would be required and policies and subscriptions may change dynamically. It also creates a large number of extra events that must be routed through the network, as is shown in Sect. 5.3.

3.3 Encrypting Subscriptions
In order to fully protect the confidentiality of event content we must also encrypt subscriptions. Encrypted subscriptions guarantee: i) that only authorised brokers are able to submit subscriptions to the broker network, and ii) that unauthorised brokers do not gain information about event content by monitoring which subscriptions a given event matches. For example, in the first case an unauthorised broker could create subscriptions with appropriately chosen filters, route them towards the root of the event dissemination tree, and monitor which events were delivered to it as matching the subscription. The fact that the event matched the subscription would leak information to the broker about the event content even if the event was still encrypted. In the second case, even if an unauthorised broker was unable to create subscriptions itself, it could still look at subscriptions that were routed through it, take note of the filters on those subscriptions, and monitor which events are delivered to it by upstream brokers as matching the subscription filters. This would again reveal information about the event content to the unauthorised broker.

In the case of encrypting complete events, we also encrypt the complete subscription filter. The event type identifier in the subscription must be left intact to allow brokers to route events based on their topic when they are not authorised to access the filter. In such cases the unauthorised broker is required to assume that events of that type match all filter expressions. With attribute encryption, each attribute filter is encrypted individually, much as when encrypting a publication. In addition to the event type identifier, the attribute identifiers are also left intact to allow authorised brokers to decrypt those filters that they have access to, and to route the event based on its matching the decrypted filters.

3.4 Avoiding Unnecessary Cryptographic Operations
Encrypting the event content is not necessary if the current broker and the next broker down the event dissemination tree have the same credentials with respect to the event type at hand. For example, one can assume that all brokers inside an organisation share the same credentials; therefore, as long as the next broker is a member of the same domain, the event can be routed to it in plaintext. With attribute encryption it is possible that the neighbouring broker is authorised to access only a subset of the decrypted attributes, in which case the attributes that the broker is not authorised to access are passed to it encrypted.

In order to know when it is safe to pass the event in plaintext form, the brokers exchange credentials as part of a handshake when they connect to each other. In cases where the brokers are able to verify each other's credentials, they add them to the routing table for future reference. If a broker acquires new credentials after the initial handshake, it presents these new credentials to its neighbours while in session. Regardless of its neighbouring brokers, the PHB will always encrypt the event content, because it is cheaper to encrypt the event once at the root of the event dissemination tree.

In Hermes the rendezvous node for each event type is selected uniformly at random (the event type name is hashed with the SHA-1
hash algorithm to produce the event type\nFigure 2: Node addressing is evenly distributed across the network, thus rendezvous nodes may lie outside the domain that owns an event type Figure 3: Caching decrypted data to increase effi\nciency when delivering to peers with equivalent security privileges identifier, then the identifier is used to select the rendezvous node in the structured overlay network).\nTherefore it is probable that the rendezvous node will reside outside the current domain.\nThis situation is illustrated in the event dissemination tree in Fig. 2.\nSo even with domain internal applications, where the event can be routed from the publisher to all subscribers in plaintext form, the event content will in most cases have to be encrypted for it to be routed to the rendezvous node.\nTo avoid unnecessary decryptions, we attach a plaintext content cache to encrypted events.\nA broker fills the cache with content that it has decrypted, for example, in order to filter on the content.\nThe cache is accessed by the broker when it delivers an event to a local subscriber after first seeing if the event matches the subscription filter, but the broker also sends the cache to the next broker with the encrypted event.\nThe next broker can look the attribute up from the cache instead of having to decrypt it.\nIf the event is being sent to an unauthorised broker, the cache will be discarded before the event is sent.\nObviously sending the cache with the encrypted event will add to the communication cost, but this is outweighed by the saving in encryption\/decryption processing.\nIn Fig. 
3 we see two separate cached plaintext streams accompanying an event, depending on the inter-broker relationships in two different domains.\nWe show in Sect. 5.2 that sending encrypted messages with a full plaintext cache incurs almost no overhead compared to sending plaintext messages.\n3.5 Implementation\nIn our implementation we have used the EAX mode [2] of operation when encrypting events, attributes, and subscription filters.\nEAX is a mode of operation for block ciphers, i.e. an Authenticated Encryption with Associated Data (AEAD) algorithm, which simultaneously provides both data confidentiality and integrity protection.\nThe algorithm implements a two-pass scheme: during the first pass the plaintext is encrypted, and on the second pass a message authentication code (MAC) is generated for the encrypted data.\nThe EAX mode is compatible with any block cipher.\nWe decided to use the Advanced Encryption Standard (AES) [9] algorithm in our implementation, because of its standard status and the fact that it has undergone thorough cryptanalysis during its existence with no serious vulnerabilities found thus far.\nIn addition to providing both confidentiality and integrity protection, the EAX mode uses the underlying block cipher in counter mode (CTR mode) [21].\nA block cipher in counter mode is used to produce a stream of key bits that are then XORed with the plaintext.\nEffectively, CTR mode transforms a block cipher into a stream cipher.\nThe advantage of stream ciphers is that the ciphertext is the same length as the plaintext, whereas with block ciphers the plaintext must be padded to a multiple of the block cipher's block length (e.g.
the AES block size is 128 bits).\nAvoiding padding is very important in attribute encryption, because the padding might increase the size of the attribute disproportionately.\nFor example, a single integer might be 32 bits in length, which would be padded to 128 bits if we used a block cipher directly.\nWith event encryption the message expansion is less significant, since the length of padding required to reach the next 16-byte multiple will probably be a small proportion of the overall plaintext length.\nIn encryption mode the EAX algorithm takes as input a nonce (a number used once), an encryption key and the plaintext, and it returns the ciphertext and an authentication tag.\nIn decryption mode the algorithm takes as input the encryption key, the ciphertext and the authentication tag, and it returns either the plaintext, or an error if the authentication check fails.\nThe nonce is expanded to the block length of the underlying block cipher by passing it through an OMAC construct (see [7]).\nIt is important that particular nonce values are not reused, as otherwise the block cipher in CTR mode would produce an identical key stream.\nIn our implementation we used the PHB-defined event timestamp (a 64-bit value counting the milliseconds since January 1, 1970 UTC) appended with the PHB's identity (i.e.
public key) as the nonce.\nThe broker is responsible for ensuring that the timestamps increase monotonically.\nThe authentication tag is appended to the produced ciphertext to create a two-tuple.\nWith event encryption a single tag is created for the encrypted event.\nWith attribute encryption each attribute is encrypted and authenticated separately, and each has its own individual tag.\nThe tag length is configurable in EAX without restrictions, which allows the user to make a trade-off between the authenticity guarantees provided by EAX and the added communication overhead.\nWe used a tag length of 16 bytes in our implementation, but one could make the tag length a publisher\/subscriber-defined parameter for each publication\/subscription, or include it in the event type definition to make it a type-specific parameter.\nEAX also supports including unencrypted associated data in the tag calculation.\nThe integrity of this data is protected, but it is still readable by everyone.\nThis feature could be used with event encryption in cases where some of the event content is public and thus would be useful for content-based routing.\nThe integrity of the data would still be protected against changes, but unauthorised brokers would be able to apply filters.\nWe have included the event type identifier as associated data in order to protect its integrity.\nOther AEAD algorithms include the offset codebook mode (OCB) [17] and the counter with CBC-MAC mode (CCM) [22].\nIn contrast to the EAX mode, the OCB mode requires only one pass over the plaintext, which makes it roughly twice as fast as EAX.\nUnfortunately the OCB mode has a patent application in place in the USA, which restricts its use.\nThe CCM mode is the predecessor of the EAX mode.\nIt was developed in order to provide a free alternative to OCB.\nEAX was developed later to address some issues with CCM [18].\nLike EAX, CCM is a two-pass mode.\n4.\nKEY MANAGEMENT\nIn both encryption approaches
the encrypted event content has a globally unique identifier (i.e. the event type or the attribute identifier).\nThat identifier is used to determine the encryption key to use when encrypting or decrypting the content.\nEach event type, in event encryption, and attribute, in attribute encryption, has its own individual encryption key.\nBy controlling access to the encryption key we effectively control access to the encrypted event content.\nIn order to control access to the encryption keys we form a key group of brokers for each individual encryption key.\nThe key group is used to refresh the key when necessary and to deliver the new key to all current members of the key group.\nThe key group manager is responsible for verifying that a new member requesting to join the key group is authorised to do so.\nTherefore the key group manager must be trusted by the type owner to enforce the access control policy.\nWe assume that the key group manager is either a trusted third party or alternatively a member of the type owner's domain.\nIn [12] Pesonen et al. proposed a capability-based access control architecture for multi-domain publish\/subscribe systems.\nThe approach uses capabilities to decentralise the access control policy amongst the publish\/subscribe nodes (i.e. clients and brokers): each node holds a set of capabilities that define the authority granted to that node.\nAuthority to access a given event type is granted by the owner of that type issuing a capability to a node.\nThe capability defines the event type, the action, and the attributes that the node is authorised to access.\nFigure 4: The steps involved for a broker to be successful in joining a key group\nFor example, a tuple would authorise the owner to subscribe to Numberplate events with access to all attributes in the published events.\nThe sequence of events required for a broker to successfully join a key group is shown in Fig.
4.\nBoth the client hosting broker and the client must be authorised to make the client's request.\nThat is, if the client makes a subscription request for Numberplate events, both the client and the local broker must be authorised to subscribe to Numberplate events.\nThis is because, from the perspective of the broker network, the local broker acts as a proxy for the client.\nWe use the same capabilities to authorise membership in a key group that are used to authorise publish\/subscribe requests.\nNot doing so could lead to the inconsistent situation where an SHB is authorised to make a subscription on behalf of its clients, but is not able to decrypt incoming event content for them.\nIn the Numberplate example above, the local broker holding the above capability is authorised to join the Numberplate key group as well as the key groups for all the attributes in the Numberplate event type.\n4.1 Secure Group Communication\nEvent content encryption in a decentralised multi-domain publish\/subscribe system can be seen as a sub-category of secure group communication.\nIn both cases the key management system must scale well with the number of clients, clients might be spread over large geographic areas, there might be high rates of churn in group membership, and all members must be synchronised with each other in time in order to use the same encryption key at the same time.\nThere are a number of scalable key management protocols for secure group communication [15].\nWe have implemented the One-Way Function Tree (OFT) [8] protocol as a proof of concept.\nWe chose OFT because of its relative simplicity and good performance.\nOur implementation uses the same structured overlay network used by the broker network as a transport.\nThe OFT protocol is based on a binary tree where the participants are at the leaves of the tree.\nIt scales as log2 n in processing and communication costs, as well as in the size of the state stored at each participant, which we
have verified in our simulations.\n4.2 Key Refreshing\nTraditionally in group key management schemes the encryption key is refreshed when a new member joins the group, an existing member leaves the group, or a timer expires.\nRefreshing the key when a new member joins provides backward secrecy, i.e. the new member is prevented from accessing old messages.\nSimilarly, refreshing the key when an existing member leaves provides forward secrecy, i.e. the old member is prevented from accessing future messages.\nTimer-triggered refreshes are issued periodically in order to limit the damage caused by the current key being compromised.\nEven though the state-of-the-art key management protocols are efficient, refreshing the key unnecessarily introduces extra traffic and processing amongst the key group members.\nIn our case key group membership is based on the broker holding a capability that authorises it to join the key group.\nThe capability has a set of validity conditions that in their simplest form define a time period when the certificate is valid, and in more complex cases involve on-line checks back towards the issuer.\nIn order to avoid unnecessary key refreshes the key manager looks at the certificate validity conditions of the joining or leaving member.\nIn the case of a joining member, if the manager can ascertain that the certificate was valid at the time of the previous key refresh, a new key refresh can be avoided.\nSimilarly, instead of refreshing the key immediately when a member leaves the key group, the key manager can cache their credentials and refresh the key only when the credentials expire.\nThese situations are both illustrated in Fig. 5.\nIt can be assumed that the credentials granted to brokers are relatively static, i.e.
once a domain is authorised to access an event type, the authority will be delegated to all brokers of that domain, and they will retain the authority for the foreseeable future.\nMore fine-grained and dynamic access control would be implemented at the edge of the broker network, between the clients and the client hosting brokers.\nWhen an encryption key is refreshed the new key is tagged with a timestamp.\nThe encryption key to use for a given event is selected based on the event's publication timestamp.\nThe old keys will be kept for a reasonable amount of time in order to allow for some clock drift.\nSetting this value is part of the key management protocol, although exactly how long this time should be will depend on the nature of the application and possibly the size of the network.\nIt can be configured independently per key group if necessary.\n5.\nEVALUATION\nIn order to evaluate the performance of event content encryption we have implemented both encryption approaches running over our implementation of the Hermes publish\/subscribe middleware.\nThe implementation supports three modes: plaintext content, event encryption, and attribute encryption, in a single publish\/subscribe system.\nWe ran three performance tests in a discrete event simulator.\nThe simulator was run on an Intel P4 3.2 GHz workstation with 1GB of main memory.\nWe decided to run the tests on an event simulator instead of an actual deployed system in order to be able to measure the aggregate time it takes to handle all messages in the system.\nThe following sections describe the specific test setups and the results in more detail.\n5.1 End-to-End Overhead\nThe end-to-end overhead test shows how much the overall message throughput of the simulator was affected by event content encryption.\nWe formed a broker network with two brokers, attached a publisher to one of them and a subscriber to the other one.\nThe subscriber subscribed to the advertised event type without any filters, i.e.
each publication matched the subscriber's subscription and thus was delivered to the subscriber.\nThe test measures the combined time it takes to publish and deliver 100,000 events.\nIf the content is encrypted this includes both encrypting the content at the PHB and decrypting it at the SHB.\nIn the test the number of attributes in the event type is increased from 1 to 25 (the x-axis).\nEach attribute is set to a 30-character string.\nFor each number of attributes in the event type the publisher publishes 100,000 events, and the elapsed time is measured to derive the message throughput.\nThe test was repeated five times for each number of attributes and we use the average of all iterations in the graph, but the results were highly consistent so the standard deviation is not shown.\nThe same tests were run with no content encryption, event encryption, and attribute encryption.\nAs can be seen in Fig. 6, event content encryption introduces a large overhead compared to not using encryption.\nThe throughput when using attribute encryption with an event type with one attribute is 46% of the throughput achieved when events are sent in plaintext.\nWhen the number of attributes increases the performance gap widens as well: with ten attributes the performance with attribute encryption has decreased to 11.7% of plaintext performance.\nEvent encryption fares better, because of fewer encryption operations.\nThe increase in the amount of encrypted data does not affect the performance as much as the number of individual encryption operations does.\nThe difference in performance between event encryption and attribute encryption with only one attribute is caused by the Java object serialisation mechanism: in the event encryption case the whole attribute structure is serialised, which results in more objects than serialising a single attribute value.\nA more efficient implementation would provide its own marshalling mechanism.\nNote that the EAX implementation we use runs the nonce
(i.e. initialisation vector) through an OMAC construct to increase its randomness.\nSince the nonce is not required to be kept secret (just unique), there is a potential time\/space trade-off we have not yet investigated in attaching extra nonce attributes that have already had this OMAC construct applied to them.\n5.2 Domain Internal Events\nWe explained in Sect.\n3.4 that event content decryption and encryption can be avoided if both brokers are authorised to access the event content.\nThis test was designed to show that the use of the encrypted event content mechanism between two authorised brokers incurs only a small performance overhead.\nIn this test we again form a broker network with two brokers.\nFigure 5: How the key refresh schedule is affected by brokers joining and leaving key groups\nFigure 6: Throughput of Events in a Simulator\nBoth brokers are configured with the same credentials.\nThe publisher is attached to one of the brokers and the subscriber to the other, and again the subscriber does not specify any filters in its subscription.\nThe publisher publishes 100,000 events and the test measures the elapsed time in order to derive the system's message throughput.\nThe event content is encrypted outside the timing measurement, i.e. the encryption cost is not included in the measurements.\nThe goal is to model an environment where a broker has received a message from another authorised broker, and it routes the event to a third authorised broker.\nIn this scenario the middle broker does not need to decrypt nor encrypt the event content.\nAs shown in Fig. 
7, the elapsed time was measured as the number of attributes in the published event was increased from 1 to 25.\nThe attribute values in each case are 30-character strings.\nEach test is repeated five times, and we use the average of all iterations in the graph.\nThe same test was then repeated with no encryption, event encryption and attribute encryption turned on.\nThe encrypted modes follow each other very closely.\nPredictably, the plaintext mode performs a little better for all attribute counts.\nThe difference can be explained partially by the encrypted events being larger in size, because they include both the plaintext and the encrypted content in this test.\nThe difference in performance is 3.7% with one attribute and 2.5% with 25 attributes.\nWe believe that the roughness of the graphs can be explained by the Java garbage collector interfering with the simulation.\nThe fact that all three graphs show the same irregularities supports this theory.\nFigure 7: Throughput of Domain Internal Events\n5.3 Communication Overhead\nThrough the definition of multiple event types, it is possible to emulate the expressiveness of attribute encryption using only event content encryption.\nThe last test we ran was to show the communication overhead caused by this emulation technique, compared to using real attribute encryption.\nIn the test we form a broker network of 2000 brokers.\nWe attach one publisher to one of the brokers, and an increasing number of subscribers to the remaining brokers.\nEach subscriber simulates a group of subscribers that all have the same access rights to the published event.\nEach subscriber group has its own event type in the test.\nThe outcome of this test is shown in Fig.
8.\nThe number of subscriber groups is increased from 1 to 50 (the x-axis).\nFor each n subscriber groups the publisher publishes one event to represent the use of attribute encryption, and n events, one for each subscriber group, to represent the emulation with event encryption.\nWe count the number of hops each publication makes through the broker network (y-axis).\nNote that Fig. 8 shows workloads beyond what we would expect in common usage, in which many event types are likely to contain fewer than ten attributes.\nThe subscriber groups used in this test represent disjoint permission sets over such event attributes.\nThe number of these sets can be determined from the particular access control policy in use, but will be a value less than or equal to the factorial of the number of attributes in a given event type.\nThe graphs indicate that attribute encryption performs better than event encryption even for small numbers of subscriber groups.\nIndeed, with only two subscriber groups (e.g. the case with Numberplate events) the hop count increases from 7.2 hops for attribute encryption to 16.6 hops for event encryption.\nWith 10 subscriber groups the corresponding numbers are 24.2 and 251.0, i.e. an order of magnitude difference.\n6.\nRELATED WORK\nWang et al. [20] have categorised the various security issues that will need to be addressed in future publish\/subscribe systems.\nThe paper is a comprehensive overview of security issues in publish\/subscribe systems and, as such, aims to draw attention to the issues rather than to provide solutions.\nBacon et al.
in [1] examine the use of role-based access control in multi-domain, distributed publish\/subscribe systems.\nTheir work is complementary to this paper: distributed RBAC is one potential policy formalism that might use the enforcement mechanisms we have presented.\nOpyrchal and Prakash address the problem of event confidentiality at the last link between the subscriber and the SHB in [10].\nThey correctly state that a secure group communication approach is infeasible in an environment like publish\/subscribe that has highly dynamic group memberships.\nAs a solution they propose a scheme utilising key caching and subscriber grouping in order to minimise the number of required encryptions when delivering a publication from an SHB to a set of matching subscribers.\nWe assume in our work that the SHB is powerful enough to manage a TLS-secured connection for each local subscriber.\nFigure 8: Hop Counts When Emulating Attribute Encryption\nBoth Srivatsa et al. [19] and Raiciu et al. [16] present mechanisms for protecting the confidentiality of messages in decentralised publish\/subscribe infrastructures.\nBoth papers aim to protect the integrity and confidentiality of messages, whereas the goal of our work is to enforce access control inside the broker network.\nRaiciu et al. assume in their work that none of the brokers in the network are trusted; therefore all events are encrypted from publisher to subscriber, and all matching is based on encrypted events.\nIn contrast, we assume that some of the brokers on the path of a publication are trusted to access that publication and are therefore able to implement event matching.\nWe also assume that the publisher and subscriber hosting brokers are always trusted to access the publication.\nThe contributions of Srivatsa et al. and Raiciu et al. are complementary to the contributions in this paper.\nFinally, Fiege et al.
address the related topic of event visibility in [6].\nWhile their work concentrated on using scopes as a mechanism for structuring large-scale event-based systems, the notion of event visibility does resonate with access control to some extent.\n7.\nCONCLUSIONS\nEvent content encryption can be used to enforce an access control policy while events are in transit in the broker network of a multi-domain publish\/subscribe system.\nEncryption causes an overhead, but i) there may be no alternative when access control is required, and ii) the performance penalty can be lessened with implementation optimisations, such as passing cached plaintext content alongside encrypted content between brokers with identical security credentials.\nThis is particularly appropriate if broker-to-broker connections are secured by default so that wire-sniffing is not an issue.\nAttribute-level encryption can be implemented in order to enforce fine-grained access control policies.\nIn addition to providing attribute-level access control, attribute encryption enables partially authorised brokers to implement content-based routing based on the attributes that are accessible to them.\nOur experiments show that i) by caching plaintext and ciphertext content when possible, we are able to deliver performance comparable to plaintext events, and ii) attribute encryption within an event incurs far less overhead than defining separate event types for the attributes that need different levels of protection.\nIn environments comprising multiple domains, where event brokers have different security credentials, we have quantified how a trade-off can be made between performance and expressiveness.","keyphrases":["encrypt","multi-domain","overal commun overhead","secur publish\/subscrib system","distribut access control","multipl administr domain","attribut encrypt","distribut system-distribut applic","perform","congest charg servic","administr domain"],"prmu":["P","P","P","M","R","U","R","M","U","U","U"]}
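The counter-mode keystream construction and the timestamp-plus-identity nonce described in Sect. 3.5 can be sketched as follows. This is a minimal, dependency-free illustration only: HMAC-SHA256 stands in for the AES block-level primitive (the paper's implementation uses AES inside EAX, which additionally produces an authentication tag), and the key, timestamp, and PHB identity values are hypothetical placeholders.

```python
import hashlib
import hmac
import struct

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Concatenate PRF(key, nonce || counter) blocks, truncated to `length`."""
    out = b""
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + struct.pack(">Q", counter),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; decryption is the same operation."""
    ks = ctr_keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

def make_nonce(timestamp_ms: int, phb_identity: bytes) -> bytes:
    """Nonce as in Sect. 3.5: 64-bit millisecond timestamp || PHB identity."""
    return struct.pack(">Q", timestamp_ms) + phb_identity

key = b"\x01" * 32                                        # hypothetical key group key
nonce = make_nonce(1_200_000_000_000, b"phb-public-key")  # placeholder identity
attr = b"ABC123"                                          # e.g. a Numberplate attribute value
ct = ctr_encrypt(key, nonce, attr)
assert len(ct) == len(attr)                 # stream cipher: no padding expansion
assert ctr_encrypt(key, nonce, ct) == attr  # XOR keystream is self-inverse
```

Because CTR mode XORs a keystream with the plaintext, the ciphertext has exactly the plaintext's length, which is the property the paper relies on to keep attribute encryption from inflating small attributes; it also makes clear why a nonce must never be reused under the same key, since a repeated nonce reproduces the identical keystream.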
{"id":"J-13","title":"On The Complexity of Combinatorial Auctions: Structured Item Graphs and Hypertree Decompositions","abstract":"The winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices. While this problem is in general NP-hard, it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth (called structured item graphs). Formally, an item graph is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph. Note that many item graphs might be associated with a given combinatorial auction, depending on the edges selected for guaranteeing the connectedness. In fact, the tractability of determining whether a structured item graph of a fixed treewidth exists (and if so, computing one) was left as a crucial open problem. In this paper, we solve this problem by proving that the existence of a structured item graph is computationally intractable, even for treewidth 3. Motivated by this bad news, we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions. We show that the notion of hypertree decomposition, a recently introduced measure of hypergraph cyclicity, turns out to be most useful here. Indeed, we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with (dual) hypergraphs having bounded hypertree width. 
Even more surprisingly, we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph.","lvl-1":"On The Complexity of Combinatorial Auctions: Structured Item Graphs and Hypertree Decompositions [Extended Abstract] Georg Gottlob Computing Laboratory Oxford University OX1 3QD Oxford, UK georg.gottlob@comlab.ox.ac.uk Gianluigi Greco Dipartimento di Matematica University of Calabria I-87030 Rende, Italy ggreco@mat.unical.it ABSTRACT The winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices.\nWhile this problem is in general NP-hard, it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth (called structured item graphs).\nFormally, an item graph is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph.\nNote that many item graphs might be associated with a given combinatorial auction, depending on the edges selected for guaranteeing the connectedness.\nIn fact, the tractability of determining whether a structured item graph of a fixed treewidth exists (and if so, computing one) was left as a crucial open problem.\nIn this paper, we solve this problem by proving that the existence of a structured item graph is computationally intractable, even for treewidth 3.\nMotivated by this bad news, we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions.\nWe show that the notion of hypertree decomposition, a recently introduced measure of hypergraph cyclicity, turns out to be most useful here.\nIndeed, we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented
with (dual) hypergraphs having bounded hypertree width.\nEven more surprisingly, we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics; F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity 1.\nINTRODUCTION Combinatorial auctions.\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items.\nThis is desirable when a bidder's valuation of a bundle of items is not equal to the sum of her valuations of the individual items.\nThis framework is currently used to regulate agents' interactions in several application domains (cf., e.g., [21]) such as electricity markets [13], bandwidth auctions [14], and transportation exchanges [18].\nFormally, a combinatorial auction is a pair I, B , where I = {I1, ..., Im} is the set of items the auctioneer has to sell, and B = {B1, ..., Bn} is the set of bids from the buyers interested in the items in I.
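The definitions above (items, bids as pairs of an item set and a price, and outcomes as sets of bids with pairwise-disjoint item sets) can be made concrete with a small sketch; the specific items, bids, and prices below are hypothetical, and the exhaustive search over bid subsets is for illustration only, since winner determination is NP-hard in general.

```python
from itertools import combinations

# Each bid Bi = (item(Bi), pay(Bi)) with item(Bi) a subset of the items.
items = {"I1", "I2", "I3", "I4", "I5"}
bids = [
    (frozenset({"I1", "I2", "I3"}), 10),   # B1
    (frozenset({"I1", "I4"}), 6),          # B2
    (frozenset({"I2", "I5"}), 7),          # B3
]

def is_outcome(b):
    """An outcome is a set of bids whose item sets are pairwise disjoint."""
    return all(x[0].isdisjoint(y[0]) for x, y in combinations(b, 2))

def winner_determination(bids):
    """Brute-force search for the outcome maximizing the sum of bid prices."""
    best, best_pay = [], 0
    for r in range(len(bids) + 1):
        for b in combinations(bids, r):
            if is_outcome(b):
                pay = sum(p for _, p in b)
                if pay > best_pay:
                    best, best_pay = list(b), pay
    return best, best_pay

outcome, revenue = winner_determination(bids)
# Accepting B2 and B3 together (disjoint items, total 13) beats B1 alone (10).
assert revenue == 13
```

With these numbers the optimal outcome accepts B2 and B3 rather than the single large bid B1, illustrating why the auctioneer cannot simply pick the highest-priced bids greedily.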
Each bid Bi has the form item(Bi), pay(Bi) , where pay(Bi) is a rational number denoting the price a buyer offers for the items in item(Bi) \u2286 I.\nAn outcome for I, B is a subset b of B such that item(Bi)\u2229item(Bj) = \u2205, for each pair Bi and Bj of bids in b with i \u2260 j.\nThe winner determination problem.\nA crucial problem for combinatorial auctions is to determine the outcome b\u2217 that maximizes the sum of the accepted bid prices (i.e., \u2211Bi\u2208b\u2217 pay(Bi)) over all the possible outcomes.\nThis problem, called the winner determination problem (e.g., [11]), is known to be intractable, actually NP-hard [17], and even not approximable in polynomial time unless NP = ZPP [19].\nHence, it comes as no surprise that considerable effort has been spent to design practically efficient algorithms for general auctions (e.g., [20, 5, 2, 8, 23]) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time (e.g., [15, 22, 12, 21]).\nIn fact, constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions.\nItem graphs.\nCurrently, the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders with the notion of item graph, which is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph.\nFigure 1: Example MaxWSP problem: (a) Hypergraph H I0,B0 , and a packing h for it; (b) Primal graph for H I0,B0 ; and, (c,d) Two item graphs for H I0,B0 .\nIndeed, the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph, i.e., a tree or, more generally, a graph having tree-like structure [3], formally of bounded treewidth [16].\nTo have some intuition on how item graphs can be built, we notice that bidder interaction in
a combinatorial auction I, B can be represented by means of a hypergraph H I,B such that its set of nodes N(H I,B ) coincides with the set of items I, and where its edges E(H I,B ) are precisely the bids of the buyers {item(Bi) | Bi \u2208 B}.\nA special item graph for I, B is the primal graph of H I,B , denoted by G(H I,B ), which contains an edge between any pair of nodes in some hyperedge of H I,B .\nThen, any item graph for H I,B can be viewed as a simplification of G(H I,B ) obtained by deleting some edges, yet preserving the connectivity condition on the nodes included in each hyperedge.\nExample 1.\nThe hypergraph H I0,B0 reported in Figure 1(a) is an encoding for a combinatorial auction I0, B0 , where I0 = {I1, ..., I5}, and item(Bi) = hi, for each 1 \u2264 i \u2264 3.\nThe primal graph for H I0,B0 is reported in Figure 1(b), while two example item graphs are reported in Figure 1(c) and (d), where the edges required for maintaining the connectivity for h1 are depicted in bold.\nOpen Problem: Computing structured item graphs efficiently.\nThe above-mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or can be efficiently determined.\nHowever, exponentially many item graphs might be associated with a combinatorial auction, and it is not clear how to determine whether a structured item graph of a certain (constant) treewidth exists, and if so, how to compute such a structured item graph efficiently.\nPolynomial-time algorithms to find the best simplification of the primal graph were so far known only for the cases where the item graph to be constructed is a line [10], a cycle [4], or a tree [3], but it was an important open problem (cf.
[3]) whether it is tractable to check whether, for a combinatorial auction, an item graph of treewidth bounded by a fixed natural number k exists and, if so, whether it can be constructed in polynomial time.

Weighted Set Packing. Let us note that the hypergraph representation H⟨I,B⟩ of a combinatorial auction ⟨I, B⟩ is also useful to make clear the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs (e.g., [17]). Formally, a packing h for a hypergraph H is a set of hyperedges of H such that, for each pair of hyperedges h′, h′′ ∈ h with h′ ≠ h′′, it holds that h′ ∩ h′′ = ∅. Letting w be a weighting function for H, i.e., a polynomial-time computable function from E(H) to the rational numbers, the weight of a packing h is the rational number w(h) = Σ_{h′∈h} w(h′), where w({}) = 0. Then, the maximum weighted-set packing problem for H w.r.t. w, denoted by MaxWSP(H, w), is the problem of finding a packing for H having the maximum weight over all the packings for H. To see that MaxWSP is just a different formulation of the winner determination problem, given a combinatorial auction ⟨I, B⟩, it is sufficient to define the weighting function w⟨I,B⟩(item(Bi)) = pay(Bi). Then, the set of solutions of the weighted set packing problem for H⟨I,B⟩ w.r.t.
w⟨I,B⟩ coincides with the set of solutions of the winner determination problem on ⟨I, B⟩.

Example 2. Consider again the hypergraph H⟨I0,B0⟩ reported in Figure 1(a). An example packing for H⟨I0,B0⟩ is h = {h1}, which intuitively corresponds to an outcome for ⟨I0, B0⟩ in which the auctioneer accepted the bid B1. Assuming that the bids B1, B2, and B3 are such that pay(B1) = pay(B2) = pay(B3), the packing h is not a solution of the problem MaxWSP(H⟨I0,B0⟩, w⟨I0,B0⟩). Indeed, the packing h∗ = {h2, h3} is such that w⟨I0,B0⟩(h∗) > w⟨I0,B0⟩(h).

Contributions. The primary aim of this paper is to identify large tractable classes for the winner determination problem that are, moreover, polynomially recognizable. Towards this aim, we first study structured item graphs and solve the open problem in [3]. The result is very bad news: it is NP-complete to check whether a combinatorial auction has a structured item graph of treewidth 3. More formally, letting C(ig, k) denote the class of all the hypergraphs having an item graph of treewidth bounded by k, we prove that deciding whether a hypergraph (associated with a combinatorial auction problem) belongs to C(ig, 3) is NP-complete.

In the light of this result, it was crucial to assess whether there are other kinds of structural requirements that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weighted-set packing problem or, equivalently, the winner determination problem. Our investigations, this time, led to very good news, summarized below: For a hypergraph H, its dual H̄ = (V, E) is such that the nodes in V are in one-to-one correspondence with the hyperedges of H, and, for each node x ∈ N(H), {h | x ∈ h ∧ h ∈ E(H)} is in E. We show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width [7] bounded by k (short: the class C(hw, k) of hypergraphs). Note that a key issue of
the tractability is to consider the hypertree width of the dual hypergraph H̄ instead of that of the auction hypergraph H. In fact, we can show that MaxWSP remains NP-hard even when H is acyclic (i.e., when it has hypertree width 1), and even when each node is contained in at most 3 hyperedges.

For some relevant special classes of hypergraphs in C(hw, k), we design a highly parallelizable algorithm for MaxWSP. Specifically, if the weighting function can be computed in logarithmic space and all weights are polynomial (e.g., when all the hyperedges have unitary weights and one is interested in finding the packing with the maximum number of edges), we show that MaxWSP can be solved by a LOGCFL algorithm. Recall, in fact, that LOGCFL is the class of decision problems that are logspace-reducible to context-free languages, and that LOGCFL ⊆ NC2 ⊆ P (see, e.g., [9]).

Surprisingly, we show that nothing is lost in terms of generality when considering hypertree decompositions of dual hypergraphs instead of the treewidth of item graphs. On the contrary, the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs: we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graph approach. Intuitively, the NP-hardness of recognizing bounded-width structured item graphs is thus not due to their great generality, but rather to some peculiarities in their definition.

The proofs of the above results give us some interesting insight into the notion of structured item graph. Indeed, we show that structured item graphs are in one-to-one correspondence with special kinds of hypertree decompositions of the dual hypergraph, which we call strict hypertree decompositions. A game characterization of the notion of strict hypertree width is also proposed, which specializes the Robber and Marshals game of [6] (proposed to characterize the
hypertree width), and which makes clear the further requirements imposed on hypertree decompositions.

The rest of the paper is organized as follows. Section 2 discusses the intractability of structured item graphs. Section 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width, and discusses the cases where the algorithm is also highly parallelizable. The comparison between the classes C(ig, k) and C(hw, k) is discussed in Section 4. Finally, in Section 5 we draw our conclusions and outline directions for further research.

2. COMPLEXITY OF STRUCTURED ITEM GRAPHS

Let H be a hypergraph. A graph G = (V, E) is an item graph for H if V = N(H) and, for each h ∈ E(H), the subgraph of G induced over the nodes in h is connected. An important class of item graphs is that of structured item graphs, i.e., of those item graphs having bounded treewidth, as formalized below.

A tree decomposition [16] of a graph G = (V, E) is a pair ⟨T, χ⟩, where T = (N, F) is a tree, and χ is a labelling function assigning to each vertex p ∈ N a set of vertices χ(p) ⊆ V, such that the following conditions are satisfied: (1) for each vertex b of G, there exists p ∈ N such that b ∈ χ(p); (2) for each edge {b, d} ∈ E, there exists p ∈ N such that {b, d} ⊆ χ(p); (3) for each vertex b of G, the set {p ∈ N | b ∈ χ(p)} induces a connected subtree of T. The width of ⟨T, χ⟩ is the number max_{p∈N} |χ(p)| − 1. The treewidth of G, denoted by tw(G), is the minimum width over all its tree decompositions. The winner determination problem can be solved in polynomial time on item graphs having bounded treewidth [3].

Theorem 1 (cf.
[3]). Assume a k-width tree decomposition ⟨T, χ⟩ of an item graph for H is given. Then, MaxWSP(H, w) can be solved in time O(|T|² × (|E(H)| + 1)^(k+1)).

Many item graphs can be associated with a hypergraph. As an example, observe that the item graph in Figure 1(c) has treewidth 1, while Figure 1(d) reports an item graph whose treewidth is 2. Indeed, it was an open question whether, for a given constant k, it can be checked in polynomial time whether an item graph of treewidth k exists and, if so, whether such an item graph can be efficiently computed. Let C(ig, k) denote the class of all the hypergraphs having an item graph G such that tw(G) ≤ k. The main result of this section is to show that the class C(ig, k) is hard to recognize.

Theorem 2. Deciding whether a hypergraph H belongs to C(ig, 3) is NP-hard.

The proof of this result relies on an elaborate reduction from the Hamiltonian path problem HP(s, t) of deciding whether there is a Hamiltonian path from a node s to a node t in a directed graph G = (N, E). To help the intuition, we report here a high-level overview of the main ingredients exploited in the proof¹. The general idea is to build a hypergraph HG such that there is an item graph G′ for HG with tw(G′) ≤ 3 if and only if HP(s, t) over G has a solution. First, we discuss the way HG is constructed. See Figure 2(a) for an illustration, where the graph G consists of the nodes s, x, y, and t, and the set of its edges is {e1 = (s, x), e2 = (x, y), e3 = (x, t), e4 = (y, t)}.

From G to HG. Let G = (N, E) be a directed graph. Then, the set of nodes of HG is as follows: for each x ∈ N, N(HG) contains the nodes bsx, btx, bx, b′x, bdx; for each e = (x, y) ∈ E, N(HG) contains the nodes nsx, ns′x, nty, nt′y, ns^e_x, and nt^e_y.
No other node is in N(HG). Hyperedges in HG are of three kinds:

1) for each x ∈ N, E(HG) contains the hyperedges:
• Sx = {bsx} ∪ {ns^e_x | e = (x, y) ∈ E};
• Tx = {btx} ∪ {nt^e_x | e = (z, x) ∈ E};
• A1x = {bdx, bx}, A2x = {bdx, b′x}, and A3x = {bx, b′x}; notice that these hyperedges induce a clique on the nodes {bx, b′x, bdx};
• SA1x = {bsx, bx}, SA2x = {bsx, b′x}, and SA3x = {bsx, bdx}; notice that these hyperedges plus A1x, A2x, and A3x induce a clique on the nodes {bsx, bx, b′x, bdx};
• TA1x = {btx, bx}, TA2x = {btx, b′x}, and TA3x = {btx, bdx}; notice that these hyperedges plus A1x, A2x, and A3x induce a clique on the nodes {btx, bx, b′x, bdx};

¹ Detailed proofs can be found in the Appendix, available at www.mat.unical.it/∼ggreco/papers/ca.pdf.

Figure 2: Proof of Theorem 2: (a) from G to HG - only hyperedges of kinds 1) and 2) are reported; (b) a skeleton for a tree decomposition TD for HG.

2) for each e = (x, y) ∈ E, E(HG) contains the hyperedges:
• SHx = {nsx, ns′x};
• THy = {nty, nt′y};
• SEe = {nsx, ns^e_x} and SE′e = {ns′x, ns^e_x}; notice that these two hyperedges plus SHx induce a clique on the nodes {nsx, ns′x, ns^e_x};
• TEe = {nty, nt^e_y} and TE′e = {nt′y, nt^e_y}; notice that these two hyperedges plus THy induce a clique on the nodes {nty, nt′y, nt^e_y}.

Notice that each of the above hyperedges, except those of the form Sx and Tx, contains exactly two nodes. As an example of the hyperedges of kinds 1) and 2), the reader may refer to the construction reported in Figure 2(a), and notice, for instance, that Sx = {bsx, ns^{e2}_x, ns^{e3}_x} and that Tt = {btt, nt^{e4}_t, nt^{e3}_t}.

3) finally, we denote by DG the set containing the hyperedges in E(HG) of the third kind. In the reduction we are exploiting, DG can be an arbitrary set of hyperedges satisfying the four conditions discussed below. Let PG be the set of the following |PG| ≤ |N| + 3 × |E| pairs: PG
= {(bx, b′x) | x ∈ N} ∪ {(nsx, ns′x), (nty, nt′y), (ns^e_x, nt^e_y) | e = (x, y) ∈ E}. Also, let I(v) denote the set {h ∈ E(H) | v ∈ h} of the hyperedges of H that are touched by v and, for a set V ⊆ N(H), let I(V) = ∪_{v∈V} I(v). Then, DG has to be a set such that:

(c1) ∀(α, β) ∈ PG, I(α) ∩ I(β) ∩ DG = ∅;
(c2) ∀(α, β) ∈ PG, I(α) ∪ I(β) ⊇ DG;
(c3) ∀α ∈ N such that there is no β ∈ N with (α, β) ∈ PG or (β, α) ∈ PG, it holds that I(α) ∩ DG = ∅; and,
(c4) ∀S ⊆ N such that |S| ≤ 3 and such that no α, β ∈ S exist with (α, β) ∈ PG, it is the case that I(S) ⊉ DG.

Intuitively, the set DG is such that each of its hyperedges is touched by exactly one of the two nodes in every pair of PG (cf. (c1) and (c2)). Moreover, hyperedges in DG touch only vertices included in at least one pair of PG (cf. (c3)); and no triple of nodes is capable of touching all the elements of DG if none of the pairs that can be built from it belongs to PG (cf.
(c4)). The reader may now ask whether a set DG satisfying (c1), (c2), (c3), and (c4) exists at all. In the following lemma, we answer this question positively, and refer the reader to its proof for an example construction.

Lemma 1. A set DG, with |DG| = 2 × |PG| + 2, satisfying conditions (c1), (c2), (c3), and (c4) can be built in time O(|PG|²).

Key Ingredients. We are now in a position to present an overview of the key ingredients of the proof. Let G′ be an arbitrary item graph for HG, and let TD = ⟨T, χ⟩ be a 3-width tree decomposition of G′ (note that, because of the cliques, e.g., on the nodes {bsx, bx, b′x, bdx}, any item graph for HG has treewidth at least 3). There are three basic observations serving the purpose of proving the correctness of the reduction.

Blocks of TD: First, we observe that TD must contain some special kinds of vertices. Specifically, for each node x ∈ N, TD contains a vertex bs(x) such that χ(bs(x)) ⊇ {bsx, bx, b′x, bdx}, and a vertex bt(x) such that χ(bt(x)) ⊇ {btx, bx, b′x, bdx}. And, for each edge e = (x, y) ∈ E, TD contains a vertex ns(x,e) such that χ(ns(x,e)) ⊇ {ns^e_x, nsx, ns′x}, and a vertex nt(y,e) such that χ(nt(y,e)) ⊇ {nt^e_y, nty, nt′y}. Intuitively, these vertices are required to cover the cliques of HG associated with the hyperedges of kinds 1) and 2). Each of these vertices plays a specific role in the reduction. Indeed, each directed edge e = (x, y) ∈ E is encoded in TD by means of the vertices ns(x,e), representing precisely that e starts from x, and nt(y,e), representing precisely that e terminates in y.
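To make conditions (c1)-(c4) concrete, the following small Python sketch checks a candidate set DG against a pair set PG. It follows the intuitive reading of the four conditions stated above; the tiny hypergraph, PG, and DG used in the usage note below are hypothetical toy data for illustration only, not the construction of Lemma 1.

```python
from itertools import combinations

def I(edges, v):
    """Hyperedges touched by node v."""
    return {h for h in edges if v in h}

def I_set(edges, S):
    """Hyperedges touched by at least one node of S."""
    return set().union(*(I(edges, v) for v in S)) if S else set()

def check_DG(nodes, edges, PG, DG):
    """Check conditions (c1)-(c4) on DG with respect to the pair set PG."""
    paired = {v for p in PG for v in p}
    pairs = {frozenset(p) for p in PG}
    # (c1)/(c2): each hyperedge of DG is touched by exactly one node of every pair.
    c1 = all(not (I(edges, a) & I(edges, b) & DG) for a, b in PG)
    c2 = all(DG <= (I(edges, a) | I(edges, b)) for a, b in PG)
    # (c3): nodes occurring in no pair of PG touch no hyperedge of DG.
    c3 = all(not (I(edges, v) & DG) for v in nodes - paired)
    # (c4): a set of at most 3 nodes containing no PG-pair cannot touch all of DG.
    c4 = all(
        not (DG <= I_set(edges, set(S)))
        for r in (1, 2, 3)
        for S in combinations(nodes, r)
        if not any(q <= set(S) for q in pairs)
    )
    return c1 and c2 and c3 and c4
```

For instance, with nodes {a, b, z}, the single pair PG = {(a, b)}, and the hyperedges {a}, {b}, {a, b, z}, the set DG = {{a}, {b}} passes all four conditions, while DG = {{a, b, z}} violates (c1).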
Also, each node x ∈ N is encoded in TD by means of the vertices bs(x), representing the starting point of edges originating from x, and bt(x), representing the terminating point of edges ending in x. As an example, Figure 2(b) reports the skeleton of a tree decomposition TD. The reader may notice in it the blocks defined above and how they are related to the hypergraph HG in Figure 2(a); the other blocks in it (of the form w(x,y)) are defined next.

Connectedness between blocks, and uniqueness of the connections: The second crucial observation is that, in the path connecting a vertex of the form bs(x) (resp., bt(y)) with a vertex of the form ns(x,e) (resp., nt(y,e)), there is one special vertex of the form w(x,y) such that χ(w(x,y)) ⊇ {ns^e_x, nt^e_y}, for some edge e = (x, y) ∈ E. Guaranteeing the existence of such a vertex is precisely the role played by the hyperedges in DG. The arguments of the proof are as follows. First, we observe that I(χ(bs(x))) ∩ I(χ(ns(x,e))) ⊇ DG ∪ {Sx} and I(χ(bt(y))) ∩ I(χ(nt(y,e))) ⊇ DG ∪ {Ty}. Then, we show a property stating that, for a pair of consecutive vertices p and q in the path connecting bs(x) and ns(x,e) (resp., bt(y) and nt(y,e)), I(χ(p) ∩ χ(q)) ⊇ I(χ(bs(x))) ∩ I(χ(ns(x,e))) (resp., I(χ(p) ∩ χ(q)) ⊇ I(χ(bt(y))) ∩ I(χ(nt(y,e)))). Thus, we have I(χ(p) ∩ χ(q)) ⊇ DG ∪ {Sx} (resp., I(χ(p) ∩ χ(q)) ⊇ DG ∪ {Ty}). Based on this observation, and by exploiting the properties of the hyperedges in DG, it is not difficult to show that any pair of consecutive vertices p and q must share two nodes of HG forming a pair in PG, and must both touch Sx (resp., Ty). Since the treewidth of G′ is 3, we can conclude that a vertex, say w(x,y), in this path is such that χ(w(x,y)) ⊇ {ns^e_x, nt^e_y}, for some edge e = (x, y) ∈ E; to
this end, note that ns^e_x ∈ Sx, nt^e_y ∈ Ty, and I(χ(w(x,y))) ⊇ DG. In particular, w(x,y) is the only kind of vertex satisfying these conditions, i.e., in the path there is no further vertex of the form w(x,z), for z ≠ y (resp., w(z,y), for z ≠ x). To help the intuition, we observe that having a vertex of the form w(x,y) in TD corresponds to the selection of an edge from node x to node y in the Hamiltonian path. In fact, given the uniqueness of the vertices selected for ensuring the connectivity, a one-to-one correspondence can be established between the existence of a Hamiltonian path for G and the vertices of the form w(x,y). As an example, in Figure 2(b), the vertices of the form w(s,x), w(x,y), and w(y,t) are in TD, and G_TD shows the corresponding Hamiltonian path.

Unused blocks: Finally, the third ingredient of the proof is the observation that if a vertex of the form w(x,y), for an edge e′ = (x, y) ∈ E, is not in TD (i.e., if the edge (x, y) does not belong to the Hamiltonian path), then the corresponding block ns(x,e′) (resp., nt(y,e′)) can be arbitrarily appended in the subtree rooted at the block ns(x,e) (resp., nt(y,e)), where e is the edge of the form e = (x, z) (resp., e = (z, y)) such that w(x,z) (resp., w(z,y)) is in TD. E.g., Figure 2(a) shows w(x,t), which is not used in TD, and Figure 2(b) shows how the blocks ns(x,e3) and nt(t,e3) can be arranged in TD to ensure the connectedness condition.

3. TRACTABLE CASES VIA HYPERTREE DECOMPOSITIONS

Since constructing structured item graphs is intractable, it is relevant to assess whether other structural restrictions can be used to single out classes of tractable MaxWSP instances. To this end, we focus on the notion of hypertree decomposition [7], which is a natural generalization of hypergraph acyclicity and which has been profitably used in other domains, e.g., constraint satisfaction and database query evaluation, to identify tractability islands for NP-hard
problems.

A hypertree for a hypergraph H is a triple ⟨T, χ, λ⟩, where T = (N, E) is a rooted tree, and χ and λ are labelling functions which associate each vertex p ∈ N with two sets χ(p) ⊆ N(H) and λ(p) ⊆ E(H). If T′ = (N′, E′) is a subtree of T, we define χ(T′) = ∪_{v∈N′} χ(v). We denote the set of vertices N of T by vertices(T). Moreover, for any p ∈ N, Tp denotes the subtree of T rooted at p.

Definition 1. A hypertree decomposition of a hypergraph H is a hypertree HD = ⟨T, χ, λ⟩ for H which satisfies all the following conditions:
1. for each edge h ∈ E(H), there exists p ∈ vertices(T) such that h ⊆ χ(p) (we say that p covers h);
2. for each node Y ∈ N(H), the set {p ∈ vertices(T) | Y ∈ χ(p)} induces a (connected) subtree of T;
3. for each p ∈ vertices(T), χ(p) ⊆ N(λ(p));
4. for each p ∈ vertices(T), N(λ(p)) ∩ χ(Tp) ⊆ χ(p).

The width of a hypertree decomposition ⟨T, χ, λ⟩ is max_{p∈vertices(T)} |λ(p)|. The hypertree width hw(H) of H is the minimum width over all its hypertree decompositions. A hypergraph H is acyclic if hw(H) = 1.

Figure 3: Example MaxWSP problem: (a) Hypergraph H1; (b) Hypergraph H̄1; (c) A 2-width hypertree decomposition of H̄1.

Example 3. The hypergraph H⟨I0,B0⟩ reported in Figure 1(a) is an example of an acyclic hypergraph. Instead, both the hypergraphs H1 and H̄1, shown in Figure 3(a) and Figure 3(b), respectively, are not acyclic, since their hypertree width is 2. A 2-width hypertree decomposition of H̄1 is reported in Figure 3(c). In particular, observe that H1 has been obtained by adding the two hyperedges h4 and h5 to H⟨I0,B0⟩ to model, for instance, that two new bids, B4 and B5, respectively, have been proposed to the auctioneer.

In the following, rather than working on the hypergraph H associated with a MaxWSP
problem, we shall deal with its dual H̄, i.e., with the hypergraph whose nodes are in one-to-one correspondence with the hyperedges of H, and where, for each node x ∈ N(H), {h | x ∈ h ∧ h ∈ E(H)} is in E(H̄). As an example, the reader may want to check again the hypergraph H1 in Figure 3(a) and notice that the hypergraph in Figure 3(b) is in fact its dual. The rationale for this choice is that imposing restrictions on the original hypergraph guarantees tractability only in very simple scenarios.

Theorem 3. On the class of acyclic hypergraphs, MaxWSP is (1) in P if each node occurs in at most two hyperedges; and (2) NP-hard, even if each node is contained in at most three hyperedges.

3.1 Hypertree Decomposition on the Dual Hypergraph and Tractable Packing Problems

For a fixed constant k, let C(hw, k) denote the class of all the hypergraphs whose dual hypergraphs have hypertree width bounded by k. The maximum weighted-set packing problem can be solved in polynomial time on the class C(hw, k) by means of the algorithm ComputeSetPackingk, shown in Figure 4. The algorithm receives as input a hypergraph H, a weighting function w, and a k-width hypertree decomposition HD = ⟨T=(N, E), χ, λ⟩ of H̄.
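The two objects manipulated here, the dual hypergraph H̄ and the MaxWSP objective, can be illustrated with a short Python sketch: a dual-hypergraph constructor and a brute-force MaxWSP solver usable as a reference on tiny instances. The three bids below are hypothetical, chosen only to mirror the flavor of Example 2; Figure 1's exact hyperedges are not reproduced in the text.

```python
from itertools import combinations

def dual(nodes, edges):
    """Dual hypergraph: one hyperedge per node x of H, collecting the
    names of the hyperedges of H that contain x."""
    return {x: frozenset(name for name, h in edges.items() if x in h)
            for x in nodes}

def max_wsp(edges, w):
    """Brute-force MaxWSP: the heaviest set of pairwise-disjoint hyperedges."""
    best, best_w = [], 0
    names = sorted(edges)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if all(edges[a].isdisjoint(edges[b])
                   for a, b in combinations(combo, 2)):
                weight = sum(w[name] for name in combo)
                if weight > best_w:
                    best, best_w = list(combo), weight
    return best, best_w

# Hypothetical auction: h1 overlaps both h2 and h3; h2 and h3 are disjoint.
items = {'I1', 'I2', 'I3', 'I4', 'I5'}
bids = {'h1': {'I1', 'I2', 'I3'}, 'h2': {'I1', 'I4'}, 'h3': {'I2', 'I5'}}
```

With unit weights, `max_wsp(bids, {'h1': 1, 'h2': 1, 'h3': 1})` returns `(['h2', 'h3'], 2)`, matching the intuition of Example 2, and `dual(items, bids)['I1']` is `frozenset({'h1', 'h2'})`. The exhaustive search is exponential, of course; it is the polynomial-time baseline that the decomposition-based algorithm below improves upon for C(hw, k).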
For each vertex v ∈ N, let Hv be the hypergraph whose set of nodes N(Hv) ⊆ N(H) coincides with λ(v), and whose set of edges E(Hv) ⊆ E(H) coincides with χ(v). In an initialization step, the algorithm equips each vertex v with all the possible packings for Hv, which are stored in the set Hv. Note that the size of Hv is bounded by (|E(H)| + 1)^k, since each node in λ(v) is either left uncovered in a packing or is covered by precisely one of the hyperedges in χ(v) ⊆ E(H). Then, ComputeSetPackingk is designed to filter these packings by retaining only those that conform with some packing for Hc, for each child c of v in T, as formalized next. Let hv and hc be two packings for Hv and Hc, respectively. We say that hv conforms with hc, denoted by hv ≈ hc, if: for each h ∈ hc ∩ E(Hv), h is in hv; and, for each h ∈ (E(Hc) − hc), h is not in hv.

Example 4. Consider again the hypertree decomposition of H̄1 reported in Figure 3(c). The set of all the possible packings (which are built in the initialization step of ComputeSetPackingk) for each of its vertices is reported in Figure 5(a). For instance, the root v1 is such that Hv1 = { {}, {h1}, {h3}, {h5} }. Moreover, an arrow from a packing hc to a packing hv denotes that hv conforms with hc. For instance, the reader may check that the packing {h3} ∈ Hv1 conforms with the packing {h2, h3} ∈ Hv3, but does not conform with {h1} ∈ Hv3.

Figure 5: Example application of Algorithm ComputeSetPackingk.

Input: H, w, and a k-width hypertree decomposition HD = ⟨T=(N, E), χ, λ⟩ of H̄;
Output: A solution to MaxWSP(H, w);
var
  Hv : set of packings for Hv, for each v ∈ N;
  h∗ : packing for H;
  ν_v[hv] : rational number, for each partial packing hv for Hv;
  h_{hv,c} : partial packing for Hc, for each partial packing hv for Hv and each (v, c) ∈ E;

Procedure BottomUp;
begin
  Done := the set of all the leaves of T;
  while ∃v ∈ T such that (i) v ∉ Done, and (ii) {c | c is a child of v} ⊆ Done do
    for each c such that (v, c) ∈ E do
      Hv := Hv − {hv | ∄hc ∈ Hc s.t. hv ≈ hc};
    end for
    for each hv ∈ Hv do
      ν_v[hv] := w(hv);
      for each c such that (v, c) ∈ E do
        h̄c := arg max_{hc ∈ Hc | hv ≈ hc} ( ν_c[hc] − w(hc ∩ hv) );
        h_{hv,c} := h̄c; (* set best packing *)
        ν_v[hv] := ν_v[hv] + ν_c[h̄c] − w(h̄c ∩ hv);
      end for
    end for
    Done := Done ∪ {v};
  end while
end;

begin (* MAIN *)
  for each vertex v in T do Hv := {hv | hv is a packing for Hv};
  BottomUp;
  let r be the root of T;
  h̄r := arg max_{hr ∈ Hr} ν_r[hr];
  h∗ := h̄r; (* include packing *)
  TopDown(r, h̄r);
  return h∗;
end.

Procedure TopDown(v : vertex of N, h̄v ∈ Hv);
begin
  for each c ∈ N s.t. (v, c) ∈ E do
    h̄c := h_{h̄v,c};
    h∗ := h∗ ∪ h̄c; (* include packing *)
    TopDown(c, h̄c);
  end for
end;

Figure 4: Algorithm ComputeSetPackingk.

ComputeSetPackingk builds a solution by traversing T in two phases. In the first phase, the vertices of T are processed from the leaves to the root r by means of the procedure BottomUp. For each node v being processed, the set Hv is preliminarily updated by removing all the packings hv that do not conform with any packing for some child of v. After this filtering is performed, the weight ν_v[hv] is updated. Intuitively, ν_v[hv] stores the weight of the best partial packing for H computed by using only the hyperedges occurring in χ(Tv). Indeed, if v is a leaf, then ν_v[hv] = w(hv). Otherwise, for each child c of v in T, ν_v[hv] is updated by adding the maximum of ν_c[hc] − w(hc ∩ hv) over all the packings hc that conform with hv (resolving ties arbitrarily). The packing h̄c for
which this maximum is achieved is stored in the variable h_{hv,c}. In the second phase, the tree T is processed starting from the root. Firstly, the packing h̄r in Hr having the maximum weight ν_r[h̄r] is selected, and h∗ is initialized to h̄r. Then, the procedure TopDown is used to extend h∗ with the partial packings for all the other vertices of T. In particular, at each vertex v, h∗ is extended with the packing h_{hv,c}, for each child c of v.

Example 5. Assume that, in our running example, w(h1) = w(h2) = w(h3) = w(h4) = 1. Then, an execution of ComputeSetPackingk is graphically depicted in Figure 5(b), where an arrow from a packing hc to a packing hv is used to denote that hc = h_{hv,c}. Specifically, the choices made during the computation are such that the packing {h2, h3} is computed. In particular, during the bottom-up phase, we have that: (1) v4 is processed, and we set ν_{v4}[{h2}] = ν_{v4}[{h4}] = 1 and ν_{v4}[{}] = 0; (2) v3 is processed, and we set ν_{v3}[{h1}] = ν_{v3}[{h3}] = 1 and ν_{v3}[{}] = 0; (3) v2 is processed, and we set ν_{v2}[{h1}] = ν_{v2}[{h2}] = ν_{v2}[{h3}] = ν_{v2}[{h4}] = 1, ν_{v2}[{h2,h3}] = 2, and ν_{v2}[{}] = 0; (4) v1 is processed, and we set ν_{v1}[{h1}] = 1, ν_{v1}[{h5}] = ν_{v1}[{h3}] = 2, and ν_{v1}[{}] = 0. For instance, note that ν_{v1}[{h5}] = 2 since {h5} conforms with the packing {h4} of Hv2 such that ν_{v2}[{h4}] = 1. Then, at the beginning of the top-down phase, ComputeSetPackingk selects {h3} as a packing for Hv1 and propagates this choice in the tree. Equivalently, the algorithm might have chosen {h5}. As a further example, the way the solution {h1} is obtained by the algorithm when w(h1) = 5 and w(h2) = w(h3) = w(h4) = 1 is reported in Figure 5(c). Notice that, this time, in the top-down phase, ComputeSetPackingk starts by selecting {h1} as the best packing for Hv1.

Theorem 4. Let H be a hypergraph and w a weighting function for it. Let HD = ⟨T, χ, λ⟩ be a complete k-width hypertree decomposition of H̄.
Then, ComputeSetPackingk on input H, w, and HD correctly outputs a solution for MaxWSP(H, w) in time O(|T| × (|E(H)| + 1)^{2k}).

Proof. [Sketch] We first observe that h∗ (computed by ComputeSetPackingk) is a packing for H. Indeed, consider a pair of hyperedges h1 and h2 in h∗, and assume, for the sake of contradiction, that h1 ∩ h2 ≠ ∅. Let v1 (resp., v2) be an arbitrary vertex of T for which ComputeSetPackingk included h1 (resp., h2) in h∗ during the top-down computation. By construction, we have h1 ∈ χ(v1) and h2 ∈ χ(v2). Let I be an element of h1 ∩ h2. In the dual hypergraph H̄, I is a hyperedge in E(H̄) which covers both the nodes h1 and h2. Hence, by condition (1) in Definition 1, there is a vertex v ∈ vertices(T) such that {h1, h2} ⊆ χ(v). Note that, because of the connectedness condition in Definition 1, we can also assume, w.l.o.g., that v is in the path connecting v1 and v2 in T. Let hv ∈ Hv denote the element added by ComputeSetPackingk to h∗ during the top-down phase. Since the elements of Hv are packings for Hv, it is the case that either h1 ∉ hv or h2 ∉ hv. Assume, w.l.o.g., that h1 ∉ hv, and notice that each vertex w of T in the path connecting v to v1 is such that h1 ∈ χ(w), because of the connectedness condition. Hence, by the definition of conformance, the packing hw selected by ComputeSetPackingk to be added at vertex w to h∗ must be such that h1 ∉ hw. This holds in particular for w = v1, a contradiction with the definition of v1. Therefore, h∗ is a packing for H. It remains to show that it has the maximum weight over all the packings for H.
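The conformance relation on which this argument hinges is mechanical enough to state as a short Python sketch. The edge sets E(Hv1) and E(Hv3) below are inferred from Example 4 and are assumptions, since Figure 3(c) is not reproduced in the text.

```python
def conforms(hv, hc, E_Hv, E_Hc):
    """hv conforms with hc: every hyperedge accepted by hc and visible to Hv
    is accepted by hv, and every hyperedge of Hc rejected by hc is
    rejected by hv."""
    return (hc & E_Hv) <= hv and not ((E_Hc - hc) & hv)

# Edge sets read off from Example 4 (assumed, as Figure 3(c) is not shown):
E_Hv1 = {'h1', 'h3', 'h5'}
E_Hv3 = {'h1', 'h2', 'h3'}
```

Here `conforms({'h3'}, {'h2', 'h3'}, E_Hv1, E_Hv3)` is `True`, while `conforms({'h3'}, {'h1'}, E_Hv1, E_Hv3)` is `False`, matching the two checks in Example 4: a packing chosen at a parent must agree with the child's packing on every hyperedge the two local hypergraphs share, which is exactly the propagation property the proof exploits along the path from v to v1.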
To this aim, we can use structural induction on T to prove that, in the bottom-up phase, the variable ν_v[hv] is updated to contain the weight of the packing over the edges in χ(Tv) which contains hv and which has the maximum weight over all such packings. Then, the result follows, since in the top-down phase the packing hr yielding the maximum weight over χ(Tr) = E(H) is first included in h∗, and then extended at each node c with the packing h_{hv,c} which conforms with hv and for which the maximum value of ν_v[hv] is achieved.

As for the complexity, observe that the initialization step requires the construction of the set Hv for each vertex v, and each such set has size at most (|E(H)| + 1)^k. Then, the function BottomUp checks the conformance between packings in Hv and packings in Hc, for each pair (v, c) ∈ E, and updates the weight ν_v[hv]. These tasks can be carried out in time O((|E(H)| + 1)^{2k}) and must be repeated for each edge of T, i.e., O(|T|) times. Finally, the function TopDown can be implemented in time linear in the size of T, since it just requires updating h∗ by accessing the variable h_{hv,c}.

The above result shows that, if a hypertree decomposition of width k is given, the MaxWSP problem can be efficiently solved. Moreover, differently from the case of structured item graphs, it is well known that deciding the existence of a k-bounded hypertree decomposition and computing one (if any) are problems which can be solved in polynomial time [7]. Therefore, Theorem 4 witnesses that the class C(hw, k) actually constitutes a tractable class for the winner determination problem. As the following theorem shows, for large subclasses (which depend only on how the weight function is specified), MaxWSP(H, w) is even highly parallelizable. Let us call a weighting function smooth if it is logspace-computable and all weights are polynomial (and thus require only O(log n) bits for their
representation). Recall that LOGCFL is a parallel complexity class contained in NC2, cf. [9]. The functional version of LOGCFL is L^LOGCFL, which is obtained by equipping a logspace transducer with an oracle in LOGCFL.

Theorem 5. Let H be a hypergraph in C(hw, k), and let w be a smooth weighting function for it. Then, MaxWSP(H, w) is in L^LOGCFL.

4. HYPERTREE DECOMPOSITIONS VS STRUCTURED ITEM GRAPHS

Given that the class C(hw, k) has been shown to be an island of tractability for the winner determination problem, and given that the class C(ig, k) has been shown not to be efficiently recognizable, one might be inclined to think that there are instances having unbounded hypertree width but admitting an item graph of bounded treewidth (so that the intractability of structured item graphs would lie in their generality). Surprisingly, we establish that this is not the case. The line of the proof is to first show that structured item graphs are in one-to-one correspondence with a special kind of hypertree decomposition of the dual hypergraph, which we call strict. The result then follows by proving that k-width strict hypertree decompositions are less powerful than k-width hypertree decompositions.

4.1 Strict Hypertree Decompositions

Let H be a hypergraph, let V ⊆ N(H) be a set of nodes, and let X, Y ∈ N(H). X is [V]-adjacent to Y if there exists an edge h ∈ E(H) such that {X, Y} ⊆ (h − V). A [V]-path π from X to Y is a sequence X = X0, ..., Xℓ = Y of variables such that Xi is [V]-adjacent to Xi+1, for each i ∈ [0...
-1].\nA set W \u2286 N(H) of nodes is [V ]-connected if \u2200X, Y \u2208 W there is a [V ]-path from X to Y .\nA [V ]-component is a maximal [V ]-connected non-empty set of nodes W \u2286 (N(H) \u2212 V ).\nFor any [V ]-component C, let E(C) = {h \u2208 E(H) | h \u2229 C = \u2205}.\nDefinition 2.\nA hypertree decomposition HD = T, \u03c7, \u03bb of H is strict if the following conditions hold: 1.\nfor each pair of vertices r and s in vertices(T) such that s is a child of r, and for each [\u03c7(r)]-component Cr s.t. Cr \u2229 \u03c7(Ts) = \u2205, Cr is a [\u03c7(r) \u2229 N(\u03bb(r) \u2229 \u03bb(s))]-component; 2.\nfor each edge h \u2208 E(H), there is a vertex p such that h \u2208 \u03bb(p) and h \u2286 \u03c7(p) (we say p strongly covers h); 3.\nfor each edge h \u2208 E(H), the set {p \u2208 vertices(T) | h \u2208 \u03bb(p)} induces a (connected) subtree of T.\nThe strict hypertree width shw(H) of H is the minimum width over all its strict hypertree decompositions.\nP The basic relationship between nice hypertree decompositions and structured item graphs is shown in the following theorem.\nTheorem 6.\nLet H be a hypergraph such that for each node v \u2208 N(H), {v} is in E(H).\nThen, a k-width tree decomposition of an item graph for H exists if and only if \u00afH has a (k + 1)-width strict hypertree decomposition2 .\nNote that, as far as the maximum weighted-set packing problem is concerned, given a hypergraph H, we can always assume that for each node v \u2208 N(H), {v} is in E(H).\nIn fact, if this hyperedge is not in the hypergraph, then it can be added without loss of generality, by setting w({v}) = 0.\nTherefore, letting C(shw, k) denote the class of all the hypergraphs whose dual hypergraphs (associated with maximum 2 The term +1 only plays the technical role of taking care of the different definition of width for tree decompositions and hypertree decompositions.\n159 weighted-set packing problems) have strict hypertree width bounded by k, we have that 
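Since [V]-adjacency and [V]-components are defined purely combinatorially, they are straightforward to compute from the definitions. The following Python helper is our own illustrative sketch (the function name and data representation are assumptions, not code from the paper): it materializes the [V]-adjacency relation and collects its connected components by breadth-first search.

```python
from collections import deque

def v_components(nodes, edges, V):
    """Return the [V]-components of hypergraph (nodes, edges): the maximal
    [V]-connected non-empty subsets of nodes - V. Two nodes X, Y are
    [V]-adjacent iff some hyperedge h satisfies {X, Y} <= h - V."""
    V = set(V)
    remaining = set(nodes) - V
    # Build the [V]-adjacency relation: drop V from each hyperedge, then
    # connect every pair of surviving nodes that share a hyperedge.
    adj = {x: set() for x in remaining}
    for h in edges:
        free = set(h) - V
        for x in free:
            adj[x] |= free - {x}
    # Collect connected components of the adjacency relation via BFS.
    components = []
    while remaining:
        start = remaining.pop()
        comp, queue = {start}, deque([start])
        while queue:
            for y in adj[queue.popleft()]:
                if y in remaining:
                    remaining.remove(y)
                    comp.add(y)
                    queue.append(y)
        components.append(comp)
    return components
```

Given the components, E(C) = {h ∈ E(H) | h ∩ C ≠ ∅} is then a one-line filter over the edge set.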
By definition, strict hypertree decompositions are special hypertree decompositions. In fact, we are able to show that the additional conditions in Definition 2 induce an actual restriction of the decomposition power.

Theorem 7. C(ig, k) = C(shw, k + 1) ⊂ C(hw, k + 1).

A Game-Theoretic View. We shed further light on strict hypertree decompositions by discussing an interesting characterization based on the strict Robber and Marshals game, obtained by adapting the Robber and Marshals game defined in [6], which characterizes hypertree width. The game is played on a hypergraph H by a robber against k marshals, which act in coordination. Marshals move on the hyperedges of H, while the robber moves on the nodes of H. The robber sees where the marshals intend to move, and reacts by moving to another node which is connected with its current position through a path in G(H) that does not use any node contained in a hyperedge occupied by the marshals before and after their move; we say that these hyperedges are blocked. Note that in the basic game defined in [6], the robber is not allowed to move over vertices that are occupied by the marshals before and after their move, even if they do not belong to blocked hyperedges. Importantly, marshals are required to play monotonically, i.e., they cannot occupy an edge that was previously occupied in the game and is currently not. The marshals win the game if they capture the robber, by occupying an edge covering a node where the robber is. Otherwise, the robber wins.

Theorem 8. Let H be a hypergraph such that for each node v ∈ N(H), {v} is in E(H). Then, H̄ has a k-width strict hypertree decomposition if and only if k marshals can win the strict Robber and Marshals game on H̄, regardless of the robber's moves.

5. CONCLUSIONS

We have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario. The result is bad news, since it turned out that it is NP-complete to check whether a combinatorial auction has a structured item graph, even for treewidth 3. Motivated by this result, we investigated the use of hypertree decompositions (on the dual hypergraph associated with the scenario), and we have shown that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width. For some special, yet relevant, cases, a highly parallelizable algorithm has also been discussed. Interestingly, it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width (hence, the reason for their intractability is not their generality). In particular, the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions (on the dual hypergraph), called query decompositions (see, e.g., [7]). In the light of this observation, we note that proving approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions, which is currently missing in the literature. As a further avenue of research, it would be relevant to enhance the algorithm ComputeSetPacking_k, e.g., by using specialized data structures, in order to avoid the quadratic dependency on (|E(H)| + 1)^k. Finally, another interesting question is to assess whether the structural decomposition techniques discussed in the paper can be used to efficiently deal with generalizations of the winner determination problem. For instance, it might be relevant in several application scenarios to design algorithms that can find a selling strategy when several copies of the same item are available for sale, and when, moreover, the auctioneer is satisfied when at least a given number of copies is actually sold.

Acknowledgements G.
Gottlob's work was supported by the EC3 - E-Commerce Competence Center (Vienna) and by a Royal Society Wolfson Research Merit Award. In particular, this Award allowed Gottlob to invite G. Greco for a research visit to Oxford. In addition, G. Greco is supported by ICAR-CNR, and by M.I.U.R. under project TOCAI.IT.

6. REFERENCES
[1] I. Adler, G. Gottlob, and M. Grohe. Hypertree-width and related hypergraph invariants. In Proc. of EUROCOMB'05, pages 5-10, 2005.
[2] C. Boutilier. Solving concisely expressed combinatorial auction problems. In Proc. of AAAI'02, pages 359-366, 2002.
[3] V. Conitzer, J. Derryberry, and T. Sandholm. Combinatorial auctions with structured item graphs. In Proc. of AAAI'04, pages 212-218, 2004.
[4] E. M. Eschen and J. P. Spinrad. An O(n^2) algorithm for circular-arc graph recognition. In Proc. of SODA'93, pages 128-137, 1993.
[5] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proc. of IJCAI'99, pages 548-553, 1999.
[6] G. Gottlob, N. Leone, and F. Scarcello. Robbers, marshals, and guards: game theoretic and logical characterizations of hypertree width. Journal of Computer and System Sciences, 66(4):775-808, 2003.
[7] G. Gottlob, N. Leone, and F. Scarcello. Hypertree decompositions and tractable queries. Journal of Computer and System Sciences, 64(3):579-627, 2002.
[8] H. H. Hoos and C. Boutilier. Solving combinatorial auctions using stochastic local search. In Proc. of AAAI'00, pages 22-29, 2000.
[9] D. S. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, pages 67-161. 1990.
[10] N. Korte and R. H. Möhring. An incremental linear-time algorithm for recognizing interval graphs. SIAM Journal on Computing, 18(1):68-81, 1989.
[11] D. Lehmann, R. Müller, and T. Sandholm. The winner determination problem. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions. MIT Press, 2006.
[12] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. J. ACM, 49(5):577-602, 2002.
[13] R. McAfee and J. McMillan. Analyzing the airwaves auction. Journal of Economic Perspectives, 10(1):159-175, 1996.
[14] J. McMillan. Selling spectrum rights. Journal of Economic Perspectives, 8(3):145-162, 1994.
[15] N. Nisan. Bidding and allocation in combinatorial auctions. In Proc. of EC'00, pages 1-12, 2000.
[16] N. Robertson and P. Seymour. Graph Minors. II. Algorithmic aspects of tree-width. Journal of Algorithms, 7:309-322, 1986.
[17] M. H. Rothkopf, A. Pekeč, and R. M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44:1131-1147, 1998.
[18] T. Sandholm. An implementation of the contract net protocol based on marginal cost calculations. In Proc. of AAAI'93, pages 256-262, 1993.
[19] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135(1-2):1-54, 2002.
[20] T. Sandholm. Winner determination algorithms. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions. MIT Press, 2006.
[21] T. Sandholm and S. Suri. BOB: Improved winner determination in combinatorial auctions and generalizations. Artificial Intelligence, 145(1-2):33-58, 2003.
[22] M. Tennenholtz. Some tractable combinatorial auctions. In Proc. of AAAI'00, pages 98-103, 2000.
[23] E. Zurel and N.
Nisan.\nAn efficient approximate allocation algorithm for combinatorial auctions.\nIn Proc.\nof EC``01, pages 125-136, 2001.\n161","lvl-3":"On The Complexity of Combinatorial Auctions: Structured Item Graphs and Hypertree Decompositions\nABSTRACT\nThe winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices.\nWhile this problem is in general NPhard, it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth (called structured item graphs).\nFormally, an item graph is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph.\nNote that many item graphs might be associated with a given combinatorial auction, depending on the edges selected for guaranteeing the connectedness.\nIn fact, the tractability of determining whether a structured item graph of a fixed treewidth exists (and if so, computing one) was left as a crucial open problem.\nIn this paper, we solve this problem by proving that the existence of a structured item graph is computationally intractable, even for treewidth 3.\nMotivated by this bad news, we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions.\nWe show that the notion of hypertree decomposition, a recently introduced measure of hypergraph cyclicity, turns out to be most useful here.\nIndeed, we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with (dual) hypergraphs having bounded hypertree width.\nEven more surprisingly, we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph.\n1.\nINTRODUCTION\nCombinatorial 
auctions.\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items.\nThis is desirable when a bidder's valuation of a bundle of items is not equal to the sum of her valuations of the individual items.\nThis framework is currently used to regulate agents' interactions in several application domains (cf., e.g., [21]) such as, electricity markets [13], bandwidth auctions [14], and transportation exchanges [18].\nFormally, a combinatorial auction is a pair (Z, B), where Z = {I1,..., Im} is the set of items the auctioneer has to sell, and B = {B1,..., Bn} is the set of bids from the buyers interested in the items in Z. Each bid Bi has the form (item (Bi), pay (Bi)), where pay (Bi) is a rational number denoting the price a buyer offers for the items in item (Bi) C Z.\nAn outcome for (Z, B) is a subset b of B such that item (Bi) n item (Bj) = 0, for each pair Bi and Bj of bids in b with i = ~ j.\nThe winner determination problem.\nA crucial problem for combinatorial auctions is to determine the outcome b \u2217 that maximizes the sum of the accepted bid prices (i.e.,\nBi \u2208 b \u2217 pay (Bi)) over all the possible outcomes.\nThis problem, called winner determination problem (e.g., [11]), is known to be intractable, actually NP-hard [17], and even not approximable in polynomial time unless NP = ZPP [19].\nHence, it comes with no surprise that several efforts have been spent to design practically efficient algorithms for general auctions (e.g., [20, 5, 2, 8, 23]) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time (e.g., [15, 22, 12, 21]).\nIn fact, constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions.\nItem graphs.\nCurrently, the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders 
with the notion of item graph, which is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any\nFigure 1: Example MaxWSP problem: (a) Hypergraph H (To, go), and a packing h for it; (b) Primal graph for H (To, go); and, (c, d) Two item graphs for H (To, go).\nbid, the items occurring in it induce a connected subgraph.\nIndeed, the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph, i.e., a tree or, more generally, a graph having tree-like structure [3]--formally bounded treewidth [16].\nTo have some intuition on how item graphs can be built, we notice that bidder interaction in a combinatorial auction ~ I, B ~ can be represented by means of a hypergraph H (T, g) such that its set of nodes N (H (T, g)) coincides with set of items I, and where its edges E (H (T, g)) are precisely the bids of the buyers {item (Bi) | Bi \u2208 B}.\nA special item graph for ~ I, B ~ is the primal graph of H (T, g), denoted by G (H (T, g)), which contains an edge between any pair of nodes in some hyperedge of H (T, g).\nThen, any item graph for H (T, g) can be viewed as a simplification of G (H (T, g)) obtained by deleting some edges, yet preserving the connectivity condition on the nodes included in each hyperedge.\nEXAMPLE 1.\nThe hypergraph H (To, go) reported in Figure 1.\n(a) is an encoding for a combinatorial auction ~ I0, B0 ~, where I0 = {I1,..., I5}, and item (Bi) = hi, for each 1 \u2264 i \u2264 3.\nThe primal graph for H (To, go) is reported in\nFigure 1.\n(b), while two example item graphs are reported in Figure 1.\n(c) and (d), where edges required for maintaining\nthe connectivity for h1 are depicted in bold.\n<\nOpen Problem: Computing structured item\ngraphs efficiently.\nThe above mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or 
can be efficiently determined.\nHowever, exponentially many item graphs might be associated with a combinatorial auction, and it is not clear how to determine whether a structured item graph of a certain (constant) treewidth exists, and if so, how to compute such a structured item graph efficiently.\nPolynomial time algorithms to find the \"best\" simplification of the primal graph were so far only known for the cases where the item graph to be constructed is a line [10], a cycle [4], or a tree [3], but it was an important open problem (cf. [3]) whether it is tractable to check if for a combinatorial auction, an item graph of treewidth bounded by a fixed natural number k exists and can be constructed in polynomial time, if so.\nWeighted Set Packing.\nLet us note that the hypergraph representation H (T, g) of a combinatorial auction ~ I, B ~ is also useful to make the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs clear (e.g., [17]).\nFormally, a packing h for a hypergraph H is a set of hyperedges of H such that for each pair h, h' \u2208 h with h = ~ h', it holds that h \u2229 h' = \u2205.\nLetting w be a weighting function for H, i.e., a polynomially-time computable function from E (H) to rational numbers, the weight of a packing h is the rational number w (h) = EhCh w (h), where w ({}) = 0.\nThen, the maximum-weighted set packing problem for H w.r.t. w, denoted by MaxWSP (H, w), is the problem of finding a packing for H having the maximum weight over all the packings for H. To see that MaxWSP is just a different formulation for the winner determination problem, given a combinatorial auction ~ I, B ~, it is sufficient to define the weighting function w (T, g) (item (Bi)) = pay (Bi).\nThen, the set of the solutions for the weighted set packing problem for H (T, g) w.r.t. 
w (T, g) coincides with the set of the solutions for the winner determination problem on ~ I, B ~.\nEXAMPLE 2.\nConsider again the hypergraph H (To, go) reported in Figure 1.\n(a).\nAn example packing for H (To, go) is h = {h1}, which intuitively corresponds to an outcome for ~ I0, B0 ~, where the auctioneer accepted the bid B1.\nBy assuming that bids B1, B2, and B3 are such that pay (B1) = pay (B2) = pay (B3), the packing h is not a solution for the problem MaxWSP (H (To, go), w (To, go)).\nIndeed, the packing\nContributions\nThe primary aim of this paper is to identify large tractable classes for the winner determination problem, that are, moreover polynomially recognizable.\nTowards this aim, we first study structured item graphs and solve the open problem in [3].\nThe result is very bad news: \u25ba It is NP complete to check whether a combinatorial auction has a structured item graph of treewidth 3.\nMore formally, letting C (ig, k) denote the class of all the hypergraphs having an item tree of treewidth bounded by k, we prove that deciding whether a hypergraph (associated with a combinatorial auction problem) belongs to C (ig, 3) is NP-complete.\nIn the light of this result, it was crucial to assess whether there are some other kinds of structural requirement that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weightedset packing problem or, equivalently, the winner determination problem.\nOur investigations, this time, led to very good news which are summarized below:\n\u25ba For a hypergraph H, its dual H \u00af = (V, E) is such that nodes in V are in one-to-one correspondence with hyperedges in H, and for each node x \u2208 N (H), {h | x \u2208 h \u2227 h \u2208\nE (H)} is in E.\nWe show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width [7] bounded by k (short: class C (hw, k) of hypergraphs).\nNote that a key issue of the tractability is to 
consider the hypertree width of the dual hypergraph H \u00af instead of the auction hypergraph H.\nIn fact, we can show that MaxWSP remains NP-hard even when H is acyclic (i.e., when it has hypertree width 1), even when each node is contained in 3 hyperedges at most.\n\u25ba For some relevant special classes of hypergraphs in C (hw, k), we design a higly-parallelizeable algorithm for MaxWSP.\nSpecifically, if the weighting functions can be computed in logarithmic space and weights are polynomial (e.g., when all the hyperegdes have unitary weights and one is interested in finding the packing with the maximum number of edges), we show that MaxWSP can be solved by a LOGCFL algorithm.\nRecall, in fact, that LOGCFL is the class of decision problems that are logspace reducible to context free languages, and that LOGCFL C _ NC2 C _ P (see, e.g., [9]).\n\u25ba Surprisingly, we show that nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs.\nTo the contrary, the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs.\nIn fact, we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach.\nIntuitively, the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality, but rather to some peculiarities in its definition.\n\u25ba The proof of the above results give us some interesting insight into the notion of structured item graph.\nIndeed, we show that structured item graphs are in one-to-one correspondence with some special kinds of hypertree decomposition of the dual hypergraph, which we call strict hypertree decompositions.\nA game-characterization for the notion of strict hypertree width is also proposed, which specializes the Robber and Marshals game in [6] (proposed to characterize the hypertree 
width), and which makes it clear the further requirements on hypertree decompositions.\nThe rest of the paper is organized as follows.\nSection 2 discusses the intractability of structured item graphs.\nSection 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width, and discusses the cases where the algorithm is also highly parallelizable.\nThe comparison between the classes C (ig, k) and C (hw, k) is discussed in Section 4.\nFinally, in Section 5 we draw our conclusions by also outlining directions for further research.\n2.\nCOMPLEXITY OF STRUCTURED ITEM GRAPHS\nConnectedness between blocks,\n3.\nTRACTABLE CASES VIA HYPERTREE DECOMPOSITIONS\n3.1 Hypertree Decomposition on the Dual Hypergraph and Tractable Packing Problems\n4.\nHYPERTREE DECOMPOSITIONS VS STRUCTURED ITEM GRAPHS\n4.1 Strict Hypertree Decompositions\n5.\nCONCLUSIONS\nWe have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario.\nThe result is bad news, since it turned out that it is NP-complete to check whether a combinatorial auction has a structured item graph, even for treewidth 3.\nMotivated by this result, we investigated the use of hypertree decomposition (on the dual hypergraph associated with the scenario) and we shown that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width.\nFor some special, yet relevant cases, a highly parallelizable algorithm is also discussed.\nInterestingly, it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width (hence, the reason of their intractability is not their generality).\nIn particular, the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions (on the dual 
hypergraph), called query decompositions (see, e.g., [7]).\nIn the light of this observation, we note that proving some approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions, which is currently missing in the literature.\nAs a further avenue of research, it would be relevant to enhance the algorithm ComputeSetPackingk, e.g., by using specialized data structures, in order to avoid the quadratic dependency from (| E (H) | + 1) k. Finally, an other interesting question is to assess whether the structural decomposition techniques discussed in the paper can be used to efficiently deal with generalizations of the winner determination problem.\nFor instance, it might be relevant in several application scenarios to design algorithms that can find a selling strategy when several copies of the same item are available for selling, and when moreover the auctioneer is satisfied when at least a given number of copies is actually sold.","lvl-4":"On The Complexity of Combinatorial Auctions: Structured Item Graphs and Hypertree Decompositions\nABSTRACT\nThe winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices.\nWhile this problem is in general NPhard, it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth (called structured item graphs).\nFormally, an item graph is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph.\nNote that many item graphs might be associated with a given combinatorial auction, depending on the edges selected for guaranteeing the connectedness.\nIn fact, the tractability of determining whether a structured item graph of a fixed treewidth exists (and if so, computing one) was left as a crucial open 
problem.\nIn this paper, we solve this problem by proving that the existence of a structured item graph is computationally intractable, even for treewidth 3.\nMotivated by this bad news, we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions.\nWe show that the notion of hypertree decomposition, a recently introduced measure of hypergraph cyclicity, turns out to be most useful here.\nIndeed, we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with (dual) hypergraphs having bounded hypertree width.\nEven more surprisingly, we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph.\n1.\nINTRODUCTION\nCombinatorial auctions.\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items.\nThis is desirable when a bidder's valuation of a bundle of items is not equal to the sum of her valuations of the individual items.\nAn outcome for (Z, B) is a subset b of B such that item (Bi) n item (Bj) = 0, for each pair Bi and Bj of bids in b with i = ~ j.\nThe winner determination problem.\nA crucial problem for combinatorial auctions is to determine the outcome b \u2217 that maximizes the sum of the accepted bid prices (i.e.,\nBi \u2208 b \u2217 pay (Bi)) over all the possible outcomes.\nThis problem, called winner determination problem (e.g., [11]), is known to be intractable, actually NP-hard [17], and even not approximable in polynomial time unless NP = ZPP [19].\nHence, it comes with no surprise that several efforts have been spent to design practically efficient algorithms for general auctions (e.g., [20, 5, 2, 8, 23]) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time 
(e.g., [15, 22, 12, 21]).\nIn fact, constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions.\nItem graphs.\nCurrently, the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders with the notion of item graph, which is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any\nFigure 1: Example MaxWSP problem: (a) Hypergraph H (To, go), and a packing h for it; (b) Primal graph for H (To, go); and, (c, d) Two item graphs for H (To, go).\nbid, the items occurring in it induce a connected subgraph.\nIndeed, the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph, i.e., a tree or, more generally, a graph having tree-like structure [3]--formally bounded treewidth [16].\nTo have some intuition on how item graphs can be built, we notice that bidder interaction in a combinatorial auction ~ I, B ~ can be represented by means of a hypergraph H (T, g) such that its set of nodes N (H (T, g)) coincides with set of items I, and where its edges E (H (T, g)) are precisely the bids of the buyers {item (Bi) | Bi \u2208 B}.\nA special item graph for ~ I, B ~ is the primal graph of H (T, g), denoted by G (H (T, g)), which contains an edge between any pair of nodes in some hyperedge of H (T, g).\nThen, any item graph for H (T, g) can be viewed as a simplification of G (H (T, g)) obtained by deleting some edges, yet preserving the connectivity condition on the nodes included in each hyperedge.\nEXAMPLE 1.\nThe hypergraph H (To, go) reported in Figure 1.\n(a) is an encoding for a combinatorial auction ~ I0, B0 ~, where I0 = {I1,..., I5}, and item (Bi) = hi, for each 1 \u2264 i \u2264 3.\nThe primal graph for H (To, go) is reported in\nFigure 1.\n(b), while two example item graphs are reported in Figure 1.\n(c) and (d), where 
edges required for maintaining\nthe connectivity for h1 are depicted in bold.\n<\nOpen Problem: Computing structured item\ngraphs efficiently.\nThe above mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or can be efficiently determined.\nHowever, exponentially many item graphs might be associated with a combinatorial auction, and it is not clear how to determine whether a structured item graph of a certain (constant) treewidth exists, and if so, how to compute such a structured item graph efficiently.\nWeighted Set Packing.\nLet us note that the hypergraph representation H (T, g) of a combinatorial auction ~ I, B ~ is also useful to make the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs clear (e.g., [17]).\nFormally, a packing h for a hypergraph H is a set of hyperedges of H such that for each pair h, h' \u2208 h with h = ~ h', it holds that h \u2229 h' = \u2205.\nThen, the set of the solutions for the weighted set packing problem for H (T, g) w.r.t. 
w (T, g) coincides with the set of the solutions for the winner determination problem on ~ I, B ~.\nEXAMPLE 2.\nConsider again the hypergraph H (To, go) reported in Figure 1.\n(a).\nAn example packing for H (To, go) is h = {h1}, which intuitively corresponds to an outcome for ~ I0, B0 ~, where the auctioneer accepted the bid B1.\nIndeed, the packing\nContributions\nThe primary aim of this paper is to identify large tractable classes for the winner determination problem, that are, moreover polynomially recognizable.\nTowards this aim, we first study structured item graphs and solve the open problem in [3].\nThe result is very bad news: \u25ba It is NP complete to check whether a combinatorial auction has a structured item graph of treewidth 3.\nMore formally, letting C (ig, k) denote the class of all the hypergraphs having an item tree of treewidth bounded by k, we prove that deciding whether a hypergraph (associated with a combinatorial auction problem) belongs to C (ig, 3) is NP-complete.\nIn the light of this result, it was crucial to assess whether there are some other kinds of structural requirement that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weightedset packing problem or, equivalently, the winner determination problem.\nE (H)} is in E.\nWe show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width [7] bounded by k (short: class C (hw, k) of hypergraphs).\nNote that a key issue of the tractability is to consider the hypertree width of the dual hypergraph H \u00af instead of the auction hypergraph H.\nIn fact, we can show that MaxWSP remains NP-hard even when H is acyclic (i.e., when it has hypertree width 1), even when each node is contained in 3 hyperedges at most.\n\u25ba For some relevant special classes of hypergraphs in C (hw, k), we design a higly-parallelizeable algorithm for MaxWSP.\nRecall, in fact, that LOGCFL is the class of decision 
problems that are logspace reducible to context-free languages, and that LOGCFL \u2286 NC2 \u2286 P (see, e.g., [9]).\n\u25ba Surprisingly, we show that nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs.\nTo the contrary, the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs.\nIn fact, we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach.\nIntuitively, the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality, but rather to some peculiarities in its definition.\n\u25ba The proofs of the above results give us some interesting insight into the notion of structured item graph.\nIndeed, we show that structured item graphs are in one-to-one correspondence with some special kinds of hypertree decomposition of the dual hypergraph, which we call strict hypertree decompositions.\nThe rest of the paper is organized as follows.\nSection 2 discusses the intractability of structured item graphs.\nSection 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width, and discusses the cases where the algorithm is also highly parallelizable.\nThe comparison between the classes C (ig, k) and C (hw, k) is discussed in Section 4.\nFinally, in Section 5 we draw our conclusions by also outlining directions for further research.\n5.\nCONCLUSIONS\nWe have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario.\nThe result is bad news, since it turned out that it is NP-complete to check whether a combinatorial auction has a structured item graph, even for treewidth 3.\nMotivated by this result, we investigated the use of hypertree
decomposition (on the dual hypergraph associated with the scenario) and we have shown that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width.\nFor some special, yet relevant cases, a highly parallelizable algorithm is also discussed.\nInterestingly, it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width (hence, the reason for their intractability is not their generality).\nIn particular, the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions (on the dual hypergraph), called query decompositions (see, e.g., [7]).\nIn the light of this observation, we note that proving some approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions, which is currently missing in the literature.","lvl-2":"On The Complexity of Combinatorial Auctions: Structured Item Graphs and Hypertree Decompositions\nABSTRACT\nThe winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices.\nWhile this problem is in general NP-hard, it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth (called structured item graphs).\nFormally, an item graph is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph.\nNote that many item graphs might be associated with a given combinatorial auction, depending on the edges selected for guaranteeing the connectedness.\nIn fact, the tractability of determining whether a structured item graph of a fixed treewidth exists (and if so, computing one) was left as a crucial open problem.\nIn this paper, we
solve this problem by proving that deciding the existence of a structured item graph is computationally intractable, even for treewidth 3.\nMotivated by this bad news, we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions.\nWe show that the notion of hypertree decomposition, a recently introduced measure of hypergraph cyclicity, turns out to be most useful here.\nIndeed, we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with (dual) hypergraphs having bounded hypertree width.\nEven more surprisingly, we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph.\n1.\nINTRODUCTION\nCombinatorial auctions.\nCombinatorial auctions are well-known mechanisms for resource and task allocation where bidders are allowed to simultaneously bid on combinations of items.\nThis is desirable when a bidder's valuation of a bundle of items is not equal to the sum of her valuations of the individual items.\nThis framework is currently used to regulate agents' interactions in several application domains (cf., e.g., [21]), such as electricity markets [13], bandwidth auctions [14], and transportation exchanges [18].\nFormally, a combinatorial auction is a pair \u27e8I, B\u27e9, where I = {I1,..., Im} is the set of items the auctioneer has to sell, and B = {B1,..., Bn} is the set of bids from the buyers interested in the items in I.
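The winner determination problem sketched in the abstract can be made concrete with a small brute-force example. The bundles and prices below are illustrative assumptions (the running example in the paper fixes them only in a figure); the search enumerates all subsets of bids, keeps the feasible outcomes (no item sold twice), and returns the best one.

```python
from itertools import combinations

# Illustrative instance: five items I1..I5 and three bids.
# Bundles and prices are assumptions for demonstration only.
bids = [
    (frozenset({"I1", "I2", "I3"}), 10),  # B1: (item(B1), pay(B1))
    (frozenset({"I3", "I4"}), 6),         # B2
    (frozenset({"I5"}), 5),               # B3
]

def is_outcome(selected):
    """An outcome: the selected bids must have pairwise disjoint bundles."""
    seen = set()
    for bundle, _ in selected:
        if seen & bundle:
            return False
        seen |= bundle
    return True

def winner_determination(bids):
    """Exhaustive search for the outcome maximizing the sum of bid prices.
    Exponential in the number of bids; the paper studies exactly when this
    blow-up can be avoided via structural restrictions."""
    best, best_pay = (), 0
    for r in range(len(bids) + 1):
        for selected in combinations(bids, r):
            if is_outcome(selected):
                pay = sum(p for _, p in selected)
                if pay > best_pay:
                    best, best_pay = selected, pay
    return best, best_pay

outcome, revenue = winner_determination(bids)  # accepts B1 and B3, revenue 15
```

Here B1 and B2 conflict on item I3, so the optimal outcome accepts B1 and B3.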
Each bid Bi has the form (item (Bi), pay (Bi)), where pay (Bi) is a rational number denoting the price a buyer offers for the items in item (Bi) \u2286 I.\nAn outcome for \u27e8I, B\u27e9 is a subset b of B such that item (Bi) \u2229 item (Bj) = \u2205, for each pair Bi and Bj of bids in b with i \u2260 j.\nThe winner determination problem.\nA crucial problem for combinatorial auctions is to determine the outcome b \u2217 that maximizes the sum of the accepted bid prices (i.e., \u2211Bi\u2208b\u2217 pay (Bi)) over all the possible outcomes.\nThis problem, called winner determination problem (e.g., [11]), is known to be intractable, actually NP-hard [17], and even not approximable in polynomial time unless NP = ZPP [19].\nHence, it comes as no surprise that several efforts have been spent to design practically efficient algorithms for general auctions (e.g., [20, 5, 2, 8, 23]) and to identify classes of instances where solving the winner determination problem is feasible in polynomial time (e.g., [15, 22, 12, 21]).\nIn fact, constraining bidder interaction was proven to be useful for identifying classes of tractable combinatorial auctions.\nItem graphs.\nCurrently, the most general class of tractable combinatorial auctions has been singled out by modelling interactions among bidders with the notion of item graph, which is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any\nFigure 1: Example MaxWSP problem: (a) Hypergraph H(I0, B0), and a packing h for it; (b) Primal graph for H(I0, B0); and, (c, d) Two item graphs for H(I0, B0).\nbid, the items occurring in it induce a connected subgraph.\nIndeed, the winner determination problem was proven to be solvable in polynomial time if interactions among bidders can be represented by means of a structured item graph, i.e., a tree or, more generally, a graph having tree-like structure [3]--formally, bounded treewidth [16].\nTo have some intuition on how item graphs can be built, we notice that bidder
interaction in a combinatorial auction \u27e8I, B\u27e9 can be represented by means of a hypergraph H(I, B) such that its set of nodes N (H(I, B)) coincides with the set of items I, and where its edges E (H(I, B)) are precisely the bids of the buyers {item (Bi) | Bi \u2208 B}.\nA special item graph for \u27e8I, B\u27e9 is the primal graph of H(I, B), denoted by G (H(I, B)), which contains an edge between any pair of nodes occurring together in some hyperedge of H(I, B).\nThen, any item graph for H(I, B) can be viewed as a simplification of G (H(I, B)) obtained by deleting some edges, yet preserving the connectivity condition on the nodes included in each hyperedge.\nEXAMPLE 1.\nThe hypergraph H(I0, B0) reported in Figure 1(a) is an encoding for a combinatorial auction \u27e8I0, B0\u27e9, where I0 = {I1,..., I5}, and item (Bi) = hi, for each 1 \u2264 i \u2264 3.\nThe primal graph for H(I0, B0) is reported in Figure 1(b), while two example item graphs are reported in Figure 1(c) and (d), where edges required for maintaining the connectivity for h1 are depicted in bold.\nOpen Problem: Computing structured item graphs efficiently.\nThe above-mentioned tractability result on structured item graphs turns out to be useful in practice only when a structured item graph either is given or can be efficiently determined.\nHowever, exponentially many item graphs might be associated with a combinatorial auction, and it is not clear how to determine whether a structured item graph of a certain (constant) treewidth exists, and if so, how to compute such a structured item graph efficiently.\nPolynomial-time algorithms to find the \"best\" simplification of the primal graph were so far only known for the cases where the item graph to be constructed is a line [10], a cycle [4], or a tree [3], but it was an important open problem (cf.
[3]) whether it is tractable to check if, for a combinatorial auction, an item graph of treewidth bounded by a fixed natural number k exists, and to construct such an item graph in polynomial time, if so.\nWeighted Set Packing.\nLet us note that the hypergraph representation H(I, B) of a combinatorial auction \u27e8I, B\u27e9 is also useful to make clear the analogy between the winner determination problem and the maximum weighted-set packing problem on hypergraphs (e.g., [17]).\nFormally, a packing h for a hypergraph H is a set of hyperedges of H such that for each pair h, h' \u2208 h with h \u2260 h', it holds that h \u2229 h' = \u2205.\nLetting w be a weighting function for H, i.e., a polynomial-time computable function from E (H) to the rational numbers, the weight of a packing h is the rational number w (h) = \u2211h\u2208h w (h), where w (\u2205) = 0.\nThen, the maximum-weighted set packing problem for H w.r.t. w, denoted by MaxWSP (H, w), is the problem of finding a packing for H having the maximum weight over all the packings for H. To see that MaxWSP is just a different formulation for the winner determination problem, given a combinatorial auction \u27e8I, B\u27e9, it is sufficient to define the weighting function w(I, B) (item (Bi)) = pay (Bi).\nThen, the set of the solutions for the weighted set packing problem for H(I, B) w.r.t.
w(I, B) coincides with the set of the solutions for the winner determination problem on \u27e8I, B\u27e9.\nEXAMPLE 2.\nConsider again the hypergraph H(I0, B0) reported in Figure 1(a).\nAn example packing for H(I0, B0) is h = {h1}, which intuitively corresponds to an outcome for \u27e8I0, B0\u27e9, where the auctioneer accepted the bid B1.\nBy assuming that bids B1, B2, and B3 are such that pay (B1) = pay (B2) = pay (B3), the packing h is not a solution for the problem MaxWSP (H(I0, B0), w(I0, B0)).\nIndeed, the packing\nContributions\nThe primary aim of this paper is to identify large tractable classes for the winner determination problem that are, moreover, polynomially recognizable.\nTowards this aim, we first study structured item graphs and solve the open problem in [3].\nThe result is very bad news: \u25ba It is NP-complete to check whether a combinatorial auction has a structured item graph of treewidth 3.\nMore formally, letting C (ig, k) denote the class of all the hypergraphs having an item graph of treewidth bounded by k, we prove that deciding whether a hypergraph (associated with a combinatorial auction problem) belongs to C (ig, 3) is NP-complete.\nIn the light of this result, it was crucial to assess whether there are some other kinds of structural requirement that can be checked in polynomial time and that can still be used to isolate tractable classes of the maximum weighted-set packing problem or, equivalently, the winner determination problem.\nOur investigations, this time, led to very good news, which is summarized below:\n\u25ba For a hypergraph H, its dual H \u00af = (V, E) is such that nodes in V are in one-to-one correspondence with hyperedges in H, and for each node x \u2208 N (H), {h | x \u2208 h \u2227 h \u2208
consider the hypertree width of the dual hypergraph H \u00af instead of the auction hypergraph H.\nIn fact, we can show that MaxWSP remains NP-hard even when H is acyclic (i.e., when it has hypertree width 1), and even when each node is contained in 3 hyperedges at most.\n\u25ba For some relevant special classes of hypergraphs in C (hw, k), we design a highly parallelizable algorithm for MaxWSP.\nSpecifically, if the weighting functions can be computed in logarithmic space and weights are polynomial (e.g., when all the hyperedges have unitary weights and one is interested in finding the packing with the maximum number of edges), we show that MaxWSP can be solved by a LOGCFL algorithm.\nRecall, in fact, that LOGCFL is the class of decision problems that are logspace reducible to context-free languages, and that LOGCFL \u2286 NC2 \u2286 P (see, e.g., [9]).\n\u25ba Surprisingly, we show that nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs.\nTo the contrary, the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs.\nIn fact, we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach.\nIntuitively, the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality, but rather to some peculiarities in its definition.\n\u25ba The proofs of the above results give us some interesting insight into the notion of structured item graph.\nIndeed, we show that structured item graphs are in one-to-one correspondence with some special kinds of hypertree decomposition of the dual hypergraph, which we call strict hypertree decompositions.\nA game characterization for the notion of strict hypertree width is also proposed, which specializes the Robber and Marshals game in [6] (proposed to characterize the hypertree
width), and which makes clear the further requirements imposed on hypertree decompositions.\nThe rest of the paper is organized as follows.\nSection 2 discusses the intractability of structured item graphs.\nSection 3 presents the polynomial-time algorithm for solving MaxWSP on the class of those instances whose dual hypergraphs have bounded hypertree width, and discusses the cases where the algorithm is also highly parallelizable.\nThe comparison between the classes C (ig, k) and C (hw, k) is discussed in Section 4.\nFinally, in Section 5 we draw our conclusions by also outlining directions for further research.\n2.\nCOMPLEXITY OF STRUCTURED ITEM GRAPHS\nLet H be a hypergraph.\nA graph G = (V, E) is an item graph for H if V = N (H) and, for each h \u2208 E (H), the subgraph of G induced over the nodes in h is connected.\nAn important class of item graphs is that of structured item graphs, i.e., of those item graphs having bounded treewidth, as formalized below.\nA tree decomposition [16] of a graph G = (V, E) is a pair (T, \u03c7), where T = (N, F) is a tree, and \u03c7 is a labelling function assigning to each vertex p \u2208 N a set of vertices \u03c7 (p) \u2286 V, such that the following conditions are satisfied: (1) for each vertex b of G, there exists p \u2208 N such that b \u2208 \u03c7 (p); (2) for each edge {b, d} \u2208 E, there exists p \u2208 N such that {b, d} \u2286 \u03c7 (p); (3) for each vertex b of G, the set {p \u2208 N | b \u2208 \u03c7 (p)} induces a connected subtree of T.\nThe width of (T, \u03c7) is the number maxp\u2208N |\u03c7 (p)| \u2212 1.\nThe treewidth of G, denoted by tw (G), is the minimum width over all its tree decompositions.\nThe winner determination problem can be solved in polynomial time on item graphs having bounded treewidth [3].\nTHEOREM 1 (CF.
[3]).\nAssume a k-width tree decomposition (T, \u03c7) of an item graph for H is given.\nThen, MaxWSP (H, w) can be solved in time O (|T|^2 \u00d7 (|E (H)| + 1)^(k+1)).\nMany item graphs can be associated with a hypergraph.\nAs an example, observe that the item graph in Figure 1(c) has treewidth 1, while Figure 1(d) reports an item graph whose treewidth is 2.\nIndeed, it was an open question whether for a given constant k it can be checked in polynomial time if an item graph of treewidth k exists, and if so, whether such an item graph can be efficiently computed.\nLet C (ig, k) denote the class of all the hypergraphs having an item graph G such that tw (G) \u2264 k. [...] TR2 > TR1.\nNow we want to compare the time to answer a stream of Q queries in both cases.\nLet Vc(Nc) be the volume of the most frequent Nc queries.\nThen, for case (A), we have an overall time TCA = Vc(Nc) + TR2(Q \u2212 Vc(Nc)).\nSimilarly, for case (B), let Vp(Np) be the number of computable queries.\nThen we have overall time TPL = TR1\u00b7Vp(Np) + TR2(Q \u2212 Vp(Np)).\nWe want to check under which conditions we have TPL < TCA.\nWe have TPL \u2212 TCA = (TR2 \u2212 1)Vc(Nc) \u2212 (TR2 \u2212 TR1)Vp(Np), which is negative exactly when (TR2 \u2212 TR1)Vp(Np) > (TR2 \u2212 1)Vc(Nc).\nFigure 9 shows the values of Vp and Vc for our data.\nWe can see that caching answers saturates faster, and for this particular data there is no additional benefit from using more than 10% of the index space for caching answers.\nAs the query distribution is a power law with parameter \u03b1 > 1, the i-th most frequent query appears with probability proportional to 1\/i^\u03b1.\nTherefore, the volume Vc(n), which is the total number of occurrences of the n most frequent queries, is Vc(n) = V0 \u2211i=1..n Q\/i^\u03b1 = \u03b3n\u00b7Q (0 < \u03b3n < 1).\nWe know that Vp(n) grows faster than Vc(n) and assume, based on experimental results, that the relation is of the form Vp(n) = k\u00b7Vc(n)^\u03b2.\nIn the worst case, for a large cache, \u03b2 \u2192 1.\nThat is, both techniques will cache a constant fraction of the overall query
volume.\nThen caching posting lists makes sense only if k(TR2 \u2212 TR1) \/ (L(TR2 \u2212 1)) > 1.\nIf we use compression, we have L' < L and TR'1 > TR1.\nAccording to the experiments that we show later, compression is always better.\nFor a small cache, we are interested in the transient behavior and then \u03b2 > 1, as computed from our data.\nIn this case there will always be a point where TPL > TCA for a large number of queries.\nIn reality, instead of filling the cache only with answers or only with posting lists, a better strategy is to divide the total cache space into cache for answers and cache for posting lists.\nIn such a case, there will be some queries that could be answered by both parts of the cache.\nAs the answer cache is faster, it will be the first choice for answering those queries.\nLet QNc and QNp be the sets of queries that can be answered by the cached answers and the cached posting lists, respectively.\nThen, the overall time is T = Vc(Nc) + TR1\u00b7V(QNp \u2212 QNc) + TR2(Q \u2212 V(QNp \u222a QNc)), where Np = (M \u2212 Nc)\/L.
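The combined-cache cost model above can be transcribed directly. The query volumes below are hypothetical stand-ins for quantities that would be measured on a real query log; the ratios TR1 = 99 and TR2 = 1626 are the partial-evaluation values reported later in Table 2.

```python
# Direct transcription of T = Vc(Nc) + TR1*V(QNp - QNc) + TR2*(Q - V(QNp u QNc)).
# All query volumes here are hypothetical; in the paper they come from a query log.
def overall_time(vc, v_pl_only, v_either, q, tr1, tr2):
    """vc: volume answered by the answer cache;
    v_pl_only: volume answerable only via cached posting lists;
    v_either: volume answerable by either cache;
    q: total query volume; tr1, tr2: relative evaluation costs."""
    return vc + tr1 * v_pl_only + tr2 * (q - v_either)

# Toy stream of 1000 queries: 300 hit the answer cache, a further 250 are
# computable from cached posting lists (so 550 are covered by either cache);
# the remaining 450 pay the full-evaluation cost TR2.
t = overall_time(vc=300, v_pl_only=250, v_either=550, q=1000, tr1=99, tr2=1626)
```

Sweeping the split between the two caches then amounts to recomputing the three volumes for each candidate value of Nc and picking the minimum of this quantity.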
Finding the optimal division of the cache in order to minimize the overall retrieval time is a difficult problem to solve analytically.\nIn Section 6.3 we use simulations to derive optimal cache trade-offs for particular implementation examples.\n6.2 Parameter Estimation\nWe now use a particular implementation of a centralized system and the model of a distributed system as examples from which we estimate the parameters of the analysis from the previous section.\nWe perform the experiments using an optimized version of Terrier [11] for both indexing documents and processing queries, on a single machine with a Pentium 4 at 2GHz and 1GB of RAM.\nWe indexed the documents from the UK-2006 dataset, without removing stop words or applying stemming.\nThe posting lists in the inverted file consist of pairs of document identifier and term frequency.\nWe compress the document identifier gaps using Elias gamma encoding, and the term frequencies in documents using unary encoding [16].\nFigure 9: Cache saturation as a function of size.\nTable 2: Ratios between the average time to evaluate a query and the average time to return cached answers (centralized and distributed case); primed ratios refer to the compressed index.\nCentralized system -- Full evaluation: TR1 = 233, TR2 = 1760, TR'1 = 707, TR'2 = 1140; Partial evaluation: TR1 = 99, TR2 = 1626, TR'1 = 493, TR'2 = 798.\nLAN system -- Full evaluation: TRL1 = 242, TRL2 = 1769, TR'L1 = 716, TR'L2 = 1149; Partial evaluation: TRL1 = 108, TRL2 = 1635, TR'L1 = 502, TR'L2 = 807.\nWAN system -- Full evaluation: TRW1 = 5001, TRW2 = 6528, TR'W1 = 5475, TR'W2 = 5908; Partial evaluation: TRW1 = 4867, TRW2 = 6394, TR'W1 = 5270, TR'W2 = 5575.\nThe size of the inverted file is 1,189Mb.\nA stored answer requires 1264 bytes, and an uncompressed posting takes 8 bytes.\nFrom Table 1, we obtain L = (8 \u00b7 # of postings) \/ (1264 \u00b7 # of terms) = 0.75 and L' = (inverted file size) \/ (1264 \u00b7 # of terms) = 0.26.\nWe estimate
the ratio TR = T\/Tc between the average time T it takes to evaluate a query and the average time Tc it takes to return a stored answer for the same query, in the following way.\nTc is measured by loading the answers for 100,000 queries in memory, and answering the queries from memory.\nThe average time is Tc = 0.069ms. T is measured by processing the same 100,000 queries (the first 10,000 queries are used to warm up the system).\nFor each query, we remove stop words if there are at least three remaining terms.\nThe stop words correspond to the terms with a frequency higher than the number of documents in the index.\nWe use a document-at-a-time approach to retrieve documents containing all query terms.\nThe only disk access required during query processing is for reading compressed posting lists from the inverted file.\nWe perform both full and partial evaluation of answers, because some queries are likely to retrieve a large number of documents, and only a fraction of the retrieved documents will be seen by users.\nIn the partial evaluation of queries, we terminate the processing after matching 10,000 documents.\nThe estimated ratios TR are presented in Table 2.\nFigure 10 shows, for a sample of queries, the workload of the system with partial query evaluation and compressed posting lists.\nThe x-axis corresponds to the total time the system spends processing a particular query, and the vertical axis corresponds to the sum \u2211t\u2208q fq \u00b7 fd(t).\nNotice that the total number of postings of the query terms does not necessarily provide an accurate estimate of the workload imposed on the system by a query (which is the case for full evaluation and uncompressed lists).\nFigure 10: Workload for partial query evaluation with compressed posting
lists.\nThe analysis of the previous section also applies to a distributed retrieval system in one or multiple sites.\nSuppose that a document-partitioned distributed system is running on a cluster of machines interconnected with a local area network (LAN) in one site.\nThe broker receives queries and broadcasts them to the query processors, which answer the queries and return the results to the broker.\nFinally, the broker merges the received answers and generates the final set of answers (we assume that the time spent on merging results is negligible).\nThe difference between the centralized architecture and the document partition architecture is the extra communication between the broker and the query processors.\nUsing ICMP pings on a 100Mbps LAN, we have measured that sending the query from the broker to the query processors, which send an answer of 4,000 bytes back to the broker, takes on average 0.615ms. Hence, TRL = TR + 0.615ms\/0.069ms = TR + 9.\nIn the case when the broker and the query processors are in different sites connected with a wide area network (WAN), we estimated that broadcasting the query from the broker to the query processors and getting back an answer of 4,000 bytes takes on average 329ms.\nHence, TRW = TR + 329ms\/0.069ms = TR + 4768.\n6.3 Simulation Results\nWe now address the problem of finding the optimal trade-off between caching query answers and caching posting lists.\nTo make the problem concrete, we assume a fixed budget M on the available memory, out of which x units are used for caching query answers and M \u2212 x for caching posting lists.\nWe perform simulations and compute the average response time as a function of x.
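A minimal stand-in for such a simulation can be sketched as follows. It uses a synthetic Zipf-like query stream, made-up posting-list sizes and relative costs, and a simple frequency-greedy term selection in place of the QtfDf algorithm, so every constant here is an assumption rather than one of the paper's measured values.

```python
import random
from collections import Counter

random.seed(0)

TR1, TR2 = 3, 20   # assumed relative costs: postings-cache hit vs. full evaluation
VOCAB = [f"t{i}" for i in range(200)]
WEIGHTS = [1.0 / (i + 1) for i in range(len(VOCAB))]   # Zipf-like term popularity
LIST_SIZE = {t: random.randint(1, 8) for t in VOCAB}   # assumed posting-list sizes

def make_query():
    return tuple(sorted(random.choices(VOCAB, weights=WEIGHTS, k=random.randint(1, 3))))

train = [make_query() for _ in range(5000)]
test = [make_query() for _ in range(5000)]

def avg_cost(x, M=100):
    """Average cost of the test stream when x cache units hold query answers
    (one unit each) and the remaining M - x units hold posting lists."""
    answers = {q for q, _ in Counter(train).most_common(x)}
    # frequency-greedy term selection (a crude stand-in for QtfDf)
    term_freq = Counter(t for q in train if q not in answers for t in q)
    cached_terms, left = set(), M - x
    for t, _ in term_freq.most_common():
        if LIST_SIZE[t] <= left:
            cached_terms.add(t)
            left -= LIST_SIZE[t]
    cost = 0
    for q in test:
        if q in answers:
            cost += 1                    # answer-cache hit
        elif all(t in cached_terms for t in q):
            cost += TR1                  # computable from cached posting lists
        else:
            cost += TR2                  # full evaluation
    return cost / len(test)

times = {x: avg_cost(x) for x in range(0, 101, 10)}
best_split = min(times, key=times.get)   # answer-cache share minimizing cost
```

Reading off the minimum of `times` corresponds to locating the lowest point of the response-time curves plotted in the figures of this section.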
Using a part of the query log as training data, we first allocate in the cache the answers to the most frequent queries that fit in space x, and then we use the rest of the memory to cache posting lists.\nFor selecting posting lists we use the QtfDf algorithm, applied to the training query log but excluding the queries that have already been cached.\nIn Figure 11, we plot the simulated response time for a centralized system as a function of x. For the uncompressed index we use M = 1GB, and for the compressed index we use M = 0.5GB.\nIn the case of the configuration that uses partial query evaluation with compressed posting lists, the lowest response time is achieved when 0.15GB out of the 0.5GB is allocated for storing answers for queries.\nWe obtained similar trends in the results for the LAN setting.\nFigure 11: Optimal division of the cache in a server.\nFigure 12 shows the simulated workload for a distributed system across a WAN.\nIn this case, the total amount of memory is split between the broker, which holds the cached answers of queries, and the query processors, which hold the cache of posting lists.\nFigure 12: Optimal division of the cache when the next level requires WAN access.\nAccording to the figure, the difference between the configurations of the query processors is less important because the network communication overhead increases the response time substantially.\nWhen using uncompressed posting lists, the optimal allocation of memory corresponds to using approximately 70% of the memory for caching query answers.\nThis
is explained by the fact that there is no need for network communication when the query can be answered by the cache at the broker.\n7.\nEFFECT OF THE QUERY DYNAMICS\nFor our query log, the query distribution and query-term distribution change slowly over time.\nTo support this claim, we first assess how topics change by comparing the distribution of queries from the first week in June, 2006, to the distribution of queries for the remainder of 2006 that did not appear in the first week in June.\nWe found that a very small percentage of queries are new queries.\nThe majority of queries that appear in a given week repeat in the following weeks for the next six months.\nWe then compute the hit rate of a static cache of 128,000 answers trained over a period of two weeks (Figure 13).\nWe report the hit rate hourly for 7 days, starting from 5pm.\nWe observe that the hit rate reaches its highest value during the night (around midnight), whereas around 2-3pm it reaches its minimum.\nAfter a small decay in hit rate values, the hit rate stabilizes between 0.28 and 0.34 for the entire week, suggesting that the static cache is effective for a whole week after the training period.\nFigure 13: Hourly hit rate for a static cache holding 128,000 answers during the period of a week.\nThe static cache of posting lists can be periodically recomputed.\nTo estimate the time interval in which we need to recompute the posting lists on the static cache we need to consider an efficiency\/quality trade-off: using too short a time interval might be prohibitively expensive, while recomputing the cache too infrequently might lead to having an obsolete cache not corresponding to the statistical characteristics of the current query stream.\nWe measured the effect on the QtfDf algorithm of the changes in a 15-week query stream (Figure 14).\nWe
compute the query term frequencies over the whole stream, select which terms to cache, and then compute the hit rate on the whole query stream.\nThis hit rate serves as an upper bound, as it assumes perfect knowledge of the query term frequencies.\nTo simulate a realistic scenario, we use the first 6 (3) weeks of the query stream for computing query term frequencies and the following 9 (12) weeks to estimate the hit rate.\nAs Figure 14 shows, the hit rate decreases by less than 2%.\nThe high correlation among the query term frequencies during different time periods explains the graceful adaptation of the static caching algorithms to the future query stream.\nIndeed, the pairwise correlation among all possible 3-week periods of the 15-week query stream is over 99.5%.\n8.\nCONCLUSIONS\nCaching is an effective technique in search engines for improving response time, reducing the load on query processors, and improving network bandwidth utilization.\nWe present results on both dynamic and static caching.\nDynamic caching of queries has limited effectiveness due to the high number of compulsory misses caused by the number of unique or infrequent queries.\nOur results show that in our UK log, the minimum miss rate is 50% using a working set strategy.\nCaching terms is more effective with respect to miss rate, achieving values as low as 12%.\nWe also propose a new algorithm for static caching of posting lists that outperforms previous static caching algorithms as well as dynamic algorithms such as LRU and LFU, obtaining hit rate values that are over 10% higher compared to these strategies.\nWe present a framework for the analysis of the trade-off between caching query results and caching posting lists, and we simulate different types of architectures.\nOur results show that for centralized and LAN environments, there is an optimal allocation of caching query results and caching of posting lists, while for WAN scenarios in which network time prevails it is more important to cache
query results.\nFigure 14: Impact of distribution changes on the static caching of posting lists.\n9.\nREFERENCES\n[1] V. N. Anh and A. Moffat.\nPruned query evaluation using pre-computed impacts.\nIn ACM CIKM, 2006.\n[2] R. A. Baeza-Yates and F. Saint-Jean.\nA three level search engine index based in query log distribution.\nIn SPIRE, 2003.\n[3] C. Buckley and A. F. Lewit.\nOptimization of inverted vector searches.\nIn ACM SIGIR, 1985.\n[4] S. B\u00fcttcher and C. L. A. Clarke.\nA document-centric approach to static index pruning in text retrieval systems.\nIn ACM CIKM, 2006.\n[5] P. Cao and S. Irani.\nCost-aware WWW proxy caching algorithms.\nIn USITS, 1997.\n[6] P. Denning.\nWorking sets past and present.\nIEEE Trans. on Software Engineering, SE-6(1):64-84, 1980.\n[7] T. Fagni, R. Perego, F. Silvestri, and S. Orlando.\nBoosting the performance of web search engines: Caching and prefetching query results by exploiting historical usage data.\nACM Trans. Inf. Syst., 24(1):51-78, 2006.\n[8] R. Lempel and S. Moran.\nPredictive caching and prefetching of query results in search engines.\nIn WWW, 2003.\n[9] X. Long and T. Suel.\nThree-level caching for efficient query processing in large web search engines.\nIn WWW, 2005.\n[10] E. P. Markatos.\nOn caching search engine query results.\nComputer Communications, 24(2):137-143, 2001.\n[11] I. Ounis, G. Amati, V. Plachouras, B. He, C. Macdonald, and C. Lioma.\nTerrier: A High Performance and Scalable Information Retrieval Platform.\nIn SIGIR Workshop on Open Source Information Retrieval, 2006.\n[12] V. V. Raghavan and H. Sever.\nOn the reuse of past optimal queries.\nIn ACM SIGIR, 1995.\n[13] P. C. Saraiva, E. S. de Moura, N. Ziviani, W. Meira, R. Fonseca, and B.
Riberio-Neto.\nRank-preserving two-level caching for scalable search engines.\nIn ACM SIGIR, 2001.\n[14] D. R. Slutz and I. L. Traiger.\nA note on the calculation of average working set size.\nCommunications of the ACM, 17(10):563-565, 1974.\n[15] T. Strohman, H. Turtle, and W. B. Croft.\nOptimization strategies for complex queries.\nIn ACM SIGIR, 2005.\n[16] I. H. Witten, T. C. Bell, and A. Moffat.\nManaging Gigabytes: Compressing and Indexing Documents and Images.\nJohn Wiley & Sons, Inc., NY, 1994.\n[17] N. E. Young.\nOn-line file caching.\nAlgorithmica, 33(3):371-383, 2002.","lvl-3":"The Impact of Caching on Search Engines\nABSTRACT\nIn this paper we study the trade-offs in designing efficient caching systems for Web search engines.\nWe explore the impact of different approaches, such as static vs. dynamic caching, and caching query results vs. caching posting lists.\nUsing a query log spanning a whole year we explore the limitations of caching and we demonstrate that caching posting lists can achieve higher hit rates than caching query answers.\nWe propose a new algorithm for static caching of posting lists, which outperforms previous methods.\nWe also study the problem of finding the optimal way to split the static cache between answers and posting lists.\nFinally, we measure how the changes in the query log affect the effectiveness of static caching, given our observation that the distribution of the queries changes slowly over time.\nOur results and observations are applicable to different levels of the data-access hierarchy, for instance, for a memory\/disk layer or a broker\/remote server layer.\n1.\nINTRODUCTION\nMillions of queries are submitted daily to Web search engines, and users have high expectations of the quality and\nspeed of the answers.\nAs the searchable Web becomes larger and larger, with more than 20 billion pages to index, evaluating a single query requires processing large amounts of data.\nIn such a setting, to achieve a fast response 
The Impact of Caching on Search Engines

ABSTRACT

In this paper we study the trade-offs in designing efficient caching systems for Web search engines. We explore the impact of different approaches, such as static vs. dynamic caching, and caching query results vs. caching posting lists. Using a query log spanning a whole year, we explore the limitations of caching and demonstrate that caching posting lists can achieve higher hit rates than caching query answers. We propose a new algorithm for static caching of posting lists, which outperforms previous methods. We also study the problem of finding the optimal way to split the static cache between answers and posting lists. Finally, we measure how changes in the query log affect the effectiveness of static caching, given our observation that the distribution of queries changes slowly over time. Our results and observations are applicable to different levels of the data-access hierarchy, for instance, to a memory/disk layer or a broker/remote-server layer.

1. INTRODUCTION

Millions of queries are submitted daily to Web search engines, and users have high expectations of the quality and speed of the answers. As the searchable Web becomes larger and larger, with more than 20 billion pages to index, evaluating a single query requires processing large amounts of data. In such a setting, to achieve a fast response time and to increase the query throughput, using a cache is crucial. The primary use of a cache memory
is to speed up computation by exploiting frequently or recently used data, although reducing the workload on back-end servers is also a major goal. Caching can be applied at different levels with increasing response latencies or processing requirements. For example, the different levels may correspond to the main memory, the disk, or resources in a local or a wide area network.

The decision of what to cache is made either off-line (static) or online (dynamic). A static cache is based on historical information and is updated periodically. A dynamic cache replaces entries according to the sequence of requests: when a new request arrives and misses the cache, the system decides whether to evict some cached entry to make room for it. Such online decisions are based on a cache policy, and several different policies have been studied in the past.

For a search engine, there are two possible ways to use a cache memory:

Caching answers: As the engine returns answers to a particular query, it may decide to store these answers to resolve future queries.

Caching terms: As the engine evaluates a particular query, it may decide to store in memory the posting lists of the involved query terms. Often the whole set of posting lists does not fit in memory, and consequently the engine has to select a small subset to keep in memory and speed up query processing.

Returning an answer to a query that already exists in the cache is more efficient than computing the answer from cached posting lists. On the other hand, previously unseen queries occur more often than previously unseen terms, implying a higher miss rate for cached answers. Caching posting lists has additional challenges: as posting lists have variable size, caching them dynamically is not very efficient, due to the complexity it introduces in both time and space, and to the skewed distribution of the query stream, as shown later.
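As a concrete illustration of dynamic caching of query answers, the following is a minimal LRU result cache. The query stream and `compute_answer` callback are invented for illustration; this is a sketch of the policy, not the engine's implementation.

```python
from collections import OrderedDict

class LRUResultCache:
    """Dynamic cache of query results with least-recently-used eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # query -> cached answer
        self.hits = self.misses = 0

    def lookup(self, query, compute_answer):
        if query in self.entries:
            self.entries.move_to_end(query)  # mark as most recently used
            self.hits += 1
            return self.entries[query]
        self.misses += 1                     # compulsory or capacity miss
        answer = compute_answer(query)
        self.entries[query] = answer
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used
        return answer

# Toy stream: repeated queries benefit from caching; singletons never hit.
cache = LRUResultCache(capacity=2)
for q in ["mad cow", "free mp3", "mad cow", "weather", "mad cow"]:
    cache.lookup(q, compute_answer=lambda q: "results for " + q)
print(cache.hits, cache.misses)  # prints "2 3"
```

Note that the two misses after the first occurrence of "mad cow" are compulsory: no eviction policy can avoid the first reference to an item, which is why streams dominated by singleton queries bound the achievable hit ratio.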
cache one faces the trade-off between frequently queried terms and terms with small posting lists that are space efficient.\nFinally, before deciding to adopt a static caching policy the query stream should be analyzed to verify that its characteristics do not change rapidly over time.\nFigure 1: One caching level in a distributed search architecture.\nIn this paper we explore the trade-offs in the design of each cache level, showing that the problem is the same and only a few parameters change.\nIn general, we assume that each level of caching in a distributed search architecture is similar to that shown in Figure 1.\nWe use a query log spanning a whole year to explore the limitations of dynamically caching query answers or posting lists for query terms.\nMore concretely, our main conclusions are that:\n\u2022 Caching query answers results in lower hit ratios compared to caching of posting lists for query terms, but it is faster because there is no need for query evaluation.\nWe provide a framework for the analysis of the trade-off between static caching of query answers and posting lists; \u2022 Static caching of terms can be more effective than dynamic caching with, for example, LRU.\nWe provide algorithms based on the KNAPSACK problem for selecting the posting lists to put in a static cache, and we show improvements over previous work, achieving a hit ratio over 90%; \u2022 Changes of the query distribution over time have little impact on static caching.\nThe remainder of this paper is organized as follows.\nSections 2 and 3 summarize related work and characterize the data sets we use.\nSection 4 discusses the limitations of dynamic caching.\nSections 5 and 6 introduce algorithms for caching posting lists, and a theoretical framework for the analysis of static caching, respectively.\nSection 7 discusses the impact of changes in the query distribution on static caching, and Section 8 provides concluding remarks.\n2.\nRELATED WORK\nThere is a large body of work 
2. RELATED WORK

There is a large body of work devoted to query optimization. Buckley and Lewit [3], in one of the earliest works, take a term-at-a-time approach to deciding when inverted lists need not be examined further. More recent examples demonstrate that the top k documents for a query can be returned without evaluating the complete set of posting lists [1, 4, 15]. Although these approaches seek to improve query processing efficiency, they differ from our current work in that they do not consider caching. They may be considered separate from, and complementary to, a cache-based approach.

Raghavan and Sever [12], in one of the first papers on exploiting user query history, propose using a query base, built upon a set of persistent "optimal" queries submitted in the past, to improve retrieval effectiveness for similar future queries. Markatos [10] shows the existence of temporal locality in queries, and compares the performance of different caching policies. Based on the observations of Markatos, Lempel and Moran propose a new caching policy, called Probabilistic Driven Caching, which attempts to estimate the probability distribution of all possible queries submitted to a search engine [8]. Fagni et al. follow Markatos' work by showing that combining static and dynamic caching policies together with an adaptive prefetching policy achieves a high hit ratio [7]. Differently from our work, they consider caching and prefetching of pages of results.

As systems are often hierarchical, there has also been some effort on multi-level architectures. Saraiva et al.
propose a new architecture for Web search engines using a two-level dynamic caching system [13]. Their goal is to improve response time for hierarchical engines. In their architecture, both levels use an LRU eviction policy. They find that the second-level cache can effectively reduce disk traffic, thus increasing the overall throughput. Baeza-Yates and Saint-Jean propose a three-level index organization [2]. Long and Suel propose a caching system structured according to three different levels [9]. The intermediate level contains frequently occurring pairs of terms and stores the intersections of the corresponding inverted lists. These last two papers are related to ours in that they exploit different caching strategies at different levels of the memory hierarchy.

Finally, our static caching algorithm for posting lists in Section 5 uses the ratio frequency/size to evaluate the goodness of an item to cache. Similar ideas have been used in the context of file caching [17], Web caching [5], and even caching of posting lists [9], but in all cases in a dynamic setting. To the best of our knowledge, we are the first to use this approach for static caching of posting lists.

3. DATA CHARACTERIZATION

Our data consists of a crawl of documents from the UK domain and query logs of one year of queries submitted to http://www.yahoo.co.uk from November 2005 to November 2006. In our logs, 50% of the total volume of queries are unique. The average query length is 2.5 terms, with the longest query having 731 terms.

Figure 2: The distribution of queries (bottom curve) and query terms (middle curve) in the query log, and the distribution of document frequencies of terms in the UK-2006 dataset (upper curve).

Figure 2 shows the distributions of queries (lower curve) and query terms (middle curve). The x-axis represents the normalized frequency rank of the query or term. (The most frequent query appears closest to the y-axis.) The
y-axis is the normalized frequency for a given query (or term). As expected, the distributions of query frequencies and query term frequencies follow power laws, with slopes of 1.84 and 2.26, respectively. In this figure, the query frequencies were computed as they appear in the logs, with no normalization for case or white space. The query terms (middle curve) have been normalized for case, as have the terms in the document collection.

Table 1: Statistics of the UK-2006 sample.

The document collection that we use for our experiments is a summary of the UK domain crawled in May 2006.¹ This summary corresponds to a maximum of 400 crawled documents per host, obtained using a breadth-first crawling strategy, and comprises 15GB. The distribution of document frequencies of terms in the collection follows a power law with slope 2.38 (upper curve in Figure 2). The statistics of the collection are shown in Table 1. We measured the correlation between the document frequency of terms in the collection and the number of queries that contain a particular term in the query log to be 0.424. A scatter plot for a random sample of terms is shown in Figure 3. In this experiment, terms have been converted to lower case in both the queries and the documents so that the frequencies are comparable.

Figure 3: Normalized scatter plot of document-term frequencies vs. query-term frequencies.
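Power-law slopes such as those reported above can be estimated with an ordinary least-squares fit in log-log space. The sketch below uses synthetic Zipf-like data with a known exponent, purely for illustration; it is not the authors' fitting procedure.

```python
import math

def loglog_slope(frequencies):
    """Estimate a power-law exponent by fitting a line to
    log(frequency) vs. log(rank) with ordinary least squares."""
    freqs = sorted(frequencies, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # report the exponent as a positive number

# Synthetic Zipf-like data with exponent 2: frequency(rank) = C / rank^2.
data = [1_000_000 / rank ** 2 for rank in range((1), 10001)]
print(round(loglog_slope(data), 2))  # prints "2.0" by construction
```

On real log data the rank-frequency points deviate from a straight line at both ends, so in practice the fit is often restricted to the central region of the curve.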
4. CACHING OF QUERIES AND TERMS

Caching relies upon the assumption that there is locality in the stream of requests. That is, there must be sufficient repetition in the stream of requests, within intervals of time, for a cache memory of reasonable size to be effective. In the query log we used, 88% of the unique queries are singleton queries, and singletons account for 44% of the total query volume. Thus, out of all the queries in the stream composing the query log, the upper bound on the hit ratio is 56%, because only 56% of the query volume consists of queries that occur more than once. It is important to observe, however, that not all queries in this 56% can be cache hits, because of compulsory misses. A compulsory miss happens when the cache receives a query for the first time. This is different from capacity misses, which happen due to space constraints on the amount of memory the cache uses. If we consider a cache with infinite memory, then the hit ratio is 50%. Note that for an infinite cache there are no capacity misses.

¹The collection is available from the University of Milan: http://law.dsi.unimi.it/. URL retrieved 05/2007.

Figure 4: Arrival rate for both terms and queries.

As we mentioned before, another possibility is to cache the posting lists of terms. Intuitively, this gives more freedom in the utilization of the cache content to respond to queries, because cached terms might form a new query. On the other hand, posting lists need more space. As opposed to queries, the fraction of singleton terms in the total volume of terms is smaller. In our query log, only 4% of the terms appear once, but this accounts for 73% of the vocabulary of query terms. We show in Section 5 that caching a small fraction of terms, while accounting for terms appearing in many documents, is potentially very effective.

Figure 4 shows several graphs corresponding to the normalized arrival rate for different cases, using days as
bins. That is, we plot the normalized number of elements that appear in a day. This graph shows only a period of 122 days, and we normalize the values by the maximum value observed throughout the whole period of the query log. "Total queries" and "Total terms" correspond to the total volume of queries and terms, respectively. "Unique queries" and "Unique terms" correspond to the arrival rate of unique queries and terms. Finally, "Query diff" and "Terms diff" correspond to the difference between the curves for total and unique.

In Figure 4, as expected, the volume of terms is much higher than the volume of queries. The difference between the total number of terms and the number of unique terms is much larger than the difference between the total number of queries and the number of unique queries. This observation implies that terms repeat significantly more than queries. If we use smaller bins, say of one hour, then the ratio of unique elements to total volume is higher for both terms and queries, because a smaller window leaves less room for repetition. We also estimated the workload using the document frequency of terms as a measure of how much work a query imposes on a search engine. We found that it closely follows the arrival rate for terms shown in Figure 4.

To demonstrate the effect of a dynamic cache on the query frequency distribution of Figure 2, we plot the same frequency graph, but now considering the frequency of queries after going through an LRU cache. On a cache miss, an LRU cache decides upon an entry to evict using information on the recency of queries. In this graph, the most frequent queries are not the same queries that were most frequent before the cache. It is possible that the queries that are most frequent after the cache have different characteristics, and tuning the search engine to the queries frequent before the cache may degrade performance for non-cached queries.

Figure 5: Frequency graph after LRU cache.

The maximum frequency after
caching is less than 1% of the maximum frequency before the cache, thus showing that the cache is very effective in reducing the load of frequent queries. If we re-rank the queries according to after-cache frequency, the distribution is still a power law, but with a much smaller value for the highest frequency.

When discussing the effectiveness of dynamic caching, an important metric is the cache miss rate. To analyze the cache miss rate under different memory constraints, we use the working set model [6, 14]. A working set, informally, is the set of references that an application or an operating system is currently working with. The model uses such sets in a strategy that tries to capture the temporal locality of references. The working set strategy consists in keeping in memory only the elements that were referenced in the previous τ steps of the input sequence, where τ is a configurable parameter corresponding to the window size. Originally, working sets were used for the page replacement algorithms of operating systems, and considering such a strategy in the context of search engines is interesting for three reasons. First, it captures the amount of locality of queries and terms in a sequence of queries. Locality in this case refers to the frequency of queries and terms within a window of time: if many queries appear multiple times in a window, then locality is high. Second, it enables an offline analysis of the expected miss rate given different memory constraints. Third, working sets capture aspects of efficient caching algorithms such as LRU. LRU assumes that references farther in the past are less likely to be referenced in the present, which is implicit in the concept of working sets [14].

Figure 6 plots the miss rate for different working set sizes, and we consider working sets of both queries and terms. The working set sizes are normalized against the total number of queries in the query log. In the graph for queries, there is a sharp
decay until approximately 0.01, and the rate at which the miss rate drops decreases as we increase the size of the working set over 0.01.\nFinally, the minimum value it reaches is 50% miss rate, not shown in the figure as we have cut the tail of the curve for presentation purposes.\nFigure 6: Miss rate as a function of the working set size.\nFigure 7: Distribution of distances expressed in terms of distinct queries.\nCompared to the query curve, we observe that the minimum miss rate for terms is substantially smaller.\nThe miss rate also drops sharply on values up to 0.01, and it decreases minimally for higher values.\nThe minimum value, however, is slightly over 10%, which is much smaller than the minimum value for the sequence of queries.\nThis implies that with such a policy it is possible to achieve over 80% hit rate, if we consider caching dynamically posting lists for terms as opposed to caching answers for queries.\nThis result does not consider the space required for each unit stored in the cache memory, or the amount of time it takes to put together a response to a user query.\nWe analyze these issues more carefully later in this paper.\nIt is interesting also to observe the histogram of Figure 7, which is an intermediate step in the computation of the miss rate graph.\nIt reports the distribution of distances between repetitions of the same frequent query.\nThe distance in the plot is measured in the number of distinct queries separating a query and its repetition, and it considers only queries appearing at least 10 times.\nFrom Figures 6 and 7, we conclude that even if we set the size of the query answers cache to a relatively large number of entries, the miss rate is high.\nThus, caching the posting lists of terms has the potential to improve the hit ratio.\nThis is what we explore next.\n5.\nCACHING POSTING LISTS\nThe previous section shows that caching posting lists can obtain a higher hit rate compared to caching query answers.\nIn this section we 
study the problem of how to select posting lists to place in a given amount of available memory, assuming that the whole index is larger than the amount of memory available. The posting lists have variable size (in fact, their size distribution follows a power law), so it is beneficial for a caching policy to consider the sizes of the posting lists. We consider both dynamic and static caching. For dynamic caching, we use two well-known policies, LRU and LFU, as well as a modified algorithm that takes posting-list size into account.

Before discussing the static caching strategies, we introduce some notation. We use fq(t) to denote the query-term frequency of a term t, that is, the number of queries containing t in the query log, and fd(t) to denote the document frequency of t, that is, the number of documents in the collection in which the term t appears.

The first strategy we consider is the algorithm proposed by Baeza-Yates and Saint-Jean [2], which consists in selecting the posting lists of the terms with the highest query-term frequencies fq(t). We call this algorithm QTF. We observe that there is a trade-off between fq(t) and fd(t). Terms with high fq(t) are useful to keep in the cache because they are queried often. On the other hand, terms with high fd(t) are not good candidates because they correspond to long posting lists and consume a substantial amount of space. In fact, the problem of selecting the best posting lists for the static cache corresponds to the standard KNAPSACK problem: given a knapsack of fixed capacity and a set of n items, such that the i-th item has value ci and size si, select the set of items that fit in the knapsack and maximize the overall value. In our case, "value" corresponds to fq(t) and "size" corresponds to fd(t). Thus, we employ a simple greedy algorithm for the knapsack problem, which selects the posting lists of the terms with the highest values of the ratio fq(t)/fd(t). We call this algorithm QTFDF. We tried other variations considering query frequencies instead of term frequencies, but the gain was minimal compared to the complexity added.

In addition to the above two static algorithms, we consider the following algorithms for dynamic caching:

• LRU: a standard LRU algorithm, except that as many posting lists as needed are evicted (in order of least-recent usage) until there is enough space in memory to place the currently accessed posting list;
• LFU: a standard LFU algorithm (eviction of the least-frequently used), with the same modification as for LRU;
• DYN-QTFDF: a dynamic version of the QTFDF algorithm, which evicts from the cache the term(s) with the lowest ratio fq(t)/fd(t).

The performance of all the above algorithms over 15 weeks of the query log and the UK dataset is shown in Figure 8. Performance is measured with hit rate. The cache size is measured as a fraction of the total space required to store the posting lists of all terms. For the dynamic algorithms, we load the cache with terms in order of fq(t) and we let the cache "warm up" for 1 million queries. For the static algorithms, we assume complete knowledge of the frequencies fq(t), that is, we estimate fq(t) from the whole query stream. As we show in Section 7, the results do not change much if we compute the query-term frequencies using the first 3 or 4 weeks of the query log and measure the hit rate on the rest.

Figure 8: Hit rate of different strategies for caching posting lists.

The most important observation from our experiments is that the static QTFDF algorithm has a better hit rate than all the dynamic algorithms. An important benefit of a static cache is that it requires no eviction and is hence more efficient when evaluating queries. However, if the characteristics of the query traffic change frequently over time, then the cache must be re-populated often, or there will be a significant impact on hit rate.

6. ANALYSIS OF STATIC CACHING

In this section we provide a
detailed analysis of the problem of deciding whether it is preferable to cache query answers or to cache posting lists. Our analysis takes into account the impact of caching between two levels of the data-access hierarchy. It can either be applied at the memory/disk layer, or at a server/remote-server layer as in the architecture we discussed in the introduction. Using a particular system model, we obtain estimates for the parameters required by our analysis, which we subsequently use to decide the optimal trade-off between caching query answers and caching posting lists.

6.1 Analytical Model

Let M be the size of the cache, measured in answer units (the cache can store M query answers). Assume that all posting lists are of the same length L, measured in answer units. We consider the following two cases: (A) a cache that stores only precomputed answers, and (B) a cache that stores only posting lists. In the first case, Nc = M answers fit in the cache, while in the second case Np = M/L posting lists fit in the cache. Thus, Np = Nc/L. Note that although posting lists require more space, we can combine terms to evaluate more queries (or partial queries).

For case (A), suppose that a query answer in the cache can be evaluated in 1 time unit. For case (B), assume that if the posting lists of the terms of a query are in the cache, then the results can be computed in TR1 time units, while if the posting lists are not in the cache, then the results can be computed in TR2 time units. Of course TR2 > TR1.

Now we want to compare the time to answer a stream of Q queries in both cases. Let Vc(Nc) be the volume of the most frequent Nc queries. Then, for case (A), we have an overall time

    T_CA = Vc(Nc) + TR2 (Q - Vc(Nc)).

Similarly, for case (B), let Vp(Np) be the volume of the queries computable from the Np cached posting lists. Then we have an overall time

    T_PL = TR1 Vp(Np) + TR2 (Q - Vp(Np)).

We want to check under which conditions we have T_PL < T_CA, that is,

    (TR2 - TR1) Vp(Np) > (TR2 - 1) Vc(Nc).

Assuming a Zipf distribution of query frequencies with parameter α > 1, the i-th most frequent query appears with probability proportional to 1/i^α. Therefore, the volume Vc(n), which is the total number of occurrences of the n most frequent queries, is proportional to Σ_{i=1}^{n} i^{-α}.

Figure 9: Cache saturation as a function of size.

We know that Vp(n) grows faster than Vc(n), and assume, based on experimental results, that the relation is of the form Vp(n) = k Vc(n)^β. In the worst case, for a large cache, β → 1; that is, both techniques cache a constant fraction of the overall query volume. Then caching posting lists makes sense only if the condition above, (TR2 - TR1) Vp(Np) > (TR2 - 1) Vc(Nc), holds with Np = Nc/L for the same amount of memory. If we use compression, we have L̃ ~ TR1. According to the experiments that we show later, compression is always better. For a small cache, we are interested in the transient behavior, and then β > 1, as computed from our data. In this case there will always be a point where T_PL > T_CA for a large number of queries.

In reality, instead of filling the cache only with answers or only with posting lists, a better strategy is to divide the total cache space into a cache for answers and a cache for posting lists. In such a case, there will be some queries that can be answered by both parts of the cache. As the answer cache is faster, it will be the first choice for answering those queries. Let Q_Nc and Q_Np be the sets of queries that can be answered by the cached answers and by the cached posting lists, respectively. Then the overall time is

    T = Vc(Nc) + TR1 V(Q_Np \ Q_Nc) + TR2 (Q - Vc(Nc) - V(Q_Np \ Q_Nc)),

where V(S) denotes the volume of the queries in S, and Np = (M - Nc)/L.
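This trade-off can be sketched numerically. The snippet below is a minimal model, not the paper's implementation: the values of α, k, β, TR1 and TR2 are illustrative assumptions rather than measurements from the query log.

```python
# Sketch of the answers-vs-posting-lists trade-off (Section 6.1).
# All parameter values are illustrative assumptions, not measured quantities.

def zipf_volume(n, alpha=1.2, total_distinct=1_000_000):
    """Vc(n): fraction of the query volume covered by the n most frequent
    queries, under a Zipf(alpha) frequency distribution."""
    norm = sum(i ** -alpha for i in range(1, total_distinct + 1))
    return sum(i ** -alpha for i in range(1, n + 1)) / norm

def t_ca(Q, Nc, alpha=1.2, TR2=10.0):
    """Case (A): cache holds Nc precomputed answers; hits cost 1, misses TR2."""
    Vc = zipf_volume(Nc, alpha) * Q
    return Vc * 1.0 + (Q - Vc) * TR2

def t_pl(Q, Np, k=2.0, beta=0.9, alpha=1.2, TR1=2.0, TR2=10.0):
    """Case (B): cache holds Np posting lists; computable queries cost TR1,
    the rest TR2.  Vp(n) = k * Vc(n)^beta, as assumed in the text."""
    Vc_frac = zipf_volume(Np, alpha)
    Vp = min(1.0, k * Vc_frac ** beta) * Q
    return Vp * TR1 + (Q - Vp) * TR2

# Same memory M in answer units; posting lists are L answer units long,
# so only M // L of them fit.
Q, M, L = 10_000_000, 100_000, 4
print(t_ca(Q, M), t_pl(Q, M // L))
```

Sweeping M in such a model reproduces the qualitative behavior discussed above: which side wins depends on how fast Vp grows relative to Vc and on the gap between TR1 and TR2.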
Finding the optimal division of the cache in order to minimize the overall retrieval time is a difficult problem to solve analytically. In Section 6.3 we use simulations to derive optimal cache trade-offs for particular implementation examples.

6.2 Parameter Estimation

We now use a particular implementation of a centralized system and the model of a distributed system as examples from which we estimate the parameters of the analysis from the previous section. We perform the experiments using an optimized version of Terrier [11] for both indexing documents and processing queries, on a single machine with a Pentium 4 at 2GHz and 1GB of RAM. We indexed the documents from the UK-2006 dataset, without removing stop words or applying stemming. The posting lists in the inverted file consist of pairs of document identifier and term frequency. We compress the document identifier gaps using Elias gamma encoding, and the term frequencies in documents using unary encoding [16]. The size of the inverted file is 1,189 Mb. A stored answer requires 1264 bytes, and an uncompressed posting takes 8 bytes. From Table 1, we obtain

    L = (8 · #postings) / (1264 · #terms) = 0.75

for uncompressed posting lists, and

    L̃ = (inverted file size) / (1264 · #terms) = 0.26

for compressed posting lists.

We estimate the ratio TR = T/Tc between the average time T it takes to evaluate a query and the average time Tc it takes to return a stored answer for the same query in the following way. Tc is measured by loading the answers for 100,000 queries in memory, and answering the queries from memory. The average time is Tc = 0.069 ms.
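The unit ratios above are simple arithmetic over the index statistics. As a sketch, the posting and term counts below are hypothetical placeholders (Table 1 is not reproduced in this excerpt); any counts with the same postings-to-terms ratio reproduce L = 0.75:

```python
# Recomputing the cache-unit ratio L of Section 6.2.
# postings_count and terms_count are hypothetical placeholders, chosen only
# to be consistent with the reported value L = 0.75.

ANSWER_BYTES = 1264    # size of one stored query answer
POSTING_BYTES = 8      # size of one uncompressed posting (doc id + term freq)

def answer_units_per_posting_list(postings_count, terms_count):
    """L: average uncompressed posting-list size, measured in answer units."""
    avg_list_bytes = POSTING_BYTES * postings_count / terms_count
    return avg_list_bytes / ANSWER_BYTES

L = answer_units_per_posting_list(118_500_000, 1_000_000)
print(round(L, 2))   # -> 0.75

# The LAN overhead used later in the text: an extra round trip of 0.615 ms,
# expressed in units of Tc = 0.069 ms, adds roughly 9 time units to TR.
Tc = 0.069
TR_LAN = lambda TR: TR + 0.615 / Tc
```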
T is measured by processing the same 100,000 queries (the first 10,000 queries are used to warm up the system). For each query, we remove stop words if there are at least three remaining terms. The stop words correspond to the terms with a frequency higher than the number of documents in the index. We use a document-at-a-time approach to retrieve documents containing all query terms. The only disk access required during query processing is for reading compressed posting lists from the inverted file. We perform both full and partial evaluation of answers, because some queries are likely to retrieve a large number of documents, and only a fraction of the retrieved documents will be seen by users. In the partial evaluation of queries, we terminate the processing after matching 10,000 documents. The estimated ratios TR are presented in Table 2.

Figure 10 shows, for a sample of queries, the workload of the system with partial query evaluation and compressed posting lists. The x-axis corresponds to the total time the system spends processing a particular query, and the vertical axis corresponds to the sum Σ_{t ∈ q} fq(t) · fd(t). Notice that the total number of postings of the query terms does not necessarily provide an accurate estimate of the workload imposed on the system by a query (although it does for full evaluation with uncompressed lists).

Figure 10: Workload for partial query evaluation with compressed posting lists.

The analysis of the previous section also applies to a distributed retrieval system in one or multiple sites. Suppose that a document-partitioned distributed system is running on a cluster of machines interconnected with a local area network (LAN) in one site. The broker receives queries and broadcasts them to the query processors, which answer the queries and return the results to the broker. Finally, the broker merges the received answers and generates the final set of answers (we assume that the time spent on merging results
is negligible). The difference between the centralized architecture and the document-partitioned architecture is the extra communication between the broker and the query processors. Using ICMP pings on a 100Mbps LAN, we measured that sending a query from the broker to the query processors, which send an answer of 4,000 bytes back to the broker, takes on average 0.615 ms. Hence, TRL = TR + 0.615 ms / 0.069 ms ≈ TR + 9. In the case when the broker and the query processors are in different sites connected with a wide area network (WAN), we estimated that broadcasting the query from the broker to the query processors and getting back an answer of 4,000 bytes takes on average 329 ms. Hence, TRW = TR + 329 ms / 0.069 ms ≈ TR + 4768.

6.3 Simulation Results

We now address the problem of finding the optimal trade-off between caching query answers and caching posting lists. To make the problem concrete, we assume a fixed budget M on the available memory, out of which x units are used for caching query answers and M − x for caching posting lists. We perform simulations and compute the average response time as a function of x. Using a part of the query log as training data, we first allocate in the cache the answers to the most frequent queries that fit in space x, and then we use the rest of the memory to cache posting lists. For selecting posting lists we use the QTFDF algorithm, applied to the training query log but excluding the queries that have already been cached. In Figure 11, we plot the simulated response time for a centralized system as a function of x.
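A minimal version of this simulation can be sketched as follows. The query stream, posting-list sizes and costs below are synthetic stand-ins, not the paper's log or index; the structure (answer cache of x units, QTFDF-style greedy fill of the remaining M − x units, average cost over the stream) follows the description above.

```python
# Sketch of the Section 6.3 simulation: split a memory budget M between an
# answer cache (x units) and a posting-list cache (M - x units), then measure
# average response time on a query stream.  All data here is synthetic.
import random
from collections import Counter

random.seed(7)
TERMS = [f"t{i}" for i in range(200)]
QUERIES = [tuple(random.sample(TERMS, random.randint(1, 3))) for _ in range(300)]
# Zipf-like reuse of queries in the stream.
STREAM = random.choices(QUERIES, weights=[1 / (i + 1) for i in range(300)], k=20_000)

LIST_SIZE = {t: 1 + i % 5 for i, t in enumerate(TERMS)}   # answer units per list
TR1, TR2 = 2.0, 10.0        # cost with / without the needed posting lists cached

def avg_time(x, M):
    freq = Counter(STREAM)
    answers = set(q for q, _ in freq.most_common(x))      # 1 answer unit each
    # Greedy QTFDF-style fill of the remaining space with posting lists,
    # using term frequencies of the queries not already answer-cached.
    tf = Counter(t for q in STREAM if q not in answers for t in q)
    cached, space = set(), M - x
    for t, _ in sorted(tf.items(), key=lambda kv: kv[1] / LIST_SIZE[kv[0]],
                       reverse=True):
        if LIST_SIZE[t] <= space:
            cached.add(t)
            space -= LIST_SIZE[t]
    cost = 0.0
    for q in STREAM:
        if q in answers:
            cost += 1.0                       # answer-cache hit
        elif all(t in cached for t in q):
            cost += TR1                       # computable from cached lists
        else:
            cost += TR2                       # full evaluation
    return cost / len(STREAM)

M = 120
best_x = min(range(0, M + 1, 10), key=lambda x: avg_time(x, M))
```

Sweeping x as in the last line is exactly the experiment plotted in Figure 11; with WAN-scale TR2 the optimum shifts toward a larger answer cache, matching the discussion of Figure 12.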
For the uncompressed index we use M = 1 GB, and for the compressed index we use M = 0.5 GB. In the case of the configuration that uses partial query evaluation with compressed posting lists, the lowest response time is achieved when 0.15 GB out of the 0.5 GB is allocated for storing answers for queries. We obtained similar trends in the results for the LAN setting.

Figure 12 shows the simulated workload for a distributed system across a WAN. In this case, the total amount of memory is split between the broker, which holds the cached answers of queries, and the query processors, which hold the cache of posting lists.

Figure 11: Optimal division of the cache in a server.

Figure 12: Optimal division of the cache when the next level requires WAN access.

According to the figure, the difference between the configurations of the query processors is less important, because the network communication overhead increases the response time substantially. When using uncompressed posting lists, the optimal allocation of memory corresponds to using approximately 70% of the memory for caching query answers. This is explained by the fact that there is no need for network communication when the query can be answered by the cache at the broker.

7. EFFECT OF THE QUERY DYNAMICS

For our query log, the query distribution and query-term distribution change slowly over time. To support this claim, we first assess how topics change by comparing the distribution of queries from the first week in June 2006 to the distribution of queries for the remainder of 2006 that did not appear in the first week in June. We found that a very small percentage of queries are new queries; the majority of queries that appear in a given week repeat in the following weeks for the next six months.

We then compute the hit rate of a static cache of 128,000 answers trained over a period of two weeks (Figure 13). We report the hit rate hourly for 7 days, starting from 5pm. We observe that the hit rate reaches its highest value during the night (around midnight), whereas it reaches its minimum around 2-3pm. After a small decay, the hit rate stabilizes between 0.28 and 0.34 for the entire week, suggesting that the static cache is effective for a whole week after the training period.

Figure 13: Hourly hit rate for a static cache holding 128,000 answers during the period of a week.

The static cache of posting lists can be periodically recomputed. To estimate the time interval at which we need to recompute the posting lists in the static cache, we need to consider an efficiency/quality trade-off: using too short a time interval might be prohibitively expensive, while recomputing the cache too infrequently might lead to an obsolete cache that does not correspond to the statistical characteristics of the current query stream.

We measured the effect of the changes in a 15-week query stream on the QTFDF algorithm (Figure 14). We compute the query-term frequencies over the whole stream, select which terms to cache, and then compute the hit rate on the whole query stream. This hit rate serves as an upper bound, as it assumes perfect knowledge of the query-term frequencies. To simulate a realistic scenario, we use the first 6 (respectively 3) weeks of the query stream for computing query-term frequencies and the following 9 (respectively 12) weeks to estimate the hit rate. As Figure 14 shows, the hit rate decreases by less than 2%. The high correlation among the query-term frequencies during different time periods explains the graceful adaptation of the static caching algorithms to the future query stream. Indeed, the pairwise correlation among all possible 3-week periods of the 15-week query stream is over 99.5%.

8. CONCLUSIONS

Caching is an effective technique in search engines for improving response time, reducing the load on query processors, and improving network bandwidth utilization. We present results on both dynamic and static caching. Dynamic caching of
queries has limited effectiveness due to the high number of compulsory misses caused by unique or infrequent queries. Our results show that in our UK log, the minimum miss rate is 50% using a working set strategy. Caching terms is more effective with respect to miss rate, achieving values as low as 12%. We also propose a new algorithm for static caching of posting lists that outperforms previous static caching algorithms as well as dynamic algorithms such as LRU and LFU, obtaining hit rate values that are over 10% higher compared to these strategies. We present a framework for the analysis of the trade-off between caching query results and caching posting lists, and we simulate different types of architectures. Our results show that for centralized and LAN environments there is an optimal allocation between caching query results and caching posting lists, while for WAN scenarios in which network time prevails it is more important to cache query results.

Figure 14: Impact of distribution changes on the static caching of posting lists.

Fast Generation of Result Snippets in Web Search
Andrew Turpin & Yohannes Tsegay, RMIT University, Melbourne, Australia (aht@cs.rmit.edu.au, ytsegay@cs.rmit.edu.au)
David Hawking, CSIRO ICT Centre, Canberra, Australia (david.hawking@acm.org)
Hugh E. Williams, Microsoft Corporation, One Microsoft Way, Redmond, WA (hughw@microsoft.com)

ABSTRACT

The presentation of query biased document snippets as part of results pages presented by search engines has become an expectation of search engine users. In this paper we explore the algorithms and data structures required as part of a search engine to allow efficient generation of query biased snippets. We begin by proposing and analysing a document compression method that reduces snippet generation time by 58% over a baseline using the zlib compression library. These experiments reveal that finding documents on secondary storage dominates the total cost of generating snippets, and so caching documents in RAM is essential for a fast snippet generation process. Using simulation, we examine snippet generation performance for different size RAM caches. Finally we propose and analyse document reordering and compaction, revealing a scheme that increases the number of document cache hits with only a marginal effect on snippet quality. This scheme effectively doubles the number of documents that can fit in a fixed size cache.

Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; H.3.4 [Information Storage and Retrieval]: Systems and Software - performance evaluation (efficiency and effectiveness)

General Terms: Algorithms, Experimentation, Measurement, Performance

1. INTRODUCTION

Each result in the search results list delivered by current WWW search engines such as search.yahoo.com, google.com and search.msn.com typically contains the title and URL of the actual document, links to live and cached versions of the document, and sometimes an indication of file size and type. In addition, one or more snippets are usually presented, giving the searcher a sneak preview of the document contents. Snippets are short fragments of text extracted from the document content (or its metadata). They may be static (for example, always show the first 50 words of the document, or the content of its description metadata, or a description taken from a directory site such as dmoz.org) or query-biased [20]. A query-biased snippet is one selectively extracted on the basis of its relation to the searcher's query. The addition of informative snippets to search results may substantially increase their value to searchers. Accurate snippets allow the searcher to make good decisions about which results are worth accessing and which can be ignored. In the best case, snippets may obviate the need to open any documents by directly providing the answer to the searcher's real information need, such as the contact details of a person or an organization.

Generation of query-biased snippets by Web search engines indexing of the order of ten billion web pages and handling hundreds of millions of search queries per day imposes a very significant computational load (remembering that each search typically generates ten snippets). The simple-minded
approach of keeping a copy of each document in a file and generating snippets by opening and scanning files, works when query rates are low and collections are small, but does not scale to the degree required.\nThe overhead of opening and reading ten files per query on top of accessing the index structure to locate them, would be manifestly excessive under heavy query load.\nEven storing ten billion files and the corresponding hundreds of terabytes of data is beyond the reach of traditional filesystems.\nSpecial-purpose filesystems have been built to address these problems [6].\nNote that the utility of snippets is by no means restricted to whole-of-Web search applications.\nEfficient generation of snippets is also important at the scale of whole-of-government search services such as www.firstgov.gov (c. 25 million pages) and govsearch.australia.gov.au (c. 5 million pages) and within large enterprises such as IBM [2] (c. 50 million pages).\nSnippets may be even more useful in database or filesystem search applications in which no useful URL or title information is present.\nWe present a new algorithm and compact single-file structure designed for rapid generation of high quality snippets and compare its space\/time performance against an obvious baseline based on the zlib compressor on various data sets.\nWe report the proportion of time spent for disk seeks, disk reads and cpu processing; demonstrating that the time for locating each document (seek time) dominates, as expected.\nAs the time to process a document in RAM is small in comparison to locating and reading the document into memory, it may seem that compression is not required.\nHowever, this is only true if there is no caching of documents in RAM.\nControlling the RAM of physical systems for experimentation is difficult, hence we use simulation to show that caching documents dramatically improves the performance of snippet generation.\nIn turn, the more documents can be compressed, the more can fit in 
cache, and hence the more disk seeks can be avoided: the classic data compression tradeoff that is exploited in inverted file structures and computing ranked document lists [24]. As hitting the document cache is important, we examine document compaction, as opposed to compression, schemes by imposing an a priori ordering of sentences within a document, and then only allowing leading sentences into cache for each document. This leads to further time savings, with only marginal impact on the quality of the snippets returned.

2. RELATED WORK

Snippet generation is a special type of extractive document summarization, in which sentences, or sentence fragments, are selected for inclusion in the summary on the basis of the degree to which they match the search query. This process was given the name of query-biased summarization by Tombros and Sanderson [20]. The reader is referred to Mani [13] and to Radev et al. [16] for overviews of the very many different applications of summarization and for the equally diverse methods for producing summaries.

Early Web search engines presented query-independent snippets consisting of the first k bytes of the result document. Generating these is clearly much simpler and much less computationally expensive than processing documents to extract query biased summaries, as there is no need to search the document for text fragments containing query terms. To our knowledge, Google was the first whole-of-Web search engine to provide query biased summaries, but summarization is listed by Brin and Page [1] only under the heading of future work. Most of the experimental work using query-biased summarization has focused on comparing the value of such summaries to searchers relative to other types of summary [20, 21], rather than on efficient generation of summaries. Despite the importance of efficient summary generation in Web search, few algorithms appear in the literature. Silber and McKoy [19] describe a linear-time lexical chaining algorithm for use in
generic summaries, but offer no empirical data for the performance of their algorithm. White et al. [21] report some experimental timings of their WebDocSum system, but the snippet generation algorithms themselves are not isolated, so it is difficult to infer snippet generation times comparable to the times we report in this paper.

3. SEARCH ENGINE ARCHITECTURES

A search engine must perform a variety of activities, and is comprised of many sub-systems, as depicted by the boxes in Figure 1. Note that there may be several other sub-systems, such as the Advertising Engine or the Parsing Engine, that could easily be added to the diagram, but we have concentrated on the sub-systems that are relevant to snippet generation.

Figure 1: An abstraction of some of the sub-systems in a search engine (Crawling, Indexing, Lexicon, Ranking, Snippet, Meta Data and Query Engines, connected by flows of terms, document numbers and per-document data). Depending on the number of documents indexed, each sub-system could reside on a single machine, be distributed across thousands of machines, or a combination of both.

Depending on the number of documents that the search engine indexes, the data and processes for each sub-system could be distributed over many machines, or all occupy a single server and filesystem, competing with each other for resources. Similarly, it may be more efficient to combine some sub-systems in an implementation of the diagram. For example, the meta-data such as document title and URL requires minimal computation apart from highlighting query words, but we note that disk seeking is likely to be minimized if title, URL and fixed summary information is stored contiguously with the text from which query biased summaries are extracted. Here we ignore the fixed text and consider only the generation of query biased summaries: we concentrate on the
Snippet Engine. In addition to data and programs operating on that data, each sub-system also has its own memory management scheme. The memory management system may simply be the memory hierarchy provided by the operating system used on machines in the sub-system, or it may be explicitly coded to optimise the processes in the sub-system. There are many papers on caching in search engines (see [3] and references therein for a current summary), but it seems reasonable that there is a query cache in the Query Engine that stores precomputed final result pages for very popular queries. When one of the popular queries is issued, the result page is fetched straight from the query cache. If the issued query is not in the query cache, then the Query Engine uses the four sub-systems in turn to assemble a results page.
1. The Lexicon Engine maps query terms to integers.
2. The Ranking Engine retrieves inverted lists for each term, using them to get a ranked list of documents.
3. The Snippet Engine uses those document numbers and query term numbers to generate snippets.
4. The Meta Data Engine fetches other information about each document to construct the results page.

IN: A document broken into one sentence per line, and a sequence of query terms.
1 For each line of the text, L = [w1, w2, ..., wm]
2 Let h be 1 if L is a heading, 0 otherwise.
3 Let l be 2 if L is the first line of a document, 1 if it is the second line, 0 otherwise.
4 Let c be the number of wi that are query terms, counting repetitions.
5 Let d be the number of distinct query terms that match some wi.
6 Identify the longest contiguous run of query terms in L, say wj ... wj+k.
7 Use a weighted combination of c, d, k, h and l to derive a score s.
8 Insert L into a max-heap using s as the key.
OUT: Remove the number of sentences required from the heap to form the summary.

Figure 2: Simple sentence ranker that operates on raw text with one sentence per line.

4. THE SNIPPET ENGINE

For each document identifier passed to the Snippet Engine, the engine must generate text, preferably containing query terms, that attempts to summarize that document. Previous work on summarization identifies the sentence as the minimal unit for extraction and presentation to the user [12]. Accordingly, we also assume a web snippet extraction process will extract sentences from documents. In order to construct a snippet, all sentences in a document should be ranked against the query, and the top two or three returned as the snippet. The scoring of sentences against queries has been explored in several papers [7, 12, 18, 20, 21], with different features of sentences deemed important. Based on these observations, Figure 2 shows the general algorithm for scoring sentences in relevant documents, with the highest scoring sentences making the snippet for each document. The final score of a sentence, assigned in Step 7, can be derived in many different ways. In order to avoid bias towards any particular scoring mechanism, we compare sentence quality later in the paper using the individual components of the score, rather than an arbitrary combination of the components.

4.1 Parsing Web Documents

Unlike the well edited text collections that are often the target for summarization systems, Web data is often poorly structured and poorly punctuated, and contains a lot of data that do not form part of valid sentences that would be candidates for parts of snippets. We assume that the documents passed to the Snippet Engine by the Indexing Engine have all HTML tags and JavaScript removed, and that each document is reduced to a series of word tokens separated by non-word tokens. We define a word token as a sequence of alphanumeric characters,
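As a concrete illustration, the sentence ranker of Figure 2 can be sketched in Python. The equal-weight combination in the final step is an illustrative assumption only, since the paper deliberately leaves the weighting of c, d, k, h and l open; function names are ours.

```python
import heapq

def score_sentence(words, query_terms, is_heading, line_pos):
    """Steps 2-7 of Figure 2: compute h, l, c, d and k for one sentence."""
    h = 1 if is_heading else 0
    l = 2 if line_pos == 0 else (1 if line_pos == 1 else 0)
    c = sum(1 for w in words if w in query_terms)   # query terms, with repeats
    d = len(set(words) & query_terms)               # distinct query terms
    k = run = 0                                     # longest contiguous run
    for w in words:
        run = run + 1 if w in query_terms else 0
        k = max(k, run)
    return c + d + k + h + l                        # illustrative equal weights

def make_snippet(sentences, query_terms, size=2):
    """Steps 1, 8 and OUT: rank all sentences, return the top `size`."""
    heap = []
    for pos, (words, is_heading) in enumerate(sentences):
        s = score_sentence(words, query_terms, is_heading, pos)
        heapq.heappush(heap, (-s, pos, words))      # negate score for max-heap
    top = [heapq.heappop(heap) for _ in range(min(size, len(heap)))]
    return [" ".join(words) for _, _, words in top]
```

The heap keeps extraction of the top few sentences cheap even for long documents, since only the winning sentences are ever joined back into text.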
while a non-word is a sequence of non-alphanumeric characters such as whitespace and the other punctuation symbols. Both are limited to a maximum of 50 characters. Adjacent, repeating characters are removed from the punctuation. Included in the punctuation set is a special end-of-sentence marker which replaces the usual three sentence terminators "?", "!" and ".". Often these explicit punctuation characters are missing, and so HTML tags such as <p> and <br>
are assumed to terminate sentences. In addition, a sentence must contain at least five words and no more than twenty words, with longer or shorter sentences being broken and joined as required to meet these criteria [10]. Unterminated HTML tags (that is, tags with an open brace but no close brace) cause all text from the open brace to the next open brace to be discarded.

A major problem in summarizing web pages is the presence of large amounts of promotional and navigational material (navbars) visually above and to the left of the actual page content. For example: "The most wonderful company on earth. Products. Service. About us. Contact us. Try before you buy." Similar, but often not identical, navigational material is typically presented on every page within a site. This material tends to lower the quality of summaries and slow down summary generation. In our experiments we did not use any particular heuristics for removing navigational information, as the test collection in use (wt100g) pre-dates the widespread take-up of the current style of web publishing. In wt100g, the average web page size is less than half the current Web average [11]. Anecdotally, the increase is due to the inclusion of sophisticated navigational and interface elements and the JavaScript functions to support them.

Having defined the format of documents that are presented to the Snippet Engine, we now define our Compressed Token System (CTS) document storage scheme, and the baseline system used for comparison.

4.2 Baseline Snippet Engine

An obvious document representation scheme is to simply compress each document with a well known adaptive compressor, and then decompress the document as required [1], using a string matching algorithm to effect the algorithm in Figure 2. Accordingly, we implemented such a system, using zlib [4] with default parameters to compress every document after it has been parsed as in Section 4.1. Each document is stored in a single file. While manageable
for our small test collections, or for small enterprises with millions of documents, a full Web search engine may require multiple documents to inhabit single files, or a special purpose filesystem [6]. For snippet generation, the required documents are decompressed one at a time, and a linear search for the provided query terms is employed. The search is optimized for our specific task, which is restricted to matching whole words and the sentence terminating token, rather than general pattern matching.

4.3 The CTS Snippet Engine

Several optimizations over the baseline are possible. The first is to employ a semi-static compression method over the entire document collection, which will allow faster decompression with minimal compression loss [24]. Using a semi-static approach involves mapping words and non-words produced by the parser to single integer tokens, with frequent symbols receiving small integers, and then choosing a coding scheme that assigns small numbers a small number of bits. Words and non-words strictly alternate in the compressed file, which always begins with a word. In this instance we simply assign each symbol its ordinal number in a list of symbols sorted by frequency. We use the vbyte coding scheme to code the word tokens [22]. The set of non-words is limited to the 64 most common punctuation sequences in the collection itself, and these are encoded with a flat 6-bit binary code. The remaining 2 bits of each punctuation symbol are used to store capitalization information. The process of computing the semi-static model is complicated by the fact that the number of words and non-words appearing in large web collections is high. If we stored all words and non-words appearing in the collection, and their associated frequencies, many gigabytes of RAM or a B-tree or similar on-disk structure would be required [23]. Moffat et al.
[14] have examined schemes for pruning models during compression using large alphabets, and conclude that rarely occurring terms need not reside in the model. Rather, rare terms are spelt out in the final compressed file, using a special word token (escape symbol) to signal their occurrence. During the first pass of encoding, two move-to-front queues are kept: one for words and one for non-words. Whenever the available memory is consumed and a new symbol is discovered in the collection, an existing symbol is discarded from the end of the queue. In our implementation, we enforce the stricter condition on eviction that, where possible, the evicted symbol should have a frequency of one. If there is no symbol with frequency one in the last half of the queue, then we evict symbols of frequency two, and so on until enough space is available in the model for the new symbol. The second pass of encoding replaces each word with its vbyte encoded number, or with the escape symbol and an ASCII representation of the word if it is not in the model. Similarly, each non-word sequence is replaced with its codeword, or with the codeword for a single space character if it is not in the model. We note that this lossy compression of non-words is acceptable when the documents are used for snippet generation, but may not be acceptable for a document database. We assume that a separate sub-system would hold cached documents for other purposes where exact punctuation is important. While this semi-static scheme should allow faster decompression than the baseline, it also readily allows direct matching of query terms as compressed integers in the compressed file. That is, sentences can be scored without having to decompress a document, and only the sentences returned as part of a snippet need to be decoded. The CTS system stores all documents contiguously in one file, with an auxiliary table of 64-bit integers indicating the start offset of each document in the file. Further, it must
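The vbyte coding used for the word tokens stores seven data bits per byte, with a stopper bit marking the final byte of each integer. A minimal sketch of the standard scheme of [22] follows; the exact bit layout used inside CTS is an assumption.

```python
def vbyte_encode(n):
    """Encode a non-negative integer: 7 low bits per byte,
    high bit set only on the terminating byte."""
    out = bytearray()
    while n >= 128:
        out.append(n & 0x7F)
        n >>= 7
    out.append(n | 0x80)
    return bytes(out)

def vbyte_decode(data, pos=0):
    """Decode one integer starting at offset `pos`; return (value, next_pos)."""
    n, shift = 0, 0
    while True:
        b = data[pos]
        pos += 1
        n |= (b & 0x7F) << shift
        if b & 0x80:            # stopper bit: last byte of this value
            return n, pos
        shift += 7
```

Because frequent words receive small ordinals, they compress to single-byte codes, which is what makes matching query terms directly as integers in the compressed file cheap.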
have access to the reverse mapping of term numbers, allowing those words not spelt out in the document to be recovered and returned to the Query Engine as strings. The first of these data structures can be readily partitioned and distributed if the Snippet Engine occupies multiple machines; the second, however, is not so easily partitioned, as any document on a remote machine might require access to the whole integer-to-string mapping. This is the second reason for employing the model pruning step during construction of the semi-static code: it limits the size of the reverse mapping table that should be present on every machine implementing the Snippet Engine.

4.4 Experimental assessment of CTS

All experiments reported in this paper were run on a Sun Fire V210 Server running Solaris 10. The machine consists of dual 1.34 GHz UltraSPARC IIIi processors and 4 GB of RAM.

                     wt10g        wt50g         wt100g
No. Docs (×10^6)     1.7          10.1          18.5
Raw Text             10,522       56,684        102,833
Baseline (zlib)      2,568 (24%)  10,940 (19%)  19,252 (19%)
CTS                  2,722 (26%)  12,010 (21%)  22,269 (22%)

Table 1: Total storage space (Mb) for documents for the three test collections, both compressed and uncompressed.

Figure 3: Time to generate snippets for 10 documents per query, averaged over buckets of 100 queries, for the first 7000 Excite queries on wt10g. (Curves: Baseline, CTS with caching, CTS without caching.)

All source code was compiled using gcc 4.1.1 with -O9 optimisation. Timings were run on an otherwise unoccupied machine and were averaged over 10 runs, with memory flushed between runs to eliminate any caching of data files. In the absence of evidence to the contrary, we assume that it is important to model realistic query arrival sequences and the distribution of query repetitions for our
experiments. Consequently, test collections which lack real query logs, such as TREC ad-hoc and .GOV2, were not considered suitable. Obtaining extensive query logs and associated result doc-ids for a corresponding large collection is not easy. We have used two collections (wt10g and wt100g) from the TREC Web Track [8] coupled with queries from Excite logs from the same (c. 1997) period. Further, we also made use of a medium sized collection, wt50g, obtained by randomly sampling half of the documents from wt100g. The first two rows of Table 1 give the number of documents and the size in Mb of these collections. The final two rows of Table 1 show the size of the resulting document sets after compression with the baseline and CTS schemes. As expected, CTS admits a small compression loss over zlib, but both substantially reduce the size of the text, to about 20% of the original, uncompressed size. Note that the figures for CTS do not include the reverse mapping from integer token to string that is required to produce the final snippets, as that occupies RAM; it is 1024 Mb in these experiments.

                     wt10g   wt50g   wt100g
Baseline             75      157     183
CTS                  38      70      77
Reduction in time    49%     56%     58%

Table 2: Average time (msec) for the final 7000 queries in the Excite logs using the baseline and CTS systems on the 3 test collections.

The Zettair search engine [25] was used to produce a list of documents to summarize for each query. For the majority of the experiments the Okapi BM25 scoring scheme was used to determine document rankings. For the static caching experiments reported in Section 5, the score of each document is a 50:50 weighted average of the BM25 score (normalized by the top scoring document for each query) and a score for each document independent of any query. This is to simulate the effects of ranking algorithms like PageRank [1] on the distribution of document requests to the Snippet Engine. In our case we used the normalized Access Count [5] computed from
the top 20 documents returned to the first 1 million queries from the Excite log to determine the query independent score component. Points on Figure 3 indicate the mean running time to generate 10 snippets for each query, averaged in groups of 100 queries, for the first 7000 queries in the Excite query log. Only the data for wt10g is shown, but the other collections showed similar patterns. The x-axis indicates the group of 100 queries; for example, 20 indicates the queries 2001 to 2100. Clearly there is a caching effect, with times dropping substantially after the first 1000 or so queries are processed. All of this is due to the operating system caching disk blocks and perhaps pre-fetching data ahead of specific read requests. This must be the case, because the baseline system has no large internal data structures to take advantage of non-disk based caching: it simply opens and processes files, and yet it too shows the speed-up. Part of this gain is due to the spatial locality of disk references generated by the query stream: repeated queries will already have their document files cached in memory, and similarly, different queries that return the same documents will benefit from document caching. But even when the log is processed after removing all but the first request for each document, the pronounced speed-up is still evident as more queries are processed (not shown in the figure). This suggests that the operating system (or the disk itself) is reading and buffering a larger amount of data than the amount requested, and that this brings benefit often enough to make an appreciable difference in snippet generation times. This is confirmed by the curve labeled "CTS without caching", which was generated after mounting the filesystem with a no-caching option (directio in Solaris). With disk caching turned off, the average time to generate snippets varies little. The times to generate ten snippets for a query, averaged over the final 7000 queries in the
Excite log, after caching effects have dissipated, are shown in Table 2. Once the system has stabilized, CTS is over 50% faster than the baseline system. This is primarily due to CTS matching single integers for most query words, rather than comparing strings as in the baseline system. Table 3 shows a breakdown of the average time to generate ten snippets over the final 7000 queries of the Excite log on the wt50g collection when entire documents are processed, and when only the first half of each document is processed. As can be seen, the majority of time spent generating a snippet is in locating the document on disk (Seek): 64% for whole documents, and 75% for half documents.

% of doc processed   Seek   Read   Score & Decode
100%                 45     4      21
50%                  45     4      11

Table 3: Time to generate 10 snippets for a single query (msec) for the wt50g collection, averaged over the final 7000 Excite queries, when either all of each document is processed (100%) or just the first half of each document (50%).

Even if the amount of processing a document must undergo is halved, as in the second row of the table, there is only a 14% reduction in the total time required to generate a snippet. As locating documents in secondary storage occupies such a large proportion of snippet generation time, it seems logical to try and reduce its impact through caching.

5. DOCUMENT CACHING

In Section 3 we observed that the Snippet Engine would have its own RAM in proportion to the size of the document collection. For example, on a whole-of-Web search engine, the Snippet Engine would be distributed over many workstations, each with at least 4 Gb of RAM. In a small enterprise, the Snippet Engine may be sharing RAM with all other sub-systems on a single workstation, and hence have only 100 Mb available. In this section we use simulation to measure the number of cache hits in the Snippet Engine as memory size varies. We compare two caching policies: a static cache, where the cache is loaded with as many documents as it
can hold before the system begins answering queries, and then never changes; and a least-recently-used (LRU) cache, which starts out as for the static cache, but whenever a document is accessed it moves to the front of a queue, and if a document is fetched from disk, the last item in the queue is evicted. Note that documents are first loaded into the caches in order of decreasing query independent score, which is computed as described in Section 4.4. The simulations also assume a query cache exists for the top Q most frequent queries, and that these queries are never processed by the Snippet Engine. All queries passed into the simulations are from the second half of the Excite query log (the first half being used to compute query independent scores), and are stemmed, stopped, and have their terms sorted alphabetically. This final alteration simply allows queries such as "red dog" and "dog red" to return the same documents, as would be the case in a search engine where explicit phrase operators would be required in the query to enforce term order and proximity. Figure 4 shows the percentage of document accesses that hit cache using the two caching schemes, with Q either 0 or 10,000, on 535,276 Excite queries on wt100g. The x-axis shows the percentage of documents that are held in the cache, so 1.0% corresponds to about 185,000 documents. From this figure it is clear that caching even a small percentage of the documents has a large impact on reducing seek time for snippet generation. With 1% of documents cached, about 222 Mb for the wt100g collection, around 80% of disk seeks are avoided. The static cache performs surprisingly well (squares in Figure 4), but is outperformed by the LRU cache (circles). In an actual implementation of LRU, however, there may be fragmentation of the cache as documents are swapped in and out. The reason for the large impact of the document cache is
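The two policies can be sketched as a small hit-rate simulation. Preloading by decreasing query-independent score is modeled here simply as document ids 0, 1, 2, ... (an assumption for illustration); the function name is ours.

```python
from collections import OrderedDict

def cache_hits(accesses, cache_size, policy="lru"):
    """Count hits for a stream of document ids. The cache is preloaded with
    documents 0..cache_size-1 (highest query-independent scores first)."""
    cache = OrderedDict((doc, None) for doc in range(cache_size))
    hits = 0
    for doc in accesses:
        if doc in cache:
            hits += 1
            if policy == "lru":
                cache.move_to_end(doc)      # refresh recency on a hit
        elif policy == "lru":               # a static cache never changes
            cache.popitem(last=False)       # evict the least recently used
            cache[doc] = None               # admit the fetched document
    return hits
```

Running both policies over the same access stream reproduces the qualitative result in the text: LRU matches the static cache on its preloaded documents and additionally adapts to repeated requests for documents outside the initial load.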
Figure 4: Percentage of the time that the Snippet Engine does not have to go to disk in order to generate a snippet, plotted against the size of the document cache as a percentage of all documents in the collection. Results are from a simulation on wt100g with 535,276 Excite queries. (Curves: LRU Q=0, LRU Q=10,000, Static Q=0, Static Q=10,000.)

that, for a particular collection, some documents are much more likely to appear in results lists than others. This effect occurs partly because of the approximately Zipfian query frequency distribution, and partly because most Web search engines employ ranking methods which combine query based scores with static (a priori) scores determined from factors such as link graph measures, URL features, spam scores and so on [17]. Documents with high static scores are much more likely to be retrieved than others. In addition to the document cache, the RAM of the Snippet Engine must also hold the CTS decoding table that maps integers to strings, the size of which is capped by a parameter at compression time (1 Gb in our experiments here). This cost is more than compensated for by the reduced size of each document, allowing more documents into the document cache. For example, using CTS reduces the average document size from 5.7 Kb to 1.2 Kb (as shown in Table 1), so 2 Gb of RAM could hold 368,442 uncompressed documents (2% of the collection), or 850,691 documents plus a 1 Gb decompression table (5% of the collection). Further experimentation with the model size reveals that the model can in fact be very small while CTS still gives good compression and fast scoring times. This is evidenced in Figure 5, where the compressed size of wt50g is shown by the solid symbols. Note that when no compression is used (model size of 0 Mb), the collection is only 31 Gb, as HTML markup, JavaScript, and repeated punctuation have been discarded as described in Section 4.1. With a 5 Mb model, the collection size drops by more than half to 14 Gb, and increasing the model
size to 750 Mb only elicits a 2 Gb drop in the collection size. Figure 5 also shows the average time to score and decode a snippet (excluding seek time) with the different model sizes (open symbols). Again, there is a large speed-up when a 5 Mb model is used, but little improvement with larger models.

Figure 5: Collection size of the wt50g collection when compressed with CTS using different memory limits on the model, and the average time to generate a single snippet, excluding seek time, on 20,000 Excite queries using those models.

Similar results hold for the wt100g collection, where a model of about 10 Mb offers substantial space and time savings over no model at all, but returns diminish as the model size increases. Apart from compression, there is another approach to reducing the size of each document in the cache: do not store the full document in cache. Rather, store in the cache the sentences that are likely to be used in snippets, and if, during snippet generation on a cached document, the sentence scores do not reach a certain threshold, then retrieve the whole document from disk. This raises the questions of how to choose the sentences to put in cache and which to leave on disk, which we address in the next section.

6. SENTENCE REORDERING

Sentences within each document can be re-ordered so that sentences that are very likely to appear in snippets are at the front of the document, and hence processed first at query time, while less likely sentences are relegated to the rear of the document. Then, at query time, if k sentences with a score exceeding some threshold are found before the entire document is processed, the remainder of the document is ignored. Further, to improve caching, only the head of each document can be stored in the cache, with the tail residing on disk. Note that if the search engine is to provide cached copies of
a document (that is, the exact text of the document as it was indexed), then this would be serviced by another sub-system in Figure 1, and not from the altered copy we store in the Snippet Engine. We now introduce four sentence reordering approaches.

1. Natural order. The first few sentences of a well authored document usually best describe the document content [12]. Thus simply processing a document in order should yield a quality snippet. Unfortunately, however, web documents are often not well authored, with little editorial or professional writing skill brought to bear on their creation. More important, perhaps, is that we are producing query-biased snippets, and there is no guarantee that query terms will appear in sentences towards the front of a document.

2. Significant terms (ST). Luhn introduced the concept of a significant sentence as one containing a cluster of significant terms [12], a concept found to work well by Tombros and Sanderson [20]. Let fd,t be the frequency of term t in document d; then term t is determined to be significant if

    fd,t ≥ 7 − 0.1 × (25 − sd),  if sd < 25
           7,                    if 25 ≤ sd ≤ 40
           7 + 0.1 × (sd − 40),  otherwise,

where sd is the number of sentences in document d. A bracketed section is defined as a group of terms where the leftmost and rightmost terms are significant terms, and no significant terms in the bracketed section are divided by more than four non-significant terms. The score of a bracketed section is the square of the number of significant words falling in the section, divided by the total number of words in the entire sentence. The a priori score for a sentence is computed as the maximum of all scores for the bracketed sections of the sentence. We then sort the sentences by this score.

3. Query log based (QLt). Many Web queries repeat, and a small number of queries make up a large volume of total searches [9]. In order to take advantage of this
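Luhn's significance threshold and the bracketed-section score can be sketched directly from the definitions above; the function names are ours, and the gap of four non-significant terms follows the definition of a bracketed section.

```python
def significance_threshold(sd):
    """Frequency a term must reach to be significant in a document
    with sd sentences (the piecewise rule given above)."""
    if sd < 25:
        return 7 - 0.1 * (25 - sd)
    if sd <= 40:
        return 7.0
    return 7 + 0.1 * (sd - 40)

def st_score(words, significant, gap=4):
    """A priori ST score of one sentence: the best bracketed-section score,
    (significant count)^2 / sentence length, where a section may not contain
    a run of more than `gap` consecutive non-significant terms."""
    positions = [i for i, w in enumerate(words) if w in significant]
    if not positions:
        return 0.0
    best = run = 1
    for prev, cur in zip(positions, positions[1:]):
        run = run + 1 if cur - prev <= gap + 1 else 1
        best = max(best, run)
    return best * best / len(words)
```

Sorting a document's sentences by `st_score` in decreasing order yields the ST ordering used in the experiments.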
bias, sentences that contain many past query terms should be promoted to the front of a document, while sentences that contain few query terms should be demoted. In this scheme, the sentences are sorted by the number of sentence terms that occur in the query log. To ensure that long sentences do not dominate shorter, higher-quality sentences, the score assigned to each sentence is divided by the number of terms in that sentence, giving each sentence a score between 0 and 1.

4. Query log based (QLu). This scheme is as for QLt, but repeated terms in the sentence are only counted once.

By re-ordering sentences using schemes ST, QLt or QLu, we aim to terminate snippet generation earlier than if natural order is used, but still produce sentences with the same number of unique query terms (d in Figure 2), the same total number of query terms (c), the same positional score (h + l) and the same maximum span (k). Accordingly, we conducted experiments comparing the methods: the first 80% of the Excite query log was used to reorder sentences when required, and the final 20% for testing. Figure 6 shows the differences in snippet scoring components for each of the three methods relative to the Natural Order method. It is clear that sorting sentences using the Significant Terms (ST) method leads to the smallest change in the sentence scoring components. The component most affected by all methods is the sentence position (h + l) component of the score, which is to be expected, as there is no guarantee that leading and heading sentences are processed at all after sentences are re-ordered. The second most affected component is the number of distinct query terms in a returned sentence, but if only the first 50% of the document is processed with the ST method, there is a drop of only 8% in the number of distinct query terms found in snippets. Depending on how these various components are weighted to compute an overall snippet score, one can argue that there is little overall effect on scores when
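The two query-log orderings differ only in how repeated sentence terms are counted. A minimal sketch follows; normalizing QLu by total sentence length, as for QLt, is our reading of the description, and the function names are ours.

```python
def qlt_score(words, log_terms):
    """QLt: count sentence terms seen in the query log, with repeats,
    normalized by sentence length to a value between 0 and 1."""
    return sum(1 for w in words if w in log_terms) / len(words)

def qlu_score(words, log_terms):
    """QLu: as QLt, but each repeated sentence term is counted once."""
    return len(set(words) & log_terms) / len(words)

def reorder(sentences, log_terms, score=qlt_score):
    """Sort a document's sentences, most likely snippet material first."""
    return sorted(sentences, key=lambda s: score(s, log_terms), reverse=True)
```

Both scores depend only on the historical query log, so the ordering can be computed once, at indexing time, rather than per query.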
processing only half the document using the ST method.

Figure 6: Relative difference in the snippet score components, compared to Natural Ordered documents, when the amount of each document processed is reduced and the sentences in the document are reordered using Query Logs (QLt, QLu) or Significant Terms (ST). (Panels: Span (k), Term Count (c), Sentence Position (h + l), Distinct Terms (d).)

7. DISCUSSION

In this paper we have described the algorithms and compression scheme that would make a good Snippet Engine sub-system for generating text snippets of the type shown on the results pages of well known Web search engines. Our experiments not only show that our scheme is over 50% faster than the obvious baseline, but also reveal some very important aspects of the snippet generation problem. Primarily, caching documents avoids seek costs to secondary memory for each document that is to be summarized, and is vital for fast snippet generation. Our caching simulations show that if as little as 1% of the documents can be cached in RAM as part of the Snippet Engine, possibly distributed over many machines, then around 75% of seeks can be avoided. Our second major result is that keeping only half of each document in RAM, effectively doubling the cache size, has little effect on the quality of the final snippets generated from those half-documents, provided that the sentences that are kept in memory are chosen using the Significant Term algorithm of Luhn [12]. Both our document compression and compaction schemes dramatically reduce the time taken to generate snippets. Note that these results are generated using a 100 Gb subset of the Web, and the Excite query log gathered from the same period as that subset was created. We are assuming, as there is no evidence to the contrary, that this collection and log are representative of
search engine input in other domains. In particular, we can scale our results to examine what resources would be required, using our scheme, to provide a Snippet Engine for the entire World Wide Web. We will assume that the Snippet Engine is distributed across M machines, and that there are N web pages in the collection to be indexed and served by the search engine. We also assume a balanced load for each machine, so each machine serves about N/M documents, which is easily achieved in practice. Each machine, therefore, requires RAM to hold the following.
• The CTS model, which should be 1/1000 of the size of the uncompressed collection (using results in Figure 5 and Williams et al. [23]). Assuming an average uncompressed document size of 8 Kb [11], this would require N/M × 8.192 bytes of memory.
• A cache of 1% of all N/M documents. Each document requires 2 Kb when compressed with CTS (Table 1), and only half of each document is required using ST sentence reordering, requiring a total of N/M × 0.01 × 1024 bytes.
• The offset array that gives the start position of each document in the single, compressed file: 8 bytes for each of the N/M documents.
The total amount of RAM required by a single machine, therefore, would be N/M × (8.192 + 10.24 + 8) bytes. Assuming that each machine has 8 Gb of RAM, and that there are 20 billion pages to index on the Web, a total of M = 62 machines would be required for the Snippet Engine. Of course in practice, more machines may be required to manage the distributed system, to provide backup services for failed machines, and to provide other networking services. These machines would also need access to 37 Tb of disk to store the compressed document representations that were not in cache. In this work we have deliberately avoided committing to one particular scoring method for sentences in documents. Rather, we have reported accuracy results in terms of the four components that have been previously shown to be
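The resource estimate above can be checked arithmetically, with the constants as given in the text; N = 20 billion pages and 8 Gb of RAM per machine are the stated assumptions.

```python
import math

# Per-document RAM costs on each machine (bytes), as itemized in the text
MODEL_BYTES = 8.192          # CTS model: 1/1000 of an 8 Kb average document
CACHE_BYTES = 0.01 * 1024    # 1% of docs cached at 2 Kb each, half kept (ST)
OFFSET_BYTES = 8             # 64-bit file offset per document

N = 20e9                     # pages to serve (assumed)
RAM = 8 * 2**30              # 8 Gb of RAM per machine (assumed)

per_doc = MODEL_BYTES + CACHE_BYTES + OFFSET_BYTES   # 26.432 bytes/document
machines = math.ceil(N * per_doc / RAM)              # machines needed
disk_tb = N * 2048 / 2**40                           # full 2 Kb copies on disk
```

Evaluating the sketch gives machines = 62 and disk_tb of roughly 37, matching the figures quoted in the text.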
The CTS method can incorporate any new metrics that may arise in the future that are calculated on whole words. The document compaction techniques using sentence re-ordering, however, remove the spatial relationship between sentences, and so if a scoring technique relies on the position of a sentence within a document, the aggressive compaction techniques reported here cannot be used. A variation on the semi-static compression approach we have adopted in this work has been used successfully in previous search engine design [24], but there are alternate compression schemes that allow direct matching in compressed text (see Navarro and Mäkinen [15] for a recent survey). As seek time dominates the snippet generation process, we have not focused on this portion of the process in detail in this paper. We will explore alternate compression schemes in future work.

Acknowledgments

This work was supported in part by ARC Discovery Project DP0558916 (AT). Thanks to Nick Lester and Justin Zobel for valuable discussions.

8. REFERENCES

[1] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. In WWW7, pages 107-117, 1998.
[2] R. Fagin, R. Kumar, K. S. McCurley, J. Novak, D. Sivakumar, J. A. Tomlin, and D. P. Williamson. Searching the workplace web. In WWW2003, Budapest, Hungary, May 2003.
[3] T. Fagni, R. Perego, F. Silvestri, and S. Orlando. Boosting the performance of web search engines: Caching and prefetching query results by exploiting historical usage data. ACM Trans. Inf. Syst., 24(1):51-78, 2006.
[4] J.-L. Gailly and M. Adler. Zlib Compression Library. www.zlib.net. Accessed January 2007.
[5] S. Garcia, H. E. Williams, and A. Cannane. Access-ordered indexes. In V. Estivill-Castro, editor, Proc. Australasian Computer Science Conference, pages 7-14, Dunedin, New Zealand, 2004.
[6] S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google file system. In SOSP '03: Proc. of the 19th ACM Symposium on Operating Systems Principles, pages 29-43, New York, NY, USA, 2003. ACM Press.
[7] J. Goldstein, M. Kantrowitz, V. Mittal, and J. Carbonell. Summarizing text documents: sentence selection and evaluation metrics. In SIGIR99, pages 121-128, 1999.
[8] D. Hawking, N. Craswell, and P. Thistlewaite. Overview of TREC-7 Very Large Collection Track. In Proc. of TREC-7, pages 91-104, November 1998.
[9] B. J. Jansen, A. Spink, and J. Pedersen. A temporal comparison of AltaVista web searching. J. Am. Soc. Inf. Sci. Tech. (JASIST), 56(6):559-570, April 2005.
[10] J. Kupiec, J. Pedersen, and F. Chen. A trainable document summarizer. In SIGIR95, pages 68-73, 1995.
[11] S. Lawrence and C. L. Giles. Accessibility of information on the web. Nature, 400:107-109, July 1999.
[12] H. P. Luhn. The automatic creation of literature abstracts. IBM Journal, pages 159-165, April 1958.
[13] I. Mani. Automatic Summarization, volume 3 of Natural Language Processing. John Benjamins Publishing Company, Amsterdam/Philadelphia, 2001.
[14] A. Moffat, J. Zobel, and N. Sharman. Text compression for dynamic document databases. Knowledge and Data Engineering, 9(2):302-313, 1997.
[15] G. Navarro and V. Mäkinen. Compressed full text indexes. ACM Computing Surveys, 2007. To appear.
[16] D. R. Radev, E. Hovy, and K. McKeown. Introduction to the special issue on summarization. Comput. Linguist., 28(4):399-408, 2002.
[17] M. Richardson, A. Prakash, and E. Brill. Beyond PageRank: machine learning for static ranking. In WWW06, pages 707-715, 2006.
[18] T. Sakai and K. Sparck-Jones. Generic summaries for indexing in information retrieval. In SIGIR01, pages 190-198, 2001.
[19] H. G. Silber and K. F. McCoy. Efficiently computed lexical chains as an intermediate representation for automatic text summarization. Comput. Linguist., 28(4):487-496, 2002.
[20] A. Tombros and M. Sanderson. Advantages of query biased summaries in information retrieval. In SIGIR98, pages 2-10, Melbourne, Australia, August 1998.
[21] R. W. White, I. Ruthven, and J. M. Jose. Finding relevant documents using top ranking sentences: an evaluation of two alternative schemes. In SIGIR02, pages 57-64, 2002.
[22] H. E. Williams and J. Zobel. Compressing integers for fast file access. Comp. J., 42(3):193-201, 1999.
[23] H. E. Williams and J. Zobel. Searchable words on the Web. International Journal on Digital Libraries, 5(2):99-105, April 2005.
[24] I. H. Witten, A. Moffat, and T. C. Bell. Managing Gigabytes: Compressing and Indexing Documents and Images. Morgan Kaufmann Publishing, San Francisco, second edition, May 1999.
[25] The Zettair Search Engine. www.seg.rmit.edu.au/zettair. Accessed January 2007.

SIGIR 2007 Proceedings Session 6: Summaries

Fast Generation of Result Snippets in Web Search

ABSTRACT

The presentation of query biased document snippets as part of results pages presented by search engines has become an expectation of search engine users. In this paper we explore the algorithms and data structures required as part of a search engine to allow efficient generation of query biased snippets. We begin by proposing and analysing a document compression method that reduces snippet generation time by 58% over a baseline using the zlib compression library. These experiments reveal that finding documents on secondary storage dominates the total cost of generating snippets, and so caching documents in RAM is essential for a fast snippet generation process. Using simulation, we examine snippet generation performance for RAM caches of different sizes. Finally we propose and analyse document reordering and compaction, revealing a scheme that increases the number of document cache hits with only a marginal effect on snippet quality. This scheme effectively doubles the number of documents that can fit in a fixed size cache.

1. INTRODUCTION

Each result in the results lists delivered by current WWW search engines such as search.yahoo.com, google.com and search.msn.com typically contains the title and URL of the actual document, links to live and cached versions of the document, and sometimes an indication of file size and type. In addition, one or more snippets are usually presented, giving the searcher a sneak preview of the document contents. Snippets are short fragments of text extracted from the document content (or its metadata). They may be static (for example, always show the first 50 words of the document, or the content of its description metadata, or a description taken from a directory site such as dmoz.org) or query-biased [20]. A query-biased snippet is
one selectively extracted on the basis of its relation to the searcher's query. The addition of informative snippets to search results may substantially increase their value to searchers. Accurate snippets allow the searcher to make good decisions about which results are worth accessing and which can be ignored. In the best case, snippets may obviate the need to open any documents by directly providing the answer to the searcher's real information need, such as the contact details of a person or an organization.

Generation of query-biased snippets by Web search engines that index on the order of ten billion web pages and handle hundreds of millions of search queries per day imposes a very significant computational load (remembering that each search typically generates ten snippets). The simple-minded approach of keeping a copy of each document in a file and generating snippets by opening and scanning files works when query rates are low and collections are small, but does not scale to the degree required. The overhead of opening and reading ten files per query, on top of accessing the index structure to locate them, would be manifestly excessive under heavy query load. Even storing ten billion files and the corresponding hundreds of terabytes of data is beyond the reach of traditional filesystems. Special-purpose filesystems have been built to address these problems [6].

Note that the utility of snippets is by no means restricted to whole-of-Web search applications. Efficient generation of snippets is also important at the scale of whole-of-government search services such as www.firstgov.gov (c. 25 million pages) and govsearch.australia.gov.au (c. 5 million pages) and within large enterprises such as IBM [2] (c. 50 million pages). Snippets may be even more useful in database or filesystem search applications in which no useful URL or title information is present.

We present a new algorithm and compact single-file structure designed for rapid generation of high quality snippets, and compare its space/time performance against an obvious baseline based on the zlib compressor on various data sets. We report the proportion of time spent on disk seeks, disk reads and CPU processing, demonstrating that the time for locating each document (seek time) dominates, as expected. As the time to process a document in RAM is small in comparison to locating and reading the document into memory, it may seem that compression is not required. However, this is only true if there is no caching of documents in RAM. Controlling the RAM of physical systems for experimentation is difficult, hence we use simulation to show that caching documents dramatically improves the performance of snippet generation. In turn, the more documents can be compressed, the more can fit in cache, and hence the more disk seeks can be avoided: the classic data compression tradeoff that is exploited in inverted file structures and computing ranked document lists [24]. As hitting the document cache is important, we examine document compaction, as opposed to compression, schemes by imposing an a priori ordering of sentences within a document, and then only allowing leading sentences into cache for each document. This leads to further time savings, with only marginal impact on the quality of the snippets returned.

2. RELATED WORK

Snippet generation is a special type of extractive document summarization, in which sentences, or sentence fragments, are selected for inclusion in the summary on the basis of the degree to which they match the search query. This process was given the name of query-biased summarization by Tombros and Sanderson [20]. The reader is referred to Mani [13] and to Radev et al. [16] for overviews of the very many different applications of summarization and of the equally diverse methods for producing summaries.

Early Web search engines presented query-independent snippets consisting of the first k bytes of the result document. Generating these is clearly much simpler and much less computationally expensive than processing documents to extract query biased summaries, as there is no need to search the document for text fragments containing query terms. To our knowledge, Google was the first whole-of-Web search engine to provide query biased summaries, but summarization is listed by Brin and Page [1] only under the heading of future work. Most of the experimental work using query-biased summarization has focused on comparing their value to searchers relative to other types of summary [20, 21], rather than efficient generation of summaries. Despite the importance of efficient summary generation in Web search, few algorithms appear in the literature. Silber and McCoy [19] describe a linear-time lexical chaining algorithm for use in generic summaries, but offer no empirical data for the performance of their algorithm. White et al. [21] report some experimental timings of their WebDocSum system, but the snippet generation algorithms themselves are not isolated, so it is difficult to infer snippet generation time comparable to the times we report in this paper.

3. SEARCH ENGINE ARCHITECTURES

A search engine must perform a variety of activities, and is comprised of many sub-systems, as depicted by the boxes in Figure 1. Note that there may be several other sub-systems such as the "Advertising Engine" or the "Parsing Engine" that could easily be added to the diagram, but we have concentrated on the sub-systems that are relevant to snippet generation.

Figure 1: An abstraction of some of the sub-systems in a search engine. Depending on the number of documents indexed, each sub-system could reside on a single machine, be distributed across thousands of machines, or a combination of both.

Depending on the number of documents that the search engine indexes, the data and processes for each
sub-system could be distributed over many machines, or all occupy a single server and filesystem, competing with each other for resources. Similarly, it may be more efficient to combine some sub-systems in an implementation of the diagram. For example, the meta-data such as document title and URL requires minimal computation apart from highlighting query words, but we note that disk seeking is likely to be minimized if title, URL and fixed summary information is stored contiguously with the text from which query biased summaries are extracted. Here we ignore the fixed text and consider only the generation of query biased summaries: we concentrate on the Snippet Engine.

In addition to data and programs operating on that data, each sub-system also has its own memory management scheme. The memory management system may simply be the memory hierarchy provided by the operating system used on machines in the sub-system, or it may be explicitly coded to optimise the processes in the sub-system. There are many papers on caching in search engines (see [3] and references therein for a current summary), but it seems reasonable that there is a query cache in the Query Engine that stores precomputed final result pages for very popular queries. When one of the popular queries is issued, the result page is fetched straight from the query cache. If the issued query is not in the query cache, then the Query Engine uses the four sub-systems in turn to assemble a results page:

1. The Lexicon Engine maps query terms to integers.
2. The Ranking Engine retrieves inverted lists for each term, using them to get a ranked list of documents.
3. The Snippet Engine uses those document numbers and query term numbers to generate snippets.
4. The Meta Data Engine fetches other information about each document to construct the results page.
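To make the control flow concrete, the cache-then-four-engines flow can be sketched as follows. This is a toy illustration only; all names, the tiny vocabulary, and the placeholder ranked list are invented for the example, and the paper does not prescribe an API:

```python
# Toy sketch of the query flow: cache check, then the four sub-systems in turn.
query_cache = {}  # popular query -> precomputed results page

def lexicon(terms):        # Lexicon Engine: map query terms to integers
    vocab = {"snippet": 1, "engine": 2, "cache": 3}   # invented vocabulary
    return [vocab.get(t, 0) for t in terms]

def rank(term_ids):        # Ranking Engine: inverted lists -> ranked doc ids
    return [7, 3, 9]       # placeholder ranked list for illustration

def snippets(doc_ids, term_ids):   # Snippet Engine: per-document summaries
    return {d: f"...snippet for doc {d}..." for d in doc_ids}

def metadata(doc_ids):     # Meta Data Engine: titles, URLs, etc.
    return {d: f"title-{d}" for d in doc_ids}

def results_page(query):
    if query in query_cache:       # popular query: served straight from cache
        return query_cache[query]
    term_ids = lexicon(query.split())
    docs = rank(term_ids)
    page = [(metadata(docs)[d], snippets(docs, term_ids)[d]) for d in docs]
    query_cache[query] = page      # subsequent identical queries hit the cache
    return page
```

A second call with the same query string returns the cached page without touching the four engines, which is the behaviour the text attributes to the query cache.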
the results page.

4. THE SNIPPET ENGINE

For each document identifier passed to the Snippet Engine, the engine must generate text, preferably containing query terms, that attempts to summarize that document. Previous work on summarization identifies the sentence as the minimal unit for extraction and presentation to the user [12]. Accordingly, we also assume a web snippet extraction process will extract sentences from documents. In order to construct a snippet, all sentences in a document should be ranked against the query, and the top two or three returned as the snippet. The scoring of sentences against queries has been explored in several papers [7, 12, 18, 20, 21], with different features of sentences deemed important. Based on these observations, Figure 2 shows the general algorithm for scoring sentences in relevant documents, with the highest scoring sentences making the snippet for each document.

IN: A document broken into one sentence per line, and a sequence of query terms.
1 For each line of the text, L = [w1, w2, ..., wm]:
2   Let h be 1 if L is a heading, 0 otherwise.
3   Let l be 2 if L is the first line of a document, 1 if it is the second line, 0 otherwise.
4   Let c be the number of wi that are query terms, counting repetitions.
5   Let d be the number of distinct query terms that match some wi.
6   Identify the longest contiguous run of query terms in L, say wj ... wj+k.
7   Use a weighted combination of c, d, k, h and l to derive a score s.
8   Insert L into a max-heap using s as the key.
OUT: Remove the number of sentences required from the heap to form the summary.

Figure 2: Simple sentence ranker that operates on raw text with one sentence per line.

The final score of a sentence, assigned in Step 7, can be derived in many different ways. In order to avoid bias towards any particular scoring mechanism, we compare sentence quality later in the paper using the individual components of the score, rather than an arbitrary combination of the components.
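The ranker of Figure 2 translates directly into code. In the sketch below the Step 7 weights are all set to one, since the text deliberately avoids committing to a particular weighting; the function names, the equal weights, and the heading flag handling are our own choices for illustration:

```python
import heapq

def score_sentence(words, query_terms, is_heading=False, line_no=3):
    """Score one sentence (a list of words) against a set of query terms."""
    h = 1 if is_heading else 0                              # Step 2: heading flag
    l = 2 if line_no == 1 else (1 if line_no == 2 else 0)   # Step 3: leading lines
    c = sum(1 for w in words if w in query_terms)           # Step 4: term count
    d = len(query_terms & set(words))                       # Step 5: distinct terms
    k = run = 0                                             # Step 6: longest run
    for w in words:
        run = run + 1 if w in query_terms else 0
        k = max(k, run)
    return c + d + k + h + l   # Step 7: equal weights, a placeholder combination

def top_sentences(sentences, query_terms, n=2):
    """Steps 8 and OUT: rank all sentences, return the n best for the snippet."""
    heap = [(-score_sentence(s.split(), query_terms, line_no=i + 1), i, s)
            for i, s in enumerate(sentences)]
    heapq.heapify(heap)        # min-heap on negated scores acts as a max-heap
    return [heapq.heappop(heap)[2] for _ in range(min(n, len(heap)))]
```

For example, given a three-sentence document and the query terms {"fast", "snippet"}, the sentence containing the query terms outranks the leading sentence despite the leading-line bonus.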
than an arbitrary combination of the components.
4.1 Parsing Web Documents
Unlike the well-edited text collections that are often the target of summarization systems, Web data is often poorly structured and poorly punctuated, and contains a lot of material that does not form valid sentences suitable for inclusion in snippets.
We assume that the documents passed to the Snippet Engine by the Indexing Engine have all HTML tags and JavaScript removed, and that each document is reduced to a series of word tokens separated by non-word tokens.
We define a word token as a sequence of alphanumeric characters, while a non-word is a sequence of non-alphanumeric characters such as whitespace and other punctuation symbols.
Both are limited to a maximum of 50 characters.
Adjacent repeating characters are removed from the punctuation.
Included in the punctuation set is a special end-of-sentence marker which replaces the usual sentence terminators "?", "!", and ".".
Often these explicit punctuation characters are missing, and so certain HTML tags are assumed to terminate sentences.
In addition, a sentence must contain at least five words and no more than twenty words, with longer or shorter sentences being broken and joined as required to meet these criteria [10].
Unterminated HTML tags, that is, tags with an open bracket but no close bracket, cause all text from the open bracket to the next open bracket to be discarded.
A major problem in summarizing web pages is the presence of large amounts of promotional and navigational material ("navbars") visually above and to the left of the actual page content; for example: "The most wonderful company on earth. Products. Service. About us. Contact us. Try before you buy."
Similar, but often not identical, navigational material is typically presented on every page within a site.
This material tends to lower the quality of summaries and slow down summary generation.
In our experiments we did not use any particular heuristics for removing navigational information, as the test collection in use (WT100G) pre-dates the widespread take-up of the current style of web publishing.
In WT100G, the average web page size is more than half the current Web average [11].
Anecdotally, the increase is due to the inclusion of sophisticated navigational and interface elements and the JavaScript functions that support them.
Having defined the format of documents presented to the Snippet Engine, we now define our Compressed Token System (CTS) document storage scheme, and the baseline system used for comparison.
4.2 Baseline Snippet Engine
An obvious document representation scheme is to simply compress each document with a well-known adaptive compressor, and then decompress the document as required [1], using a string matching algorithm to effect the algorithm in Figure 2.
Accordingly, we implemented such a system, using zlib [4] with default parameters to compress every document after it has been parsed as in Section 4.1.
Each document is stored in a single file.
While
manageable for our small test collections, or for small enterprises with millions of documents, a full Web search engine may require multiple documents to inhabit single files, or a special-purpose filesystem [6].
For snippet generation, the required documents are decompressed one at a time, and a linear search for the provided query terms is employed.
The search is optimized for our specific task, which is restricted to matching whole words and the sentence-terminating token, rather than general pattern matching.
4.3 The CTS Snippet Engine
Several optimizations over the baseline are possible.
The first is to employ a semi-static compression method over the entire document collection, which allows faster decompression with minimal compression loss [24].
Using a semi-static approach involves mapping words and non-words produced by the parser to single integer tokens, with frequent symbols receiving small integers, and then choosing a coding scheme that assigns small numbers a small number of bits.
Words and non-words strictly alternate in the compressed file, which always begins with a word.
In this instance we simply assign each symbol its ordinal number in a list of symbols sorted by frequency.
We use the vbyte coding scheme to code the word tokens [22].
The set of non-words is limited to the 64 most common punctuation sequences in the collection itself, which are encoded with a flat 6-bit binary code.
The remaining 2 bits of each punctuation symbol are used to store capitalization information.
The process of computing the semi-static model is complicated by the fact that the number of words and non-words appearing in large web collections is high.
If we stored all words and non-words appearing in the collection, and their associated frequencies, many gigabytes of RAM or a B-tree or similar on-disk structure would be required [23].
Moffat et al.
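As a concrete illustration, one common variant of vbyte coding (a sketch only; the exact byte layout used in [22] may differ) stores seven payload bits per byte and flags the final byte of each integer:

```python
def vbyte_encode(n: int) -> bytes:
    """Encode a non-negative integer in 7-bit groups, least-significant
    group first; the high bit is set on the final byte of the code."""
    out = bytearray()
    while n >= 0x80:
        out.append(n & 0x7F)  # 7 payload bits, continuation implied
        n >>= 7
    out.append(n | 0x80)      # high bit marks the last byte
    return bytes(out)

def vbyte_decode(data: bytes, pos: int = 0):
    """Decode one integer starting at `pos`; return (value, next_pos)."""
    value, shift = 0, 0
    while True:
        b = data[pos]
        pos += 1
        value |= (b & 0x7F) << shift
        shift += 7
        if b & 0x80:          # final byte of this code
            return value, pos
```

Because frequent symbols are assigned small ordinals, most word tokens occupy a single byte, and query terms can be matched directly against these coded integers without decompressing the document.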
[14] have examined schemes for pruning models during compression using large alphabets, and conclude that rarely occurring terms need not reside in the model.
Rather, rare terms are spelt out in the final compressed file, using a special word token (ESCAPE symbol) to signal their occurrence.
During the first pass of encoding, two move-to-front queues are kept: one for words and one for non-words.
Whenever the available memory is consumed and a new symbol is discovered in the collection, an existing symbol is discarded from the end of the queue.
In our implementation, we enforce the stricter condition on eviction that, where possible, the evicted symbol should have a frequency of one.
If there is no symbol with frequency one in the last half of the queue, then we evict symbols of frequency two, and so on until enough space is available in the model for the new symbol.
The second pass of encoding replaces each word with its vbyte-encoded number, or with the ESCAPE symbol and an ASCII representation of the word if it is not in the model.
Similarly, each non-word sequence is replaced with its codeword, or with the codeword for a single space character if it is not in the model.
We note that this lossy compression of non-words is acceptable when the documents are used for snippet generation, but may not be acceptable for a document database.
We assume that a separate sub-system would hold cached documents for other purposes where exact punctuation is important.
While this semi-static scheme should allow faster decompression than the baseline, it also readily allows direct matching of query terms as compressed integers in the compressed file.
That is, sentences can be scored without having to decompress a document, and only the sentences returned as part of a snippet need to be decoded.
The CTS system stores all documents contiguously in one file, with an auxiliary table of 64-bit integers indicating the start offset of each document in that file.
Further, it must
have access to the reverse mapping of term numbers, allowing those words not spelt out in the document to be recovered and returned to the Query Engine as strings.
The first of these data structures can be readily partitioned and distributed if the Snippet Engine occupies multiple machines; the second, however, is not so easily partitioned, as any document on a remote machine might require access to the whole integer-to-string mapping.
This is the second reason for employing the model pruning step during construction of the semi-static code: it limits the size of the reverse mapping table that should be present on every machine implementing the Snippet Engine.
4.4 Experimental Assessment of CTS
All experiments reported in this paper were run on a Sun Fire V210 Server running Solaris 10.
The machine consists of dual 1.34 GHz UltraSPARC IIIi processors and 4 GB of RAM.
Table 1: Total storage space (Mb) for documents in the three test collections, both compressed and uncompressed.
Figure 3: Time to generate snippets for 10 documents per query, averaged over buckets of 100 queries, for the first 7000 Excite queries on WT10G (x-axis: queries grouped in 100s).
All source code was compiled using gcc 4.1.1 with -O9 optimisation.
Timings were run on an otherwise unoccupied machine and were averaged over 10 runs, with memory flushed between runs to eliminate any caching of data files.
In the absence of evidence to the contrary, we assume that it is important to model realistic query arrival sequences and the distribution of query repetitions for our experiments.
Consequently, test collections which lack real query logs, such as TREC Ad Hoc and .GOV2, were not considered suitable.
Obtaining extensive query logs and associated result doc-ids for a corresponding large collection is not easy.
We have used two collections (WT10G and WT100G) from the TREC Web Track [8] coupled with queries from Excite logs from the same (c.
1997) period.
Further, we also made use of a medium-sized collection, WT50G, obtained by randomly sampling half of the documents from WT100G.
The first two rows of Table 1 give the number of documents and the size in Mb of these collections.
The final two rows of Table 1 show the size of the resulting document sets after compression with the baseline and CTS schemes.
As expected, CTS admits a small compression loss over zlib, but both substantially reduce the size of the text, to about 20% of the original uncompressed size.
Note that the figures for CTS do not include the reverse mapping from integer token to string that is required to produce the final snippets, as that occupies RAM.
It is 1024 Mb in these experiments.
The Zettair search engine [25] was used to produce a list of documents to summarize for each query.
For the majority of the experiments the Okapi BM25 scoring scheme was used to determine document rankings.
Table 2: Average time (msec) for the final 7000 queries in the Excite logs using the baseline and CTS systems on the 3 test collections.
For the static caching experiments reported in Section 5, the score of each document is a 50:50 weighted average of the BM25 score (normalized by the top-scoring document for each query) and a score for each document independent of any query.
This is to simulate the effect of ranking algorithms like PageRank [1] on the distribution of document requests to the Snippet Engine.
In our case we used the normalized Access Count [5], computed from the top 20 documents returned to the first 1 million queries from the Excite log, to determine the query-independent score component.
Points on Figure 3 indicate the mean running time to generate 10 snippets for each query, averaged in groups of 100 queries, for the first 7000 queries in the Excite query log.
Only the data for WT10G is shown, but the other collections showed similar patterns.
The x-axis indicates the group of 100 queries; for example, 20 indicates
the queries 2001 to 2100.
Clearly there is a caching effect, with times dropping substantially after the first 1000 or so queries are processed.
All of this is due to the operating system caching disk blocks and perhaps pre-fetching data ahead of specific read requests.
This is evident because the baseline system, which has no large internal data structures to take advantage of non-disk-based caching and simply opens and processes files, shows the same speed-up.
Part of this gain is due to the spatial locality of disk references generated by the query stream: repeated queries will already have their document files cached in memory, and similarly, different queries that return the same documents will benefit from document caching.
But when the log is processed after removing all but the first request for each document, the pronounced speed-up is still evident as more queries are processed (not shown in the figure).
This suggests that the operating system (or the disk itself) is reading and buffering a larger amount of data than the amount requested, and that this brings benefit often enough to make an appreciable difference in snippet generation times.
This is confirmed by the curve labeled "CTS without caching", which was generated after mounting the filesystem with a no-caching option (directio in Solaris).
With disk caching turned off, the average time to generate snippets varies little.
The times to generate ten snippets for a query, averaged over the final 7000 queries in the Excite log (by which point caching effects have dissipated), are shown in Table 2.
Once the system has stabilized, CTS is over 50% faster than the baseline system.
This is primarily due to CTS matching single integers for most query words, rather than comparing strings as in the baseline system.
Table 3 shows a breakdown of the average time to generate ten snippets over the final 7000 queries of the Excite log on the WT50G collection when entire documents are processed, and
when only the first half of each document is processed.
Table 3: Time to generate 10 snippets for a single query (msec) for the WT50G collection, averaged over the final 7000 Excite queries, when either all of each document is processed (100%) or just the first half of each document (50%).
As can be seen, the majority of time spent generating a snippet is in locating the document on disk ("Seek"): 64% for whole documents, and 75% for half documents.
Even if the amount of processing a document must undergo is halved, as in the second row of the table, there is only a 14% reduction in the total time required to generate a snippet.
As locating documents in secondary storage occupies such a large proportion of snippet generation time, it seems logical to try to reduce its impact through caching.
5. DOCUMENT CACHING
In Section 3 we observed that the Snippet Engine would have its own RAM in proportion to the size of the document collection.
For example, in a whole-of-Web search engine, the Snippet Engine would be distributed over many workstations, each with at least 4 Gb of RAM.
In a small enterprise, the Snippet Engine may be sharing RAM with all other sub-systems on a single workstation, and hence have only 100 Mb available.
In this section we use simulation to measure the number of cache hits in the Snippet Engine as memory size varies.
We compare two caching policies: a static cache, where the cache is loaded with as many documents as it can hold before the system begins answering queries, and then never changes; and a least-recently-used (LRU) cache, which starts out as for the static cache, but whenever a document is accessed it moves to the front of a queue, and if a document is fetched from disk, the last item in the queue is evicted.
Note that documents are first loaded into the caches in order of decreasing query-independent score, which is computed as described in Section 4.4.
The simulations also assume a query cache exists for the top Q most frequent
queries, and that these queries are never processed by the Snippet Engine.
All queries passed into the simulations are from the second half of the Excite query log (the first half being used to compute query-independent scores), and are stemmed, stopped, and have their terms sorted alphabetically.
This final alteration simply allows queries such as "red dog" and "dog red" to return the same documents, as would be the case in a search engine where explicit phrase operators would be required in the query to enforce term order and proximity.
Figure 4 shows the percentage of document accesses that hit the cache using the two caching schemes, with Q either 0 or 10,000, on 535,276 Excite queries on WT100G.
The x-axis shows the percentage of documents that are held in the cache, so 1.0% corresponds to about 185,000 documents.
From this figure it is clear that caching even a small percentage of the documents has a large impact on reducing seek time for snippet generation.
With 1% of documents cached, about 222 Mb for the WT100G collection, around 80% of disk seeks are avoided.
The static cache performs surprisingly well (squares in Figure 4), but is outperformed by the LRU cache (circles).
In an actual implementation of LRU, however, there may be fragmentation of the cache as documents are swapped in and out.
Figure 4: Percentage of the time that the Snippet Engine does not have to go to disk in order to generate a snippet, plotted against the size of the document cache as a percentage of all documents in the collection. Results are from a simulation on WT100G with 535,276 Excite queries.
The reason for the large impact of the document cache is that, for a particular collection, some documents are much more likely to appear in results lists than others.
This effect occurs partly because of the approximately Zipfian query frequency distribution, and partly because most Web search engines employ ranking methods which combine query-based
scores with static (a priori) scores determined from factors such as link graph measures, URL features, spam scores, and so on [17].
Documents with high static scores are much more likely to be retrieved than others.
In addition to the document cache, the RAM of the Snippet Engine must also hold the CTS decoding table that maps integers to strings, whose size is capped by a parameter at compression time (1 Gb in our experiments here).
This is more than compensated for by the reduced size of each document, allowing more documents into the document cache.
For example, using CTS reduces the average document size from 5.7 Kb to 1.2 Kb (as shown in Table 1), so 2 Gb of RAM could hold 368,442 uncompressed documents (2% of the collection), or 850,691 compressed documents plus a 1 Gb decompression table (5% of the collection).
Further experimentation with the model size reveals that the model can in fact be very small while CTS still gives good compression and fast scoring times.
This is evidenced in Figure 5, where the compressed size of WT50G is shown with the solid symbols.
Note that when no compression is used (model size of 0 Mb), the collection is only 31 Gb, as HTML markup, JavaScript, and repeated punctuation have been discarded as described in Section 4.1.
With a 5 Mb model, the collection size drops by more than half to 14 Gb, while increasing the model size to 750 Mb only elicits a further 2 Gb drop in collection size.
Figure 5 also shows the average time to score and decode a snippet (excluding seek time) with the different model sizes (open symbols).
Figure 5: Collection size of the WT50G collection when compressed with CTS using different memory limits on the model, and the average time to generate a single snippet, excluding seek time, on 20000 Excite queries using those models.
Again, there is a large speed-up when a 5 Mb model is used, but little improvement with larger models.
Similar results hold for the WT100G collection, where a model of about 10 Mb offers
substantial space and time savings over no model at all, but returns diminish as the model size increases.
Apart from compression, there is another approach to reducing the size of each document in the cache: do not store the full document in cache.
Rather, store in the cache those sentences that are likely to be used in snippets, and if, during snippet generation on a cached document, the sentence scores do not reach a certain threshold, retrieve the whole document from disk.
This raises the questions of how to choose which sentences from a document to put in cache and which to leave on disk, which we address in the next section.
6. SENTENCE REORDERING
Sentences within each document can be re-ordered so that sentences that are very likely to appear in snippets are at the front of the document, and hence processed first at query time, while less likely sentences are relegated to the rear of the document.
Then, during query time, if k sentences with a score exceeding some threshold are found before the entire document is processed, the remainder of the document is ignored.
Further, to improve caching, only the head of each document can be stored in the cache, with the tail residing on disk.
Note that if the search engine is to provide "cached copies" of a document, that is, the exact text of the document as it was indexed, we assume this would be serviced by another sub-system in Figure 1, and not from the altered copy we store in the Snippet Engine.
We now introduce four sentence reordering approaches.
1. Natural order The first few sentences of a well-authored document usually best describe the document content [12].
Thus simply processing a document in order should yield a quality snippet.
Unfortunately, however, web documents are often not well authored, with little editorial or professional writing skill brought to bear on the creation of a work of literary merit.
More important, perhaps, is that we are producing query-biased snippets, and there is no
guarantee that query terms will appear in sentences towards the front of a document.
2. Significant terms (ST) Luhn introduced the concept of a significant sentence as one containing a cluster of significant terms [12], a concept found to work well by Tombros and Sanderson [20].
Let f_{d,t} be the frequency of term t in document d; then term t is determined to be significant if
f_{d,t} >= 7 - 0.1 (25 - s_d) if s_d < 25; 7 if 25 <= s_d <= 40; 7 + 0.1 (s_d - 40) otherwise,
where s_d is the number of sentences in document d.
A bracketed section is defined as a group of terms where the leftmost and rightmost terms are significant terms, and no significant terms in the bracketed section are divided by more than four non-significant terms.
The score of a bracketed section is the square of the number of significant words falling in the section, divided by the total number of words in the entire sentence.
The a priori score for a sentence is computed as the maximum of the scores of all bracketed sections of the sentence.
We then sort the sentences by this score.
3. Query log based (QLt) Many Web queries repeat, and a small number of queries make up a large volume of total searches [9].
In order to take advantage of this bias, sentences that contain many past query terms should be promoted to the front of a document, while sentences that contain few query terms should be demoted.
In this scheme, the sentences are sorted by the number of sentence terms that occur in the query log.
To ensure that long sentences do not dominate shorter quality sentences, the score assigned to each sentence is divided by the number of terms in that sentence, giving each sentence a score between 0 and 1.
4. Query log based (QLu) This scheme is as for QLt, but repeated terms in the sentence are counted only once.
By re-ordering sentences using schemes ST, QLt, or QLu, we aim to terminate snippet generation earlier than if natural order is used, but still produce sentences with the same number of unique query terms (d in Figure 2), total number of query terms (c), the same
positional score (h + B), and the same maximum span (k).
Accordingly, we conducted experiments comparing the methods: the first 80% of the Excite query log was used to reorder sentences when required, and the final 20% was used for testing.
Figure 6 shows the differences in snippet scoring components for each of the three methods relative to the Natural Order method.
It is clear that sorting sentences using the Significant Terms (ST) method leads to the smallest change in the sentence scoring components.
The greatest change over all methods is in the sentence position (h + B) component of the score, which is to be expected, as there is no guarantee that leading and heading sentences are processed at all after sentences are re-ordered.
The second most affected component is the number of distinct query terms in a returned sentence, but if only the first 50% of the document is processed with the ST method, there is a drop of only 8% in the number of distinct query terms found in snippets.
Depending on how these various components are weighted to compute an overall snippet score, one can argue that there is little overall effect on scores when processing only half the document using the ST method.
Figure 6: Relative difference in the snippet score components compared to Natural Ordered documents when the amount of each document processed (x-axis: document size used) is reduced, and the sentences in the document are reordered using Query Logs (QLt, QLu) or Significant Terms (ST).
7. DISCUSSION
In this paper we have described the algorithms and compression scheme that would make a good Snippet Engine sub-system for generating text snippets of the type shown on the results pages of well-known Web search engines.
Our experiments not only show that our scheme is over 50% faster than the obvious baseline, but also reveal some very important aspects of the snippet generation problem.
Primarily, caching documents avoids seek costs to secondary memory for each document that is to be summarized,
and is vital for fast snippet generation.
Our caching simulations show that if as little as 1% of the documents can be cached in RAM as part of the Snippet Engine, possibly distributed over many machines, then around 75% of seeks can be avoided.
Our second major result is that keeping only half of each document in RAM, effectively doubling the cache size, has little effect on the quality of the final snippets generated from those half-documents, provided that the sentences kept in memory are chosen using the Significant Term algorithm of Luhn [12].
Both our document compression and compaction schemes dramatically reduce the time taken to generate snippets.
Note that these results were generated using a 100 Gb subset of the Web, and the Excite query log gathered from the same period as that subset was created.
We assume, as there is no evidence to the contrary, that this collection and log are representative of search engine input in other domains.
In particular, we can scale our results to examine what resources would be required, using our scheme, to provide a Snippet Engine for the entire World Wide Web.
We will assume that the Snippet Engine is distributed across M machines, and that there are N web pages in the collection to be indexed and served by the search engine.
We also assume a balanced load for each machine, so each machine serves about N/M documents, which is easily achieved in practice.
Each machine, therefore, requires RAM to hold the following.
• The CTS model, which should be 1/1000 of the size of the uncompressed collection (using results in Figure 5 and Williams et al.
[23]).
Assuming an average uncompressed document size of 8 Kb [11], this would require N/M × 8.192 bytes of memory.
• A cache of 1% of all N/M documents.
Each document requires 2 Kb when compressed with CTS (Table 1), and only half of each document is required using ST sentence reordering, for a total of N/M × 0.01 × 1024 bytes.
• The offset array that gives the start position of each document in the single compressed file: 8 bytes for each of the N/M documents.
The total amount of RAM required by a single machine, therefore, would be N/M × (8.192 + 10.24 + 8) bytes.
Assuming that each machine has 8 Gb of RAM, and that there are 20 billion pages to index on the Web, a total of M = 62 machines would be required for the Snippet Engine.
Of course, in practice, more machines may be required to manage the distributed system, to provide backup services for failed machines, and to provide other networking services.
These machines would also need access to 37 Tb of disk to store the compressed document representations that were not in cache.
In this work we have deliberately avoided committing to one particular scoring method for sentences in documents.
Rather, we have reported accuracy results in terms of the four components that have previously been shown to be important in determining useful snippets [20].
The CTS method can incorporate any new metrics that may arise in the future, provided they are calculated on whole words.
The document compaction techniques using sentence re-ordering, however, remove the spatial relationship between sentences, and so if a scoring technique relies on the position of a sentence within a document, the aggressive compaction techniques reported here cannot be used.
A variation on the semi-static compression approach we have adopted in this work has been used successfully in previous search engine design [24], but there are alternative compression schemes that allow direct matching in compressed text (see Navarro and Mäkinen [15] for a recent survey).
As seek time dominates the snippet generation process, we have not focused on this portion of snippet generation in detail in this paper.
We will explore alternate compression schemes in future work.
Robust Test Collections for Retrieval Evaluation
ABSTRACT
Low-cost methods for acquiring relevance judgments can be a boon to researchers who need to evaluate new retrieval tasks or topics but do not have the resources to make thousands of judgments.
While these judgments are very useful for a one-time evaluation, it is not clear that they can be trusted when re-used to evaluate new systems.
In this work, we formally define what it means for judgments to be reusable: the confidence in an evaluation of new systems can be accurately assessed from an existing set of relevance judgments.
We then present a method for augmenting a set of relevance judgments with relevance estimates that require no additional assessor effort.
Using this method practically guarantees reusability: with as few as five judgments per topic taken from only two systems, we can reliably evaluate a larger set of ten systems.
Even the smallest sets of judgments can be useful for evaluation of new systems.
Ben Carterette, Center for Intelligent Information Retrieval, Computer Science Department, University of Massachusetts Amherst, Amherst, MA 01003, carteret@cs.umass.edu
Categories and Subject Descriptors: H.3 Information Storage and Retrieval; H.3.4 Systems and Software: Performance Evaluation
General Terms: Experimentation, Measurement, Reliability
1. INTRODUCTION
Consider an information retrieval researcher who has invented a new retrieval task.
She has built a system to perform the task and wants to evaluate it.
Since the task is new, it is unlikely that there are any extant relevance judgments.
She does not have the time or resources to judge every document, or even every retrieved document.
She can only judge the documents that seem to be the most informative, and stop when she has a reasonable degree of confidence in her conclusions.
But what happens when she develops a new system and needs to
evaluate it?\nOr another research group decides to implement a system to perform the task?\nCan they reliably reuse the original judgments?\nCan they evaluate without more relevance judgments?\nEvaluation is an important aspect of information retrieval research, but it is only a semi-solved problem: for most retrieval tasks, it is impossible to judge the relevance of every document; there are simply too many of them.\nThe solution used by NIST at TREC (Text REtrieval Conference) is the pooling method [19, 20]: all competing systems contribute N documents to a pool, and every document in that pool is judged.\nThis method creates large sets of judgments that are reusable for training or evaluating new systems that did not contribute to the pool [21].\nThis solution is not adequate for our hypothetical researcher.\nThe pooling method gives thousands of relevance judgments, but it requires many hours of (paid) annotator time.\nAs a result, there have been a slew of recent papers on reducing annotator effort in producing test collections: Cormack et al. [11], Zobel [21], Sanderson and Joho [17], Carterette et al. [8], and Aslam et al. 
[4], among others.\nAs we will see, the judgments these methods produce can significantly bias the evaluation of a new set of systems.\nReturning to our hypothetical researcher, can she reuse her relevance judgments?\nFirst we must formally define what it means to be reusable.\nIn previous work, reusability has been tested by simply assessing the accuracy of a set of relevance judgments at evaluating unseen systems.\nWhile we can say that it was right 75% of the time, or that it had a rank correlation of 0.8, these numbers do not have any predictive power: they do not tell us which systems are likely to be wrong or how confident we should be in any one.\nWe need a more careful definition of reusability.\nSpecifically, the question of reusability is not how accurately we can evaluate new systems.\nA malicious adversary can always produce a new ranked list that has not retrieved any of the judged documents.\nThe real question is how much confidence we have in our evaluations, and, more importantly, whether we can trust our estimates of confidence.\nEven if confidence is not high, as long as we can trust it, we can identify which systems need more judgments in order to increase confidence.\nAny set of judgments, no matter how small, becomes reusable to some degree.\nSmall, reusable test collections could have a huge impact on information retrieval research.\nResearch groups would be able to share the relevance judgments they have done in-house for pilot studies, new tasks, or new topics.\nThe amount of data available to researchers would grow exponentially over time.\n2.\nROBUST EVALUATION Above we gave an intuitive definition of reusability: a collection is reusable if we can trust our estimates of confidence in an evaluation.\nBy that we mean that if we have made some relevance judgments and have, for example, 75% confidence that system A is better than system B, we would like there to be no more than 25% chance that our assessment of the relative quality of the 
systems will change as we continue to judge documents.\nOur evaluation should be robust to missing judgments.\nIn our previous work, we defined confidence as the probability that the difference in an evaluation measure calculated for two systems is less than zero [8].\nThis notion of confidence is defined in the context of a particular evaluation task that we call comparative evaluation: determining the sign of the difference in an evaluation measure.\nOther evaluation tasks could be defined; estimating the magnitude of the difference or the values of the measures themselves are examples that entail different notions of confidence.\nWe therefore see confidence as a probability estimate.\nOne of the questions we must ask about a probability estimate is what it means.\nWhat does it mean to have 75% confidence that system A is better than system B?\nAs described above, we want it to mean that if we continue to judge documents, there will only be a 25% chance that our assessment will change.\nIf this is what it means, we can trust the confidence estimates.\nBut do we know it has that meaning?\nOur calculation of confidence rested on an assumption about the probability of relevance of unjudged documents, specifically that each unjudged document was equally likely to be relevant or nonrelevant.\nThis assumption is almost certainly not realistic in most IR applications.\nAs it turns out, it is this assumption that determines whether the confidence estimates can be trusted.\nBefore elaborating on this, we formally define confidence.\n2.1 Estimating Confidence Average precision (AP) is a standard evaluation metric that captures both the ability of a system to rank relevant documents highly (precision) as well as its ability to retrieve relevant documents (recall).\nIt is typically written as the mean precision at the ranks of relevant documents: AP = \frac{1}{|R|} \sum_{i \in R} prec@r(i) where R is the set of relevant documents and r(i) is the rank of document i. 
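To make the AP definition above concrete, here is a small Python sketch (ours, not the paper's code; `average_precision` and `expected_ap` are hypothetical helper names) that computes AP for a judged ranking and, by brute-force enumeration, the exact expectation of AP when each document is only relevant with some probability, as in the paper's three-document example:

```python
import itertools

def average_precision(relevance):
    """AP = mean of prec@r(i) over the ranks of relevant documents.
    `relevance` is a 0/1 list ordered by rank; AP is taken as 0 when
    no document is relevant."""
    num_rel, precisions = 0, []
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            num_rel += 1
            precisions.append(num_rel / rank)
    return sum(precisions) / num_rel if num_rel else 0.0

def expected_ap(probs):
    """Exact E[AP] under independent relevance probabilities, found by
    enumerating all 2^n relevance assignments and weighting each AP value
    by its probability (feasible only for tiny n)."""
    total = 0.0
    for bits in itertools.product((0, 1), repeat=len(probs)):
        weight = 1.0
        for b, p in zip(bits, probs):
            weight *= p if b else 1.0 - p
        total += weight * average_precision(list(bits))
    return total

# Toy example from the text: relevant documents B, C at ranks 1 and 3.
print(average_precision([1, 0, 1]))   # (1 + 2/3) / 2 = 5/6
# Beliefs ordered by rank: pB = 0.8, pA = 0.4, pC = 0.7.
print(expected_ap([0.8, 0.4, 0.7]))   # ≈ 0.8167
```

The paper's closed-form E[AP] and Var[AP] approximate exactly the distribution this enumeration computes; the normal approximation then replaces enumeration when n is large.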
Let Xi be a random variable indicating the relevance of document i.\nIf documents are ordered by rank, we can express precision as prec@i = \frac{1}{i} \sum_{j=1}^{i} X_j.\nAverage precision then becomes the quadratic equation AP = \frac{1}{\sum_i X_i} \sum_{i=1}^{n} \frac{X_i}{i} \sum_{j=1}^{i} X_j = \frac{1}{\sum_i X_i} \sum_{i=1}^{n} \sum_{j \ge i} a_{ij} X_i X_j where a_{ij} = 1 \/ \max\{r(i), r(j)\}.\nUsing a_{ij} instead of 1\/i allows us to number the documents arbitrarily.\nTo see why this is true, consider a toy example: a list of 3 documents with relevant documents B, C at ranks 1 and 3 and nonrelevant document A at rank 2.\nAverage precision will be \frac{1}{2}\left(x_B^2 + \frac{1}{2} x_B x_A + \frac{1}{3} x_B x_C + \frac{1}{2} x_A^2 + \frac{1}{3} x_A x_C + \frac{1}{3} x_C^2\right) = \frac{1}{2}\left(1 + \frac{2}{3}\right) because x_A = 0, x_B = 1, x_C = 1.\nThough the ordering B, A, C is different from the labeling A, B, C, it does not affect the computation.\nWe can now see average precision itself is a random variable with a distribution over all possible assignments of relevance to all documents.\nThis random variable has an expectation, a variance, confidence intervals, and a certain probability of being less than or equal to a given value.\nAll of these are dependent on the probability that document i is relevant: p_i = p(X_i = 1).\nSuppose in our previous example we do not know the relevance judgments, but we believe p_A = 0.4, p_B = 0.8, p_C = 0.7.\nWe can then compute e.g. 
P(AP = 0) = 0.2 \u00b7 0.6 \u00b7 0.3 = 0.036, or P(AP = 1\/2) = 0.2 \u00b7 0.4 \u00b7 0.7 = 0.056.\nSumming over all possibilities, we can compute expectation and variance: E[AP] \approx \frac{1}{\sum p_i} \left( \sum_i a_{ii} p_i + \sum_{j>i} a_{ij} p_i p_j \right) and V ar[AP] \approx \frac{1}{(\sum p_i)^2} \left[ \sum_i^n a_{ii}^2 p_i q_i + \sum_{j>i} a_{ij}^2 p_i p_j (1 \u2212 p_i p_j) + \sum_{i \ne j} 2 a_{ii} a_{ij} p_i p_j (1 \u2212 p_i) + \sum_{k>j \ne i} 2 a_{ij} a_{ik} p_i p_j p_k (1 \u2212 p_i) \right], where q_i = 1 \u2212 p_i.\nAP asymptotically converges to a normal distribution with expectation and variance as defined above.1 For our comparative evaluation task we are interested in the sign of the difference in two average precisions: \u0394AP = AP1 \u2212 AP2.\nAs we showed in our previous work, \u0394AP has a closed form when documents are ordered arbitrarily: \u0394AP = \frac{1}{\sum X_i} \sum_{i=1}^{n} \sum_{j \ge i} c_{ij} X_i X_j, where c_{ij} = a_{ij} \u2212 b_{ij} and b_{ij} is defined analogously to a_{ij} for the second ranking.\nSince AP is normal, \u0394AP is normal as well, meaning we can use the normal cumulative density function to determine the confidence that a difference in AP is less than zero.\nSince topics are independent, we can easily extend this to mean average precision (MAP).\nMAP is also normally distributed; its expectation and variance are: E_{MAP} = \frac{1}{|T|} \sum_{t \in T} E[AP_t] (1) and V_{MAP} = \frac{1}{|T|^2} \sum_{t \in T} V ar[AP_t], with \u0394MAP = MAP1 \u2212 MAP2.\nConfidence can then be estimated by calculating the expectation and variance and using the normal density function to find P(\u0394MAP < 0).\n2.2 Confidence and Robustness Having defined confidence, we turn back to the issue of trust in confidence estimates, and show how it ties into the robustness of the collection to missing judgments.\n1 These are actually approximations to the true expectation and variance, but the error is a negligible O(n 2^{\u2212n}).\nLet Z be the set of all pairs of ranked results for a common set of topics.\nSuppose we have a set of m relevance judgments x^m = \{x_1, x_2, ..., x_m\} (using small x rather than capital X to distinguish between judged and unjudged documents); these are the judgments against which we compute 
confidence.\nLet Z\u03b1 be the subset of pairs in Z for which we predict that \u0394MAP < 0 with confidence \u03b1 given the judgments x^m.\nFor the confidence estimates to be accurate, we need at least \u03b1 \u00b7 |Z\u03b1| of these pairs to actually have \u0394MAP < 0 after we have judged every document.\nIf they do, we can trust the confidence estimates; our evaluation will be robust to missing judgments.\nIf our confidence estimates are based on unrealistic assumptions, we cannot expect them to be accurate.\nThe assumptions they are based on are the probabilities of relevance pi.\nWe need these to be realistic.\nWe argue that the best possible distribution of relevance p(Xi) is the one that explains all of the data (all of the observations made about the retrieval systems) while at the same time making no unwarranted assumptions.\nThis is known as the principle of maximum entropy [13].\nThe entropy of a random variable X with distribution p(X) is defined as H(p) = \u2212 \sum_i p(X = i) \log p(X = i).\nThis has found a wide array of uses in computer science and information retrieval.\nThe maximum entropy distribution is the one that maximizes H.\nThis distribution is unique and has an exponential form.\nThe following theorem shows the utility of a maximum entropy distribution for relevance when estimating confidence.\nTheorem 1.\nIf p(X^n | I, x^m) = \arg\max_p H(p), confidence estimates will be accurate.\nwhere x^m is the set of relevance judgments defined above, X^n is the full set of documents that we wish to estimate the relevance of, and I is some information about the documents (unspecified as of now).\nWe forgo the proof for the time being, but it is quite simple.\nThis says that the better the estimates of relevance, the more accurate the evaluation.\nThe task of creating a reusable test collection thus becomes the task of estimating the relevance of unjudged documents.\nThe theorem and its proof say nothing whatsoever about the evaluation metric.\nThe 
probability estimates are entirely independent of the measure we are interested in.\nThis means the same probability estimates can tell us about average precision as well as precision, recall, bpref, etc.\nFurthermore, we could assume that the relevance of documents i and j is independent and achieve the same result, which we state as a corollary: Corollary 1.\nIf p(X_i | I, x^m) = \arg\max_p H(p), confidence estimates will be accurate.\nThe task therefore becomes the imputation of the missing values of relevance.\nThe theorem implies that the closer we get to the maximum entropy distribution of relevance, the closer we get to robustness.\n3.\nPREDICTING RELEVANCE In our statement of Theorem 1, we left the nature of the information I unspecified.\nOne of the advantages of our confidence estimates is that they admit information from a wide variety of sources; essentially anything that can be modeled can be used as information for predicting relevance.\nA natural source of information is the retrieval systems themselves: how they ranked the judged documents, how often they failed to rank relevant documents, how they perform across topics, and so on.\nIf we treat each system as an information retrieval expert providing an opinion about the relevance of each document, the problem becomes one of expert opinion aggregation.\nThis is similar to the metasearch or data fusion problem in which the task is to take k input systems and merge them into a single ranking.\nAslam et al. 
[3] previously identified a connection between evaluation and metasearch.\nOur problem has two key differences: 1.\nWe explicitly need probabilities of relevance that we can plug into Eq.\n1; metasearch algorithms have no such requirement.\n2.\nWe are accumulating relevance judgments as we proceed with the evaluation and are able to re-estimate relevance given each new judgment.\nIn light of (1) above, we introduce a probabilistic model for expert combination.\n3.1 A Model for Expert Opinion Aggregation Suppose that each expert j provides a probability of relevance qij = pj(Xi = 1).\nThe information about the relevance of document i will then be the set of k expert opinions I = qi = (qi1, qi2, ..., qik).\nThe probability distribution we wish to find is the one that maximizes the entropy of pi = p(Xi = 1|qi).\nAs it turns out, finding the maximum entropy model is equivalent to finding the parameters that maximize the likelihood [5].\nBlower [6] explicitly shows that finding the maximum entropy model for a binary variable is equivalent to solving a logistic regression.\nThen p_i = p(X_i = 1 | q_i) = \frac{\exp\left(\sum_{j=1}^{k} \lambda_j q_{ij}\right)}{1 + \exp\left(\sum_{j=1}^{k} \lambda_j q_{ij}\right)} (2) where \u03bb1, ..., \u03bbk are the regression parameters.\nWe include a beta prior for p(\u03bbj) with parameters \u03b1, \u03b2.\nThis can be seen as a type of smoothing to account for the fact that the training data is highly biased.\nThis model has the advantage of including the statistical dependence between the experts.\nA model of the same form was shown by Clemen & Winkler to be the best for aggregating expert probabilities [10].\nA similar maximum-entropy-motivated approach has been used for expert aggregation [15].\nAslam & Montague [1] used a similar model for metasearch, but assumed independence among experts.\nWhere do the qij's come from?\nUsing raw, uncalibrated scores as predictors will not work because score distributions vary too much between topics.\nA language modeling 
ranker, for instance, will typically give a much higher score to the top retrieved document for a short query than to the top retrieved document for a long query.\nWe could train a separate predicting model for each topic, but that does not take advantage of all of the information we have: we may only have a handful of judgments for a topic, not enough to train a model to any confidence.\nFurthermore, it seems reasonable to assume that if an expert makes good predictions for one topic, it will make good predictions for other topics as well.\nWe could use a hierarchical model [12], but that will not generalize to unseen topics.\nInstead, we will calibrate the scores of each expert individually so that scores can be compared both within topic and between topic.\nThus our model takes into account not only the dependence between experts, but also the dependence between experts' performances on different tasks (topics).\n3.2 Calibrating Experts Each expert gives us a score and a rank for each document.\nWe need to convert these to probabilities.\nA method such as the one used by Manmatha et al. 
[14] could be used to convert scores into probabilities of relevance.\nThe pairwise preference method of Carterette & Petkova [9] could also be used, interpreting the ranking of one document over another as an expression of preference.\nLet q\u2217 ij be expert j's self-reported probability that document i is relevant.\nIntuitively it seems clear that q\u2217 ij should decrease with rank, and it should be zero if document i is unranked (the expert did not believe it to be relevant).\nThe pairwise preference model can handle these two requirements easily, so we will use it.\nLet \u03b8rj (i) be the relevance coefficient of the document at rank rj(i).\nWe want to find the \u03b8s that maximize the likelihood function: Ljt(\u0398) = rj (i) 0.\nIf it turns out that \u0394MAP < 0, we win the dollar.\nOtherwise, we pay out O.\nIf our confidence estimates are perfectly accurate, we break even.\nIf confidence is greater than accuracy, we lose money; we win if accuracy is greater than confidence.\nCounterintuitively, the most desirable outcome is breaking even: if we lose money, we cannot trust the confidence estimates, but if we win money, we have either underestimated confidence or judged more documents than necessary.\nHowever, the cost of not being able to trust the confidence estimates is higher than the cost of extra relevance judgments, so we will treat positive outcomes as \"good\".\nThe amount we win on each pairwise comparison i is:\n0).\nThe summary statistic is W, the mean of Wi.\nNote that as Pi increases, we lose more for being wrong.\nThis is as it should be: the penalty should be great for missing the high probability predictions.\nHowever, since our losses grow without bound as predictions approach certainty, we cap \u2212Wi at 100.\nFor our hypothesis that RTC requires fewer judgments than MTC, we are interested in the number of judgments needed to reach 95% confidence on the first pair of systems.\nThe median is more interesting than the mean: most pairs 
require a few hundred judgments, but a few pairs require several thousand.\nThe distribution is therefore highly skewed, and the mean strongly affected by those outliers.\nFinally, for our hypothesis that RTC is more accurate than MTC, we will look at Kendall's \u03c4 correlation between a ranking of k systems by a small set of judgments and the true ranking using the full set of judgments.\nKendall's \u03c4, a nonparametric statistic based on pairwise swaps between two lists, is a standard evaluation for this type of study.\nIt ranges from -1 (perfectly anti-correlated) to 1 (rankings identical), with 0 meaning that half of the pairs are swapped.\nAs we touched on in the introduction, though, an accuracy measure like rank correlation is not a good evaluation of reusability.\nWe include it for completeness.\n4.4.1 Hypothesis Testing\nRunning multiple trials allows the use of statistical hypothesis testing to compare algorithms.\nUsing the same sets of systems allows the use of paired tests.\nAs we stated above, we are more interested in the median number of judgments than the mean.\nA test for difference in median is the Wilcoxon signed-rank test.\nWe can also use a paired t-test to test for a difference in mean.\nFor rank correlation, we can use a paired t-test to test for a difference in \u03c4.\n5.\nRESULTS AND ANALYSIS\nThe comparison between MTC and RTC is shown in Table 2.\nWith MTC and uniform probabilities of relevance, the results are far from robust.\nWe cannot reuse the relevance judgments with much confidence.\nBut with RTC, the results are very robust.\nThere is a slight dip in accuracy when confidence gets above 0.95; nonetheless, the confidence predictions are trustworthy.\nMean Wi shows that RTC is much closer to 0 than MTC.\nThe distribution of confidence scores shows that at least 80% confidence is achieved more than 35% of the time, indicating that neither algorithm is being too conservative in its confidence estimates.\nThe confidence estimates 
are rather low overall; that is because we have built a test collection from only two initial systems.\nRecall from Section 1 that we cannot require (or even expect) a minimum level of confidence when we generalize to new systems.\nMore detailed results for both algorithms are shown in Figure 2.\nThe solid line is the ideal result that would give W = 0.\nRTC is on or above this line at all points until confidence reaches about 0.97.\nAfter that there is a slight dip in accuracy which we discuss below.\nNote that both\nTable 2: Confidence that P (\u0394MAP <0) and accuracy of prediction when generalizing a set of relevance judgments acquired using MTC and RTC.\nEach bin contains over 1,000 trials from the adhoc 3, 5--8 sets.\nRTC is much more robust than MTC.\nW is defined in Section 4.4; closer to 0 is better.\nMedian judged is the number of judgments to reach 95% confidence on the first two systems.\nMean \u03c4 is the average rank correlation for all 10 systems.\nFigure 2: Confidence vs. 
accuracy of MTC and RTC.\nThe solid line is the perfect result that would\ngive W = 0; performance should be on or above this line.\nEach point represents at least 500 pairwise comparisons.\nalgorithms are well above the line up to around confidence 0.7.\nThis is because the baseline performance on these data sets is high; it is quite easy to achieve 75% accuracy doing very little work [7].\nNumber of Judgments: The median number of judgments required by MTC to reach 95% confidence on the first two systems is 251, an average of 5 per topic.\nThe median required by RTC is 235, about 4.7 per topic.\nAlthough the numbers are close, RTC's median is significantly lower by a paired Wilcoxon test (p <0.0001).\nFor comparison, a pool of depth 100 would result in a minimum of 5,000 judgments for each pair.\nThe difference in means is much greater.\nMTC required a mean of 823 judgments, 16 per topic, while RTC required a mean of 502, 10 per topic.\n(Recall that means are strongly skewed by a few pairs that take thousands of judgments.)\nThis difference is significant by a paired t-test (p <0.0001).\nTen percent of the sets resulted in 100 or fewer judgments (less than two per topic).\nPerformance on these is very high: W = 0.41, and 99.7% accuracy when confidence is at least 0.9.\nThis shows that even tiny collections can be reusable.\nFor the 50% of sets with more than 235 judgments, accuracy is 93% when confidence is at least 0.9.\nRank Correlation: MTC and RTC both rank the 10 systems by EMAP (Eq.\n(1)) calculated using their respective probability estimates.\nThe mean \u03c4 rank correlation between true MAP and EMAP is 0.393 for MTC and 0.555 for RTC.\nThis difference is significant by a paired t-test (p <0.0001).\nNote that we do not expect the \u03c4 correlations to be high, since we are ranking the systems with so few relevance judgments.\nIt is more important that we estimate confidence in each pairwise comparison correctly.\nWe ran IP for the same number of 
judgments that MTC took for each pair, then ranked the systems by MAP using only those judgments (all unjudged documents assumed nonrelevant).\nWe calculated the \u03c4 correlation to the true ranking.\nThe mean \u03c4 correlation is 0.398, which is not significantly different from MTC, but is significantly lower than RTC.\nUsing uniform estimates of probability is indistinguishable from the baseline, whereas estimating relevance by expert aggregation boosts performance a great deal: nearly 40% over both MTC and IP.\nOverfitting: It is possible to \"overfit\": if too many judgments come from the first two systems, the variance in \u0394MAP is reduced and the confidence estimates become unreliable.\nWe saw this in Table 2 and Figure 2 where RTC exhibits a dip in accuracy when confidence is around 97%.\nIn fact, the number of judgments made prior to a wrong prediction is over 50% greater than the number made prior to a correct prediction.\nOverfitting is difficult to quantify exactly, because making more relevance judgments does not always cause it: at higher confidence levels, more relevance judgments are made, and as Table 2 shows, accuracy is greater at those higher confidences.\nObviously having more relevance judgments should increase both confidence and accuracy; the difference seems to be when one system has a great deal more judgments than the other.\nPairwise Comparisons: Our pairwise comparisons fall into one of three groups:\n1.\nthe two original runs from which relevance judgments are acquired; 2.\none of the original runs vs. one of the new runs; 3.\ntwo new runs.\nTable 3 shows confidence vs. 
accuracy results for each of these three groups.\nInterestingly, performance is worst when comparing one of the original runs to one of the additional runs.\nThis is most likely due to a large difference in the number of judgments affecting the variance of \u0394MAP.\nNevertheless, performance is quite good on all three subsets.\nWorst Case: The case intuitively most likely to produce an error is when the two systems being compared have retrieved very few documents in common.\nIf we want the judgments to be reusable, we should be able to generalize even to runs that are very different from the ones used to acquire the relevance judgments.\nA simple measure of similarity of two runs is the average percentage of documents they retrieved in common for each topic [2].\nWe calculated this for all pairs, then looked at performance on pairs with low similarity.\nResults are shown in\nTable 3: Confidence vs. accuracy of RTC when comparing the two original runs, one original run and one new run, and two new runs.\nRTC is robust in all three cases.\nTable 4: Confidence vs. 
accuracy of RTC when a\npair of systems retrieved 0--30% documents in common (broken out into 0%--10%, 10%--20%, and 20%--30%).\nRTC is robust in all three cases.\nTable 4.\nPerformance is in fact very robust even when similarity is low.\nWhen the two runs share very few documents in common, W is actually positive.\nMTC and IP both performed quite poorly in these cases.\nWhen the similarity was between 0 and 10%, both MTC and IP correctly predicted \u0394MAP only 60% of the time, compared to an 87.6% success rate for RTC.\nBy Data Set: All the previous results have only been on the ad hoc collections.\nWe did the same experiments on our additional data sets, and broke out the results by data set to see how performance varies.\nThe results in Table 5 show everything about each set, including binned accuracy, W, mean \u03c4, and median number of judgments to reach 95% confidence on the first two systems.\nThe results are highly consistent from collection to collection, suggesting that our method is not overfitting to any particular data set.\nTable 5: Accuracy, W, mean \u03c4, and median number of judgments for all 8 testing sets.\nThe results are highly consistent across data sets.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this work we have offered the first formal definition of the common idea of \"reusability\" of a test collection and presented a model that is able to achieve reusability with very small sets of relevance judgments.\nTable 2 and Figure 2 together show how biased a small set of judgments can be: MTC is dramatically overestimating confidence and is much less accurate than RTC, which is able to remove the bias to give a robust evaluation.\nThe confidence estimates of RTC, in addition to being accurate, provide a guide for obtaining additional judgments: focus on judging documents from the lowest-confidence comparisons.\nIn the long run, we see small sets of relevance judgments being shared by researchers, each group contributing a few more judgments to 
gain more confidence about their particular systems.\nAs time goes on, the number of judgments grows until there is 100% confidence in every evaluation--and there is a full test collection for the task.\nWe see further use for this method in scenarios such as web retrieval in which the corpus is frequently changing.\nIt could be applied to evaluation on a dynamic test collection as defined by Soboroff [18].\nThe model we presented in Section 3 is by no means the only possibility for creating a robust test collection.\nA simpler expert aggregation model might perform as well or better (though all our efforts to simplify failed).\nIn addition to expert aggregation, we could estimate probabilities by looking at similarities between documents.\nThis is an obvious area for future exploration.\nAdditionally, it will be worthwhile to investigate the issue of overfitting: the circumstances it occurs under and what can be done to prevent it.\nIn the meantime, capping confidence estimates at 95% is a \"hack\" that solves the problem.\nWe have many more experimental results that we unfortunately did not have space for but that reinforce the notion that RTC is highly robust: with just a few judgments per topic, we can accurately assess the confidence in any pairwise comparison of systems.","keyphrases":["test collect","evalu","reusabl","inform retriev","relev judgement","lowerest-confid comparison","mtc","rtc","expect","varianc","relev distribut"],"prmu":["P","P","P","M","M","U","U","U","U","U","M"]} {"id":"C-27","title":"A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks","abstract":"The problem of localization of wireless sensor nodes has long been regarded as very difficult to solve, when considering the realities of real world environments. In this paper, we formally describe, design, implement and evaluate a novel localization system, called Spotlight. 
Our system uses the spatio-temporal properties of well controlled events in the network (e.g., light), to obtain the locations of sensor nodes. We demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes, as required by other localization systems. We evaluate the performance of our system in deployments of Mica2 and XSM motes. Through performance evaluations of a real system deployed outdoors, we obtain a 20cm localization error. A sensor network, with any number of nodes, deployed in a 2500m2 area, can be localized in under 10 minutes, using a device that costs less than $1000. To the best of our knowledge, this is the first report of a sub-meter localization error, obtained in an outdoor environment, without equipping the wireless sensor nodes with specialized ranging hardware.","lvl-1":"A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks Radu Stoleru, Tian He, John A. Stankovic, David Luebke Department of Computer Science University of Virginia, Charlottesville, VA 22903 {stoleru, tianhe, stankovic, luebke}@cs.virginia.edu ABSTRACT The problem of localization of wireless sensor nodes has long been regarded as very difficult to solve, when considering the realities of real world environments.\nIn this paper, we formally describe, design, implement and evaluate a novel localization system, called Spotlight.\nOur system uses the spatio-temporal properties of well controlled events in the network (e.g., light), to obtain the locations of sensor nodes.\nWe demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes, as required by other localization systems.\nWe evaluate the performance of our system in deployments of Mica2 and XSM motes.\nThrough performance evaluations of a real system deployed outdoors, we obtain a 20cm localization error.\nA sensor network, with any number of nodes, deployed in a 2500m2 area, 
can be localized in under 10 minutes, using a device that costs less than $1000.\nTo the best of our knowledge, this is the first report of a sub-meter localization error, obtained in an outdoor environment, without equipping the wireless sensor nodes with specialized ranging hardware.\nCategories and Subject Descriptors C.2.4 [Computer-Communications Networks]: Distributed Systems; C.3 [Special-Purpose and Application-Based Systems]: Real-Time and embedded systems.\nGeneral Terms Algorithms, Measurement, Performance, Design, Experimentation 1.\nINTRODUCTION Recently, wireless sensor network systems have been used in many promising applications including military surveillance, habitat monitoring, wildlife tracking etc. [12] [22] [33] [36].\nWhile many middleware services, to support these applications, have been designed and implemented successfully, localization - finding the position of sensor nodes - remains one of the most difficult research challenges to be solved practically.\nSince most emerging applications based on networked sensor nodes require location awareness to assist their operations, such as annotating sensed data with location context, it is an indispensable requirement for a sensor node to be able to find its own location.\nMany approaches have been proposed in the literature [4] [6] [13] [14] [19] [20] [21] [23] [27] [28], however it is still not clear how these solutions can be practically and economically deployed.\nAn on-board GPS [23] is a typical high-end solution, which requires sophisticated hardware to achieve high resolution time synchronization with satellites.\nThe constraints on power and cost for tiny sensor nodes preclude this as a viable solution.\nOther solutions require per node devices that can perform ranging among neighboring nodes.\nThe difficulties of these approaches are twofold.\nFirst, under constraints of form factor and power supply, the effective ranges of such devices are very limited.\nFor example the effective 
range of the ultrasonic transducers used in the Cricket system is less than 2 meters when the sender and receiver are not facing each other [26]. Second, since most sensor nodes are static, i.e., their location is not expected to change, it is not cost-effective to equip these sensors with special circuitry just for a one-time localization. To overcome these limitations, many range-free localization schemes have been proposed. Most of these schemes estimate the location of sensor nodes by exploiting the radio connectivity information among neighboring nodes. These approaches eliminate the need for high-cost specialized hardware, at the cost of a less accurate localization. In addition, the radio propagation characteristics vary over time and are environment dependent, thus imposing high calibration costs for the range-free localization schemes. With such limitations in mind, this paper addresses the following research challenge: how to reconcile the need for high accuracy in location estimation with the cost to achieve it. Our answer to this challenge is a localization system called Spotlight. This system employs an asymmetric architecture, in which sensor nodes do not need any hardware beyond what they currently have. All the sophisticated hardware and computation reside on a single Spotlight device. The Spotlight device uses a steerable laser light source, illuminating the sensor nodes placed within a known terrain. We demonstrate that this localization is much more accurate (i.e., tens of centimeters) than the range-based localization schemes and that it has a much longer effective range (i.e., thousands of meters) than the solutions based on ultrasound/acoustic ranging. At the same time, since only a single sophisticated device is needed to localize the whole network, the amortized cost is much smaller than the cost of adding hardware components to the individual sensors.

2. RELATED WORK
In this section, we discuss prior work in localization in two major categories: the range-based localization schemes (which use either expensive, per-node ranging devices for high accuracy, or less accurate ranging solutions, such as the Received Signal Strength Indicator (RSSI)), and the range-free schemes, which use only connectivity information (hop-by-hop) as an indication of proximity among the nodes.

The localization problem is a fundamental research problem in many domains. In the field of robotics, it has been studied extensively [9] [10]. The reported localization errors are on the order of tens of centimeters when using specialized ranging hardware, i.e., laser range finders or ultrasound. Due to the high cost and non-negligible form factor of the ranging hardware, these solutions cannot be simply applied to sensor networks. The RSSI has been an attractive solution for estimating the distance between a sender and a receiver. The RADAR system [2] uses the RSSI to build a centralized repository of signal strengths at various positions with respect to a set of beacon nodes. The location of a mobile user is estimated within a few meters. In a similar approach, MoteTrack [17] distributes the reference RSSI values to the beacon nodes. Solutions that use RSSI and do not require beacon nodes have also been proposed [5] [14] [24] [26] [29]. They all share the idea of using a mobile beacon. The sensor nodes that receive the beacons apply different algorithms for inferring their location. In [29], Sichitiu proposes a solution in which the nodes that receive the beacon construct, based on the RSSI value, a constraint on their position estimate. In [26], Priyantha et al. propose MAL, a localization method in which a mobile node (moving strategically) assists in measuring distances between node pairs, until the constraints on distances generate a rigid graph. In [24], Pathirana et al. formulate the localization problem as an on-line estimation in a nonlinear dynamic system and propose a Robust Extended Kalman Filter for solving it. Elnahrawy [8] provides strong evidence of inherent limitations of localization accuracy using RSSI in indoor environments.

A more precise ranging technique uses the time difference between a radio signal and an acoustic wave to obtain pairwise distances between sensor nodes. This approach produces smaller localization errors, at the cost of additional hardware. The Cricket location-support system [25] can achieve a location granularity of tens of centimeters with short-range ultrasound transceivers. AHLoS, proposed by Savvides et al. [27], employs Time of Arrival (ToA) ranging techniques that require extensive hardware and solving relatively large nonlinear systems of equations. A similar ToA technique is employed in [3]. In [30], Simon et al. implement a distributed system (using acoustic ranging) which locates a sniper in an urban terrain. Acoustic ranging for localization is also used by Kwon et al. [15]. The reported errors in localization vary from 2.2 m to 9.5 m, depending on the type (centralized vs. distributed) of the Least Square Scaling algorithm used. For wireless sensor networks, ranging is a difficult option. The hardware cost, the energy expenditure, the form factor and the small range are all difficult compromises, and it is hard to envision cheap, unreliable and resource-constrained devices making use of range-based localization solutions. However, the high localization accuracy achievable by these schemes is very desirable.

To overcome the challenges posed by the range-based localization schemes when applied to sensor networks, a different approach has been proposed and evaluated in the past. This approach is called range-free, and it attempts to obtain location information from the proximity to a set of known beacon nodes. Bulusu et al. propose in [4] a localization scheme, called Centroid, in which each node localizes itself to the centroid of its proximate beacon nodes. In [13], He et al. propose APIT, a scheme in which each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacon nodes heard by the node. The Global Coordinate System [20], developed at MIT, uses a priori knowledge of the node density in the network to estimate the average hop distance. The DV-* family of localization schemes [21] uses the hop count from known beacon nodes to the nodes in the network to infer the distance. The majority of range-free localization schemes have been evaluated in simulations or controlled environments. Several studies [11] [32] [34] have emphasized the challenges that real environments pose. Langendoen and Reijers present a detailed comparative study of several localization schemes in [16]. To the best of our knowledge, Spotlight is the first range-free localization scheme that works very well in an outdoor environment. Our system requires a line of sight between a single device and the sensor nodes, and the map of the terrain where the sensor field is located. The Spotlight system has a long effective range (thousands of meters) and does not require any infrastructure or additional hardware for the sensor nodes. The Spotlight system combines the advantages, and does not suffer from the disadvantages, of the two localization classes.

3. SPOTLIGHT SYSTEM DESIGN
The main idea of the Spotlight localization system is to generate controlled events in the field where the sensor nodes were deployed. An event could be, for example, the presence of light in an area. Using the time when an event is perceived by a sensor node and the spatio-temporal properties of the generated events, spatial information (i.e.
location) regarding the sensor node can be inferred.

Figure 1. Localization of a sensor network using the Spotlight system

We envision, and depict in Figure 1, a sensor network deployment and localization scenario as follows: wireless sensor nodes are randomly deployed from an unmanned aerial vehicle. After deployment, the sensor nodes self-organize into a network and execute a time-synchronization protocol. An aerial vehicle (e.g., a helicopter), equipped with a device called Spotlight, flies over the network and generates light events. The sensor nodes detect the events and report back to the Spotlight device, through a base station, the timestamps when the events were detected. The Spotlight device then computes the locations of the sensor nodes. During the design of our Spotlight system, we made the following assumptions:
- the sensor network to be localized is connected, and a middleware able to forward data from the sensor nodes to the Spotlight device is present.
- the aerial vehicle has very good knowledge of its position and orientation (6 parameters: 3 translation and 3 rigid-body rotation), and it possesses the map of the field where the network was deployed.
- a powerful Spotlight device is available, able to generate spatially large events that can be detected by the sensor nodes even in the presence of background noise (daylight).
- a line of sight between the Spotlight device and the sensor nodes exists.

These are simplifying assumptions, meant to reduce the complexity of the presentation for clarity. We propose solutions that do not rely on these simplifying assumptions in Section 6. In order to formally describe and generalize the Spotlight localization system, we introduce the following definitions.

3.1 Definitions and Problem Formulation
Let's assume that the space A ⊂ R³ contains all sensor nodes N, and that each node Ni is positioned at pi(x, y, z). To obtain pi(x, y, z), a Spotlight localization system needs to support three main functions, namely an Event Distribution Function (EDF) E(t), an Event Detection Function D(e), and a Localization Function L(Ti). They are formally defined as follows:

Definition 1: An event e(t, p) is a detectable phenomenon that occurs at time t and at point p ∈ A. Examples of events are light, heat, smoke, sound, etc. Let Ti = {ti1, ti2, ..., tin} be the set of n timestamps of events detected by a node i. Let T' = {t1', t2', ..., tm'} be the set of m timestamps of events generated in the sensor field.

Definition 2: The Event Detection Function D(e) defines a binary detection algorithm. For a given event e:

  D(e) = { true, if event e is detected; false, if event e is not detected }  (1)

Definition 3: The Event Distribution Function (EDF) E(t) defines the point distribution of events within A at time t:

  E(t) = { p | p ∈ A ∧ D(e(t, p)) = true }  (2)

Definition 4: The Localization Function L(Ti) defines a localization algorithm with input Ti, a sequence of timestamps of events detected by the node i:

  L(Ti) = ∩_{t ∈ Ti} E(t)  (3)

Figure 2. Spotlight system architecture

As shown in Figure 2, the Event Detection Function D(e) is supported by the sensor nodes. It is used to determine whether an external event happens or not. It can be implemented through either a simple threshold-based detection algorithm or other advanced digital signal processing techniques. The Event Distribution Function E(t) and the Localization Function L(Ti) are implemented by the Spotlight device. The Localization Function is an aggregation algorithm which calculates the intersection of multiple sets of points. The Event Distribution Function E(t) describes the distribution of events over time. It is the core of the Spotlight system, and it is much more sophisticated than the other two functions. Since E(t) is realized by the Spotlight device, the hardware requirements for the sensor nodes remain minimal. With the support of these three functions, the localization process goes as follows:
1) A Spotlight device distributes events in the space A over a period of time.
2) During the event distribution, sensor nodes record the time sequence Ti = {ti1, ti2, ..., tin} at which they detect the events.
3) After the event distribution, each sensor node sends the detection time sequence back to the Spotlight device.
4) The Spotlight device estimates the location of a sensor node i, using the time sequence Ti and the known E(t) function.

The Event Distribution Function E(t) is the core technique used in the Spotlight system, and we propose three designs for it. These designs have different tradeoffs; a cost comparison is presented in Section 3.5.

3.2 Point Scan Event Distribution Function
To illustrate the basic functionality of a Spotlight system, we start with a simple sensor system where a set of nodes is placed along a straight line (A = [0, l] ⊂ R). The Spotlight device generates point events (e.g., light spots) along this line with constant speed s. The set of timestamps of events detected by a node i is Ti = {ti1}. The Event Distribution Function E(t) is:

  E(t) = { p | p ∈ A ∧ p = t * s }  (4)

where t ∈ [0, l/s]. The resulting localization function is:

  L(Ti) = E(ti1) = { ti1 * s }  (5)

where D(e(ti1, pi)) = true for node i positioned at pi. The implementation of the Event Distribution Function E(t) is straightforward. As shown in Figure 3(a), when a light source emits a beam of light with the angular speed given by dα/dt = S(α) = s * cos²(α) / d, a light spot event with constant speed s is generated along the line situated at distance d.
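As a concrete illustration, the Point Scan localization of Equations 4-5 and the angular-speed compensation above can be sketched in a few lines of Python (a simplified sketch; the function names and example values are ours, not part of the system):

```python
import math

def locate_point_scan(t_detect, s):
    """Localization for the Point Scan EDF (Equation 5): a node that
    detects the light spot at time t_detect (measured from the scan
    start) lies at x = s * t_detect along the scanned line."""
    return s * t_detect

def angular_speed(alpha, s, d):
    """Angular speed the light source must use so that the spot moves
    at constant linear speed s along a line at distance d. Since the
    spot position is x = d * tan(alpha), dx/dt = s gives
    dalpha/dt = s * cos(alpha)**2 / d."""
    return s * math.cos(alpha) ** 2 / d

# A node reporting t = 1.5 s under a 2 m/s scan is localized at x = 3.0 m.
print(locate_point_scan(1.5, 2.0))                 # 3.0
# The beam turns more slowly as alpha grows, keeping the spot speed constant.
print(round(angular_speed(0.0, 2.0, 10.0), 3))     # 0.2
```

Note that the angular speed decreases as the beam tilts away from the perpendicular, which is what keeps the ground speed of the spot constant.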
Figure 3. The implementation of the Point Scan EDF

The Point Scan EDF can be generalized to the case where nodes are placed in a two-dimensional plane R². In this case, the Spotlight system progressively scans the plane to activate the sensor nodes. This scenario is depicted in Figure 3(b).

3.3 Line Scan Event Distribution Function
Some devices, e.g., diode lasers, can generate an entire line of events simultaneously. With these devices, we can support the Line Scan Event Distribution Function easily. We assume that the sensor nodes are placed in a two-dimensional plane (A = [l x l] ⊂ R²) and that the scanning speed is s. The set of timestamps of events detected by a node i is Ti = {ti1, ti2}.

Figure 4. The implementation of the Line Scan EDF

The Line Scan EDF is defined as follows:

  Ex(t) = { pk(t * s, k) | k ∈ [0, l] }, for t ∈ [0, l/s]
  Ey(t) = { pk(k, t * s - l) | k ∈ [0, l] }, for t ∈ [l/s, 2l/s]  (6)
  E(t) = Ex(t) ∪ Ey(t)

We can localize a node by calculating the intersection of the two event lines, as shown in Figure 4. More formally:

  L(Ti) = E(ti1) ∩ E(ti2)  (7)

where D(e(ti1, pi)) = true and D(e(ti2, pi)) = true for node i positioned at pi.

3.4 Area Cover Event Distribution Function
Other devices, such as light projectors, can generate events that cover an area. This allows the implementation of the Area Cover EDF. The idea of the Area Cover EDF is to partition the space A into multiple sections and assign a unique binary identifier, called a code, to each section. Let's suppose that the localization is done within a plane (A ⊂ R²). Each section Sk within A has a unique code k. The Area Cover EDF is then defined as follows:

  BIT(k, j) = { true, if the jth bit of k is 1; false, if the jth bit of k is 0 }  (8)
  E(t) = { p | p ∈ Sk ∧ BIT(k, t) = true }

and the corresponding localization algorithm is:

  L(Ti) = { p | p = COG(Sk) ∧ BIT(k, t) = true if t ∈ Ti ∧ BIT(k, t) = false if t ∈ T' - Ti }  (9)

where COG(Sk) denotes the center of gravity of Sk.

We illustrate the Area Cover EDF with a simple example. As shown in Figure 5, the plane A is divided into 16 sections. Each section Sk has a unique code k. The Spotlight device distributes the events according to these codes: at time j, a section Sk is covered by an event (lit by light) if the jth bit of k is 1. A node residing anywhere in the section Sk is localized at the center of gravity of that section. For example, nodes within section 1010 detect the events at times T = {1, 3}. At t = 4, the section where each node resides can be determined.

A more accurate localization requires a finer partitioning of the plane; hence, the number of bits in the code will increase. Considering the noise that is present in a real outdoor environment, it is easy to observe that a relatively small error in detecting the correct bit pattern could result in a large localization error. Returning to the example shown in Figure 5, if a sensor node located in the section with code 0000 erroneously detects an event at time t = 3 due to noise, it will incorrectly conclude that its code is 1000, and it will position itself two squares below its correct position. The localization accuracy can deteriorate even further if multiple errors are present in the transmission of the code. A natural solution to this problem is to use error-correcting codes, which greatly reduce the probability of an error without paying the price of a retransmission or lengthening the transmission time too much. Several error correction schemes have been proposed in the past. Two of the most notable ones are the Hamming (7, 4) code and the Golay (23, 12) code. Both are perfect linear error-correcting codes. The Hamming scheme can detect up to 2-bit errors and correct 1-bit errors. In the Hamming (7, 4) scheme, a message having 4 bits of data (e.g., dddd, where d is a data bit) is transmitted as a 7-bit word by adding 3 error control bits (e.g., dddpdpp, where p is a parity bit).

Figure 5. The steps of the Area Cover EDF. The events cover the shaded areas.

The steps of the Area Cover technique when using the Hamming (7, 4) scheme are shown in Figure 6. Golay codes can detect up to 6-bit errors and correct up to 3-bit errors. Similar to Hamming (7, 4), Golay constructs a 23-bit codeword from 12-bit data. Golay codes have been used in satellite and spacecraft data transmission and are most suitable in cases where short codeword lengths are desirable.

Figure 6. The steps of the Area Cover EDF with Hamming (7, 4) ECC. The events cover the shaded areas.

Let's assume a 1-bit error probability of 0.01 and a 12-bit message that needs to be transmitted. The probability of a failed transmission is thus: 0.11 if no error detection and correction is used; 0.0061 for the Hamming scheme (i.e., more than 1-bit errors); and 0.000076 for the Golay scheme (i.e., more than 3-bit errors). Golay is thus 80 times more robust than the Hamming scheme, which in turn is 20 times more robust than using no error correction.

Considering that only a limited number of corrections is possible with any coding scheme, a natural question arises: can we minimize the localization error when there are errors that cannot be corrected? This can be achieved by a clever placement of codes in the grid. As shown in Figure 7, placement A has, in the presence of a 1-bit error, a smaller average localization error than placement B. The objective of our code placement strategy is to reduce the total Euclidean distance between all pairs of codes with Hamming distances smaller than K, the largest number of expected 1-bit errors.

Figure 7. Different code placement strategies

Formally, a placement is represented by a function P: [0, l]^d → C, which assigns a code to every coordinate in the d-dimensional cube of size l (e.g., in the planar case, we place codes in a 2-dimensional grid). We denote by dE(i, j) the Euclidean distance and by dH(i, j) the Hamming distance between two codes i and j. In a noisy environment, dH(i, j) determines the crossover probability between the two codes. For the case of independent detections, the higher dH(i, j) is, the lower the crossover probability will be. The objective function is defined as follows:

  min Σ_{dH(i,j) ≤ K} dE(i, j),  where i, j ∈ [0, l]^d  (10)

Equation 10 defines a nonlinear, non-convex programming problem. In general, it is analytically hard to obtain the global minimum. To overcome this, we propose a Greedy Placement method to obtain suboptimal results. In this method, we initialize the 2-dimensional grid with codes. We then repeatedly swap codes within the grid to minimize the objective function. For each swap, we greedily choose the pair of codes that reduces the objective function (Equation 10) the most. The proposed Greedy Placement method ends when no swap of codes can further minimize the objective function. For evaluation, we compared the average localization error in the presence of K-bit errors for two strategies: the proposed Greedy Placement and the Row-Major Placement (which places the codes consecutively in the array, in row-first order).

Figure 8. Localization error [grid units] vs. grid size, with code placement and no ECC (series: Row-Major Consecutive Placement, Greedy Placement)

As Figure 8 shows, if no error detection/correction capability is present and 1-bit errors occur, then our Greedy Placement method reduces the localization error by an average of 23% when compared to the Row-Major Placement. If error detection and correction schemes are used (e.g., Hamming (12, 8)) and 3-bit errors occur (K = 3), then the Greedy Placement method reduces the localization error by 12% compared to the Row-Major Placement, as shown in Figure 9. If K = 1, there is no benefit in using the Greedy Placement method, since a 1-bit error can be corrected by the Hamming scheme.

Figure 9. Localization error [grid units] vs. grid size, with code placement and Hamming ECC (series: Row-Major Consecutive Placement, Greedy Placement)

3.5 Event Distribution Function Analysis
Although all three aforementioned techniques are able to localize the sensor nodes, they differ in the localization time, the communication overhead and the energy consumed by the Event Distribution Function (we call this the Event Overhead). Let's assume that all sensor nodes are located in a square with edge size D, that the Spotlight device can generate N events (e.g., Point, Line or Area Cover events) every second, and that the maximum tolerable localization error is r.
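Under these assumptions, the relative costs of the three techniques can be sketched numerically (a rough sketch; the helper names are ours, and we assume a base-2 logarithm with rounding up for the Area Cover code length):

```python
import math

def point_scan_costs(D, r, N):
    """Point Scan: time = (D/r)^2 / N seconds, event overhead = D^2."""
    return (D / r) ** 2 / N, D ** 2

def line_scan_costs(D, r, N):
    """Line Scan: two sweeps of line events; time = 2(D/r)/N, overhead = 2D^2."""
    return 2 * (D / r) / N, 2 * D ** 2

def area_cover_costs(D, r, N):
    """Area Cover: one event per code bit, each event lighting half the
    area; code length assumed to be ceil(log2(D/r)) bits."""
    bits = math.ceil(math.log2(D / r))
    return bits / N, D ** 2 * bits / 2

# Example: a 128 m x 128 m field, 0.5 m tolerable error, 10 events per second.
D, r, N = 128.0, 0.5, 10.0
print(point_scan_costs(D, r, N))  # (6553.6, 16384.0)  - slow, cheapest events
print(line_scan_costs(D, r, N))   # (51.2, 32768.0)    - balanced trade-off
print(area_cover_costs(D, r, N))  # (0.8, 65536.0)     - fastest, costliest events
```

The example makes the trade-off concrete: the Area Cover finishes orders of magnitude faster than the Point Scan, but its event overhead grows with the deployment area rather than with the tolerable error.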
Table 1 presents the execution cost comparison of the three different Spotlight techniques.

Table 1. Execution Cost Comparison

  Criterion          | Point Scan | Line Scan | Area Cover
  Localization Time  | (D/r)²/N   | 2(D/r)/N  | log₂(D/r)/N
  # Detections       | 1          | 2         | log₂(D/r)
  # Time Stamps      | 1          | 2         | log₂(D/r)
  Event Overhead     | D²         | 2D²       | D²·log₂(D/r)/2

Table 1 indicates that the Event Overhead for the Point Scan method is the smallest: it requires a one-time coverage of the area, hence the D². However, the Point Scan takes much longer than the Area Cover technique, which finishes in log₂(D/r)/N seconds. The Line Scan method trades Event Overhead for localization time well. By doubling the Event Overhead, the Line Scan method takes only a 2r/D fraction of the time of the Point Scan method. From Table 1, it can be observed that the execution costs do not depend on the number of sensor nodes to be localized. It is important to remark on the ratio of Event Overhead per unit time, which is indicative of the power requirement for the Spotlight device. This ratio is constant for the Point Scan (r²·N), while it grows linearly with the area for the Area Cover (D²·N/2). If the deployment area is very large, the use of the Area Cover EDF is prohibitively expensive, if not impossible. For practical purposes, the Area Cover is a viable solution for small to medium size networks, while the Line Scan works well for large networks. We discuss the implications of the power requirement for the Spotlight device, and offer a hybrid solution, in Section 6.

3.6 Localization Error Analysis
The accuracy of localization with the Spotlight technique depends on many aspects. The major factors considered during the implementation of the system are discussed below:
- Time Synchronization: the Spotlight system exchanges time stamps between the sensor nodes and the Spotlight device. It is necessary for the system to reach consensus on global time through synchronization. Due to the uncertainty in hardware processing and wireless communication, we can only confine such errors within certain bounds (e.g., one jiffy). An imprecise input to the Localization Function L(T) leads to an error in node localization.
- Uncertainty in Detection: the sampling rate of the sensor nodes is finite; consequently, there will be an unpredictable delay between the time when an event is truly present and when the sensor node detects it. Lower sampling rates will generate larger localization errors.
- Size of the Event: the events distributed by the Spotlight device cannot be infinitely small. If a node detects one event, it is hard for it to estimate its exact location within the event.
- Realization of the Event Distribution Function: the EDF defines the locations of events at time t. Due to limited accuracy (e.g., mechanical imprecision), a Spotlight device might generate events located differently from where these events are supposed to be.

It is important to remark that the localization error is independent of the number of sensor nodes in the network. This independence, as well as the aforementioned independence of the execution cost, indicates the very good scalability properties (with the number of sensor nodes, though not with the area of deployment) that the Spotlight system possesses.

4. SYSTEM IMPLEMENTATION
For our performance evaluation, we implemented two Spotlight systems. Using these two implementations, we were able to investigate the full spectrum of Event Distribution techniques proposed in Section 3, at a reduced one-time cost (less than $1,000). The first implementation, called μSpotlight, had a short range (10-20 meters); however, its capability of generating the entire spectrum of EDFs made it very useful. We used this implementation mainly to investigate the capabilities of the Spotlight system and tune its performance. It was not intended to represent the full solution, but only a scaled-down version of the system. The second
implementation, the Spotlight system, had a much longer range (as far as 6500m), but it was limited in the types of EDFs that it can generate.\nThe goal of this implementation was to show how the Spotlight system works in a real, outdoor environment, and show correlations with the experimental results obtained from the \u03bcSpotlight system implementation.\nIn the remaining part of this section, we describe how we implemented the three components (Event Distribution, Event Detection and Localization functions) of the Spotlight architecture, and the time synchronization protocol, a key component of our system.\n4.1 \u00b5Spotlight System The first system we built, called \u03bcSpotlight, used as the Spotlight device, an Infocus LD530 projector connected to an IBM Thinkpad laptop.\nThe system is shown in Figure 10.\nThe Event Distribution Function was implemented as a Java GUI.\nDue to the stringent timing requirements and the delay caused by the buffering in the windowing system of a PC, we used the Full-Screen Exclusive Mode API provided by Java2.\nThis allowed us to bypass the windowing system and more precisely estimate the time when an event is displayed by the projector, hence a higher accuracy of timestamps of events.\nBecause of the 50Hz refresh rate of our projector, there was still an uncertainty in the time stamping of the events of 20msec.\nWe explored the possibility of using and modifying the Linux kernel to expose the vertical synch (VSYNCH) interrupt, generated by the displaying device after each screen refresh, out of the kernel mode.\nThe performance evaluation results showed, however, that this level of accuracy was not needed.\nThe sensor nodes that we used were Berkeley Mica2 motes equipped with MTS310 multi-sensor boards from Crossbow.\nThis sensor board contains a CdSe photo sensor which can detect the light from the projector.\nFigure 10.\n\u03bcSpotlight system implementation With this implementation of the Spotlight system, we were able to 
generate Point, Line and Area Scan events.\n4.2 Spotlight System The second Spotlight system we built used, as the Spotlight device, diode lasers, a computerized telescope mount (Celestron CG-5GT, shown in Figure 11), and an IBM Thinkpad laptop.\nThe laptop was connected, through RS232 interfaces, to the telescope mount and to one XSM600CA [7] mote, acting as a base station.\nThe diode lasers we used ranged in power from 7mW to 35mW.\nThey emitted at 650nm, close to the point of highest sensitivity for CdSe photosensor.\nThe diode lasers were equipped with lenses that allowed us to control the divergence of the beam.\nFigure 11.\nSpotlight system implementation The telescope mount has worm gears for a smooth motion and high precision angular measurements.\nThe two angular measures that we used were the, so called, Alt (from Altitude) and Az (from Azimuth).\nIn astronomy, the Altitude of a celestial object is its angular distance above or below the celestial horizon, and the Azimuth is the angular distance of an object eastwards of the meridian, along the horizon.\n18 The laptop computer, through a Java GUI, controls the motion of the telescope mount, orienting it such that a full Point Scan of an area is performed, similar to the one described in Figure 3(b).\nFor each turning point i, the 3-tuple (Alti and Azi angles and the timestamp ti) is recorded.\nThe Spotlight system uses the timestamp received from a sensor node j, to obtain the angular measures Altj and Azj for its location.\nFor the sensor nodes, we used XSM motes, mainly because of their longer communication range.\nThe XSM mote has the photo sensor embedded in its main board.\nWe had to make minor adjustments to the plastic housing, in order to expose the photo sensor to the outside.\nThe same mote code, written in nesC, for TinyOS, was used for both \u00b5Spotlight and Spotlight system implementations.\n4.3 Event Detection Function D(t) The Event Detection Function aims to detect the beginning of an 
event and record the time when the event was observed.\nWe implemented a very simple detection function based on the observed maximum value.\nAn event i will be time stamped with time ti, if the reading from the photo sensor dti, fulfills the condition: itdd <\u0394+max where dmax is the maximum value reported by the photo sensor before ti and \u0394 is a constant which ensures that the first large detection gives the timestamp of the event (i.e. small variations around the first large signal are not considered).\nHence \u0394 guarantees that only sharp changes in the detected value generate an observed event.\n4.4 Localization Function L(T) The Localization Function is implemented in the Java GUI.\nIt matches the timestamps created by the Event Distribution Function with those reported by the sensor nodes.\nThe Localization Function for the Point Scan EDF has as input a time sequence Ti = {t1}, as reported by node i.\nThe function performs a simple search for the event with a timestamp closest to t1.\nIf t1 is constrained by: 11 + << nn ee ttt where en and en+1 are two consecutive events, then the obtained location for node i is: 11 , ++ == nn ee yyxx The case for the Line Scan is treated similarly.\nThe input to the Localization Function is the time sequence Ti = {t1, t2} as reported by node i.\nIf the reported timestamps are constrained by: 11 + << nn ee ttt , and 12 + << mm ee ttt where en and en+1 are two consecutive events on the horizontal scan and em and em+1 are two consecutive events on vertical scan, then the inferred location for node i is: 11 , ++ == mn ee yyxx The Localization Function for the Area Cover EDF has as input a timestamp set Ti={ti1, ti2, ..., tin} of the n events, detected by node i.\nWe recall the notation for the set of m timestamps of events generated by the Spotlight device, T''={t1'', t2'', ..., tm''}.\nA code di=di1di2...dim is then constructed for each node i, such that dij=1 if tj'' \u2208Ti and dij=0 if tj'' \u2209 Ti.\nThe 
The function performs a search for an event with an identical code. If the following condition is true:

d_i = d_en

where e_n is an event with code d_en, then the inferred location for node i is:

x = x_en, y = y_en

4.5 Time Synchronization
The time synchronization in the Spotlight system consists of two parts:

- Synchronization between sensor nodes: This is achieved through the Flooding Time Synchronization Protocol [18]. In this protocol, synchronized nodes (the root node is the only synchronized node at the beginning) send time synchronization messages to unsynchronized nodes. The sender puts the time stamp into the synchronization message right before the bytes containing the time stamp are transmitted. Once a receiver gets the message, it follows the sender's time and performs the necessary calculations to compensate for the clock drift.

- Synchronization between the sensor nodes and the Spotlight device: We implemented this part through a two-way handshake between the Spotlight device and one node, used as the base station. The sensor node is attached to the Spotlight device through a serial interface.

Figure 12. Two-way synchronization

As shown in Figure 12, let's assume that the Spotlight device sends a synchronization message (SYNC) at local time T1, the sensor node receives it at its local time T2 and acknowledges it at local time T3 (both T2 and T3 are sent back through the ACK). After the Spotlight device receives the ACK, at its local time T4, the time synchronization can be achieved as follows:

Offset = ((T2 - T1) + (T3 - T4)) / 2    (11)

T_global = T_node = T_spotlight + Offset

We note that Equation 11 assumes that the one-trip delays are the same in both directions. In practice this does not hold well enough. To improve the performance, we separate the handshaking process from the timestamp exchanges. The handshaking is done fast, through a 2-byte exchange between the Spotlight device and the sensor node (the timestamps are still recorded, but not sent).
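As a quick arithmetic check of Equation 11 (a minimal sketch; the timestamp values are made up for illustration):

```python
def sync_offset(t1, t2, t3, t4):
    """Equation 11: offset of the node clock relative to the Spotlight
    device, assuming symmetric one-trip delays."""
    return ((t2 - t1) + (t3 - t4)) / 2

# Device sends SYNC at T1=100; the node clock runs 50 ticks ahead and
# receives it at T2=152 (2 ticks of delay), replies at T3=153; the
# device receives the ACK at T4=105.
offset = sync_offset(100, 152, 153, 105)
assert offset == 50.0          # recovers the 50-tick clock offset
assert 100 + offset == 150     # device time + offset = node time at T1
```

When the forward and return delays differ, the error in the recovered offset is half their difference, which is why the paper separates the fast handshake from the timestamp exchange.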
After this fast handshaking, the recorded time stamps are exchanged. The result indicates that this approach can significantly improve the accuracy of the time synchronization.

5. PERFORMANCE EVALUATION
In this section we present the performance evaluation of the Spotlight systems when using the three event distribution functions, i.e. Point Scan, Line Scan and Area Cover, described in Section 3.

For the µSpotlight system we used 10 Mica2 motes. The sensor nodes were attached to a vertically positioned Veltex board. By projecting the light onto the sensor nodes, we were able to generate well controlled Point, Line and Area events. The Spotlight device was able to generate events, i.e. project light patterns, covering an area of approximate size 180x140cm². The screen resolution for the projector was 1024x768, and the movement of the Point Scan and Line Scan techniques was done through increments (in the appropriate direction) of 10 pixels between events.

Each experimental point was obtained from 10 successive runs of the localization procedure. Each set of 10 runs was preceded by a calibration phase, aimed at estimating the total delays (between the Spotlight device and each sensor node) in detecting an event. During the calibration, we created an event covering the entire sensor field (i.e., we illuminated the entire area). The timestamp reported by each sensor node, in conjunction with the timestamp created by the Spotlight device, was used to obtain the time offset for each sensor node. More sophisticated calibration procedures have been reported previously [35]. In addition to the time offset, we added a manually configurable parameter, called bias, which was used to best estimate the center of an event.

Figure 13. Deployment site for the Spotlight system

For the Spotlight system evaluation, we deployed 10 XSM motes in a football field. The site is shown in Figure 13 (laser beams are depicted with red arrows and sensor nodes with white
dots). Two sets of experiments were run, with the Spotlight device positioned at 46m and at 170m from the sensor field. The sensor nodes were aligned and the Spotlight device executed a Point Scan. The localization system computed the coordinates of the sensor nodes, and the Spotlight device was oriented, through a GoTo command sent to the telescope mount, towards the computed location. In the initial stages of the experiments, we manually measured the localization error.

For our experimental evaluation, the metrics of interest were as follows:

- Localization error, defined as the distance between the real location and the one obtained from the Spotlight system.

- Localization duration, defined as the time span between the first and last event.

- Localization range, defined as the maximum distance between the Spotlight device and the sensor nodes.

- A Localization Cost function Cost: {{localization accuracy}, {localization duration}} → [0,1], which quantifies the trade-off between the accuracy in localization and the localization duration. The objective is to minimize the Localization Cost function. By denoting with e_i the localization error for the i-th scenario, with d_i the localization duration for the i-th scenario, with max(e) the maximum localization error, with max(d) the maximum localization duration, and with α the importance factor, the Localization Cost function is formally defined as:

Cost(e_i, d_i) = α * e_i / max(e) + (1 - α) * d_i / max(d)

- Localization Bias. This metric is used to investigate the effectiveness of the calibration procedure. If, for example, all computed locations have a bias in the west direction, a calibration factor can be used to compensate for the difference.

The parameters that we varied during the performance evaluation of our system were: the type of scanning (Point, Line and Area), the size of the event, the duration of the event (for Area Cover), the scanning speed, the power of the laser, and the distance between the Spotlight device and the sensor field, to estimate the range of the system.
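The Cost function can be sketched directly (a minimal illustration with made-up error/duration values; α = 0.5 weighs accuracy and duration equally):

```python
def localization_cost(e_i, d_i, e_max, d_max, alpha=0.5):
    """Normalized trade-off between localization error and duration;
    alpha = 1 weighs only accuracy, alpha = 0 only duration."""
    return alpha * e_i / e_max + (1 - alpha) * d_i / d_max

# A fast but inaccurate run vs. a slow but accurate one
# (illustrative numbers, not measured values):
fast = localization_cost(e_i=10.0, d_i=20.0, e_max=10.0, d_max=100.0)
slow = localization_cost(e_i=2.0, d_i=100.0, e_max=10.0, d_max=100.0)
assert fast == 0.6   # 0.5*1.0 + 0.5*0.2
assert slow == 0.6   # 0.5*0.2 + 0.5*1.0
```

With α = 0.5, these two extreme runs cost the same, which is exactly the trade-off the sweet-spot event sizes reported below are minimizing.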
5.1 Point Scan - µSpotlight system
In this experiment, we investigated how the size of the event and the scanning speed affect the localization error. Figure 14 shows the mean localization errors with their standard deviations. It can be observed that, while the scanning speed, varying between 35cm/sec and 87cm/sec, has a minor influence on the localization accuracy, the size of the event has a dramatic effect.

Figure 14. Localization Error vs. Event Size for the Point Scan EDF

The obtained localization error varied from as little as 2cm to over 11cm for the largest event. This dependence can be explained by our Event Detection algorithm: the first detection above a threshold gave the timestamp for the event.

The duration of the localization scheme is shown in Figure 15. The dependency of the localization duration on the size of the event and scanning speed is natural. A bigger event allows a reduction in the total duration of up to 70%. The localization duration is inversely proportional to the scanning speed, as expected, and depicted in Figure 15.

Figure 15. Localization Duration vs.
Event Size for the Point Scan EDF

An interesting trade-off is between the localization accuracy (usually the most important factor) and the localization time (important in environments where stealthiness is paramount). Figure 16 shows the Localization Cost function for α = 0.5 (accuracy and duration are equally important). As shown in Figure 16, an event size of approximately 10-15cm (depending on the scanning speed) minimizes our Cost function. For α = 1, the same graph would be a monotonically increasing function, while for α = 0, it would be a monotonically decreasing function.

Figure 16. Localization Cost vs. Event Size for the Point Scan EDF

5.2 Line Scan - µSpotlight system
In a similar manner to the Point Scan EDF, for the Line Scan EDF we were interested in the dependency of the localization error and duration on the size of the event and the scanning speed. We show in Figure 17 the localization error for different event sizes. It is interesting to observe the dependency (concave shape) of the localization error vs. the event size. Moreover, a question that arises is why the same dependency was not observed in the case of the Point Scan EDF.

Figure 17. Localization Error vs.
Event Size for the Line Scan EDF

The explanation for this concave dependency is the existence of a bias in the location estimation. As a reminder, a bias factor was introduced in order to best estimate the central point of events that have a large size. What Figure 17 shows is that the bias factor was optimal for an event size of approximately 7cm. For events smaller and larger than this, the bias factor was too large, and too small, respectively. Thus, it introduced biased errors in the position estimation. The reason why we did not observe the same dependency in the case of the Point Scan EDF was that we did not experiment with event sizes below 7cm, due to the long time it would have taken to scan the entire field with events as small as 1.7cm.

The results for the localization duration as a function of the size of the event are shown in Figure 18. As shown, the localization duration is inversely proportional to the scanning speed. The size of the event has a smaller influence on the localization duration. One can remark the average localization duration of about 10sec, much shorter than the duration obtained in the Point Scan experiment.

The Localization Cost function dependency on the event size and scanning speed, for α = 0.5, is shown in Figure 19. The dependency on the scanning speed is very small (the Cost function achieves a minimum in the same 4-6cm range). It is interesting to note that this 4-6cm optimal event size is smaller than the one observed in the case of the Point Scan EDF. The explanation for this is that the smaller localization duration observed in the Line Scan EDF allowed a shift (towards smaller event sizes) in the total Localization Cost function.

Figure 18. Localization Duration vs.
Event Size for the Line Scan EDF

Figure 19. Cost Function vs. Event Size for the Line Scan EDF

During our experiments with the Line Scan EDF, we observed evidence of a bias in the location estimation. The estimated locations for all sensor nodes exhibited different biases, for different event sizes. For example, for an event size of 17.5cm, the estimated location for the sensor nodes was to the upper-left side of the actual location. This was equivalent to an early detection, since our scanning was done from left to right and from top to bottom. The scanning speed did not influence the bias.

In order to better understand the observed phenomena, we analyzed our data. Figure 20 shows the bias in the horizontal direction, for different event sizes (the vertical bias was almost identical, and we omit it due to space constraints). From Figure 20, one can observe that the smallest observed bias, and hence the most accurate positioning, was for an event of size 7cm. These results are consistent with the observed localization error, shown in Figure 17.

We also adjusted the measured localization error (shown in Figure 17) for the observed bias (shown in Figure 20). The results for an ideal case of the Spotlight localization system with the Line Scan EDF are shown in Figure 21. The errors are remarkably small, varying between 0.1cm and 0.8cm, with a general trend of higher localization errors for larger event sizes.

Figure 20. Position Estimation Bias for the Line Scan EDF

Figure 21. Position Estimation w/o Bias (ideal), for the Line Scan EDF

5.3 Area Cover - µSpotlight
system
In this experiment, we investigated how the number of bits used to quantify the entire sensor field affected the localization accuracy. In our first experiment we did not use error correcting codes. The results are shown in Figure 22.

Figure 22. Localization Error vs. Number of Bits for the Area Cover EDF

One can observe a remarkable accuracy, with localization errors on the order of 0.3-0.6cm. What is important to observe is the variance in the localization error. In the scenario where 12 bits were used, while the average error was very small, there were a couple of cases where an incorrect event detection generated a larger than expected error. An example of how this error can occur was described in Section 3.4. The experimental results, presented in Figure 22, emphasize the need for error correction of the bit patterns observed and reported by the sensor nodes.

The localization duration results are shown in Figure 23. It can be observed that the duration is directly proportional to the number of bits used, with total durations ranging from 3sec, for the least accurate method, to 6-7sec for the most accurate. The duration of an event had a small influence on the total localization time, when considering the same scenario (same number of bits for the code).

The Cost Function dependency on the number of bits in the code, for α = 0.5, is shown in Figure 24. Generally, since the localization duration for the Area Scan can be extremely small, a higher accuracy in the localization is desired. While the Cost function achieves a minimum when 10 bits are used, we attribute the slight increase observed when 12 bits were used to the two 12-bit scenarios where larger than expected errors were observed, namely 6-7mm (as shown in Figure 22).
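The relation between the number of bits and the achievable resolution follows from the fact that k bits distinguish 2^k grid cells. A back-of-the-envelope sketch (the even split of bits between the two axes is our assumption, not a detail stated by the paper):

```python
def area_cover_resolution(width_cm, height_cm, bits):
    """With k bits the Area Cover EDF can distinguish 2**k grid cells;
    assuming the bits are split evenly between the axes, each cell is
    roughly this size (in cm)."""
    cells_x = 2 ** (bits // 2)          # bits assigned to the x axis
    cells_y = 2 ** (bits - bits // 2)   # remaining bits for the y axis
    return width_cm / cells_x, height_cm / cells_y

# The 180x140cm µSpotlight field with the 12-bit code from Figure 22
# gives a 64 x 64 grid:
cx, cy = area_cover_resolution(180, 140, 12)
assert (round(cx, 2), round(cy, 2)) == (2.81, 2.19)
```

A cell of roughly 3cm is consistent with the sub-centimeter average errors reported, since a node's position within its cell can be estimated to better than the cell size.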
Figure 23. Localization Duration vs. Number of Bits for the Area Cover EDF

Figure 24. Cost Function vs. Number of Bits for the Area Cover EDF

Figure 25. Localization Error w/ and w/o Error Correction

The two problematic scenarios (shown in Figure 22, where for 12-bit codes we observed errors larger than the event size, due to errors in detection) were further explored by using error correction codes. As described in Section 3.3, we implemented an extended Golay (24, 12) error correction mechanism in our location estimation algorithm. The experimental results are depicted in Figure 25, and show a consistent accuracy. The scenario without error correction codes is simply the same 12-bit code scenario shown in Figure 22. We only investigated the 12-bit scenario, due to its match with the 12-bit data required by the Golay encoding scheme (the extended Golay code producing 24-bit codewords).

5.4 Point Scan - Spotlight system
In this section we describe the experiments performed at a football stadium, using our Spotlight system. The hardware that we had available allowed us to evaluate the Point Scan technique of the Spotlight system. In our evaluation, we were interested to see the performance of the system at different ranges. Figures 26 and 27 show the localization error versus the event size at two different ranges: 46m and 170m.

Figure 26 shows a remarkable accuracy in localization. The errors are in the centimeter range. Our initial, manual measurements of the localization error were most of the time difficult to make, since the spot of the laser was almost perfectly covering the XSM mote. We are able to achieve localization errors of a few
centimeters, which only range-based localization schemes are able to achieve [25]. The observed dependency on the size of the event is similar to the one observed in the µSpotlight system evaluation, and shown in Figure 14. This proved that the µSpotlight system is a viable alternative for investigating complex EDFs, without incurring the costs of the necessary hardware.

Figure 26. Localization Error vs. Event Size for the Spotlight system at 46m

In the experiments performed over a much longer distance between the Spotlight device and the sensor network, the average localization error remains very small. Localization errors of 5-10cm were measured, as Figure 27 shows. We were simply amazed by the accuracy that the system is capable of, when considering that the Spotlight system operated over the length of a football stadium. Throughout our experimentation with the Spotlight system, we have observed localization errors that were simply offsets of the real locations. Since the same phenomenon was observed when experimenting with the µSpotlight system, we believe that with auto-calibration the localization error can be further reduced.

Figure 27. Localization Error vs.
Event Size for the Spotlight system at 170m

The time required for localization using the Spotlight system with a Point Scan EDF is given by: t = (L * l) / (s * Es), where L and l are the dimensions of the sensor network field, s is the scanning speed, and Es is the size of the event. Figure 28 shows the time for localizing a sensor network deployed in an area of the size of a football field using the Spotlight system. Here we ignore the message propagation time from the sensor nodes to the Spotlight device.

From Figure 28 it can be observed that the very small localization errors are prohibitively expensive in the case of the Point Scan. When localization errors of up to 1m are tolerable, the localization duration can be as low as 4 minutes. Localization durations of 5-10 minutes, and localization errors of 1m, are currently state of the art in the realm of range-free localization schemes. And these results are achieved by using the Point Scan scheme, which requires the highest Localization Time, as was shown in Table 1.

Figure 28. Localization Time vs. Event Size for the Spotlight system
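Plugging numbers into the formula above gives a feel for the magnitudes (a back-of-the-envelope sketch; the 100m x 50m field size is our assumption, not a figure from the paper):

```python
def point_scan_time_min(L_m, l_m, speed_m_s, event_m):
    """t = (L * l) / (s * Es), converted to minutes."""
    return (L_m * l_m) / (speed_m_s * event_m) / 60

# Assumed 100m x 50m field, scanned at 9 m/sec with a 1.5m event
# (the upper end of the speeds and event sizes in Figure 28):
t = point_scan_time_min(100, 50, 9, 1.5)
assert round(t, 1) == 6.2   # minutes
```

Halving the event size or the scanning speed doubles the duration, which is why the sub-centimeter settings are prohibitively slow for the Point Scan.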
One important characteristic of the Spotlight system is its range. The two most important factors are the sensitivity of the photosensor and the power of the Spotlight source. We were interested in measuring the range of our Spotlight system, considering our capabilities (an MTS310 sensor board and inexpensive, $12-$85, diode lasers). As a result, we measured the intensity of the laser beam, having the same focus, at different distances. The results are shown in Figure 29.

Figure 29. Localization Range for the Spotlight system

From Figure 29, it can be observed that only a minor decrease in the intensity occurs, due to absorption and possibly our imperfect focusing of the laser beam. A linear fit of the experimental data shows that distances of up to 6500m can be achieved. While we do not expect atmospheric conditions over large distances to be similar to those of our 200m evaluation, there is strong evidence that distances (i.e.
altitudes) of 1000-2000m can easily be achieved. The angle between the laser beam and the vertical should be minimized (less than 45°), as this reduces the difference between the beam cross-section (event size) and the actual projection of the beam on the ground.

In a similar manner, we were interested in finding the maximum size of an event that can be generated by a COTS laser and that is detectable by the existing photosensor. For this, we varied the divergence of the laser beam and measured the light intensity, as given by the ADC count. The results are shown in Figure 30. It can be observed that for the less powerful laser, an event size of 1.5m is the limit. For the more powerful laser, the event size can be as high as 4m.

Through our extensive performance evaluation, we have shown that the Spotlight system is a feasible, highly accurate, low cost solution for the localization of wireless sensor networks. From our experience with sources of laser radiation, we believe that for small and medium size sensor network deployments, in areas of less than 20,000m², the Area Cover scheme is a viable solution. For large sensor network deployments, the Line Scan, or an incremental use of the Area Cover, are very good options.

Figure 30. Detectable Event Sizes that can be generated by COTS lasers

6. OPTIMIZATIONS/LESSONS LEARNED
6.1 Distributed Spotlight System
The proposed design and implementation of the Spotlight system can be considered centralized, due to the gathering of the sensor data and the execution of the Localization Function L(t) by the Spotlight device. We show that this design can easily be transformed into a distributed one, by offering two solutions. One idea is to disseminate in the network information about the path of events generated by the EDF (similar to an equation describing a path), and let the sensor
nodes execute the Localization Function. For example, in the Line Scan scenario, if the starting and ending points for the horizontal and vertical scans, and the times they were reached, are propagated in the network, then any sensor in the network can obtain its location (assuming a constant scanning speed). A second solution is to use anchor nodes which know their positions. In the case of the Line Scan, if three anchors are present, then after detecting the two events, the anchors flood the network with their locations and times of detection. Using the same simple formulas as in the previous scheme, all sensor nodes can infer their positions.

6.2 Localization Overhead Reduction
Another requirement imposed by the Spotlight system design is the use of a time synchronization protocol between the Spotlight device and the sensor network. Relaxing this requirement, and imposing only a time synchronization protocol among the sensor nodes, is a very desirable objective. The idea is to use the knowledge that the Spotlight device has about the speed with which the scanning of the sensor field takes place. If the scanning speed is constant (let's call it s), then the time difference (let's call it Δt) between the event detections of two sensor nodes is, in fact, an accurate measure of the range between them: d = s * Δt. Hence, the Spotlight system can be used for accurate ranging of the distance between any pair of sensor nodes. An important observation is that this ranging technique does not suffer from the limitations of others: the small range and directionality of ultrasound, or the irregularity, fading and multipath of the Received Signal Strength Indicator (RSSI). After the ranges between nodes have been determined (either in a centralized or a distributed manner), graph embedding algorithms can be used to obtain a realization of a rigid graph describing the sensor network topology.
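A minimal sketch of this ranging idea (detection times and the constant scanning speed are the only inputs; the names are illustrative, and d = s * Δt measures distance along the scanning direction):

```python
def pairwise_ranges(detect_times, speed):
    """d = s * Δt: range between every pair of nodes from the time
    difference of their event detections under a constant-speed scan."""
    return {
        (a, b): speed * abs(ta - tb)
        for a, ta in detect_times.items()
        for b, tb in detect_times.items()
        if a < b
    }

# Three nodes detected the scanning event at these times (sec),
# with the beam sweeping at 3 m/sec:
ranges = pairwise_ranges({"n1": 0.0, "n2": 2.0, "n3": 5.0}, speed=3.0)
assert ranges[("n1", "n2")] == 6.0
assert ranges[("n2", "n3")] == 9.0
```

These ranges can then feed the graph embedding step, with no clock shared between the scanning device and the network.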
6.3 Dynamic Event Distribution Function E(t)
Another system optimization is for environments where the sensor node density is not uniform. One disadvantage of the Line Scan technique, when compared to the Area Cover, is the localization time. An idea is to use two scans: one which uses a large event size (and hence has larger localization errors), followed by a second scan in which the event size changes dynamically. The first scan is used for identifying the areas with a higher density of sensor nodes. The second scan uses a larger event in areas where the sensor node density is low, and a smaller event in areas with a higher sensor node density.

A dynamic EDF can also be used when it is very difficult to meet the power requirements for the Spotlight device (imposed by the use of the Area Cover scheme in a very large area). In this scenario, a hybrid scheme can be used: the first scan (Point Scan) is performed quickly, with a very large event size, and is meant to identify, roughly, the location of the sensor network. Subsequent Area Cover scans are then executed on smaller portions of the network, until the entire deployment area is localized.

6.4 Stealthiness
Our implementation of the Spotlight system used visible light for creating events. Using the system during daylight, or in a well lit room, poses challenges due to the solar or fluorescent lamp radiation, which generates a strong background noise. The alternative, which we used in our performance evaluations, was to use the system in a dark room (µSpotlight system) or during the night (Spotlight system). While using the Spotlight system during the night is a good solution for environments where stealthiness is not important (e.g.
military applications), divulging the presence and location of a sensor field could seriously compromise the efficacy of the system.

Figure 31. Fluorescent Light Spectra (top), Spectral Response for CdSe cells (bottom)

A solution to this problem, which we experimented with in the µSpotlight system, was to use an optical filter on top of the light sensor. The spectral response of a CdSe photo sensor spans almost the entire visible domain [37], with a peak at about 700nm (Figure 31, bottom). As shown in Figure 31 (top), fluorescent light has no significant components above 700nm. Hence, a simple red filter (Schott RG-630), which transmits all light with wavelengths approximately above 630nm, coupled with an Event Distribution Function that generates events with wavelengths above the same threshold, would allow the use of the system when fluorescent light is present. A solution for the Spotlight system to be stealthy at night is to use a source of infra-red radiation (i.e.
laser) emitting in the range [750, 1000]nm. For daylight use of the Spotlight system, the challenge is to overcome the strong background of natural light. A solution we are considering is the use of a narrow-band optical filter, centered at the wavelength of the laser radiation. The feasibility and cost-effectiveness of this solution remain to be proven.

6.5 Network Deployed in Unknown Terrain
A further generalization is when the map of the terrain where the sensor network was deployed is unknown. While this is highly unlikely for many civil applications of wireless sensor network technologies, it is not difficult to imagine military applications where the sensor network is deployed in a hostile and unknown terrain. A solution to this problem is a system that uses two Spotlight devices, or equivalently, the use of the same device from two distinct positions, executing a complete localization procedure from each of them. In this scheme, the position of a sensor node is uniquely determined by the intersection of the two location directions obtained by the system. The relative localization (for each pair of Spotlight devices) will require an accurate knowledge of the 3 translation and 3 rigid-body rotation parameters for the Spotlight's position and orientation (as mentioned in Section 3). This generalization is also applicable to scenarios where, due to terrain variations, there is no single aerial point with a direct line of sight to all sensor nodes, e.g.
hilly terrain. By executing the localization procedure from different aerial points, the probability of establishing a line of sight with all the nodes increases. For some military scenarios [1][12], where open terrain is prevalent, the existence of a line of sight is not a limiting factor. In light of this, the Spotlight system cannot be used in forests or indoor environments.

7. CONCLUSIONS AND FUTURE WORK
In this paper we presented the design, implementation and evaluation of a localization system for wireless sensor networks, called Spotlight. Our localization solution does not require any additional hardware for the sensor nodes, other than what already exists. All the complexity of the system is encapsulated into a single Spotlight device. Our localization system is reusable, i.e. the costs can be amortized through several deployments, and its performance is not affected by the number of sensor nodes in the network.

Our experimental results, obtained from a real system deployed outdoors, show that the localization error is less than 20cm. This error is currently state of the art, even for range-based localization systems, and it is 75% smaller than the error obtained when using GPS devices or when the manual deployment of sensor nodes is a feasible option [31].

As future work, we would like to explore the self-calibration and self-tuning of the Spotlight system. The accuracy of the system can be further improved if the distribution of the event, instead of a single timestamp, is reported. A generalization could be obtained by reformulating the problem as an angular estimation problem that provides the building blocks for more general localization techniques.

8. ACKNOWLEDGEMENTS
This work was supported by the DARPA IXO office, under the NEST project (grant number F336616-01-C-1905) and by NSF grant CCR-0098269. We would like to thank S. Cornwell for allowing us to run experiments in the stadium, M.
Klopf for his assistance with optics, and the anonymous reviewers and our shepherd, Koen Langendoen, for their valuable feedback.

9. REFERENCES
[1] A. Arora, P. Dutta, S. Bapat, V. Kulathumani, H. Zhang, V. Naik, V. Mittal, H. Cao, M. Demirbas, M. Gouda, Y. Choi, T. Herman, S. Kulkarni, U. Arumugam, M. Nesterenko, A. Vora, M. Miyashita, A Line in the Sand: A Wireless Sensor Network for Target Detection, Classification and Tracking, in Computer Networks 46(5), 2004.
[2] P. Bahl, V.N. Padmanabhan, RADAR: An In-Building RF-based User Location and Tracking System, in Proceedings of Infocom, 2000.
[3] M. Broxton, J. Lifton, J. Paradiso, Localizing a Sensor Network via Collaborative Processing of Global Stimuli, in Proceedings of EWSN, 2005.
[4] N. Bulusu, J. Heidemann, D. Estrin, GPS-less Low Cost Outdoor Localization for Very Small Devices, in IEEE Personal Communications Magazine, 2000.
[5] P. Corke, R. Peterson, D. Rus, Networked Robots: Flying Robot Navigation Using a Sensor Net, in ISSR, 2003.
[6] L. Doherty, L. E. Ghaoui, K. Pister, Convex Position Estimation in Wireless Sensor Networks, in Proceedings of Infocom, 2001.
[7] P. Dutta, M. Grimmer, A. Arora, S. Bibyk, D. Culler, Design of a Wireless Sensor Network Platform for Detecting Rare, Random, and Ephemeral Events, in Proceedings of IPSN, 2005.
[8] E. Elnahrawy, X. Li, R. Martin, The Limits of Localization using RSSI, in Proceedings of SECON, 2004.
[9] D. Fox, W. Burgard, S. Thrun, Markov Localization for Mobile Robots in Dynamic Environments, in Journal of Artificial Intelligence Research, 1999.
[10] D. Fox, W. Burgard, F. Dellaert, S. Thrun, Monte Carlo Localization: Efficient Position Estimation for Mobile Robots, in Conference on Artificial Intelligence, 2000.
[11] D. Ganesan, B. Krishnamachari, A. Woo, D. Culler, D. Estrin, S. Wicker, Complex Behaviour at Scale: An Experimental Study of Low Power Wireless Sensor Networks, Technical Report UCLA-TR 01-0013, 2001.
[12] T. He, S. Krishnamurthy, J. A.
Stankovic, T. Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui, B. Krogh, An Energy-Efficient Surveillance System Using Wireless Sensor Networks, in Proceedings of Mobisys, 2004.\n[13] T. He, C. Huang, B. Blum, J. A. Stankovic, T. Abdelzaher, Range-Free Localization Schemes for Large Scale Sensor Networks in Proceedings of Mobicom, 2003.\n[14] L. Hu, D. Evans, Localization for Mobile Sensor Networks, in Proceedings of Mobicom, 2004.\n[15] Y. Kwon, K. Mechitov, S. Sundresh, W. Kim, G. Agha, Resilient Localization for Sensor Networks in Outdoor Environments, UIUC Technical Report, 2004.\n25 [16] K. Langendoen, N. Reijers, Distributed Localization in Wireless Sensor Networks, A Comparative Study, in Computer Networks vol.\n43, 2003.\n[17] K. Lorincz, M. Welsh, MoteTrack: A Robust, Decentralized Approach to RF-Based Location Tracking, in Proceedings of Intl..\nWorkshop on Location and Context-Awareness, 2005.\n[18] M. Maroti, B. Kusy, G. Simon, A. Ledeczi, The Flooding Time Synchronization Protocol, in Proceedings of Sensys, 2004.\n[19] D. Moore, J. Leonard, D. Rus, S. Teller, Robust Distributed Network Localization with Noisy Range Measurements in Proceedings of Sensys, 2004.\n[20] R. Nagpal, H. Shrobe, J. Bachrach, Organizing a Global Coordinate System for Local Information on an Adhoc Sensor Network, in A.I Memo 1666.\nMIT A.I. Laboratory, 1999.\n[21] D. Niculescu, B. Nath, DV-based Positioning in Adhoc Networks in Telecommunication Systems, vol.\n22, 2003.\n[22] E. Osterweil, T. Schoellhammer, M. Rahimi, M. Wimbrow, T. Stathopoulos, L.Girod, M. Mysore, A.Wu, D. Estrin, The Extensible Sensing System, CENS-UCLA poster, 2004.\n[23] B.W. Parkinson, J. Spilker, Global Positioning System: theory and applications, in Progress in Aeronautics and Astronautics, vol.\n163, 1996.\n[24] P.N. Pathirana, N. Bulusu, A. Savkin, S. Jha, Node Localization Using Mobile Robots in Delay-Tolerant Sensor Networks, in Transactions on Mobile Computing, 2004.\n[25] N. Priyantha, A. 
Chakaborty, H. Balakrishnan, The Cricket Location-support System, in Proceedings of MobiCom, 2000.\n[26] N. Priyantha, H. Balakrishnan, E. Demaine, S. Teller, Mobile-Assisted Topology Generation for Auto-Localization in Sensor Networks, in Proceedings of Infocom, 2005.\n[27] A. Savvides, C. Han, M. Srivastava, Dynamic Fine-grained localization in Adhoc Networks of Sensors, in Proceedings of MobiCom, 2001.\n[28] Y. Shang, W. Ruml, Improved MDS-Based Localization, in Proceedings of Infocom, 2004.\n[29] M. Sichitiu, V. Ramadurai,Localization of Wireless Sensor Networks with a Mobile Beacon, in Proceedings of MASS, 2004.\n[30] G. Simon, M. Maroti, A. Ledeczi, G. Balogh, B. Kusy, A. Nadas, G. Pap, J. Sallai, Sensor Network-Base Countersniper System, in Proceedings of Sensys, 2004.\n[31] R. Stoleru, T. He, J.A. Stankovic, Walking GPS: A Practical Solution for Localization in Manually Deployed Wireless Sensor Networks, in Proceedings of EmNetS, 2004.\n[32] R. Stoleru, J.A. Stankovic, Probability Grid: A Location Estimation Scheme for Wireless Sensor Networks, in Proceedings of SECON, 2004.\n[33] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, D. Culler, An Analysis of a Large Scale Habitat Monitoring Application, in Proceedings of Sensys, 2004.\n[34] K. Whitehouse, A. Woo, C. Karlof, F. Jiang, D. Culler, The Effects of Ranging Noise on Multi-hop Localization: An Empirical Study, in Proceedings of IPSN, 2005.\n[35] K. Whitehouse, D. Culler, Calibration as Parameter Estimation in Sensor Networks, in Proceedings of WSNA, 2002.\n[36] P. Zhang, C. Sadler, S. A. Lyon, M. 
Martonosi, Hardware Design Experiences in ZebraNet, in Proceedings of Sensys, 2004.\n[37] Selco Products Co..\nConstruction and Characteristics of CdS Cells, product datasheet, 2004 26","lvl-3":"A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks\nABSTRACT\nThe problem of localization of wireless sensor nodes has long been regarded as very difficult to solve, when considering the realities of real world environments.\nIn this paper, we formally describe, design, implement and evaluate a novel localization system, called Spotlight.\nOur system uses the spatio-temporal properties of well controlled events in the network (e.g., light), to obtain the locations of sensor nodes.\nWe demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes, as required by other localization systems.\nWe evaluate the performance of our system in deployments of Mica2 and XSM motes.\nThrough performance evaluations of a real system deployed outdoors, we obtain a 20cm localization error.\nA sensor network, with any number of nodes, deployed in a 2500m2 area, can be localized in under 10 minutes, using a device that costs less than $1000.\nTo the best of our knowledge, this is the first report of a sub-meter localization error, obtained in an outdoor environment, without equipping the wireless sensor nodes with specialized ranging hardware.\n1.\nINTRODUCTION\nRecently, wireless sensor network systems have been used in many promising applications including military surveillance, habitat monitoring, wildlife tracking etc. 
A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks

ABSTRACT

The problem of localizing wireless sensor nodes has long been regarded as very difficult to solve under the realities of real-world environments. In this paper, we formally describe, design, implement and evaluate a novel localization system, called Spotlight. Our system uses the spatio-temporal properties of well-controlled events in the network (e.g., light) to obtain the locations of sensor nodes. We demonstrate that high localization accuracy can be achieved without the aid of expensive hardware on the sensor nodes, as required by other localization systems. We evaluate the performance of our system in deployments of Mica2 and XSM motes. Through performance evaluations of a real system deployed outdoors, we obtain a 20 cm localization error. A sensor network with any number of nodes, deployed in a 2500 m² area, can be localized in under 10 minutes, using a device that costs less than $1000. To the best of our knowledge, this is the first report of a sub-meter
localization error, obtained in an outdoor environment, without equipping the wireless sensor nodes with specialized ranging hardware.

1. INTRODUCTION

Recently, wireless sensor network systems have been used in many promising applications, including military surveillance, habitat monitoring and wildlife tracking [12] [22] [33] [36]. While many middleware services supporting these applications have been designed and implemented successfully, localization, i.e., finding the position of sensor nodes, remains one of the most difficult research challenges to solve in practice. Since most emerging applications based on networked sensor nodes require location awareness to assist their operation, such as annotating sensed data with location context, the ability of a sensor node to find its own location is an indispensable requirement. Many approaches have been proposed in the literature [4] [6] [13] [14] [19] [20] [21] [23] [27] [28]; however, it is still not clear how these solutions can be deployed practically and economically. An on-board GPS [23] is a typical high-end solution, which requires sophisticated hardware to achieve high-resolution time synchronization with satellites. The constraints on power and cost for tiny sensor nodes preclude this as a viable solution. Other solutions require per-node devices that can perform ranging among neighboring nodes. The difficulties of these approaches are twofold. First, under constraints of form factor and power supply, the effective ranges of such devices are very limited. For example, the effective range of the ultrasonic transducers used in the Cricket system is less than 2 meters when the sender and receiver are not facing each other [26]. Second, since most sensor nodes are static, i.e.
the location is not expected to change, it is not cost-effective to equip these sensors with special circuitry just for a one-time localization. To overcome these limitations, many range-free localization schemes have been proposed. Most of these schemes estimate the locations of sensor nodes by exploiting the radio connectivity information among neighboring nodes. These approaches eliminate the need for high-cost specialized hardware, at the cost of less accurate localization. In addition, the radio propagation characteristics vary over time and are environment-dependent, thus imposing high calibration costs on range-free localization schemes. With such limitations in mind, this paper addresses the following research challenge: how to reconcile the need for high accuracy in location estimation with the cost of achieving it. Our answer to this challenge is a localization system called Spotlight. This system employs an asymmetric architecture, in which sensor nodes do not need any hardware beyond what they already have. All the sophisticated hardware and computation reside on a single Spotlight device. The Spotlight device uses a steerable laser light source to illuminate the sensor nodes placed within a known terrain. We demonstrate that this localization is much more accurate (i.e., tens of centimeters) than range-based localization schemes, and that it has a much longer effective range (i.e., thousands of meters) than solutions based on ultrasound/acoustic ranging. At the same time, since only a single sophisticated device is needed to localize the whole network, the amortized cost is much smaller than the cost of adding hardware components to the individual sensors.

2. RELATED WORK

In this section, we discuss prior work on localization in two major categories: the range-based localization schemes (which use either expensive, per-node ranging devices for high accuracy, or less accurate ranging solutions, such as the Received
Signal Strength Indicator (RSSI)), and the range-free schemes, which use only connectivity information (hop-by-hop) as an indication of proximity among the nodes. The localization problem is a fundamental research problem in many domains. In the field of robotics, it has been studied extensively [9] [10]. The reported localization errors are on the order of tens of centimeters when specialized ranging hardware, such as a laser range finder or ultrasound, is used. Due to the high cost and non-negligible form factor of the ranging hardware, these solutions cannot simply be applied to sensor networks. The RSSI has been an attractive solution for estimating the distance between a sender and a receiver. The RADAR system [2] uses the RSSI to build a centralized repository of signal strengths at various positions with respect to a set of beacon nodes. The location of a mobile user is estimated within a few meters. In a similar approach, MoteTrack [17] distributes the reference RSSI values to the beacon nodes. Solutions that use RSSI and do not require beacon nodes have also been proposed [5] [14] [24] [26] [29]. They all share the idea of using a mobile beacon. The sensor nodes that receive the beacons apply different algorithms for inferring their location. In [29], Sichitiu proposes a solution in which the nodes that receive the beacon construct, based on the RSSI value, a constraint on their position estimate. In [26], Priyantha et al. propose MAL, a localization method in which a mobile node (moving strategically) assists in measuring distances between node pairs, until the constraints on the distances generate a rigid graph. In [24], Pathirana et al.
formulate the localization problem as online estimation in a nonlinear dynamic system and propose a Robust Extended Kalman Filter for solving it. Elnahrawy [8] provides strong evidence of inherent limitations on localization accuracy using RSSI in indoor environments. A more precise ranging technique uses the time difference between a radio signal and an acoustic wave to obtain pairwise distances between sensor nodes. This approach produces smaller localization errors, at the cost of additional hardware. The Cricket location-support system [25] can achieve a location granularity of tens of centimeters with short-range ultrasound transceivers. AHLoS, proposed by Savvides et al. [27], employs Time of Arrival (ToA) ranging techniques that require extensive hardware and solving relatively large nonlinear systems of equations. A similar ToA technique is employed in [3]. In [30], Simon et al. implement a distributed system (using acoustic ranging) that locates a sniper in an urban terrain. Acoustic ranging for localization is also used by Kwon et al. [15]. The reported localization errors vary from 2.2 m to 9.5 m, depending on the type (centralized vs. distributed) of the Least Square Scaling algorithm used. For wireless sensor networks, ranging is a difficult option. The hardware cost, the energy expenditure, the form factor and the small range are all difficult compromises, and it is hard to envision cheap, unreliable, resource-constrained devices making use of range-based localization solutions. However, the high localization accuracy achievable by these schemes is very desirable. To overcome the challenges posed by range-based localization schemes when applied to sensor networks, a different approach has been proposed and evaluated in the past. This approach is called range-free, and it attempts to obtain location information from the proximity to a set of known beacon nodes. Bulusu et al.
propose in [4] a localization scheme, called Centroid, in which each node localizes itself to the centroid of its proximate beacon nodes. In [13], He et al. propose APIT, a scheme in which each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacon nodes heard by the node. The Global Coordinate System [20], developed at MIT, uses a priori knowledge of the node density in the network to estimate the average hop distance. The DV-* family of localization schemes [21] uses the hop count from known beacon nodes to the nodes in the network to infer the distance. The majority of range-free localization schemes have been evaluated in simulations or controlled environments. Several studies [11] [32] [34] have emphasized the challenges that real environments pose. Langendoen and Reijers present a detailed, comparative study of several localization schemes in [16].
To the best of our knowledge, Spotlight is the first range-free localization scheme that works very well in an outdoor environment. Our system requires a line of sight between a single device and the sensor nodes, and the map of the terrain where the sensor field is located. The Spotlight system has a long effective range (thousands of meters) and does not require any infrastructure or additional hardware for the sensor nodes. The Spotlight system combines the advantages, and does not suffer from the disadvantages, of the two localization classes.
3. SPOTLIGHT SYSTEM DESIGN
The main idea of the Spotlight localization system is to generate controlled events in the field where the sensor nodes were deployed. An event could be, for example, the presence of light in an area. Using the time when an event is perceived by a sensor node and the spatio-temporal properties of the generated events, spatial information (i.e. 
location) regarding the sensor node can be inferred.
Figure 1. Localization of a sensor network using the Spotlight system
We envision, and depict in Figure 1, a sensor network deployment and localization scenario as follows: wireless sensor nodes are randomly deployed from an unmanned aerial vehicle. After deployment, the sensor nodes self-organize into a network and execute a time-synchronization protocol. An aerial vehicle (e.g. a helicopter), equipped with a device called Spotlight, flies over the network and generates light events. The sensor nodes detect the events and report back to the Spotlight device, through a base station, the timestamps when the events were detected. The Spotlight device computes the location of the sensor nodes.
During the design of our Spotlight system, we made the following assumptions:
- the sensor network to be localized is connected, and a middleware able to forward data from the sensor nodes to the Spotlight device is present.
- the aerial vehicle has a very good knowledge about its position and orientation (6 parameters: 3 translation and 3 rigid-body rotation) and it possesses the map of the field where the network was deployed.
- a powerful Spotlight device is available, able to generate spatially large events that can be detected by the sensor nodes even in the presence of background noise (daylight).
- a line of sight between the Spotlight device and the sensor nodes exists.
These are simplifying assumptions, meant to reduce the complexity of the presentation, for clarity. We propose solutions that do not rely on these simplifying assumptions in Section 6. In order to formally describe and generalize the Spotlight localization system, we introduce the following definitions.
3.1 Definitions and Problem Formulation
Let's assume that the space A ⊂ R³ contains all sensor nodes N, and that each node Ni is positioned at pi(x, y, z). To obtain pi(x, y, z), a Spotlight localization system 
needs to support three main functions, namely an Event Distribution Function (EDF) E(t), an Event Detection Function D(e), and a Localization Function L(Ti). They are formally defined as follows:
Definition 1: An event e(t, p) is a detectable phenomenon that occurs at time t and at point p ∈ A. Examples of events are light, heat, smoke, sound, etc. Let Ti = {ti1, ti2, ..., tin} be the set of n timestamps of events detected by a node i. Let T' = {t1', t2', ..., tm'} be the set of m timestamps of events generated in the sensor field.
Definition 2: The Event Detection Function D(e) defines a binary detection algorithm. For a given event e: D(e) = true if event e is detected, and D(e) = false if event e is not detected.
Definition 3: The Event Distribution Function E(t) defines the set of points in A where events are present at time t: E(t) = {p | e(t, p) occurs}.
Definition 4: The Localization Function L(Ti) defines a localization algorithm with input Ti, the sequence of timestamps of events detected by node i: L(Ti) = pi, the estimated position of node i.
Figure 2. Spotlight system architecture
As shown in Figure 2, the Event Detection Function D(e) is supported by the sensor nodes. It is used to determine whether an external event happens or not. It can be implemented through either a simple threshold-based detection algorithm or other advanced digital signal processing techniques. The Event Distribution E(t) and Localization L(Ti) Functions are implemented by the Spotlight device. The Localization Function is an aggregation algorithm which calculates the intersection of multiple sets of points. The Event Distribution Function E(t) describes the distribution of events over time. It is the core of the Spotlight system and it is much more sophisticated than the other two functions. Due to the fact that E(t) is realized by the Spotlight device, the hardware requirements for the sensor nodes remain minimal.
With the support of these three functions, the localization process goes as follows:
1) A Spotlight device distributes events in the space A over a period of time.
2) During the event distribution, sensor nodes record the time sequence Ti = {ti1, ti2, ..., tin} at which they 
detect the events.
3) After the event distribution, each sensor node sends the detection time sequence back to the Spotlight device.
4) The Spotlight device estimates the location of a sensor node i, using the time sequence Ti and the known E(t) function.
The Event Distribution Function E(t) is the core technique used in the Spotlight system and we propose three designs for it. These designs have different tradeoffs, and a cost comparison is presented in Section 3.5.
3.2 Point Scan Event Distribution Function
To illustrate the basic functionality of a Spotlight system, we start with a simple sensor system where a set of nodes are placed along a straight line (A = [0, l] ⊂ R). The Spotlight device generates point events (e.g. light spots) along this line with constant speed s. The set of timestamps of events detected by a node i is Ti = {ti1}. The Event Distribution Function E(t) is: E(t) = {p | p = t · s}, where t ∈ [0, l/s]. The resulting localization function is: L(Ti) = {p | p = ti1 · s}, where D(e(ti1, pi)) = true for node i positioned at pi.
The implementation of the Event Distribution Function E(t) is straightforward. As shown in Figure 3(a), when a light source emits a beam of light with the angular speed ω(t) = d/dt arctan(t · s/d) = s · d/(d² + s²t²), a point event moving with constant speed s is generated along the line situated at distance d.
Figure 3. The implementation of the Point Scan EDF
The Point Scan EDF can be generalized to the case where nodes are placed in a two-dimensional plane R². In this case, the Spotlight system progressively scans the plane to activate the sensor nodes. This scenario is depicted in Figure 3(b).
3.3 Line Scan Event Distribution Function
Some devices, e.g. 
diode lasers, can generate an entire line of events simultaneously. With these devices, we can support the Line Scan Event Distribution Function easily. We assume that the sensor nodes are placed in a two-dimensional plane (A = [l x l] ⊂ R²) and that the scanning speed is s. The set of timestamps of events detected by a node i is Ti = {ti1, ti2}.
Figure 4. The implementation of the Line Scan EDF
The Line Scan EDF consists of a horizontal scan, which generates the vertical line of events {p | p = (t · s, y), y ∈ [0, l]} for t ∈ [0, l/s], followed by a vertical scan, which generates the horizontal line of events {p | p = (x, t · s − l), x ∈ [0, l]} for t ∈ (l/s, 2l/s]. The resulting localization function is: L(Ti) = {p | p = (ti1 · s, ti2 · s − l)}, where D(e(ti1, pi)) = true and D(e(ti2, pi)) = true for node i positioned at pi.
3.4 Area Cover Event Distribution Function
Other devices, such as light projectors, can generate events that cover an area. This allows the implementation of the Area Cover EDF. The idea of the Area Cover EDF is to partition the space A into multiple sections and assign a unique binary identifier, called code, to each section. Let's suppose that the localization is done within a plane (A ⊂ R²). Each section Sk within A has a unique code k. The Area Cover EDF is then defined as follows: at time t, events cover all sections whose code k has the t-th bit equal to 1, i.e., E(t) = {p | p ∈ Sk, bit t of k is 1}; the resulting localization function is L(Ti) = COG(Sk), where k is the code reconstructed from the detection timestamps Ti, and COG(Sk) denotes the center of gravity of Sk.
We illustrate the Area Cover EDF with a simple example. As shown in Figure 5, the plane A is divided in 16 sections. Each section Sk has a unique code k. The Spotlight device distributes the events according to these codes: at time j, a section Sk is covered by an event (lit by light) if the jth bit of k is 1. A node residing anywhere in the section Sk is localized at the center of gravity of that section. For example, nodes within section 1010 detect the events at times T = {1, 3}. At t = 4, the section where each node resides can be determined.
A more accurate localization requires a finer partitioning of the plane, hence the number of bits in the code will increase. Considering the noise that is present in a real, outdoor environment, it is easy to observe that a relatively small error in detecting the correct bit pattern could result in a large localization 
error. Returning to the example shown in Figure 5, if a sensor node is located in the section with code 0000 and, due to the noise, at time t = 3 it thinks it detected an event, it will incorrectly conclude that its code is 1000, and it positions itself two squares below its correct position. The localization accuracy can deteriorate even further if multiple errors are present in the transmission of the code.
A natural solution to this problem is to use error-correcting codes, which greatly reduce the probability of an error without paying the price of a re-transmission or lengthening the transmission time too much. Several error correction schemes have been proposed in the past. Two of the most notable ones are the Hamming (7, 4) code and the Golay (23, 12) code. Both are perfect linear error correcting codes. The Hamming coding scheme can detect up to 2-bit errors and correct 1-bit errors. In the Hamming (7, 4) scheme, a message having 4 bits of data (e.g. dddd, where d is a data bit) is transmitted as a 7-bit word by adding 3 error control bits (e.g. dddpdpp, where p is a parity bit).
Figure 5. The steps of Area Cover EDF. The events cover the shaded areas.
The steps of the Area Cover technique, when using the Hamming (7, 4) scheme, are shown in Figure 6. Golay codes can detect up to 6-bit errors and correct up to 3-bit errors. Similar to Hamming (7, 4), Golay constructs a 23-bit codeword from 12-bit data. Golay codes have been used in satellite and spacecraft data transmission and are most suitable in cases where short codeword lengths are desirable.
Figure 6. The steps of Area Cover EDF with Hamming (7, 4) ECC. The events cover the shaded areas.
Let's assume a 1-bit error probability of 0.01 and a 12-bit message that needs to be transmitted. The probability of a failed transmission is thus: 0.11, if no error detection and correction is used; 0.0061 for the Hamming scheme (i.e. more than a 1-bit error); and 0.000076 for the Golay scheme (i.e. 
more than 3-bit errors). Golay is thus 80 times more robust than the Hamming scheme, which is in turn 20 times more robust than the scheme with no error correction.
Considering that a limited number of corrections is possible by any coding scheme, a natural question arises: can we minimize the localization error when there are errors that cannot be corrected? This can be achieved by a clever placement of codes in the grid. As shown in Figure 7, placement A, in the presence of a 1-bit error, has a smaller average localization error when compared to placement B. The objective of our code placement strategy is to reduce the total Euclidean distance between all pairs of codes with Hamming distances of at most K, the largest number of expected 1-bit errors.
Figure 7. Different code placement strategies
Formally, a placement is represented by a function P: [0, l]^d → C, which assigns a code to every coordinate in the d-dimensional cube of size l (e.g., in the planar case, we place codes in a 2-dimensional grid). We denote by dE(i, j) the Euclidean distance and by dH(i, j) the Hamming distance between two codes i and j. In a noisy environment, dH(i, j) determines the crossover probability between the two codes. For the case of independent detections, the higher dH(i, j) is, the lower the crossover probability will be. The objective function is defined as follows:
minimize Σ dE(i, j) over all placements P, where the sum is taken over all pairs of codes i, j with dH(i, j) ≤ K. (10)
Equation 10 is a non-linear and non-convex programming problem. In general, it is analytically hard to obtain the global minimum. To overcome this, we propose a Greedy Placement method to obtain suboptimal results. In this method we initialize the 2-dimensional grid with codes. Then we repeatedly swap codes within the grid, to minimize the objective function. For each swap, we greedily choose the pair of codes which reduces the objective function (Equation 10) the most. The proposed Greedy Placement method ends when no swap of codes can further minimize the objective 
function.
For evaluation, we compared the average localization error in the presence of K-bit errors for two strategies: the proposed Greedy Placement and the Row-Major Placement (which places the codes consecutively in the array, in row-first order).
Figure 8. Localization error with code placement and no ECC
As Figure 8 shows, if no error detection/correction capability is present and 1-bit errors occur, then our Greedy Placement method can reduce the localization error by an average of 23%, when compared to the Row-Major Placement. If error detection and correction schemes are used (e.g. Hamming (12, 8)) and 3-bit errors occur (K = 3), then the Greedy Placement method reduces the localization error by 12% when compared to the Row-Major Placement, as shown in Figure 9. If K = 1, then there is no benefit in using the Greedy Placement method, since a 1-bit error can be corrected by the Hamming scheme.
Figure 9. Localization error with code placement and Hamming ECC
3.5 Event Distribution Function Analysis
Although all three aforementioned techniques are able to localize the sensor nodes, they differ in the localization time, communication overhead, and energy consumed by the Event Distribution Function (let's call it the Event Overhead). Let's assume that all sensor nodes are located in a square with edge size D, that the Spotlight device can generate N events (e.g. Point, Line and Area Cover events) every second, and that the maximum tolerable localization error is r. 
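To make the Greedy Placement method described above concrete, the following is a minimal Python sketch under our own simplifying assumptions (a small square grid, integer codes starting from a row-major layout, and a full re-evaluation of the objective for every candidate swap; grid size and code width are illustrative, not the paper's parameters):

```python
from itertools import combinations

def hamming(a, b):
    # number of differing bits between two codes
    return bin(a ^ b).count("1")

def objective(grid, k):
    # total Euclidean distance between all pairs of codes whose
    # Hamming distance is at most k (the Equation 10 objective)
    total = 0.0
    for (p1, c1), (p2, c2) in combinations(grid.items(), 2):
        if hamming(c1, c2) <= k:
            total += ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
    return total

def greedy_placement(side, k):
    # initialize the grid with a Row-Major Placement of codes
    grid = {(r, c): r * side + c for r in range(side) for c in range(side)}
    while True:
        base = objective(grid, k)
        best_gain, best_swap = 0.0, None
        # greedily pick the swap of two codes that reduces Equation 10 the most
        for p1, p2 in combinations(grid, 2):
            grid[p1], grid[p2] = grid[p2], grid[p1]
            gain = base - objective(grid, k)
            grid[p1], grid[p2] = grid[p2], grid[p1]  # undo trial swap
            if gain > best_gain:
                best_gain, best_swap = gain, (p1, p2)
        if best_swap is None:
            # no swap can further minimize the objective: done
            return grid
        p1, p2 = best_swap
        grid[p1], grid[p2] = grid[p2], grid[p1]
```

For a 4x4 grid of 4-bit codes (the setting of Figure 5), `greedy_placement(4, 1)` returns a placement whose objective value is no worse than the row-major starting point, since only strictly improving swaps are applied.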
Table 1 presents the execution cost comparison of the three different Spotlight techniques.
Table 1. Execution Cost Comparison
Table 1 indicates that the Event Overhead for the Point Scan method is the smallest: it requires a one-time coverage of the area, hence the D² term. However, the Point Scan takes a much longer time than the Area Cover technique, which finishes in log_r D seconds. The Line Scan method trades the Event Overhead well against the localization time. By doubling the Event Overhead, the Line Scan method takes only an r/(2D) fraction of the time to complete, when compared with the Point Scan method. From Table 1, it can be observed that the execution costs do not depend on the number of sensor nodes to be localized. It is important to remark on the ratio of Event Overhead per unit time, which is indicative of the power requirement for the Spotlight device. This ratio is constant for the Point Scan (r² · N), while it grows linearly with the area for the Area Cover (D² · N/2). If the deployment area is very large, the use of the Area Cover EDF is prohibitively expensive, if not impossible. For practical purposes, the Area Cover is a viable solution for small to medium size networks, while the Line Scan works well for large networks. We discuss the implications of the power requirement for the Spotlight device, and offer a hybrid solution, in Section 6.
3.6 Localization Error Analysis
The accuracy of localization with the Spotlight technique depends on many aspects. The major factors that were considered during the implementation of the system are discussed below:
- Time Synchronization: the Spotlight system exchanges time stamps between sensor nodes and the Spotlight device. It is necessary for the system to reach consensus on global time through synchronization. Due to the uncertainty in hardware processing and wireless communication, we can only confine such errors within certain bounds (e.g. 
one jiffy). An imprecise input to the Localization Function L(T) leads to an error in node localization.
- Uncertainty in Detection: the sampling rate of the sensor nodes is finite; consequently, there will be an unpredictable delay between the time when an event is truly present and when the sensor node detects it. Lower sampling rates will generate larger localization errors.
- Size of the Event: the events distributed by the Spotlight device cannot be infinitely small. If a node detects one event, it is hard for it to estimate its exact location within the event.
- Realization of the Event Distribution Function: the EDF defines the locations of events at time t. Due to limited accuracy (e.g. mechanical imprecision), a Spotlight device might generate events at locations different from where these events are supposed to be.
It is important to remark that the localization error is independent of the number of sensor nodes in the network. This independence, as well as the aforementioned independence of the execution cost, indicates the very good scalability properties (with the number of sensor nodes, but not with the area of deployment) that the Spotlight system possesses.
4. SYSTEM IMPLEMENTATION
For our performance evaluation we implemented two Spotlight systems. Using these two implementations we were able to investigate the full spectrum of Event Distribution techniques, proposed in Section 3, at a reduced "one time" cost (less than $1,000). The first implementation, called μSpotlight, had a short range (10-20 meters); however, its capability of generating the entire spectrum of EDFs made it very useful. We used this implementation mainly to investigate the capabilities of the Spotlight system and tune its performance. It was not intended to represent the full solution, but only a scaled-down version of the system. The second implementation, the Spotlight system, had a much longer range (as far as 6500m), but it was limited in the 
types of EDFs that it can generate. The goal of this implementation was to show how the Spotlight system works in a real, outdoor environment, and to show correlations with the experimental results obtained from the μSpotlight system implementation. In the remaining part of this section, we describe how we implemented the three components (Event Distribution, Event Detection and Localization functions) of the Spotlight architecture, and the time synchronization protocol, a key component of our system.
4.1 μSpotlight System
The first system we built, called μSpotlight, used as the Spotlight device an Infocus LD530 projector connected to an IBM Thinkpad laptop. The system is shown in Figure 10. The Event Distribution Function was implemented as a Java GUI. Due to the stringent timing requirements and the delay caused by the buffering in the windowing system of a PC, we used the Full-Screen Exclusive Mode API provided by Java2. This allowed us to bypass the windowing system and more precisely estimate the time when an event is displayed by the projector, hence a higher accuracy of the timestamps of events. Because of the 50Hz refresh rate of our projector, there was still an uncertainty of 20 msec in the time stamping of the events. We explored the possibility of modifying the Linux kernel to expose the vertical synch (VSYNC) interrupt, generated by the displaying device after each screen refresh, out of kernel mode. The performance evaluation results showed, however, that this level of accuracy was not needed. The sensor nodes that we used were Berkeley Mica2 motes equipped with MTS310 multi-sensor boards from Crossbow. This sensor board contains a CdSe photo sensor which can detect the light from the projector.
Figure 10. μSpotlight system implementation
With this implementation of the Spotlight system, we were able to generate Point, Line and Area Scan events.
4.2 Spotlight System
The second Spotlight system we built 
used, as the Spotlight device, diode lasers, a computerized telescope mount (Celestron CG-5GT, shown in Figure 11), and an IBM Thinkpad laptop. The laptop was connected, through RS232 interfaces, to the telescope mount and to one XSM600CA [7] mote, acting as a base station. The diode lasers we used ranged in power from 7mW to 35mW. They emitted at 650nm, close to the point of highest sensitivity for the CdSe photosensor. The diode lasers were equipped with lenses that allowed us to control the divergence of the beam.
Figure 11. Spotlight system implementation
The telescope mount has worm gears for a smooth motion and high-precision angular measurements. The two angular measures that we used were the so-called Alt (from Altitude) and Az (from Azimuth). In astronomy, the Altitude of a celestial object is its angular distance above or below the celestial horizon, and the Azimuth is the angular distance of an object eastwards of the meridian, along the horizon. The laptop computer, through a Java GUI, controls the motion of the telescope mount, orienting it such that a full Point Scan of an area is performed, similar to the one described in Figure 3(b). For each turning point i, the 3-tuple (Alt_i, Az_i, t_i), consisting of the two angles and the timestamp, is recorded. The Spotlight system uses the timestamp received from a sensor node j to obtain the angular measures Alt_j and Az_j for its location. For the sensor nodes, we used XSM motes, mainly because of their longer communication range. The XSM mote has the photo sensor embedded in its main board. We had to make minor adjustments to the plastic housing, in order to expose the photo sensor to the outside. The same mote code, written in nesC for TinyOS, was used for both the μSpotlight and Spotlight system implementations.
4.3 Event Detection Function D(t)
The Event Detection Function aims to detect the beginning of an event and record the time when the event was observed. We implemented a very simple detection function 
based on the observed maximum value. An event i will be time stamped with time ti if the reading from the photo sensor, dti, fulfills the condition: dti > dmax + Δ, where dmax is the maximum value reported by the photo sensor before ti and Δ is a constant which ensures that the first large detection gives the timestamp of the event (i.e. small variations around the first large signal are not considered). Hence Δ guarantees that only sharp changes in the detected value generate an observed event.
4.4 Localization Function L(T)
The Localization Function is implemented in the Java GUI. It matches the timestamps created by the Event Distribution Function with those reported by the sensor nodes.
The Localization Function for the Point Scan EDF has as input a time sequence Ti = {t1}, as reported by node i. The function performs a simple search for the event with a timestamp closest to t1. If t1 is constrained by tn' ≤ t1 ≤ tn+1', where en and en+1 are two consecutive events, then the obtained location for node i is the position of the one of these two events whose timestamp is closest to t1.
The case for the Line Scan is treated similarly. The input to the Localization Function is the time sequence Ti = {t1, t2}, as reported by node i. If the reported timestamps are constrained by two pairs of consecutive events, where en and en+1 are two consecutive events on the horizontal scan and em and em+1 are two consecutive events on the vertical scan, then the inferred location for node i is the intersection of the closest horizontal-scan and vertical-scan event lines.
The Localization Function for the Area Cover EDF has as input a timestamp set Ti = {ti1, ti2, ..., tin} of the n events detected by node i. We recall the notation for the set of m timestamps of events generated by the Spotlight device, T' = {t1', t2', ..., tm'}. A code di = di1di2...dim is then constructed for each node i, such that dij = 1 if tj' ∈ Ti and dij = 0 if tj' ∉ Ti. The function then performs a search for an event with an identical code. If di = d_en, where en is an event with code d_en, then the inferred location for node i is the center of gravity of the section covered by en.
4.5 Time Synchronization
The time synchronization in the Spotlight system consists of two parts:
- Synchronization between sensor nodes: This is achieved through the Flooding Time Synchronization Protocol [18]. In this protocol, synchronized nodes (the root node is the only synchronized node at the beginning) send time synchronization messages to unsynchronized nodes. The sender puts the time stamp into the synchronization message right before the bytes containing the time stamp are transmitted. Once a receiver gets the message, it follows the sender's time and performs the necessary calculations to compensate for the clock drift.
- Synchronization between the sensor nodes and the Spotlight device: We implemented this part through a two-way handshaking between the Spotlight device and one node, used as the base station. The sensor node is attached to the Spotlight device through a serial interface.
Figure 12. Two-way synchronization
As shown in Figure 12, let's assume that the Spotlight device sends a synchronization message (SYNC) at local time T1, the sensor node receives it at its local time T2 and acknowledges it at local time T3 (both T2 and T3 are sent back through the ACK). After the Spotlight device receives the ACK, at its local time T4, the time synchronization can be achieved by estimating the clock offset as: offset = ((T2 − T1) + (T3 − T4))/2 (Equation 11). We note that Equation 11 assumes that the one-trip delays are the same in both directions. In practice this does not hold well enough. To improve the performance, we separate the handshaking process from the timestamp exchanges. The handshaking is done fast, through a 2-byte exchange between the Spotlight device and the sensor node (the timestamps are still recorded, but not sent). After this fast handshaking, the recorded time stamps are exchanged. The result indicates that this approach can significantly improve the accuracy of time synchronization.
5. PERFORMANCE EVALUATION
In this section we present the performance evaluation of the Spotlight systems when using the three event distribution 
functions, i.e. Point Scan, Line Scan and Area Cover, described in Section 3.
For the μSpotlight system we used 10 Mica2 motes. The sensor nodes were attached to a vertically positioned Veltex board. By projecting the light onto the sensor nodes, we were able to generate well-controlled Point, Line and Area events. The Spotlight device was able to generate events, i.e. project light patterns, covering an area of approximate size 180×140 cm². The screen resolution for the projector was 1024×768, and the movement of the Point Scan and Line Scan techniques was done through increments (in the appropriate direction) of 10 pixels between events. Each experimental point was obtained from 10 successive runs of the localization procedure. Each set of 10 runs was preceded by a calibration phase, aimed at estimating the total delays (between the Spotlight device and each sensor node) in detecting an event. During the calibration, we created an event covering the entire sensor field (illuminated the entire area). The timestamp reported by each sensor node, in conjunction with the timestamp created by the Spotlight device, was used to obtain the time offset for each sensor node. More sophisticated calibration procedures have been reported previously [35]. In addition to the time offset, we added a manually configurable parameter, called bias. It was used to best estimate the center of an event.
Figure 13. Deployment site for the Spotlight system
For the Spotlight system evaluation, we deployed 10 XSM motes in a football field. The site is shown in Figure 13 (laser beams are depicted with red arrows and sensor nodes with white dots). Two sets of experiments were run, with the Spotlight device positioned at 46m and at 170m from the sensor field. The sensor nodes were aligned and the Spotlight device executed a Point Scan. The localization system computed the coordinates of the sensor nodes, and the Spotlight device was oriented, through a GoTo command sent to 
the telescope mount, towards the computed location. In the initial stages of the experiments, we manually measured the localization error.
For our experimental evaluation, the metrics of interest were as follows:
- Localization error, defined as the distance between the real location and the one obtained from the Spotlight system.
- Localization duration, defined as the time span between the first and last event.
- Localization range, defined as the maximum distance between the Spotlight device and the sensor nodes.
- A Localization Cost function, Cost: {{localization accuracy}, {localization duration}} → [0, 1], which quantifies the trade-off between the accuracy in localization and the localization duration. The objective is to minimize the Localization Cost function. By denoting with ei the localization error for the ith scenario, with di the localization duration for the ith scenario, with max(e) the maximum localization error, with max(d) the maximum localization duration, and with α the importance factor, the Localization Cost function is formally defined as: Cost_i = α · ei/max(e) + (1 − α) · di/max(d).
- Localization Bias. This metric is used to investigate the effectiveness of the calibration procedure. If, for example, all computed locations have a bias in the west direction, a calibration factor can be used to compensate for the difference.
The parameters that we varied during the performance evaluation of our system were: the type of scanning (Point, Line and Area), the size of the event, the duration of the event (for Area Cover), the scanning speed, the power of the laser, and the distance between the Spotlight device and the sensor field, to estimate the range of the system.
5.1 Point Scan - μSpotlight system
In this experiment, we investigated how the size of the event and the scanning speed affect the localization error. Figure 14 shows the mean localization errors with their standard deviations. It can be observed that, while the scanning speed, varying between 35cm/sec and 
87cm/sec, has a minor influence on the localization accuracy, the size of the event has a dramatic effect.
Figure 14. Localization Error vs. Event Size for the Point Scan EDF
The obtained localization error varied from as little as 2cm to over 11cm for the largest event. This dependence can be explained by our Event Detection algorithm: the first detection above a threshold gave the timestamp for the event. The duration of the localization scheme is shown in Figure 15. The dependency of the localization duration on the size of the event and the scanning speed is natural. A bigger event allows a reduction in the total duration of up to 70%. The localization duration is inversely proportional to the scanning speed, as expected, and depicted in Figure 15.
Figure 15. Localization Duration vs. Event Size for the Point Scan EDF
An interesting trade-off is between the localization accuracy (usually the most important factor) and the localization time (important in environments where stealthiness is paramount). Figure 16 shows the Localization Cost function for α = 0.5 (accuracy and duration are equally important). As shown in Figure 16, it can be observed that an event size of approximately 10-15cm (depending on the scanning speed) minimizes our Cost function. For α = 1, the same graph would be a monotonically increasing function, while for α = 0, it would be a monotonically decreasing function.
Figure 16. Localization Cost vs. Event Size for the Point Scan EDF
5.2 Line Scan - μSpotlight system
In a similar manner to the Point Scan EDF, for the Line Scan EDF we were interested in the dependency of the localization error and duration on the size of the event and the scanning speed. We represent in Figure 17 the localization error for different event sizes. It is interesting to observe the dependency (concave shape) of the localization error vs. 
the event size.\nMoreover, a natural question is why the same dependency was not observed in the case of the Point Scan EDF.\nFigure 17.\nLocalization Error vs. Event Size for the Line Scan EDF\nThe explanation for this concave dependency is the existence of a bias in location estimation.\nAs a reminder, a bias factor was introduced in order to best estimate the central point of events that have a large size.\nFigure 17 shows that the bias factor was optimal for an event size of approximately 7cm.\nFor events smaller and larger than this, the bias factor was too large and too small, respectively.\nThus, it introduced biased errors in the position estimation.\nThe reason why we did not observe the same dependency in the case of the Point Scan EDF was that we did not experiment with event sizes below 7cm, due to the long time it would have taken to scan the entire field with events as small as 1.7 cm.\nThe results for the localization duration as a function of the size of the event are shown in Figure 18.\nAs shown, the localization duration is inversely proportional to the scanning speed.\nThe size of the event has a smaller influence on the localization duration.\nThe average localization duration is about 10sec, much shorter than the duration obtained in the Point Scan experiment.\nThe Localization Cost function dependency on the event size and scanning speed, for \u03b1 = 0.5, is shown in Figure 19.\nThe dependency on the scanning speed is very small (the Cost Function achieves a minimum in the same 4-6cm range).\nIt is interesting to note that this 4-6cm optimal event size is smaller than the one observed in the case of the Point Scan EDF.\nThe explanation for this is that the smaller localization duration observed in the Line Scan EDF allowed a shift (towards smaller event sizes) of the minimum of the total Localization Cost function.\nFigure 18.\nLocalization Duration vs. Event Size for the Line Scan EDF\nFigure 19.\nCost Function vs. 
Event Size for the Line Scan EDF\nDuring our experiments with the Line Scan EDF, we observed evidence of a bias in location estimation.\nThe estimated locations for all sensor nodes exhibited different biases for different event sizes.\nFor example, for an event size of 17.5 cm, the estimated location for sensor nodes was to the upper-left side of the actual location.\nThis was equivalent to an \"early\" detection, since our scanning was done from left to right and from top to bottom.\nThe scanning speed did not influence the bias.\nIn order to better understand the observed phenomenon, we analyzed our data.\nFigure 20 shows the bias in the horizontal direction for different event sizes (the vertical bias was almost identical, and we omit it due to space constraints).\nFrom Figure 20, one can observe that the smallest observed bias, and hence the most accurate positioning, was for an event of size 7cm.\nThese results are consistent with the observed localization error, shown in Figure 17.\nWe also adjusted the measured localization error (shown in Figure 17) for the observed bias (shown in Figure 20).\nThe results for an ideal case of the Spotlight localization system with the Line Scan EDF are shown in Figure 21.\nThe errors are remarkably small, varying between 0.1 cm and 0.8 cm, with a general trend of higher localization errors for larger event sizes.\nFigure 20.\nPosition Estimation Bias for the Line Scan EDF\nFigure 21.\nPosition Estimation w\/o Bias (ideal), for the Line Scan EDF\n5.3 Area Cover - \u03bcSpotlight system\nIn this experiment, we investigated how the number of bits used to quantize the entire sensor field affected the localization accuracy.\nIn our first experiment we did not use error correcting codes.\nThe results are shown in Figure 22.\nFigure 22.\nLocalization Error vs. 
Event Size for the Area Cover EDF\nOne can observe a remarkable accuracy, with localization errors on the order of 0.3-0.6 cm.\nWhat is important to observe is the variance in the localization error.\nIn the scenario where 12 bits were used, while the average error was very small, there were a couple of cases where an incorrect event detection generated a larger-than-expected error.\nAn example of how this error can occur was described in Section 3.4.\nThe experimental results, presented in Figure 22, emphasize the need for error correction of the bit patterns observed and reported by the sensor nodes.\nThe localization duration results are shown in Figure 23.\nIt can be observed that the duration is directly proportional to the number of bits used, with total durations ranging from 3sec, for the least accurate method, to 6-7sec for the most accurate.\nThe duration of an event had a small influence on the total localization time when considering the same scenario (same number of bits for the code).\nThe Cost Function dependency on the number of bits in the code, for \u03b1 = 0.5, is shown in Figure 24.\nGenerally, since the localization duration for the Area Scan can be extremely small, a higher accuracy in the localization is desired.\nWhile the Cost function achieves a minimum when 10 bits are used, we attribute the slight increase observed when 12 bits were used to the two 12-bit scenarios where larger-than-expected errors, namely 6-7mm, were observed (as shown in Figure 22).\nFigure 23.\nLocalization Duration vs. Event Size for the Area Cover EDF\nFigure 24.\nCost Function vs. 
Event Size for the Area Cover EDF\nFigure 25.\nLocalization Error w\/ and w\/o Error Correction\nThe two problematic scenarios (shown in Figure 22, where for 12-bit codes we observed errors larger than the event size, due to errors in detection) were further explored by using error correction codes.\nAs described in Section 3.3, we implemented an extended Golay (24, 12) error correction mechanism in our location estimation algorithm.\nThe experimental results are depicted in Figure 25, and show a consistent accuracy.\nThe scenario without error correction codes is simply the same 12-bit code scenario shown in Figure 22.\nWe only investigated the 12-bit scenario, due to its match with the 12-bit data required by the Golay encoding scheme (the extended Golay code producing 24-bit codewords).\n5.4 Point Scan - Spotlight system\nIn this section we describe the experiments performed at a football stadium, using our Spotlight system.\nThe hardware that we had available allowed us to evaluate the Point Scan technique of the Spotlight system.\nIn our evaluation, we were interested in the performance of the system at different ranges.\nFigures 26 and 27 show the localization error versus the event size at two different ranges: 46m and 170m.\nFigure 26 shows a remarkable accuracy in localization.\nThe errors are in the centimeter range.\nOur initial, manual measurements of the localization error were often difficult to make, since the spot of the laser was almost perfectly covering the XSM mote.\nWe were able to achieve localization errors of a few centimeters, which previously only range-based localization schemes were able to achieve [25].\nThe observed dependency on the size of the event is similar to the one observed in the \u03bcSpotlight system evaluation, and shown in Figure 14.\nThis proved that the \u03bcSpotlight system is a viable alternative for investigating complex EDFs, without incurring the costs of the necessary hardware.\nFigure 26.\nLocalization Error vs. 
Event Size for Spotlight system at 46m\nIn the experiments performed over a much longer distance between the Spotlight device and the sensor network, the average localization error remains very small.\nLocalization errors of 5-10cm were measured, as Figure 27 shows.\nWe were simply amazed by the accuracy that the system is capable of, considering that the Spotlight system operated over the length of a football stadium.\nThroughout our experimentation with the Spotlight system, we have observed localization errors that were simply offsets of the real locations.\nSince the same phenomenon was observed when experimenting with the \u03bcSpotlight system, we believe that with auto-calibration, the localization error can be further reduced.\nFigure 27.\nLocalization Error vs. Event Size for Spotlight system at 170m\nThe time required for localization using the Spotlight system with a Point Scan EDF is given by: t = (L * l) \/ (s * Es), where L and l are the dimensions of the sensor network field, s is the scanning speed, and Es is the size of the event.\nFigure 28 shows the time for localizing a sensor network deployed in an area the size of a football field using the Spotlight system.\nHere we ignore the message propagation time from the sensor nodes to the Spotlight device.\nFrom Figure 28 it can be observed that the very small localization errors are prohibitively expensive in the case of the Point Scan.\nWhen localization errors of up to 1m are tolerable, the localization duration can be as low as 4 minutes.\nLocalization durations of 5-10 minutes and localization errors of 1m are currently the state of the art in the realm of range-free localization schemes.\nAnd these results are achieved by using the Point Scan scheme, which, as shown in Table 1, requires the highest Localization Time.\nFigure 28.\nLocalization Time vs. 
Event Size for Spotlight system\nOne important characteristic of the Spotlight system is its range.\nThe two most important factors are the sensitivity of the photosensor and the power of the Spotlight source.\nWe were interested in measuring the range of our Spotlight system, considering our capabilities (an MTS310 sensor board and inexpensive, $12 - $85, diode lasers).\nAs a result, we measured the intensity of the laser beam, having the same focus, at different distances.\nThe results are shown in Figure 29.\nFigure 29.\nLocalization Range for the Spotlight system\nFrom Figure 29, it can be observed that only a minor decrease in the intensity occurs, due to absorption and possibly our imperfect focusing of the laser beam.\nA linear fit of the experimental data shows that distances of up to 6500m can be achieved.\nWhile we do not expect atmospheric conditions over large distances to be similar to our 200m evaluation, there is strong evidence that distances (i.e. altitudes) of 1000-2000m can easily be achieved.\nThe angle between the laser beam and the vertical should be minimized (less than 45\u00b0), as this reduces the difference between the beam cross-section (event size) and the actual projection of the beam on the ground.\nIn a similar manner, we were interested in finding out the maximum size of an event that can be generated by a COTS laser and that is detectable by the existing photosensor.\nFor this, we varied the divergence of the laser beam and measured the light intensity, as given by the ADC count.\nThe results are shown in Figure 30.\nIt can be observed that for the less powerful laser, an event size of 1.5 m is the limit.\nFor the more powerful laser, the event size can be as high as 4m.\nThrough our extensive performance evaluation, we have shown that the Spotlight system is a feasible, highly accurate, low-cost solution for the localization of wireless sensor networks.\nFrom our experience with sources of laser radiation, we believe that for 
small and medium size sensor network deployments, in areas of less than 20,000 m2, the Area Cover scheme is a viable solution.\nFor large size sensor network deployments, the Line Scan, or an incremental use of the Area Cover, are very good options.\nFigure 30.\nDetectable Event Sizes that can be generated by COTS lasers\n6.\nOPTIMIZATIONS\/LESSONS LEARNED\n6.1 Distributed Spotlight System\nThe proposed design and implementation of the Spotlight system can be considered centralized, due to the gathering of the sensor data and the execution of the Localization Function L (t) by the Spotlight device.\nWe show that this design can easily be transformed into a distributed one by offering two solutions.\nOne idea is to disseminate in the network information about the path of events generated by the EDF (similar to an equation describing a path), and let the sensor nodes execute the Localization Function.\nFor example, in the Line Scan scenario, if the starting and ending points for the horizontal and vertical scans, and the times they were reached, are propagated in the network, then any sensor in the network can obtain its location (assuming a constant scanning speed).\nA second solution is to use anchor nodes which know their positions.\nIn the case of the Line Scan, if three anchors are present, after detecting the presence of the two events, the anchors flood the network with their locations and times of detection.\nUsing the same simple formulas as in the previous scheme, all sensor nodes can infer their positions.\n6.2 Localization Overhead Reduction\nAnother requirement imposed by the Spotlight system design is the use of a time synchronization protocol between the Spotlight device and the sensor network.\nRelaxing this requirement and imposing only a time synchronization protocol among sensor nodes is a very desirable objective.\nThe idea is to use the knowledge that the Spotlight device has about the speed with which the scanning of the sensor field takes 
place.\nIf the scanning speed is constant (let's call it s), then the time difference (let's call it \u0394t) between the event detections of two sensor nodes is, in fact, an accurate measure of the range between them: d = s * \u0394t.\nHence, the Spotlight system can be used for accurate ranging of the distance between any pair of sensor nodes.\nAn important observation is that this ranging technique does not suffer from limitations of others: small range and directionality for ultrasound, or irregularity, fading and multipath for Received Signal Strength Indicator (RSSI).\nAfter the ranges between nodes have been determined (either in a centralized or distributed manner) graph embedding algorithms can be used for a realization of a rigid graph, describing the sensor network topology.\n6.3 Dynamic Event Distribution Function E (t)\nAnother system optimization is for environments where the sensor node density is not uniform.\nOne disadvantage of the Line Scan technique, when compared to the Area Cover, is the localization time.\nAn idea is to use two scans: one which uses a large event size (hence larger localization errors), followed by a second scan in which the event size changes dynamically.\nThe first scan is used for identifying the areas with a higher density of sensor nodes.\nThe second scan uses a larger event in areas where the sensor node density is low and a smaller event in areas with a higher sensor node density.\nA dynamic EDF can also be used when it is very difficult to meet the power requirements for the Spotlight device (imposed by the use of the Area Cover scheme in a very large area).\nIn this scenario, a hybrid scheme can be used: the first scan (Point Scan) is performed quickly, with a very large event size, and it is meant to identify, roughly, the location of the sensor network.\nSubsequent Area Cover scans will be executed on smaller portions of the network, until the entire deployment area is localized.\n6.4 Stealthiness\nOur 
implementation of the Spotlight system used visible light for creating events.\nUsing the system during daylight or in a well-lit room poses challenges, due to the solar or fluorescent lamp radiation, which generates a strong background noise.\nThe alternative, which we used in our performance evaluations, was to use the system in a dark room (\u03bcSpotlight system) or during the night (Spotlight system).\nWhile using the Spotlight system during the night is a good solution for environments where stealthiness is not important (e.g. environmental sciences), for others (e.g. military applications) divulging the presence and location of a sensor field could seriously compromise the efficacy of the system.\nFigure 31.\nFluorescent Light Spectra (top), Spectral Response for CdSe cells (bottom)\nA solution to this problem, which we experimented with in the \u03bcSpotlight system, was to use an optical filter on top of the light sensor.\nThe spectral response of a CdSe photosensor spans almost the entire visible domain [37], with a peak at about 700nm (Figure 31-bottom).\nAs shown in Figure 31-top, the fluorescent light has no significant components above 700nm.\nHence, a simple red filter (Schott RG-630), which transmits all light with wavelengths approximately above 630nm, coupled with an Event Distribution Function that generates events with wavelengths above the same threshold, would allow the use of the system when a fluorescent light is present.\nA solution for the Spotlight system to be stealthy at night is to use a source of infra-red radiation (i.e. 
laser) emitting in the range [750, 1000] nm.\nFor a daylight use of the Spotlight system, the challenge is to overcome the strong background of the natural light.\nA solution we are considering is the use of a narrow-band optical filter, centered at the wavelength of the laser radiation.\nThe feasibility and the cost-effectiveness of this solution remain to be proven.\n6.5 Network Deployed in Unknown Terrain\nA further generalization is when the map of the terrain where the sensor network was deployed is unknown.\nWhile this is highly unlikely for many civil applications of wireless sensor network technologies, it is not difficult to imagine military applications where the sensor network is deployed in a hostile and unknown terrain.\nA solution to this problem is a system that uses two Spotlight devices, or equivalently, the use of the same device from two distinct positions, executing, from each of them, a complete localization procedure.\nIn this scheme, the position of the sensor node is uniquely determined by the intersection of the two location directions obtained by the system.\nThe relative localization (for each pair of Spotlight devices) will require an accurate knowledge of the 3 translation and 3 rigid-body rotation parameters for Spotlight's position and orientation (as mentioned in Section 3).\nThis generalization is also applicable to scenarios where, due to terrain variations, there is no single aerial point with a direct line of sight to all sensor nodes, e.g. 
hilly terrain.\nBy executing the localization procedure from different aerial points, the probability of establishing a line of sight with all the nodes increases.\nFor some military scenarios [1] [12], where open terrain is prevalent, the existence of a line of sight is not a limiting factor.\nThe Spotlight system cannot, however, be used in forests or indoor environments.\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we presented the design, implementation and evaluation of a localization system for wireless sensor networks, called Spotlight.\nOur localization solution does not require any additional hardware for the sensor nodes, other than what already exists.\nAll the complexity of the system is encapsulated into a single Spotlight device.\nOur localization system is reusable, i.e. the costs can be amortized through several deployments, and its performance is not affected by the number of sensor nodes in the network.\nOur experimental results, obtained from a real system deployed outdoors, show that the localization error is less than 20cm.\nThis error is currently the state of the art, even for range-based localization systems, and it is 75% smaller than the error obtained when using GPS devices or when the manual deployment of sensor nodes is a feasible option [31].\nAs future work, we would like to explore the self-calibration and self-tuning of the Spotlight system.\nThe accuracy of the system can be further improved if the distribution of the event, instead of a single timestamp, is reported.\nA generalization could be obtained by reformulating the problem as an angular estimation problem that provides the building blocks for more general localization techniques.","keyphrases":["local","wireless sensor network","sensor network","accuraci","perform","local error","rang-base local","rang-free scheme","transmiss","spotlight system","local techniqu","distribut","event distribut","laser"],"prmu":["P","P","P","P","P","P","M","U","U","R","M","U","M","U"]}
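The Point Scan timing model (t = (L * l) / (s * Es)) and the Localization Cost trade-off described in the Spotlight record above can be sketched in Python. The field dimensions, scanning speed and normalization constants below are illustrative assumptions, and the cost formula is one plausible normalized reading of the verbal definition (error and duration each scaled by their maxima, mixed by the importance factor α), not the paper's exact equation.

```python
def point_scan_time(L, l, s, Es):
    """Point Scan localization time: t = (L * l) / (s * Es),
    for an L x l field scanned at speed s with an event of size Es
    (consistent units, e.g. metres and metres/second)."""
    return (L * l) / (s * Es)

def localization_cost(e, d, e_max, d_max, alpha=0.5):
    """Plausible normalized Localization Cost for one scenario:
    a convex combination of the error e and duration d, each
    normalized by its maximum; alpha = 0.5 weighs accuracy and
    duration equally (assumed form, not quoted from the paper)."""
    return alpha * (e / e_max) + (1 - alpha) * (d / d_max)

# Illustrative numbers (assumed): a 100 m x 50 m field,
# 10 m/s scanning speed, 1 m event size.
t = point_scan_time(100.0, 50.0, 10.0, 1.0)  # 500.0 seconds
cost = localization_cost(e=0.5, d=t, e_max=1.0, d_max=600.0)
print(t, round(cost, 3))
```

With alpha = 1 the cost tracks error only, and with alpha = 0 duration only, matching the monotone trends the record reports for Figure 16.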
{"id":"C-33","title":"Rewards-Based Negotiation for Providing Context Information","abstract":"How to provide appropriate context information is a challenging problem in context-aware computing. Most existing approaches use a centralized selection mechanism to decide which context information is appropriate. In this paper, we propose a novel approach based on negotiation with rewards to solving this problem. Distributed context providers negotiate with each other to decide who can provide context and how they allocate the proceeds. In order to support our approach, we have designed a concrete negotiation model with rewards. We also evaluate our approach and show that it can indeed choose an appropriate context provider and allocate the proceeds fairly.","lvl-1":"Rewards-Based Negotiation for Providing Context Information Bing Shi State Key Laboratory for Novel Software Technology NanJing University NanJing, China shibing@ics.nju.edu.cn Xianping Tao State Key Laboratory for Novel Software Technology NanJing University NanJing, China txp@ics.nju.edu.cn Jian Lu State Key Laboratory for Novel Software Technology NanJing University NanJing, China lj@nju.edu.cn ABSTRACT How to provide appropriate context information is a challenging problem in context-aware computing.\nMost existing approaches use a centralized selection mechanism to decide which context information is appropriate.\nIn this paper, we propose a novel approach based on negotiation with rewards to solving this problem.\nDistributed context providers negotiate with each other to decide who can provide context and how they allocate the proceeds.\nIn order to support our approach, we have designed a concrete negotiation model with rewards.\nWe also evaluate our approach and show that it can indeed choose an appropriate context provider and allocate the proceeds fairly.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed applications - providing context information General Terms Context 
1.\nINTRODUCTION Context-awareness is a key concept in pervasive computing.\nContext informs both recognition and mapping by providing a structured, unified view of the world in which the system operates [1].\nContext-aware applications exploit context information, such as location, preferences of users and so on, to adapt their behaviors in response to changing requirements of users and pervasive environments.\nHowever, one specific kind of context can often be provided by different context providers (sensors or other data sources of context information) with different quality levels.\nFor example, in a smart home, thermometer A's measurement precision is 0.1 \u00b0C, and thermometer B's measurement precision is 0.5 \u00b0C. Thus A could provide more precise context information about temperature than B. Moreover, sometimes different context providers may provide conflicting context information.\nFor example, different sensors report that the same person is in different places at the same time.\nBecause context-aware applications utilize context information to adapt their behaviors, inappropriate context information may lead to inappropriate behavior.\nThus we should design a mechanism to provide appropriate context information for current context-aware applications.\nIn pervasive environments, context providers, considered as relatively independent entities, have their own interests.\nThey hope to get proceeds when they provide context information.\nHowever, most existing approaches consider context providers as entities without any personal interests, and use a centralized arbitrator provided by the middleware to decide who can provide appropriate context.\nThus the burden of the middleware is very heavy, and its decision may be unfair and harm some providers' interests.\nMoreover, when such an arbitrator breaks down, it will cause serious consequences for context-aware applications.\nIn this paper, we let distributed context providers themselves decide who 
provide context information.\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future, providers try to get the right to provide good context to enhance their reputation.\nIn order to get such right, context providers may agree to share some portion of the proceeds with their opponents.\nThus context providers negotiate with each other to reach agreement on the issues of who can provide context and how they allocate the proceeds.\nOur approach has some specific advantages: 1.\nWe do not need an arbitrator provided by the middleware of pervasive computing to decide who provides context.\nThus it will reduce the burden of the middleware.\n2.\nIt is more reasonable that distributed context providers decide who provides context, because it can avoid the serious consequences caused by a breakdown of a centralized arbitrator.\n3.\nIt can guarantee providers' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement on their concerned problems.\n4.\nThis approach can choose an appropriate provider automatically.\nIt does not need any intervention from applications or users.\nThe negotiation model we have designed to support our approach is also a novel model in the negotiation domain.\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of the next negotiation process (i.e. 
rewards).\nA negotiator may find the current offer and reward worth more than a counter-offer, which would delay the agreement, and therefore accept the current offer and reward.\nWithout the reward, it may find the current offer worth less than the counter-offer, and propose its counter-offer.\nIt would then cost more time to reach agreement.\nIt also expands the negotiation space considered in the present negotiation process, and therefore provides more possibilities to find a better agreement.\nThe remainder of this paper is organized as follows.\nSection 2 presents some assumptions.\nSection 3 describes our approach based on negotiation in detail, including utility functions, the negotiation protocol and context providers' strategies.\nSection 4 evaluates our approach.\nIn Section 5 we introduce some related work and conclude in Section 6.\n2.\nSOME ASSUMPTIONS Before introducing our approach, we would like to give some assumptions: 1.\nAll context providers are well-meaning and honest.\nDuring the negotiation process, they exchange information honestly.\nRewards confirmed in this negotiation process will be fulfilled in the next negotiation process.\n2.\nAll providers must guarantee the system's interests.\nThey should provide appropriate context information for current applications.\nAfter guaranteeing the system's interests, they can try to maximize their own personal interests.\nThis assumption is reasonable, because when an inappropriate context provider gets the right to provide bad context, as a punishment, its reputation will decrease, and the proceeds are also very small.\n3.\nAs context providers are independent, factors which influence their negotiation stance and behavior are private and not available to their opponents.\nTheir utility functions are also private.\n4.\nSince the negotiation takes place in pervasive environments, time is a critical factor.\nThe current application often hopes to get context information as quickly as possible, so the time cost to reach agreement should be as 
short as possible.\nContext providers often have a strict deadline by which the negotiation must be completed.\nAfter presenting these assumptions, we will propose our approach based on negotiation with rewards in the next section.\n3.\nOUR APPROACH We begin by introducing the concepts of reputation and Quality of Context (QoC) attributes.\nBoth will be used in our approach.\nThe reputation of an agent is a perception regarding its behavior norms, which is held by other agents, based on experiences and observation of its past actions [7].\nHere agent means context provider.\nEach provider's reputation indicates its historical ability to provide appropriate context information.\nQuality of Context (QoC) attributes characterize the quality of context information.\nWhen applications require context information, they should specify their QoC requirements, which express constraints on QoC attributes.\nContext providers can specify QoC attributes for the context information they deliver.\nAlthough we can decide who provides appropriate context according to QoC requirements and context providers' QoC information, applications' QoC requirements might not reflect the actual quality requirements.\nThus, in addition to QoC, the reputation information of context providers is another factor affecting the decision of who can provide context information.\nNegotiation is a process by which a joint decision is made by two or more parties.\nThe parties first verbalize contradictory demands and then move towards agreement by a process of concession making or search for new alternatives [2].\nIn pervasive environments, all available context providers negotiate with each other to decide who can provide context information.\nThis process will be repeated because a kind of context is needed more than once.\nNegotiation using persuasive arguments (such as threats, promises of future rewards, and appeals) allows negotiation parties to influence each other's preferences to reach better 
deals effectively and efficiently [9].\nThis persuasive negotiation is effective in repeated interactions because arguments can be constructed to directly impact future encounters.\nIn this paper, for simplicity, we let negotiation take place between two providers.\nWe extend Raiffa's basic model for bilateral negotiation [8], and allow negotiators to negotiate with each other by exchanging arguments in the form of promises of future rewards or requests for future rewards.\nRewards mean some extra proceeds in the next negotiation process.\nThey can influence the outcomes of current and future negotiations.\nIn our approach, as described by Figure 1, the current application requires the Context Manager to provide a specific type of context information satisfying QoC requirements.\nThe Context Manager finds that providers A and B can provide such kind of context with different quality levels.\nThen the manager tells A and B to negotiate to reach agreement on who can provide the context information and how they will allocate the proceeds.\nBoth providers get reputation information from the database Reputation of Context Providers and the QoC requirements, and then negotiate with each other according to our negotiation model.\nWhen the negotiation is completed, the chosen provider will provide the context information to the Context Manager, and then the Context Manager delivers such information to the application and also stores it in the Context Knowledge Base, where current and historical context information is stored.\nThe current application gives feedback information about the provided context, and then the Context Manager will update the chosen provider's reputation information according to the feedback information.\nThe Context Manager also provides the proceeds to providers according to the feedback information and the time cost of the negotiation.\nIn the following parts of this section, we describe our negotiation model in detail, including context providers' utility functions to evaluate offers and 
rewards, the negotiation protocol, and strategies to generate offers and rewards.\nFigure 1: Negotiate to provide appropriate context information.\n3.1 Utility function During the negotiation process, one provider proposes an offer and a reward to the other provider.\nAn offer is noted as o = (c, p): c indicates the chosen context provider and its domain is D_c (i.e. the two context providers participating in the negotiation); p means the proposer's portion of the proceeds, and its domain is D_p = [0,1].\nIts opponent's portion of the proceeds is 1 \u2212 p.\nThe reward ep's domain is D_ep = [-1,1], and |ep| means the extra portion of the proceeds the proposer promises to provide or requests in the next negotiation process.\nep < 0 means the proposer promises to provide a reward, ep > 0 means the proposer requests a reward, and ep = 0 means no reward.\nThe opponent evaluates the offer and reward to decide whether to accept them or propose a counter-offer and a reward.\nThus context providers should have utility functions to evaluate offers and rewards.\nTime is a critical factor, and only at times in the set T = {0, 1, 2, ... t_deadline} can context providers propose their offers.\nThe set O includes all available offers.\nContext provider A's utility function of the offer and reward at time t, U_A : O \u00d7 D_ep \u00d7 T \u2192 [\u22121, 1], is defined as: U_A(o, ep, t) = (w_1^A \u00b7 U_c^A(c) + w_2^A \u00b7 U_p^A(p) + w_3^A \u00b7 U_ep^A(ep)) \u00b7 \u03b4_A(t) (1) Similarly, the utility function of A's opponent (i.e. 
B) can be defined as:

$U_B(o, ep, t) = (w^B_1 \cdot U^B_c(c) + w^B_2 \cdot U^B_p(1 - p) + w^B_3 \cdot U^B_{ep}(-ep)) \cdot \delta_B(t)$

In (1), $w^A_1$, $w^A_2$ and $w^A_3$ are the weights given to $c$, $p$ and $ep$ respectively, with $w^A_1 + w^A_2 + w^A_3 = 1$. Usually a context provider pays the most attention to the system's interests and the least attention to the reward, so $w^A_1 > w^A_2 > w^A_3$. $U^A_c : D_c \to [-1, 1]$ is the utility function for the issue of who provides the context. It is determined by two factors: the distance between $c$'s QoC and the current application's QoC requirements, and $c$'s reputation. The two negotiators obtain $c$'s QoC information from $c$, and we use the approach proposed in [4] to calculate the distance between $c$'s QoC and the application's QoC requirements. Let the required context have $n$ QoC attributes, let the application's wishes for this context be $a = (a_1, a_2, \ldots, a_n)$ (where $a_i$ may take a special value expressing the application's indifference to the $i$-th QoC attribute), and let $c$'s QoC attributes be $cp = (cp_1, cp_2, \ldots, cp_n)$ (where $cp_i$ may take a special value expressing $c$'s inability to provide a quantitative value for the $i$-th attribute). Because numerical distance values of different properties are combined (e.g. location precision in metres with refresh rate in Hz), a standard scale for all dimensions is needed; the scaling factors for the QoC attributes are $s = (s_1, s_2, \ldots, s_n)$. In addition, different QoC attributes may carry different weights $w = (w_1, w_2, \ldots, w_n)$. Then $d = (d_1, d_2, \ldots, d_n)$ with

$d_i = (cp_i - a_i) \cdot s_i \cdot w_i$

where $cp_i - a_i = 0$ when $a_i$ expresses indifference, and $cp_i - a_i = o(a_i)$ when $cp_i$ is unavailable ($o(\cdot)$ determines the application's satisfaction or dissatisfaction when $c$ cannot provide an estimate of a QoC attribute, given the value wished for by the application). The distance can be the linear distance (1-norm), the Euclidean distance (2-norm), or the maximum distance (max-norm):

$\|d\|_1 = |d_1| + |d_2| + \cdots + |d_n|$
$\|d\|_2 = \sqrt{|d_1|^2 + |d_2|^2 + \cdots + |d_n|^2}$
$\|d\|_\infty = \max\{|d_1|, |d_2|, \ldots, |d_n|\}$

A detailed description of this calculation can be found in [4]. The reputation of $c$ can be obtained from the database Reputation of Context Providers. $U^A_c(c) : \mathbb{R} \times D_{rep} \to [-1, 1]$ can be defined as:

$U^A_c(c) = w^A_{c1} \cdot U^A_d(d) + w^A_{c2} \cdot U^A_{rep}(rep)$

where $w^A_{c1}$ and $w^A_{c2}$ are the weights given to the distance and the reputation respectively, with $w^A_{c1} + w^A_{c2} = 1$, and $D_{rep}$ is the domain of reputation values. $U^A_d : \mathbb{R} \to [0, 1]$ is a monotone-decreasing function and $U^A_{rep} : D_{rep} \to [-1, 1]$ is a monotone-increasing function. $U^A_p : D_p \to [0, 1]$, the utility of the portion of the proceeds A will receive, is also monotone-increasing. A's utility of the reward $ep$, $U^A_{ep} : D_{ep} \to [-1, 1]$, is monotone-increasing with $U^A_{ep}(0) = 0$. Finally, $\delta_A : T \to [0, 1]$ is the time discount function, which is monotone-decreasing: as the time $t$ spent on negotiation increases, $\delta_A(t)$ decreases and so does the utility. Both negotiators therefore want to reach agreement as quickly as possible to avoid losing utility.

3.2 Negotiation protocol
Once providers A and B have the QoC requirements and reputation information, they begin to negotiate. Each first sets its reserved utility (the lowest acceptable utility), which guards both the system's interests and its own. When a context provider finds that the utility of an offer and reward is lower than its reserved utility, it rejects the proposal and terminates the negotiation. The provider that starts the negotiation is chosen randomly. Assume A starts: it proposes an offer $o$ and reward $ep$ to B according to its strategy (see Section 3.3). When B receives the proposal, it evaluates it with its utility function. If the utility is lower than B's reserved utility, B terminates the negotiation.
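To make Eq. (1) concrete, the offer evaluation used throughout the protocol can be sketched in a few lines of Python. The component utilities (a Table 1-style $U_c$, linear $U_p$ and $U_{ep}$) and all concrete constants below are illustrative assumptions, not values fixed by the model.

```python
# Minimal sketch of the utility evaluation in Eq. (1). The component
# utilities (linear U_p, U_ep; Table 1-style U_c) and all constants are
# illustrative assumptions, not values prescribed by the model.

def u_c(provider, d, rep, w_c1=0.5, w_c2=0.5):
    """U_c for a candidate provider: monotone-decreasing in its QoC
    distance d, monotone-increasing in its reputation rep."""
    return w_c1 * (1 - d[provider] / 500.0) + w_c2 * rep[provider] / 1000.0

def utility_A(offer, ep, t, d, rep, w=(0.6, 0.3, 0.1), discount=0.9):
    """U_A(o, ep, t) = (w1*U_c(c) + w2*U_p(p) + w3*U_ep(ep)) * delta_A(t),
    where offer = (c, p) and delta_A(t) = discount**t."""
    c, p = offer
    u = w[0] * u_c(c, d, rep) + w[1] * (0.9 * p) + w[2] * (0.9 * ep)
    return u * discount ** t

d = {"A": 100.0, "B": 300.0}    # hypothetical QoC distances
rep = {"A": 400.0, "B": 100.0}  # hypothetical reputation values
early = utility_A(("A", 0.6), 0.0, 1, d, rep)
late = utility_A(("A", 0.6), 0.0, 10, d, rep)
```

Because $\delta_A$ is monotone-decreasing, the same offer is worth less the later it is accepted (`late < early` here), which is what pushes both providers toward early agreement.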
Otherwise, if $U_B(o, ep, t) \ge U_B(o', ep', t + 1)$, i.e. the utility of the offer $o$ and reward $ep$ proposed by A at time $t$ is at least the utility of the offer $o'$ and reward $ep'$ that B would propose to A at time $t + 1$, B accepts the offer and reward, and the negotiation is completed. However, if $U_B(o, ep, t) < U_B(o', ep', t + 1)$, B rejects A's proposal and proposes its counter-offer and reward to A. When A receives B's counter-offer and reward, A evaluates them with its own utility function, compares that utility with the utility of the offer and reward it would propose to B at time $t + 2$, and decides whether to accept them or to make its own counter-offer and reward. This process continues, with the context providers conceding in each round in order to reach agreement. The negotiation finishes successfully when agreement is reached, or is terminated forcibly when the deadline passes or a proposal's utility falls below a reserved utility. When the negotiation is forced to terminate, the Context Manager asks A and B to calculate $U^A_c(A)$, $U^A_c(B)$, $U^B_c(A)$ and $U^B_c(B)$. If $U^A_c(A) + U^B_c(A) > U^A_c(B) + U^B_c(B)$, the Context Manager lets A provide the context; if $U^A_c(A) + U^B_c(A) < U^A_c(B) + U^B_c(B)$, B gets the right to provide the context; and if the two sums are equal, the Context Manager selects one of A and B at random. In addition, the Context Manager allocates the proceeds between the two providers. Although a provider can be selected in this way when negotiation is terminated forcibly, the resulting allocation of the proceeds may be unfair; moreover, the more time the negotiators spend on negotiation, the smaller the proceeds they receive. Negotiators therefore try to reach agreement as soon as possible to avoid unnecessary loss. When the negotiation is finished, the chosen provider supplies the context information to the Context Manager, which delivers it to the current application. According to the application's feedback information about this context, the Context Manager updates the provider's reputation stored in Reputation of Context Providers.
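The alternating-offers protocol above can be sketched as follows. Offers are kept in a canonical form that stores A's portion of the proceeds; the utility functions and offer generators are placeholder assumptions, and the Context Manager's forced-termination tie-breaking is omitted for brevity.

```python
# Sketch of the alternating-offers protocol of Section 3.2. Offers use a
# canonical form {"c": provider, "pA": A's portion}. The utilities and
# offer generators below are placeholder assumptions; forced termination
# by the Context Manager is omitted.

def negotiate(utils, generate, reserved, deadline):
    proposer, responder = "A", "B"   # the paper picks the starter at random
    for t in range(deadline):
        offer, ep = generate[proposer](t)
        u = utils[responder](offer, ep, t)
        if u < reserved[responder]:
            return None, t           # below reserved utility: terminate
        planned_offer, planned_ep = generate[responder](t + 1)
        if u >= utils[responder](planned_offer, planned_ep, t + 1):
            return (offer, ep), t    # current proposal beats own counter-offer
        proposer, responder = responder, proposer
    return None, deadline            # deadline reached: forced termination

# Illustrative utilities/strategies: each side values its own portion,
# discounted over time, and concedes round by round.
utils = {
    "A": lambda o, ep, t: o["pA"] * 0.9 ** t,
    "B": lambda o, ep, t: (1 - o["pA"]) * 0.9 ** t,
}
generate = {
    "A": lambda t: ({"c": "A", "pA": max(0.2, 0.6 - 0.1 * t)}, 0.0),
    "B": lambda t: ({"c": "B", "pA": min(0.8, 0.4 + 0.05 * t)}, 0.0),
}
agreement, rounds = negotiate(utils, generate, {"A": 0.05, "B": 0.05}, 100)
```

With these toy strategies, A's opening offer is rejected and B's first counter-offer is accepted, because its utility to A already exceeds what A's own (discounted) next offer would be worth.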
The provider's reputation may be enhanced or decreased. In addition, according to the feedback and the negotiation time, the Context Manager gives proceeds to the provider, which then shares them with its opponent according to the negotiation outcome and the reward confirmed in the previous negotiation process. For example, suppose that in the previous negotiation process A promised a reward $ep$ ($0 \le ep < 1$) to B, and that A's portion of the proceeds in the current negotiation is $p$. Then A's actual portion of the proceeds is $p \cdot (1 - ep)$, and its opponent B's portion is $1 - p + p \cdot ep$.

3.3 Negotiation strategy
A context provider might blindly pursue the right to provide context information in order to enhance its reputation. But if it then provides bad context information, its reputation is decreased and its proceeds are small. A context provider should therefore act according to a strategy. The aim of a provider's negotiation strategy is to determine the course of action that results in a negotiation outcome maximizing its utility function, i.e. how to generate an offer and a reward. In our negotiation model, a context provider generates its offer and reward from its previous offer and reward and the last ones sent by its opponent. At the beginning of the negotiation, context providers initialize their offers and rewards according to their beliefs and their reserved utilities. If context provider A believes it can provide good context and wants to enhance its reputation, it will propose that A provide the context information, share some proceeds with its opponent B, and even promise a reward. However, if A believes it may provide bad context, A will propose that its opponent B provide the
context, and require B to share some of the proceeds and provide a reward. During the negotiation process, assume that at time $t$ A proposes offer $o_t$ and reward $ep_t$ to B, and that at time $t + 1$ B proposes counter-offer $o_{t+1}$ and reward $ep_{t+1}$ to A. Then at time $t + 2$, provided the utility of B's proposal exceeds A's reserved utility, A responds. We first calculate the expected utility to be conceded at time $t + 2$, denoted $Cu$:

$Cu = (U_A(o_t, ep_t, t) - U_A(o_{t+1}, ep_{t+1}, t + 1)) \cdot c_A(t + 2)$

(assuming $U_A(o_t, ep_t, t) > U_A(o_{t+1}, ep_{t+1}, t + 1)$; otherwise A simply accepts B's proposal), where $c_A : T \to [0, 1]$ is a monotone-increasing function giving A's utility concession rate, for example $c_A(t) = (t / t_{deadline})^{1/\beta}$ with $0 < \beta < 1$. A concedes a little at the beginning and concedes significantly towards the deadline. A then generates its offer $o_{t+2} = (c_{t+2}, p_{t+2})$ and reward $ep_{t+2}$ at time $t + 2$. The expected utility of A at time $t + 2$ is:

$U_A(o_{t+2}, ep_{t+2}, t + 2) = U_A(o_t, ep_t, t + 2) - Cu$

If $U_A(o_{t+2}, ep_{t+2}, t + 2) \le U_A(o_{t+1}, ep_{t+1}, t + 1)$, A accepts B's proposal (i.e. $o_{t+1}$ and $ep_{t+1}$); otherwise A proposes its counter-offer and reward based on $Cu$. We assume that $Cu$ is distributed evenly over $c$, $p$ and $ep$ (i.e. the utility conceded on each of $c$, $p$ and $ep$ is $\frac{1}{3}Cu$). If

$\left|U^A_c(c_t) - \left(U^A_c(c_t) - \frac{Cu}{3\,\delta_A(t+2)}\right)\right| \le \left|U^A_c(c_{t+1}) - \left(U^A_c(c_t) - \frac{Cu}{3\,\delta_A(t+2)}\right)\right|$

i.e. the expected utility of $c$ at time $t + 2$, namely $U^A_c(c_t) - \frac{Cu}{3\,\delta_A(t+2)}$, is closer to the utility of A's proposal $c_t$ at time $t$, then $c_{t+2} = c_t$; otherwise it is closer to B's proposal $c_{t+1}$ and $c_{t+2} = c_{t+1}$. When $c_{t+2} = c_t$, the utility actually conceded on $c$ is 0 and the total concession falls on $p$ and $ep$; dividing it evenly, we obtain the conceded utility of $p$ and $ep$ respectively:

$p_{t+2} = (U^A_p)^{-1}\!\left(U^A_p(p_t) - \frac{Cu}{2\,\delta_A(t+2)}\right)$
$ep_{t+2} = (U^A_{ep})^{-1}\!\left(U^A_{ep}(ep_t) - \frac{Cu}{2\,\delta_A(t+2)}\right)$

When $c_{t+2} = c_{t+1}$, the utility actually conceded on $c$ is $|U^A_c(c_{t+2}) - U^A_c(c_t)|$ and the total concession on $p$ and $ep$ is $\frac{Cu}{\delta_A(t+2)} - |U^A_c(c_{t+2}) - U^A_c(c_t)|$, so

$p_{t+2} = (U^A_p)^{-1}\!\left(U^A_p(p_t) - \frac{1}{2}\left(\frac{Cu}{\delta_A(t+2)} - |U^A_c(c_{t+2}) - U^A_c(c_t)|\right)\right)$
$ep_{t+2} = (U^A_{ep})^{-1}\!\left(U^A_{ep}(ep_t) - \frac{1}{2}\left(\frac{Cu}{\delta_A(t+2)} - |U^A_c(c_{t+2}) - U^A_c(c_t)|\right)\right)$

We have now generated the offer and reward that A proposes at time $t + 2$; B generates its offers and rewards in the same way.

Table 1: Utility functions and weights of c, p and ep for each provider

  Provider A:  $U_c = 0.5\,(1 - \frac{d_A}{500}) + 0.5\,\frac{rep_A}{1000}$, $w_1 = 0.6$;   $U_p = 0.9p$, $w_2 = 0.3$;   $U_{ep} = 0.9ep$, $w_3 = 0.1$
  Provider B:  $U_c = 0.52\,(1 - \frac{d_B}{500}) + 0.48\,\frac{rep_B}{1000}$, $w_1 = 0.5$;   $U_p = 0.9p$, $w_2 = 0.45$;   $U_{ep} = 0.8ep$, $w_3 = 0.05$

4. EVALUATION
In this section we evaluate the effectiveness of our approach through simulated experiments in which context providers A and B negotiate to reach agreement. The providers get the QoC requirements and calculate the distance between those requirements and their own QoC. For simplicity, we assume in our experiments that the distances have already been calculated; $d_A$ denotes the distance between the QoC requirements and A's QoC, and $d_B$ the distance between the QoC requirements and B's QoC. The domain of $d_A$ and $d_B$ is $[0, 500]$.
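The concession step of Section 3.3 can be sketched for the simple case $c_{t+2} = c_t$, where the whole remaining concession is split between $p$ and $ep$. Linear $U_p(p) = 0.9p$ and $U_{ep}(ep) = 0.9ep$ (as for provider A in Table 1) are assumed so that the inverse utilities are plain divisions, and the concession-rate function follows the example form $c_A(t) = (t/t_{deadline})^{1/\beta}$.

```python
# Sketch of A's concession step (Section 3.3) for the case c_{t+2} = c_t:
# Cu is computed from the last two proposals, spread evenly over p and ep,
# and mapped back through the inverse component utilities. Linear
# U_p(p) = 0.9*p and U_ep(ep) = 0.9*ep (Table 1, provider A) are assumed,
# so the inverses are divisions by 0.9. All constants are illustrative.

def concession_rate(t, deadline=100, beta=0.8):
    """c_A(t) = (t / t_deadline)**(1/beta): concede little early,
    heavily near the deadline."""
    return (t / deadline) ** (1.0 / beta)

def next_offer_same_c(p_t, ep_t, u_mine, u_opp, t, delta=0.9):
    """Return (p_{t+2}, ep_{t+2}) when the chosen provider c is kept."""
    cu = (u_mine - u_opp) * concession_rate(t + 2)  # conceded utility Cu
    share = 0.5 * cu / delta ** (t + 2)             # per-issue, undiscounted
    p_next = (0.9 * p_t - share) / 0.9              # (U_p)^{-1} applied
    ep_next = (0.9 * ep_t - share) / 0.9            # (U_ep)^{-1} applied
    return p_next, ep_next

p2, ep2 = next_offer_same_c(p_t=0.6, ep_t=0.0, u_mine=0.52, u_opp=0.43, t=0)
```

Early in the negotiation the concession rate is tiny, so the new offer barely moves; the same code called with $t$ near the deadline concedes most of the utility gap in one step.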
We assume that reputation values are real numbers with domain $[-1000, 1000]$; $rep_A$ denotes A's reputation value and $rep_B$ denotes B's. We assume that both providers pay the most attention to the system's interests and the least attention to the reward, so $w_1 > w_2 > w_3$, and that the weight of $U_d$ approximates the weight of $U_{rep}$. A's and B's utility functions and weights for $c$, $p$ and $ep$ are defined in Table 1. We set the deadline $t_{deadline} = 100$ and define the time discount functions $\delta(t)$ and concession rate functions $c(t)$ of A and B as follows:

$\delta_A(t) = 0.9^t$,  $\delta_B(t) = 0.88^t$,  $c_A(t) = (t / t_{deadline})^{1/0.8}$,  $c_B(t) = (t / t_{deadline})^{1/0.6}$

Given different values of $d_A$, $d_B$, $rep_A$ and $rep_B$, A and B negotiate to reach agreement; the provider that starts the negotiation is chosen at random. We expect that when $d_A \ll d_B$ and $rep_A \gg rep_B$, A will win the right to provide context and receive the major portion of the proceeds; that when $\Delta d = d_A - d_B$ lies in a small range (e.g. $[-50, 50]$) and $\Delta rep = rep_A - rep_B$ lies in a small range (e.g. $[-50, 50]$), A and B will get approximately equal opportunities to provide context and will split the proceeds evenly; and that when $(d_A - d_B)/500$ approximates $(rep_A - rep_B)/1000$ (i.e. the two providers' abilities to provide the context information are approximately equal), A and B will likewise get equal opportunities to provide context and allocate the proceeds evenly. Following these three situations, we conducted three experiments:

Experiment 1: A and B negotiate 50 times; each time we assign different values to $d_A$, $d_B$, $rep_A$ and $rep_B$ (satisfying $d_A \ll d_B$ and $rep_A \gg rep_B$) and to the reserved utilities of A and B. In this experiment, 3 negotiation games were terminated because a proposal's utility fell below a reserved utility. A won the right to provide context 47 times. A's average portion of the proceeds is about 0.683 and B's is 0.317; the average time to reach agreement is 8.4. We also find that when B asks A to provide the context in its first offer, B can request, and obtain, a larger portion of the proceeds because of its goodwill.

Experiment 2: A and B again negotiate 50 times, given different values of $d_A$, $d_B$, $rep_A$ and $rep_B$ (satisfying $-50 \le \Delta d = d_A - d_B \le 50$ and $-50 \le \Delta rep = rep_A - rep_B \le 50$) and different reserved utilities. Here 8 negotiation games were terminated because a proposal's utility fell below a reserved utility. A and B won the right to provide context 20 and 22 times respectively. A's average portion of the proceeds is 0.528 and B's is 0.472; the average negotiation time is 10.5.

Experiment 3: A and B negotiate 50 times, given values of $d_A$, $d_B$, $rep_A$ and $rep_B$ satisfying $-0.2 \le (d_A - d_B)/500 - (rep_A - rep_B)/1000 \le 0.2$, and different reserved utilities. 6 negotiation games were terminated forcibly. A and B won the right to provide context 21 and 23 times respectively. A's average portion of the proceeds is 0.481 and B's is 0.519; the average negotiation time is 9.2.
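As a sanity check on the setting of Experiment 1, the forced-termination selection rule of Section 3.2 can be evaluated directly with the Table 1 $U_c$ functions; the concrete $d$ and $rep$ values below are illustrative, not taken from the experiments.

```python
# Toy check of the forced-termination rule of Section 3.2 under the
# Table 1 U_c functions: with d_A << d_B and rep_A >> rep_B, the rule
# should select provider A. The concrete values are illustrative.

def u_c_view(view, provider, d, rep):
    """U_c for `provider` as seen by `view` ("A" or "B"), per Table 1."""
    w_d, w_rep = (0.5, 0.5) if view == "A" else (0.52, 0.48)
    return w_d * (1 - d[provider] / 500.0) + w_rep * rep[provider] / 1000.0

def forced_choice(d, rep):
    """Compare U_c^A(x) + U_c^B(x) for x in {A, B}."""
    score = {x: u_c_view("A", x, d, rep) + u_c_view("B", x, d, rep)
             for x in ("A", "B")}
    if score["A"] == score["B"]:
        return "tie"  # the Context Manager would then pick at random
    return max(score, key=score.get)

winner = forced_choice(d={"A": 50.0, "B": 400.0},
                       rep={"A": 800.0, "B": -200.0})
```

With a much smaller distance and much higher reputation, A dominates both providers' views of $U_c$, so the rule picks A; swapping the values picks B.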
It should be noted that, besides $d$, $rep$, $p$ and $ep$, other factors (e.g. the weights, the time discount function $\delta(t)$ and the concession rate function $c(t)$) can also affect the negotiation outcome. These factors should be adjusted according to the providers' beliefs at the beginning of each negotiation process; in our experiments, for simplicity, we assigned them values in advance without any particular tuning. The experimental results show that our approach can choose an appropriate context provider and can provide a relatively fair allocation of the proceeds. When one provider is clearly more suitable than the other, it wins the right to provide context and receives the major portion of the proceeds; when both providers have approximately the same ability to provide context, they get equal opportunities to provide it and each receives about half of the proceeds.

5. RELATED WORK
In [4], Huebscher and McCann propose an adaptive middleware design for context-aware applications. Their middleware uses utility functions to choose the best context provider, given the QoC requirements of applications and the QoC of the alternative means of context acquisition; the calculation of the utility function $U_c$ in our negotiation model was inspired by this approach. Henricksen and Indulska [3] propose an approach to modelling and using imperfect context information: they characterize various types and sources of imperfect context information, present a set of novel context modelling constructs, and outline a software infrastructure that supports the management and use of imperfect context information. Judd and Steenkiste [5] describe a generic interface for querying context services that allows clients to specify their quality requirements as bounds on accuracy, confidence, update time and sample interval. Lei et al. [6] present a context service that accepts freshness and confidence meta-data from context sources and passes them along to clients so that the clients can adjust their level of trust accordingly. Xu and Cheung [10] present a framework for realizing dynamic context consistency management; the framework supports inconsistency detection, based on a semantic matching and inconsistency triggering model, and inconsistency resolution with proactive actions towards context sources. Most approaches to providing appropriate context rely on a centralized arbitrator. In our approach, the distributed context providers themselves decide who provides the context information. This reduces the burden on the middleware, since the middleware need not supply a context selection mechanism; it avoids the serious consequences caused by a breakdown of the arbitrator; and it guarantees the context providers' interests.

6. CONCLUSION AND FUTURE WORK
Providing appropriate context information is a challenging problem in pervasive computing. In this paper we have presented a novel approach based on negotiation with rewards for solving this problem: distributed context providers negotiate with each other to reach agreement on who provides the appropriate context and how they allocate the proceeds. Our experimental results show that the approach can choose an appropriate context provider and can guarantee the providers' interests through a relatively fair allocation of the proceeds. In this paper we consider only how to choose an appropriate context provider from two providers; in future work we will extend the negotiation model so that more than two context providers can negotiate to decide who is the most appropriate, in which case designing efficient negotiation strategies becomes a challenging problem. We also assume that a context provider will fulfill its promise of a reward in the next negotiation process; in fact, a provider might deceive its opponent with an illusory promise, a problem we intend to address. Finally, future work should deal with interactions that are interrupted by failing communication links.

7. ACKNOWLEDGEMENT
The work is funded by the 973 Project of China (2002CB312002, 2006CB303000), NSFC (60403014) and NSFJ (BK2006712).

8. REFERENCES
[1] J. Coutaz, J. L. Crowley, S. Dobson, and D. Garlan. Context is key. Commun. ACM, 48(3):49-53, March 2005.
[2] D. G. Pruitt. Negotiation Behavior. Academic Press, 1981.
[3] K. Henricksen and J. Indulska. Modelling and using imperfect context information. In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops, pages 33-37, 2004.
[4] M. C. Huebscher and J. A. McCann. Adaptive middleware for context-aware applications in smart-homes. In Proceedings of the 2nd Workshop on Middleware for Pervasive and Ad-hoc Computing (MPAC '04), pages 111-116, October 2004.
[5] G. Judd and P. Steenkiste. Providing contextual information to pervasive computing applications. In Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, pages 133-142, 2003.
[6] H. Lei, D. M. Sow, J. S. Davis, G. Banavar, and M. R. Ebling. The design and applications of a context service. ACM SIGMOBILE Mobile Computing and Communications Review, 6(4):45-55, 2002.
[7] J. Liu and V. Issarny. Enhanced reputation mechanism for mobile ad-hoc networks. In Trust Management: Second International Conference, iTrust, 2004.
[8] H. Raiffa. The Art and Science of Negotiation. Harvard University Press, 1982.
[9] S. D. Ramchurn, N. R. Jennings, and C. Sierra. Persuasive negotiation for autonomous agents: A rhetorical approach. In C.
Reed, editor, Workshop on the Computational Models of Natural Argument, IJCAI, pages 9-18, 2003.
[10] C. Xu and S. C. Cheung. Inconsistency detection and resolution for context-aware middleware support. In Proceedings of the 10th European Software Engineering Conference, pages 336-345, 2005.

Rewards-Based Negotiation for Providing Context Information

ABSTRACT
How to provide appropriate context information is a challenging problem in context-aware computing. Most existing approaches use a centralized selection mechanism to decide which context information is appropriate. In this paper, we propose a novel approach based on negotiation with rewards for solving this problem. Distributed context providers negotiate with each other to decide who will provide context and how they will allocate the proceeds. To support our approach, we have designed a concrete negotiation model with rewards. We also evaluate our approach and show that it can indeed choose an appropriate context provider and allocate the proceeds fairly.

1. INTRODUCTION
Context-awareness is a key concept in pervasive computing. Context informs both recognition and mapping by providing a structured, unified view of the world in which the system operates [1]. Context-aware applications exploit context information, such as location and user preferences, to adapt their behavior to the changing requirements of users and pervasive environments. However, one specific kind of context can often be provided by different context providers (sensors or other data sources of context information) at different quality levels. For example, in a smart home, thermometer A's measurement precision is 0.1 °C and thermometer B's is 0.5 °C; A can thus provide more precise context information about temperature than B.
Moreover, sometimes different context providers may provide conflictive context information.\nFor example, different sensors report that the same person is in different places at the same time.\nBecause context-aware applications utilize context information to adapt their behaviors, inappropriate context information may lead to inappropriate behavior.\nThus we should design a mechanism to provide appropriate context information for current context-aware applications.\nIn pervasive environments, context providers considered as relatively independent entities, have their own interests.\nThey hope to get proceeds when they provide context information.\nHowever, most existing approaches consider context providers as entities without any personal interests, and use a centralized \"arbitrator\" provided by the middleware to decide who can provide appropriate context.\nThus the burden of the middleware is very heavy, and its decision may be unfair and harm some providers' interests.\nMoreover, when such \"arbitrator\" is broken down, it will cause serious consequences for context-aware applications.\nIn this paper, we let distributed context providers themselves decide who provide context information.\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future, providers try to get the right to provide \"good\" context to enhance their reputation.\nIn order to get such right, context providers may agree to share some portion of the proceeds with its opponents.\nThus context providers negotiate with each other to reach agreement on the issues who can provide context and how they allocate the proceeds.\nOur approach has some specific advantages:\n1.\nWe do not need an \"arbitrator\" provided by the middleware of pervasive computing to decide who provides context.\nThus it will reduce the burden of the middleware.\n2.\nIt is more reasonable that distributed context providers decide who provide context, because it 
can avoid the serious consequences caused by a breakdown of a centralized \"arbitrator\".\n3.\nIt can guarantee providers' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement on their concerned problems.\n4.\nThis approach can choose an appropriate provider au\ntomatically.\nIt does not need any applications and users' intervention.\nThe negotiation model we have designed to support our approach is also a novel model in negotiation domain.\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of next negotiation process (i.e. rewards).\nNegotiator may find current offer and reward worth more than counter-offer which will delay the agreement, and accepts current offer and reward.\nWithout the reward, it may find current offer worth less than the counter-offer, and proposes its counter-offer.\nIt will cost more time to reach agreement.\nIt also expands the negotiation space considered in present negotiation process, and therefore provides more possibilities to find better agreement.\nThe remainder of this paper is organized as follows.\nSection 2 presents some assumptions.\nSection 3 describes our approach based on negotiation detailedly, including utility functions, negotiation protocol and context providers' strategies.\nSection 4 evaluates our approach.\nIn section 5 we introduce some related work and conclude in section 6.\n2.\nSOME ASSUMPTIONS\n3.\nOUR APPROACH\n3.1 Utility function\n3.2 Negotiation protocol\n3.3 Negotiation strategy\n4.\nEVALUATION\n5.\nRELATED WORK\nIn [4], Huebscher and McCann have proposed an adaptive middleware design for context-aware applications.\nTheir adaptive middleware uses utility functions to choose the best context provider (given the QoC requirements of applications and the QoC of alternative means of context acquisition).\nIn our negotiation model, the calculation of utility function Uc was inspired 
by this approach.\nHenricksen and Indulska propose an approach to modelling and using imperfect information in [3].\nThey characterize various types and sources of imperfect context information and present a set of novel context modelling constructs.\nThey also outline a software infrastructure that supports the management and use of imperfect context information.\nJudd and Steenkiste in [5] describe a generic interface to query context services allowing clients to specify their quality requirements as bounds on accuracy, confidence, update time and sample interval.\nIn [6], Lei et al. present a context service which accepts freshness and confidence meta-data from context sources, and passes this along to clients so that they can adjust their level of trust accordingly.\n[10] presents a framework for realizing dynamic context consistency management.\nThe framework supports inconsistency detection based on a semantic matching and inconsistency triggering model, and inconsistency resolution with proactive actions to context sources.\nMost approaches to provide appropriate context utilize a centralized \"arbitrator\".\nIn our approach, we let distributed context providers themselves decide who can provide appropriate context information.\nOur approach can reduce the burden of the middleware, because we do not need the middleware to provide a context selection mechanism.\nIt can avoid the serious consequences caused by a breakdown of the \"arbitrator\".\nAlso, it can guarantee context providers' interests.\n6.\nCONCLUSION AND FUTURE WORK\nHow to provide the appropriate context information is a challenging problem in pervasive computing.\nIn this paper, we have presented a novel approach based on negotiation with rewards to attempt to solve such problem.\nDistributed context providers negotiate with each other to reach agreement on the issues who can provide the appropriate context and how they allocate the proceeds.\nThe results of our experiments have showed that our 
approach can choose an appropriate context provider, and also can guarantee providers' interests by a relatively fair proceeds allocation.\nIn this paper, we only consider how to choose an appropriate context provider from two providers.\nIn the future work, this negotiation model will be extended, and more than two context providers can negotiate with each other to decide who is the most appropriate context provider.\nIn the extended negotiation model, how to design efficient negotiation strategies will be a challenging problem.\nWe assume that the context provider will fulfill its promise of reward in the next negotiation process.\nIn fact, the context provider might deceive its opponent and provide illusive promise.\nWe should solve this problem in the future.\nWe also should deal with interactions which are interrupted by failing communication links in the future work.","lvl-4":"Rewards-Based Negotiation for Providing Context Information\nABSTRACT\nHow to provide appropriate context information is a challenging problem in context-aware computing.\nMost existing approaches use a centralized selection mechanism to decide which context information is appropriate.\nIn this paper, we propose a novel approach based on negotiation with rewards to solving such problem.\nDistributed context providers negotiate with each other to decide who can provide context and how they allocate proceeds.\nIn order to support our approach, we have designed a concrete negotiation model with rewards.\nWe also evaluate our approach and show that it indeed can choose an appropriate context provider and allocate the proceeds fairly.\n1.\nINTRODUCTION\nContext-awareness is a key concept in pervasive computing.\nContext informs both recognition and mapping by providing a structured, unified view of the world in which the system operates [1].\nContext-aware applications exploit context information, such as location, preferences of users and so on, to adapt their behaviors in response to 
changing requirements of users and pervasive environments.\nHowever, one specific kind of context can often be provided by different context providers (sensors or other data sources of context information) with different quality levels.\nFor example,\nBecause context-aware applications utilize context information to adapt their behaviors, inappropriate context information may lead to inappropriate behavior.\nThus we should design a mechanism to provide appropriate context information for current context-aware applications.\nIn pervasive environments, context providers considered as relatively independent entities, have their own interests.\nThey hope to get proceeds when they provide context information.\nHowever, most existing approaches consider context providers as entities without any personal interests, and use a centralized \"arbitrator\" provided by the middleware to decide who can provide appropriate context.\nThus the burden of the middleware is very heavy, and its decision may be unfair and harm some providers' interests.\nMoreover, when such \"arbitrator\" is broken down, it will cause serious consequences for context-aware applications.\nIn this paper, we let distributed context providers themselves decide who provide context information.\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future, providers try to get the right to provide \"good\" context to enhance their reputation.\nIn order to get such right, context providers may agree to share some portion of the proceeds with its opponents.\nThus context providers negotiate with each other to reach agreement on the issues who can provide context and how they allocate the proceeds.\nOur approach has some specific advantages:\n1.\nWe do not need an \"arbitrator\" provided by the middleware of pervasive computing to decide who provides context.\nThus it will reduce the burden of the middleware.\n2.\nIt is more reasonable that distributed 
context providers decide who provides context, because it can avoid the serious consequences caused by a breakdown of a centralized \"arbitrator\".\n3.\nIt can guarantee providers' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement on their concerned problems.\n4.\nThis approach can choose an appropriate provider automatically.\nThe negotiation model we have designed to support our approach is also a novel model in the negotiation domain.\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of the next negotiation process (i.e. rewards).\nWithout rewards, it would cost more time to reach agreement.\nThe model also expands the negotiation space considered in the present negotiation process, and therefore provides more possibilities to find a better agreement.\nSection 2 presents some assumptions.\nSection 3 describes our negotiation-based approach in detail, including utility functions, the negotiation protocol and context providers' strategies.\nSection 4 evaluates our approach.\nIn section 5 we introduce some related work and conclude in section 6.\n5.\nRELATED WORK\nIn [4], Huebscher and McCann have proposed an adaptive middleware design for context-aware applications.\nTheir adaptive middleware uses utility functions to choose the best context provider (given the QoC requirements of applications and the QoC of alternative means of context acquisition).\nIn our negotiation model, the calculation of the utility function Uc was inspired by this approach.\nHenricksen and Indulska propose an approach to modelling and using imperfect information in [3].\nThey characterize various types and sources of imperfect context information and present a set of novel context modelling constructs.\nThey also outline a software infrastructure that supports the management and use of imperfect context information.\n[10] presents a framework for realizing dynamic context consistency
management.\nThe framework supports inconsistency detection based on a semantic matching and inconsistency triggering model, and inconsistency resolution with proactive actions to context sources.\nMost existing approaches to providing appropriate context utilize a centralized \"arbitrator\".\nIn our approach, we let distributed context providers themselves decide who can provide appropriate context information.\nOur approach can reduce the burden of the middleware, because we do not need the middleware to provide a context selection mechanism.\nAlso, it can guarantee context providers' interests.\n6.\nCONCLUSION AND FUTURE WORK\nHow to provide appropriate context information is a challenging problem in pervasive computing.\nIn this paper, we have presented a novel approach based on negotiation with rewards to attempt to solve this problem.\nDistributed context providers negotiate with each other to reach agreement on the issues of who provides the appropriate context and how to allocate the proceeds.\nThe results of our experiments have shown that our approach can choose an appropriate context provider, and can also guarantee providers' interests through a relatively fair proceeds allocation.\nIn this paper, we only consider how to choose an appropriate context provider from two providers.\nIn future work, this negotiation model will be extended so that more than two context providers can negotiate with each other to decide who is the most appropriate context provider.\nIn the extended negotiation model, how to design efficient negotiation strategies will be a challenging problem.\nWe assume that the context provider will fulfill its promise of reward in the next negotiation process.\nIn fact, the context provider might deceive its opponent and make an illusory promise.\nWe should solve this problem in the future.","lvl-2":"Rewards-Based Negotiation for Providing Context Information\nABSTRACT\nHow to provide appropriate context information is a challenging problem in
context-aware computing.\nMost existing approaches use a centralized selection mechanism to decide which context information is appropriate.\nIn this paper, we propose a novel approach based on negotiation with rewards to solve this problem.\nDistributed context providers negotiate with each other to decide who can provide context and how to allocate the proceeds.\nIn order to support our approach, we have designed a concrete negotiation model with rewards.\nWe also evaluate our approach and show that it can indeed choose an appropriate context provider and allocate the proceeds fairly.\n1.\nINTRODUCTION\nContext-awareness is a key concept in pervasive computing.\nContext informs both recognition and mapping by providing a structured, unified view of the world in which the system operates [1].\nContext-aware applications exploit context information, such as location, preferences of users and so on, to adapt their behaviors in response to changing requirements of users and pervasive environments.\nHowever, one specific kind of context can often be provided by different context providers (sensors or other data sources of context information) with different quality levels.\nFor example,\nin a smart home, thermometer A's measurement precision is 0.1 \u00b0 C, and thermometer B's measurement precision is 0.5 \u00b0 C. Thus A could provide more precise context information about temperature than B.
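The quality gap in this example can be made concrete with the scaled, weighted QoC distance that Section 3.1 adopts from [4]. The sketch below is only illustrative: the function name, the attribute values, and the unit scales/weights are all hypothetical, and the penalty term o(a_i) for attributes a provider cannot quantify is omitted for brevity.

```python
import math

def qoc_distance(wished, offered, scale, weight, norm="max"):
    """Scaled, weighted distance between an application's wished QoC
    attribute values and a provider's offered values (sketch of the
    distance computation described in Section 3.1, following [4]).
    `None` in `wished` means the application is indifferent to that
    attribute, so it contributes 0 to the distance."""
    d = []
    for a, cp, s, w in zip(wished, offered, scale, weight):
        if a is None:
            d.append(0.0)               # indifferent attribute: d_i = 0
        else:
            d.append(w * s * (cp - a))  # d_i = w_i * s_i * (cp_i - a_i)
    if norm == "1":
        return sum(abs(x) for x in d)            # linear distance (1-norm)
    if norm == "2":
        return math.sqrt(sum(x * x for x in d))  # Euclidean distance (2-norm)
    return max(abs(x) for x in d)                # maximum distance (max-norm)

# Two thermometers: A measures with 0.1 degree precision, B with 0.5.
# The application wishes for 0.0 (ideal precision); a smaller distance
# indicates a more appropriate provider for this attribute.
wished, scale, weight = [0.0], [1.0], [1.0]
dA = qoc_distance(wished, [0.1], scale, weight)
dB = qoc_distance(wished, [0.5], scale, weight)
assert dA < dB  # A offers the higher-quality temperature context
```

The norm choice only matters once several differently scaled attributes (e.g. precision in metres and refresh rate in Hz) are combined, which is exactly why the scaling factors s_i are needed.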
Moreover, sometimes different context providers may provide conflicting context information.\nFor example, different sensors report that the same person is in different places at the same time.\nBecause context-aware applications utilize context information to adapt their behaviors, inappropriate context information may lead to inappropriate behavior.\nThus we should design a mechanism to provide appropriate context information for current context-aware applications.\nIn pervasive environments, context providers, considered as relatively independent entities, have their own interests.\nThey hope to get proceeds when they provide context information.\nHowever, most existing approaches consider context providers as entities without any personal interests, and use a centralized \"arbitrator\" provided by the middleware to decide who can provide appropriate context.\nThus the burden of the middleware is very heavy, and its decision may be unfair and harm some providers' interests.\nMoreover, when such an \"arbitrator\" breaks down, it will cause serious consequences for context-aware applications.\nIn this paper, we let distributed context providers themselves decide who provides context information.\nSince high reputation could help providers get more opportunities to provide context and get more proceeds in the future, providers try to get the right to provide \"good\" context to enhance their reputation.\nIn order to get such a right, context providers may agree to share some portion of the proceeds with their opponents.\nThus context providers negotiate with each other to reach agreement on the issues of who provides context and how to allocate the proceeds.\nOur approach has some specific advantages:\n1.\nWe do not need an \"arbitrator\" provided by the middleware of pervasive computing to decide who provides context.\nThus it will reduce the burden of the middleware.\n2.\nIt is more reasonable that distributed context providers decide who provides context, because it
can avoid the serious consequences caused by a breakdown of a centralized \"arbitrator\".\n3.\nIt can guarantee providers' interests and provide fair proceeds allocation when providers negotiate with each other to reach agreement on their concerned problems.\n4.\nThis approach can choose an appropriate provider automatically.\nIt does not need any intervention from applications or users.\nThe negotiation model we have designed to support our approach is also a novel model in the negotiation domain.\nThis model can help negotiators reach agreement in the present negotiation process by providing some guarantees over the outcome of the next negotiation process (i.e. rewards).\nA negotiator may find the current offer and reward worth more than a counter-offer which would delay the agreement, and accept the current offer and reward.\nWithout the reward, it might find the current offer worth less than the counter-offer and propose its counter-offer, which would cost more time to reach agreement.\nThe model also expands the negotiation space considered in the present negotiation process, and therefore provides more possibilities to find a better agreement.\nThe remainder of this paper is organized as follows.\nSection 2 presents some assumptions.\nSection 3 describes our negotiation-based approach in detail, including utility functions, the negotiation protocol and context providers' strategies.\nSection 4 evaluates our approach.\nIn section 5 we introduce some related work and conclude in section 6.\n2.\nSOME ASSUMPTIONS\nBefore introducing our approach, we would like to give some assumptions:\n1.\nAll context providers are well-meaning and honest.\nDuring the negotiation process, they exchange information honestly.\nRewards confirmed in this negotiation process will be fulfilled in the next negotiation process.\n2.\nAll providers must guarantee the system's interests.\nThey should provide appropriate context information for current applications.\nAfter guaranteeing the system's interest, they can try to
maximize their own personal interests.\nThis assumption is reasonable, because when an inappropriate context provider gets the right to provide \"bad\" context, as a punishment, its reputation will decrease and its proceeds will also be very small.\n3.\nAs context providers are independent, factors which influence their negotiation stance and behavior are private and not available to their opponents.\nTheir utility functions are also private.\n4.\nSince the negotiation takes place in pervasive environments, time is a critical factor.\nThe current application often hopes to get context information as quickly as possible, so the time cost to reach agreement should be as short as possible.\nContext providers often have a strict deadline by which the negotiation must be completed.\nAfter presenting these assumptions, we will propose our approach based on negotiation with rewards in the next section.\n3.\nOUR APPROACH\nIn the beginning, we introduce the concepts of reputation and Quality of Context (QoC) attributes.\nBoth will be used in our approach.\nReputation of an agent is a perception regarding its behavior norms, which is held by other agents, based on experiences and observation of its past actions [7].\nHere, an agent means a context provider.\nEach provider's reputation indicates its historical ability to provide appropriate context information.\nQuality of Context (QoC) attributes characterize the quality of context information.\nWhen applications require context information, they should specify their QoC requirements, which express constraints on QoC attributes.\nContext providers can specify QoC attributes for the context information they deliver.\nAlthough we can decide who provides appropriate context according to the QoC requirements and context providers' QoC information, applications' QoC requirements might not reflect the actual quality requirements.\nThus, in addition to QoC, reputation information of context providers is another factor affecting the decision of who
can provide context information.\nNegotiation is a process by which a joint decision is made by two or more parties.\nThe parties first verbalize contradictory demands and then move towards agreement by a process of concession making or search for new alternatives [2].\nIn pervasive environments, all available context providers negotiate with each other to decide who can provide context information.\nThis process is repeated because a given kind of context is needed more than once.\nNegotiation using persuasive arguments (such as threats, promises of future rewards, and appeals) allows negotiation parties to influence each other's preferences to reach better deals effectively and efficiently [9].\nSuch persuasive negotiation is effective in repeated interaction because arguments can be constructed to directly impact future encounters.\nIn this paper, for simplicity, we let negotiation take place between two providers.\nWe extend Raiffa's basic model for bilateral negotiation [8], and allow negotiators to negotiate with each other by exchanging arguments in the form of promises of future rewards or requests for future rewards.\nRewards mean some extra proceeds in the next negotiation process.\nThey can influence the outcomes of current and future negotiations.\nIn our approach, as described by Figure 1, the current application requires Context Manager to provide a specific type of context information satisfying its QoC requirements.\nContext Manager finds that providers A and B can provide such kind of context with different quality levels.\nThen the manager tells A and B to negotiate to reach agreement on who can provide the context information and how they will allocate the proceeds.\nBoth providers get reputation information from the database Reputation of Context Providers as well as the QoC requirements, and then negotiate with each other according to our negotiation model.\nWhen negotiation is completed, the chosen provider will provide the context information to Context
Manager, and then Context Manager delivers such information to the application and also stores it in Context Knowledge Base, where current and historical context information is stored.\nThe current application gives feedback information about the provided context, and then Context Manager will update the chosen provider's reputation information according to the feedback.\nContext Manager also provides the proceeds to providers according to the feedback information and the time cost on negotiation.\nIn the following parts of this section, we describe our negotiation model in detail, including context providers' utility functions to evaluate offers and rewards, the negotiation protocol, and strategies to generate offers and rewards.\nFigure 1: Negotiate to provide appropriate context information.\n3.1 Utility function\nDuring the negotiation process, one provider proposes an offer and a reward to the other provider.\nAn offer is noted as o = (c, p): c indicates the chosen context provider and its domain is Dc (i.e. the two context providers participating in the negotiation); p means the proposer's portion of the proceeds, and its domain is Dp = [0,1].\nIts opponent's portion of the proceeds is 1 \u2212 p.\nThe reward ep's domain is Dep = [-1,1], and | ep | means the extra portion of proceeds the proposer promises to provide or requests in the next negotiation process.\nep < 0 means the proposer promises to provide a reward, ep > 0 means the proposer requests a reward, and ep = 0 means no reward.\nThe opponent evaluates the offer and reward to decide whether to accept them or propose a counter-offer and a reward.\nThus context providers should have utility functions to evaluate offers and rewards.\nTime is a critical factor, and only at times in the set T = {0, 1, 2,...tdeadline} can context providers propose their offers.\nThe set O includes all available offers.\nContext provider A's utility function of the offer and reward at time t, UA: O \u00d7 Dep \u00d7 T \u2192 [\u2212 1, 1], is defined as:\nUA (o, ep, t) = (wA1 \u00b7 UcA (c) + wA2 \u00b7 UpA (p) + wA3 \u00b7 UAep (ep)) \u00b7 \u03b4A (t) (1)\nSimilarly, the utility function of A's opponent (i.e. B) can be defined as:\nUB (o, ep, t) = (wB1 \u00b7 UcB (c) + wB2 \u00b7 UpB (1 \u2212 p) + wB3 \u00b7 UBep (\u2212 ep)) \u00b7 \u03b4B (t) (2)\nIn (1), wA1, wA2 and wA3 are weights given to c, p and ep respectively, and wA1 + wA2 + wA3 = 1.\nUsually, the context provider pays the most attention to the system's interests and the least attention to the reward, thus wA1 > wA2 > wA3.\nUcA: Dc \u2192 [\u2212 1, 1] is the utility function of the issue of who provides context.\nThis function is determined by two factors: the distance between c's QoC and the current application's QoC requirements, and c's reputation.\nThe two negotiators acquire c's QoC information from c, and we use the approach proposed in [4] to calculate the distance between c's QoC and the application's QoC requirements.\nThe required context has n QoC attributes; let the application's wishes for this context be a ~ = (a1, a2...an) (where ai = \u00af means the application's indifference to the i-th QoC attribute), and c's QoC attributes be ~ cp = (cp1, cp2...cpn) (where cpi = \u00af means c's inability to provide a quantitative value for the i-th QoC attribute).\nBecause numerical distance values of different properties are combined, e.g. location precision in metres with refresh rate in Hz, a standard scale for all dimensions is needed.\nThe scaling factors for the QoC attributes are s ~ = (s1, s2...sn).\nIn addition, different QoC attributes may have different weights: w ~ = (w1, w2...wn).\nThen d ~ = (d1, d2...dn) with di = wi \u00b7 si \u00b7 (cpi \u2212 ai), where cpi \u2212 ai = 0 for ai = \u00af and cpi \u2212 ai = o (ai) for cpi = \u00af (o (.)\ndetermines the application's satisfaction or dissatisfaction when c is unable to provide an estimate of a QoC attribute, given the value wished for by the application).\nThe distance can be the linear distance (1-norm), the Euclidean distance (2-norm), or the maximum distance (max-norm): | | ~ d | | \u221e = max {| d1 |, | d2 |...| dn |} (max \u2212 norm).\nThe detailed description of this calculation can be found in [4].\nReputation of c can be acquired from the database Reputation of Context Providers.\nUcA (c): R \u00d7 Drep \u2192 [\u2212 1, 1] can be defined as:\nUcA (c) = wAc1 \u00b7 UdA (| | ~ d | |) + wAc2 \u00b7 UArep (rep (c)) (3)\nwhere wAc1 and wAc2 are weights given to the distance and reputation respectively, and wAc1 + wAc2 = 1.\nDrep is the domain of reputation information.\nUdA: R \u2192 [0, 1] is a monotone-decreasing function and UArep: Drep \u2192 [\u2212 1, 1] is a monotone-increasing function.\nUpA: Dp \u2192 [0, 1] is the utility function of the portion of proceeds A will receive, and it is also a monotone-increasing function.\nA's utility function of the reward ep, UAep: Dep \u2192 [\u2212 1, 1], is also a monotone-increasing function with UAep (0) = 0.\n\u03b4A: T \u2192 [0, 1] is the time discount function.\nIt is also a monotone-decreasing function.\nWhen the time t cost on negotiation increases, \u03b4A (t) will decrease, and the utility will also decrease.\nThus both negotiators want to reach agreement as quickly as possible to avoid loss of utility.\n3.2 Negotiation protocol\nWhen providers A and B have got the QoC requirements and reputation information, they begin to negotiate.\nThey first set their reserved (the lowest acceptable) utility, which can guarantee the system's interests and their personal interests.\nWhen a context provider finds the utility of an offer and a reward lower than its reserved utility, it will reject this proposal and terminate the negotiation process.\nThe provider who starts the negotiation is chosen randomly.\nWe assume A starts the negotiation, and it proposes offer o and reward ep to B according to its strategy (see subsection 3.3).\nWhen B receives the proposal from A, it uses its utility function to evaluate it.\nIf it is lower than its reserved utility, the provider terminates the negotiation.\nOtherwise, if UB (o, ep, t) \u2265 UB (o', ep', t + 1), i.e.
the utility of o and ep proposed by A at time t is greater than the utility of the offer o' and reward ep' which B will propose to A at time t + 1, B will accept this offer and reward.\nThe negotiation is completed.\nHowever, if\nUB (o, ep, t) < UB (o', ep', t + 1),\nthen B will reject A's proposal, and propose its counter-offer and reward to A.\nWhen A receives B's counter-offer and reward, A evaluates them using its utility function, compares their utility with the utility of the offer and reward it would propose to B at time t + 2, and decides whether to accept them or give its counter-offer and reward.\nThis negotiation process continues, and in each negotiation round, context providers concede in order to reach agreement.\nThe negotiation will be successfully finished when agreement is reached, or be terminated forcibly due to the deadline or a utility lower than the reserved utility.\nWhen negotiation is forced to be terminated, Context Manager will ask A and B to calculate UcA (A), UcA (B), UcB (A) and UcB (B) respectively.\nIf these utilities do not favor either provider, Context Manager will select a provider from A and B randomly.\nIn addition, Context Manager allocates the proceeds between the two providers.\nAlthough we can select one provider when negotiation is terminated forcibly, this may lead to an unfair allocation of the proceeds.\nMoreover, the more time negotiators spend on negotiation, the less proceeds will be given.\nThus negotiators will try to reach agreement as soon as possible in order to avoid unnecessary loss.\nWhen the negotiation is finished, the chosen provider provides the context information to Context Manager, which will deliver the information to the current application.\nAccording to the application's feedback information about this context, Context Manager updates the provider's reputation stored in Reputation of Context Providers.\nThe provider's reputation may be enhanced or decreased.\nIn addition, according to the feedback and the negotiation time, Context Manager will give proceeds to the provider.\nThen the provider will share the
proceeds with its opponent according to the negotiation outcome and the reward confirmed in the last negotiation process.\nFor example, suppose that in the last negotiation process A promised to give reward ep (0 \u2264 ep < 1) to B, and A's portion of the proceeds is p in the current negotiation.\nThen A's actual portion of the proceeds is p \u00b7 (1 \u2212 ep), and its opponent B's portion of the proceeds is 1 \u2212 p + p \u00b7 ep.\n3.3 Negotiation strategy\nA context provider might want to pursue the right to provide context information blindly in order to enhance its reputation.\nHowever, when it finally provides \"bad\" context information, its reputation will decrease and its proceeds will also be very small.\nThus the context provider should take action according to its strategy.\nThe aim of a provider's negotiation strategy is to determine the best course of action which will result in a negotiation outcome maximizing its utility function (i.e. how to generate an offer and a reward).\nIn our negotiation model, the context provider generates its offer and reward according to its previous offer and reward and the last ones sent by its opponent.\nAt the beginning of the negotiation, context providers initialize their offers and rewards according to their beliefs and their reserved utilities.\nIf context provider A considers that it can provide \"good\" context and wants to enhance its reputation, then it will propose that A provide the context information, share some proceeds with its opponent B, and even promise to give a reward.\nHowever, if A considers that it may provide \"bad\" context, A will propose that its opponent B provide the context, and require B to share some proceeds and provide a reward.\nDuring the negotiation process, we assume that at time t A proposes offer ot and reward ept to B, and at time t + 1, B proposes counter-offer ot+1 and reward ept+1 to A.\nThen at time t + 2, when the utility of B's proposal is greater than A's reserved utility, A gives its
response.\nWe now calculate the expected utility to be conceded at time t + 2; we use Cu to express the conceded utility (if the concession is large enough, A will accept B's proposal), where cA: T \u2192 [0, 1] is a monotone-increasing function.\ncA (t) indicates A's utility concession rate.\nA concedes a little in the beginning before conceding significantly towards the deadline.\nThen A generates its offer ot+2 = (ct+2, pt+2) and reward ept+2 at time t + 2.\nThe expected utility of A at time t + 2 is its previous utility minus Cu; if this expected utility is not greater than the utility of B's proposal, then A will accept B's proposal (i.e. ot+1 and ept+1).\nOtherwise, A will propose its counter-offer and reward based on Cu.\nWe assume that Cu is distributed evenly on c, p and ep (i.e. the utility to be conceded on each of c, p and ep is Cu/3).\nIf the expected utility of c at time t + 2, i.e. UcA (ct) \u2212 \u03b4A (t + 2), is closer to the utility of A's proposal ct at time t, then at time t + 2, ct+2 = ct; otherwise the utility is closer to B's proposal ct+1 and ct+2 = ct+1.\nWhen ct+2 is equal to ct, the actual conceded utility of c is 0, and the total concession on p and ep is Cu.\nWe divide the total concession on p and ep evenly, and get the conceded utility of p and ep respectively.\npt+2 and ept+2 are then calculated from these conceded utilities.\nNow we have generated the offer and reward A will propose at time t + 2.\nSimilarly, B can also generate its offer and reward.\nTable 1: Utility functions and weights of c, p and ep for each provider\n4.\nEVALUATION\nIn this section, we evaluate the effectiveness of our approach by simulated experiments.\nContext providers A and B negotiate to reach agreement.\nThey get the QoC requirements and calculate the distance between the QoC requirements and their QoC.\nFor simplicity, in our experiments, we assume that the distance has been calculated; dA represents the distance between the QoC requirements and A's QoC, and dB represents the distance between the QoC requirements and B's QoC.\nThe domain of dA and dB is [0,500].\nWe assume reputation value is a real number and its domain is
[-1000, 1000], repA represents A's reputation value and repB represents B's reputation value.\nWe assume that both providers pay the most attention to the system's interests and the least attention to the reward, thus w1 > w2 > w3, and the weight of Ud approximates the weight of Urep.\nA and B's utility functions and the weights of c, p and ep are defined in Table 1.\nWe set the deadline tdeadline = 100, and define the time discount function \u03b4 (t) and the concession rate function c (t) of A and B as functions of t and tdeadline.\nGiven different values of dA, dB, repA and repB, A and B negotiate to reach agreement.\nThe provider that starts the negotiation is chosen at random.\nWe hope that when dA \u226a dB and repA \u226b repB, A will get the right to provide context and get a major portion of the proceeds; when \u0394d = dA \u2212 dB is in a small range (e.g. [-50,50]) and \u0394rep = repA \u2212 repB is in a small range (e.g. [-50,50]), A and B will get approximately equal opportunities to provide context and allocate the proceeds evenly.\nWhen (dA \u2212 dB)/500 approximates (repA \u2212 repB)/1000 (i.e.
the two providers' abilities to provide context information are approximately equal), we also hope that A and B get equal opportunities to provide context and allocate the proceeds evenly.\nAccording to the three situations above, we conduct three experiments as follows: Experiment 1: In this experiment, A and B negotiate with each other 50 times, and each time we assign different values to dA, dB, repA, repB (satisfying dA \u226a dB and repA \u226b repB) and the reserved utilities of A and B.\nWhen the experiment is completed, we find that 3 negotiation games are terminated because the utility fell below the reserved utility.\nA gets the right to provide context 47 times.\nThe average portion of the proceeds A gets is about 0.683, and B's average portion of the proceeds is 0.317.\nThe average time cost to reach agreement is 8.4 rounds.\nWe also find that when B asks A to provide context in its first offer, B can require and get a larger portion of the proceeds because of its goodwill.\nExperiment 2: A and B also negotiate with each other 50 times in this experiment, given different values of dA, dB, repA, repB (satisfying \u221250 \u2264 \u0394d = dA \u2212 dB \u2264 50 and \u221250 \u2264 \u0394rep = repA \u2212 repB \u2264 50) and the reserved utilities of A and B.\nAfter the experiment, we find that there are 8 negotiation games terminated because the utility fell below the reserved utility.\nA and B get the right to provide context 20 times and 22 times respectively.\nThe average portion of the proceeds A gets is 0.528 and B's average portion of the proceeds is 0.472.\nThe average time cost on negotiation is 10.5 rounds.\nExperiment 3: In this experiment, A and B also negotiate with each other 50 times, given dA, dB, repA, repB (satisfying \u22120.2 \u2264 (dA \u2212 dB)/500 \u2212 (repA \u2212 repB)/1000 \u2264 0.2) and the reserved utilities of A and B.\nThere are 6 negotiation games terminated forcibly.\nA and B get the right to provide context 21 times and 23 times respectively.\nThe average portion of the proceeds A gets is 0.481 and B's average
portion of the proceeds is 0.519.\nThe average time cost on negotiation is 9.2 rounds.\nOne thing that should be mentioned is that besides d, rep, p and ep, other factors (e.g. the weights, the time discount function \u03b4 (t) and the concession rate function c (t)) could also affect the negotiation outcome.\nThese factors should be adjusted according to providers' beliefs at the beginning of each negotiation process.\nIn our experiments, for simplicity, we assign values to them in advance without any special tuning.\nThese experimental results show that our approach can choose an appropriate context provider and can provide a relatively fair proceeds allocation.\nWhen one provider is obviously more appropriate than the other provider, that provider will get the right to provide context and get a major portion of the proceeds.\nWhen both providers have approximately the same abilities to provide context, their opportunities to provide context are equal and each can get about half of the proceeds.\n5.\nRELATED WORK\nIn [4], Huebscher and McCann have proposed an adaptive middleware design for context-aware applications.\nTheir adaptive middleware uses utility functions to choose the best context provider (given the QoC requirements of applications and the QoC of alternative means of context acquisition).\nIn our negotiation model, the calculation of the utility function Uc was inspired by this approach.\nHenricksen and Indulska propose an approach to modelling and using imperfect information in [3].\nThey characterize various types and sources of imperfect context information and present a set of novel context modelling constructs.\nThey also outline a software infrastructure that supports the management and use of imperfect context information.\nJudd and Steenkiste in [5] describe a generic interface to query context services, allowing clients to specify their quality requirements as bounds on accuracy, confidence, update time and sample interval.\nIn [6], Lei et al.
present a context service which accepts freshness and confidence meta-data from context sources, and passes this along to clients so that they can adjust their level of trust accordingly.\n[10] presents a framework for realizing dynamic context consistency management.\nThe framework supports inconsistency detection based on a semantic matching and inconsistency triggering model, and inconsistency resolution with proactive actions to context sources.\nMost existing approaches to providing appropriate context utilize a centralized \"arbitrator\".\nIn our approach, we let distributed context providers themselves decide who can provide appropriate context information.\nOur approach can reduce the burden of the middleware, because we do not need the middleware to provide a context selection mechanism.\nIt can avoid the serious consequences caused by a breakdown of the \"arbitrator\".\nAlso, it can guarantee context providers' interests.\n6.\nCONCLUSION AND FUTURE WORK\nHow to provide appropriate context information is a challenging problem in pervasive computing.\nIn this paper, we have presented a novel approach based on negotiation with rewards to attempt to solve this problem.\nDistributed context providers negotiate with each other to reach agreement on the issues of who provides the appropriate context and how to allocate the proceeds.\nThe results of our experiments have shown that our approach can choose an appropriate context provider, and can also guarantee providers' interests through a relatively fair proceeds allocation.\nIn this paper, we only consider how to choose an appropriate context provider from two providers.\nIn future work, this negotiation model will be extended so that more than two context providers can negotiate with each other to decide who is the most appropriate context provider.\nIn the extended negotiation model, how to design efficient negotiation strategies will be a challenging problem.\nWe assume that the context provider will fulfill its
promise of reward in the next negotiation process.\nIn fact, the context provider might deceive its opponent and make illusory promises.\nWe should address this problem in the future.\nWe should also deal with interactions that are interrupted by failing communication links.","keyphrases":["negoti","context-awar","context-awar comput","context provid","concret negoti model","distribut applic","pervas comput","reput","context qualiti","persuas argument"],"prmu":["P","P","P","P","P","M","M","U","M","U"]} {"id":"J-17","title":"Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity","abstract":"We consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design, where the machines are the strategic players. This is a multidimensional scheduling domain, and the only known positive results for makespan minimization in such a domain are O(m)-approximation truthful mechanisms [22, 20]. We study a well-motivated special case of this problem, where the processing time of a job on each machine may either be low or high, and the low and high values are public and job-dependent. This preserves the multidimensionality of the domain, and generalizes the restricted-machines (i.e., {pj, \u221e}) setting in scheduling. We give a general technique to convert any c-approximation algorithm to a 3c-approximation truthful-in-expectation mechanism. This is one of the few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion. When the low and high values are the same for all jobs, we devise a deterministic 2-approximation truthful mechanism. These are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain. Our constructions are novel in two respects.
First, we do not utilize or rely on explicit price definitions to prove truthfulness; instead we design algorithms that satisfy cycle monotonicity. Cycle monotonicity [23] is a necessary and sufficient condition for truthfulness, and is a generalization of value monotonicity for multidimensional domains. However, whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in single-dimensional domains, ours is the first work that leverages cycle monotonicity in the multidimensional setting. Second, our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem, and then converting it into a truthful-in-expectation mechanism. This builds upon a technique of [16], and shows the usefulness of fractional mechanisms in truthful mechanism design.","lvl-1":"Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity Ron Lavi Industrial Engineering and Management The Technion - Israel Institute of Technology ronlavi@ie.technion.ac.il Chaitanya Swamy Combinatorics and Optimization University of Waterloo cswamy@math.uwaterloo.ca ABSTRACT We consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design, where the machines are the strategic players.\nThis is a multidimensional scheduling domain, and the only known positive results for makespan minimization in such a domain are O(m)-approximation truthful mechanisms [22, 20].\nWe study a well-motivated special case of this problem, where the processing time of a job on each machine may either be low or high, and the low and high values are public and job-dependent.\nThis preserves the multidimensionality of the domain, and generalizes the restricted-machines (i.e., {pj, \u221e}) setting in scheduling.\nWe give a general technique to convert any c-approximation algorithm to a 3c-approximation truthful-in-expectation mechanism.\nThis is one of the
few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion.\nWhen the low and high values are the same for all jobs, we devise a deterministic 2-approximation truthful mechanism.\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain.\nOur constructions are novel in two respects.\nFirst, we do not utilize or rely on explicit price definitions to prove truthfulness; instead we design algorithms that satisfy cycle monotonicity.\nCycle monotonicity [23] is a necessary and sufficient condition for truthfulness, and is a generalization of value monotonicity for multidimensional domains.\nHowever, whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in single-dimensional domains, ours is the first work that leverages cycle monotonicity in the multidimensional setting.\nSecond, our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem, and then converting it into a truthful-in-expectation mechanism.\nThis builds upon a technique of [16], and shows the usefulness of fractional mechanisms in truthful mechanism design.\nCategories and Subject Descriptors F.2 [Analysis of Algorithms and Problem Complexity]; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION Mechanism design studies algorithmic constructions under the presence of strategic players who hold the inputs to the algorithm.\nAlgorithmic mechanism design has focused mainly on settings where the social planner or designer wishes to maximize the social welfare (or equivalently, minimize social cost), or on auction settings where revenue maximization is the main goal.\nAlternative optimization goals, such as those that incorporate fairness criteria (which have been investigated algorithmically
and in social choice theory), have received very little or no attention.\nIn this paper, we consider such an alternative goal in the context of machine scheduling, namely, makespan minimization.\nThere are n jobs or tasks that need to be assigned to m machines, where each job has to be assigned to exactly one machine.\nAssigning a job j to a machine i incurs a load (cost) of pij \u2265 0 on machine i, and the load of a machine is the sum of the loads incurred due to the jobs assigned to it; the goal is to schedule the jobs so as to minimize the maximum load of a machine, which is termed the makespan of the schedule.\nMakespan minimization is a common objective in scheduling environments, and has been well studied algorithmically in both the Computer Science and Operations Research communities (see, e.g., the survey [12]).\nFollowing the work of Nisan and Ronen [22], we consider each machine to be a strategic player or agent who privately knows its own processing time for each job, and may misrepresent these values in order to decrease its load (which is its incurred cost).\nHence, we approach the problem via mechanism design: the social designer, who holds the set of jobs to be assigned, needs to specify, in addition to a schedule, suitable payments to the players in order to incentivize them to reveal their true processing times.\nSuch a mechanism is called a truthful mechanism.\nThe makespan-minimization objective is quite different from the classic goal of social-welfare maximization, where one wants to maximize the total welfare (or minimize the total cost) of all players.\nInstead, it corresponds to maximizing the minimum welfare and the notion of max-min fairness, and appears to be a much harder problem from the viewpoint of mechanism design.\nIn particular, the celebrated VCG [26, 9, 10] family of mechanisms does not apply here, and we need to devise new techniques.\nThe possibility of constructing a truthful mechanism for makespan minimization is
strongly related to assumptions on the players' processing times, in particular, the dimensionality of the domain.\nNisan and Ronen considered the setting of unrelated machines where the pij values may be arbitrary.\nThis is a multidimensional domain, since a player's private value is its entire vector of processing times (pij)j. Very few positive results are known for multidimensional domains in general, and the only positive results known for multidimensional scheduling are O(m)-approximation truthful mechanisms [22, 20].\nWe emphasize that regardless of computational considerations, even the existence of a truthful mechanism with a significantly better (than m) approximation ratio is not known for any such scheduling domain.\nOn the negative side, [22] showed that no truthful deterministic mechanism can achieve approximation ratio better than 2, and strengthened this lower bound to m for two specific classes of deterministic mechanisms.\nRecently, [20] extended this lower bound to randomized mechanisms, and [8] improved the deterministic lower bound.\nIn stark contrast with the above state of affairs, much stronger (and many more) positive results are known for a special case of the unrelated machines problem, namely, the setting of related machines.\nHere, we have pij = pj\/si for every i, j, where pj is public knowledge, and the speed si is the only private parameter of machine i.\nThis assumption makes the domain of players' types single-dimensional.\nTruthfulness in such domains is equivalent to a convenient value-monotonicity condition [21, 3], which appears to make it significantly easier to design truthful mechanisms in such domains.\nArcher and Tardos [3] first considered the related machines setting and gave a randomized 3-approximation truthful-in-expectation mechanism.\nThe gap between the single-dimensional and multidimensional domains is perhaps best exemplified by the fact that [3] showed that there exists a truthful mechanism that always
outputs an optimal schedule.\n(Recall that in the multidimensional unrelated machines setting, it is impossible to obtain a truthful mechanism with approximation ratio better than 2.)\nVarious follow-up results [2, 4, 1, 13] have strengthened the notion of truthfulness and\/or improved the approximation ratio.\nSuch difficulties in moving from the single-dimensional to the multidimensional setting also arise in other mechanism design settings (e.g., combinatorial auctions).\nThus, in addition to the specific importance of scheduling in strategic environments, ideas from multidimensional scheduling may also have a bearing in the more general context of truthful mechanism design for multidimensional domains.\nIn this paper, we consider the makespan-minimization problem for a special case of unrelated machines, where the processing time of a job is either low or high on each machine.\nMore precisely, in our setting, pij \u2208 {Lj, Hj} for every i, j, where the Lj, Hj values are publicly known (Lj \u2261 low, Hj \u2261 high).\nWe call this model the job-dependent two-values case.\nThis model generalizes the classic restricted machines setting, where pij \u2208 {Lj, \u221e}, which has been well-studied algorithmically.\nA special case of our model is when Lj = L and Hj = H for all jobs j, which we denote simply as the two-values scheduling model.\nBoth of our domains are multidimensional, since the machines are unrelated: one job may be low on one machine and high on the other, while another job may follow the opposite pattern.\nThus, the private information of each machine is a vector specifying which jobs are low and high on it.\nThey thus retain the core property underlying the hardness of truthful mechanism design for unrelated machines, and by studying these special settings we hope to gain some insights that will be useful for tackling the general problem.\nOur Results and Techniques We present various positive results for our multidimensional scheduling
domains.\nOur first result is a general method to convert any c-approximation algorithm for the job-dependent two-values setting into a 3c-approximation truthful-in-expectation mechanism.\nThis is one of the very few known results that use an approximation algorithm in a black-box fashion to obtain a truthful mechanism for a multidimensional problem.\nOur result implies that there exists a 3-approximation truthful-in-expectation mechanism for the Lj-Hj setting.\nInterestingly, the proof of truthfulness is not based on supplying explicit prices, and our construction does not necessarily yield efficiently-computable prices (but the allocation rule is efficiently computable).\nOur second result applies to the two-values setting (Lj = L, Hj = H), for which we both improve the approximation ratio and strengthen the notion of truthfulness.\nWe obtain a deterministic 2-approximation truthful mechanism (along with prices) for this problem.\nThese are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain.\nComplementing this, we observe that even this seemingly simple setting does not admit truthful mechanisms that return an optimal schedule (unlike in the case of related machines).\nBy exploiting the multidimensionality of the domain, we prove that no truthful deterministic mechanism can obtain an approximation ratio better than 1.14 to the makespan (irrespective of computational considerations).\nThe main technique, and one of the novelties, underlying our constructions and proofs, is that we do not rely on explicit price specifications in order to prove the truthfulness of our mechanisms.\nInstead we exploit certain algorithmic monotonicity conditions that characterize truthfulness to first design an implementable algorithm, i.e., an algorithm for which prices ensuring truthfulness exist, and then find these prices (by further delving into the proof of implementability).\nThis kind of analysis has been the method of
choice in the design of truthful mechanisms for single-dimensional domains, where value-monotonicity yields a convenient characterization enabling one to concentrate on the algorithmic side of the problem (see, e.g., [3, 7, 4, 1, 13]).\nBut for multidimensional domains, almost all positive results have relied on explicit price specifications in order to prove truthfulness (an exception is the work on unknown single-minded players in combinatorial auctions [17, 7]), a fact that yet again shows the gap in our understanding of multidimensional vs. single-dimensional domains.\nOur work is the first to leverage monotonicity conditions for truthful mechanism design in arbitrary domains.\nThe monotonicity condition we use, which is sometimes called cycle monotonicity, was first proposed by Rochet [23] (see also [11]).\nIt is a generalization of value-monotonicity and completely characterizes truthfulness in every domain.\nOur methods and analyses demonstrate the potential benefits of this characterization, and show that cycle monotonicity can be effectively utilized to devise truthful mechanisms for multidimensional domains.\nConsider, for example, our first result showing that any c-approximation algorithm can be exported to a 3c-approximation truthful-in-expectation mechanism.\nAt the level of generality of an arbitrary approximation algorithm, it seems unlikely that one would be able to come up with prices to prove truthfulness of the constructed mechanism.\nBut, cycle monotonicity does allow us to prove such a statement.\nIn fact, some such condition based only on the underlying algorithm (and not on the prices) seems necessary to prove such a general statement.\nThe method for converting approximation algorithms into truthful mechanisms involves another novel idea.\nOur randomized mechanism is obtained by first constructing a truthful mechanism that returns a fractional schedule.\nMoving to a fractional domain allows us to plug truthfulness into the
approximation algorithm in a rather simple fashion, while losing a factor of 2 in the approximation ratio.\nWe then use a suitable randomized rounding procedure to convert the fractional assignment into a random integral assignment.\nFor this, we use a recent rounding procedure of Kumar et al. [14] that is tailored for unrelated-machine scheduling.\nThis preserves truthfulness, but we lose another additive factor equal to the approximation ratio.\nOur construction uses and extends some observations of Lavi and Swamy [16], and further demonstrates the benefits of fractional mechanisms in truthful mechanism design.\nRelated Work Nisan and Ronen [22] first considered the makespan-minimization problem for unrelated machines.\nThey gave an m-approximation positive result and proved various lower bounds.\nRecently, Mu'alem and Schapira [20] proved a lower bound of 2 on the approximation ratio achievable by truthful-in-expectation mechanisms, and Christodoulou, Koutsoupias, and Vidali [8] proved a (1 + \u221a2) lower bound for deterministic truthful mechanisms.\nArcher and Tardos [3] first considered the related-machines problem and gave a 3-approximation truthful-in-expectation mechanism.\nThis has been improved in [2, 4, 1, 13] to: a 2-approximation randomized mechanism [2]; an FPTAS for any fixed number of machines given by Andelman, Azar and Sorani [1], and a 3-approximation deterministic mechanism by Kov\u00e1cs [13].\nThe algorithmic problem (i.e., without requiring truthfulness) of makespan-minimization on unrelated machines is well understood and various 2-approximation algorithms are known.\nLenstra, Shmoys and Tardos [18] gave the first such algorithm.\nShmoys and Tardos [25] later gave a 2-approximation algorithm for the generalized assignment problem, a generalization where there is a cost cij for assigning a job j to a machine i, and the goal is to minimize the cost subject to a bound on the makespan.\nRecently, Kumar, Marathe, Parthasarathy, and Srinivasan [14]
gave a randomized rounding algorithm that yields the same bounds.\nWe use their procedure in our randomized mechanism.\nThe characterization of truthfulness for arbitrary domains in terms of cycle monotonicity seems to have been first observed by Rochet [23] (see also Gui et al. [11]).\nThis generalizes the value-monotonicity condition for single-dimensional domains, which was given by Myerson [21] and rediscovered by [3].\nAs mentioned earlier, this condition has been exploited numerous times to obtain truthful mechanisms for single-dimensional domains [3, 7, 4, 1, 13].\nFor convex domains (i.e., each player's set of private values is convex), it is known that cycle monotonicity is implied by a simpler condition, called weak monotonicity [15, 6, 24].\nBut even this simpler condition has not found much application in truthful mechanism design for multidimensional problems.\nObjectives other than social-welfare maximization and revenue maximization have received very little attention in mechanism design.\nIn the context of combinatorial auctions, the problems of maximizing the minimum value received by a player, and computing an envy-minimizing allocation have been studied briefly.\nLavi, Mu'alem, and Nisan [15] showed that the former objective cannot be implemented truthfully; Bezakova and Dani [5] gave a 0.5-approximation mechanism for two players with additive valuations.\nLipton et al.
[19] showed that the latter objective cannot be implemented truthfully.\nThese lower bounds were strengthened in [20].\n2.\nPRELIMINARIES 2.1 The scheduling domain In our scheduling problem, we are given n jobs and m machines, and each job must be assigned to exactly one machine.\nIn the unrelated-machines setting, each machine i is characterized by a vector of processing times (pij)j, where pij \u2208 R\u22650 \u222a {\u221e} denotes i's processing time for job j, with the value \u221e specifying that i cannot process j.\nWe consider two special cases of this problem: 1.\nThe job-dependent two-values case, where pij \u2208 {Lj, Hj} for every i, j, with Lj \u2264 Hj, and the values Lj, Hj are known.\nThis generalizes the classic scheduling model of restricted machines, where Hj = \u221e.\n2.\nThe two-values case, which is a special case of the above where Lj = L and Hj = H for all jobs j, i.e., pij \u2208 {L, H} for every i, j.\nWe say that a job j is low on machine i if pij = Lj, and high if pij = Hj.\nWe will use the terms schedule and assignment interchangeably.\nWe represent a deterministic schedule by a vector x = (xij)i,j, where xij is 1 if job j is assigned to machine i; thus we have xij \u2208 {0, 1} for every i, j, and \u2211_i xij = 1 for every job j.\nWe will also consider randomized algorithms and algorithms that return a fractional assignment.\nIn both these settings, we will again specify an assignment by a vector x = (xij)i,j with \u2211_i xij = 1, but now xij \u2208 [0, 1] for every i, j.
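The assignment vectors above lend themselves to a direct computation. A minimal sketch (our own illustration, not code from the paper) of machine loads l_i = \u2211_j xij pij and their maximum, the makespan defined next, for a given assignment:

```python
# Illustrative sketch (not from the paper): machine loads and makespan for an
# assignment x, where x[i][j] is the (possibly fractional) share of job j on
# machine i and p[i][j] is machine i's processing time for job j.

def loads(x, p):
    # l_i = sum_j x_ij * p_ij
    return [sum(x[i][j] * p[i][j] for j in range(len(p[i]))) for i in range(len(p))]

def makespan(x, p):
    # the makespan is the maximum load over all machines
    return max(loads(x, p))

# Two machines, three jobs; jobs 0 and 1 go to machine 0, job 2 to machine 1.
p = [[1, 2, 4], [3, 1, 1]]
x = [[1, 1, 0], [0, 0, 1]]
print(makespan(x, p))  # machine 0 carries load 1 + 2 = 3, machine 1 carries 1
```

The same functions accept fractional x (entries in [0, 1]), in which case `loads` returns expected loads under the convex-combination reading of x.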
For a randomized algorithm, xij is simply the probability that j is assigned to i (thus, x is a convex combination of integer assignments).\nWe denote the load of machine i (under a given assignment) by li = \u2211_j xijpij, and the makespan of a schedule is defined as the maximum load on any machine, i.e., max_i li.\nThe goal in the makespan-minimization problem is to assign the jobs to the machines so as to minimize the makespan of the schedule.\n2.2 Mechanism design We consider the makespan-minimization problem in the above scheduling domains in the context of mechanism design.\nMechanism design studies strategic settings where the social designer needs to ensure the cooperation of the different entities involved in the algorithmic procedure.\nFollowing the work of Nisan and Ronen [22], we consider the machines to be the strategic players or agents.\nThe social designer holds the set of jobs that need to be assigned, but does not know the (true) processing times of these jobs on the different machines.\nEach machine is a selfish entity that privately knows its own processing time for each job.\nProcessing a job on a machine incurs a cost to the machine equal to the true processing time of the job on the machine, and a machine may choose to misrepresent its vector of processing times, which are private, in order to decrease its cost.\nWe consider direct-revelation mechanisms: each machine reports its (possibly false) vector of processing times, the mechanism then computes a schedule and hands out payments to the players (i.e., machines) to compensate them for the cost they incur in processing their assigned jobs.\nA (direct-revelation) mechanism thus consists of a tuple (x, P): x specifies the schedule, and P = {Pi} specifies the payments handed out to the machines, where both x and the Pis are functions of the reported processing times p = (pij)i,j.\nThe mechanism's goal is to compute a schedule that has near-optimal makespan with respect to the true processing times; a machine
i is however only interested in maximizing its own utility, Pi \u2212 li, where li is its load under the output assignment, and may declare false processing times if this could increase its utility.\nThe mechanism must therefore incentivize the machines\/players to truthfully reveal their processing times via the payments.\nThis is made precise using the notion of dominant-strategy truthfulness.\nDefinition 2.1 (Truthfulness) A scheduling mechanism is truthful if, for every machine i, every vector of processing times of the other machines, p\u2212i, every true processing-time vector p^1_i and any other vector p^2_i of machine i, we have: P^1_i \u2212 \u2211_j x^1_ij p^1_ij \u2265 P^2_i \u2212 \u2211_j x^2_ij p^1_ij, (1) where (x^1, P^1) and (x^2, P^2) are respectively the schedule and payments when the other machines declare p\u2212i and machine i declares p^1_i and p^2_i, i.e., x^1 = x(p^1_i, p\u2212i), P^1_i = Pi(p^1_i, p\u2212i) and x^2 = x(p^2_i, p\u2212i), P^2_i = Pi(p^2_i, p\u2212i).\nTo put it in words, in a truthful mechanism, no machine can improve its utility by declaring a false processing time, no matter what the other machines declare.\nWe will also consider fractional mechanisms that return a fractional assignment, and randomized mechanisms that are allowed to toss coins and where the assignment and the payments may be random variables.\nThe notion of truthfulness for a fractional mechanism is the same as in Definition 2.1, where x^1, x^2 are now fractional assignments.\nFor a randomized mechanism, we will consider the notion of truthfulness in expectation [3], which means that a machine (player) maximizes her expected utility by declaring her true processing-time vector.\nInequality (1) also defines truthfulness-in-expectation for a randomized mechanism, where P^1_i, P^2_i now denote the expected payments made to player i, x^1, x^2 are the fractional assignments denoting the randomized algorithm's schedule (i.e., x^k_ij is the probability that j is assigned to i in the schedule
output for (p^k_i, p\u2212i)).\nFor our two scheduling domains, the informational assumption is that the values Lj, Hj are publicly known.\nThe private information of a machine is which jobs have value Lj (or L) and which ones have value Hj (or H) on it.\nWe emphasize that both of our domains are multidimensional, since each machine i needs to specify a vector saying which jobs are low and high on it.\n3.\nCYCLE MONOTONICITY Although truthfulness is defined in terms of payments, it turns out that truthfulness actually boils down to a certain algorithmic condition of monotonicity.\nThis seems to have been first observed for multidimensional domains by Rochet [23] in 1987, and has been used successfully in algorithmic mechanism design several times, but for single-dimensional domains.\nHowever for multidimensional domains, the monotonicity condition is more involved and there has been no success in employing it in the design of truthful mechanisms.\nMost positive results for multidimensional domains have relied on explicit price specifications in order to prove truthfulness.\nOne of the main contributions of this paper is to demonstrate that the monotonicity condition for multidimensional settings, which is sometimes called cycle monotonicity, can indeed be effectively utilized to devise truthful mechanisms.\nWe include a brief exposition on it for completeness.\nThe exposition here is largely based on [11].\nCycle monotonicity is best described in the abstract social choice setting: there is a finite set A of alternatives, there are m players, and each player has a private type (valuation function) vi : A \u2192 R, where vi(a) should be interpreted as i's value for alternative a.\nIn the scheduling domain, A represents all the possible assignments of jobs to machines, and vi(a) is the negative of i's load in the schedule a.
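The truthfulness condition of Definition 2.1 can be exercised numerically on small instances. The sketch below is our own toy illustration, not a mechanism from the paper: on the two-values domain {L, H} it pairs a hypothetical per-job greedy allocation rule with second-price-style payments, and brute-forces inequality (1) over all possible declarations:

```python
# Hypothetical illustration (not the paper's mechanism): brute-force check of
# Definition 2.1 on the two-values domain {L, H}. Each job goes to the machine
# reporting the lower time (ties to machine 0); a winning machine is paid the
# competing report for that job, in second-price style.
from itertools import product

L, H = 1.0, 2.0
M, N = 2, 2   # machines, jobs

def alloc(p):
    x = [[0] * N for _ in range(M)]
    for j in range(N):
        winner = min(range(M), key=lambda i: (p[i][j], i))
        x[winner][j] = 1
    return x

def payments(p):
    x = alloc(p)
    return [sum(min(p[k][j] for k in range(M) if k != i)
                for j in range(N) if x[i][j] == 1) for i in range(M)]

def utility(i, true_pi, declared):
    # utility = payment minus true cost of the assigned jobs (inequality (1))
    x, P = alloc(declared), payments(declared)
    return P[i] - sum(x[i][j] * true_pi[j] for j in range(N))

def profile(i, mine, other):
    return [list(mine), list(other)] if i == 0 else [list(other), list(mine)]

types = list(product([L, H], repeat=N))
truthful = all(
    utility(i, true_t, profile(i, true_t, other)) + 1e-9
    >= utility(i, true_t, profile(i, lie, other))
    for i in range(M) for other in types for true_t in types for lie in types
)
print(truthful)  # True: no machine gains by misreporting under this rule
```

Because each job here is an independent second-price procurement auction and utilities are additive, the check passes; replacing `payments` with, say, all-zero payments makes the same loop report a violation.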
Let Vi denote the set of all possible types of player i.\nA mechanism is a tuple (f, {Pi}) where f : V1 \u00d7 \u00b7\u00b7\u00b7 \u00d7 Vm \u2192 A is the algorithm for choosing the alternative, and Pi : V1 \u00d7 \u00b7\u00b7\u00b7 \u00d7 Vm \u2192 R is the price charged to player i (in the scheduling setting, the mechanism pays the players, which corresponds to negative prices).\nThe mechanism is truthful if for every i, every v\u2212i \u2208 V\u2212i = \u220f_{i'\u2260i} Vi', and any vi, vi' \u2208 Vi we have vi(a) \u2212 Pi(vi, v\u2212i) \u2265 vi(b) \u2212 Pi(vi', v\u2212i), where a = f(vi, v\u2212i) and b = f(vi', v\u2212i).\nA basic question that arises is: given an algorithm f : V1 \u00d7 \u00b7\u00b7\u00b7 \u00d7 Vm \u2192 A, do there exist prices that will make the resulting mechanism truthful?\nIt is well known (see e.g. [15]) that the price Pi can only depend on the alternative chosen and the others' declarations, that is, we may write Pi : V\u2212i \u00d7 A \u2192 R. Thus, truthfulness implies that for every i, every v\u2212i \u2208 V\u2212i, and any vi, vi' \u2208 Vi with f(vi, v\u2212i) = a and f(vi', v\u2212i) = b, we have vi(a) \u2212 Pi(a, v\u2212i) \u2265 vi(b) \u2212 Pi(b, v\u2212i).\nNow fix a player i, and fix the declarations v\u2212i of the others.\nWe seek an assignment to the variables {Pa}a\u2208A such that vi(a) \u2212 vi(b) \u2265 Pa \u2212 Pb for every a, b \u2208 A and vi \u2208 Vi with f(vi, v\u2212i) = a. (Strictly speaking, we should use A' = f(Vi, v\u2212i) instead of A here.)\nDefine \u03b4a,b := inf{vi(a) \u2212 vi(b) : vi \u2208 Vi, f(vi, v\u2212i) = a}.\nWe can now rephrase the above price-assignment problem: we seek an assignment to the variables {Pa}a\u2208A such that Pa \u2212 Pb \u2264 \u03b4a,b \u2200 a, b \u2208 A. (2)\nThis is easily solved by looking at the allocation graph and applying a standard basic result of graph theory.\nDefinition 3.1 (Gui et al.
[11]) The allocation graph of f is a directed weighted graph G = (A, E) where E = A \u00d7 A and the weight of an edge b \u2192 a (for any a, b \u2208 A) is \u03b4a,b.\nTheorem 3.2 There exists a feasible assignment to (2) iff the allocation graph has no negative-length cycles.\nFurthermore, if all cycles are non-negative, a feasible assignment is obtained as follows: fix an arbitrary node a\u2217 \u2208 A and set Pa to be the length of the shortest path from a\u2217 to a.\nThis leads to the following definition, which is another way of phrasing the condition that the allocation graph have no negative cycles.\nDefinition 3.3 (Cycle monotonicity) A social choice function f satisfies cycle monotonicity if for every player i, every v\u2212i \u2208 V\u2212i, every integer K, and every v^1_i, ..., v^K_i \u2208 Vi, \u2211_{k=1..K} [ v^k_i(a_k) \u2212 v^k_i(a_{k+1}) ] \u2265 0, where a_k = f(v^k_i, v\u2212i) for 1 \u2264 k \u2264 K, and a_{K+1} = a_1.\nCorollary 3.4 There exist prices P such that the mechanism (f, P) is truthful iff f satisfies cycle monotonicity.1 We now consider our specific scheduling domain.\nFix a player i, p\u2212i, and any p^1_i, ...
, p^K_i.\nLet x(p^k_i, p\u2212i) = x^k for 1 \u2264 k \u2264 K, and let x^{K+1} = x^1, p^{K+1} = p^1.\nx^k could be a {0, 1}-assignment or a fractional assignment.\nWe have v^k_i(x^k) = \u2212\u2211_j x^k_ij p^k_ij, so cycle monotonicity translates to \u2211_{k=1..K} [ \u2212\u2211_j x^k_ij p^k_ij + \u2211_j x^{k+1}_ij p^k_ij ] \u2265 0.\nRearranging, we get \u2211_{k=1..K} \u2211_j x^{k+1}_ij (p^k_ij \u2212 p^{k+1}_ij) \u2265 0. (3)\nThus (3) reduces our mechanism design problem to a concrete algorithmic problem.\nFor most of this paper, we will consequently ignore any strategic considerations and focus on designing an approximation algorithm for minimizing makespan that satisfies (3).\n4.\nA GENERAL TECHNIQUE TO OBTAIN RANDOMIZED MECHANISMS In this section, we consider the case of job-dependent Lj, Hj values (with Lj \u2264 Hj), which generalizes the classical restricted-machines model (where Hj = \u221e).\nWe show the power of randomization by providing a general technique that converts any c-approximation algorithm into a 3c-approximation, truthful-in-expectation mechanism.\nThis is one of the few results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms when the algorithm is given as a black box.\nOur construction and proof are simple, and based on two ideas.\nFirst, as outlined above, we prove truthfulness using cycle monotonicity.\nIt seems unlikely that for an arbitrary approximation algorithm given only as a black box, one would be able to come up with payments in order to prove truthfulness; but cycle monotonicity allows us to prove precisely this.\nSecond, we obtain our randomized mechanism by (a) first moving to a fractional domain, and constructing a fractional truthful mechanism that is allowed to return fractional assignments; then (b) using a rounding procedure to express the fractional schedule as a convex combination of integer schedules.\nThis builds upon a theme introduced by Lavi and Swamy [16], namely that of using fractional
mechanisms to obtain truthful-in-expectation mechanisms.\n1 It is not clear if Theorem 3.2, and hence this statement, hold if A is not finite.\nWe should point out however that one cannot simply plug in the results of [16].\nTheir results hold for social-welfare-maximization problems and rely on using VCG to obtain a fractional truthful mechanism.\nVCG however does not apply to makespan minimization, and in our case even the existence of a near-optimal fractional truthful mechanism is not known.\nWe use the following result adapted from [16].\nLemma 4.1 (Lavi and Swamy [16]) Let M = (x, P) be a fractional truthful mechanism.\nLet A be a randomized rounding algorithm that, given a fractional assignment x, outputs a random assignment X such that E[Xij] = xij for all i, j. Then there exist payments P' such that the mechanism M' = (A, P') is truthful in expectation.\nFurthermore, if M is individually rational then M' is individually rational for every realization of coin tosses.\nLet OPT(p) denote the optimal makespan (over integer schedules) for instance p. As our first step, we take a c-approximation algorithm and convert it to a 2c-approximation fractional truthful mechanism.\nThis conversion works even when the approximation algorithm returns only a fractional schedule (satisfying certain properties) of makespan at most c \u00b7 OPT(p) for every instance p.\nWe prove truthfulness by showing that the fractional algorithm satisfies cycle monotonicity (3).\nNotice that the alternative-set of our fractional mechanism is finite (although the set of all fractional assignments is infinite): its cardinality is at most that of the input domain, which is at most 2^mn in the two-value case.\nThus, we can apply Corollary 3.4 here.\nTo convert this fractional truthful mechanism into a randomized truthful mechanism we need a randomized rounding procedure satisfying the requirements of Lemma 4.1.\nFortunately, such a procedure is already provided by Kumar, Marathe,
Parthasarathy, and Srinivasan [14].

Lemma 4.2 (Kumar et al. [14]) Given a fractional assignment $x$ and a processing-time vector $p$, there exists a randomized rounding procedure that yields a (random) assignment $X$ such that:
1. for any $i, j$: $E[X_{ij}] = x_{ij}$;
2. for any $i$: $\sum_j X_{ij} p_{ij} < \sum_j x_{ij} p_{ij} + \max_{\{j : x_{ij} \in (0,1)\}} p_{ij}$ with probability 1.

Property 1 will be used to obtain truthfulness in expectation, and property 2 will allow us to prove an approximation guarantee. We first show that any algorithm that returns a fractional assignment having certain properties satisfies cycle monotonicity.

Lemma 4.3 Let $A$ be an algorithm that for any input $p$ outputs a (fractional) assignment $x$ such that, if $p_{ij} = H_j$ then $x_{ij} \le 1/m$, and if $p_{ij} = L_j$ then $x_{ij} \ge 1/m$. Then $A$ satisfies cycle monotonicity.

Proof. Fix a player $i$ and the vector of processing times of the other players $p_{-i}$. We need to prove (3), that is, $\sum_{k=1}^{K} \sum_j x^{k+1}_{ij} (p^k_{ij} - p^{k+1}_{ij}) \ge 0$ for every $p^1_i, \ldots, p^K_i$, where index $k = K+1$ is taken to be $k = 1$. We will show that for every job $j$, $\sum_{k=1}^{K} x^{k+1}_{ij} (p^k_{ij} - p^{k+1}_{ij}) \ge 0$. If $p^k_{ij}$ is the same for all $k$ (either always $L_j$ or always $H_j$), then the above inequality clearly holds. Otherwise we can divide the indices $1, \ldots, K$ into maximal segments, where a maximal segment is a maximal set of consecutive indices $k', k'+1, \ldots$
, $k''-1, k''$ (where $K+1 \equiv 1$) such that $p^{k'}_{ij} = H_j \ge p^{k'+1}_{ij} \ge \cdots \ge p^{k''}_{ij} = L_j$. Such segments exist because there must be some $k'$ with $p^{k'}_{ij} = H_j > p^{k'-1}_{ij} = L_j$. We take this $k'$ and keep including indices in the segment until we reach a $k$ such that $p^k_{ij} = L_j$ and $p^{k+1}_{ij} = H_j$; we set $k'' = k$, and then start a new maximal segment with index $k+1$. Continuing recursively, all indices are included in some maximal segment. We will show that for every such maximal segment $k', k'+1, \ldots, k''$,
$$\sum_{k'-1 \le k < k''} x^{k+1}_{ij} \bigl( p^k_{ij} - p^{k+1}_{ij} \bigr) \ge 0,$$
which proves the claim, since each index of the cyclic sum is counted in exactly one segment sum. Within a segment, the only nonzero terms are: the entry term $k = k'-1$, where $p^{k'-1}_{ij} = L_j$ and $p^{k'}_{ij} = H_j$, so that $x^{k'}_{ij} \le 1/m$ and the term is at least $-(H_j - L_j)/m$; and the unique index $k$ with $p^k_{ij} = H_j$ and $p^{k+1}_{ij} = L_j$, so that $x^{k+1}_{ij} \ge 1/m$ and the term is at least $(H_j - L_j)/m$. Hence each segment sum is nonnegative.

Algorithm 1 Let $A$ be a $c$-approximation algorithm that, on input $p$, returns a (possibly fractional) assignment $x$ such that $x_{ij} > 0$ implies that $p_{ij} \le T$, where $T$ is the makespan of $x$. (In particular, note that any algorithm that returns an integral assignment has these properties.) Our algorithm, which we call $A'$, returns the following assignment $x^F$. Initialize $x^F_{ij} = 0$ for all $i, j$. For every $i, j$:
1. if $p_{ij} = H_j$, set $x^F_{ij} = \sum_{i' : p_{i'j} = H_j} x_{i'j}/m$;
2. if $p_{ij} = L_j$, set $x^F_{ij} = x_{ij} + \sum_{i' \ne i : p_{i'j} = L_j} (x_{i'j} - x_{ij})/m + \sum_{i' : p_{i'j} = H_j} x_{i'j}/m$.

Theorem 4.4 Suppose algorithm $A$ satisfies the conditions in Algorithm 1 and returns a makespan of at most $c \cdot \mathrm{OPT}(p)$ for every $p$.
Then, the algorithm $A'$ constructed above is a $2c$-approximation, cycle-monotone fractional algorithm. Moreover, if $x^F_{ij} > 0$ on input $p$, then $p_{ij} \le c \cdot \mathrm{OPT}(p)$.

Proof. First, note that $x^F$ is a valid assignment: for every job $j$,
$$\sum_i x^F_{ij} = \sum_i x_{ij} + \sum_{i,\, i' \ne i :\, p_{ij} = p_{i'j} = L_j} (x_{i'j} - x_{ij})/m = \sum_i x_{ij} = 1.$$
We also have that if $p_{ij} = H_j$ then $x^F_{ij} = \sum_{i' : p_{i'j} = H_j} x_{i'j}/m \le 1/m$. If $p_{ij} = L_j$, then $x^F_{ij} = x_{ij}(1 - \ell/m) + \sum_{i' \ne i} x_{i'j}/m$, where $\ell = |\{i' \ne i : p_{i'j} = L_j\}| \le m-1$; so $x^F_{ij} \ge \sum_{i'} x_{i'j}/m \ge 1/m$. Thus, by Lemma 4.3, $A'$ satisfies cycle monotonicity. The total load on any machine $i$ under $x^F$ is at most
$$\sum_{j : p_{ij} = H_j} \ \sum_{i' : p_{i'j} = H_j} H_j \cdot \frac{x_{i'j}}{m} \ + \ \sum_{j : p_{ij} = L_j} L_j \Bigl( x_{ij} + \sum_{i' \ne i} \frac{x_{i'j}}{m} \Bigr),$$
which is at most $\sum_j p_{ij} x_{ij} + \sum_{i' \ne i} \sum_j p_{i'j} x_{i'j}/m \le 2c \cdot \mathrm{OPT}(p)$. Finally, if $x^F_{ij} > 0$ and $p_{ij} = L_j$, then $p_{ij} \le \mathrm{OPT}(p)$. If $p_{ij} = H_j$, then for some $i'$ (possibly $i$) with $p_{i'j} = H_j$ we have $x_{i'j} > 0$, so by assumption, $p_{i'j} = H_j = p_{ij} \le c \cdot \mathrm{OPT}(p)$.

Theorem 4.4 combined with Lemmas 4.1 and 4.2 gives a $3c$-approximation, truthful-in-expectation mechanism. The computation of payments will depend on the actual approximation algorithm used. Section 3 does, however, give an explicit procedure to compute payments ensuring truthfulness, though perhaps not in polynomial time.

Theorem 4.5 The procedure in Algorithm 1 converts any $c$-approximation fractional algorithm into a $3c$-approximation, truthful-in-expectation mechanism.

Taking $A$ in Algorithm 1 to be the algorithm that returns an LP-optimum assignment satisfying the required conditions (see [18, 25]), we obtain a 3-approximation mechanism.

Corollary 4.6 There is a truthful-in-expectation mechanism with approximation ratio 3 for the $L_j$-$H_j$ setting.

5. A DETERMINISTIC MECHANISM FOR THE TWO-VALUES CASE

We now present a deterministic 2-approximation truthful mechanism for the case where $p_{ij} \in \{L, H\}$ for all $i, j$. In the sequel, we will often say that $j$ is
assigned to a low machine to denote that $j$ is assigned to a machine $i$ where $p_{ij} = L$. We will call a job $j$ a low job of machine $i$ if $p_{ij} = L$; the low-load of $i$ is the load on $i$ due to its low jobs, i.e., $\sum_{j : p_{ij} = L} x_{ij} p_{ij}$. As in Section 4, our goal is to obtain an approximation algorithm that satisfies cycle monotonicity. We first obtain a simplification of condition (3) for our two-values $\{L, H\}$ scheduling domain (Proposition 5.1) that will be convenient to work with. We describe our algorithm in Section 5.1. In Section 5.2, we bound its approximation guarantee and prove that it satisfies cycle monotonicity. In Section 5.3, we compute explicit payments giving a truthful mechanism. Finally, in Section 5.4 we show that no deterministic mechanism can achieve the optimum makespan.

For type indices $k, \ell$ of player $i$, define
$$n^{k,\ell}_H = \bigl| \{ j : x^k_{ij} = 1,\ p^k_{ij} = L,\ p^\ell_{ij} = H \} \bigr| \qquad (4)$$
$$n^{k,\ell}_L = \bigl| \{ j : x^k_{ij} = 1,\ p^k_{ij} = H,\ p^\ell_{ij} = L \} \bigr| \qquad (5)$$
Then $\sum_j x^{k+1}_{ij} (p^k_{ij} - p^{k+1}_{ij}) = (n^{k+1,k}_H - n^{k+1,k}_L)(H - L)$. Plugging this into (3) and dividing by $(H - L)$, we get the following.

Proposition 5.1 Cycle monotonicity in the two-values scheduling domain is equivalent to the condition that, for every player $i$, every $p_{-i}$, every integer $K$, and every $p^1_i, \ldots$
, $p^K_i$,
$$\sum_{k=1}^{K} \bigl( n^{k+1,k}_H - n^{k+1,k}_L \bigr) \ge 0. \qquad (6)$$

5.1 A cycle-monotone approximation algorithm

We now describe an algorithm that satisfies condition (6) and achieves a 2-approximation. We will assume that $L$ and $H$ are integers, which is without loss of generality. A core component of our algorithm will be a procedure that takes an integer load threshold $T$ and computes an integer partial assignment $x$ of jobs to machines such that (a) a job is only assigned to a low machine; (b) the load on any machine is at most $T$; and (c) the number of jobs assigned is maximized. Such an assignment can be computed by solving a max-flow problem: we construct a directed bipartite graph with a node for every job $j$ and every machine $i$, and an edge $(j, i)$ of infinite capacity if $p_{ij} = L$. We also add a source node $s$ with edges $(s, j)$ having capacity 1, and a sink node $t$ with edges $(i, t)$ having capacity $\lfloor T/L \rfloor$. Clearly, any integer flow in this network corresponds to a valid integer partial assignment $x$ of makespan at most $T$, where $x_{ij} = 1$ iff there is a flow of 1 on the edge from $j$ to $i$. We will therefore use the terms assignment and flow interchangeably. Moreover, there is always an integral max-flow (since all capacities are integers). We will often refer to such a max-flow as the max-flow for $(p, T)$. We need one additional concept before describing the algorithm. There could potentially be many max-flows, and we will be interested in the most balanced ones, which we formally define as follows. Fix some max-flow. Let $n^i_{p,T}$ be the amount of flow on edge $(i, t)$ (equivalently, the number of jobs assigned to $i$ in the corresponding schedule), and let $n_{p,T}$ be the total size of the max-flow, i.e., $n_{p,T} = \sum_i n^i_{p,T}$. For any $T' \le T$, define $n^i_{p,T}|_{T'} = \min(n^i_{p,T}, \lfloor T'/L \rfloor)$; that is, we truncate the flow/assignment on $i$ so that the total load on $i$ is at most $T'$. Define $n_{p,T}|_{T'} = \sum_i n^i_{p,T}|_{T'}$. We define a prefix-maximal flow or assignment for $T$ as
follows.

Definition 5.2 (Prefix-maximal flow) A flow for the above network with threshold $T$ is prefix-maximal if for every integer $T' \le T$, we have $n_{p,T}|_{T'} = n_{p,T'}$.

That is, in a prefix-maximal flow for $(p, T)$, if we truncate the flow at some $T' \le T$, we are left with a max-flow for $(p, T')$. An elementary fact about flows is that if an assignment/flow $x$ is not a maximum flow for $(p, T)$, then there must be an augmenting path $P = (s, j_1, i_1, \ldots, j_K, i_K, t)$ in the residual graph that allows us to increase the size of the flow. The interpretation is that in the current assignment, $j_1$ is unassigned; $x_{i_\ell j_\ell} = 0$, which is denoted by the forward edges $(j_\ell, i_\ell)$; and $x_{i_\ell j_{\ell+1}} = 1$, which is denoted by the reverse edges $(i_\ell, j_{\ell+1})$. Augmenting $x$ using $P$ changes the assignment so that each $j_\ell$ is assigned to $i_\ell$ in the new assignment, which increases the value of the flow by 1. A simple augmenting path does not decrease the load of any machine; thus, one can argue that a prefix-maximal flow for a threshold $T$ always exists: we first compute a max-flow for threshold 1, use simple augmenting paths to augment it to a max-flow for threshold 2, and repeat, each time augmenting the max-flow for the previous threshold $t$ to a max-flow for threshold $t+1$ using simple augmenting paths.

Algorithm 2 Given a vector of processing times $p$, construct an assignment of jobs to machines as follows.

1. Compute $T^*(p) = \min \{ T \ge H,\ T \text{ a multiple of } L : n_{p,T} \cdot L + (n - n_{p,T}) \cdot H \le m \cdot T \}$. Note that $n_{p,T} \cdot L + (n - n_{p,T}) \cdot H - m \cdot T$ is a decreasing function of $T$, so $T^*(p)$ can be computed in polynomial time via binary search.

2. Compute a prefix-maximal flow for threshold $T^*(p)$ and the corresponding partial assignment (i.e., $j$ is assigned to $i$ iff there is 1 unit of flow on edge $(j, i)$).

3. Assign the remaining jobs, i.e., the jobs unassigned in the flow-phase, in a greedy manner as follows. Consider these jobs
in an arbitrary order and assign each job to the machine with the current lowest load (where the load includes the jobs assigned in the flow-phase).

Our algorithm needs to compute a prefix-maximal assignment for the threshold $T^*(p)$. The proof showing the existence of a prefix-maximal flow only yields a pseudo-polynomial time algorithm for computing it. But notice that the max-flow remains the same for any $T \ge n \cdot L$, so a prefix-maximal flow for $n \cdot L$ is also prefix-maximal for any $T \ge n \cdot L$. Thus, we only need to compute a prefix-maximal flow for $T = \min\{T^*(p), n \cdot L\}$. This can be done in polynomial time by using the iterative-augmenting-paths algorithm from the existence proof to compute iteratively the max-flow for the polynomially many multiples of $L$ up to (and including) $T$.

Theorem 5.3 One can efficiently compute payments that, when combined with Algorithm 2, yield a deterministic 2-approximation truthful mechanism for the two-values scheduling domain.

5.2 Analysis

Let $\mathrm{OPT}(p)$ denote the optimal makespan for $p$. We now prove that Algorithm 2 is a 2-approximation algorithm that satisfies cycle monotonicity. This will then allow us to compute payments in Section 5.3 and prove Theorem 5.3.

5.2.1 Proof of approximation

Claim 5.4 If $\mathrm{OPT}(p) < H$, the makespan is at most $\mathrm{OPT}(p)$.

Proof. If $\mathrm{OPT}(p) < H$, it must be that the optimal schedule assigns all jobs to low machines, so $n_{p,\mathrm{OPT}(p)} = n$. Thus, we have $T^*(p) = L \cdot \lceil H/L \rceil$. Furthermore, since we compute a prefix-maximal flow for threshold $T^*(p)$, we have $n_{p,T^*(p)}|_{\mathrm{OPT}(p)} = n_{p,\mathrm{OPT}(p)} = n$, which implies that the load on each machine is at most $\mathrm{OPT}(p)$. So in this case the makespan is at most (and hence exactly) $\mathrm{OPT}(p)$.

Claim 5.5 If $\mathrm{OPT}(p) \ge H$, then $T^*(p) \le L \cdot \lceil \mathrm{OPT}(p)/L \rceil \le \mathrm{OPT}(p) + L$.
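To make the flow computation and the threshold from step 1 of Algorithm 2 concrete, here is a minimal Python sketch (ours, not the paper's; the instance values are hypothetical). It computes $n_{p,T}$ with a simple augmenting-path search over unit job capacities, and finds $T^*(p)$ by scanning multiples of $L$; the monotonicity noted in step 1 would equally permit binary search.

```python
def max_low_assignment(p, L, T):
    """n_{p,T}: maximum number of jobs assignable to machines on which they are
    low, with per-machine load at most T (i.e., at most T // L low jobs each).
    Unit job capacities, so a simple augmenting-path search suffices."""
    m, n = len(p), len(p[0])
    cap = T // L
    assigned = [None] * n                    # job -> machine (or None)
    on_machine = [set() for _ in range(m)]   # machine -> jobs placed on it

    def try_place(j, visited):
        for i in range(m):
            if p[i][j] == L and i not in visited:
                visited.add(i)
                if len(on_machine[i]) < cap:         # spare capacity: place j
                    on_machine[i].add(j); assigned[j] = i
                    return True
                for j2 in list(on_machine[i]):       # else try moving a job off i
                    on_machine[i].discard(j2); assigned[j2] = None
                    if try_place(j2, visited):
                        on_machine[i].add(j); assigned[j] = i
                        return True
                    on_machine[i].add(j2); assigned[j2] = i
        return False

    return sum(try_place(j, set()) for j in range(n))

def T_star(p, L, H):
    """Step 1 of Algorithm 2: the least multiple of L with T >= H satisfying
    n_{p,T} * L + (n - n_{p,T}) * H <= m * T (scanned linearly here)."""
    m, n = len(p), len(p[0])
    T = ((H + L - 1) // L) * L               # smallest multiple of L that is >= H
    while True:
        npT = max_low_assignment(p, L, T)
        if npT * L + (n - npT) * H <= m * T:
            return T
        T += L

# Hypothetical instance: L = 1, H = 3, two machines, four jobs.
p_example = [[1, 1, 3, 3],   # machine 0 is low on jobs 0, 1
             [3, 3, 1, 3]]   # machine 1 is low on job 2; job 3 is high everywhere
print(T_star(p_example, L=1, H=3))   # -> 3
```

On this instance $n_{p,3} = 3$ (jobs 0 and 1 on machine 0, job 2 on machine 1), so $3L + H = 6 \le mT = 6$ and $T^*(p) = 3$, while $\mathrm{OPT}(p) = 4$; this is consistent with the bound $T^*(p) \le L \lceil \mathrm{OPT}(p)/L \rceil = 4$ of Claim 5.5.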
Proof. Let $n_{\mathrm{OPT}(p)}$ be the number of jobs assigned to low machines in an optimum schedule. The total load on all machines is exactly $n_{\mathrm{OPT}(p)} \cdot L + (n - n_{\mathrm{OPT}(p)}) \cdot H$, and is at most $m \cdot \mathrm{OPT}(p)$, since every machine has load at most $\mathrm{OPT}(p)$. So taking $T = L \cdot \lceil \mathrm{OPT}(p)/L \rceil \ge H$, since $n_{p,T} \ge n_{\mathrm{OPT}(p)}$, we have that $n_{p,T} \cdot L + (n - n_{p,T}) \cdot H \le m \cdot T$. Hence $T^*(p)$, the smallest such $T$, is at most $L \cdot \lceil \mathrm{OPT}(p)/L \rceil$.

Claim 5.6 Each job assigned in step 3 of the algorithm is assigned to a high machine.

Proof. Suppose $j$ is assigned to machine $i$ in step 3. If $p_{ij} = L$, then we must have $n^i_{p,T^*(p)} = T^*(p)/L$ (i.e., $i$'s low-load is already $T^*(p)$), otherwise we could have assigned $j$ to $i$ in step 2 to obtain a flow of larger value. So at the point just before $j$ is assigned in step 3, the load of each machine must be at least $T^*(p)$. Hence, the total load after $j$ is assigned is at least $m \cdot T^*(p) + L > m \cdot T^*(p)$. But the total load is also at most $n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H \le m \cdot T^*(p)$, yielding a contradiction.

Lemma 5.7 The above algorithm returns a schedule with makespan at most $\mathrm{OPT}(p) + \max\bigl\{L, H(1 - \frac{1}{m})\bigr\} \le 2 \cdot \mathrm{OPT}(p)$.

Proof. If $\mathrm{OPT}(p) < H$, then by Claim 5.4 we are done. So suppose $\mathrm{OPT}(p) \ge H$.
By Claim 5.5, we know that $T^*(p) \le \mathrm{OPT}(p) + L$. If there are no unassigned jobs after step 2 of the algorithm, then the makespan is at most $T^*(p)$ and we are done. So assume that there are some unassigned jobs after step 2. We will show that the makespan after step 3 is at most $T + H(1 - \frac{1}{m})$, where $T = \min\{T^*(p), \mathrm{OPT}(p)\}$. Suppose the claim is false. Let $i$ be the machine with the maximum load, so $l_i > T + H(1 - \frac{1}{m})$. Let $j$ be the last job assigned to $i$ in step 3, and consider the point just before it is assigned to $i$. So $l_i > T - H/m$ at this point. Also, since $j$ is assigned to $i$, by our greedy rule the load on all the other machines must be at least $l_i$. So the total load after $j$ is assigned is at least $H + m \cdot l_i > m \cdot T$ (since $p_{ij} = H$ by Claim 5.6). Also, for any assignment of jobs to machines in step 3, the total load is at most $n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H$, since there are $n_{p,T^*(p)}$ jobs assigned to low machines. Therefore, we must have $m \cdot T < n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H$. But we will argue that $m \cdot T \ge n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H$, which yields a contradiction. If $T = T^*(p)$, this follows from the definition of $T^*(p)$. If $T = \mathrm{OPT}(p)$, then letting $n_{\mathrm{OPT}(p)}$ denote the number of jobs assigned to low machines in an optimum schedule, we have $n_{p,T^*(p)} \ge n_{\mathrm{OPT}(p)}$. So $n_{p,T^*(p)} \cdot L + (n - n_{p,T^*(p)}) \cdot H \le n_{\mathrm{OPT}(p)} \cdot L + (n - n_{\mathrm{OPT}(p)}) \cdot H$. This is exactly the total load in an optimum schedule, which is at most $m \cdot \mathrm{OPT}(p)$.

5.2.2 Proof of cycle monotonicity

Lemma 5.8 Consider any two instances $p = (p_i, p_{-i})$ and $p' = (p'_i, p_{-i})$ where $p'_i \ge p_i$, i.e., $p'_{ij} \ge p_{ij}$ for all $j$. If $T$ is a threshold such that $n_{p,T} > n_{p',T}$, then every maximum flow $x'$ for $(p', T)$ must assign all jobs $j$ such that $p'_{ij} = L$.
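Before turning to the proof, Lemma 5.8 can be sanity-checked by brute force on a toy instance (a sketch of ours; the values are hypothetical, with machine 0 playing the role of $i$). We enumerate all low-only partial assignments with per-machine load at most $T$ and confirm that every maximum one assigns each job that is low on machine 0 under the raised vector $p'$:

```python
from itertools import product

L, H, T = 1, 3, 2
# p[i][j]: job j's processing time on machine i (2 machines, 4 jobs).
p_lo = [[L, L, L, H],   # machine 0 is low on jobs 0, 1, 2
        [H, H, H, L]]   # machine 1 is low on job 3 only
p_hi = [[L, H, H, H],   # p' >= p on machine 0: jobs 1 and 2 raised to H
        [H, H, H, L]]

def feasible(assign, p):
    """assign[j] is a machine index or None; a job may only sit on a machine
    where it is low, and each machine's load must stay at most T."""
    load = [0, 0]
    for j, i in enumerate(assign):
        if i is not None:
            if p[i][j] != L:
                return False
            load[i] += L
    return max(load) <= T

def max_flows(p):
    """Size of the max-flow for (p, T) and all maximum partial assignments."""
    best, flows = 0, []
    for assign in product([None, 0, 1], repeat=len(p[0])):
        if feasible(assign, p):
            size = sum(a is not None for a in assign)
            if size > best:
                best, flows = size, [assign]
            elif size == best:
                flows.append(assign)
    return best, flows

n_p, _ = max_flows(p_lo)
n_p2, flows2 = max_flows(p_hi)
assert n_p > n_p2                # the lemma's hypothesis: here 3 > 2
low_jobs = [j for j in range(4) if p_hi[0][j] == L]
# Lemma 5.8's conclusion: every maximum flow for (p', T) assigns all of
# machine 0's low jobs under p' (here, job 0).
assert all(f[j] is not None for f in flows2 for j in low_jobs)
```

On this instance the only maximum flow for $(p', T)$ places job 0 on machine 0 and job 3 on machine 1, so the lemma's conclusion holds as expected.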
Proof. Let $G_{p'}$ denote the residual graph for $(p', T)$ and flow $x'$. Suppose by contradiction that there exists a job $j^*$ with $p'_{ij^*} = L$ that is unassigned by $x'$. Since $p'_i \ge p_i$, all edges $(j, i)$ that are present in the network for $(p', T)$ are also present in the network for $(p, T)$. Thus, $x'$ is a valid flow for $(p, T)$. But it is not a max-flow, since $n_{p,T} > n_{p',T}$. So there exists an augmenting path $P$ in the residual graph for $(p, T)$ and flow $x'$. Observe that node $i$ must be included in $P$, otherwise $P$ would also be an augmenting path in the residual graph $G_{p'}$, contradicting the fact that $x'$ is a max-flow. In particular, this implies that there is a path $P' \subset P$ from $i$ to the sink $t$. Let $P' = (i, j_1, i_1, \ldots, j_K, i_K, t)$. All the edges of $P'$ are also present as edges in $G_{p'}$: all reverse edges $(i_\ell, j_{\ell+1})$ are present since such an edge implies that $x'_{i_\ell j_{\ell+1}} = 1$; all forward edges $(j_\ell, i_\ell)$ are present since $i_\ell \ne i$, so $p'_{i_\ell j_\ell} = p_{i_\ell j_\ell} = L$, and $x'_{i_\ell j_\ell} = 0$. But then there is an augmenting path $(j^*, i, j_1, i_1, \ldots, j_K, i_K, t)$ in $G_{p'}$, which contradicts the maximality of $x'$.

Let $\mathbf{L}$ denote the all-low processing-time vector. Define $T^L_i(p_{-i}) = T^*(\mathbf{L}, p_{-i})$. Since we are focusing on machine $i$, and $p_{-i}$ is fixed throughout, we abbreviate $T^L_i(p_{-i})$ to $T^L$. Also, let $p^L = (\mathbf{L}, p_{-i})$. Note that $T^*(p) \ge T^L$ for every instance $p = (p_i, p_{-i})$.

Corollary 5.9 Let $p = (p_i, p_{-i})$ be any instance and let $x$ be any prefix-maximal flow for $(p, T^*(p))$. Then the low-load on machine $i$ is at most $T^L$.

Proof. Let $T^* = T^*(p)$. If $T^* = T^L$, then this is clearly true. Otherwise, consider the assignment $x$ truncated at $T^L$. Since $x$ is prefix-maximal, we know that this constitutes a max-flow for $(p, T^L)$. Also, $n_{p,T^L} < n_{p^L,T^L}$ because $T^* > T^L$. So by Lemma 5.8 (applied with $p = p^L$ and $p' = p$), this truncated flow must assign all the low jobs of $i$.
Hence, there cannot be a job $j$ with $p_{ij} = L$ that is assigned to $i$ after the $T^L$ threshold, since then $j$ would not be assigned by this truncated flow. Thus, the low-load of $i$ is at most $T^L$.

Using these properties, we will prove the following key inequality: for any $p^1 = (p_{-i}, p^1_i)$ and $p^2 = (p_{-i}, p^2_i)$,
$$n_{p^1,T^L} \ \ge \ n_{p^2,T^L} - n^{2,1}_H + n^{2,1}_L \qquad (7)$$
where $n^{2,1}_H$ and $n^{2,1}_L$ are as defined in (4) and (5), respectively. Notice that this immediately implies cycle monotonicity: taking $p^1 = p^k$ and $p^2 = p^{k+1}$, (7) implies that $n_{p^k,T^L} \ge n_{p^{k+1},T^L} - n^{k+1,k}_H + n^{k+1,k}_L$; summing this over all $k = 1, \ldots, K$ gives (6).

Lemma 5.10 If $T^*(p^1) > T^L$, then (7) holds.

Proof. Let $T^1 = T^*(p^1)$ and $T^2 = T^*(p^2)$. Take the prefix-maximal flow $x^2$ for $(p^2, T^2)$, truncate it at $T^L$, and remove from this assignment all the jobs that are counted in $n^{2,1}_H$, that is, all jobs $j$ such that $x^2_{ij} = 1$, $p^2_{ij} = L$, $p^1_{ij} = H$. Denote this flow by $x$. Observe that $x$ is a valid flow for $(p^1, T^L)$, and the size of this flow is exactly $n_{p^2,T^2}|_{T^L} - n^{2,1}_H = n_{p^2,T^L} - n^{2,1}_H$. Also, none of the jobs counted in $n^{2,1}_L$ are assigned by $x$, since each such job $j$ is high on $i$ in $p^2$. Since $T^1 > T^L$, we must have $n_{p^1,T^L} < n_{p^L,T^L}$. So if we augment $x$ to a max-flow for $(p^1, T^L)$, then by Lemma 5.8 (with $p = p^L$ and $p' = p^1$), all the jobs corresponding to $n^{2,1}_L$ must be assigned in this max-flow. Thus, the size of this max-flow is at least (size of $x$) $+ \ n^{2,1}_L$, that is, $n_{p^1,T^L} \ge n_{p^2,T^L} - n^{2,1}_H + n^{2,1}_L$, as claimed.

Lemma 5.11 Suppose $T^*(p^1) = T^L$. Then (7) holds.

Proof. Again let $T^1 = T^*(p^1) = T^L$ and $T^2 = T^*(p^2)$. Let $x^1, x^2$ be the complete assignments, i.e., the assignments after both steps 2 and 3, computed by our algorithm for $p^1, p^2$, respectively. Let $S = \{j : x^2_{ij} = 1 \text{ and } p^2_{ij} = L\}$ and $S' = \{j : x^2_{ij} = 1 \text{ and } p^1_{ij} = L\}$. Therefore, $|S'| = |S| - n^{2,1}_H + n^{2,1}_L$ and $|S| = n^i
_{p^2,T^2} = n^i_{p^2,T^2}|_{T^L}$ (by Corollary 5.9). Let $T' = |S'| \cdot L$. We consider two cases.

Suppose first that $T' \le T^L$. Consider the following flow for $(p^1, T^L)$: assign to every machine other than $i$ the low assignment of $x^2$ truncated at $T^L$, and assign the jobs in $S'$ to machine $i$. This is a valid flow for $(p^1, T^L)$, since the load on $i$ is $T' \le T^L$. Its size is equal to $\sum_{i' \ne i} n^{i'}_{p^2,T^2}|_{T^L} + |S'| = n_{p^2,T^2}|_{T^L} - n^{2,1}_H + n^{2,1}_L = n_{p^2,T^L} - n^{2,1}_H + n^{2,1}_L$. The size of the max-flow for $(p^1, T^L)$ is no smaller, and the claim follows.

Now suppose $T' > T^L$. Since $|S| \cdot L \le T^L$ (by Corollary 5.9), it follows that $n^{2,1}_L > n^{2,1}_H \ge 0$. Let $\hat{T} = T' - L \ge T^L$ (since $T'$ and $T^L$ are both multiples of $L$). Let $M = n_{p^2,T^2} - n^{2,1}_H + n^{2,1}_L = |S'| + \sum_{i' \ne i} n^{i'}_{p^2,T^2}$. We first show that
$$m \cdot \hat{T} < M \cdot L + (n - M) \cdot H. \qquad (8)$$
Let $N$ be the number of jobs assigned to machine $i$ in $x^2$. The load on machine $i$ is $|S| \cdot L + (N - |S|) \cdot H \ge |S'| \cdot L - n^{2,1}_L \cdot L + (N - |S|) \cdot H$, which is at least $|S'| \cdot L > \hat{T}$, since $n^{2,1}_L \le N - |S|$. Thus we get the inequality $|S'| \cdot L + (N - |S'|) \cdot H > \hat{T}$. Now consider the point in the execution of the algorithm on instance $p^2$ just before the last high job is assigned to $i$ in step 3 (there must be such a job since $n^{2,1}_L > 0$). The load on $i$ at this point is $|S| \cdot L + (N - |S| - 1) \cdot H$, which is at least $|S'| \cdot L - L = \hat{T}$ by a similar argument as above. By the greedy property, every $i' \ne i$ also has at least this load at this point, so $\sum_j p^2_{i'j} x^2_{i'j} \ge \hat{T}$. Adding these inequalities for all $i' \ne i$, and the earlier inequality for $i$, we get that $|S'| \cdot L + (N - |S'|) \cdot H + \sum_{i' \ne i} \sum_j p^2_{i'j} x^2_{i'j} > m \hat{T}$. But the left-hand side is exactly $M \cdot L + (n - M) \cdot H$.
On the other hand, since $T^1 = T^L$, we have
$$m \cdot \hat{T} \ \ge \ m \cdot T^L \ \ge \ n_{p^1,T^L} \cdot L + (n - n_{p^1,T^L}) \cdot H. \qquad (9)$$
Combining (8) and (9), we get that $n_{p^1,T^L} > M = n_{p^2,T^2} - n^{2,1}_H + n^{2,1}_L \ge n_{p^2,T^L} - n^{2,1}_H + n^{2,1}_L$.

Lemma 5.12 Algorithm 2 satisfies cycle monotonicity.

Proof. Taking $p^1 = p^k$ and $p^2 = p^{k+1}$ in (7), we get that $n_{p^k,T^L} \ge n_{p^{k+1},T^L} - n^{k+1,k}_H + n^{k+1,k}_L$. Summing this over all $k = 1, \ldots, K$ (where $K+1 \equiv 1$) yields (6).

5.3 Computation of prices

Lemmas 5.7 and 5.12 show that our algorithm is a 2-approximation algorithm that satisfies cycle monotonicity. Thus, by the discussion in Section 3, there exist prices that yield a truthful mechanism. To obtain a polynomial-time mechanism, we also need to show how to compute these prices (or payments) in polynomial time. It is not clear whether the procedure outlined in Section 3, based on computing shortest paths in the allocation graph, yields a polynomial-time algorithm, since the allocation graph has an exponential number of nodes (one for each output assignment). Instead of analyzing the allocation graph, we will leverage our proof of cycle monotonicity, in particular inequality (7), and simply spell out the payments. Recall that the utility of a player is $u_i = P_i - l_i$, where $P_i$ is the payment made to player $i$.
For convenience, we will first specify negative payments (i.e., the $P_i$'s will actually be prices charged to the players) and then show that these can be modified so that players have non-negative utilities (if they act truthfully). Let $\mathcal{H}^i(p)$ denote the number of jobs assigned to machine $i$ in step 3. By Claim 5.6, we know that all these jobs are assigned to high machines (according to the declared $p_i$'s). Let $\mathcal{H}^{-i}(p) = \sum_{i' \ne i} \mathcal{H}^{i'}(p)$ and $n^{-i}_{p,T} = \sum_{i' \ne i} n^{i'}_{p,T}$. The payment $P_i$ to player $i$ is defined as:
$$P_i(p) = -L \cdot n^{-i}_{p,T^*(p)} - H \cdot \mathcal{H}^{-i}(p) - (H - L)\bigl( n_{p,T^*(p)} - n_{p,T^L_i(p_{-i})} \bigr) \qquad (10)$$
We can interpret our payments as equating the player's cost to a careful modification of the total load (in the spirit of VCG prices). The first and second terms in (10), when subtracted from $i$'s load $l_i$, equate $i$'s cost to the total load. The term $n_{p,T^*(p)} - n_{p,T^L_i(p_{-i})}$ is in fact equal to $n^{-i}_{p,T^*(p)} - n^{-i}_{p,T^*(p)}|_{T^L_i(p_{-i})}$, since the low-load on $i$ is at most $T^L_i(p_{-i})$ (by Corollary 5.9). Thus the last term in (10) implies that we treat the low jobs that were assigned beyond the $T^L_i(p_{-i})$ threshold (to machines other than $i$) effectively as high jobs in the total-utility calculation from $i$'s point of view. It is not clear how one could have conjured up these payments a priori in order to prove the truthfulness of our algorithm. However, by relying on cycle monotonicity, we were not only able to argue the existence of payments; our proof also paved the way for actually inferring them. The following lemma explicitly verifies that the payments defined above do indeed give a truthful mechanism.

Lemma 5.13 Fix a player $i$ and the other players' declarations $p_{-i}$.
Let $i$'s true type be $p^1_i$. Then, under the payments defined in (10), $i$'s utility when she declares her true type $p^1_i$ is at least her utility when she declares any other type $p^2_i$.

Proof. Let $c^1_i, c^2_i$ denote $i$'s total cost, defined as the negative of her utility, when she declares $p^1_i$ and $p^2_i$, respectively (and the others declare $p_{-i}$). Since $p_{-i}$ is fixed, we omit it from the expressions below for notational clarity. The true load of $i$ when she declares her true type $p^1_i$ is $L \cdot n^i_{p^1,T^*(p^1)} + H \cdot \mathcal{H}^i(p^1)$, and therefore
$$c^1_i = L \cdot n_{p^1,T^*(p^1)} + H \cdot \bigl( n - n_{p^1,T^*(p^1)} \bigr) + (H - L)\bigl( n_{p^1,T^*(p^1)} - n_{p^1,T^L_i} \bigr) = n \cdot H - (H - L)\, n_{p^1,T^L_i}. \qquad (11)$$
On the other hand, $i$'s true load when she declares $p^2_i$ is $L \cdot (n^i_{p^2,T^*(p^2)} - n^{2,1}_H + n^{2,1}_L) + H \cdot (\mathcal{H}^i(p^2) + n^{2,1}_H - n^{2,1}_L)$ (since $i$'s true processing-time vector is $p^1_i$), and thus $c^2_i = n \cdot H - (H - L)\, n_{p^2,T^L_i} + (H - L)\, n^{2,1}_H - (H - L)\, n^{2,1}_L$. Thus, (7) implies that $c^1_i \le c^2_i$.

Price specifications are commonly required to satisfy, in addition to truthfulness, individual rationality, i.e., a player's utility should be non-negative if she reveals her true value. The payments given by (10) are not individually rational, as they actually charge a player a certain amount. However, it is well known that this problem can easily be solved by adding a large-enough constant to the price definition. In our case, for example, letting $\mathbf{H}$ denote the vector of all $H$'s, we can add the term $n \cdot H - (H - L)\, n_{(\mathbf{H},p_{-i}),T^L_i(p_{-i})}$ to (10). Note that this is a constant for player $i$.
Thus, the new payments are
$$P'_i(p) = n \cdot H - L \cdot n^{-i}_{p,T^*(p)} - H \cdot \mathcal{H}^{-i}(p) - (H - L)\bigl( n_{p,T^*(p)} - n_{p,T^L_i(p_{-i})} + n_{(\mathbf{H},p_{-i}),T^L_i(p_{-i})} \bigr).$$
As shown by (11), this will indeed result in a non-negative utility for $i$ (since $n_{(\mathbf{H},p_{-i}),T^L_i(p_{-i})} \le n_{(p_i,p_{-i}),T^L_i(p_{-i})}$ for any type $p_i$ of player $i$). This modification also ensures the additionally desired normalization property that if a player receives no jobs then she receives zero payment: if player $i$ receives the empty set for some type $p_i$, then she will also receive the empty set for the type $\mathbf{H}$ (this is easy to verify for our specific algorithm); for the type $\mathbf{H}$, her utility equals zero; thus, by truthfulness, this must also be the utility of every other declaration that results in $i$ receiving the empty set. This completes the proof of Theorem 5.3.

5.4 Impossibility of exact implementation

We now show that, irrespective of computational considerations, there does not exist a cycle-monotone algorithm for the $L$-$H$ case with an approximation ratio better than 1.14. Let $H = \alpha \cdot L$ for some $2 < \alpha < 2.5$ that we will choose later. There are two machines, I and II, and seven jobs. Consider the following two scenarios:

Scenario 1. Every job has the same processing time on both machines: jobs 1-5 are $L$, and jobs 6, 7 are $H$. Any optimal schedule assigns jobs 1-5 to one machine and jobs 6, 7 to the other, and has makespan $\mathrm{OPT}_1 = 5L$. The second-best schedule has makespan at least $\mathrm{Second}_1 = 2H + L$.
Scenario 2. If the algorithm chooses an optimal schedule for scenario 1, assume without loss of generality that jobs 6, 7 are assigned to machine II. In scenario 2, machine I has the same processing-time vector, while machine II lowers jobs 6, 7 to $L$ and raises jobs 1-5 to $H$. An optimal schedule has makespan $2L + H$, where machine II gets jobs 6, 7 and one of the jobs 1-5. The second-best schedule for this scenario has makespan at least $\mathrm{Second}_2 = 5L$.

Theorem 5.14 No deterministic truthful mechanism for the two-values scheduling problem can obtain an approximation ratio better than 1.14.

Proof. We first argue that a cycle-monotone algorithm cannot choose the optimal schedule in both scenarios, since otherwise cycle monotonicity is violated for machine II. Taking $p^1_{II}, p^2_{II}$ to be machine II's processing-time vectors for scenarios 1 and 2, respectively, we get $\sum_j (p^1_{II,j} - p^2_{II,j})(x^2_{II,j} - x^1_{II,j}) = (L - H)(1 - 0) < 0$. Thus, any truthful mechanism must return a sub-optimal makespan in at least one scenario, and therefore its approximation ratio is at least $\min\bigl\{\frac{\mathrm{Second}_1}{\mathrm{OPT}_1}, \frac{\mathrm{Second}_2}{\mathrm{OPT}_2}\bigr\} \ge 1.14$ for $\alpha = 2.364$.

We remark that for the $\{L_j, H_j\}$-case where there is a common ratio $r = H_j/L_j$ for all jobs (this generalizes the restricted-machines setting), one can obtain a fractional truthful mechanism (with efficiently computable prices) that returns a schedule of makespan at most $\mathrm{OPT}(p)$ for every $p$. One can view each job $j$ as consisting of $L_j$ sub-jobs, of size 1 on a machine $i$ if $p_{ij} = L_j$, and of size $r$ if $p_{ij} = H_j$. For this new instance $\tilde{p}$, note that $\tilde{p}_{ij} \in \{1, r\}$ for every $i, j$.
Notice also that any assignment $\tilde{x}$ for the instance $\tilde{p}$ translates to a fractional assignment $x$ for $p$, where $p_{ij} x_{ij} = \sum_{j' :\, j' \text{ a sub-job of } j} \tilde{p}_{ij'} \tilde{x}_{ij'}$. Thus, if we use Algorithm 2 to obtain a schedule for the instance $\tilde{p}$, equation (6) translates precisely to (3) for the assignment $x$; moreover, the prices for $\tilde{p}$ translate to prices for the instance $p$. The number of sub-jobs assigned to low machines in the flow-phase is simply the total work assigned to low machines. Thus, we can implement the above reduction by setting up a max-flow problem that seeks to maximize the total work assigned to low machines. Moreover, since we have a fractional domain, we can use a more efficient greedy rule for packing the unassigned portions of jobs and argue that the fractional assignment has makespan at most $\mathrm{OPT}(p)$. The assignment $x$ need not, however, satisfy the condition that $x_{ij} > 0$ implies $p_{ij} \le \mathrm{OPT}(p)$ for arbitrary $r$; therefore, the rounding procedure of Lemma 4.2 does not yield a 2-approximation truthful-in-expectation mechanism. But if $r > \mathrm{OPT}(p)$ (as in the restricted-machines setting), this condition does hold, so we get a 2-approximation truthful mechanism.

Acknowledgments

We thank Elias Koutsoupias for his help in refining the analysis of the lower bound in Section 5.4, and the reviewers for their helpful comments.

6. REFERENCES

[1] N. Andelman, Y. Azar, and M. Sorani. Truthful approximation mechanisms for scheduling selfish related machines. In Proc. 22nd STACS, pages 69-82, 2005.
[2] A. Archer. Mechanisms for discrete optimization with rational agents. PhD thesis, Cornell University, 2004.
[3] A. Archer and É. Tardos. Truthful mechanisms for one-parameter agents. In Proc. 42nd FOCS, pages 482-491, 2001.
[4] V. Auletta, R. De-Prisco, P. Penna, and G. Persiano. Deterministic truthful approximation mechanisms for scheduling related machines. In Proc. 21st STACS, pages 608-619, 2004.
[5] I. Bezáková and V.
Dani. Allocating indivisible goods. In ACM SIGecom Exchanges, 2005.
[6] S. Bikhchandani, S. Chatterjee, R. Lavi, A. Mu'alem, N. Nisan, and A. Sen. Weak monotonicity characterizes deterministic dominant-strategy implementation. Econometrica, 74:1109-1132, 2006.
[7] P. Briest, P. Krysta, and B. Vöcking. Approximation techniques for utilitarian mechanism design. In Proc. 37th STOC, pages 39-48, 2005.
[8] G. Christodoulou, E. Koutsoupias, and A. Vidali. A lower bound for scheduling mechanisms. In Proc. 18th SODA, pages 1163-1170, 2007.
[9] E. Clarke. Multipart pricing of public goods. Public Choice, 8:17-33, 1971.
[10] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[11] H. Gui, R. Müller, and R. V. Vohra. Characterizing dominant strategy mechanisms with multi-dimensional types. Working paper, 2004.
[12] L. A. Hall. Approximation algorithms for scheduling. In D. Hochbaum, editor, Approximation Algorithms for NP-Hard Problems. PWS Publishing, MA, 1996.
[13] A. Kovács. Fast monotone 3-approximation algorithm for scheduling related machines. In Proc. 13th ESA, pages 616-627, 2005.
[14] V. S. A. Kumar, M. V. Marathe, S. Parthasarathy, and A. Srinivasan. Approximation algorithms for scheduling on multiple machines. In Proc. 46th FOCS, pages 254-263, 2005.
[15] R. Lavi, A. Mu'alem, and N. Nisan. Towards a characterization of truthful combinatorial auctions. In Proc. 44th FOCS, pages 574-583, 2003.
[16] R. Lavi and C. Swamy. Truthful and near-optimal mechanism design via linear programming. In Proc. 46th FOCS, pages 595-604, 2005.
[17] D. Lehmann, L. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49:577-602, 2002.
[18] J. K. Lenstra, D. B. Shmoys, and É. Tardos. Approximation algorithms for scheduling unrelated parallel machines. Mathematical Programming, 46:259-271, 1990.
[19] R. J. Lipton, E. Markakis, E. Mossel, and A.
Saberi. On approximately fair allocations of indivisible goods. In Proc. 5th EC, pages 125-131, 2004.
[20] A. Mu'alem and M. Schapira. Setting lower bounds on truthfulness. In Proc. 18th SODA, pages 1143-1152, 2007.
[21] R. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.
[22] N. Nisan and A. Ronen. Algorithmic mechanism design. Games and Economic Behavior, 35:166-196, 2001.
[23] J. C. Rochet. A necessary and sufficient condition for rationalizability in a quasi-linear context. Journal of Mathematical Economics, 16:191-200, 1987.
[24] M. Saks and L. Yu. Weak monotonicity suffices for truthfulness on convex domains. In Proc. 6th EC, pages 286-293, 2005.
[25] D. B. Shmoys and É. Tardos. An approximation algorithm for the generalized assignment problem. Mathematical Programming, 62:461-474, 1993.
[26] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.

Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity

ABSTRACT
We consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design, where the machines are the strategic players. This is a multidimensional scheduling domain, and the only known positive results for makespan minimization in such a domain are O(m)-approximation truthful mechanisms [22, 20]. We study a well-motivated special case of this problem, where the processing time of a job on each machine may either be "low" or "high", and the low and high values are public and job-dependent. This preserves the multidimensionality of the domain, and generalizes the restricted-machines (i.e., {p_j, ∞}) setting in scheduling. We give a general technique to convert any c-approximation algorithm to a 3c-approximation truthful-in-expectation mechanism. This is one of the few known results that shows how to export approximation algorithms for a
multidimensional problem into truthful mechanisms in a black-box fashion. When the low and high values are the same for all jobs, we devise a deterministic 2-approximation truthful mechanism. These are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain. Our constructions are novel in two respects. First, we do not utilize or rely on explicit price definitions to prove truthfulness; instead, we design algorithms that satisfy cycle monotonicity. Cycle monotonicity [23] is a necessary and sufficient condition for truthfulness, and is a generalization of value monotonicity to multidimensional domains. However, whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in single-dimensional domains, ours is the first work that leverages cycle monotonicity in the multidimensional setting. Second, our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem, and then converting it into a truthful-in-expectation mechanism. This builds upon a technique of [16], and shows the usefulness of fractional mechanisms in truthful mechanism design.

1. INTRODUCTION
Mechanism design studies algorithmic constructions in the presence of strategic players who hold the inputs to the algorithm. Algorithmic mechanism design has focused mainly on settings where the social planner or designer wishes to maximize the social welfare (or equivalently, minimize social cost), or on auction settings where revenue maximization is the main goal. Alternative optimization goals, such as those that incorporate fairness criteria (which have been investigated algorithmically and in social choice theory), have received very little or no attention. In this paper, we consider such an alternative goal in the context of machine scheduling, namely, makespan minimization. There are n jobs or tasks that need to be assigned to m
machines, where each job has to be assigned to exactly one machine. Assigning a job j to a machine i incurs a load (cost) of p_ij ≥ 0 on machine i, and the load of a machine is the sum of the loads incurred due to the jobs assigned to it; the goal is to schedule the jobs so as to minimize the maximum load of a machine, which is termed the makespan of the schedule. Makespan minimization is a common objective in scheduling environments, and has been well studied algorithmically in both the Computer Science and Operations Research communities (see, e.g., the survey [12]).

Following the work of Nisan and Ronen [22], we consider each machine to be a strategic player or agent who privately knows its own processing time for each job, and may misrepresent these values in order to decrease its load (which is its incurred cost). Hence, we approach the problem via mechanism design: the social designer, who holds the set of jobs to be assigned, needs to specify, in addition to a schedule, suitable payments to the players in order to incentivize them to reveal their true processing times. Such a mechanism is called a truthful mechanism. The makespan-minimization objective is quite different from the classic goal of social-welfare maximization, where one wants to maximize the total welfare (or minimize the total cost) of all players. Instead, it corresponds to maximizing the minimum welfare and the notion of max-min fairness, and appears to be a much harder problem from the viewpoint of mechanism design. In particular, the celebrated VCG [26, 9, 10] family of mechanisms does not apply here, and we need to devise new techniques.

The possibility of constructing a truthful mechanism for makespan minimization is strongly related to assumptions on the players' processing times, in particular, the "dimensionality" of the domain. Nisan and Ronen considered the setting of unrelated machines, where the p_ij values may be arbitrary. This is a multidimensional domain, since
a player's private value is its entire vector of processing times (p_ij)_j. Very few positive results are known for multidimensional domains in general, and the only positive results known for multidimensional scheduling are O(m)-approximation truthful mechanisms [22, 20]. We emphasize that, regardless of computational considerations, even the existence of a truthful mechanism with a significantly better (than m) approximation ratio is not known for any such scheduling domain. On the negative side, [22] showed that no truthful deterministic mechanism can achieve an approximation ratio better than 2, and strengthened this lower bound to m for two specific classes of deterministic mechanisms. Recently, [20] extended this lower bound to randomized mechanisms, and [8] improved the deterministic lower bound.

In stark contrast with the above state of affairs, much stronger (and many more) positive results are known for a special case of the unrelated-machines problem, namely, the setting of related machines. Here, we have p_ij = p_j/s_i for every i, j, where p_j is public knowledge and the speed s_i is the only private parameter of machine i. This assumption makes the domain of players' types single-dimensional. Truthfulness in such domains is equivalent to a convenient value-monotonicity condition [21, 3], which appears to make it significantly easier to design truthful mechanisms in such domains. Archer and Tardos [3] first considered the related-machines setting and gave a randomized 3-approximation truthful-in-expectation mechanism. The gap between the single-dimensional and multidimensional domains is perhaps best exemplified by the fact that [3] showed that there exists a truthful mechanism that always outputs an optimal schedule. (Recall that in the multidimensional unrelated-machines setting, it is impossible to obtain a truthful mechanism with approximation ratio better than 2.) Various follow-up results [2, 4, 1, 13] have strengthened the notion of
truthfulness and/or improved the approximation ratio. Such difficulties in moving from the single-dimensional to the multidimensional setting also arise in other mechanism-design settings (e.g., combinatorial auctions). Thus, in addition to the specific importance of scheduling in strategic environments, ideas from multidimensional scheduling may also have a bearing in the more general context of truthful mechanism design for multidimensional domains.

In this paper, we consider the makespan-minimization problem for a special case of unrelated machines, where the processing time of a job is either "low" or "high" on each machine. More precisely, in our setting, p_ij ∈ {L_j, H_j} for every i, j, where the L_j, H_j values are publicly known (L_j "low", H_j "high"). We call this model the "job-dependent two-values" case. This model generalizes the classic "restricted machines" setting, where p_ij ∈ {L_j, ∞}, which has been well studied algorithmically. A special case of our model is when L_j = L and H_j = H for all jobs j, which we denote simply as the "two-values" scheduling model. Both of our domains are multidimensional, since the machines are unrelated: one job may be low on one machine and high on another, while another job may follow the opposite pattern. Thus, the private information of each machine is a vector specifying which jobs are low and which are high on it. The two settings thus retain the core property underlying the hardness of truthful mechanism design for unrelated machines, and by studying them we hope to gain some insights that will be useful for tackling the general problem.

Our Results and Techniques
We present various positive results for our multidimensional scheduling domains. Our first result is a general method to convert any c-approximation algorithm for the job-dependent two-values setting into a 3c-approximation truthful-in-expectation mechanism. This is one of the very few known results that use an
approximation algorithm in a black-box fashion to obtain a truthful mechanism for a multidimensional problem. Our result implies that there exists a 3-approximation truthful-in-expectation mechanism for the L_j-H_j setting. Interestingly, the proof of truthfulness is not based on supplying explicit prices, and our construction does not necessarily yield efficiently computable prices (but the allocation rule is efficiently computable). Our second result applies to the two-values setting (L_j = L, H_j = H), for which we improve the approximation ratio and strengthen the notion of truthfulness. We obtain a deterministic 2-approximation truthful mechanism (along with prices) for this problem. These are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain. Complementing this, we observe that even this seemingly simple setting does not admit truthful mechanisms that return an optimal schedule (unlike in the case of related machines). By exploiting the multidimensionality of the domain, we prove that no deterministic truthful mechanism can obtain an approximation ratio better than 1.14 to the makespan (irrespective of computational considerations).

The main technique, and one of the novelties, underlying our constructions and proofs is that we do not rely on explicit price specifications in order to prove the truthfulness of our mechanisms. Instead, we exploit certain algorithmic monotonicity conditions that characterize truthfulness to first design an implementable algorithm, i.e., an algorithm for which prices ensuring truthfulness exist, and then find these prices (by further delving into the proof of implementability). This kind of analysis has been the method of choice in the design of truthful mechanisms for single-dimensional domains, where value monotonicity yields a convenient characterization enabling one to concentrate on the algorithmic side of the problem (see, e.g., [3, 7, 4, 1, 13]). But for
multidimensional domains, almost all positive results have relied on explicit price specifications in order to prove truthfulness (an exception is the work on unknown single-minded players in combinatorial auctions [17, 7]), a fact that yet again shows the gap in our understanding of multidimensional vs. single-dimensional domains. Our work is the first to leverage such monotonicity conditions for truthful mechanism design in arbitrary domains. The monotonicity condition we use, sometimes called cycle monotonicity, was first proposed by Rochet [23] (see also [11]). It is a generalization of value monotonicity and completely characterizes truthfulness in every domain. Our methods and analyses demonstrate the potential benefits of this characterization, and show that cycle monotonicity can be effectively utilized to devise truthful mechanisms for multidimensional domains. Consider, for example, our first result showing that any c-approximation algorithm can be "exported" to a 3c-approximation truthful-in-expectation mechanism. At the level of generality of an arbitrary approximation algorithm, it seems unlikely that one would be able to come up with prices to prove truthfulness of the constructed mechanism. But cycle monotonicity does allow us to prove such a statement. In fact, some such condition based only on the underlying algorithm (and not on the prices) seems necessary for proving so general a statement.

The method for converting approximation algorithms into truthful mechanisms involves another novel idea. Our randomized mechanism is obtained by first constructing a truthful mechanism that returns a fractional schedule. Moving to a fractional domain allows us to "plug in" truthfulness into the approximation algorithm in a rather simple fashion, while losing a factor of 2 in the approximation ratio. We then use a suitable randomized rounding procedure to convert the fractional assignment into a random integral assignment. For this, we use
a recent rounding procedure of Kumar et al. [14] that is tailored to unrelated-machine scheduling. This preserves truthfulness, but we lose another additive factor equal to the approximation ratio. Our construction uses and extends some observations of Lavi and Swamy [16], and further demonstrates the benefits of fractional mechanisms in truthful mechanism design.

Related Work
Nisan and Ronen [22] first considered the makespan-minimization problem for unrelated machines. They gave an m-approximation positive result and proved various lower bounds. Recently, Mu'alem and Schapira [20] proved a lower bound of 2 on the approximation ratio achievable by truthful-in-expectation mechanisms, and Christodoulou, Koutsoupias, and Vidali [8] proved a (1 + √2) lower bound for deterministic truthful mechanisms. Archer and Tardos [3] first considered the related-machines problem and gave a 3-approximation truthful-in-expectation mechanism. This has been improved in [2, 4, 1, 13] to: a 2-approximation randomized mechanism [2]; an FPTAS for any fixed number of machines, given by Andelman, Azar, and Sorani [1]; and a 3-approximation deterministic mechanism, by Kovács [13].

The algorithmic problem (i.e., without requiring truthfulness) of makespan minimization on unrelated machines is well understood, and various 2-approximation algorithms are known. Lenstra, Shmoys, and Tardos [18] gave the first such algorithm. Shmoys and Tardos [25] later gave a 2-approximation algorithm for the generalized assignment problem, a generalization where there is a cost c_ij for assigning job j to machine i, and the goal is to minimize the cost subject to a bound on the makespan. Recently, Kumar, Marathe, Parthasarathy, and Srinivasan [14] gave a randomized rounding algorithm that yields the same bounds. We use their procedure in our randomized mechanism. The characterization of truthfulness for arbitrary domains in terms of cycle monotonicity seems to have been first observed by
Rochet [23] (see also Gui et al. [11]). This generalizes the value-monotonicity condition for single-dimensional domains, which was given by Myerson [21] and rediscovered by [3]. As mentioned earlier, this condition has been exploited numerous times to obtain truthful mechanisms for single-dimensional domains [3, 7, 4, 1, 13]. For convex domains (i.e., when each player's set of private values is convex), it is known that cycle monotonicity is implied by a simpler condition, called weak monotonicity [15, 6, 24]. But even this simpler condition has not found much application in truthful mechanism design for multidimensional problems.

Objectives other than social-welfare maximization and revenue maximization have received very little attention in mechanism design. In the context of combinatorial auctions, the problems of maximizing the minimum value received by a player, and of computing an envy-minimizing allocation, have been studied briefly. Lavi, Mu'alem, and Nisan [15] showed that the former objective cannot be implemented truthfully; Bezáková and Dani [5] gave a 0.5-approximation mechanism for two players with additive valuations. Lipton et al.
[19] showed that the latter objective cannot be implemented truthfully. These lower bounds were strengthened in [20].

2. PRELIMINARIES

2.1 The scheduling domain
In our scheduling problem, we are given n jobs and m machines, and each job must be assigned to exactly one machine. In the unrelated-machines setting, each machine i is characterized by a vector of processing times (p_ij)_j, where p_ij ∈ R_{≥0} ∪ {∞} denotes i's processing time for job j, with the value ∞ specifying that i cannot process j. We consider two special cases of this problem: 1. The job-dependent two-values case, where p_ij ∈ {L_j, H_j} for every i, j, with L_j [...]

[...] p^{k-1}_ij = L_j. We take k' = k, and then keep including indices in this segment until we reach a k such that p^k_ij = L_j and p^{k+1}_ij = H_j. We set k'' = k, and then start a new maximal segment with index k'' + 1. Note that k'' ≠ k' and k'' + 1 ≠ k' − 1. We now have a subset of indices and can continue recursively, so all indices are included in some maximal segment. We will show that for every such maximal segment k', k' + 1, ..., k'', [...] x_ij > 0 implies that p_ij ≤ T, where T is the makespan of x. (In particular, note that any algorithm that returns an integral assignment has these properties.) Our algorithm, which we call A', returns the following assignment x^F. Initialize x^F_ij = 0 for all i, j. For every i, j, [...]

Theorem 4.4 Suppose algorithm A satisfies the conditions in Algorithm 1 and returns a makespan of at most c·OPT(p) for every p.
Then the algorithm A' constructed above is a 2c-approximation, cycle-monotone fractional algorithm. Moreover, if x^F_ij > 0 on input p, then p_ij ≤ c·OPT(p).
PROOF. First, note that x^F is a valid assignment: for every job j, Σ_i x^F_ij = Σ_i x_ij + Σ_{i, i'≠i : p_{i'j} = p_ij = L_j} (x_{i'j} − x_ij)/m = Σ_i x_ij = 1. We also have that if p_ij = H_j, then x^F_ij = Σ_{i' : p_{i'j} = H_j} x_{i'j}/m ≤ 1/m. If p_ij = L_j, then x^F_ij = x_ij(1 − ℓ/m) + Σ_{i'≠i} x_{i'j}/m, where ℓ = |{i' ≠ i : [...]}|.

Theorem 4.4, combined with Lemmas 4.1 and 4.2, gives a 3c-approximation, truthful-in-expectation mechanism. The computation of payments will depend on the actual approximation algorithm used. Section 3 does, however, give an explicit procedure to compute payments ensuring truthfulness, though perhaps not in polynomial time.

Theorem 4.5 The procedure in Algorithm 1 converts any c-approximation fractional algorithm into a 3c-approximation, truthful-in-expectation mechanism.

Taking A in Algorithm 1 to be the algorithm that returns an LP-optimum assignment satisfying the required conditions (see [18, 25]), we obtain a 3-approximation mechanism.

Corollary 4.6 There is a truthful-in-expectation mechanism with approximation ratio 3 for the L_j-H_j setting.

5. A DETERMINISTIC MECHANISM FOR THE TWO-VALUES CASE
We now present a deterministic 2-approximation truthful mechanism for the case where p_ij ∈ {L, H} for all i, j. In the sequel, we will often say that j is assigned to a low machine to denote that j is assigned to a machine i where p_ij = L. We will call a job j a low job of machine i if p_ij = L; the low-load of i is the load on i due to its low jobs, i.e., Σ_{j : p_ij = L} x_ij·p_ij. As in Section 4, our goal is to obtain an approximation algorithm that satisfies cycle monotonicity. We first obtain a simplification of condition (3) for our two-values {L, H} scheduling domain (Proposition 5.1) that will be convenient to work with. We describe our algorithm in Section 5.1. In
Section 5.2, we bound its approximation guarantee and prove that it satisfies cycle monotonicity. In Section 5.3, we compute explicit payments, giving a truthful mechanism. Finally, in Section 5.4 we show that no deterministic mechanism can achieve the optimum makespan.

Define [...]. Plugging this into (3) and dividing by (H − L), we get the following.

5.1 A cycle-monotone approximation algorithm
We now describe an algorithm that satisfies condition (6) and achieves a 2-approximation. We will assume that L and H are integers, which is without loss of generality. A core component of our algorithm is a procedure that takes an integer load threshold T and computes an integer partial assignment x of jobs to machines such that (a) a job is only assigned to a low machine; (b) the load on any machine is at most T; and (c) the number of jobs assigned is maximized. Such an assignment can be computed by solving a max-flow problem: we construct a directed bipartite graph with a node for every job j and every machine i, and an edge (j, i) of infinite capacity if p_ij = L. We also add a source node s with edges (s, j) of capacity 1, and a sink node t with edges (i, t) of capacity ⌊T/L⌋. Clearly, any integer flow in this network corresponds to a valid integer partial assignment x of makespan at most T, where x_ij = 1 iff there is a flow of 1 on the edge from j to i. We will therefore use the terms assignment and flow interchangeably. Moreover, there is always an integral max-flow (since all capacities are integers). We will often refer to such a max-flow as the max-flow for (p, T).

We need one additional concept before describing the algorithm. There could potentially be many max-flows, and we will be interested in the most "balanced" ones, which we formally define as follows. Fix some max-flow. Let n^i_{p,T} be the amount of flow on edge (i, t) (equivalently, the number of jobs assigned to i in the corresponding schedule), and let n_{p,T} be the total size
of the max-flow, i.e., n_{p,T} = Σ_i n^i_{p,T}. For any T' ≤ T, define n^i_{p,T}|_{T'} = min(n^i_{p,T}, T'). That is, in a prefix-maximal flow for (p, T), if we truncate the flow at some T' ≤ T, we are left with a max-flow for (p, T').

An elementary fact about flows is that if an assignment/flow x is not a maximum flow for (p, T), then there must be an augmenting path P = (s, j_1, i_1, ..., j_K, i_K, t) in the residual graph that allows us to increase the size of the flow. The interpretation is that in the current assignment, j_1 is unassigned, x_{i_ℓ j_ℓ} = 0, which is denoted by the forward edges (j_ℓ, i_ℓ), and x_{i_ℓ j_{ℓ+1}} = 1, which is denoted by the reverse edges (i_ℓ, j_{ℓ+1}). Augmenting x using P changes the assignment so that each j_ℓ is assigned to i_ℓ in the new assignment, which increases the value of the flow by 1. A simple augmenting path does not decrease the load of any machine; thus, one can argue that a prefix-maximal flow for a threshold T always exists: we first compute a max-flow for threshold 1, use simple augmenting paths to augment it to a max-flow for threshold 2, and repeat, each time augmenting the max-flow for the previous threshold t to a max-flow for threshold t + 1 using simple augmenting paths.

Algorithm 2 Given a vector of processing times p, construct an assignment of jobs to machines as follows.
1. Compute T*(p) = min{T ≥ H, T a multiple of L : n_{p,T}·L + (n − n_{p,T})·H ≤ m·T}. Note that n_{p,T}·L + (n − n_{p,T})·H − m·T is a decreasing function of T, so T*(p) can be computed in polynomial time via binary search.
2. Compute a prefix-maximal flow for threshold T*(p) and the corresponding partial assignment (i.e., j is assigned to i iff there is 1 unit of flow on edge (j, i)).
3. Assign the remaining jobs, i.e., the jobs left unassigned by the flow-phase, greedily as follows. Consider these jobs in an arbitrary order and assign each job to
the machine with the current lowest load (where the load includes the jobs assigned in the flow-phase).

Our algorithm needs to compute a prefix-maximal assignment for the threshold T*(p). The proof showing the existence of a prefix-maximal flow only yields a pseudo-polynomial-time algorithm for computing it. But notice that the max-flow remains the same for any T ≥ T' = n·L, so a prefix-maximal flow for T' is also prefix-maximal for any T ≥ T'. Thus, we only need to compute a prefix-maximal flow for T'' = min{T*(p), T'}. This can be done in polynomial time by using the iterative augmenting-paths algorithm in the existence proof to compute, iteratively, the max-flow for the polynomially many multiples of L up to (and including) T''.

5.2 Analysis
Let OPT(p) denote the optimal makespan for p. We now prove that Algorithm 2 is a 2-approximation algorithm that satisfies cycle monotonicity. This will then allow us to compute payments in Section 5.3 and prove Theorem 5.3.

5.2.1 Proof of approximation
Claim 5.4 If OPT(p) [...] m·T*(p). But the total load is also at most n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H [...].
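The greedy rule in step 3 can be sketched with a priority queue keyed by current machine load. This is an illustrative implementation only, not the paper's code; the argument names (`flow_load`, `leftover_jobs`, `proc_time`) are invented for the example.

```python
import heapq

def greedy_phase(flow_load, leftover_jobs, proc_time):
    """Assign each leftover job to the machine with the current lowest load.

    flow_load: per-machine loads after the flow-phase.
    leftover_jobs: jobs left unassigned by the flow-phase.
    proc_time: proc_time[i][j] = processing time of job j on machine i.
    Returns (assignment, loads), where assignment[j] is the chosen machine.
    """
    heap = [(load, i) for i, load in enumerate(flow_load)]
    heapq.heapify(heap)
    assignment = {}
    for j in leftover_jobs:
        load, i = heapq.heappop(heap)      # current least-loaded machine
        assignment[j] = i
        heapq.heappush(heap, (load + proc_time[i][j], i))
    loads = [0] * len(flow_load)
    for load, i in heap:
        loads[i] = load
    return assignment, loads
```

For instance, with flow-phase loads [3, 0] and two leftover jobs of size 2 everywhere, both jobs land on machine 1, whose load stays below machine 0's until it overtakes it.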
By Claim 5.5, we know that T*(p) [...] T + H(1 − 1/m). Suppose the claim is false. Let i be the machine with the maximum load, let j be the last job assigned to i in step 3, and consider the point just before j is assigned to i. So l_i > T − H/m at this point. Also, since j is assigned to i, by our greedy rule the load on all the other machines must be at least l_i. So the total load after j is assigned is at least H + m·l_i > m·T (since p_ij = H by Claim 5.6). Also, for any assignment of jobs to machines in step 3, the total load is at most n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H, since there are n_{p,T*(p)} jobs assigned to low machines. Therefore, we must have m·T < n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H, which yields a contradiction: if T = T*(p), this follows from the definition of T*(p); if T = OPT(p), then letting n_OPT(p) denote the number of jobs assigned to low machines in an optimum schedule, we have n_{p,T*(p)} ≥ n_OPT(p), so n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H ≤ [...].

Lemma 5.8 Let p' be obtained from p by raising machine i's processing times, i.e., p'_ij ≥ p_ij for all j. If T is a threshold such that n_{p,T} > n_{p',T}, then every maximum flow x' for (p', T) must assign all jobs j such that p'_ij = L.
PROOF. Let G_{p'} denote the residual graph for (p', T) and flow x'. Suppose by contradiction that there exists a job j* with p'_{ij*} = L that is unassigned by x'. Since p'_i ≥ p_i, all edges (j, i) that are present in the network for (p', T) are also present in the network for (p, T). Thus, x' is a valid flow for (p, T). But it is not a max-flow, since n_{p,T} > n_{p',T}. So there exists an augmenting path P in the residual graph for (p, T) and flow x'. Observe that node i must be included in P; otherwise, P would also be an augmenting path in the residual graph G_{p'}, contradicting the fact that x' is a max-flow. In particular, this implies that there is a path P' ⊆ P from i to the sink t.
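The augmenting-path reasoning used throughout this analysis can be made concrete with a small sketch of the flow-phase from Section 5.1: a Kuhn-style augmentation (standing in for a generic max-flow routine) that assigns as many jobs as possible to machines where they are low, subject to the per-machine capacity ⌊T/L⌋. All names here are illustrative, not the paper's notation.

```python
def max_low_assignment(p, T, L):
    """Maximum partial assignment for (p, T): place as many jobs as possible
    on machines where they are low (p[i][j] == L), with at most T // L jobs
    per machine, so every machine's load is at most T.  Augmenting paths
    relocate already-placed jobs, mirroring the residual-graph argument."""
    m, n = len(p), len(p[0])
    cap = T // L                 # capacity of each machine-to-sink edge
    assigned = [None] * n        # assigned[j] = machine of job j, or None
    count = [0] * m              # number of jobs currently on each machine

    def move(j, i):
        if assigned[j] is not None:
            count[assigned[j]] -= 1
        assigned[j] = i
        count[i] += 1

    def augment(j, seen):
        # Try to place job j, possibly relocating other jobs along an
        # augmenting path; 'seen' prevents revisiting machines.
        for i in range(m):
            if p[i][j] != L or i in seen:
                continue
            seen.add(i)
            if count[i] < cap:
                move(j, i)
                return True
            for k in range(n):
                if assigned[k] == i and augment(k, seen):
                    move(j, i)   # job k was relocated, freeing a slot on i
                    return True
        return False

    for j in range(n):
        augment(j, set())
    return assigned
```

With T = L (capacity 1 per machine), a job that is low only on a full machine triggers a relocation, exactly the path (s, j1, i1, ..., t) described in the text.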
Let P' = (i, j_1, i_1, ..., j_K, i_K, t). All the edges of P' are also present as edges in G_{p'}: all reverse edges (i_ℓ, j_{ℓ+1}) are present, since such an edge implies that x'_{i_ℓ j_{ℓ+1}} = 1; and all forward edges (j_ℓ, i_ℓ) are present, since i_ℓ ≠ i, so p'_{i_ℓ j_ℓ} = p_{i_ℓ j_ℓ} = L, and x'_{i_ℓ j_ℓ} = 0. But then there is an augmenting path (j*, i, j_1, i_1, ..., j_K, i_K, t) in G_{p'}, which contradicts the maximality of x'.

Let L̄ denote the all-low processing-time vector. Define T^i_L(p_{−i}) = T*(L̄, p_{−i}). Since we are focusing on machine i, and p_{−i} is fixed throughout, we abbreviate T^i_L(p_{−i}) to T_L. Also, let p_L = (L̄, p_{−i}). Note that T*(p) ≥ T_L for every instance p = (p_i, p_{−i}).

Corollary 5.9 Let p = (p_i, p_{−i}) be any instance and let x be any prefix-maximal flow for (p, T*(p)). Then the low-load on machine i is at most T_L.
PROOF. Let T* = T*(p). If T* = T_L, then this is clearly true. Otherwise, consider the assignment x truncated at T_L. Since x is prefix-maximal, we know that this constitutes a max-flow for (p, T_L). Also, n_{p,T_L} [...] T_L. So by Lemma 5.8, this truncated flow must assign all the low jobs of i.
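Step 1 of Algorithm 2 can be sketched as a binary search over multiples of L. The helper `n_low(T)` is a hypothetical callable (not part of the paper) that returns n_{p,T}, e.g., as computed by the max-flow procedure of Section 5.1; as the text notes, n_{p,T}·L + (n − n_{p,T})·H − m·T decreases in T, so feasibility is monotone.

```python
def compute_T_star(n_low, n_jobs, m, L, H):
    """Binary search for T*(p) = min{ T >= H, T a multiple of L :
    n_{p,T}*L + (n - n_{p,T})*H <= m*T }  (step 1 of Algorithm 2).

    n_low(T) must return n_{p,T}, the maximum number of jobs assignable
    to low machines with per-machine load at most T.
    """
    def feasible(T):
        k = n_low(T)
        return k * L + (n_jobs - k) * H <= m * T

    lo = -(-H // L)        # smallest multiple of L that is >= H (in units of L)
    hi = lo
    while not feasible(hi * L):
        hi *= 2            # exponential search for a feasible upper bound
    while lo < hi:         # binary search on multiples of L
        mid = (lo + hi) // 2
        if feasible(mid * L):
            hi = mid
        else:
            lo = mid + 1
    return lo * L
```

For example, with m = 2, n = 3, L = 1, H = 2 and every job low on both machines (so n_{p,T} = min(3, 2T)), the search returns T* = 2; with a single machine and no low jobs it returns n·H.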
Hence, there cannot be a job j with p_ij = L that is assigned to i after the T_L threshold, since then j would not be assigned by this truncated flow. Thus, the low-load of i is at most T_L.

Using these properties, we will prove the following key inequality: for any p^1 = (p^1_i, p_{−i}) and p^2 = (p^2_i, p_{−i}), [...] (7) where n^{2,1}_H and n^{2,1}_L are as defined in (4) and (5), respectively. Notice that this immediately implies cycle monotonicity, since if we take p^1 = p^k and p^2 = p^{k+1}, then (7) implies that n_{p^k,T_L} ≥ n_{p^{k+1},T_L} − n^{k+1,k}_[...]

PROOF. Let T^1 = T*(p^1) and T^2 = T*(p^2). Take the prefix-maximal flow x^2 for (p^2, T^2), truncate it at T_L, and remove from this assignment all the jobs that are counted in n^{2,1}_H, that is, all jobs j such that x^2_ij = 1, p^2_ij = L, and p^1_ij = H. Denote this flow by x. Observe that x is a valid flow for (p^1, T_L), and the size of this flow is exactly n_{p^2,T^2}|_{T_L} − n^{2,1}_H. [...] are assigned by x, since each such job j is high on i in p^2. Since T^1 ≥ T_L, we must have n_{p^1,T_L} [...] T̂, since n^{2,1}_L ≤ N − |S|. Thus we get the inequality |S''|·L + (N − |S''|)·H > T̂. Now consider the point in the execution of the algorithm on instance p^2 just before the last high job is assigned to i in step 3 (there must be such a job, since n^{2,1}_L > 0). The load on i at this point is |S|·L + (N − |S| − 1)·H, which is at least |S''|·L − L = T̂ by a similar argument as above. By the greedy property, every i' ≠ i also has at least this load at this point, so Σ_j p^2_{i'j} x^2_{i'j} ≥ T̂. Adding these inequalities over all i' ≠ i, together with the earlier inequality for i, we get that [...]

5.3 Computation of prices
Lemmas 5.7 and 5.12 show that our algorithm is a 2-approximation algorithm that satisfies cycle monotonicity. Thus, by the discussion in Section 3, there exist prices that yield a truthful mechanism. To obtain a polynomial-time mechanism, we also need to show how
to compute these prices (or payments) in polynomial time. It is not clear whether the procedure outlined in Section 3, based on computing shortest paths in the allocation graph, yields a polynomial-time algorithm, since the allocation graph has an exponential number of nodes (one for each output assignment). Instead of analyzing the allocation graph, we will leverage our proof of cycle monotonicity, in particular inequality (7), and simply spell out the payments. Recall that the utility of a player is u_i = P_i − l_i, where P_i is the payment made to player i. For convenience, we will first specify negative payments (i.e., the P_i's will actually be prices charged to the players) and then show that these can be modified so that players have non-negative utilities (if they act truthfully). Let H_i denote the number of jobs assigned to machine i in step 3. By Claim 5.6, we know that all these jobs are assigned to high machines (according to the declared p_i's). Let H_{−i} = Σ_{i'≠i} H_{i'} and n^{−i} [...]

We can interpret our payments as equating the player's cost to a careful modification of the total load (in the spirit of VCG prices). The first and second terms in (10), when subtracted from i's load l_i, equate i's cost to the total load. The term n_{p,T*(p)} − n_{p,T^i_L(p_{−i})} is in fact equal to n^{−i}_{p,T*(p)}|_{T^i_L(p_{−i})}, since the low-load on i is at most T^i_L(p_{−i}) (by Corollary 5.9). Thus, the last term in equation (10) implies that we treat the low jobs that were assigned beyond the T^i_L(p_{−i}) threshold (to machines other than i) effectively as high jobs in the total-utility calculation from i's point of view. It is not clear how one could have conjured up these payments a priori in order to prove the truthfulness of our algorithm. However, by relying on cycle monotonicity, we were not only able to argue the existence of payments, but our proof also paved the way for actually inferring them. The following lemma explicitly verifies that
the payments defined above do indeed give a truthful mechanism.\nLemma 5.13 Fix a player i and the other players' declarations p_{−i}.\nLet i's true type be p_i^1.\nThen, under the payments defined in (10), i's utility when she declares her true type p_i^1 is at least her utility when she declares any other type p_i^2.\nPROOF.\nLet c_i^1, c_i^2 denote i's total cost, defined as the negative of her utility, when she declares p_i^1 and p_i^2, respectively (and the others declare p_{−i}).\nSince p_{−i} is fixed, we omit p_{−i} from the expressions below for notational clarity.\nThe true load of i when she declares her true type p_i^1 is\nPrice specifications are commonly required to satisfy, in addition to truthfulness, individual rationality, i.e., a player's utility should be non-negative if she reveals her true value.\nThe payments given by (10) are not individually rational as they actually charge a player a certain amount.\nHowever, it is well-known that this problem can be easily solved by adding a large-enough constant to the price definition.\nIn our case, for example, letting H̃ denote the vector of all H's, we can add the term n · H − (H − L) · n_{(H̃, p_{−i}), T_i^L(p_{−i})} to (10).\nNote that this is a constant for player i.
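The constant-shift argument above is a general fact about quasi-linear utilities: adding to a player's payment any amount that does not depend on her own declaration shifts the utility of every declaration equally, so her best declaration is unchanged. A minimal numerical sketch (the payment and load values below are made up for illustration; this is not the paper's actual payment rule):

```python
# Adding a declaration-independent constant to payments preserves the
# player's preference ordering over declarations (toy numbers only).
def best_declaration(payment, load, declarations):
    # utility of declaring d is payment(d) - load(d)
    return max(declarations, key=lambda d: payment(d) - load(d))

declarations = ["low", "high"]
load = {"low": 5.0, "high": 3.0}
payment = {"low": 4.0, "high": 1.0}  # utilities -1 and -2: not individually rational
shifted = {d: payment[d] + 100.0 for d in declarations}  # add a bid-independent constant

assert best_declaration(payment.get, load.get, declarations) == \
       best_declaration(shifted.get, load.get, declarations) == "low"
```

The shift can therefore be chosen large enough to make the truthful utility non-negative without disturbing incentives.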
Thus, the new payments are P′_i(p) = n · H − L · n_{−i}\nby (11), this will indeed result in a non-negative utility for i (since n_{(H̃, p_{−i}), T_i^L(p_{−i})} ≤ n_{(p_i, p_{−i}), T_i^L(p_{−i})} for any type p_i of player i).\nThis modification also ensures the additionally desired normalization property that if a player receives no jobs then she receives zero payment: if player i receives the empty set for some type p_i, then she will also receive the empty set for the type H̃ (this is easy to verify for our specific algorithm), and for the type H̃ her utility equals zero; thus, by truthfulness, this must also be the utility of every other declaration that results in i receiving the empty set.\nThis completes the proof of Theorem 5.3.\n5.4 Impossibility of exact implementation\nWe now show that, irrespective of computational considerations, there does not exist a cycle-monotone algorithm for the L-H case with an approximation ratio better than 1.14.\nLet H = α · L for some 2 < α < 2.5 that we will choose later.\nThere are two machines I, II and seven jobs.\nConsider the following two scenarios: Scenario 1.\nEvery job has the same processing time on both machines: jobs 1--5 are L, and jobs 6, 7 are H. Any optimal schedule assigns jobs 1--5 to one machine and jobs 6, 7 to the other, and has makespan OPT1 = 5L.\nThe second-best schedule has makespan at least Second1 = 2H + L.
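The makespans in Scenario 1 can be checked by brute force over all 2^7 job-to-machine assignments. A quick sketch, using integer-scaled illustrative values L = 1000 and H = 2364 (consistent with 2 < α < 2.5):

```python
from itertools import product

L, H = 1000, 2364          # H = alpha * L with alpha = 2.364, scaled to integers
times = [L] * 5 + [H] * 2  # jobs 1-5 take L, jobs 6-7 take H on both machines

def makespan(assignment):
    # assignment[j] in {0, 1} is the machine receiving job j
    loads = [0, 0]
    for job_time, machine in zip(times, assignment):
        loads[machine] += job_time
    return max(loads)

all_makespans = [makespan(a) for a in product([0, 1], repeat=7)]
assert min(all_makespans) == 5 * L  # OPT1 = 5L: jobs 1-5 vs. jobs 6, 7
# assigning jobs 6, 7 together with one of the L-jobs yields makespan 2H + L
assert makespan((0, 0, 0, 0, 1, 1, 1)) == 2 * H + L
```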
Scenario 2.\nIf the algorithm chooses an optimal schedule for scenario 1, assume without loss of generality that jobs 6, 7 are assigned to machine II.\nIn scenario 2, machine I has the same processing-time vector.\nMachine II lowers jobs 6, 7 to L and increases 1--5 to H.\nAn optimal schedule has makespan OPT2 = 2L + H, where machine II gets jobs 6, 7 and one of the jobs 1--5.\nThe second-best schedule for this scenario has makespan at least Second2 = 5L.\nTheorem 5.14 No deterministic truthful mechanism for the two-value scheduling problem can obtain an approximation ratio better than 1.14.\nPROOF.\nWe first argue that a cycle-monotone algorithm cannot choose the optimal schedule in both scenarios.\nThis follows because otherwise cycle monotonicity is violated for machine II.\nTaking p_II^1, p_II^2 to be machine II's processing-time vectors for scenarios 1, 2 respectively, we get Σ_j (p_{II,j}^1 − p_{II,j}^2)(x_{II,j}^2 − x_{II,j}^1) = (L − H)(1 − 0) < 0.\nThus, any truthful mechanism must return a sub-optimal makespan in at least one scenario, and therefore its approximation ratio is at least min{Second1\/OPT1, Second2\/OPT2} > 1.14 for α = 2.364.\nWe remark that for the {Lj, Hj}-case where there is a common ratio r = Hj\/Lj for all jobs (this generalizes the restricted-machines setting) one can obtain a fractional truthful mechanism (with efficiently computable prices) that returns a schedule of makespan at most OPT(p) for every p.\nOne can view each job j as consisting of Lj sub-jobs of size 1 on a machine i if p_{ij} = Lj, and size r if p_{ij} = Hj.\nFor this new instance p̃, note that p̃_{ij} ∈ {1, r} for every i, j.
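The sub-job construction just described can be sketched directly (function and variable names are ours; the instance is a small made-up example):

```python
# Each job j with p_ij in {L_j, r * L_j} becomes L_j sub-jobs whose size on
# machine i is 1 if p_ij = L_j, and r if p_ij = r * L_j, so every sub-job
# processing time lies in {1, r}.
def subjob_instance(p, L, r):
    tilde = []
    for row in p:                      # one row of processing times per machine
        sub = []
        for j, pij in enumerate(row):
            size = 1 if pij == L[j] else r
            sub.extend([size] * L[j])  # L_j sub-jobs of this size
        tilde.append(sub)
    return tilde

r = 3
L = [2, 1]            # the L_j values
p = [[2, 3], [6, 1]]  # machine 0: job 0 low, job 1 high; machine 1: job 0 high, job 1 low
pt = subjob_instance(p, L, r)
assert all(v in (1, r) for row in pt for v in row)
# per job, the sub-job sizes sum to p_ij (L_j * 1 = L_j, or L_j * r = H_j),
# so each machine's total work is preserved
assert [sum(row) for row in pt] == [sum(row) for row in p]
```

Because a job's sub-jobs on machine i sum to exactly p_ij, assigning fractions of sub-jobs translates back to a fractional assignment for the original instance.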
Notice also that any assignment x̃ for the instance p̃ translates to a fractional assignment x for p, where p_{ij} x_{ij} = Σ_{j′: sub-job of j} p̃_{ij′} x̃_{ij′}.\nThus, if we use Algorithm 2 to obtain a schedule for the instance p̃, equation (6) translates precisely to (3) for the assignment x; moreover, the prices for p̃ translate to prices for the instance p.\nThe number of sub-jobs assigned to low-machines in the flow-phase is simply the total work assigned to low-machines.\nThus, we can implement the above reduction by setting up a max-flow problem that seeks to maximize the total work assigned to low machines.\nMoreover, since we have a fractional domain, we can use a more efficient greedy rule for packing the unassigned portions of jobs and argue that the fractional assignment has makespan at most OPT(p).\nThe assignment x need not however satisfy the condition that x_{ij} > 0 implies p_{ij} ≤ OPT(p); when (as in the restricted-machines setting) this condition does hold, we get a 2-approximation truthful mechanism.","keyphrases":["truth mechan design","mechan design","multi-dimension schedul","schedul","schedul","cycl monoton","makespan minim","algorithm","approxim algorithm","random mechan","fraction mechan us","fraction domain"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","R"]} {"id":"C-32","title":"BuddyCache: High-Performance Object Storage for Collaborative Strong-Consistency Applications in a WAN","abstract":"Collaborative applications provide a shared work environment for groups of networked clients collaborating on a common task. They require strong consistency for shared persistent data and efficient access to fine-grained objects. These properties are difficult to provide in wide-area networks because of high network latency. BuddyCache is a new transactional caching approach that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments.
The challenge is to improve performance while providing the correctness and availability properties of a transactional caching protocol in the presence of node failures and slow peers. We have implemented a BuddyCache prototype and evaluated its performance. Analytical results, confirmed by measurements of the BuddyCache prototype using the multiuser 007 benchmark indicate that for typical Internet latencies, e.g. ranging from 40 to 80 milliseconds round trip time to the storage server, peers using BuddyCache can reduce by up to 50% the latency of access to shared objects compared to accessing the remote servers directly.","lvl-1":"BuddyCache: High-Performance Object Storage for Collaborative Strong-Consistency Applications in a WAN \u2217 Magnus E. Bjornsson and Liuba Shrira Department of Computer Science Brandeis University Waltham, MA 02454-9110 {magnus, liuba}@cs.brandeis.edu ABSTRACT Collaborative applications provide a shared work environment for groups of networked clients collaborating on a common task.\nThey require strong consistency for shared persistent data and efficient access to fine-grained objects.\nThese properties are difficult to provide in wide-area networks because of high network latency.\nBuddyCache is a new transactional caching approach that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments.\nThe challenge is to improve performance while providing the correctness and availability properties of a transactional caching protocol in the presence of node failures and slow peers.\nWe have implemented a BuddyCache prototype and evaluated its performance.\nAnalytical results, confirmed by measurements of the BuddyCache prototype using the multiuser 007 benchmark indicate that for typical Internet latencies, e.g.
ranging from 40 to 80 milliseconds round trip time to the storage server, peers using BuddyCache can reduce by up to 50% the latency of access to shared objects compared to accessing the remote servers directly.\nCategories and Subject Descriptors C.2.4 [Computer Systems Organization]: Distributed Systems General Terms Design, Performance 1.\nINTRODUCTION Improvements in network connectivity erode the distinction between local and wide-area computing and, increasingly, users expect their work environment to follow them wherever they go.\nNevertheless, distributed applications may perform poorly in wide-area network environments.\nNetwork bandwidth problems will improve in the foreseeable future, but improvement in network latency is fundamentally limited.\nBuddyCache is a new object caching technique that addresses the network latency problem for collaborative applications in wide-area network environments.\nCollaborative applications provide a shared work environment for groups of networked users collaborating on a common task, for example a team of engineers jointly overseeing a construction project.\nStrong-consistency collaborative applications, for example CAD systems, use client\/server transactional object storage systems to ensure consistent access to shared persistent data.\nUp to now, however, users have rarely considered running consistent network storage systems over wide-area networks as performance would be unacceptable [24].\nFor transactional storage systems, the high cost of wide-area network interactions to maintain data consistency is the main cost limiting the performance and therefore, in wide-area network environments, collaborative applications have been adapted to use weaker consistency storage systems [22].\nAdapting an application to use a weak-consistency storage system requires significant effort since the application needs to be rewritten to deal with different storage system semantics.\nIf shared persistent objects could be accessed with
low latency, a new field of distributed strong-consistency applications could be opened.\nCooperative web caching [10, 11, 15] is a well-known approach to reducing client interaction with a server by allowing one client to obtain missing objects from another client instead of the server.\nCollaborative applications seem a particularly good match to benefit from this approach since one of the hard problems, namely determining what objects are cached where, becomes easy in small groups typical of collaborative settings.\nHowever, cooperative web caching techniques do not provide two important properties needed by collaborative applications, strong consistency and efficient access to fine-grained objects.\nCooperative object caching systems [2] provide these properties.\nHowever, they rely on interaction with the server to provide fine-grain cache coherence that avoids the problem of false sharing when accesses to unrelated objects appear to conflict because they occur on the same physical page.\nInteraction with the server increases latency.\nThe contribution of this work is extending cooperative caching techniques to provide strong consistency and efficient access to fine-grain objects in wide-area environments.\nConsider a team of engineers employed by a construction company overseeing a remote project and working in a shed at the construction site.\nThe engineers use a collaborative CAD application to revise and update complex project design documents.\nThe shared documents are stored in transactional repository servers at the company home site.\nThe engineers use workstations running repository clients.\nThe workstations are interconnected by a fast local Ethernet but the network connection to the home repository servers is slow.\nTo improve access latency, clients fetch objects from repository servers and cache and access them locally.\nA coherence protocol ensures that client caches remain consistent when objects are modified.\nThe performance problem
facing the collaborative application is coordinating consistent access to shared objects with the servers.\nWith BuddyCache, a group of close-by collaborating clients, connected to a storage repository via a high-latency link, can avoid interactions with the server if needed objects, updates or coherency information are available in some client in the group.\nBuddyCache presents two main technical challenges.\nOne challenge is how to provide efficient access to shared fine-grained objects in the collaborative group without imposing performance overhead on the entire caching system.\nThe other challenge is to support fine-grain cache coherence in the presence of slow and failed nodes.\nBuddyCache uses a redirection approach similar to one used in cooperative web caching systems [11].\nA redirector server, interposed between the clients and the remote servers, runs on the same network as the collaborating group and, when possible, replaces the function of the remote servers.\nIf the client request cannot be served locally, the redirector forwards it to a remote server.\nWhen one of the clients in the group fetches a shared object from the repository, the object is likely to be needed by other clients.\nBuddyCache redirects subsequent requests for this object to the caching client.\nSimilarly, when a client creates or modifies a shared object, the new data is likely to be of potential interest to all group members.\nBuddyCache uses redirection to support peer update, a lightweight application-level multicast technique that provides group members with consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group.\nNevertheless, in a transactional system, redirection interferes with shared object availability.\nSolo commit is a validation technique used by BuddyCache to avoid the undesirable client dependencies that reduce object availability when some client nodes in the group are slow, or clients fail
independently.\nA salient feature of solo commit is supporting fine-grained validation using inexpensive coarse-grained coherence information.\nSince redirection provides the performance benefits of reduced interaction with the server but introduces extra processing cost due to availability mechanisms and request forwarding, this raises the question: is the cure worse than the disease?\nWe designed and implemented a BuddyCache prototype and studied its performance benefits and costs using analytical modeling and system measurements.\nWe compared the storage system performance with and without BuddyCache and considered how the cost-benefit balance is affected by network latency.\nAnalytical results, supported by measurements based on the multi-user 007 benchmark, indicate that for typical Internet latencies BuddyCache provides significant performance benefits, e.g. for latencies ranging from 40 to 80 milliseconds round trip time, clients using the BuddyCache can reduce by up to 50% the latency of access to shared objects compared to the clients accessing the repository directly.\nThese strong performance gains could make transactional object storage systems more attractive for collaborative applications in wide-area environments.\n2.\nRELATED WORK Cooperative caching techniques [20, 16, 13, 2, 28] provide access to client caches to avoid high disk access latency in an environment where servers and clients run on a fast local area network.\nThese techniques use the server to provide redirection and do not consider issues of high network latency.\nMultiprocessor systems and distributed shared memory systems [14, 4, 17, 18, 5] use fine-grain coherence techniques to avoid the performance penalty of false sharing but do not address issues of availability when nodes fail.\nCooperative Web caching techniques (e.g.
[11, 15]) investigate issues of maintaining a directory of objects cached in nearby proxy caches in a wide-area environment, using distributed directory protocols for tracking cache changes.\nThis work does not consider issues of consistent concurrent updates to shared fine-grained objects.\nCheriton and Li propose MMO [12], a hybrid web coherence protocol that combines invalidations with updates using multicast delivery channels and a receiver-reliable protocol, exploiting locality in a way similar to BuddyCache.\nThis multicast transport-level solution is geared to the single-writer semantics of web objects.\nIn contrast, BuddyCache uses application-level multicast and a sender-reliable coherence protocol to provide similar access latency improvements for transactional objects.\nAn application-level multicast solution in a middleware system was described by Pendarakis, Shi and Verma in [27].\nThe scheme supports small multi-sender groups appropriate for collaborative applications and considers coherence issues in the presence of failures but does not support strong consistency or fine-grained sharing.\nYin, Alvisi, Dahlin and Lin [32, 31] present a hierarchical WAN cache coherence scheme.\nThe protocol uses leases to provide fault-tolerant call-backs and takes advantage of nearby caches to reduce the cost of lease extensions.\nThe study uses simulation to investigate latency and fault tolerance issues in a hierarchical avoidance-based coherence scheme.\nIn contrast, our work uses implementation and analysis to evaluate the costs and benefits of redirection and fine-grained updates in an optimistic system.\nAnderson, Eastham and Vahdat in WebFS [29] present a global file system coherence protocol that allows clients to choose on a per-file basis between receiving updates or invalidations.\nUpdates and invalidations are multicast on separate channels and clients subscribe to one of the channels.\nThe protocol exploits application-specific methods, e.g.
last-writer-wins policy for broadcast applications, to deal with concurrent updates but is limited to file systems.\nMazieres studies a bandwidth-saving technique [24] to detect and avoid repeated file fragment transfers across a WAN when fragments are available in a local cache.\nBuddyCache provides similar bandwidth improvements when objects are available in the group cache.\n3.\nBUDDYCACHE High network latency imposes a performance penalty on transactional applications accessing shared persistent objects in wide-area network environments.\nThis section describes the BuddyCache approach for reducing the network latency penalty in collaborative applications and explains the main design decisions.\nWe consider a system in which a distributed transactional object repository stores objects in highly reliable servers, perhaps outsourced in data-centers connected via high-bandwidth reliable networks.\nCollaborating clients, interconnected via a fast local network, connect via high-latency, possibly satellite, links to the servers at the data-centers to access shared persistent objects.\nThe servers provide disk storage for the persistent objects.\nA persistent object is owned by a single server.\nObjects may be small (on the order of 100 bytes for programming language objects [23]).\nTo amortize the cost of disk and network transfer, objects are grouped into physical pages.\nTo improve object access latency, clients fetch the objects from the servers and cache and access them locally.\nA transactional cache coherence protocol runs at clients and servers to ensure that client caches remain consistent when objects are modified.\nThe performance problem facing the collaborating client group is the high latency of coordinating consistent access to the shared objects.\nThe BuddyCache architecture is based on a request redirection server, interposed between the clients and the remote servers.\nThe interposed server (the redirector) runs on the same network as the collaborative group and,
when possible, replaces the function of the remote servers.\nIf the client request can be served locally, the interaction with the server is avoided.\nIf the client request cannot be served locally, the redirector forwards it to a remote server.\nThe redirection approach has been used to improve the performance of web caching protocols.\nThe BuddyCache redirector supports the correctness, availability and fault-tolerance properties of a transactional caching protocol [19].\nThe correctness property ensures one-copy serializability of the objects committed by the client transactions.\nThe availability and fault-tolerance properties ensure that a crashed or slow client does not disrupt any other client's access to persistent objects.\nThe three types of client-server interactions in a transactional caching protocol are the commit of a transaction, the fetch of an object missing in a client cache, and the exchange of cache coherence information.\nBuddyCache avoids interactions with the server when a missing object or cache coherence information needed by a client is available within the collaborating group.\nThe redirector always interacts with the servers at commit time because only storage servers provide transaction durability in a way that ensures committed data remains available in the presence of client or redirector failures.\nFigure 1 shows the overall BuddyCache architecture.\nFigure 1: BuddyCache.\n3.1 Cache Coherence The redirector maintains a directory of pages cached at each client to provide cooperative caching [20, 16, 13, 2, 28], redirecting a client fetch request to another client that caches the requested object.\nIn addition, the redirector manages cache coherence.\nSeveral efficient transactional cache coherence protocols [19] exist for persistent object storage systems.\nProtocols make different choices in granularity of data transfers and granularity of cache
consistency.\nThe current best-performing protocols use page granularity transfers when clients fetch missing objects from a server and object granularity coherence to avoid false (page-level) conflicts.\nThe transactional caching taxonomy [19] proposed by Carey, Franklin and Livny classifies the coherence protocols into two main categories according to whether a protocol avoids or detects access to stale objects in the client cache.\nThe BuddyCache approach could be applied to both categories with different performance costs and benefits in each category.\nWe chose to investigate BuddyCache in the context of OCC [3], the current best performing detection-based protocol.\nWe chose OCC because it is simple, performs well in high-latency networks, has been implemented and we had access to the implementation.\nWe are investigating BuddyCache with PSAA [33], the best performing avoidance-based protocol.\nBelow we outline the OCC protocol [3].\nThe OCC protocol uses object-level coherence.\nWhen a client requests a missing object, the server transfers the containing page.\nA transaction can read and update locally cached objects without server intervention.\nHowever, before a transaction commits it must be validated; the server must make sure the validating transaction has not read a stale version of some object that was updated by a successfully committed or validated transaction.\nIf validation fails, the transaction is aborted.\nFigure 2: Peer fetch.\nTo reduce the number and cost of aborts, a server sends background object invalidation messages to clients caching the containing pages.\nWhen clients receive invalidations they remove stale objects from the cache and send background acknowledgments to let the server know about this.\nSince invalidations remove stale objects from the client cache, an invalidation acknowledgment indicates to the server that a client with no outstanding invalidations has read up-to-date
objects.\nAn unacknowledged invalidation indicates a stale object may have been accessed in the client cache.\nThe validation procedure at the server aborts a client transaction if a client reads an object while an invalidation is outstanding.\nThe acknowledged invalidation mechanism supports object-level cache coherence without object-based directories or per-object version numbers.\nAvoiding per-object overheads is very important to reduce performance penalties [3] of managing many small objects, since typical objects are small.\nAn important BuddyCache design goal is to maintain this benefit.\nSince in BuddyCache a page can be fetched into a client cache without server intervention (as illustrated in figure 2), cache directories at the servers keep track of pages cached in each collaborating group rather than each client.\nThe redirector keeps track of pages cached in each client in a group.\nServers send to the redirector invalidations for pages cached in the entire group.\nThe redirector propagates invalidations from servers to affected clients.\nWhen all affected clients acknowledge invalidations, the redirector can propagate the group acknowledgment to the server.\n3.2 Light-weight Peer Update When one of the clients in the collaborative group creates or modifies shared objects, the copies cached by any other client become stale but the new data is likely to be of potential interest to the group members.\nThe goal in BuddyCache is to provide group members with efficient and consistent access to updates committed within the group without imposing extra overhead on other parts of the storage system.\nThe two possible approaches to deal with stale data are cache invalidations and cache updates.\nCache coherence studies in web systems (e.g. [7]), DSM systems (e.g. [5]), and transactional object systems (e.g.
[19]) compare the benefits of update and invalidation.\nFigure 3: Peer update.\nThe studies show the benefits are strongly workload-dependent.\nIn general, invalidation-based coherence protocols are efficient since invalidations are small, batched and piggybacked on other messages.\nMoreover, invalidation protocols match the current hardware trend for increasing client cache sizes.\nLarger caches are likely to contain much more data than is actively used.\nUpdate-based protocols that propagate updates to low-interest objects in a wide-area network would be wasteful.\nNevertheless, invalidation-based coherence protocols can perform poorly in high-latency networks [12] if the object's new value is likely to be of interest to another group member.\nWith an invalidation-based protocol, one member's update will invalidate another member's cached copy, causing the latter to perform a high-latency fetch of the new value from the server.\nBuddyCache circumvents this well-known bandwidth vs.
latency trade-off imposed by update and invalidation protocols in wide-area network environments.\nIt avoids the latency penalty of invalidations by using the redirector to retain and propagate updates committed by one client to other clients within the group.\nThis avoids the bandwidth penalty of updates because servers propagate invalidations to the redirectors.\nAs far as we know, this use of localized multicast in the BuddyCache redirector is new and has not been used in earlier caching systems.\nThe peer update works as follows.\nAn update commit request from a client arriving at the redirector contains the object updates.\nThe redirector retains the updates and propagates the request to the coordinating server.\nAfter the transaction commits, the coordinator server sends a commit reply to the redirector of the committing client group.\nThe redirector forwards the reply to the committing client, and also propagates the retained committed updates to the clients caching the modified pages (see figure 3).\nSince the groups outside the BuddyCache propagate invalidations, there is no extra overhead outside the committing group.\n3.3 Solo commit In the OCC protocol, clients acknowledge server invalidations (or updates) to indicate removal of stale data.\nFigure 4: Validation with Slow Peers.\nThe straightforward group acknowledgement protocol, where the redirector collects and propagates a collective acknowledgement to the server, interferes with the availability property of the transactional caching protocol [19] since a client that is slow to acknowledge an invalidation or has failed can delay a group acknowledgement and prevent another client in the group from committing a transaction.\nE.g.
an engineer that commits a repeated revision to the same shared design object (and therefore holds the latest version of the object) may need to abort if the group acknowledgement has not propagated to the server.\nConsider a situation depicted in figure 4 where Client1 commits a transaction T that reads the latest version of an object x on page P recently modified by Client1.\nIf the commit request for T reaches the server before the collective acknowledgement from Client2 for the last modification of x arrives at the server, the OCC validation procedure considers x to be stale and aborts T (because, as explained above, an invalidation unacknowledged by a client acts as an indication to the server that the cached object value is stale at the client).\nNote that while invalidations are not required for the correctness of the OCC protocol, they are very important for the performance since they reduce the performance penalties of aborts and false sharing.\nThe asynchronous invalidations are an important part of the reason OCC has competitive performance with PSAA [33], the best performing avoidance-based protocol [3].\nNevertheless, since invalidations are sent and processed asynchronously, invalidation processing may be arbitrarily delayed at a client.\nLease-based (time-out based) schemes have been proposed to improve the availability of hierarchical callback-based coherence protocols [32] but the asynchronous nature of invalidations makes the lease-based approaches inappropriate for asynchronous invalidations.\nThe Solo commit validation protocol allows a client with up-to-date objects to commit a transaction even if the group acknowledgement is delayed due to slow or crashed peers.\nThe protocol requires clients to include extra information with the transaction read sets in the commit message, to indicate to the server that the objects read by the transaction are up-to-date.\nObject version numbers could provide a simple way to track up-to-date objects but, as mentioned
above, maintaining per-object version numbers imposes unacceptably high overheads (in disk storage, I\/O costs and directory size) on the entire object system when objects are small [23].\nInstead, solo commit uses coarse-grain page version numbers to identify fine-grain object versions.\nA page version number is incremented at a server when a transaction that modifies objects on the page commits.\nUpdates committed by a single transaction and the corresponding invalidations are therefore uniquely identified by the modified page version number.\nPage version numbers are propagated to clients in fetch replies, commit replies and with invalidations, and clients include page version numbers in commit requests sent to the servers.\nIf a transaction fails validation due to a missing group acknowledgement, the server checks page version numbers of the objects in the transaction read set and allows the transaction to commit if the client has read from the latest page version.\nThe page version numbers enable independent commits but page version checks only detect page-level conflicts.\nTo detect object-level conflicts and avoid the problem of false sharing we need the acknowledged invalidations.\nSection 4 describes the details of the implementation of solo commit support for fine-grain sharing.\n3.4 Group Configuration The BuddyCache architecture supports multiple concurrent peer groups.\nPotentially, it may be faster to access data cached in another peer group than to access a remote server.\nIn such a case, extending BuddyCache protocols to support multi-level peer caching could be worthwhile.\nWe have not pursued this possibility for several reasons.\nIn web caching workloads, simply increasing the population of clients in a proxy cache often increases the overall cache hit rate [30].\nIn BuddyCache applications, however, we expect sharing to result mainly from explicit client interaction and collaboration, suggesting that inter-group fetching is unlikely to occur.\nMoreover,
measurements from multi-level web caching systems [9] indicate that a multi-level system may not be advantageous unless the network connection between the peer groups is very fast. We are primarily interested in environments where closely collaborating peers have fast close-range connectivity, but the connection between peer groups may be slow. As a result, we decided that support for inter-group fetching in BuddyCache is not a high priority right now.

To support heterogeneous resource-rich and resource-poor peers, the BuddyCache redirector can be configured to run either in one of the peer nodes or, when available, in a separate node within the site infrastructure. Moreover, in a resource-rich infrastructure node, the redirector can be configured as a stand-by peer cache that receives pages fetched by other peers, emulating a central cache somewhat similar to a regional web proxy cache. From the point of view of the BuddyCache cache coherence protocol, however, such a stand-by peer cache is equivalent to a regular peer cache, and therefore we do not consider this case separately in the discussion in this paper.

4. IMPLEMENTATION

In this section we provide the details of the BuddyCache implementation. We have implemented BuddyCache in the Thor client/server object-oriented database [23]. Thor supports high-performance access to distributed objects and therefore provides a good test platform to investigate BuddyCache performance.

4.1 Base Storage System

Thor servers provide persistent storage for objects, and clients cache copies of these objects. Applications run at the clients and interact with the system by making calls on methods of cached objects. All method calls occur within atomic transactions. Clients communicate with servers to fetch pages or to commit a transaction. The servers have a disk for storing persistent objects, a stable transaction log, and volatile memory. The disk is organized as a collection of pages which are the units of disk
access. The stable log holds commit information and object modifications for committed transactions. The server memory contains a cache directory and a recoverable modified object cache called the MOB. The directory keeps track of which pages are cached by which clients. The MOB holds recently modified objects that have not yet been written back to their pages on disk. As the MOB fills up, a background process propagates modified objects to the disk [21, 26].

4.2 Base Cache Coherence

Transactions are serialized using the optimistic concurrency control (OCC) scheme [3] described in Section 3.1. We provide some of the relevant OCC protocol implementation details here. The client keeps track of objects that are read and modified by its transaction; it sends this information, along with new copies of modified objects, to the servers when it tries to commit the transaction. The servers determine whether the commit is possible, using a two-phase commit protocol if the transaction used objects at multiple servers. If the transaction commits, the new copies of modified objects are appended to the log and also inserted in the MOB. The MOB is recoverable, i.e.
if the server crashes, the MOB is reconstructed at recovery by scanning the log.

Since objects are not locked before being used, a transaction commit can cause caches to contain obsolete objects. Servers will abort a transaction that used obsolete objects. However, to reduce the probability of aborts, servers notify clients when their objects become obsolete by sending them invalidation messages; a server uses its directory and the information about the committing transaction to determine what invalidation messages to send. Invalidation messages are small because they simply identify obsolete objects. Furthermore, they are sent in the background, batched and piggybacked on other messages. When a client receives an invalidation message, it removes obsolete objects from its cache and aborts the current transaction if it used them. The client continues to retain pages containing invalidated objects; these pages are now incomplete, with holes in place of the invalidated objects. Performing invalidation on an object basis means that false sharing does not cause unnecessary aborts; keeping incomplete pages in the client cache means that false sharing does not lead to unnecessary cache misses. Clients acknowledge invalidations to indicate removal of stale data, as explained in Section 3.1. Invalidation messages prevent some aborts, and accelerate those that must happen, thus wasting less work and offloading detection of aborts from servers to clients. When a transaction aborts, its client restores the cached copies of modified objects to the state they had before the transaction started; this is possible because a client makes a copy of an object the first time it is modified by a transaction.

4.3 Redirection

The redirector runs on the same local network as the peer group, in one of the peer nodes or in a special node within the infrastructure. It maintains a directory of pages available in the peer group and provides fast centralized fetch redirection
(see figure 2) between the peer caches. To improve performance, clients inform the redirector when they evict pages or objects by piggybacking that information on messages sent to the redirector.

To ensure that up-to-date objects are fetched from the group cache, the redirector tracks the status of the pages. A cached page is either complete, in which case it contains consistent values for all the objects, or incomplete, in which case some of the objects on the page are marked invalid. Only complete pages are used by the peer fetch. The protocol for maintaining page status as pages are updated and invalidated is described in Section 4.4.

When a client request has to be processed at the servers, e.g., a complete requested page is unavailable in the peer group or a peer needs to commit a transaction, the redirector acts as a server proxy: it forwards the request to the server, and then forwards the reply back to the client. In addition, in response to invalidations sent by a server, the redirector distributes the update or invalidation information to clients caching the modified page and, after all clients acknowledge, propagates the group acknowledgment back to the server (see figure 3). The redirector-server protocol is, in effect, the client-server protocol used in the base Thor storage system, with the combined peer group cache playing the role of a single client cache in the base system.

4.4 Peer Update

The peer update is implemented as follows. An update commit request from a client arriving at the redirector contains the object updates. The redirector retains the updates and propagates the request to the coordinator server. After a transaction commits, using a two-phase commit if needed, the coordinator server sends a commit reply to the redirector of the committing client group. The redirector forwards the reply to the committing client. It then waits for the invalidations to arrive in order to propagate the corresponding retained (committed) updates to the clients
caching the modified pages (see figure 3). Participating servers that are home to objects modified by the transaction generate object invalidations for each cache group that caches pages containing the modified objects (including the committing group). The invalidations are sent lazily to the redirectors to ensure that all the clients in the groups caching the modified objects get rid of the stale data. In cache groups other than the committing group, redirectors propagate the invalidations to all the clients caching the modified pages, collect the client acknowledgments and, after completing the collection, propagate collective acknowledgments back to the server. Within the committing client group, the arriving invalidations are not propagated. Instead, updates are sent to clients caching those objects' pages, the updates are acknowledged by the clients, and the collective acknowledgment is propagated to the server.

An invalidation renders a cached page unavailable for peer fetch, changing the status of a complete page into incomplete. In contrast, an update of a complete page preserves the complete page status. As shown by studies of fragment reconstruction [2], such update propagation avoids the performance penalties of false sharing. That is, when clients within a group modify different objects on the same page, the page retains its complete status and remains available for peer fetch. Therefore, the effect of peer update is similar to eager fragment reconstruction [2]. We have also considered the possibility of allowing a peer to fetch an incomplete page (with invalid objects marked accordingly) but decided against it because of the extra complexity involved in tracking invalid objects.

4.5 Vcache

The solo commit validation protocol allows clients with up-to-date objects to commit independently of slower (or failed) group members. As explained in Section 3.3, the solo commit protocol allows a transaction T to pass
validation if extra coherence information supplied by the client indicates that transaction T has read up-to-date objects. Clients use page version numbers to provide this extra coherence information. That is, a client includes the page version number corresponding to each object in the read object set sent in the commit request to the server. Since a unique page version number corresponds to each committed object update, the page version number associated with an object allows the validation procedure at the server to check whether the client transaction has read up-to-date objects.

The use of coarse-grain page versions to identify object versions avoids the high penalty of maintaining persistent object versions for small objects, but requires an extra protocol at the client to maintain the mapping from a cached object to the identifying page version (ObjectToVersion). The main implementation issue is maintaining this mapping efficiently.

At the server side, when modifications commit, servers associate page version numbers with the invalidations. At validation time, if an unacknowledged invalidation is pending for an object x read by a transaction T, the validation procedure checks whether the version number for x in T's read set matches the version number of the highest pending invalidation for x; if so, the object value is current, otherwise T fails validation. We note again that the page version number-based checks and the invalidation acknowledgment-based checks are complementary in the solo commit validation, and both are needed. The page version number check allows the validation to proceed before invalidation acknowledgments arrive, but by itself a page version number check detects only page-level conflicts and is not sufficient to support fine-grain coherence without the object-level invalidations.

We now describe how the client manages the mapping ObjectToVersion. The client maintains a page version number for each cached page. The version
number satisfies the following invariant VP about the state of objects on a page: if a cached page P has a version number v, then the value of an object o on P is either invalid or it reflects at least the modifications committed by transactions preceding the transaction that set P's version number to v.

New object values and new page version numbers arrive when a client fetches a page or when a commit reply or invalidations arrive for this page. The new object values modify the page and, therefore, the page version number needs to be updated to maintain the invariant VP.

[Figure 5: Reordered Invalidations]

A page version number that arrives when a client fetches a page replaces the page version number for this page. Such an update preserves the invariant VP. Similarly, an in-sequence page version number arriving at the client in a commit or invalidation message advances the version number for the entire cached page, without violating VP. However, invalidations or updates and their corresponding page version numbers can also arrive at the client out of sequence, in which case updating the page version number could violate VP.
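The bookkeeping needed here can be made concrete with a minimal sketch. The names below (`VCache`, `observe`, `version_for`) are ours for illustration, not from the BuddyCache implementation: out-of-sequence page version numbers are buffered per object until the gap in the sequence closes, at which point the version number for the whole page advances, and the commit-time read set resolves an object's version through this mapping.

```python
# Hypothetical sketch of client-side version bookkeeping; names and data
# layout are illustrative, not taken from the BuddyCache implementation.
# Page version numbers are assumed to start at 1.

class VCache:
    def __init__(self):
        self.page_version = {}  # page_id -> highest in-sequence version
        self.reordered = {}     # page_id -> {version: [obj_ids]}

    def observe(self, page_id, version, obj_ids):
        """Record a version number arriving in a commit reply, update or
        invalidation that covers the given objects."""
        cur = self.page_version.setdefault(page_id, 0)
        pending = self.reordered.setdefault(page_id, {})
        pending[version] = obj_ids
        # Advance the whole page's version over any now-contiguous run,
        # discarding consumed reordered entries (preserves invariant VP).
        while cur + 1 in pending:
            cur += 1
            del pending[cur]
        self.page_version[page_id] = cur

    def version_for(self, page_id, obj_id):
        """Version number reported for a read object at commit time."""
        pending = self.reordered.get(page_id, {})
        hits = [v for v, objs in pending.items() if obj_id in objs]
        # Highest reordered entry wins; otherwise the page version applies.
        return max(hits) if hits else self.page_version.get(page_id, 0)
```

For instance, if version 3 of page P arrives before version 2, the objects named in the reordered message report version 3 while the rest of the page still reports version 1; once version 2 arrives, the page version advances to 3 and the reordered entries are discarded.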
For example, a commit reply for a transaction that updates object x on page P in server S1, and object y on page Q in server S2, may deliver a new version number for P from the transaction coordinator S1 before an invalidation generated for an earlier transaction that has modified object r on page P arrives from S1 (as shown in figure 5). The cache update protocol ensures that the value of any object o in a cached page P reflects the update or invalidation with the highest observed version number. That is, obsolete updates or invalidations received out of sequence do not affect the value of an object. To maintain the ObjectToVersion mapping and the invariant VP in the presence of out-of-sequence arrival of page version numbers, the client manages a small version number cache, the vcache, that maintains the mapping from an object to its corresponding page version number for all reordered version number updates until a complete page version number sequence is assembled. When the missing version numbers for the page arrive and complete a sequence, the version number for the entire page is advanced.

The ObjectToVersion mapping, including the vcache and page version numbers, is used at transaction commit time to provide version numbers for the read object set as follows. If a read object has an entry in the vcache, its version number is equal to the highest version number in the vcache for this object. If the object is not present in the vcache, its version number is equal to the version number of its containing cached page. Figure 6 shows the ObjectToVersion mapping in the client cache, including the page version numbers for pages and the vcache. The client can limit the vcache size as needed, since re-fetching a page removes all reordered page version numbers from the vcache. However, we expect version number reordering to be uncommon and therefore expect the vcache to be very small.

[Figure 6: ObjectToVersion map with vcache]

5. BUDDYCACHE FAILOVER

A client group contains multiple client nodes and a redirector that can fail independently. The goal of the failover protocol is to reconfigure the BuddyCache in the case of a node failure, so that the failure of one node does not disrupt other clients from accessing shared objects. Moreover, the failure of the redirector should allow unaffected clients to keep their caches intact. We have designed a failover protocol for BuddyCache but have not implemented it yet. The appendix outlines the protocol.

6. PERFORMANCE EVALUATION

BuddyCache redirection provides the performance benefits of avoiding communication with the servers but introduces extra processing cost due to availability mechanisms and request forwarding. Is the cure worse than the disease? To answer this question, we have implemented a BuddyCache prototype for the OCC protocol and conducted experiments to analyze the performance benefits and costs over a range of network latencies.

6.1 Analysis

The performance benefits of peer fetch and peer update are due to avoided server interactions. This section presents a simple analytical performance model for this benefit. The avoided server interactions correspond to different types of client cache misses: cold misses, invalidation misses and capacity misses. Our analysis focuses on cold misses and invalidation misses, since the benefit of avoiding capacity misses can be derived from the cold misses. Moreover, technology trends indicate that memory and storage capacity will continue to grow, and therefore a typical BuddyCache configuration is likely not to be cache limited.

The client cache misses are determined by several variables, including the workload and the cache configuration. Our analysis tries, as much as possible, to separate these variables so they can be controlled in the validation experiments. To study the benefit of avoiding cold misses, we consider cold cache
performance in a read-only workload (no invalidation misses). We expect peer fetch to improve the latency cost of client cold cache misses by fetching objects from a nearby cache. We evaluate how the redirection cost affects this benefit by comparing and analyzing the performance of an application running in a storage system with BuddyCache and without it (called Base).

To study the benefit of avoiding invalidation misses, we consider hot cache performance in a workload with modifications (and no cold misses). In hot caches we expect BuddyCache to provide two complementary benefits, both of which reduce the latency of access to shared modified objects. Peer update lets a client access an object modified by a nearby collaborating peer without the delay imposed by invalidation-only protocols. In groups where peers share a read-only interest in the modified objects, peer fetch allows a client to access a modified object as soon as a collaborating peer has it, which avoids the delay of a server fetch without the high cost imposed by update-only protocols. Technology trends indicate that both benefits will remain important in the foreseeable future. The trend toward increased available network bandwidth decreases the cost of the update-only protocols. However, the trend toward increasingly large caches that are updated when cached objects are modified makes invalidation-based protocols more attractive.

To evaluate these two benefits we compare the performance of an application running without BuddyCache with that of an application running BuddyCache in two configurations: one where a peer in the group modifies the objects, and another where the objects are modified by a peer outside the group.

Peer update can also avoid invalidation misses due to false sharing, introduced when multiple peers update different objects on the same page concurrently. We do not analyze this benefit (demonstrated by earlier work [2]) because our benchmarks do not allow us to control
object layout, and also because this benefit can be derived given the cache hit rate and workload contention.

6.1.1 The Model

The model considers how the time to complete an execution with and without BuddyCache is affected by invalidation misses and cold misses. Consider k clients running concurrently, uniformly accessing a shared set of N pages in BuddyCache (BC) and Base. Let t_fetch(S), t_redirect(S), t_commit(S), and t_compute(S) be the time it takes a client to, respectively, fetch from a server, peer fetch, commit a transaction and compute in a transaction, in a system S, where S is either a system with BuddyCache (BC) or without (Base). For simplicity, our model assumes the fetch and commit times are constant. In general they may vary with the server load, e.g. they depend on the total number of clients in the system.

The number of misses avoided by peer fetch depends on k, the number of clients in the BuddyCache, and on the client co-interest in the shared data. In a specific BuddyCache execution it is modeled by the variable r, defined as the number of fetches arriving at the redirector for a given version of a page P (i.e. until an object on the page is invalidated).

Consider an execution with cold misses. A client starts with a cold cache and runs a read-only workload until it accesses all N pages while committing l transactions. We assume there are no capacity misses, i.e. the client cache is large enough to hold N pages. In BC, r cold misses for page P reach the redirector. The first of the misses fetches P from the server, and the subsequent r − 1 misses are redirected. Since each client accesses the entire shared set, r = k.
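The counting argument above can be checked with a toy simulation (ours, for illustration; not the paper's model code): k cold-cache clients each fault in all N shared pages through the redirector, so the first miss on each page goes to the server and every later miss is redirected, giving r = k misses per page with exactly one server fetch.

```python
# Toy cold-miss simulation; an illustrative sketch, not the paper's code.
# Clients run one after another here; interleaving them would not change
# the counts, since only the first miss on each page reaches the server.

def cold_miss_counts(k, N):
    """Count server fetches vs. redirected (peer) fetches when k cold
    clients each access all N shared pages through the redirector."""
    cached_in_group = set()   # pages some peer already holds
    server_fetches = redirects = 0
    for _client in range(k):
        for page in range(N):
            if page in cached_in_group:
                redirects += 1            # peer fetch via the redirector
            else:
                server_fetches += 1       # first miss goes to the server
                cached_in_group.add(page)
    return server_fetches, redirects
```

With k = 5 and N = 10, only the 10 first misses reach the server and the remaining N * (k − 1) = 40 are served by peers, consistent with r = k.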
Let T_cold(Base) and T_cold(BC) be the time it takes to complete the l transactions in Base and BC.

T_cold(Base) = N * t_fetch(Base) + (t_compute + t_commit(Base)) * l    (1)

T_cold(BC) = N * (1/k * t_fetch(BC) + (1 − 1/k) * t_redirect) + (t_compute + t_commit(BC)) * l    (2)

Consider next an execution with invalidation misses. A client starts with a hot cache containing the working set of N pages. We focus on a simple case where one client (the writer) runs a workload with modifications, and the other clients (the readers) run a read-only workload. In a group containing the writer (BCW), peer update eliminates all invalidation misses. In a group containing only readers (BCR), during a steady-state execution with uniform updates, a client transaction has miss_inv invalidation misses. Consider the sequence of r client misses on page P that arrive at the redirector in BCR between two consecutive invalidations of page P. The first miss goes to the server, and the r − 1 subsequent misses are redirected. Unlike with cold misses, r ≤ k, because the second invalidation disables redirection for P until the next miss on P causes a server fetch. Assuming uniform access, a client invalidation miss has a chance of 1/r of being the first miss (resulting in a server fetch), and a chance of (1 − 1/r) of being redirected. Let T_inval(Base), T_inval(BCR) and T_inval(BCW) be the time it takes to complete a single transaction in the Base, BCR and BCW systems.

T_inval(Base) = miss_inv * t_fetch(Base) + t_compute + t_commit(Base)    (3)

T_inval(BCR) = miss_inv * (1/r * t_fetch(BCR) + (1 − 1/r) * t_redirect(BCR)) + t_compute + t_commit(BCR)    (4)

T_inval(BCW) = t_compute + t_commit(BCW)    (5)

In the experiments described below, we measure the parameters N, r, miss_inv, t_fetch(S), t_redirect(S), t_commit(S), and t_compute(S). We compute the completion times derived using the above model and derive the benefits. We then validate the model by comparing the
derived values to the completion times and benefits measured directly in the experiments.

6.2 Experimental Setup

Before presenting our results we describe our experimental setup. We use two systems in our experiments. The Base system runs the Thor distributed object storage system [23], with clients connecting directly to the servers. The Buddy system runs our BuddyCache prototype in Thor, supporting peer fetch, peer update and solo commit, but not failover.

Our workloads are based on the multi-user OO7 benchmark [8]; this benchmark is intended to capture the characteristics of many different multi-user CAD/CAM/CASE applications, but does not model any specific application. We use OO7 because it is a standard benchmark for measuring object storage system performance. The OO7 database contains a tree of assembly objects with leaves pointing to three composite parts chosen randomly from among 500 such objects. Each composite part contains a graph of atomic parts linked by connection objects; each atomic part has 3 outgoing connections. We use a medium database that has 200 atomic parts per composite part. The multi-user database allocates for each client a private module consisting of one tree of assembly objects, and adds an extra shared module that scales proportionally to the number of clients.

We expect a typical BuddyCache configuration not to be cache limited and therefore focus on workloads where the objects in the client working set fit in the cache. Since the goal of our study is to evaluate how effectively our techniques deal with access to shared objects, we limit client access to shared data only. This allows us to study the effect our techniques have on cold cache and cache consistency misses and to isolate as much as possible the effect of cache capacity misses. To keep the length of our experiments reasonable, we use small caches. The OO7 benchmark generates database modules of predefined size. In our
implementation of OO7, the private module size is about 38MB. To make sure that the entire working set fits into the cache we use a single private module and choose a cache size of 40MB for each client. The OO7 database is generated with modules for 3 clients, only one of which is used in our experiments, as explained above. The objects in the database are clustered in 8K pages, which are also the unit of transfer in the fetch requests.

We consider two types of transaction workloads in our analysis, read-only and read-write. In the OO7 benchmark, read-only transactions use the T1 traversal, which performs a depth-first traversal of the entire composite part graph. Write transactions use the T2b traversal, which is identical to T1 except that it modifies all the atomic parts in a single composite. A single transaction includes one traversal and there is no sleep time between transactions. Both read-only and read-write transactions always work with data from the same module. Clients running read-write transactions do not modify data in every transaction; instead, they have a 50% probability of running a read-only transaction.

The database was stored by a server on a 40GB IBM 7200RPM hard drive with an 8.5 ms average seek time and a 40 MB/sec data transfer rate. In the Base system clients connect directly to the database. In the Buddy system clients connect to the redirector, which connects to the database. We run the experiments with 1-10 clients in Base, and one or two 1-10 client groups in Buddy. The server, the clients and the redirectors ran on 850MHz Intel Pentium III PCs with 512MB of memory, running Linux Red Hat 6.2. They were connected by a 100Mb/s Ethernet. The server was configured with a 50MB cache (of which 6MB were used for the modified object buffer); each client had a 40MB cache. The experiments ran in the Utah experimental testbed emulab.net [1].

Table 1: Commit and server fetch latency [ms]

            Base              Buddy
            3 group  5 group  3 group  5 group
Fetch       1.3      1.4      2.4      2.6
Commit      2.5      5.5      2.4      5.7

Table 2: Peer fetch latency breakdown

Operation          Latency [ms]
PeerFetch          1.8 - 5.5
- AlertHelper      0.3 - 4.6
- CopyUnswizzle    0.24
- CrossRedirector  0.16

6.3 Basic Costs

This section analyzes the basic cost of the requests in the Buddy system during the OO7 runs.

6.3.1 Redirection

Fetch and commit requests in the BuddyCache cross the redirector, a cost not incurred in the Base system. For a request redirected to the server (server fetch) the extra cost of redirection includes a local request from the client to the redirector on the way to and from the server. We evaluate this latency overhead indirectly by comparing the measured latency of a Buddy system server fetch or commit request with the measured latency of the corresponding request in the Base system. Table 1 shows the latency for the commit and server fetch requests in the Base and Buddy systems for 3 client and 5 client groups in a fast local area network. All the numbers were computed by averaging measured request latency over 1000 requests. The measurements show that the cost of crossing the redirector is not very high even in a local area network. The commit cost increases with the number of clients since commits are processed sequentially. The fetch cost does not increase as much because the server cache reduces this cost. In a large system with many groups, however, the server cache becomes less efficient.

To evaluate the overheads of the peer fetch, we measure the peer fetch latency (PeerFetch) at the requesting client and break down its component costs. In peer fetch, the cost of the redirection includes, in addition to the local network request cost, the CPU processing latency of crossing the redirector and crossing the helper, the latter including the time to process the help request and the time to copy and unswizzle the requested page. We directly measured the time to copy and unswizzle the requested page at the helper (CopyUnswizzle), and timed the crossing
times using a null crossing request. Table 2 summarizes the latencies that allow us to break down the peer fetch costs. CrossRedirector includes the CPU latency of crossing the redirector plus a local network round-trip, and is measured by timing a round-trip null request issued by a client to the redirector. AlertHelper includes the time for the helper to notice the request plus a network round-trip, and is measured by timing a round-trip null request issued from an auxiliary client to the helper client. The local network latency is fixed and less than 0.1 ms. The AlertHelper latency, which includes the elapsed time from the arrival of the help request until the start of help request processing, is highly variable and therefore contributes to the high variability of the PeerFetch time. This is because the client in the Buddy system is currently single-threaded and therefore only starts processing a help request when blocked waiting for a fetch or commit reply. This overhead is not inherent to the BuddyCache architecture and could be mitigated by a multi-threaded implementation in a system with pre-emptive scheduling.

6.3.2 Version Cache

The solo commit allows a fast client modifying an object to commit independently of a slow peer. The solo commit mechanism introduces extra processing at the server at transaction validation time, and extra processing at the client at transaction commit time and at update or invalidation processing time. The server-side overheads are minimal and consist of a page version number update at commit time and a version number comparison at transaction validation time. The version cache has an entry only when invalidations or updates arrive out of order. This may happen when a transaction accesses objects in multiple servers. Our experiments run in a single-server system, and therefore the commit-time overhead of version cache management at the client does not contribute to the results presented in the section below. To gauge these
client-side overheads in a multiple-server system, we instrumented the version cache implementation to run with a workload trace that included reordered invalidations and timed the basic operations. The extra commit-time processing at the client includes a version cache lookup operation for each object read by the transaction at commit request preparation time, and a version cache insert operation for each object updated by the transaction at commit reply processing time, but only if the updated page is missing some earlier invalidations or updates. It is important that the extra commit-time costs are kept to a minimum, since the client is synchronously waiting for the commit completion. The measurements show that in the worst case, when a large number of invalidations arrive out of order and about half of the objects modified by T2a (200 objects) reside on reordered pages, the cost of updating the version cache is 0.6 ms. The invalidation-time costs are comparable, but since invalidations and updates are processed in the background this cost is less important for the overall performance. We are currently working on optimizing the version cache implementation to further reduce these costs.

6.4 Overall Performance

This section examines the performance gains seen by an application running the OO7 benchmark with a BuddyCache in a wide area network.

6.4.1 Cold Misses

To evaluate the performance gains from avoiding cold misses we compare the cold cache performance of the OO7 benchmark running a read-only workload in the Buddy and Base systems. We derive the times by timing the execution of the systems in the local area network environment and substituting 40 ms and 80 ms delays for the requests crossing the redirector and the server to estimate the performance in the wide area network.

[Figure 7: Breakdown for cold read-only 40ms RTT]

[Figure 8: Breakdown for cold read-only 80ms RTT]

Figures 7 and 8 show the overall time to complete 1000 cold cache transactions. The numbers were obtained by averaging the overall time of each client in the group. The results show that in a 40 ms network the Buddy system significantly reduces the overall time compared to the Base system, providing a 39% improvement in a three client group, a 46% improvement in the five client group and a 56% improvement in the ten client case. The overall time includes time spent performing client computation, direct fetch requests, peer fetches, and commit requests. In the three client group, Buddy and Base incur almost the same commit cost and therefore the entire performance benefit of Buddy is due to peer fetch avoiding direct fetches. In the five and ten client groups the server fetch cost for an individual client decreases because, with more clients faulting a fixed-size shared module into the BuddyCache, each client needs to perform fewer server fetches.

Figure 8 shows the overall time and cost breakdown in the 80 ms network. The BuddyCache provides similar performance improvements as with the 40ms network. Higher network latency increases the performance advantage of peer fetch relative to direct fetch, but this benefit is offset by the increased commit times.

[Figure 9: Cold miss benefit]

[Figure 10: Breakdown for hot read-write 40ms RTT]

Figure 9 shows the relative latency improvement provided by BuddyCache (computed as the overall measured time difference between Buddy and Base relative to Base) as a function of
network latency, with a fixed server load. The cost of the extra mechanism dominates the BuddyCache benefit when network latency is low. At typical Internet latencies of 20-60 ms the benefit increases with latency and levels off around 60 ms with a significant improvement (up to 62% for ten clients). Figure 9 includes both the measured improvement and the improvement derived using the analytical model. Remarkably, the analytical results predict the measured improvement very closely, albeit somewhat higher than the empirical values. The main reason the simplified model works well is that it captures the dominant performance component, the network latency cost.

6.4.2 Invalidation Misses
To evaluate the performance benefits provided by BuddyCache due to avoided invalidation misses, we compared the hot-cache performance of the Base system with two different Buddy system configurations. One configuration represents a collaborating peer group modifying shared objects (the Writer group); the other represents a group whose peers share a read-only interest in the modified objects (the Reader group), with the writer residing outside the BuddyCache group. In each of the three systems, a single client runs a read-write workload (the writer) and three other clients run a read-only workload (the readers).

[Figure 11: Breakdown for hot read-write, 80 ms RTT]

A Buddy system with one group containing a single reader and another group containing two readers and one writer models the Writer group. A Buddy system with one group containing a single writer and another group containing three readers models the Reader group. In Base, one writer and three readers access the server directly. This simple configuration is sufficient to show the impact of the BuddyCache techniques. Figures 10 and 11 show the overall time to complete 1000 hot-cache OO7 read-only transactions. We obtain the numbers by
running 2000 transactions to filter out cold misses and then timing the next 1000 transactions. Here again, the reported numbers are derived from the local-area-network experiment results. The results show that BuddyCache significantly reduces the completion time compared to the Base system. In a 40 ms network, the overall time in the Writer group improves by 62% compared to Base. This benefit is due to peer update, which avoids all misses due to updates. The overall time in the Reader group improves by 30%; this is due to peer fetch, which allows a client to access an invalidated object at the cost of a local fetch, avoiding the delay of fetching from the server. The latter is an important benefit because it shows that, on workloads with updates, peer fetch allows an invalidation-based protocol to provide some of the benefits of an update-based protocol. Note that the performance benefit delivered by peer fetch in the Reader group is approximately 50% less than the performance benefit delivered by peer update in the Writer group. This difference is similar in the 80 ms network.

[Figure 12: Invalidation miss benefit]

Figure 12 shows the relative latency improvement provided by BuddyCache in the Buddy Reader and Buddy Writer configurations (computed as the overall time difference between Buddy Reader and Base relative to Base, and between Buddy Writer and Base relative to Base) in a hot-cache experiment as a function of increasing network latency, for a fixed server load. The peer update benefit dominates the overhead in the Writer configuration even in a low-latency network (peer update incurs minimal overhead) and offers a significant 44-64% improvement over the entire latency range. The figure includes both the measured improvement and the improvement derived using the analytical model. As in the cold-cache experiments, the analytical results predict the measured improvement closely. The difference is minimal in the Writer group and somewhat higher in the Reader group (consistent with the results of the cold-cache experiments). As in the cold-cache case, the simplified analytical model works well because it captures the cost of network latency, the dominant performance cost.

7. CONCLUSION
Collaborative applications provide a shared work environment for groups of networked clients collaborating on a common task. They require strong consistency for shared persistent data and efficient access to fine-grained objects. These properties are difficult to provide in wide-area networks because of high network latency. This paper described BuddyCache, a new transactional cooperative caching [20, 16, 13, 2, 28] technique that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments. The technique improves performance yet provides strong correctness and availability properties in the presence of node failures and slow clients. BuddyCache uses redirection to fetch missing objects directly from group members' caches and to support peer update, a new lightweight application-level multicast technique that gives group members consistent access to new data committed within the collaborating group without imposing extra overhead outside the group. Redirection, however, can interfere with object availability. Solo commit is a new validation technique that allows a client in a group to commit independently of slow or failed peers. It provides fine-grained validation using inexpensive coarse-grained version information. We have designed and implemented a BuddyCache prototype in the Thor distributed transactional object storage system [23] and evaluated the benefits and costs of the system over a range of network latencies. Analytical results, supported by the system
measurements using the multi-user OO7 benchmark, indicate that for typical Internet latencies BuddyCache provides significant performance benefits: for latencies ranging from 40 to 80 milliseconds round-trip time, clients using BuddyCache can reduce the latency of access to shared objects by up to 50% compared to clients accessing the repository directly.

The main contributions of the paper are:
1. extending cooperative caching techniques to support fine-grain strong-consistency access in high-latency environments,
2. an implementation of a system prototype that yields strong performance gains over the base system,
3. an analytical and measurement-based performance evaluation of the costs and benefits of the new techniques, capturing the dominant performance cost, high network latency.

8. ACKNOWLEDGMENTS
We are grateful to Jay Lepreau and the staff of the Utah experimental testbed emulab.net [1], especially Leigh Stoller, for hosting the experiments and for help with the testbed. We also thank Jeff Chase, Maurice Herlihy, Butler Lampson and the OOPSLA reviewers for the useful comments that improved this paper.

9. REFERENCES
[1] "emulab.net", the Utah Network Emulation Facility. http://www.emulab.net.
[2] A. Adya, M. Castro, B. Liskov, U. Maheshwari, and L. Shrira. Fragment Reconstruction: Providing Global Cache Coherence in a Transactional Storage System. In Proceedings of the International Conference on Distributed Computing Systems, May 1997.
[3] A. Adya, R. Gruber, B. Liskov, and U. Maheshwari. Efficient Optimistic Concurrency Control Using Loosely Synchronized Clocks. In Proceedings of the ACM SIGMOD International Conference on Management of Data, May 1995.
[4] C. Amza, A. L. Cox, S. Dwarkadas, P. Keleher, H. Lu, R. Rajamony, W. Yu, and W. Zwaenepoel. TreadMarks: Shared Memory Computing on Networks of Workstations. IEEE Computer, 29(2), February 1996.
[5] C. Anderson and A. Karlin. Two Adaptive Hybrid Cache Coherency Protocols. In Proceedings of the 2nd IEEE Symposium on High-Performance Computer Architecture (HPCA '96), February 1996.
[6] M. Baker. Fast Crash Recovery in Distributed File Systems. PhD thesis, University of California at Berkeley, 1994.
[7] P. Cao and C. Liu. Maintaining Strong Cache Consistency in the World Wide Web. In 17th International Conference on Distributed Computing Systems, April 1998.
[8] M. Carey, D. J. DeWitt, C. Kant, and J. F. Naughton. A Status Report on the OO7 OODBMS Benchmarking Effort. In Proceedings of OOPSLA, October 1994.
[9] A. Chankhunthod, M. Schwartz, P. Danzig, K. Worrell, and C. Neerdaels. A Hierarchical Internet Object Cache. In USENIX Annual Technical Conference, January 1995.
[10] J. Chase, S. Gadde, and M. Rabinovich. Directory Structures for Scalable Internet Caches. Technical Report CS-1997-18, Dept. of Computer Science, Duke University, November 1997.
[11] J. Chase, S. Gadde, and M. Rabinovich. Not All Hits Are Created Equal: Cooperative Proxy Caching Over a Wide-Area Network. In Third International WWW Caching Workshop, June 1998.
[12] D. R. Cheriton and D. Li. Scalable Web Caching of Frequently Updated Objects Using Reliable Multicast. In 2nd USENIX Symposium on Internet Technologies and Systems, October 1999.
[13] M. D. Dahlin, R. Y. Wang, T. E. Anderson, and D. A. Patterson. Cooperative Caching: Using Remote Client Memory to Improve File System Performance. In Proceedings of the USENIX Conference on Operating Systems Design and Implementation, November 1994.
[14] S. Dwarkadas, H. Lu, A. L. Cox, R. Rajamony, and W. Zwaenepoel. Combining Compile-Time and Run-Time Support for Efficient Software Distributed Shared Memory. In Proceedings of the IEEE, Special Issue on Distributed Shared Memory, March 1999.
[15] L. Fan, P. Cao, J. Almeida, and A. Broder. Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol. In Proceedings of ACM SIGCOMM, September 1998.
[16] M. Feeley, W. Morgan, F. Pighin, A. Karlin, and H. Levy. Implementing Global Memory Management in a Workstation Cluster. In Proceedings of the 15th ACM Symposium on Operating Systems Principles, December 1995.
[17] M. J. Feeley, J. S. Chase, V. R. Narasayya, and H. M. Levy. Integrating Coherency and Recoverability in Distributed Systems. In Proceedings of the First USENIX Symposium on Operating Systems Design and Implementation, May 1994.
[18] P. Ferreira, M. Shapiro, et al. PerDiS: Design, Implementation, and Use of a PERsistent DIstributed Store. In Recent Advances in Distributed Systems, LNCS 1752, Springer-Verlag, 1999.
[19] M. J. Franklin, M. Carey, and M. Livny. Transactional Client-Server Cache Consistency: Alternatives and Performance. ACM Transactions on Database Systems, 22:315-363, September 1997.
[20] M. Franklin, M. Carey, and M. Livny. Global Memory Management for Client-Server DBMS Architectures. In Proceedings of the 19th Intl. Conference on Very Large Data Bases (VLDB), August 1992.
[21] S. Ghemawat. The Modified Object Buffer: A Storage Management Technique for Object-Oriented Databases. PhD thesis, Massachusetts Institute of Technology, 1997.
[22] L. Kawell, S. Beckhardt, T. Halvorsen, R. Ozzie, and I. Greif. Replicated Document Management in a Group Communication System. In Proceedings of the ACM CSCW Conference, September 1988.
[23] B. Liskov, M. Castro, L. Shrira, and A. Adya. Providing Persistent Objects in Distributed Systems. In Proceedings of the 13th European Conference on Object-Oriented Programming (ECOOP '99), June 1999.
[24] A. Muthitacharoen, B. Chen, and D. Mazieres. A Low-Bandwidth Network File System. In 18th ACM Symposium on Operating Systems Principles, October 2001.
[25] B. Oki and B. Liskov. Viewstamped Replication: A New Primary Copy Method to Support Highly-Available Distributed Systems. In Proceedings of the ACM Symposium on Principles of Distributed Computing, August 1988.
[26] J. O'Toole and L. Shrira. Opportunistic Log: Efficient Installation Reads in a Reliable Object Server. In USENIX Symposium on Operating Systems Design and Implementation, November 1994.
[27] D. Pendarakis, S. Shi, and D. Verma. ALMI: An Application Level Multicast Infrastructure. In 3rd USENIX Symposium on Internet Technologies and Systems, March 2001.
[28] P. Sarkar and J. Hartman. Efficient Cooperative Caching Using Hints. In USENIX Symposium on Operating Systems Design and Implementation, October 1996.
[29] A. M. Vahdat, P. C. Eastham, and T. E. Anderson. WebFS: A Global Cache Coherent File System. Technical report, University of California, Berkeley, 1996.
[30] A. Wolman, G. Voelker, N. Sharma, N. Cardwell, A. Karlin, and H. Levy. On the Scale and Performance of Cooperative Web Proxy Caching. In 17th ACM Symposium on Operating Systems Principles, December 1999.
[31] J. Yin, L. Alvisi, M. Dahlin, and C. Lin. Hierarchical Cache Consistency in a WAN. In USENIX Symposium on Internet Technologies and Systems, October 1999.
[32] J. Yin, L. Alvisi, M. Dahlin, and C. Lin. Volume Leases for Consistency in Large-Scale Systems. IEEE Transactions on Knowledge and Data Engineering, 11(4), July/August 1999.
[33] M. Zaharioudakis, M. J. Carey, and M. J. Franklin. Adaptive, Fine-Grained Sharing in a Client-Server OODBMS: A Callback-Based Approach. ACM Transactions on Database Systems, 22:570-627, December 1997.

10. APPENDIX
This appendix outlines the BuddyCache failover protocol. To accommodate heterogeneous clients, including resource-poor hand-helds, we do not require the availability of persistent storage in the BuddyCache peer group. The BuddyCache design assumes that the client caches and the redirector data structures do not survive node failures. A failure of a client or a redirector is detected by a membership protocol that exchanges periodic "I am alive" messages between group members and initiates a failover protocol. The failover determines the active group participants, re-elects a redirector if needed, reinitializes the BuddyCache data structures in the new configuration and restarts the protocol. The group reconfiguration protocol is similar to the one presented in [25]. Here we describe how the failover manages the BuddyCache state.

To restart the BuddyCache protocol, the failover needs to resynchronize the redirector page directory and the client-server request forwarding so that active clients can continue running transactions using their caches. In the case of a client failure, the failover removes the crashed client's pages from the directory. Any response to an earlier request initiated by the failed client is ignored, except a commit reply, in which case the redirector distributes the retained committed updates to active clients caching the modified pages. In the case of a redirector failure, the failover protocol reinitializes sessions with the servers and clients, and rebuilds the page directory using a protocol similar to the one in [6]. The newly restarted redirector asks the active group members for the list of pages they are caching and the status of these pages, i.e.
whether the pages are complete or incomplete. Requests outstanding at the redirector at the time of the crash may be lost. A lost fetch request will time out at the client and will be retransmitted. A transaction running at a client during a failover and committing after the failover is treated as a regular transaction; a transaction trying to commit during a failover is aborted by the failover protocol. The client will restart the transaction, and the commit request will be retransmitted after the failover. Invalidations, updates or collected update acknowledgements lost at the crashed redirector could prevent the garbage collection of pending invalidations at the servers or of the vcache entries in the clients. Therefore, servers detecting a redirector crash retransmit unacknowledged invalidations and commit replies. Unique version numbers in invalidations and updates ensure that duplicate retransmitted requests are detected and discarded. Since the transaction validation procedure depends on the cache coherence protocol to ensure that transactions do not read stale data, we now need to argue that the BuddyCache failover protocol does not compromise the correctness of the validation procedure. Recall that BuddyCache transaction validation uses two complementary mechanisms, page version numbers and invalidation acknowledgements from the clients, to check that a transaction has read up-to-date data. The redirector-based invalidation (and update) acknowledgement propagation ensures the following invariant: when a server receives an acknowledgement for a modification (invalidation or update) of an object o from a client group, any client in the group caching o has either installed the latest value of o or has invalidated o.
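The duplicate-detection rule described above, where retransmitted invalidations and updates carry unique version numbers and a client never applies an earlier modification after a later one, can be sketched as follows. This is a minimal illustration under stated assumptions (per-object monotonically increasing version numbers); the class and method names are hypothetical and not taken from the Thor/BuddyCache implementation.

```python
class ObjectVersionState:
    """Tracks, per object, the highest modification version applied at a client.

    A retransmitted (or reordered) invalidation/update is applied only if its
    version number exceeds the highest version already seen for that object.
    This both discards duplicate retransmissions and preserves the invariant
    that an earlier modification is never applied after a later one.
    """

    def __init__(self):
        # Maps object id -> highest version number applied so far.
        self.highest_applied = {}

    def apply_modification(self, obj_id, version):
        """Return True if the modification should be applied,
        False if it is a duplicate or older than one already applied."""
        if version <= self.highest_applied.get(obj_id, -1):
            return False  # duplicate retransmission or stale modification
        self.highest_applied[obj_id] = version
        return True
```

For example, a retransmitted invalidation with the same version number as one already processed is discarded, as is an out-of-order modification whose version is lower than the latest one applied.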
Therefore, if a server receives a commit request from a client for a transaction T reading an object o after a failover in the client group, and the server has no unacknowledged invalidation for o pending for this group, the version of the object read by the transaction T is up-to-date independently of client or redirector failures.\nNow consider the validation using version numbers.\nThe transaction commit record contains a version number for each object read by the transaction.\nThe version number protocol maintains the invariant V P that ensures that the value of object o read by the transaction corresponds to the highest version number for o received by the client.\nThe invariant holds since the client never applies an earlier modification after a later modification has been received.\nRetransmition of invalidations and updates maintains this invariant.\nThe validation procedure checks that the version number in the commit record matches the version number in the unacknowledged outstanding invalidation.\nIt is straightforward to see that since this check is an end-to-end client-server check it is unaffected by client or redirector failure.\nThe failover protocol has not been implemented yet.\n39","lvl-3":"BuddyCache: High-Performance Object Storage for Collaborative Strong-Consistency Applications in a WAN *\nABSTRACT\nCollaborative applications provide a shared work environment for groups of networked clients collaborating on a common task.\nThey require strong consistency for shared persistent data and efficient access to fine-grained objects.\nThese properties are difficult to provide in wide-area networks because of high network latency.\nBuddyCache is a new transactional caching approach that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments.\nThe challenge is to improve performance while providing the correctness and availability properties of a transactional 
caching protocol in the presence of node failures and slow peers.\nWe have implemented a BuddyCache prototype and evaluated its performance.\nAnalytical results, confirmed by measurements of the BuddyCache prototype using the multiuser 007 benchmark indicate that for typical Internet latencies, e.g. ranging from 40 to 80 milliseconds round trip time to the storage server, peers using BuddyCache can reduce by up to 50% the latency of access to shared objects compared to accessing the remote servers directly.\n1.\nINTRODUCTION\nImprovements in network connectivity erode the distinction between local and wide-area computing and, increasingly, users expect their work environment to follow them wherever they go.\nNevertheless, distributed applications may perform poorly in wide-area network environments.\nNetwork bandwidth problems will improve in the foreseeable future, but improvement in network latency is fundamentally limited.\nBuddyCache is a new object caching technique that addresses the network latency problem for collaborative applications in wide-area network environment.\nCollaborative applications provide a shared work environment for groups of networked users collaborating on a common task, for example a team of engineers jointly overseeing a construction project.\nStrong-consistency collaborative applications, for example CAD systems, use client\/server transactional object storage systems to ensure consistent access to shared persistent data.\nUp to now however, users have rarely considered running consistent network storage systems over wide-area networks as performance would be unacceptable [24].\nFor transactional storage systems, the high cost of wide-area network interactions to maintain data consistency is the main cost limiting the performance and therefore, in wide-area network environments, collaborative applications have been adapted to use weaker consistency storage systems [22].\nAdapting an application to use weak consistency storage system 
requires significant effort since the application needs to be rewritten to deal with a different storage system semantics.\nIf shared persistent objects could be accessed with low-latency, a new field of distributed strong-consistency applications could be opened.\nCooperative web caching [10, 11, 15] is a well-known approach to reducing client interaction with a server by allowing one client to obtain missing objects from a another client instead of the server.\nCollaborative applications seem a particularly good match to benefit from this approach since one of the hard problems, namely determining what objects are cached where, becomes easy in small groups typical of collaborative settings.\nHowever, cooperative web caching techniques do not provide two important properties needed by collaborative applications, strong consistency and efficient\naccess to fine-grained objects.\nCooperative object caching systems [2] provide these properties.\nHowever, they rely on interaction with the server to provide fine-grain cache coherence that avoids the problem of false sharing when accesses to unrelated objects appear to conflict because they occur on the same physical page.\nInteraction with the server increases latency.\nThe contribution of this work is extending cooperative caching techniques to provide strong consistency and efficient access to fine-grain objects in wide-area environments.\nConsider a team of engineers employed by a construction company overseeing a remote project and working in a shed at the construction site.\nThe engineers use a collaborative CAD application to revise and update complex project design documents.\nThe shared documents are stored in transactional repository servers at the company home site.\nThe engineers use workstations running repository clients.\nThe workstations are interconnected by a fast local Ethernet but the network connection to the home repository servers is slow.\nTo improve access latency, clients fetch objects from 
repository servers and cache and access them locally.\nA coherence protocol ensures that client caches remain consistent when objects are modified.\nThe performance problem facing the collaborative application is coordinating with the servers consistent access to shared objects.\nWith BuddyCache, a group of close-by collaborating clients, connected to storage repository via a high-latency link, can avoid interactions with the server if needed objects, updates or coherency information are available in some client in the group.\nBuddyCache presents two main technical challenges.\nOne challenge is how to provide efficient access to shared finegrained objects in the collaborative group without imposing performance overhead on the entire caching system.\nThe other challenge is to support fine-grain cache coherence in the presence of slow and failed nodes.\nBuddyCache uses a\" redirection\" approach similar to one used in cooperative web caching systems [11].\nA redirector server, interposed between the clients and the remote servers, runs on the same network as the collaborating group and, when possible, replaces the function of the remote servers.\nIf the client request cannot be served locally, the redirector forwards it to a remote server.\nWhen one of the clients in the group fetches a shared object from the repository, the object is likely to be needed by other clients.\nBuddyCache redirects subsequent requests for this object to the caching client.\nSimilarly, when a client creates or modifies a shared object, the new data is likely to be of potential interest to all group members.\nBuddyCache uses redirection to support peer update, a lightweight\" application-level multicast\" technique that provides group members with consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group.\nNevertheless, in a transactional system, redirection interferes with shared object availability.\nSolo commit, is a 
validation technique used by BuddyCache to avoid the undesirable client dependencies that reduce object availability when some client nodes in the group are slow, or clients fail independently.\nA salient feature of solo commit is supporting fine-grained validation using inexpensive coarse-grained coherence information.\nSince redirection supports the performance benefits of reducing interaction with the server but introduces extra processing cost due to availability mechanisms and request forwarding, this raises the question is the\" cure\" worse than the\" disease\"?\nWe designed and implemented a BuddyCache prototype and studied its performance benefits and costs using analytical modeling and system measurements.\nWe compared the storage system performance with and without BuddyCache and considered how the cost-benefit balance is affected by network latency.\nAnalytical results, supported by measurements based on the multi-user 007 benchmark, indicate that for typical Internet latencies BuddyCache provides significant performance benefits, e.g. 
for latencies ranging from 40 to 80 milliseconds round trip time, clients using the BuddyCache can reduce by up to 50% the latency of access to shared objects compared to the clients accessing the repository directly.\nThese strong performance gains could make transactional object storage systems more attractive for collaborative applications in wide-area environments.\n2.\nRELATED WORK\nCooperative caching techniques [20, 16, 13, 2, 28] provide access to client caches to avoid high disk access latency in an environment where servers and clients run on a fast local area network.\nThese techniques use the server to provide redirection and do not consider issues of high network latency.\nMultiprocessor systems and distributed shared memory systems [14, 4, 17, 18, 5] use fine-grain coherence techniques to avoid the performance penalty of false sharing but do not address issues of availability when nodes fail.\nCooperative Web caching techniques, (e.g. [11, 15]) investigate issues of maintaining a directory of objects cached in nearby proxy caches in wide-area environment, using distributed directory protocols for tracking cache changes.\nThis work does not consider issues of consistent concurrent updates to shared fine-grained objects.\nCheriton and Li propose MMO [12] a hybrid web coherence protocol that combines invalidations with updates using multicast delivery channels and receiver-reliable protocol, exploiting locality in a way similar to BuddyCache.\nThis multicast transport level solution is geared to the single writer semantics of web objects.\nIn contrast, BuddyCache uses\" application level\" multicast and a sender-reliable coherence protocol to provide similar access latency improvements for transactional objects.\nApplication level multicast solution in a middle-ware system was described by Pendarakis, Shi and Verma in [27].\nThe schema supports small multi-sender groups appropriate for collaborative applications and considers coherence issues in the 
presence of failures but does not support strong consistency or fine-grained sharing.\nYin, Alvisi, Dahlin and Lin [32, 31] present a hierarchical WAN cache coherence scheme.\nThe protocol uses leases to provide fault-tolerant call-backs and takes advantage of nearby caches to reduce the cost of lease extensions.\nThe study uses simulation to investigate latency and fault tolerance issues in hierarchical avoidance-based coherence scheme.\nIn contrast, our work uses implementation and analysis to evaluate the costs and benefits of redirection and fine grained updates in an optimistic system.\nAnderson, Eastham and Vahdat in WebFS [29] present a global file system coherence protocol that allows clients to choose\non per file basis between receiving updates or invalidations.\nUpdates and invalidations are multicast on separate channels and clients subscribe to one of the channels.\nThe protocol exploits application specific methods e.g. last-writer-wins policy for broadcast applications, to deal with concurrent updates but is limited to file systems.\nMazieres studies a bandwidth saving technique [24] to detect and avoid repeated file fragment transfers across a WAN when fragments are available in a local cache.\nBuddyCache provides similar bandwidth improvements when objects are available in the group cache.\n3.\nBUDDYCACHE\n3.1 Cache Coherence\n3.2 Light-weight Peer Update\n3.3 Solo commit\n3.4 Group Configuration\n4.\nIMPLEMENTATION\n4.1 Base Storage System\n4.2 Base Cache Coherence\n4.3 Redirection\n4.4 Peer Update\n4.5 Vcache\n5.\nBUDDYCACHE FAILOVER\n6.\nPERFORMANCE EVALUATION\n6.1 Analysis\n6.1.1 The Model\n6.2 Experimental Setup\n6.3 Basic Costs\n6.3.1 Redirection\n6.3.2 Version Cache\n6.4 Overall Performance\n6.4.1 Cold Misses\n6.4.2 Invalidation Misses\n7.\nCONCLUSION\nCollaborative applications provide a shared work environment for groups of networked clients collaborating on a common task.\nThey require strong consistency for shared persistent data and 
efficient access to fine-grained objects.\nThese properties are difficult to provide in wide-area network because of high network latency.\nThis paper described BuddyCache, a new transactional cooperative caching [20, 16, 13, 2, 28] technique that improves the latency of access to shared persistent objects for collaborative strong-consistency applications in high-latency network environments.\nThe technique improves performance yet provides strong correctness and availability properties in the presence of node failures and slow clients.\nBuddyCache uses redirection to fetch missing objects directly from group members caches, and to support peer update, a new lightweight\" application-level multicast\" technique that gives group members consistent access to the new data committed within the collaborating group without imposing extra overhead outside the group.\nRedirection, however, can interfere with object availability.\nSolo commit, is a new validation technique that allows a client in a group to commit independently of slow or failed peers.\nIt provides fine-grained validation using inexpensive coarse-grain version information.\nWe have designed and implemented BuddyCache prototype in Thor distributed transactional object storage system [23] and evaluated the benefits and costs of the system over a range of network latencies.\nAnalytical results, supported by the system measurements using the multi-user 007 benchmark indicate, that for typical Internet latencies BuddyCache provides significant performance benefits, e.g. 
for latencies ranging from 40 to 80 milliseconds round trip time, clients using BuddyCache can reduce the latency of access to shared objects by up to 50% compared to clients accessing the repository directly.\nThe main contributions of the paper are: 1. new techniques that provide fine-grain strong-consistency access in high-latency environments, 2. an implementation of the system prototype that yields strong performance gains over the base system, and 3. an analytical and measurement-based performance evaluation of the costs and benefits of the new techniques, capturing the dominant performance cost: high network latency.\n3.\nBUDDYCACHE\nHigh network latency imposes a performance penalty on transactional applications accessing shared persistent objects in wide-area network environments.\nThis section describes the BuddyCache approach for reducing the network latency penalty in collaborative applications and explains the main design decisions.\nWe consider a system in which a distributed transactional object repository stores objects in highly reliable servers, perhaps outsourced in data-centers connected via high-bandwidth reliable networks.\nCollaborating clients, interconnected via a fast local network, connect via high-latency, possibly satellite, links to the servers at the 
data-centers to access shared persistent objects.\nThe servers provide disk storage for the persistent objects.\nA persistent object is owned by a single server.\nObjects may be small (on the order of 100 bytes for programming language objects [23]).\nTo amortize the cost of disk and network transfer, objects are grouped into physical pages.\nTo improve object access latency, clients fetch the objects from the servers and cache and access them locally.\nA transactional cache coherence protocol runs at clients and servers to ensure that client caches remain consistent when objects are modified.\nThe performance problem facing the collaborating client group is the high latency of coordinating consistent access to the shared objects.\nThe BuddyCache architecture is based on a request redirection server, interposed between the clients and the remote servers.\nThe interposed server (the redirector) runs on the same network as the collaborative group and, when possible, replaces the function of the remote servers.\nIf the client request can be served locally, the interaction with the server is avoided.\nIf the client request cannot be served locally, the redirector forwards it to a remote server.\nThe redirection approach has been used to improve the performance of web caching protocols.\nThe BuddyCache redirector supports the correctness, availability and fault-tolerance properties of a transactional caching protocol [19].\nThe correctness property ensures one-copy serializability of the objects committed by the client transactions.\nThe availability and fault-tolerance properties ensure that a crashed or slow client does not disrupt any other client's access to persistent objects.\nThe three types of client-server interactions in a transactional caching protocol are the commit of a transaction, the fetch of an object missing in a client cache, and the exchange of cache coherence information.\nBuddyCache avoids interactions with the server when a missing object or cache coherence information 
needed by a client is available within the collaborating group.\nThe redirector always interacts with the servers at commit time because only storage servers provide transaction durability in a way that ensures committed data remains available in the presence of client or redirector failures.\nFigure 1: BuddyCache.\nFigure 1 shows the overall BuddyCache architecture.\n3.1 Cache Coherence\nThe redirector maintains a directory of pages cached at each client to provide cooperative caching [20, 16, 13, 2, 28], redirecting a client fetch request to another client that caches the requested object.\nIn addition, the redirector manages cache coherence.\nSeveral efficient transactional cache coherence protocols [19] exist for persistent object storage systems.\nThe protocols make different choices in the granularity of data transfers and the granularity of cache consistency.\nThe current best-performing protocols use page-granularity transfers when clients fetch missing objects from a server and object-granularity coherence to avoid false (page-level) conflicts.\nThe transactional caching taxonomy [19] proposed by Carey, Franklin and Livny classifies the coherence protocols into two main categories according to whether a protocol avoids or detects access to stale objects in the client cache.\nThe BuddyCache approach could be applied to both categories, with different performance costs and benefits in each category.\nWe chose to investigate BuddyCache in the context of OCC [3], the current best-performing detection-based protocol.\nWe chose OCC because it is simple, performs well in high-latency networks, has been implemented, and we had access to the implementation.\nWe are also investigating BuddyCache with PSAA [33], the best-performing avoidance-based protocol.\nBelow we outline the OCC protocol [3].\nThe OCC protocol uses object-level coherence.\nWhen a client requests a missing object, the server transfers the containing page.\nTransactions can read and update locally cached objects without server intervention.
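The fetch path just described, in which the redirector consults its page directory and redirects a fetch to a caching peer before falling back to the remote server, can be sketched in a few lines. This is a minimal illustration under assumed names (`FetchRedirector`, `route_fetch`), not the actual BuddyCache implementation:

```python
# Sketch of peer-fetch redirection; all names are hypothetical illustrations.

class FetchRedirector:
    def __init__(self):
        # Directory: page id -> set of group clients caching that page.
        self.directory = {}

    def record_fetch(self, client, page):
        """Note that `client` now caches `page` (after any fetch completes)."""
        self.directory.setdefault(page, set()).add(client)

    def route_fetch(self, requester, page):
        """Return where the requester's fetch request should be sent."""
        peers = self.directory.get(page, set()) - {requester}
        if peers:
            # Some peer in the group caches the page: redirect locally,
            # avoiding a high-latency round trip to the remote server.
            return ("peer", next(iter(peers)))
        return ("server", page)  # miss in the whole group: forward upstream
```

With this structure, a second client's fetch of a page already cached by a buddy stays on the fast local network.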
However, before a transaction commits it must be "validated"; the server must make sure the validating transaction has not read a stale version of some object that was updated by a successfully committed or validated transaction.\nIf validation fails, the transaction is aborted.\nTo reduce the number and cost of aborts, a server sends background object invalidation messages to clients caching the containing pages.\nFigure 2: Peer fetch. Figure 3: Peer update.\nWhen clients receive invalidations they remove stale objects from the cache and send background acknowledgments to let the server know about this.\nSince invalidations remove stale objects from the client cache, an invalidation acknowledgment indicates to the server that a client with no outstanding invalidations has read up-to-date objects.\nAn unacknowledged invalidation indicates a stale object may have been accessed in the client cache.\nThe validation procedure at the server aborts a client transaction if a client reads an object while an invalidation is outstanding.\nThe "acknowledged invalidation" mechanism supports object-level cache coherence without object-based directories or per-object version numbers.\nAvoiding per-object overheads is very important to reduce the performance penalties [3] of managing many small objects, since typical objects are small.\nAn important BuddyCache design goal is to maintain this benefit.\nSince in BuddyCache a page can be fetched into a client cache without server intervention (as illustrated in Figure 2), cache directories at the servers keep track of pages cached in each collaborating group rather than each client.\nThe redirector keeps track of pages cached in each client in a group.\nServers send to the redirector invalidations for pages cached in the entire group.\nThe redirector propagates invalidations from servers to affected clients.\nWhen all affected clients acknowledge invalidations, the redirector can propagate the "group acknowledgment" to the server.
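The invalidation-acknowledgment flow above, where the redirector fans an invalidation out to affected clients and sends one group acknowledgment upstream only when every affected client has acknowledged, can be sketched roughly as follows. The class and field names are illustrative assumptions, not BuddyCache's own:

```python
# Sketch of group-acknowledgment aggregation; names are hypothetical.

class AckAggregator:
    def __init__(self):
        self.cached_at = {}   # page -> set of clients caching the page
        self.pending = {}     # page -> clients whose ack is still outstanding
        self.group_acks = []  # stand-in for acknowledgments sent to the server

    def on_invalidation(self, page):
        """Server invalidates a page cached somewhere in the group."""
        affected = set(self.cached_at.get(page, set()))
        if not affected:
            self.group_acks.append(page)   # nothing to wait for
        else:
            self.pending[page] = affected  # fan out to each affected client

    def on_client_ack(self, client, page):
        """A client acknowledges that it removed the stale objects."""
        waiting = self.pending.get(page)
        if waiting is None:
            return
        waiting.discard(client)
        if not waiting:                    # last outstanding ack arrived
            del self.pending[page]
            self.group_acks.append(page)   # one ack covers the whole group
```

The point of the aggregation is that the server sees a single acknowledgment per group, keeping its directory at group rather than per-client granularity.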
3.2 Light-weight Peer Update\nWhen one of the clients in the collaborative group creates or modifies shared objects, the copies cached by any other client become stale, but the new data is likely to be of potential interest to the group members.\nThe goal in BuddyCache is to provide group members with efficient and consistent access to updates committed within the group without imposing extra overhead on other parts of the storage system.\nThe two possible approaches to deal with stale data are cache invalidations and cache updates.\nCache coherence studies in web systems (e.g. [7]), DSM systems (e.g. [5]), and transactional object systems (e.g. [19]) compare the benefits of update and invalidation.\nThe studies show the benefits are strongly workload-dependent.\nIn general, invalidation-based coherence protocols are efficient since invalidations are small, batched and piggybacked on other messages.\nMoreover, invalidation protocols match the current hardware trend of increasing client cache sizes.\nLarger caches are likely to contain much more data than is actively used.\nUpdate-based protocols that propagate updates to low-interest objects in a wide-area network would be wasteful.\nNevertheless, invalidation-based coherence protocols can perform poorly in high-latency networks [12] if the object's new value is likely to be of interest to another group member.\nWith an invalidation-based protocol, one member's update will invalidate another member's cached copy, causing the latter to perform a high-latency fetch of the new value from the server.\nBuddyCache circumvents this well-known bandwidth vs. 
latency trade-off imposed by update and invalidation protocols in wide-area network environments.\nIt avoids the latency penalty of invalidations by using the redirector to retain and propagate updates committed by one client to other clients within the group.\nIt avoids the bandwidth penalty of updates because servers propagate invalidations to the redirectors.\nAs far as we know, this use of localized multicast in the BuddyCache redirector is new and has not been used in earlier caching systems.\nThe peer update works as follows.\nAn update commit request from a client arriving at the redirector contains the object updates.\nThe redirector retains the updates and propagates the request to the coordinating server.\nAfter the transaction commits, the coordinator server sends a commit reply to the redirector of the committing client group.\nThe redirector forwards the reply to the committing client, and also propagates the retained committed updates to the clients caching the modified pages (see Figure 3).\nSince invalidations, rather than updates, propagate outside the BuddyCache group, there is no extra overhead outside the committing group.\n3.3 Solo commit\nIn the OCC protocol, clients acknowledge server invalidations (or updates) to indicate removal of stale data.\nThe straightforward "group acknowledgement" protocol, where the redirector collects and propagates a collective acknowledgement to the server, interferes with the availability property of the transactional caching protocol [19], since a client that is slow to acknowledge an invalidation or has failed can delay a group acknowledgement and prevent another client in the group from committing a transaction.
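The peer-update sequence described in Section 3.2 (retain updates at the redirector, forward the commit request, and on a successful commit reply push the retained updates to peers caching the modified pages) might look roughly like this; the message tuples and names are invented for illustration only:

```python
# Sketch of the peer-update flow at the redirector; names are hypothetical.

class PeerUpdater:
    def __init__(self):
        self.retained = {}   # transaction id -> list of (page, obj, new_value)
        self.cached_at = {}  # page -> set of clients caching the page
        self.outbox = []     # (destination, message) pairs, in send order

    def on_commit_request(self, client, tid, updates):
        # Retain the updates locally, then forward the request upstream.
        self.retained[tid] = updates
        self.outbox.append(("server", ("commit-request", tid)))

    def on_commit_reply(self, committer, tid, committed):
        # Forward the reply to the committing client first.
        self.outbox.append((committer, ("commit-reply", tid, committed)))
        updates = self.retained.pop(tid, [])
        if committed:
            # Push the committed values to peers caching the modified pages,
            # so they see the new data without a high-latency server fetch.
            for page, obj, value in updates:
                for peer in self.cached_at.get(page, set()) - {committer}:
                    self.outbox.append((peer, ("peer-update", obj, value)))
```

Note that the server still sends ordinary invalidations outside the group; only the local redirector turns them into updates for its own clients.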
an engineer that commits a repeated revision to the same shared design object (and therefore holds the latest version of the object) may need to abort if the "group acknowledgement" has not propagated to the server.
Consider the situation depicted in figure 4, where Client1 commits a transaction T that reads the latest version of an object x on page P recently modified by Client1.
If the commit request for T reaches the server before the collective acknowledgement from Client2 for the last modification of x arrives at the server, the OCC validation procedure considers x to be stale and aborts T (because, as explained above, an invalidation unacknowledged by a client acts as an indication to the server that the cached object value is stale at the client).
Note that while invalidations are not required for the correctness of the OCC protocol, they are very important for its performance, since they reduce the penalties of aborts and of false sharing.
The asynchronous invalidations are an important part of the reason OCC has competitive performance with PSAA [33], the best performing avoidance-based protocol [3].
Nevertheless, since invalidations are sent and processed asynchronously, invalidation processing may be arbitrarily delayed at a client.
Lease-based (time-out based) schemes have been proposed to improve the availability of hierarchical callback-based coherence protocols [32], but the asynchronous nature of invalidations makes lease-based approaches inappropriate here.
The Solo commit validation protocol allows a client with up-to-date objects to commit a transaction even if the group acknowledgement is delayed by slow or crashed peers.
The protocol requires clients to include extra information with the transaction read sets in the commit message, to indicate to the server that the objects read by the transaction are up-to-date.
Object version numbers could provide a simple way to track up-to-date objects but, as
mentioned above, maintaining per-object version numbers imposes unacceptably high overheads (in disk storage, I/O costs and directory size) on the entire object system when objects are small [23].
Instead, solo commit uses coarse-grain page version numbers to identify fine-grain object versions.
A page version number is incremented at a server when a transaction that modifies objects on the page commits.
The updates committed by a single transaction, and the corresponding invalidations, are therefore uniquely identified by the modified page's version number.
Page version numbers are propagated to clients in fetch replies, commit replies and invalidations, and clients include page version numbers in the commit requests sent to the servers.
If a transaction fails validation due to a missing "group acknowledgement", the server checks the page version numbers of the objects in the transaction read set and allows the transaction to commit if the client has read from the latest page version.
Page version numbers enable independent commits, but page version checks only detect page-level conflicts.
To detect object-level conflicts and avoid the problem of false sharing we still need the acknowledged invalidations.
Section 4 describes the details of the implementation of solo commit support for fine-grain sharing.

3.4 Group Configuration

The BuddyCache architecture supports multiple concurrent peer groups.
Potentially, it may be faster to access data cached in another peer group than to access a remote server.
In such a case, extending the BuddyCache protocols to support multi-level peer caching could be worthwhile.
We have not pursued this possibility for several reasons.
In web caching workloads, simply increasing the population of clients in a proxy cache often increases the overall cache hit rate [30].
In BuddyCache applications, however, we expect sharing to result mainly from explicit client interaction and collaboration, suggesting that inter-group fetching is unlikely
to occur.
Moreover, measurements from multi-level web caching systems [9] indicate that a multi-level system may not be advantageous unless the network connection between the peer groups is very fast.
We are primarily interested in environments where closely collaborating peers have fast close-range connectivity, but the connection between peer groups may be slow.
As a result, we decided that support for inter-group fetching in BuddyCache is not a high priority right now.
To support heterogeneous resource-rich and resource-poor peers, the BuddyCache redirector can be configured to run either in one of the peer nodes or, when available, in a separate node within the site infrastructure.
Moreover, in a resource-rich infrastructure node, the redirector can be configured as a stand-by peer cache that receives pages fetched by the other peers, emulating a central cache somewhat similar to a regional web proxy cache.
From the point of view of the BuddyCache cache coherence protocol, however, such a stand-by peer cache is equivalent to a regular peer cache, and therefore we do not consider this case separately in this paper.

4. IMPLEMENTATION

In this section we provide the details of the BuddyCache implementation.
We have implemented BuddyCache in the Thor client/server object-oriented database [23].
Thor supports high performance access to distributed objects and therefore provides a good test platform to investigate BuddyCache performance.

Figure 4: Validation with Slow Peers

4.1 Base Storage System

Thor servers provide persistent storage for objects, and clients cache copies of these objects.
Applications run at the clients and interact with the system by making calls on methods of cached objects.
All method calls occur within atomic transactions.
Clients communicate with servers to fetch pages or to commit a transaction.
The servers have a disk for storing persistent objects, a stable transaction log, and volatile memory.
The disk is organized as a
collection of pages, which are the units of disk access.
The stable log holds commit information and object modifications for committed transactions.
The server memory contains a cache directory and a recoverable modified object cache called the MOB.
The directory keeps track of which pages are cached by which clients.
The MOB holds recently modified objects that have not yet been written back to their pages on disk.
As the MOB fills up, a background process propagates the modified objects to the disk [21, 26].

4.2 Base Cache Coherence

Transactions are serialized using the optimistic concurrency control (OCC) protocol [3] described in Section 3.1.
We provide some of the relevant OCC protocol implementation details here.
The client keeps track of the objects that are read and modified by its transaction; it sends this information, along with new copies of the modified objects, to the servers when it tries to commit the transaction.
The servers determine whether the commit is possible, using a two-phase commit protocol if the transaction used objects at multiple servers.
If the transaction commits, the new copies of the modified objects are appended to the log and also inserted in the MOB.
The MOB is recoverable, i.e.
if the server crashes, the MOB is reconstructed at recovery by scanning the log.
Since objects are not locked before being used, a transaction commit can cause caches to contain obsolete objects.
Servers will abort a transaction that used obsolete objects.
However, to reduce the probability of aborts, servers notify clients when their objects become obsolete by sending them invalidation messages; a server uses its directory and the information about the committing transaction to determine what invalidation messages to send.
Invalidation messages are small because they simply identify the obsolete objects.
Furthermore, they are sent in the background, batched and piggybacked on other messages.
When a client receives an invalidation message, it removes the obsolete objects from its cache and aborts the current transaction if it used them.
The client continues to retain pages containing invalidated objects; these pages are now incomplete, with "holes" in place of the invalidated objects.
Performing invalidation on an object basis means that false sharing does not cause unnecessary aborts; keeping incomplete pages in the client cache means that false sharing does not lead to unnecessary cache misses.
Clients acknowledge invalidations to indicate removal of stale data, as explained in Section 3.1.
Invalidation messages prevent some aborts, and accelerate those that must happen, thus wasting less work and offloading the detection of aborts from servers to clients.
When a transaction aborts, its client restores the cached copies of the modified objects to the state they had before the transaction started; this is possible because a client makes a copy of an object the first time the object is modified by a transaction.

4.3 Redirection

The redirector runs on the same local network as the peer group, in one of the peer nodes or in a special node within the infrastructure.
It maintains a directory of the pages available in the peer group and provides fast centralized fetch redirection
(see figure 2) between the peer caches.
To improve performance, clients inform the redirector when they evict pages or objects by piggybacking that information on the messages they send to the redirector.
To ensure that up-to-date objects are fetched from the group cache, the redirector tracks the status of the pages.
A cached page is either complete, in which case it contains consistent values for all of its objects, or incomplete, in which case some of the objects on the page are marked invalid.
Only complete pages are used by the peer fetch.
The protocol for maintaining the page status when pages are updated and invalidated is described in Section 4.4.
When a client request has to be processed at the servers, e.g., when a complete copy of a requested page is unavailable in the peer group or a peer needs to commit a transaction, the redirector acts as a server proxy: it forwards the request to the server, and then forwards the reply back to the client.
In addition, in response to invalidations sent by a server, the redirector distributes the update or invalidation information to the clients caching the modified page and, after all of these clients acknowledge, propagates the group acknowledgment back to the server (see figure 3).
The redirector-server protocol is, in effect, the client-server protocol used in the base Thor storage system, with the combined peer group cache playing the role of a single client cache in the base system.

4.4 Peer Update

The peer update is implemented as follows.
An update commit request from a client arriving at the redirector contains the object updates.
The redirector retains the updates and propagates the request to the coordinator server.
After a transaction commits, using a two-phase commit if needed, the coordinator server sends a commit reply to the redirector of the committing client group.
The redirector forwards the reply to the committing client.
It waits for the invalidations to arrive before propagating the corresponding retained (committed) updates to the clients
caching the modified pages (see figure 3).
The participating servers that are home to objects modified by the transaction generate object invalidations for each cache group that caches pages containing the modified objects (including the committing group).
The invalidations are sent lazily to the redirectors to ensure that all the clients in the groups caching the modified objects get rid of the stale data.
In cache groups other than the committing group, the redirectors propagate the invalidations to all the clients caching the modified pages, collect the client acknowledgments and, after completing the collection, propagate the collective acknowledgments back to the server.
Within the committing client group, the arriving invalidations are not propagated.
Instead, updates are sent to the clients caching those objects' pages, the updates are acknowledged by the clients, and the collective acknowledgment is propagated to the server.
An invalidation renders a cached page unavailable for peer fetch, changing the status of a complete page into an incomplete one.
In contrast, an update of a complete page preserves the complete page status.
As shown by studies of fragment reconstruction [2], such update propagation makes it possible to avoid the performance penalties of false sharing.
That is, when clients within a group modify different objects on the same page, the page retains its complete status and remains available for peer fetch.
Therefore, the effect of peer update is similar to "eager" fragment reconstruction [2].
We have also considered the possibility of allowing a peer to fetch an incomplete page (with the invalid objects marked accordingly) but decided against it because of the extra complexity involved in tracking invalid objects.

4.5 Vcache

The solo commit validation protocol allows clients with up-to-date objects to commit independently of slower (or failed) group members.
As explained in Section 3.3, the solo commit protocol allows a transaction T to
pass validation if extra coherence information supplied by the client indicates that transaction T has read up-to-date objects.
Clients use page version numbers to provide this extra coherence information.
That is, a client includes the page version number corresponding to each object in the read object set sent in the commit request to the server.
Since a unique page version number corresponds to each committed object update, the page version number associated with an object allows the validation procedure at the server to check whether the client transaction has read up-to-date objects.
The use of coarse-grain page versions to identify object versions avoids the high penalty of maintaining persistent object versions for small objects, but requires an extra protocol at the client to maintain the mapping from a cached object to the identifying page version (ObjectToVersion).
The main implementation issue is maintaining this mapping efficiently.
At the server side, when modifications commit, the servers associate page version numbers with the invalidations.
At validation time, if an unacknowledged invalidation is pending for an object x read by a transaction T, the validation procedure checks whether the version number for x in T's read set matches the version number of the highest pending invalidation for x, in which case the object value is current; otherwise T fails validation.
We note again that the page version number-based checks and the invalidation acknowledgment-based checks are complementary in the solo commit validation, and both are needed.
The page version number check allows the validation to proceed before the invalidation acknowledgments arrive, but by itself a page version number check detects only page-level conflicts and is not sufficient to support fine-grain coherence without the object-level invalidations.
We now describe how the client manages the ObjectToVersion mapping.
The client maintains a page version number for each cached page.
The
version number satisfies the following invariant VP about the state of the objects on a page: if a cached page P has a version number v, then the value of an object o on P is either invalid, or it reflects at least the modifications committed by transactions preceding the transaction that set P's version number to v.
New object values and new page version numbers arrive when a client fetches a page, or when a commit reply or invalidations arrive for this page.
The new object values modify the page and, therefore, the page version number needs to be updated to maintain the invariant VP.
A page version number that arrives when a client fetches a page replaces the page version number for this page.
Such an update preserves the invariant VP.
Similarly, an in-sequence page version number arriving at the client in a commit or invalidation message advances the version number for the entire cached page without violating VP.

Figure 5: Reordered Invalidations

However, invalidations or updates and their corresponding page version numbers can also arrive at the client out of sequence, in which case updating the page version number could violate VP.
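The bookkeeping just described (advance the page version only for in-sequence arrivals; buffer out-of-sequence per-object versions until the gap closes) can be sketched as below. This is a minimal illustration of the scheme under the paper's design; the class, method, and field names are hypothetical, not taken from the Thor implementation.

```python
# Sketch of per-page version tracking preserving invariant VP: advance the
# page version only on in-sequence arrivals; remember out-of-sequence
# object versions in a small side table (the vcache idea) until the
# missing version numbers arrive.  All names here are illustrative.

class PageVersions:
    def __init__(self):
        self.page_version = {}   # page id -> highest in-sequence version
        self.vcache = {}         # (page, obj) -> reordered version number
        self.pending = {}        # page id -> buffered out-of-sequence versions

    def on_fetch(self, page, version):
        # A freshly fetched page is complete: its version replaces prior state.
        self.page_version[page] = version
        self.pending.pop(page, None)
        self.vcache = {k: v for k, v in self.vcache.items() if k[0] != page}

    def on_update(self, page, obj, version):
        cur = self.page_version.get(page, 0)
        if version == cur + 1:
            # In-sequence: advance the page, then drain any buffered versions
            # that now form a contiguous run.
            cur = version
            buffered = self.pending.setdefault(page, set())
            while cur + 1 in buffered:
                buffered.remove(cur + 1)
                cur += 1
            self.page_version[page] = cur
            # Buffered object versions at or below cur are now subsumed.
            self.vcache = {k: v for k, v in self.vcache.items()
                           if not (k[0] == page and v <= cur)}
        elif version > cur:
            # Out of sequence: record the object's version in the vcache.
            self.pending.setdefault(page, set()).add(version)
            self.vcache[(page, obj)] = max(self.vcache.get((page, obj), 0),
                                           version)
        # version <= cur: obsolete message, ignore.

    def version_for_commit(self, page, obj):
        # Read-set version: the vcache entry if present, else the page version.
        return self.vcache.get((page, obj), self.page_version.get(page, 0))
```

A commit request would call version_for_commit for each object in the read set, matching the lookup rule given for the ObjectToVersion mapping.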
For example, a commit reply for a transaction that updates object x on page P in server S1, and object y on page Q in server S2, may deliver a new version number for P from the transaction coordinator S1 before an invalidation generated for an earlier transaction that has modified object r on page P arrives from S1 (as shown in figure 5).
The cache update protocol ensures that the value of any object o in a cached page P reflects the update or invalidation with the highest observed version number.
That is, obsolete updates or invalidations received out of sequence do not affect the value of an object.
To maintain the ObjectToVersion mapping and the invariant VP in the presence of out-of-sequence arrival of page version numbers, the client manages a small version number cache, the vcache, that maintains the mapping from an object to its corresponding page version number for all reordered version number updates, until a complete page version number sequence is assembled.
When the missing version numbers for the page arrive and complete a sequence, the version number for the entire page is advanced.
The ObjectToVersion mapping, including the vcache and the page version numbers, is used at transaction commit time to provide the version numbers for the read object set as follows.
If a read object has an entry in the vcache, its version number is the highest version number in the vcache for this object.
If the object is not present in the vcache, its version number is the version number of its containing cached page.
Figure 6 shows the ObjectToVersion mapping in the client cache, including the page version numbers for pages and the vcache.
A client can limit the vcache size as needed, since re-fetching a page removes all of that page's reordered version numbers from the vcache.
However, we expect version number reordering to be uncommon and therefore expect the vcache to be very small.

Figure 6: ObjectToVersion map with vcache

5. BUDDYCACHE FAILOVER

A BuddyCache group consists of multiple client nodes and a redirector that can
fail independently.
The goal of the failover protocol is to reconfigure the BuddyCache in the case of a node failure, so that the failure of one node does not prevent other clients from accessing shared objects.
Moreover, the failure of the redirector should allow the unaffected clients to keep their caches intact.
We have designed a failover protocol for BuddyCache but have not implemented it yet.
The appendix outlines the protocol.

6. PERFORMANCE EVALUATION

BuddyCache redirection supports the performance benefits of avoiding communication with the servers, but introduces extra processing cost due to the availability mechanisms and request forwarding.
Is the "cure" worse than the "disease"?
To answer this question, we have implemented a BuddyCache prototype for the OCC protocol and conducted experiments to analyze the performance benefits and costs over a range of network latencies.

6.1 Analysis

The performance benefits of peer fetch and peer update are due to avoided server interactions.
This section presents a simple analytical performance model for this benefit.
The avoided server interactions correspond to different types of client cache misses: cold misses, invalidation misses and capacity misses.
Our analysis focuses on cold misses and invalidation misses, since the benefit of avoiding capacity misses can be derived from the cold misses.
Moreover, technology trends indicate that memory and storage capacity will continue to grow, and therefore a typical BuddyCache configuration is unlikely to be cache-limited.
The client cache misses are determined by several variables, including the workload and the cache configuration.
Our analysis tries, as much as possible, to separate these variables so that they can be controlled in the validation experiments.
To study the benefit of avoiding cold misses, we consider cold cache performance in a read-only workload (no invalidation misses).
We expect peer fetch to improve the latency cost for
client cold cache misses by fetching objects from a nearby cache.
We evaluate how the redirection cost affects this benefit by comparing and analyzing the performance of an application running in a storage system with BuddyCache and without it (called Base).
To study the benefit of avoiding invalidation misses, we consider hot cache performance in a workload with modifications (and no cold misses).
In hot caches we expect BuddyCache to provide two complementary benefits, both of which reduce the latency of access to shared modified objects.
Peer update lets a client access an object modified by a nearby collaborating peer without the delay imposed by invalidation-only protocols.
In groups where peers share a read-only interest in the modified objects, peer fetch allows a client to access a modified object as soon as a collaborating peer has it, which avoids the delay of a server fetch without the high cost imposed by update-only protocols.
Technology trends indicate that both benefits will remain important in the foreseeable future.
The trend toward increased available network bandwidth decreases the cost of update-only protocols.
However, the trend toward increasingly large caches, which would be updated whenever cached objects are modified, makes invalidation-based protocols more attractive.
To evaluate these two benefits, we compare the performance of an application running without BuddyCache to that of an application running with BuddyCache in two configurations: one where a peer in the group modifies the objects, and another where the objects are modified by a peer outside the group.
Peer update can also avoid the invalidation misses due to false sharing, introduced when multiple peers concurrently update different objects on the same page.
We do not analyze this benefit (demonstrated by earlier work [2]) because our benchmarks do not allow us to control object layout, and also because this benefit can be derived given the cache hit rate and workload contention.

6.1.1
The Model

The model considers how the time to complete an execution with and without BuddyCache is affected by invalidation misses and cold misses.
Consider k clients running concurrently, uniformly accessing a shared set of N pages, in BuddyCache (BC) and Base.
Let t_fetch(S), t_redirect(S), t_commit(S), and t_compute(S) be the time it takes a client to, respectively, fetch a page from a server, perform a peer fetch, commit a transaction, and compute in a transaction, in a system S, where S is either a system with BuddyCache (BC) or without (Base).
For simplicity, our model assumes the fetch and commit times are constant.
In general they may vary with the server load, e.g. they depend on the total number of clients in the system.
The number of misses avoided by peer fetch depends on k, the number of clients in the BuddyCache, and on the clients' co-interest in the shared data.
In a specific BuddyCache execution it is modeled by the variable r, defined as the number of fetches arriving at the redirector for a given "version" of a page P (i.e. until an object on the page is invalidated).
Consider an execution with cold misses.
A client starts with a cold cache and runs a read-only workload until it has accessed all N pages, committing l transactions.
We assume there are no capacity misses, i.e. the client cache is large enough to hold N pages.
In BC, r cold misses for page P reach the redirector.
The first of these misses fetches P from the server, and the subsequent r − 1 misses are redirected.
Since each client accesses the entire shared set, r = k.
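As a concrete illustration of this accounting, the aggregate cold-miss fetch cost in Base and BC can be sketched as below. The function names and the sample timing values are illustrative assumptions, not measurements from the paper, and commit and compute times are deliberately left out of this fragment of the model.

```python
# Sketch of the cold-miss part of the model: in Base every client fetches
# every page from the server; in BC the first of the r misses on a page
# pays a server fetch and the remaining r - 1 pay only a peer fetch.
# Function names and timing values are illustrative, not from the paper.

def cold_fetch_time_base(k, N, t_fetch):
    """Aggregate fetch time in Base: k clients each fetch N pages."""
    return k * N * t_fetch

def cold_fetch_time_bc(k, N, t_fetch, t_redirect):
    """Aggregate fetch time in BC: per page, one server fetch plus
    r - 1 redirected peer fetches, with r = k for cold misses."""
    r = k
    return N * (t_fetch + (r - 1) * t_redirect)

# Example with placeholder values: 5 clients, 100 shared pages,
# a 40 ms server fetch and a 2 ms peer fetch.
base_ms = cold_fetch_time_base(5, 100, 40)   # 5 * 100 * 40
bc_ms = cold_fetch_time_bc(5, 100, 40, 2)    # 100 * (40 + 4 * 2)
```

The gap between the two totals grows with the server-fetch latency, which is why the benefit of peer fetch is largest in high-latency networks.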
Let T_cold(Base) and T_cold(BC) be the time it takes to complete the l transactions in Base and BC.
Consider next an execution with invalidation misses.
A client starts with a hot cache containing the working set of N pages.
We focus on a simple case where one client (the writer) runs a workload with modifications, and the other clients (the readers) run a read-only workload.
In a group containing the writer (BCW), peer update eliminates all invalidation misses.
In a group containing only readers (BCR), during a steady-state execution with uniform updates, a client transaction has miss_inv invalidation misses.
Consider the sequence of r client misses on page P that arrive at the redirector in BCR between two consecutive invalidations of page P.
The first miss goes to the server, and the r − 1 subsequent misses are redirected.
Unlike with cold misses, r

rel(A) > rel(B) 102 28.7%

Figure 3: Relevance relationships at clickthrough inversions.
Compares relevance between the higher ranking member of a caption pair (rel(A)) to the relevance of the lower ranking member (rel(B)), where caption A received fewer clicks than caption B.

The figure shows the corresponding relevance judgments.
For example, the first row, rel(A) < rel(B), indicates that the higher ranking member of pair (A) was rated as less relevant than the lower ranking member of the pair (B).
As we see in the figure, relevance alone appears inadequate to explain the majority of clickthrough inversions.
For two-thirds of the inversions (236), the page associated with caption A is at least as relevant as the page associated with caption B.
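The classification underlying this figure can be sketched as a simple tally over judged caption pairs; the judgment tuples below are illustrative placeholders, not the study's data.

```python
# Sketch: classify judged caption pairs at clickthrough inversions by the
# relationship between rel(A) (higher-ranked, fewer clicks) and rel(B)
# (lower-ranked, more clicks).  The sample judgments are made up.
from collections import Counter

def tally_relevance(pairs):
    """pairs: iterable of (rel_a, rel_b) numeric relevance judgments."""
    counts = Counter()
    for rel_a, rel_b in pairs:
        if rel_a < rel_b:
            counts["rel(A) < rel(B)"] += 1
        elif rel_a == rel_b:
            counts["rel(A) = rel(B)"] += 1
        else:
            counts["rel(A) > rel(B)"] += 1
    return counts

sample = [(2, 3), (3, 3), (4, 2), (3, 1), (2, 2)]
counts = tally_relevance(sample)
```

Pairs falling in the last two categories are the inversions that relevance alone cannot explain.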
For 28.7% of the inversions, A has greater relevance than B, which received the greater number of clickthroughs.

4. INFLUENCE OF CAPTION FEATURES

Having demonstrated that clickthrough inversions cannot always be explained by relevance differences, we explore what features of caption pairs, if any, lead users to prefer one caption over another.
For example, we may hypothesize that the absence of a snippet in caption A and the presence of a snippet in caption B (e.g. captions 2 and 3 in figure 1) lead users to prefer caption B.
Nonetheless, due to competing factors, a large set of clickthrough inversions may also include pairs where the snippet is missing in caption B and not in caption A.
However, if we compare a large set of clickthrough inversions to a similar set of pairs for which the clickthroughs are consistent with their ranking, we would expect to see relatively more pairs where the snippet is missing in caption A.

4.1 Evaluation methodology

Following this line of reasoning, we extracted two sets of caption pairs from the search logs over a three day period.
The first is a set of nearly five thousand clickthrough inversions, extracted according to the procedure described in section 3.1.
The second is a corresponding set of caption pairs that do not exhibit clickthrough inversions.
In other words, for pairs in this set, the result at the higher rank (caption A) received more clickthroughs than the result at the lower rank (caption B).
To the greatest extent possible, each pair in the second set was selected to correspond to a pair in the first set, in terms of result position and number of clicks on each result.
We refer to the first set, containing clickthrough inversions, as the INV set; we refer to the second set, containing caption pairs for which the clickthroughs are consistent with their rank order, as the CON set.
We extract a number of features characterizing snippets (described in detail in the next section) and compare the presence of each
feature in the INV and CON sets.
We describe the features as a hypothesized preference (e.g., a preference for captions containing a snippet).
Thus, in either set, a given feature may be present in one of two forms: favoring the higher ranked caption (caption A) or favoring the lower ranked caption (caption B).
For example, the absence of a snippet in caption A favors caption B, and the absence of a snippet in caption B favors caption A.

Feature Tag: Description
MissingSnippet: snippet missing in caption A and present in caption B
SnippetShort: short snippet in caption A (< 25 characters) with long snippet (> 100 characters) in caption B
TermMatchTitle: title of caption A contains matches to fewer query terms than the title of caption B
TermMatchTS: title+snippet of caption A contains matches to fewer query terms than the title+snippet of caption B
TermMatchTSU: title+snippet+URL of caption A contains matches to fewer query terms than caption B
TitleStartQuery: title of caption B (but not A) starts with a phrase match to the query
QueryPhraseMatch: title+snippet+URL contains the query as a phrase match
MatchAll: caption B contains one match to each query term; caption A contains more matches with missing terms
URLQuery: caption B URL is of the form www.query.com, where the query matches exactly with spaces removed
URLSlashes: caption A URL contains more slashes (i.e. a longer path length) than the caption B URL
URLLenDiff: caption A URL is longer than the caption B URL
Official: title or snippet of caption B (but not A) contains the term "official" (with stemming)
Home: title or snippet of caption B (but not A) contains the phrase "home page"
Image: title or snippet of caption B (but not A) contains a term suggesting the presence of an image gallery
Readable: caption B (but not A) passes a simple readability test

Figure 4: Features measured in caption pairs (caption A and caption B), with caption A as the higher ranked result.
These features are expressed from the perspective of the prevalent relationship predicted for clickthrough inversions.

When a feature favors caption B (consistent with a clickthrough inversion) we refer to the caption pair as a positive pair.
When the feature favors caption A, we refer to it as a negative pair.
For missing snippets, a positive pair has the snippet missing in caption A (but not B) and a negative pair has the snippet missing in B (but not A).
Thus, for a specific feature, we can construct four subsets: 1) INV+, the set of positive pairs from INV; 2) INV−, the set of negative pairs from INV; 3) CON+, the set of positive pairs from CON; and 4) CON−, the set of negative pairs from CON.
The sets INV+, INV−, CON+, and CON− will contain different subsets of INV and CON for each feature.
When stating a feature corresponding to a hypothesized user preference, we follow the practice of stating the feature with the expectation that the size of INV+ relative to the size of INV− should be greater than the size of CON+ relative to the size of CON−.
For example, we state the missing snippet feature as "snippet missing in caption A and present in caption B".
This evaluation methodology allows us to construct a contingency table for each feature, with INV essentially
forming the experimental group and CON the control group.
We can then apply Pearson's chi-square test for significance.

4.2 Features

Figure 4 lists the features tested.
Many of the features on this list correspond to our own assumptions regarding the importance of certain caption characteristics: the presence of query terms, the inclusion of a snippet, and the importance of query term matches in the title.
Other features suggested themselves during the examination of the snippets collected as part of the study described in section 3.3 and during a pilot of the evaluation methodology (section 4.1).
For this pilot we collected INV and CON sets of similar sizes, and used these sets to evaluate a preliminary list of features and to establish appropriate parameters for the SnippetShort and Readable features.
In the pilot, all of the features listed in figure 4 were significant at the 95% level.
A small number of other features were dropped after the pilot.
These features all capture simple aspects of the captions.
The first feature concerns the existence of a snippet and the second concerns the relative size of snippets.
Apart from this first feature, we ignore pairs where one caption has a missing snippet.
These pairs are not included in the sets constructed for the remaining features, since captions with missing snippets do not contain all the elements of a standard caption and we wanted to avoid their influence.
The next six features concern the location and number of matching query terms.
For the first five, a match for each query term is counted only once; additional matches for the same term are ignored.
The MatchAll feature tests the idea that matching all the query terms exactly once is preferable to matching a subset of the terms many times with at least one query term unmatched.
The next three features concern the URLs, capturing aspects of their length and complexity, and the last four features concern caption content.
The first two of these
content features (Official and Home) suggest claims about the importance or significance of the associated page. The third content feature (Image) suggests the presence of an image gallery, a popular genre of Web page. Terms represented by this feature include "pictures", "pics", and "gallery". The last content feature (Readable) applies an ad-hoc readability metric to each snippet. Regular users of Web search engines may notice occasional snippets that consist of little more than lists of words and phrases, rather than a coherent description. We define our own metric, since the Flesch-Kincaid readability score and similar measures are intended for entire documents, not text fragments. While the metric has not been experimentally validated, it does reflect our intuitions and observations regarding result snippets. In English, the 100 most frequent words represent about 48% of text, and we would expect readable prose, as opposed to a disjointed list of words, to contain these words in roughly this proportion. The Readable feature computes the percentage of these top-100 words appearing in each caption. If these words represent more than 40% of one caption and less than 10% of the other, the pair is included in the appropriate set.

Feature Tag        INV+   INV−   %+     CON+   CON−   %+     χ²        p-value
MissingSnippet      185    121   60.4    144    133   51.9    4.2443   0.0393
SnippetShort         20      6   76.9     12     16   42.8    6.4803   0.0109
TermMatchTitle      800    559   58.8    660    700   48.5   29.2154   <.0001
TermMatchTS         310    213   59.2    269    216   55.4    1.4938   0.2216
TermMatchTSU        236    138   63.1    189    149   55.9    3.8088   0.0509
TitleStartQuery    1058    933   53.1    916   1096   45.5   23.1999   <.0001
QueryPhraseMatch    465    346   57.3    427    422   50.2    8.2741   0.0040
MatchAll              8      2   80.0      1      4   20.0    n/a      0.0470
URLQuery            277    188   59.5    159    315   33.5   63.9210   <.0001
URLSlashes         1715   1388   55.2   1380   1758   43.9   79.5819   <.0001
URLLenDiff         2288   2233   50.6   2062   2649   43.7   43.2974   <.0001
Official            215    142   60.2    133    215   38.2   34.1397   <.0001
Home                 62     49   55.8     64     82   43.8    3.6458   0.0562
Image               391    270   59.1    315    335   48.4   15.0735   <.0001
Readable             52     43   54.7     31     48   39.2    4.1518   0.0415

Figure 5: Results corresponding to the features listed in figure 4, with χ² and p-values (df = 1). Features supported at the 95% confidence level are bolded. The p-value for the MatchAll feature is computed using Fisher's Exact Test.

4.3 Results
Figure 5 presents the results. Each row lists the size of the four sets (INV+, INV−, CON+, and CON−) for a given feature and indicates the percentage of positive pairs (%+) for INV and CON. In order to reject the null hypothesis, this percentage should be significantly greater for INV than CON. Except in one case, we applied the chi-square test of independence to these sizes, with p-values shown in the last column. For the MatchAll feature, where the sum of the set sizes is 15, we applied Fisher's exact test. Features supported at the 95% confidence level are bolded.

5. COMMENTARY
The results support claims that missing snippets, short snippets, missing query terms and complex URLs negatively impact clickthroughs. While this outcome may not be surprising, we are aware of no other work that can provide support for claims of this type in the context of a commercial Web search engine. This work was originally motivated by our desire to validate some simple guidelines for the generation of captions, summarizing opinions that we formulated while working on related issues. While our results do not directly address all of the many variables that influence users' understanding of captions, they are consistent with the major guidelines. Further work is needed to provide additional support for the guidelines and to understand the relationships among variables. The first of these guidelines underscores the importance of displaying query terms in context: Whenever possible, all of the query terms should appear in the caption, reflecting their relationship to the associated page. If a query term is missing from a caption, the user
may have no idea why the result was returned. The results for the MatchAll feature directly support this guideline. The results for TermMatchTitle and TermMatchTSU confirm that matching more terms is desirable. Other features provide additional indirect support for this guideline, and none of the results are inconsistent with it. A second guideline speaks to the desirability of presenting the user with a readable snippet: When query terms are present in the title, they need not be repeated in the snippet. In particular, when a high-quality query-independent summary is available from an external source, such as a Web directory, it may be more appropriate to display this summary than a lower-quality query-dependent fragment selected on-the-fly. When titles are available from multiple sources (the header, the body, Web directories), a caption generation algorithm might select a combination of title, snippet and URL that includes as many of the query terms as possible. When a title containing all query terms can be found, the algorithm might select a query-independent snippet. The MatchAll and Readable features directly support this guideline. Once again, other features provide indirect support, and none of the results are inconsistent with it. Finally, the length and complexity of a URL influence user behavior. When query terms appear in the URL they should be highlighted or otherwise distinguished. When multiple URLs reference the same page (due to re-directions, etc.)
the shortest URL should be preferred, provided that all query terms will still appear in the caption. In other words, URLs should be selected and displayed in a manner that emphasizes their relationship to the query. The three URL features, as well as TermMatchTSU, directly support this guideline. The influence of the Official and Image features led us to wonder what other terms are prevalent in the captions of clickthrough inversions. As an additional experiment, we treated each of the terms appearing in the INV and CON sets as a separate feature (case normalized), ranking them by their χ² values. The results are presented in figure 6. Since we use the χ² statistic as a divergence measure, rather than a significance test, no p-values are given. The final column of the table indicates the direction of the influence: whether the presence of the term positively or negatively influences clickthroughs. The positive influence of "official" has already been observed (the difference in the χ² value from that of figure 5 is due to stemming).

Rank   Term           χ²        influence
 1     encyclopedia   114.6891  ↓
 2     wikipedia       94.0033  ↓
 3     official        36.5566  ↑
 4     and             28.3349  ↑
 5     tourism         25.2003  ↑
 6     attractions     24.7283  ↑
 7     free            23.6529  ↓
 8     sexy            21.9773  ↑
 9     medlineplus     19.9726  ↓
10     information     19.9115  ↑

Figure 6: Words exhibiting the greatest positive (↑) and negative (↓) influence on clickthrough patterns.

None of the terms included in the Image feature appear in the top ten, but "pictures" and "photos" appear at positions 21 and 22. The high rank given to "and" may be related to readability (the term "the" appears in position 20). Most surprising to us is the negative influence of the terms "encyclopedia", "wikipedia", "free", and "medlineplus". The first three terms appear in the title of Wikipedia articles3 and the last appears in the title of MedlinePlus articles4. These individual word-level features provide hints about
issues. More detailed analyses and further experiments will be required to understand these features.

3 www.wikipedia.org
4 www.nlm.nih.gov/medlineplus/

6. CONCLUSIONS
Clickthrough inversions form an appropriate tool for assessing the influence of caption features. Using clickthrough inversions, we have demonstrated that relatively simple caption features can significantly influence user behavior. To our knowledge, this is the first methodology validated for assessing the quality of Web captions through implicit feedback. In the future, we hope to substantially expand this work, considering more features over larger datasets. We also hope to directly address the goal of predicting relevance from clickthroughs and other information present in search engine logs.

7. ACKNOWLEDGMENTS
This work was conducted while the first author was visiting Microsoft Research. The authors thank members of the Windows Live team for their comments and assistance, particularly Girish Kumar, Luke DeLorme, Rohit Wad and Ramez Naam.

8. REFERENCES
[1] E. Agichtein, E. Brill, and S. Dumais. Improving web search ranking by incorporating user behavior information. In 29th ACM SIGIR, pages 19-26, Seattle, August 2006.
[2] E. Agichtein, E. Brill, S. Dumais, and R. Ragno. Learning user interaction models for predicting Web search result preferences. In 29th ACM SIGIR, pages 3-10, Seattle, August 2006.
[3] A. Broder. A taxonomy of Web search. SIGIR Forum, 36(2):3-10, 2002.
[4] E. Cutrell and Z. Guan. What are you looking for? An eye-tracking study of information usage in Web search. In SIGCHI Conference on Human Factors in Computing Systems, pages 407-416, San Jose, California, April-May 2007.
[5] S. Dumais, E. Cutrell, and H. Chen. Optimizing search by showing results in context. In SIGCHI Conference on Human Factors in Computing Systems, pages 277-284, Seattle, March-April 2001.
[6] J. Goldstein, M. Kantrowitz, V. Mittal, and J.
Carbonell. Summarizing text documents: Sentence selection and evaluation metrics. In 22nd ACM SIGIR, pages 121-128, Berkeley, August 1999.
[7] L. A. Granka, T. Joachims, and G. Gay. Eye-tracking analysis of user behavior in WWW search. In 27th ACM SIGIR, pages 478-479, Sheffield, July 2004.
[8] Y. Hu, G. Xin, R. Song, G. Hu, S. Shi, Y. Cao, and H. Li. Title extraction from bodies of HTML documents and its application to Web page retrieval. In 28th ACM SIGIR, pages 250-257, Salvador, Brazil, August 2005.
[9] T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In 28th ACM SIGIR, pages 154-161, Salvador, Brazil, August 2005.
[10] U. Lee, Z. Liu, and J. Cho. Automatic identification of user goals in Web search. In 14th International World Wide Web Conference, pages 391-400, Edinburgh, May 2005.
[11] H. P. Luhn. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159-165, April 1958.
[12] T. Paek, S. Dumais, and R. Logan. WaveLens: A new view onto Internet search results. In SIGCHI Conference on Human Factors in Computing Systems, pages 727-734, Vienna, Austria, April 2004.
[13] D. Rose and D. Levinson. Understanding user goals in Web search. In 13th International World Wide Web Conference, pages 13-19, New York, May 2004.
[14] J.-T. Sun, D. Shen, H.-J. Zeng, Q. Yang, Y. Lu, and Z. Chen. Web-page summarization using clickthrough data. In 28th ACM SIGIR, pages 194-201, Salvador, Brazil, August 2005.
[15] A. Tombros and M. Sanderson. Advantages of query biased summaries in information retrieval. In 21st ACM SIGIR, pages 2-10, Melbourne, Australia, August 1998.
[16] R. Varadarajan and V. Hristidis. A system for query-specific document summarization. In 15th ACM International Conference on Information and Knowledge Management (CIKM), pages 622-631, Arlington, Virginia, November 2006.
[17] R. W. White, I. Ruthven, and J. M.
Jose. Finding relevant documents using top ranking sentences: An evaluation of two alternative schemes. In 25th ACM SIGIR, pages 57-64, Tampere, Finland, August 2002.
[18] G.-R. Xue, H.-J. Zeng, Z. Chen, Y. Yu, W.-Y. Ma, W. Xi, and W. Fan. Optimizing web search using Web click-through data. In 13th ACM Conference on Information and Knowledge Management (CIKM), pages 118-126, Washington, DC, November 2004.

The Influence of Caption Features on Clickthrough Patterns in Web Search

ABSTRACT
Web search engines present lists of captions, comprising title, snippet, and URL, to help users decide which search results to visit. Understanding the influence of features of these captions on Web search behavior may help validate algorithms and guidelines for their improved generation. In this paper we develop a methodology to use clickthrough logs from a commercial search engine to study user behavior when interacting with search result captions. The findings of our study suggest that relatively simple caption features, such as the presence of all query terms, the readability of the snippet, and the length of the URL shown in the caption, can significantly influence users' Web search behavior.

1. INTRODUCTION
The major commercial Web search engines all present their results in much the same way. Each search result is described by a brief caption, comprising the URL of the associated Web page, a title, and a brief summary (or "snippet") describing the contents of the page. Often the snippet is extracted from the Web page itself, but it may also be taken from external sources, such as the human-generated summaries found in Web directories. Figure 1 shows a typical Web search, with captions for the top three results. While
the three captions share the same\nbasic structure, their content differs in several respects.\nThe snippet of the third caption is nearly twice as long as that of the first, while the snippet is missing entirely from the second caption.\nThe title of the third caption contains all of the query terms in order, while the titles of the first and second captions contain only two of the three terms.\nOne of the query terms is repeated in the first caption.\nAll of the query terms appear in the URL of the third caption, while none appear in the URL of the first caption.\nThe snippet of the first caption consists of a complete sentence that concisely describes the associated page, while the snippet of the third caption consists of two incomplete sentences that are largely unrelated to the overall contents of the associated page and to the apparent intent of the query.\nWhile these differences may seem minor, they may also have a substantial impact on user behavior.\nA principal motivation for providing a caption is to assist the user in determining the relevance of the associated page without actually having to click through to the result.\nIn the case of a navigational query--particularly when the destination is well known--the URL alone may be sufficient to identify the desired page.\nBut in the case of an informational query, the title and snippet may be necessary to guide the user in selecting a page for further study, and she may judge the relevance of a page on the basis of the caption alone.\nWhen this judgment is correct, it can speed the search process by allowing the user to avoid unwanted material.\nWhen it fails, the user may waste her time clicking through to an inappropriate result and scanning a page containing little or nothing of interest.\nEven worse, the user may be misled into skipping a page that contains desired information.\nAll three of the results in figure 1 are relevant, with some limitations.\nThe first result links to the main Yahoo 
Kids! homepage, but it is then necessary to follow a link in a menu to find the main page for games. Despite appearances, the second result links to a surprisingly large collection of online games, primarily with environmental themes. The third result might be somewhat disappointing to a user, since it leads to only a single game, hosted at the Centers for Disease Control, that could not reasonably be described as "online". Unfortunately, these page characteristics are not entirely reflected in the captions. In this paper, we examine the influence of caption features on users' Web search behavior, using clickthroughs extracted from search engine logs as our primary investigative tool. Understanding this influence may help to validate algorithms and guidelines for the improved generation of the captions themselves.

Figure 1: Top three results for the query: kids online games.

In addition, these features can play a role in the process of inferring relevance judgments from user behavior [1]. By better understanding their influence, better judgments may result. Different caption generation algorithms might select snippets of different lengths from different areas of a page. Snippets may be generated in a query-independent fashion, providing a summary of the page as a whole, or in a query-dependent fashion, providing a summary of how the page relates to the query terms. The correct choice of snippet may depend on aspects of both the query and the result page. The title may be taken from the HTML header or extracted from the body of the document [8]. For links that re-direct, it may be possible to display alternative URLs. Moreover, for pages listed in human-edited Web directories such as the Open Directory Project1, it may be possible to display alternative titles and snippets derived from these listings. When these alternative snippets, titles and URLs are available, the selection of an appropriate combination for display may be guided by their
features. A snippet from a Web directory may consist of complete sentences and be less fragmentary than an extracted snippet. A title extracted from the body may provide greater coverage of the query terms. A URL before re-direction may be shorter and provide a clearer idea of the final destination.
The work reported in this paper was undertaken in the context of the Windows Live search engine. The image in figure 1 was captured from Windows Live and cropped to eliminate branding, advertising and navigational elements. The experiments reported in later sections are based on Windows Live query logs, result pages and relevance judgments collected as part of ongoing research into search engine performance [1, 2]. Nonetheless, given the similarity of caption formats across the major Web search engines, we believe the results are applicable to these other engines. The query in figure 1 produces results with similar relevance on the other major search engines. This and other queries produce captions that exhibit similar variations. In addition, we believe our methodology may be generalized to other search applications when sufficient clickthrough data is available.
2. RELATED WORK
While commercial Web search engines have followed similar approaches to caption display since their genesis, relatively little research has been published about methods for generating these captions and evaluating their impact on user behavior. Most related research in the area of document summarization has focused on newspaper articles and similar material, rather than Web pages, and has conducted evaluations by comparing automatically generated summaries with manually generated summaries. Most research on the display of Web results has proposed substantial interface changes, rather than addressing details of the existing interfaces.
2.1 Display of Web results
Varadarajan and Hristidis [16] are among the few who have attempted to improve directly upon the
snippets generated by commercial search systems, without introducing additional changes to the interface.\nThey generated snippets from spanning trees of document graphs and experimentally compared these snippets against the snippets generated for the same documents by the Google desktop search system and MSN desktop search system.\nThey evaluated their method by asking users to compare snippets from the various sources.\nCutrell and Guan [4] conducted an eye-tracking study to investigate the influence of snippet length on Web search performance and found that the optimal snippet length varied according to the task type, with longer snippets leading to improved performance for informational tasks and shorter snippets for navigational tasks.\nSIGIR 2007 Proceedings Session 6: Summaries\nMany researchers have explored alternative methods for displaying Web search results.\nDumais et al. [5] compared an interface typical of those used by major Web search engines with one that groups results by category, finding that users perform search tasks faster with the category interface.\nPaek et al. [12] propose an interface based on a fisheye lens, in which mouse hovers and other events cause captions to zoom and snippets to expand with additional text.\nWhite et al. 
[17] evaluated three alternatives to the standard Web search interface: one that displays expanded summaries on mouse hovers, one that displays a list of top ranking sentences extracted from the results taken as a group, and one that updates this list automatically through implicit feedback. They treat the length of time that a user spends viewing a summary as an implicit indicator of relevance. Their goal was to improve the ability of users to interact with a given result set, helping them to look beyond the first page of results and to reduce the burden of query re-formulation.
2.2 Document summarization
Outside the narrow context of Web search, considerable related research has been undertaken on the problem of document summarization. The basic idea of extractive summarization--creating a summary by selecting sentences or fragments--goes back to the foundational work of Luhn [11]. Luhn's approach uses term frequencies to identify "significant words" within a document and then selects and extracts sentences that contain significant words in close proximity. A considerable fraction of later work may be viewed as extending and tuning this basic approach, developing improved methods for identifying significant words and selecting sentences. For example, a recent paper by Sun et al.
[14] describes a variant of Luhn's algorithm that uses clickthrough data to identify significant words. At its simplest, snippet generation for Web captions might also be viewed as following this approach, with query terms taking on the role of significant words.
Since 2000, the annual Document Understanding Conference (DUC) series, conducted by the US National Institute of Standards and Technology, has provided a vehicle for evaluating much of the research in document summarization (duc.nist.gov). Each year DUC defines a methodology for one or more experimental tasks, and supplies the necessary test documents, human-created summaries, and automatically extracted baseline summaries. The majority of participating systems use extractive summarization, but a number attempt natural language generation and other approaches. Evaluation at DUC is achieved through comparison with manually generated summaries. Over the years DUC has included both single-document summarization and multi-document summarization tasks. The main DUC 2007 task is posed as taking place in a question answering context: given a topic and 25 documents, participants were asked to generate a 250-word summary satisfying the information need embodied in the topic. We view our approach of evaluating summarization through the analysis of Web logs as complementing the approach taken at DUC.
A number of other researchers have examined the value of query-dependent summarization in a non-Web context. Tombros and Sanderson [15] compared the performance of 20 subjects searching a collection of newspaper articles when guided by query-independent vs.
query-dependent snippets. The query-independent snippets were created by extracting the first few sentences of the articles; the query-dependent snippets were created by selecting the highest scoring sentences under a measure biased towards sentences containing query terms. When query-dependent summaries were presented, subjects were better able to identify relevant documents without clicking through to the full text. Goldstein et al. [6] describe another extractive system for generating query-dependent summaries from newspaper articles. In their system, sentences are ranked by combining statistical and linguistic features. They introduce normalized measures of recall and precision to facilitate evaluation.
2.3 Clickthroughs
Queries and clickthroughs taken from the logs of commercial Web search engines have been widely used to improve the performance of these systems and to better understand how users interact with them. In early work, Broder [3] examined the logs of the AltaVista search engine and identified three broad categories of Web queries: informational, navigational and transactional. Rose and Levinson [13] conducted a similar study, developing a hierarchy of query goals with three top-level categories: informational, navigational and resource. Under their taxonomy, a transactional query as defined by Broder might fall under any of their three categories, depending on details of the desired transaction. Lee et al. [10] used clickthrough patterns to automatically categorize queries into one of two categories: informational--for which multiple Websites may satisfy all or part of the user's need--and navigational--for which users have a particular Website in mind. Under their taxonomy, a transactional or resource query would be subsumed under one of these two categories. Agichtein et al.
interpreted caption features, clickthroughs and other user behavior as implicit feedback to learn preferences [2] and improve ranking [1] in Web search. Xue et al. [18] present several methods for associating queries with documents by analyzing clickthrough patterns and links between documents. Queries associated with documents in this way are treated as meta-data. In effect, they are added to the document content for indexing and ranking purposes.
Of particular interest to us is the work of Joachims et al. [9] and Granka et al. [7]. They conducted eye-tracking studies and analyzed log data to determine the extent to which clickthrough data may be treated as implicit relevance judgments. They identified a "trust bias", which leads users to prefer the higher ranking result when all other factors are equal. In addition, they explored techniques that treat clicks as pairwise preferences. For example, a click at position N + 1--after skipping the result at position N--may be viewed as a preference for the result at position N + 1 relative to the result at position N. These findings form the basis of the clickthrough inversion methodology we use to interpret user interactions with search results. Our examination of large search logs complements their detailed analysis of a smaller number of participants.
3. CLICKTHROUGH INVERSIONS
While other researchers have evaluated the display of Web search results through user studies--presenting users with a small number of different techniques and asking them to complete experimental tasks--we approach the problem by extracting implicit feedback from search engine logs. Examining user behavior in situ allows us to consider many more queries and caption characteristics, with the volume of available data compensating for the lack of a controlled lab environment. The problem remains of interpreting the information in these logs as implicit indicators of user preferences,
and in this matter we are guided by the work of Joachims et al. [9].\nWe consider caption pairs, which appear adjacent to one another in the result list.\nOur primary tool for examining the influence of caption features is a type of pattern observed with respect to these caption pairs, which we call a clickthrough inversion.\nA clickthrough inversion occurs at position N when the result at position N receives fewer clicks than the result at position N + 1.\nFollowing Joachims et al. [9], we interpret a clickthrough inversion as indicating a preference for the lower ranking result, overcoming any trust bias.\nFor simplicity, in the remainder of this paper we refer to the higher ranking caption in a pair as \"caption A\" and the lower ranking caption as \"caption B\".\n3.1 Extracting clickthroughs\nFor the experiments reported in this paper, we sampled a subset of the queries and clickthroughs from the logs of the Windows Live search engine over a period of 3-4 days on three separate occasions: once for results reported in section 3.3, once for a pilot of our main experiment, and once for the experiment itself (sections 4 and 5).\nFor simplicity we restricted our sample to queries submitted to the US English interface and ignored any queries containing complex or non-alphanumeric terms (e.g. 
operators and phrases).\nAt the end of each sampling period, we downloaded captions for the queries associated with the clickthrough sample.\nWhen identifying clickthroughs in search engine logs, we consider only the first clickthrough action taken by a user after entering a query and viewing the result page.\nUsers are identified by IP address, which is a reasonably reliable method of eliminating multiple results from a single user, at the cost of falsely eliminating results from multiple users sharing the same address.\nBy focusing on the initial clickthrough, we hope to capture a user's impression of the relative relevance within a caption pair when first encountered.\nIf the user later clicks on other results or re-issues the same query, we ignore these actions.\nAny preference captured by a clickthrough inversion is therefore a preference among a group of users issuing a particular query, rather than a preference on the part of a single user.\nIn the remainder of the paper, we use the term \"clickthrough\" to refer only to this initial action.\nGiven the dynamic nature of the Web and the volumes of data involved, search engine logs are bound to contain considerable \"noise\".\nFor example, even over a period of hours or minutes the order of results for a given query can change, with some results dropping out of the top ten and new ones appearing.\nFor this reason, we retained clickthroughs for a specific combination of a query and a result only if this result appears in a consistent position for at least 50% of the clickthroughs.\nClickthroughs for the same result when it appeared at other positions were discarded.\nFor similar reasons, if we did not detect at least ten clickthroughs for a particular query during the sampling period, no clickthroughs for that query were retained.\nFigure 2: Clickthrough curves for three queries: a) a stereotypical navigational query, b) a stereotypical informational query, and c) a query exhibiting clickthrough 
inversions.\nThe outcome at the end of each sampling period is a set of records, with each record describing the clickthroughs for a given query\/result combination.\nEach record includes a query, a result position, a title, a snippet, a URL, the number of clickthroughs for this result, and the total number of clickthroughs for this query.\nWe then processed this set to generate clickthrough curves and identify inversions.\n3.2 Clickthrough curves\nIt could be argued that under ideal circumstances, clickthrough inversions would not be present in search engine logs.\nA hypothetical \"perfect\" search engine would respond to a query by placing the result most likely to be relevant first in the result list.\nEach caption would appropriately summarize the content of the linked page and its relationship to the query, allowing users to make accurate judgments.\nLater results would complement earlier ones, linking to novel or supplementary material, and ordered by their interest to the greatest number of users.\nFigure 2 provides clickthrough curves for three example queries.\nFor each example, we plot the percentage of clickthroughs against position for the top ten results.\nThe first query (craigslist) is stereotypically navigational, showing a spike at the \"correct\" answer (www.craigslist.org).\nThe second query is informational in the sense of Lee et al. 
[10] (periodic table of elements). Its curve is flatter and less skewed toward a single result. For both queries, the number of clickthroughs is consistent with the result positions, with the percentage of clickthroughs decreasing monotonically as position increases; this is the ideal behavior. Regrettably, no search engine is perfect, and clickthrough inversions are seen for many queries. For example, for the third query (kids online games) the clickthrough curve exhibits a number of clickthrough inversions, with an apparent preference for the result at position 4.
Several causes may be enlisted to explain the presence of an inversion in a clickthrough curve. The search engine may have failed in its primary goal, ranking more relevant results below less relevant results. Even when the relative ranking is appropriate, a caption may fail to reflect the content of the underlying page with respect to the query, leading the user to make an incorrect judgment. Before turning to the second case, we address the first, and examine the extent to which relevance alone may explain these inversions.
3.3 Relevance
The simplest explanation for the presence of a clickthrough inversion is a relevance difference between the higher ranking member of a caption pair and the lower ranking member. In order to examine the extent to which relevance plays a role in clickthrough inversions, we conducted an initial experiment using a set of 1,811 queries with associated judgments created as part of ongoing work. Over a four-day period, we sampled the search engine logs and extracted over one hundred thousand clicks involving these queries. From these clicks we identified 355 clickthrough inversions, satisfying the criteria of section 3.1, where relevance judgments existed for both pages. The relevance judgments were made by independent assessors viewing the pages themselves, rather than the captions. Relevance was assessed on a 6-point scale. The outcome is presented in figure 3,
which shows the explicit judgments for the 355 clickthrough inversions. In all of these cases, there were more clicks on the lower ranked member of the pair (B). The figure shows the corresponding relevance judgments. For example, the first row, rel(A) < rel(B), covers the inversions in which the lower ranking page was also the more relevant one.
Figure 3: Relevance relationships at clickthrough inversions. Compares the relevance of the higher ranking member of a caption pair (rel(A)) to the relevance of the lower ranking member (rel(B)), where caption A received fewer clicks than caption B.
If the similarity between two pseudo-documents is larger than σ, there would be an edge connecting the corresponding two vertices. After the similarity graph Gσ is built, the star clustering algorithm clusters the documents using a greedy algorithm as follows:
1. Associate every vertex in Gσ with a flag, initialized as unmarked.
2. From the unmarked vertices, find the one with the highest degree and let it be u.
3. Mark the flag of u as center.
4. Form a cluster C containing u and all of its neighbors that are not marked as center. Mark all the selected neighbors as satellites.
5. Repeat from step 2 until all the vertices in Gσ are marked.
Each cluster is star-shaped, consisting of a single center and several satellites. There is only one parameter σ in the star clustering algorithm. A large σ enforces that connected documents have high similarities, and thus the clusters tend to be small. On the other hand, a small σ will make the clusters large and less coherent. We will study the impact of this parameter in our experiments. A useful feature of the star clustering algorithm is that it outputs a center for each cluster. In the past query collection Hq, each document corresponds to a query. This center query can be regarded as the most representative one for the whole cluster, and thus naturally provides a label for the cluster. All the clusters obtained are related to the input query q from different perspectives, and they represent the possible aspects of interests
about query q of users.
4.3 Categorizing Search Results
In order to organize the search results according to users' interests, we use the learned aspects from the related past queries to categorize the search results. Given the top m Web pages returned by a search engine for q, {s1, ..., sm}, we group them into different aspects using a categorization algorithm. In principle, any categorization algorithm can be used here; we use a simple centroid-based method, although more sophisticated methods such as SVM [21] may be expected to achieve even better performance. Based on the pseudo-documents in each discovered aspect Ci, we build a centroid prototype pi by taking the average of all the vectors of the documents in Ci: pi = (1/|Ci|) Σ_{l ∈ Ci} vl. All these pi's are used to categorize the search results. Specifically, for any search result sj, we build a TF-IDF vector. The centroid-based method computes the cosine similarity between the vector representation of sj and each centroid prototype pi, and assigns sj to the aspect with which it has the highest cosine similarity score. All the aspects are finally ranked according to the number of search results they contain. Within each aspect, the search results are ranked according to their original search engine ranking.
5. DATA COLLECTION
We construct our data set from the MSN search log data set released by Microsoft Live Labs in 2006 [14]. In total, this log data spans 31 days from 05/01/2006 to 05/31/2006. There are 8,144,000 queries, 3,441,000 distinct queries, and 4,649,000 distinct URLs in the raw data. To test our algorithm, we separate the whole data set into two parts according to time: the first 2/3 of the data is used to simulate the historical data that a search engine has accumulated, and the last 1/3 is used to simulate future queries. In the history collection, we clean the data by keeping only frequent, well-formatted, English queries (queries which
only contain characters 'a', 'b', ..., 'z', and space, and appear more than 5 times). After cleaning, we obtain a total of 169,057 unique queries in our history collection. On average, each query has 3.5 distinct clicks. We build the pseudo-documents for all these queries as described in Section 3. The average length of these pseudo-documents is 68 words, and the total size of our history collection is 129MB.
We construct our test data from the last 1/3 of the data. According to time, we separate this data into two equal test sets for cross-validation to set parameters. For each test set, we use every session as a test case. Each session contains a single query and several clicks. (Note that we do not aggregate sessions for test cases; different test cases may have the same queries but possibly different clicks.) Since it is infeasible to ask the original user who submitted a query to judge the results for the query, we follow prior work [11] and opt to use the clicks associated with the query in a session to approximate relevant documents. Using clicks as judgments, we can then compare different algorithms for organizing search results to see how well these algorithms can help users reach the clicked URLs.
Organizing search results into different aspects is expected to help informational queries, so it makes sense to focus on informational queries in our evaluation. For each test case, i.e., each session, we count the number of different clicks and filter out those test cases with fewer than 4 clicks, under the assumption that a query with more clicks is more likely to be an informational query. Since we want to test whether our algorithm can learn from past queries, we also filter out those test cases whose queries cannot retrieve at least 100 pseudo-documents from our history collection. Finally, we obtain 172 and 177 test cases in the first and second test sets respectively. On average, we have 6.23 and 5.89 clicks for each test
case in the two test sets respectively.
6. EXPERIMENTS
In this section, we describe our experiments on search result organization based on past search engine logs.
6.1 Experimental Design
We use two baseline methods to evaluate the proposed method for organizing search results. For each test case, the first method is the default ranked list from a search engine (baseline). The second method is to organize the search results by clustering them (cluster-based). For fair comparison, we use the same clustering algorithm as our log-based method (i.e., star clustering). That is, we treat each search result as a document, construct the similarity graph, and find the star-shaped clusters. We compare our method (log-based) with the two baseline methods in the following experiments. For both cluster-based and log-based methods, the search results within each cluster are ranked based on their original ranking given by the search engine.
To compare different result organization methods, we adopt a method similar to that of [9]. That is, we compare the quality (e.g., precision) of the best cluster, defined as the one with the largest number of relevant documents. Organizing search results into clusters is intended to help users navigate to relevant documents quickly, and this metric simulates a scenario in which users always choose the right cluster and look into it. Specifically, we download and organize the top 100 search results into aspects for each test case. We use Precision at 5 documents (P@5) in the best cluster as the primary measure to compare different methods. P@5 is a meaningful measure because it tells us the perceived precision when the user opens a cluster and looks at the first 5 documents. We also use Mean Reciprocal Rank (MRR) as another metric, calculated as MRR = (1/|T|) Σ_{q ∈ T} 1/rq, where T is the set of test queries and rq is the rank of the first relevant document for q.
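The two metrics above are straightforward to compute from a ranked list and a set of click-approximated relevant documents. The following is a minimal illustrative sketch (function and variable names are ours, not from the paper):

```python
def precision_at_k(ranked, relevant, k=5):
    """P@k: fraction of the top-k results that are relevant."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

def mean_reciprocal_rank(test_cases):
    """MRR = (1/|T|) * sum over queries of 1/rank of the first relevant result.

    test_cases: list of (ranked_results, relevant_set) pairs,
    where relevant_set approximates judgments via clicks.
    """
    total = 0.0
    for ranked, relevant in test_cases:
        for i, doc in enumerate(ranked, start=1):
            if doc in relevant:
                total += 1.0 / i
                break
    return total / len(test_cases)

# Toy example with two hypothetical test cases.
cases = [(["a", "b", "c"], {"b"}),   # first relevant at rank 2 -> 1/2
         (["x", "y", "z"], {"x"})]   # first relevant at rank 1 -> 1/1
print(mean_reciprocal_rank(cases))                            # (0.5 + 1.0) / 2 = 0.75
print(precision_at_k(["a", "b", "c", "d", "e"], {"b", "d"}))  # 2/5 = 0.4
```

In the paper's setting, `ranked` would be the first 5 documents of the best cluster for P@5, and `relevant` the clicked URLs of the session.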
To give a fair comparison across different organization algorithms, we force both cluster-based and log-based methods to output the same number of aspects and force each search result to be in one and only one aspect. The number of aspects is fixed at 10 in all the following experiments. The star clustering algorithm can output different numbers of clusters for different inputs. To constrain the number of clusters to 10, we order all the clusters by their sizes and select the top 10 as aspect candidates. We then re-assign each search result to whichever of these 10 selected aspects has the highest similarity score between the result and the aspect centroid. In our experiments, we observe that the sizes of the best clusters are all larger than 5, which ensures that P@5 is a meaningful metric.
6.2 Experimental Results
Our main hypothesis is that organizing search results based on users' interests learned from a search log data set is more beneficial than presenting them as a simple list or clustering them by content alone. In the following, we test our hypothesis from two perspectives: organization and labeling.

Method            Test set 1         Test set 2
                  MRR      P@5       MRR      P@5
Baseline          0.7347   0.3325    0.7393   0.3288
Cluster-based     0.7735   0.3162    0.7666   0.2994
Log-based         0.7833   0.3534    0.7697   0.3389
Cluster/Baseline  +5.28%   -4.87%    +3.69%   -8.93%
Log/Baseline      +6.62%   +6.31%    +4.10%   +3.09%
Log/Cluster       +1.27%  +11.76%    +0.40%  +13.20%
Table 2: Comparison of different methods by MRR and P@5. The lower part shows the percentage of relative improvement.

Comparison        Test set 1     Test set 2
                  Impr./Decr.    Impr./Decr.
Cluster/Baseline  53/55          50/64
Log/Baseline      55/44          60/45
Log/Cluster       68/47          69/44
Table 3: Pairwise comparison w.r.t. the number of test cases whose P@5 values are improved versus decreased w.r.t. the baseline.

6.2.1 Overall performance
We compare three methods, the basic search engine ranking (baseline), the traditional clustering based method (cluster-based), and our log-based method
(log-based), in Table 2 using MRR and P@5. We optimize the parameter σ for each collection individually based on P@5 values; this shows the best performance that each method can achieve. The table shows that in both test collections, our method is better than both the baseline and the cluster-based methods. For example, in the first test collection, the MRR of the baseline method is 0.7347, that of the cluster-based method is 0.7735, and that of our method is 0.7833. We achieve higher accuracy than both the cluster-based method (1.27% improvement) and the baseline method (6.62% improvement). The P@5 values are 0.3325 for the baseline and 0.3162 for the cluster-based method, but 0.3534 for our method. Our method improves over the baseline by 6.31%, while the cluster-based method even decreases the accuracy. This is because the cluster-based method organizes the search results based only on their contents, and may thus organize the results differently from users' preferences. This confirms our hypothesis about the bias of the cluster-based method. Comparing our method with the cluster-based method, we achieve significant improvement on both test collections: the p-values of the significance tests based on P@5 on the two collections are 0.01 and 0.02 respectively. This shows that our log-based method is effective in learning users' preferences from the past query history, and thus can organize the search results in a way more useful to users.
We showed the optimal results above. To test the sensitivity of the parameter σ of our log-based method, we use one of the test sets to tune the parameter to be optimal and then use the tuned parameter on the other set. We compare this result (log tuned outside) with the optimal results of both the cluster-based (cluster optimized) and log-based methods (log optimized) in Figure 1. We can see that, as expected, the performance using the parameter tuned on a separate set is worse than the optimal performance. However, our method still
performs much better than the optimal results of the cluster-based method on both test collections.
Figure 1: Results using parameters tuned from the other test collection, compared with the optimal performance of the cluster-based and our log-based methods.
Figure 2: The correlation between performance change and result diversity.
In Table 3, we show pairwise comparisons of the three methods in terms of the numbers of test cases for which P@5 is increased versus decreased. We can see that our method improves more test cases than the other two methods do. In the next section, we present a more detailed analysis of which types of test cases are improved by our method.
6.2.2 Detailed Analysis
To better understand the cases where our log-based method can improve the accuracy, we test two properties: result diversity and query difficulty. All the analysis below is based on test set 1.
Diversity Analysis: Intuitively, organizing search results into different aspects is more beneficial for queries whose results are diverse, as the results of such queries tend to form two or more big clusters. In order to test the hypothesis that the log-based method helps such queries more, we compute the size ratio of the biggest and second biggest clusters in our log-based results and use this ratio as an indicator of diversity. If the ratio is small, the first two clusters differ little in size and the results are thus more diverse; in this case, we would expect our method to help more. The results are shown in Figure 2. In this figure, we partition the ratios into 4 bins, corresponding to the ratio ranges [1, 2), [2, 3), [3, 4), and [4, +∞) respectively. ([i, j) means that i ≤ ratio < j.)
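This diversity indicator and its binning can be sketched as follows (illustrative code with hypothetical cluster sizes; the names are ours):

```python
def diversity_ratio(cluster_sizes):
    """Size ratio of the biggest to the second biggest cluster.

    A ratio close to 1 means the top two clusters are comparable in
    size, i.e., the results are diverse.
    """
    sizes = sorted(cluster_sizes, reverse=True)
    return sizes[0] / sizes[1]

def ratio_bin(ratio):
    """Map a ratio (>= 1) to bins 1..4 for [1,2), [2,3), [3,4), [4,+inf)."""
    return min(int(ratio), 4)

sizes = [30, 24, 10, 5]        # hypothetical cluster sizes for one test case
r = diversity_ratio(sizes)     # 30 / 24 = 1.25 -> diverse results
print(r, ratio_bin(r))         # 1.25 1
```

Since the biggest cluster is at least as large as the second biggest, the ratio is always at least 1, so truncation to an integer capped at 4 reproduces the four bins exactly.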
In each bin, we count the numbers of test cases whose P@5 values are improved versus decreased with respect to the ranking baseline, and plot the numbers in this figure. We can observe that when the ratio is smaller, the log-based method improves more test cases, but when the ratio is large, the log-based method cannot improve over the baseline. For example, in bin 1, 48 test cases are improved and 34 are decreased, while in bin 4, all 4 test cases are decreased. This confirms our hypothesis that our method helps more when the query has more diverse results. It also suggests that we should turn off the option of re-organizing search results when the results are not very diverse (e.g., as indicated by the cluster size ratio).
Figure 3: The correlation between performance change and query difficulty.
Difficulty Analysis: Difficult queries have been studied in recent years [7, 25, 5]. Here we analyze the effectiveness of our method in helping difficult queries. We quantify the difficulty of a query by the Mean Average Precision (MAP) of the original search engine ranking for each test case. We then order the 172 test cases in test set 1 by increasing MAP value and partition them into 4 bins, each with a roughly equal number of test cases. A small MAP means that the utility of the original ranking is low. Bin 1 contains the test cases with the lowest MAP values and bin 4 contains those with the highest MAP values.
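The difficulty binning above can be sketched as follows (illustrative code; the function names are ours, and average precision over the click-based judgments stands in for the paper's per-query MAP):

```python
def average_precision(ranked, relevant):
    """Average precision of one ranking: mean of P@i at each relevant rank i."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def difficulty_bins(test_cases, n_bins=4):
    """Order test cases by average precision of the original ranking
    (ascending) and split them into n_bins roughly equal bins.
    The first bin holds the most difficult queries (lowest precision)."""
    ordered = sorted(test_cases, key=lambda tc: average_precision(*tc))
    size = -(-len(ordered) // n_bins)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# Toy example: (ranked results, clicked/relevant set) pairs.
cases = [(["a", "b", "c"], {"a", "c"}),  # AP = (1/1 + 2/3) / 2
         (["x", "y", "z"], {"z"})]       # AP = 1/3
print([round(average_precision(r, rel), 3) for r, rel in cases])
```

With 172 test cases and 4 bins, each bin holds 43 cases, matching the "roughly equal number" described above.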
For each bin, we compute the numbers of test cases whose P@5 values are improved versus decreased.
Figure 3 shows the results.
Clearly, in bin 1, most of the test cases are improved (24 vs. 3), while in bin 4, the log-based method may decrease performance (3 vs. 20).
This shows that our method is more beneficial to difficult queries, as expected, since clustering search results is intended to help difficult queries.
It also shows that our method does not really help easy queries; thus we should turn off our organization option for easy queries.
6.2.3 Parameter Setting
We examine parameter sensitivity in this section.
For the star clustering algorithm, we study the similarity threshold parameter σ.
For the OKAPI retrieval function, we study the parameters k1 and b.
We also study the impact of the number of past queries retrieved in our log-based method.
Figure 4 shows the impact of the parameter σ for both cluster-based and log-based methods on both test sets.
We vary σ from 0.05 to 0.3 with step 0.05.
The figure shows that the performance is not very sensitive to σ; we can always obtain the best result in the range 0.1 ≤ σ ≤ 0.25.
Figure 4: The impact of similarity threshold σ on both cluster-based and log-based methods, for both test collections (x-axis: σ; y-axis: P@5).
In Table 4, we show the impact of the OKAPI parameters.
We vary k1 from 1.0 to 2.0 with step 0.2 and b from 0 to 1 with step 0.2.
From this table, it is clear that P@5 is also not very sensitive to the parameter setting: most of the values are larger than 0.35, and the default values k1 = 1.2 and b = 0.8 give approximately optimal results.

k1 \ b   0.0      0.2      0.4      0.6      0.8      1.0
1.0      0.3476   0.3406   0.3453   0.3616   0.3500   0.3453
1.2      0.3418   0.3383   0.3453   0.3593   0.3534   0.3546
1.4      0.3337   0.3430   0.3476   0.3604   0.3546   0.3465
1.6      0.3476   0.3418   0.3523   0.3534   0.3581   0.3476
1.8      0.3465   0.3418   0.3546   0.3558   0.3616   0.3476
2.0      0.3453   0.3500   0.3534   0.3558   0.3569   0.3546
Table 4: Impact of OKAPI parameters k1 and b (P@5).

We further study the impact of the amount of history information to learn from by varying the number of past queries retrieved for learning aspects.
The results on both test collections are shown in Figure 5.
We can see that the performance gradually increases as we enlarge the number of past queries retrieved; thus our method could potentially learn more as we accumulate more history.
More importantly, as time goes on, more and more queries will have sufficient history, so we can improve more and more queries.
Figure 5: The impact of the number of past queries retrieved (x-axis: number of past queries, 20 to 150; y-axis: P@5, for both test sets).
6.2.4 An Illustrative Example
We use the query area codes to show the difference between the results of the log-based method and the cluster-based method.
This query may refer to phone codes or zip codes.
Table 5 shows the representative keywords extracted from the three biggest clusters of both methods.
In the cluster-based method, the results are partitioned by location: local or international.
In the log-based method, the results are disambiguated into the two senses: phone codes or zip codes.
While both are reasonable partitions, our evaluation indicates that most users issuing such a query are interested in either phone codes or zip codes: the P@5 values of the cluster-based and log-based methods are 0.2 and 0.6, respectively.
Therefore our log-based method is more effective in helping users navigate to their desired results.

Cluster-based method   Log-based method
city, state            telephone, city, international
local, area            phone, dialing
international          zip, postal
Table 5: An example showing the difference between the cluster-based method and our log-based method.

6.2.5 Labeling Comparison
We now
compare the labels produced by the cluster-based method and the log-based method.
The cluster-based method has to rely on the keywords extracted from the snippets to construct the label for each cluster, while our log-based method can avoid this difficulty by taking advantage of queries.
Specifically, for the cluster-based method, we count the frequency of each keyword appearing in a cluster and use the most frequent keywords as the cluster label.
For the log-based method, we use the center of each star cluster as the label for the corresponding cluster.
In general, it is not easy to quantify the readability of a cluster label automatically, so we use examples to show the difference between the cluster-based and the log-based methods.
In Table 6, we list the labels of the top 5 clusters for two example queries, jaguar and apple.
For the cluster-based method, we separate keywords by commas since they do not form a phrase.
From this table, we can see that our log-based method gives more readable labels because it generates labels based on users' queries.
This is another advantage of our way of organizing search results over the clustering approach.

Label comparison for query jaguar:
   Log-based method          Cluster-based method
1. jaguar animal             jaguar, auto, accessories
2. jaguar auto accessories   jaguar, type, prices
3. jaguar cats               jaguar, panthera, cats
4. jaguar repair             jaguar, services, boston
5. jaguar animal pictures    jaguar, collection, apparel

Label comparison for query apple:
   Log-based method          Cluster-based method
1. apple computer            apple, support, product
2. apple ipod                apple, site, computer
3. apple crisp recipe        apple, world, visit
4. fresh apple cake          apple, ipod, amazon
5. apple laptop              apple, products, news
Table 6: Cluster label comparison.

7.
CONCLUSIONS AND FUTURE WORK
In this paper, we studied the problem of organizing search results in a user-oriented manner.
To attain this goal, we rely on search engine logs to learn interesting
aspects from the users' perspective.
Given a query, we retrieve its related queries from past query history, learn the aspects by clustering the past queries and the associated clickthrough information, and categorize the search results into the aspects learned.
We compared our log-based method with the traditional cluster-based method and with the baseline of search engine ranking.
The experiments show that our log-based method can consistently outperform the cluster-based method and improve over the ranking baseline, especially when the queries are difficult or the search results are diverse.
Furthermore, our log-based method can generate more meaningful aspect labels than the labels produced by clustering the search results themselves.
There are several interesting directions for further extending our work.
First, although our experimental results have clearly shown the promise of learning from search logs to organize search results, the methods we have experimented with are relatively simple; it would be interesting to explore other, potentially more effective, methods.
In particular, we hope to develop probabilistic models for learning aspects and organizing results simultaneously.
Second, with the proposed way of organizing search results, we can expect to obtain informative feedback from a user (e.g., the aspect chosen by the user to view); it would thus be interesting to study how to further improve the organization of the results based on such feedback.
Finally, we can combine a general search log with any personal search log to customize and optimize the organization of search results for each individual user.
8.
ACKNOWLEDGMENTS
We thank the anonymous reviewers for their valuable comments.
This work is supported in part by a Microsoft Live Labs Research Grant, a Google Research Grant, and NSF CAREER grant IIS-0347933.
9.
REFERENCES
[1] E. Agichtein, E. Brill, and S. T.
Dumais. Improving web search ranking by incorporating user behavior information. In SIGIR, pages 19-26, 2006.
[2] J. A. Aslam, E. Pelekhov, and D. Rus. The star clustering algorithm for static and dynamic information organization. Journal of Graph Algorithms and Applications, 8(1):95-129, 2004.
[3] R. A. Baeza-Yates. Applications of web query mining. In ECIR, pages 7-22, 2005.
[4] D. Beeferman and A. L. Berger. Agglomerative clustering of a search engine query log. In KDD, pages 407-416, 2000.
[5] D. Carmel, E. Yom-Tov, A. Darlow, and D. Pelleg. What makes a query difficult? In SIGIR, pages 390-397, 2006.
[6] H. Chen and S. T. Dumais. Bringing order to the web: automatically categorizing search results. In CHI, pages 145-152, 2000.
[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Predicting query performance. In SIGIR, pages 299-306, 2002.
[8] S. T. Dumais, E. Cutrell, and H. Chen. Optimizing search by showing results in context. In CHI, pages 277-284, 2001.
[9] M. A. Hearst and J. O. Pedersen. Reexamining the cluster hypothesis: Scatter/gather on retrieval results. In SIGIR, pages 76-84, 1996.
[10] T. Joachims. Optimizing search engines using clickthrough data. In KDD, pages 133-142, 2002.
[11] T. Joachims. Evaluating retrieval performance using clickthrough data. In J. Franke, G. Nakhaeizadeh, and I. Renz, editors, Text Mining, pages 79-96. Physica/Springer Verlag, 2003.
[12] R. Jones, B. Rey, O. Madani, and W. Greiner. Generating query substitutions. In WWW, pages 387-396, 2006.
[13] K. Kummamuru, R. Lotlikar, S. Roy, K. Singal, and R. Krishnapuram. A hierarchical monothetic document clustering algorithm for summarization and browsing search results. In WWW, pages 658-665, 2004.
[14] Microsoft Live Labs. Accelerating search in academic research, 2006. http://research.microsoft.com/ur/us/fundingopps/RFPs/ Search 2006 RFP.aspx.
[15] P. Pirolli, P. K. Schank, M. A. Hearst, and C.
Diehl. Scatter/gather browsing communicates the topic structure of a very large text collection. In CHI, pages 213-220, 1996.
[16] F. Radlinski and T. Joachims. Query chains: learning to rank from implicit feedback. In KDD, pages 239-248, 2005.
[17] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In SIGIR, pages 232-241, 1994.
[18] G. Salton, A. Wong, and C. S. Yang. A vector space model for automatic indexing. Commun. ACM, 18(11):613-620, 1975.
[19] X. Shen, B. Tan, and C. Zhai. Context-sensitive information retrieval using implicit feedback. In SIGIR, pages 43-50, 2005.
[20] C. J. van Rijsbergen. Information Retrieval, second edition. Butterworths, London, 1979.
[21] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, Berlin, 1995.
[22] Vivisimo. http://vivisimo.com/.
[23] X. Wang, J.-T. Sun, Z. Chen, and C. Zhai. Latent semantic analysis for multiple-type interrelated data objects. In SIGIR, pages 236-243, 2006.
[24] J.-R. Wen, J.-Y. Nie, and H. Zhang. Clustering user queries of a search engine. In WWW, pages 162-168, 2001.
[25] E. Yom-Tov, S. Fine, D. Carmel, and A. Darlow. Learning to estimate query difficulty: including applications to missing content detection and distributed information retrieval. In SIGIR, pages 512-519, 2005.
[26] O. Zamir and O. Etzioni. Web document clustering: A feasibility demonstration. In SIGIR, pages 46-54, 1998.
[27] O. Zamir and O. Etzioni. Grouper: A dynamic clustering interface to web search results. Computer Networks, 31(11-16):1361-1374, 1999.
[28] H.-J. Zeng, Q.-C. He, Z. Chen, W.-Y. Ma, and J.
Ma.\nLearning to cluster web search results.\nIn SIGIR, pages 210-217, 2004.","lvl-3":"Learn from Web Search Logs to Organize Search Results\nABSTRACT\nEffective organization of search results is critical for improving the utility of any search engine.\nClustering search results is an effective way to organize search results, which allows a user to navigate into relevant documents quickly.\nHowever, two deficiencies of this approach make it not always work well: (1) the clusters discovered do not necessarily correspond to the interesting aspects of a topic from the user's perspective; and (2) the cluster labels generated are not informative enough to allow a user to identify the right cluster.\nIn this paper, we propose to address these two deficiencies by (1) learning \"interesting aspects\" of a topic from Web search logs and organizing search results accordingly; and (2) generating more meaningful cluster labels using past query words entered by users.\nWe evaluate our proposed method on a commercial search engine log data.\nCompared with the traditional methods of clustering search results, our method can give better result organization and more meaningful labels.\n1.\nINTRODUCTION\nThe utility of a search engine is affected by multiple factors.\nWhile the primary factor is the soundness of the underlying retrieval model and ranking function, how to organize and present search results is also a very important factor that can affect the utility of a search engine significantly.\nCompared with the vast amount of literature on retrieval models, however, there is relatively little research on how to improve the effectiveness of search result organization.\nThe most common strategy of presenting search results is a simple ranked list.\nIntuitively, such a presentation strategy is reasonable for non-ambiguous, homogeneous search\nresults; in general, it would work well when the search results are good and a user can easily find many relevant documents in the top 
ranked results.
However, when the search results are diverse (e.g., due to ambiguity or multiple aspects of a topic), as is often the case in Web search, the ranked list presentation would not be effective; in such a case, it would be better to group the search results into clusters so that a user can easily navigate into a particular interesting group.
For example, the results on the first page returned from Google for the ambiguous query "jaguar" (as of Dec. 2nd, 2006) contain at least four different senses of "jaguar" (i.e., car, animal, software, and a sports team); even for a more refined query such as "jaguar team picture", the results are still quite ambiguous, including at least four different jaguar teams: a wrestling team, a Jaguar car team, the Southwestern College Jaguar softball team, and the Jacksonville Jaguar football team.
Moreover, if a user wants to find a place to download jaguar software, a query such as "download jaguar" is also not very effective, as the dominant results are about downloading jaguar brochures, jaguar wallpapers, and jaguar DVDs.
In these examples, a clustering view of the search results would be much more useful to a user than a simple ranked list.
Clustering is also useful when the search results are poor, in which case a user would otherwise have to go through a long list sequentially to reach the very first relevant document.
As a primary alternative strategy for presenting search results, clustering search results has been studied relatively extensively [9, 15, 26, 27, 28].
The general idea in virtually all the existing work is to perform clustering on a set of top-ranked search results to partition the results into natural clusters, which often correspond to different subtopics of the general query topic.
A label will be generated to indicate what each cluster is about.
A user can then view the labels to decide which cluster to look into.
Such a strategy has been shown to be more useful than the simple ranked
list presentation in several studies [8, 9, 26].
However, this clustering strategy has two deficiencies which make it not always work well: First, the clusters discovered in this way do not necessarily correspond to the interesting aspects of a topic from the user's perspective.
For example, users are often interested in finding either "phone codes" or "zip codes" when entering the query "area codes."
But the clusters discovered by the current methods may partition the results into "local codes" and "international codes."
Such clusters would not be very useful for users; even the best cluster would still have a low precision.
Second, the cluster labels generated are not informative enough to allow a user to identify the right cluster.
There are two reasons for this problem: (1) The clusters do not correspond to a user's interests, so their labels would not be very meaningful or useful.
(2) Even if a cluster really corresponds to an interesting aspect of the topic, the label may not be informative, because it is usually generated based on the contents of the cluster, and it is possible that the user is not very familiar with some of the terms.
For example, the ambiguous query "jaguar" may mean an animal or a car.
A cluster may be labeled as "panthera onca."
Although this is an accurate label for a cluster with the "animal" sense of "jaguar", if a user is not familiar with the phrase, the label would not be helpful.
In this paper, we propose a different strategy for partitioning search results, which addresses these two deficiencies by imposing a user-oriented partitioning of the search results.
That is, we try to figure out what aspects of a search topic are likely interesting to a user and organize the results accordingly.
Specifically, we propose to do the following: First, we will learn "interesting aspects" of similar topics from search logs and organize search results based on these "interesting aspects".
For example,
if the current query has occurred many times in the search logs, we can look at what kinds of pages the users viewed in the results and what kinds of words are used together with such a query.
When the query is ambiguous, such as "jaguar", we can expect to see some clear clusters corresponding to different senses of "jaguar".
More importantly, even if a word is not ambiguous (e.g., "car"), we may still discover interesting aspects such as "car rental" and "car pricing" (which happened to be the two primary aspects discovered in our search log data).
Such aspects can be very useful for organizing future search results about "car".
Note that in the case of "car", clusters generated using regular clustering may not necessarily reflect such interesting aspects about "car" from a user's perspective, even though the generated clusters are coherent and meaningful in other ways.
Second, we will generate more meaningful cluster labels using past query words entered by users.
Assuming that the past search logs can help us learn what specific aspects are interesting to users given the current query topic, we could also expect that those query words entered by users in the past that are associated with the current query can provide meaningful descriptions of the distinct aspects.
Thus they can be better labels than those extracted from the ordinary contents of search results.
To implement the ideas presented above, we rely on search engine logs and build a history collection containing the past queries and the associated clickthroughs.
Given a new query, we find its related past queries from the history collection and learn aspects by applying the star clustering algorithm [2] to these past queries and clickthroughs.
We can then organize the search results into these aspects using categorization techniques and label each aspect by the most representative past query in the query cluster.
We evaluate our method for result organization using
logs of a commercial search engine.
We compare our method with the default search engine ranking and the traditional clustering of search results.
The results show that our method is effective for improving search utility, and the labels generated using past query words are more readable than those generated using traditional clustering approaches.
The rest of the paper is organized as follows.
We first review the related work in Section 2.
In Section 3, we describe search engine log data and our procedure of building a history collection.
In Section 4, we present our approach in detail.
We describe the data set in Section 5, and the experimental results are discussed in Section 6.
Finally, we conclude our paper and discuss future work in Section 7.
2.
RELATED WORK
Our work is closely related to the study of clustering search results.
In [9, 15], the authors used the Scatter/Gather algorithm to cluster the top documents returned from a traditional information retrieval system.
Their results validate the cluster hypothesis [20] that relevant documents tend to form clusters.
The system "Grouper" was described in [26, 27].
In these papers, the authors proposed to cluster the results of a real search engine based on the snippets or the contents of returned documents.
Several clustering algorithms were compared, and the Suffix Tree Clustering algorithm (STC) was shown to be the most effective one.
They also showed that using snippets is as effective as using whole documents.
However, an important challenge of document clustering is to generate meaningful labels for clusters.
To overcome this difficulty, in [28], supervised learning algorithms were studied to extract meaningful phrases from the search result snippets, and these phrases were then used to group search results.
In [13], the authors proposed to use a monothetic clustering algorithm, in which a document is assigned to a cluster based on a single feature, to organize search results, and the
single feature is used to label the corresponding cluster.
Clustering search results has also attracted a lot of attention in industry and commercial Web services such as Vivisimo [22].
However, in all these works, the clusters are generated solely based on the search results.
Thus the obtained clusters do not necessarily reflect users' preferences, and the generated labels may not be informative from a user's viewpoint.
Methods of organizing search results based on text categorization are studied in [6, 8].
In this work, a text classifier is trained using a Web directory, and search results are then classified into the predefined categories.
The authors designed and studied different category interfaces, and they found that category interfaces are more effective than list interfaces.
However, predefined categories are often too general to reflect the finer-granularity aspects of a query.
Search logs have been exploited for several different purposes in the past.
For example, clustering search queries to find Frequently Asked Questions (FAQs) is studied in [24, 4].
Recently, search logs have been used for suggesting query substitutes [12], personalized search [19], Web site design [3], Latent Semantic Analysis [23], and learning retrieval ranking functions [16, 10, 1].
In our work, we explore past query history in order to better organize the search results for future queries.
We use the star clustering algorithm [2], which is a graph-partition-based approach, to learn interesting aspects from search logs given a new query.
Thus past queries are clustered in a query-specific manner, and this is another difference from previous works such as [24, 4], in which all queries in the logs are clustered in an offline batch manner.
3.
SEARCH ENGINE LOGS
4.
OUR APPROACH
4.1 Finding Related Past Queries
4.2 Learning Aspects by Clustering
4.2.1 Star Clustering
4.3 Categorizing Search Results
5.
DATA COLLECTION
6.
EXPERIMENTS
6.1 Experimental Design
6.2
Experimental Results
6.2.1 Overall Performance
6.2.2 Detailed Analysis
6.2.3 Parameter Setting
6.2.4 An Illustrative Example
6.2.5 Labeling Comparison
7.
CONCLUSIONS AND FUTURE WORK
In this paper, we studied the problem of organizing search results in a user-oriented manner.
To attain this goal, we rely on search engine logs to learn interesting aspects from the users' perspective.
Given a query, we retrieve its related queries from past query history, learn the aspects by clustering the past queries and the associated clickthrough information, and categorize the search results into the aspects learned.
We compared our log-based method with the traditional cluster-based method and with the baseline of search engine ranking.
The experiments show that our log-based method can consistently outperform the cluster-based method and improve over the ranking baseline, especially when the queries are difficult or the search results are diverse.
Furthermore, our log-based method can generate more meaningful aspect labels than the labels produced by clustering the search results themselves.
There are several interesting directions for further extending our work.
First, although our experimental results have clearly shown the promise of learning from search logs to organize search results, the methods we have experimented with are relatively simple; it would be interesting to explore other, potentially more effective, methods.
In particular, we hope to develop probabilistic models for learning aspects and organizing results simultaneously.
Second, with the proposed way of organizing search results, we can expect to obtain informative feedback from a user (e.g., the aspect chosen by the user to view); it would thus be interesting to study how to further improve the organization of the results based on such feedback.
Finally, we can combine a general search log with any personal search log to customize and optimize the
organization of search results for each individual user.","lvl-2":"Learn from Web Search Logs to Organize Search Results\nABSTRACT\nEffective organization of search results is critical for improving the utility of any search engine.\nClustering search results is an effective way to organize search results, which allows a user to navigate into relevant documents quickly.\nHowever, two deficiencies of this approach make it not always work well: (1) the clusters discovered do not necessarily correspond to the interesting aspects of a topic from the user's perspective; and (2) the cluster labels generated are not informative enough to allow a user to identify the right cluster.\nIn this paper, we propose to address these two deficiencies by (1) learning \"interesting aspects\" of a topic from Web search logs and organizing search results accordingly; and (2) generating more meaningful cluster labels using past query words entered by users.\nWe evaluate our proposed method on a commercial search engine log data.\nCompared with the traditional methods of clustering search results, our method can give better result organization and more meaningful labels.\n1.\nINTRODUCTION\nThe utility of a search engine is affected by multiple factors.\nWhile the primary factor is the soundness of the underlying retrieval model and ranking function, how to organize and present search results is also a very important
factor that can affect the utility of a search engine significantly. Compared with the vast amount of literature on retrieval models, however, there is relatively little research on how to improve the effectiveness of search result organization.

The most common strategy of presenting search results is a simple ranked list. Intuitively, such a presentation strategy is reasonable for non-ambiguous, homogeneous search results; in general, it works well when the search results are good and a user can easily find many relevant documents in the top-ranked results. However, when the search results are diverse (e.g., due to ambiguity or multiple aspects of a topic), as is often the case in Web search, the ranked-list presentation is not effective; in such a case, it is better to group the search results into clusters so that a user can easily navigate into a particular interesting group. For example, the results on the first page returned from Google for the ambiguous query "jaguar" (as of Dec. 2nd, 2006) contain at least four different senses of "jaguar" (i.e., car, animal, software, and a sports team); even for a more refined query such as "jaguar team picture", the results are still quite ambiguous, including at least four different jaguar teams: a wrestling team, a Jaguar car team, the Southwestern College Jaguar softball team, and the Jacksonville Jaguars football team. Moreover, if a user wants to find a place to download the Jaguar software, a query such as "download jaguar" is also not very effective, as the dominating results are about downloading a jaguar brochure, jaguar wallpaper, and a jaguar DVD. In these examples, a clustering view of the search results would be much more useful to a user than a simple ranked list. Clustering is also useful when the search results are poor, in which case a user would otherwise have to go through a long list sequentially to reach the very first relevant document.

As a primary alternative strategy for presenting search results, clustering search results has been studied relatively extensively [9, 15, 26, 27, 28]. The general idea in virtually all the existing work is to perform clustering on a set of top-ranked search results to partition the results into natural clusters, which often correspond to different subtopics of the general query topic. A label is generated to indicate what each cluster is about. A user can then view the labels to decide which cluster to look into. Such a strategy has been shown to be more useful than the simple ranked-list presentation in several studies [8, 9, 26].

However, this clustering strategy has two deficiencies which prevent it from always working well. First, the clusters discovered in this way do not necessarily correspond to the interesting aspects of a topic from the user's perspective. For example, users are often interested in finding either "phone codes" or "zip codes" when entering the query "area codes." But the clusters discovered by the current methods may
partition the results into "local codes" and "international codes." Such clusters would not be very useful for users; even the best cluster would still have low precision. Second, the cluster labels generated are not informative enough to allow a user to identify the right cluster. There are two reasons for this problem: (1) the clusters do not correspond to a user's interests, so their labels are not very meaningful or useful; and (2) even if a cluster really corresponds to an interesting aspect of the topic, the label may not be informative, because it is usually generated based on the contents of the cluster, and the user may not be familiar with some of the terms. For example, the ambiguous query "jaguar" may mean an animal or a car. A cluster may be labeled as "panthera onca." Although this is an accurate label for a cluster with the "animal" sense of "jaguar", if a user is not familiar with the phrase, the label will not be helpful.

In this paper, we propose a different strategy for partitioning search results, which addresses these two deficiencies by imposing a user-oriented partitioning of the search results. That is, we try to figure out which aspects of a search topic are likely interesting to a user and organize the results accordingly. Specifically, we propose to do the following. First, we learn "interesting aspects" of similar topics from search logs and organize search results based on these aspects. For example, if the current query has occurred many times in the search logs, we can look at what kinds of pages were viewed by users in the results and what kinds of words were used together with the query. When the query is ambiguous, such as "jaguar", we can expect to see clear clusters corresponding to the different senses of "jaguar". More importantly, even if a word is not ambiguous (e.g., "car"), we may still discover interesting aspects such as "car rental" and "car pricing" (which happened to be the two primary aspects discovered in our search log data). Such aspects can be very useful for organizing future search results about "car". Note that in the case of "car", clusters generated using regular clustering may not necessarily reflect such interesting aspects of "car" from a user's perspective, even though the generated clusters are coherent and meaningful in other ways.

Second, we generate more meaningful cluster labels using past query words entered by users. Assuming that past search logs can help us learn what specific aspects are interesting to users given the current query topic, we can also expect that the query words entered by users in the past that are associated with the current query provide meaningful descriptions of the distinct aspects. Thus they can be better labels than those extracted from the ordinary contents of search results.

To implement the ideas presented above, we rely on search engine logs and build a history collection containing past queries and the associated clickthroughs. Given a new query, we find its related past queries from the history collection and learn aspects by applying the star clustering algorithm [2] to these past queries and clickthroughs. We can then organize the search results into these aspects using categorization techniques and label each aspect with the most representative past query in the query cluster. We evaluate our method for result organization using the logs of a commercial search engine. We compare our method with the default search engine ranking and with traditional clustering of search results. The results show that our method is effective for improving search utility, and the labels generated using past query words are more readable than those generated using traditional clustering approaches.

The rest of the paper is organized as follows. We first review related work in Section 2. In Section 3, we
describe search engine log data and our procedure for building a history collection. In Section 4, we present our approach in detail. We describe the data set in Section 5, and the experimental results are discussed in Section 6. Finally, we conclude the paper and discuss future work in Section 7.

2. RELATED WORK
Our work is closely related to the study of clustering search results. In [9, 15], the authors used the Scatter/Gather algorithm to cluster the top documents returned from a traditional information retrieval system. Their results validate the cluster hypothesis [20] that relevant documents tend to form clusters. The system "Grouper" was described in [26, 27]. In these papers, the authors proposed to cluster the results of a real search engine based on the snippets or the contents of the returned documents. Several clustering algorithms were compared, and the Suffix Tree Clustering (STC) algorithm was shown to be the most effective. They also showed that using snippets is as effective as using whole documents. However, an important challenge of document clustering is to generate meaningful labels for clusters. To overcome this difficulty, in [28], supervised learning algorithms were studied to extract meaningful phrases from the search result snippets, and these phrases were then used to group search results. In [13], the authors proposed to use a monothetic clustering algorithm, in which a document is assigned to a cluster based on a single feature, to organize search results; the single feature is then used to label the corresponding cluster. Clustering search results has also attracted a lot of attention in industry and in commercial Web services such as Vivisimo [22]. However, in all these works, the clusters are generated solely based on the search results. Thus the obtained clusters do not necessarily reflect users' preferences, and the generated labels may not be informative from a user's viewpoint.

Methods of organizing search results based on text categorization are studied in [6, 8]. In this work, a text classifier is trained using a Web directory, and search results are then classified into the predefined categories. The authors designed and studied different category interfaces and found that category interfaces are more effective than list interfaces. However, predefined categories are often too general to reflect the finer-granularity aspects of a query.

Search logs have been exploited for several different purposes in the past. For example, clustering search queries to find Frequently Asked Questions (FAQs) is studied in [24, 4]. Recently, search logs have been used for suggesting query substitutes [12], personalized search [19], Web site design [3], Latent Semantic Analysis [23], and learning retrieval ranking functions [16, 10, 1]. In our work, we explore past query history in order to better organize the search results for future queries. We use the star clustering algorithm [2], which is a graph-partition-based approach, to learn interesting aspects from search logs given a new query. Thus past queries are clustered in a query-specific manner; this is another difference from previous works such as [24, 4], in which all queries in the logs are clustered in an offline batch manner.

3. SEARCH ENGINE LOGS
Search engine logs record the activities of Web users, and reflect users' actual needs or interests when conducting Web search. They generally contain the following information: the text queries that users submitted, the URLs that users clicked after submitting the queries, and the times of the clicks. Search engine logs are separated into sessions. A session includes a single query and all the URLs that a user clicked after issuing the query [24]. A small sample of search log data is shown in Table 1.

Table 1: Sample entries of search engine logs. Different IDs mean different sessions.

Our idea for using search engine logs is to treat these logs as past history, learn
users' interests from this history data automatically, and represent those interests by representative queries. For example, in the search logs, many queries are related to "car", which reflects that a large number of users are interested in information about cars. Different users are probably interested in different aspects of "car". Some are looking to rent a car, and may submit a query like "car rental"; some are more interested in buying a used car, and may submit a query like "used car"; and others may care more about buying a car accessory, so they may use a query like "car audio". By mining all the queries related to the concept of "car", we can learn the aspects that are likely interesting from a user's perspective. As an example, the following are some aspects of "car" learned from our search log data (see Section 5):

1. car rental, hertz car rental, enterprise car rental, ...
2. car pricing, used car, car values, ...
3. car accidents, car crash, car wrecks, ...
4. car audio, car stereo, car speaker, ...

In order to learn aspects from search engine logs, we preprocess the raw logs to build a history data collection. As shown above, search engine logs consist of sessions. Each session contains the text query and the clicked Web page URLs, together with the times of the clicks. However, this information is limited, since URLs alone are not informative enough to tell the intended meaning of a submitted query accurately. To gather richer information, we enrich each URL with additional text content. Specifically, given the query in a session, we obtain its top-ranked results using the search engine from which we obtained our log data, and extract the snippets of the URLs that were clicked according to the log information in the corresponding session. All the titles, snippets, and URLs of the clicked Web pages of that query are used to represent the session.

Different sessions may contain the same queries. Thus the number of sessions can be quite large, and the information in sessions with the same query can be redundant. In order to improve scalability and reduce data sparseness, we aggregate all the sessions which contain exactly the same query. That is, for each unique query, we build a "pseudo-document" which consists of all the descriptions of its clicks across the aggregated sessions. The keywords contained in the query itself can be regarded as a brief summary of the pseudo-document. All these pseudo-documents form our history data collection, which is used to learn interesting aspects in the following section.

4. OUR APPROACH
Our approach is to organize search results by aspects learned from search engine logs. Given an input query, the general procedure of our approach is:

1. Get its related information from search engine logs. All the information forms a working set.
2. Learn aspects from the information in the working set. These aspects correspond to users' interests given the input query. Each aspect is labeled with a representative query.
3. Categorize and organize the search results of the input query according to the aspects learned above.

We now give a detailed presentation of each step.

4.1 Finding Related Past Queries
Given a query q, a search engine will return a ranked list of Web pages. To know what users are really interested in given this query, we first retrieve its similar past queries from our preprocessed history data collection. Formally, assume we have N pseudo-documents in our history data set: H = {Q1, Q2, ..., QN}. Each Qi corresponds to a unique query and is enriched with clickthrough information as discussed in Section 3. To find q's related queries in H, a natural way is to use a text retrieval algorithm. Here we use the OKAPI method [17], one of the state-of-the-art retrieval methods. Specifically, we use the following formula to calculate the
similarity between query q and pseudo-document Qi:

  sim(q, Qi) = Σ_{w ∈ q ∩ Qi} c(w, q) × IDF(w) × (k1 + 1) × c(w, Qi) / (k1 × ((1 − b) + b × |Qi| / avdl) + c(w, Qi))

where k1 and b are OKAPI parameters set empirically, c(w, Qi) and c(w, q) are the counts of word w in Qi and q respectively, IDF(w) is the inverse document frequency of word w, |Qi| is the length of Qi, and avdl is the average document length in our history collection. Based on the similarity scores, we rank all the documents in H. The top-ranked documents provide a working set from which to learn the aspects that users are usually interested in. Each document in H corresponds to a past query, and thus the top-ranked documents correspond to q's related past queries.

4.2 Learning Aspects by Clustering
Given a query q, we use Hq = {d1, ..., dn} to denote the top-ranked pseudo-documents from the history collection H. These pseudo-documents contain the aspects that users are interested in. In this subsection, we propose to use a clustering method to discover these aspects. Any clustering algorithm could be applied here. In this paper, we use an algorithm based on graph partitioning: the star clustering algorithm [2]. A good property of star clustering in our setting is that it naturally suggests a good label for each cluster. We describe the star clustering algorithm below.

4.2.1 Star Clustering
Given Hq, star clustering starts by constructing a pairwise similarity graph on this collection based on the vector space model in information retrieval [18]. The clusters are then formed by dense subgraphs that are star-shaped. These clusters form a cover of the similarity graph. Formally, for each of the n pseudo-documents {d1, ..., dn} in the collection Hq, we compute a TF-IDF vector. Then, for each pair of documents di and dj (i ≠ j), their similarity is computed as the cosine score of their corresponding vectors vi and vj, that is,

  sim(di, dj) = cos(vi, vj) = (vi · vj) / (|vi| × |vj|).

A similarity graph G can then be constructed as follows, using a similarity threshold parameter v. Each document di is a vertex of G. If sim(di, dj) > v, there is an edge connecting the corresponding two vertices. After the similarity graph G is built, the star clustering algorithm clusters the documents using a greedy algorithm as follows:

1. Associate every vertex in G with a flag, initialized as "unmarked".
2. Among the unmarked vertices, find the one with the highest degree and call it u.
3. Mark the flag of u as "center".
4. Form a cluster C containing u and all its neighbors that are not marked as "center". Mark the selected neighbors as "satellites".
5. Repeat from step 2 until all the vertices in G are marked.

Each cluster is star-shaped, consisting of a single center and several satellites. There is only one parameter v in the star clustering algorithm. A large v requires connected documents to have high similarity, so the clusters tend to be small. On the other hand, a small v makes the clusters large and less coherent. We study the impact of this parameter in our experiments.

A good feature of the star clustering algorithm is that it outputs a center for each cluster. In the past query collection Hq, each document corresponds to a query. The center query can be regarded as the most representative query for the whole cluster, and thus naturally provides a label for the cluster. All the clusters obtained are related to the input query q from different perspectives, and they represent the possible aspects of query q that interest users.

4.3 Categorizing Search Results
In order to organize the search results according to users' interests, we use the aspects learned from the related past queries to categorize the search results. Given the top m Web pages returned by a search engine for q, {s1, ..., sm}, we group them into the different aspects using a categorization algorithm. In principle, any categorization algorithm can be used here. We use a simple centroid-based method
for categorization. Naturally, more sophisticated methods such as SVM [21] may be expected to achieve even better performance. Based on the pseudo-documents in each discovered aspect Ci, we build a centroid prototype pi by taking the average of all the vectors of the documents in Ci:

  pi = (1 / |Ci|) × Σ_{d ∈ Ci} vd

where vd is the TF-IDF vector of document d. All these pi's are used to categorize the search results. Specifically, for each search result sj, we build a TF-IDF vector. The centroid-based method computes the cosine similarity between the vector representation of sj and each centroid prototype pi. We then assign sj to the aspect with which it has the highest cosine similarity score. All the aspects are finally ranked according to the number of search results they contain. Within each aspect, the search results are ranked according to their original search engine ranking.

5. DATA COLLECTION
We construct our data set from the MSN search log data set released by Microsoft Live Labs in 2006 [14]. In total, this log data spans 31 days, from 05/01/2006 to 05/31/2006. There are 8,144,000 queries, 3,441,000 distinct queries, and 4,649,000 distinct URLs in the raw data. To test our algorithm, we separate the whole data set into two parts according to time: the first 2/3 of the data is used to simulate the historical data that a search engine accumulates, and the last 1/3 is used to simulate future queries. In the history collection, we clean the data by keeping only frequent, well-formatted, English queries (queries which contain only the characters 'a' through 'z' and space, and appear more than 5 times). After cleaning, we have 169,057 unique queries in our history data collection in total. On average, each query has 3.5 distinct clicks. We build the "pseudo-documents" for all these queries as described in Section 3. The average length of these pseudo-documents is 68 words, and the total size of our history collection is 129 MB.

We construct our test data from the last 1/3 of the data. According to time, we split this data into two equal test sets for cross-validation to set parameters. In each test set, we use every session as a test case. Each session contains a single query and several clicks. (Note that we do not aggregate sessions for test cases. Different test cases may have the same query but different clicks.) Since it is infeasible to ask the original user who submitted a query to judge the results for the query, we follow the work [11] and opt to use the clicks associated with the query in a session to approximate relevant documents. Using clicks as judgments, we can then compare different algorithms for organizing search results to see how well they help users reach the clicked URLs.

Organizing search results into different aspects is expected to help informational queries, so it makes sense to focus on informational queries in our evaluation. For each test case, i.e., each session, we count the number of distinct clicks and filter out test cases with fewer than 4 clicks, under the assumption that a query with more clicks is more likely to be informational. Since we want to test whether our algorithm can learn from past queries, we also filter out test cases whose queries cannot retrieve at least 100 pseudo-documents from our history collection. Finally, we obtain 172 and 177 test cases in the first and second test sets respectively. On average, we have 6.23 and 5.89 clicks per test case in the two test sets respectively.

6. EXPERIMENTS
In this section, we describe our experiments on search result organization based on past search engine logs.

6.1 Experimental Design
We use two baseline methods against which to evaluate the proposed method for organizing search results. For each test case, the first method is the default ranked list from a search engine (baseline). The second method organizes the search results by clustering them (cluster-based). For fair comparison,
we use the same clustering algorithm as our log-based method (i.e., star clustering). That is, we treat each search result as a document, construct the similarity graph, and find the star-shaped clusters. We compare our method (log-based) with the two baseline methods in the following experiments. For both the cluster-based and log-based methods, the search results within each cluster are ranked according to their original ranking given by the search engine.

To compare the different result organization methods, we adopt a method similar to that of [9]. That is, we compare the quality (e.g., precision) of the best cluster, which is defined as the one with the largest number of relevant documents. Organizing search results into clusters is meant to help users navigate to relevant documents quickly, and this metric simulates the scenario in which users always choose the right cluster and look into it. Specifically, we download and organize the top 100 search results into aspects for each test case. We use Precision at 5 documents (P@5) in the best cluster as the primary measure to compare the different methods. P@5 is a very meaningful measure, as it tells us the perceived precision when the user opens a cluster and looks at the first 5 documents. We also use Mean Reciprocal Rank (MRR) as another metric. MRR is calculated as

  MRR = (1 / |T|) × Σ_{q ∈ T} 1 / rq

where T is the set of test queries and rq is the rank of the first relevant document for q.
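To make the evaluation protocol concrete, the following is a minimal sketch of how the best-cluster P@5 and MRR metrics just described could be computed, using a session's clicked URLs as the approximate relevant set. The function names and the toy data are illustrative assumptions, not code or data from the paper:

```python
def best_cluster(clusters, relevant):
    # "Best" cluster = the one containing the most relevant (clicked) documents.
    return max(clusters, key=lambda docs: sum(d in relevant for d in docs))

def precision_at_k(ranked, relevant, k=5):
    # P@k: fraction of the top-k documents that are relevant.
    return sum(d in relevant for d in ranked[:k]) / k

def mean_reciprocal_rank(runs):
    # runs: list of (ranked_docs, relevant_set) pairs, one per test query.
    # For each query, score 1/rank of the first relevant document (0 if none).
    total = 0.0
    for ranked, relevant in runs:
        total += next((1.0 / r for r, d in enumerate(ranked, 1) if d in relevant), 0.0)
    return total / len(runs)

# Toy example: two aspects for one test query; clicked URLs approximate relevance.
clusters = [["u1", "u2", "u7"], ["u3", "u4", "u5", "u6", "u8"]]
clicked = {"u3", "u4", "u5"}
best = best_cluster(clusters, clicked)
print(precision_at_k(best, clicked))            # 0.6
print(mean_reciprocal_rank([(best, clicked)]))  # 1.0
```

Under the paper's protocol, these per-query scores would then be averaged over the 172 and 177 test cases in the two test sets.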
To give a fair comparison across the different organization algorithms, we force both the cluster-based and log-based methods to output the same number of aspects and force each search result to be in one and only one aspect. The number of aspects is fixed at 10 in all the following experiments. The star clustering algorithm can output different numbers of clusters for different inputs. To constrain the number of clusters to 10, we order all the clusters by their sizes and select the top 10 as aspect candidates. We then re-assign each search result to whichever of these 10 selected aspects gives the highest similarity score with the corresponding aspect centroid. In our experiments, we observe that the sizes of the best clusters are all larger than 5, which ensures that P@5 is a meaningful metric.

6.2 Experimental Results
Our main hypothesis is that organizing search results based on users' interests learned from a search log data set is more beneficial than organizing results as a simple list or by clustering the search results. In the following, we test our hypothesis from two perspectives: organization and labeling.

Table 2: Comparison of different methods by MRR and P@5. We also show the percentage of relative improvement in the lower part.

Table 3: Pairwise comparison w.r.t. the number of test cases whose P@5's are improved versus decreased w.r.t. the baseline.

6.2.1 Overall performance
We compare three methods, basic search engine ranking (baseline), the traditional clustering-based method (cluster-based), and our log-based method (log-based), in Table 2 using MRR and P@5. We optimize the parameter v for each collection individually based on P@5 values. This shows the best performance that each method can achieve. In this table, we can see that on both test collections, our method is better than both the baseline and the cluster-based methods. For example, on the first test collection, the MRR of the baseline method is 0.734, the cluster-based method achieves 0.773, and our method achieves 0.783. We achieve higher accuracy than both the cluster-based method (1.27% improvement) and the baseline method (6.62% improvement). The P@5 values are 0.332 for the baseline and 0.316 for the cluster-based method, but 0.353 for our method. Our method improves over the baseline by 6.31%, while the cluster-based method even decreases the accuracy. This is because the cluster-based method organizes the search results based only on their contents, and can thus organize the results differently from users' preferences. This confirms our hypothesis about the bias of the cluster-based method.

Comparing our method with the cluster-based method, we achieve significant improvement on both test collections. The p-values of the significance tests based on P@5 on the two collections are 0.01 and 0.02 respectively. This shows that our log-based method is effective at learning users' preferences from past query history, and thus it can organize the search results in a way that is more useful to users.

We showed the optimal results above. To test the sensitivity of the parameter v of our log-based method, we use one of the test sets to tune the parameter to be optimal and then apply the tuned parameter to the other set. We compare this result (log tuned outside) with the optimal results of both the cluster-based (cluster optimized) and log-based (log optimized) methods in Figure 1. We can see that, as expected, the performance using the parameter tuned on a separate set is worse than the optimal performance. However, our method still performs much better than the optimal results of the cluster-based method on both test collections.

Figure 1: Results using parameters tuned from the other test collection. We compare it with the optimal performance of the cluster-based and our log-based methods.

Figure 2: The correlation between performance change and result diversity.

In Table 3, we show pairwise comparisons of the three methods in terms of the numbers of test cases for
which P@5 is increased versus decreased.
We can see that our method improves more test cases than either of the other two methods.
In the next section, we present a more detailed analysis of which types of test cases our method can improve.
6.2.2 Detailed Analysis
To better understand the cases in which our log-based method can improve accuracy, we examine two properties: result diversity and query difficulty.
All the analysis below is based on test set 1.
Diversity Analysis: Intuitively, organizing search results into different aspects is most beneficial for queries whose results are diverse, since for such queries the results tend to form two or more big clusters.
To test the hypothesis that the log-based method helps queries with diverse results more, we compute the size ratio of the biggest and second-biggest clusters in our log-based results and use this ratio as an indicator of diversity.
If the ratio is small, the first two clusters differ little in size, and thus the results are more diverse.
In this case, we would expect our method to help more.
The results are shown in Figure 2.
In this figure, we partition the ratios into 4 bins, corresponding to the ratio ranges [1, 2), [2, 3), [3, 4), and [4, +∞), respectively.
(Here, [i, j) means that i ≤ ratio < j.)

After each round, the clearing price p is publicly revealed.
Agents then revise their beliefs according to any information garnered from the new price.
The next round proceeds like the previous one.
The process continues until an equilibrium is reached, meaning that prices and bids do not change from one round to the next.
In this paper, we make a further simplifying restriction on the trading in each round: we assume that q_i = 1 for each agent i.
This modeling assumption serves two analytical purposes.
First, it ensures that there is forced trade in every round.
Classic results in economics show that perfectly rational and risk-neutral agents will never trade
with each other for purely speculative reasons (even if they have differing information) [20].
There are many factors that can induce rational agents to trade, such as differing degrees of risk aversion, the presence of other traders who are trading for liquidity reasons rather than speculative gain, or a market maker who is pumping money into the market through a subsidy.
We sidestep this issue by simply assuming that the informed agents will trade (for unspecified reasons).
Second, forcing q_i = 1 for all i means that the total volume of trade and the impact of any one trader on the clearing price are common knowledge; the clearing price p is a simple function of the agents' bids, p = (Σ_i b_i)/n.
We will discuss the implications of alternative market models in Section 5.
[Footnote 3: Common knowledge is information that all agents know, that all agents know that all agents know, and so on ad infinitum [5].]
[Footnote 4: The values of the input bits themselves may or may not be publicly revealed.]
[Footnote 5: Throughout this paper we ignore the time value of money.]
3.3 Agent strategies
In order to draw formal conclusions about the price evolution process, we need to make some assumptions about how agents behave.
Essentially, we assume that agents are risk-neutral, myopic,6 and bid truthfully: each agent in each round bids his or her current valuation of the security, which is that agent's estimation of the expected payoff of the security.
Expectations are computed according to each agent's probability distribution, which is updated via Bayes' rule when new information (revealed via the clearing prices) becomes available.
We also assume that it is common knowledge that all the agents behave in the specified manner.
Would rational agents actually behave according to this strategy?
It's hard to say.
Certainly, we do not claim that this is an equilibrium strategy in the game-theoretic sense.
Furthermore, it is clear that we are ignoring some legitimate tactics, e.g., bidding falsely in
one round in order to affect other agents' judgments in the following rounds (non-myopic reasoning).
However, we believe that the strategy outlined is a reasonable starting point for analysis.
Solving for a true game-theoretic equilibrium strategy in this setting seems extremely difficult.
Our assumptions seem reasonable when there are enough agents in the system that extremely complex meta-reasoning is not likely to improve upon simply bidding one's true expected value.
In this case, according to the Shapley-Shubik mechanism, if the clearing price is below an agent's expected value, that agent will end up buying (increasing expected profit); otherwise, if the clearing price is above the agent's expected value, the agent will end up selling (also increasing expected profit).
4. COMPUTATIONAL PROPERTIES
In this section, we study the computational power of information markets for a very simple class of aggregation functions: Boolean functions of n variables.
We characterize the set of Boolean functions that can be computed in our market model for all prior distributions and then prove upper and lower bounds on the worst-case convergence time for these markets.
The information structure we assume is as follows: there are n agents, and each agent i has a single bit of private information x_i.
We use x to denote the vector (x_1, ...
, x_n) of inputs.
All the agents also have a common prior probability distribution P : {0,1}^n → [0,1] over the values of x.
We define a Boolean aggregate function f(x) : {0,1}^n → {0,1} that we would like the market to compute.
Note that x, and hence f(x), is completely determined by the combination of all the agents' information, but it is not known to any one agent.
The agents trade in a Boolean security F, which pays off $1 if f(x) = 1 and $0 if f(x) = 0.
So an omniscient agent with access to all the agents' bits would know the true value of security F: either exactly $1 or exactly $0.
In reality, risk-neutral agents with limited information will value F according to their expectation of its payoff, E_i[f(x)], where E_i is the expectation operator applied according to agent i's probability distribution.
For any function f, trading in F may happen to converge to the true value of f(x) by coincidence if the prior probability distribution is sufficiently degenerate.
More interestingly, we would like to know for which functions f the price of the security F always converges to f(x) for all prior probability distributions P.7
In Section 4.2, we prove a necessary and sufficient condition that guarantees convergence.
In Section 4.3, we address the natural follow-up question by deriving upper and lower bounds on the worst-case number of rounds of trading required for the value of f(x) to be revealed.
[Footnote 6: Risk neutrality implies that each agent's utility for the security is linearly related to his or her subjective estimation of the expected payoff of the security. Myopic behavior means that agents treat each round as if it were the final round: they do not reason about how their bids may affect the bids of other agents in future rounds.]
4.1 Equilibrium price characterization
Our analysis builds on a characterization of the equilibrium price of F that follows from a powerful result on common knowledge of aggregates due to McKelvey and
Page [19], later extended by Nielsen et al. [21].
Information markets aim to aggregate the knowledge of all the agents.
Procedurally, this occurs because the agents learn from the markets: the price of the security conveys information to each agent about the knowledge of other agents.
We can model the flow of information through prices as follows.
Let Ω = {0,1}^n be the set of possible values of x; we say that Ω denotes the set of possible states of the world.
The prior P defines everyone's initial belief about the likelihood of each state.
As trading proceeds, some possible states can be logically ruled out, but the relative likelihoods among the remaining states are fully determined by the prior P.
So the common knowledge after any stage is completely described by the set of states that an external observer (with no information beyond the sequence of prices observed) considers possible, along with the prior.
Similarly, the knowledge of agent i at any point is also completely described by the set of states she considers possible.
We use the notation S^r to denote the common-knowledge possibility set after round r, and S^r_i to denote the set of states that agent i considers possible after round r.
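To make the possibility-set machinery concrete, the following minimal sketch (ours, not the paper's; the uniform prior, the OR aggregate, and all names are illustrative assumptions) builds Ω, the initial knowledge sets S^0_i, and the resulting first-round bids and clearing price:

```python
from itertools import product
from fractions import Fraction

n = 3
omega = list(product((0, 1), repeat=n))           # Omega = {0,1}^n
prior = {s: Fraction(1, 2 ** n) for s in omega}   # uniform common prior P
f = lambda s: int(any(s))                         # example aggregate: OR

x = (1, 0, 1)                                     # true (privately held) state

# S^0 = Omega; agent i's initial knowledge set S^0_i = {y in Omega : y_i = x_i}
S0 = [s for s in omega if prior[s] > 0]
S0_i = {i: [y for y in S0 if y[i] == x[i]] for i in range(n)}

def expectation(possible):
    """E[f | y in possible] under the common prior (exact rationals)."""
    num = sum(prior[y] for y in possible if f(y))
    den = sum(prior[y] for y in possible)
    return num / den

first_round_bids = [expectation(S0_i[i]) for i in range(n)]
clearing_price = sum(first_round_bids) / n        # p^1 = (sum_i b_i) / n
```

Here the agents holding a 1-bit bid 1 (every consistent state satisfies OR), the agent holding a 0-bit bids 3/4, and the clearing price is their mean, 11/12.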
Initially, the only common knowledge is that the input vector x is in Ω; in other words, the set of states considered possible by an external observer before trading has occurred is the set S^0 = Ω.
However, each agent i also knows the value of her bit x_i; thus, her knowledge set S^0_i is the set {y ∈ Ω | y_i = x_i}.
Agent i's first-round bid is her conditional expectation of the event f(x) = 1 given that x ∈ S^0_i.
All the agents' bids are processed, and the clearing price p^1 is announced.
An external observer could predict agent i's bid if he knew the value of x_i.
Thus, if he knew the value of x, he could predict the value of p^1.
In other words, the external observer knows the function price^1(x) that relates the first-round price to the true state x.
Of course, he does not know the value of x; however, he can rule out any vector x that would have resulted in a different clearing price from the observed price p^1.
[Footnote 7: We assume that the common prior is consistent with x in the sense that it assigns a non-zero probability to the actual value of x.]
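The full round-by-round process just described (bid conditional expectations, announce the mean as the clearing price, intersect the common-knowledge set with the states consistent with that price, repeat until a fixed point) can be sketched as follows. This is our illustrative code, not the paper's, and it assumes a uniform common prior; with that prior, OR (a weighted threshold function) converges to the true value f(x), while XOR stalls at 0.5, matching Example 1 later in this section.

```python
from itertools import product
from fractions import Fraction

def run_market(f, x):
    """Iterated simplified Shapley-Shubik market for a Boolean security on f,
    with a uniform common prior over {0,1}^n. Returns the list of clearing
    prices, one per round, until no agent learns anything new."""
    n = len(x)
    common = list(product((0, 1), repeat=n))      # S^0 = Omega

    def bid(i, xi, possible):
        # Agent i's conditional expectation of f given her knowledge set:
        # uniform prior, so it is just a count ratio.
        s_i = [y for y in possible if y[i] == xi]
        return Fraction(sum(f(y) for y in s_i), len(s_i))

    def price_of(y, possible):
        # price^r(y): the clearing price an external observer would predict
        # if the true state were y (the mean of the n bids).
        return sum(bid(i, y[i], possible) for i in range(n)) / n

    prices = []
    while True:
        p = price_of(x, common)                   # announced clearing price
        prices.append(p)
        # Rule out every state inconsistent with the observed price.
        refined = [y for y in common if price_of(y, common) == p]
        if refined == common:                     # equilibrium reached
            return prices
        common = refined

or_prices = run_market(lambda y: int(any(y)), (1, 0, 0))   # threshold function
xor_prices = run_market(lambda y: y[0] ^ y[1], (1, 0))     # not a threshold fn
```

For OR on three bits the price moves from 5/6 to 1 in two rounds (within the n-round bound proved below); for XOR the very first price 1/2 is already an uninformative equilibrium.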
Thus, the common knowledge after round 1 is the set S^1 = {y ∈ S^0 | price^1(y) = p^1}.
Agent i knows the common knowledge and, in addition, knows the value of bit x_i.
Hence, after every round r, the knowledge of agent i is given by S^r_i = {y ∈ S^r | y_i = x_i}.
Note that, because knowledge can only improve over time, we must always have S^r_i ⊆ S^{r−1}_i and S^r ⊆ S^{r−1}.
Thus, only a finite number of changes in each agent's knowledge are possible, and so eventually we must converge to an equilibrium after which no player learns any further information.
We use S^∞ to denote the common knowledge at this point, and S^∞_i to denote agent i's knowledge at this point.
Let p^∞ denote the clearing price at equilibrium.
Informally, McKelvey and Page [19] show that, if n people with common priors but different information about the likelihood of some event A agree about a suitable aggregate of their individual conditional probabilities, then their individual conditional probabilities of event A's occurring must be identical.
(The precise definition of suitable is described below.)
There is a strong connection to rational expectations equilibria in markets, which was noted in the original McKelvey-Page paper: the market price of a security is common knowledge at the point of equilibrium.
Thus, if the price is a suitable aggregate of the conditional expectations of all the agents, then in equilibrium they must have identical conditional expectations of the event that the security will pay off.
(Note that their information may still be different.)
Definition 1. A function g : ℝ^n → ℝ is called stochastically monotone if it can be written in the form g(x) = Σ_i g_i(x_i), where each function g_i : ℝ → ℝ is strictly increasing.
Bergin and Brandenburger [2] proved that this simple definition of stochastically monotone functions is equivalent to the original definition in McKelvey-Page [19].
Definition 2. A function g : ℝ^n → ℝ is
called stochastically regular if it can be written in the form g = h ∘ g′, where g′ is stochastically monotone and h is invertible on the range of g′.
We can now state the McKelvey-Page result, as generalized by Nielsen et al. [21].
In our context, the following simple theorem statement suffices; more general versions of this theorem can be found in [19, 21].
Theorem 1. (Nielsen et al. [21]) Suppose that, at equilibrium, the n agents have a common prior, but possibly different information, about the value of a random variable F, as described above.
For all i, let p^∞_i = E(F | x ∈ S^∞_i).
If g is a stochastically regular function and g(p^∞_1, p^∞_2, ... , p^∞_n) is common knowledge, then it must be the case that
p^∞_1 = p^∞_2 = · · · = p^∞_n = E(F | x ∈ S^∞) = p^∞.
In one round of our simplified Shapley-Shubik trading model, the announced price is the mean of the conditional expectations of the n agents.
The mean is a stochastically regular function; hence, Theorem 1 shows that, at equilibrium, all agents have identical conditional expectations of the payoff of the security.
It follows that the equilibrium price p^∞ must be exactly the conditional expectation of all agents at equilibrium.
Theorem 1 does not in itself say how the equilibrium is reached.
McKelvey and Page, extending an argument due to Geanakoplos and Polemarchakis [10], show that repeated announcement of the aggregate will eventually result in common knowledge of the aggregate.
In our context, this is achieved by announcing the current price at the end of each round; this will ultimately converge to a state in which all agents bid the same price p^∞.
However, reaching an equilibrium price is not sufficient for the purposes of information aggregation.
We also want the price to reveal the actual value of f(x).
It is possible that the equilibrium price p^∞ of the security F will not be either 0 or 1, and
so we cannot infer the value of f(x) from it.
Example 1: Consider two agents 1 and 2 with private input bits x_1 and x_2, respectively.
Suppose the prior probability distribution is uniform, i.e., x = (x_1, x_2) takes the values (0, 0), (0, 1), (1, 0), and (1, 1) each with probability 1/4.
Now, suppose the aggregate function we want to compute is the XOR function, f(x) = x_1 ⊕ x_2.
To this end, we design a market to trade in a Boolean security F, which will eventually pay off $1 iff x_1 ⊕ x_2 = 1.
If agent 1 observes x_1 = 1, she estimates the expected value of F to be the probability that x_2 = 0 (given x_1 = 1), which is 1/2.
If she observes x_1 = 0, her expectation of the value of F is the conditional probability that x_2 = 1, which is also 1/2.
Thus, in either case, agent 1 will bid 0.5 for F in the first round.
Similarly, agent 2 will also always bid 0.5 in the first round.
Hence, the first round of trading ends with a clearing price of 0.5.
From this, agent 2 can infer that agent 1 bid 0.5, but this gives her no information about the value of x_1: it is still equally likely to be 0 or 1.
Agent 1 also gains no information from the first round of trading, and hence neither agent changes her bid in the following rounds.
Thus, the market reaches equilibrium at this point.
As predicted by Theorem 1, both agents have the same conditional expectation (0.5) at equilibrium.
However, the equilibrium price of the security F does not reveal the value of f(x_1, x_2), even though the combination of the agents' information is enough to determine it precisely.
4.2 Characterizing computable aggregates
We now give a necessary and sufficient characterization of the class of functions f such that, for any prior distribution on x, the equilibrium price of F will reveal the true value of f.
We show that this is exactly the class of weighted threshold functions:
Definition 3. A function f : {0,1}^n → {0,1} is a weighted threshold function iff there are real constants
w_1, w_2, ... , w_n such that f(x) = 1 iff Σ_{i=1}^n w_i x_i ≥ 1.
Theorem 2. If f is a weighted threshold function, then, for any prior probability distribution P, the equilibrium price of F is equal to f(x).
Proof: Let S^∞_i denote the possibility set of agent i at equilibrium.
As before, we use p^∞ to denote the final trading price at this point.
Note that, by Theorem 1, p^∞ is exactly agent i's conditional expectation of the value of f(x), given her final possibility set S^∞_i.
First, observe that if p^∞ is 0 or 1, then we must have f(x) = p^∞, regardless of the form of f.
For instance, if p^∞ = 1, this means that E(f(y) | y ∈ S^∞) = 1.
As f(·) can only take the values 0 or 1, it follows that P(f(y) = 1 | y ∈ S^∞) = 1.
The actual value x is always in the final possibility set S^∞, and, furthermore, it must have non-zero prior probability, because it actually occurred.
Hence, it follows that f(x) = 1 in this case.
An identical argument shows that if p^∞ = 0, f(x) = 0.
Hence, it is enough to show that, if f is a weighted threshold function, then p^∞ is either 0 or 1.
We prove this by contradiction.
Let f(·) be a weighted threshold function corresponding to weights {w_i}, and assume that 0 < p^∞ < 1.
By Theorem 1, we must have:
P(f(y) = 1 | y ∈ S^∞) = p^∞   (1)
∀i  P(f(y) = 1 | y ∈ S^∞_i) = p^∞   (2)
Recall that S^∞_i = {y ∈ S^∞ | y_i = x_i}.
Thus, Equation (2) can be written as
∀i  P(f(y) = 1 | y ∈ S^∞, y_i = x_i) = p^∞   (3)
Now define
J^+_i = P(y_i = 1 | y ∈ S^∞, f(y) = 1)
J^-_i = P(y_i = 1 | y ∈ S^∞, f(y) = 0)
J^+ = Σ_{i=1}^n w_i J^+_i
J^- = Σ_{i=1}^n w_i J^-_i
Because by assumption p^∞ ≠ 0, 1, both J^+_i and J^-_i are well-defined (for all i): neither is conditioned on a zero-probability event.
Claim: Eqs. (1) and (3) imply that J^+_i = J^-_i, for all i.
Proof of claim: We consider the two cases x_i = 1 and x_i = 0 separately.
Case (i): x_i = 1.
We can assume that J^-_i and J^+_i are not both 0 (or else the claim is trivially true).
In this case, we have
P(f(y) = 1 | y ∈ S^∞) · J^+_i / [P(f(y) = 1 | y ∈ S^∞) · J^+_i + P(f(y) = 0 | y ∈ S^∞) · J^-_i] = P(f(y) = 1 | y_i = 1, y ∈ S^∞)   (Bayes' law)
p^∞ J^+_i / [p^∞ J^+_i + (1 − p^∞) J^-_i] = p^∞   (by Eqs. (1) and (3))
J^+_i = p^∞ J^+_i + (1 − p^∞) J^-_i  ⟹  J^+_i = J^-_i   (as p^∞ ≠ 1)
Case (ii): x_i = 0.
When x_i = 0, observe that the argument of Case (i) can be used to prove that (1 − J^+_i) = (1 − J^-_i).
It immediately follows that J^+_i = J^-_i as well. ∎
Hence, we must also have J^+ = J^-.
But using linearity of expectation, we can also write J^+ as
J^+ = E[Σ_{i=1}^n w_i y_i | y ∈ S^∞, f(y) = 1],
and, because f(y) = 1 only when Σ_i w_i y_i ≥ 1, this gives us J^+ ≥ 1.
Similarly,
J^- = E[Σ_{i=1}^n w_i y_i | y ∈ S^∞, f(y) = 0],
and thus J^- < 1.
This implies J^- ≠ J^+, which leads to a contradiction. ∎
Perhaps surprisingly, the converse of Theorem 2 also holds:
Theorem 3. Suppose f : {0,1}^n → {0,1} cannot be expressed as a weighted threshold function.
Then there exists a prior distribution P for which the price of the security F does not converge to the value of f(x).
Proof: We start from a geometric characterization of weighted threshold functions.
Consider the Boolean hypercube {0,1}^n as a set of points in ℝ^n.
It is well known that f is expressible as a weighted threshold function iff there is a hyperplane in ℝ^n that separates all the points at which f has value 0 from all the points at which f has value 1.
Now, consider the sets H^+ = Conv(f^{-1}(1)) and H^- = Conv(f^{-1}(0)), where Conv(S) denotes the convex hull of S in ℝ^n.
H^+ and H^- are convex sets in ℝ^n, and so, if they do not intersect, we can find a
separating hyperplane between them.
This means that, if f is not expressible as a weighted threshold function, H^+ and H^- must intersect.
In this case, we show how to construct a prior P for which f(x) is not computed by the market.
Let x* ∈ ℝ^n be a point in H^+ ∩ H^-.
Because x* is in H^+, there exist points z^1, z^2, ... , z^m and constants λ_1, λ_2, ... , λ_m such that the following constraints are satisfied:
∀k  z^k ∈ {0,1}^n, and f(z^k) = 1
∀k  0 < λ_k ≤ 1
Σ_{k=1}^m λ_k = 1
Σ_{k=1}^m λ_k z^k = x*
Similarly, because x* ∈ H^-, there are points y^1, y^2, ... , y^l and constants μ_1, μ_2, ... , μ_l such that
∀j  y^j ∈ {0,1}^n, and f(y^j) = 0
∀j  0 < μ_j ≤ 1
Σ_{j=1}^l μ_j = 1
Σ_{j=1}^l μ_j y^j = x*
We now define our prior distribution P as follows:
P(z^k) = λ_k/2  for k = 1, 2, ... , m
P(y^j) = μ_j/2  for j = 1, 2, ... , l,
and all other points are assigned probability 0.
It is easy to see that this is a valid probability distribution.
Under this distribution P, first observe that P(f(x) = 1) = 1/2.
Further, for any i such that 0 < x*_i < 1, we have
P(f(x) = 1 | x_i = 1) = P(f(x) = 1 ∧ x_i = 1) / P(x_i = 1) = (x*_i / 2) / x*_i = 1/2
and
P(f(x) = 1 | x_i = 0) = P(f(x) = 1 ∧ x_i = 0) / P(x_i = 0) = ((1 − x*_i) / 2) / (1 − x*_i) = 1/2.
For indices i such that x*_i is exactly 0 or 1, agent i's private information reveals no additional information under prior P, and so here too we have P(f(x) = 1 | x_i = 0) = P(f(x) = 1 | x_i = 1) = 1/2.
Hence, regardless of her private bit x_i, each agent i will bid 0.5 for security F in the first round.
The clearing price of 0.5 also reveals no additional information, and so this is an equilibrium with price p^∞ = 0.5 that does not reveal the value of f(x). ∎
The XOR function is one example of a function that cannot be expressed as a weighted threshold function; Example 1 illustrates
Theorem 3 for this function.
4.3 Convergence time bounds
We have shown that the class of Boolean functions computable in our model is the class of weighted threshold functions.
The next natural question to ask is: how many rounds of trading are necessary before the equilibrium is reached?
We analyze this problem using the same simplified Shapley-Shubik model of market clearing in each round.
We first prove that, in the worst case, at most n rounds are required.
The idea of the proof is to consider the sequence of common-knowledge sets Ω = S^0, S^1, ... , and show that, until the market reaches equilibrium, each set has a strictly lower dimension than the previous set.
Definition 4. For a set S ⊆ {0,1}^n, the dimension of S is the dimension of the smallest linear subspace of ℝ^n that contains all the points in S; we use the notation dim(S) to denote it.
Lemma 1. If S^r ≠ S^{r−1}, then dim(S^r) < dim(S^{r−1}).
Proof: Let k = dim(S^{r−1}).
Consider the bids in round r.
In our model, agent i will bid her current expectation of the value of F, b^r_i = E(f(y) | y ∈ S^{r−1}, y_i = x_i).
Thus, depending on the value of x_i, b^r_i will take on one of two values, h^(0)_i or h^(1)_i.
Note that h^(0)_i and h^(1)_i depend only on the set S^{r−1}, which is common knowledge before round r.
Setting d_i = h^(1)_i − h^(0)_i, we can write b^r_i = h^(0)_i + d_i x_i.
It follows that the clearing price in round r is given by
p^r = (1/n) Σ_{i=1}^n (h^(0)_i + d_i x_i)   (4)
All the agents already know all the h^(0)_i and d_i values, and they observe the price p^r at the end of the r-th round.
Thus, they effectively have a linear equation in x_1, x_2, ...
, x_n that they use to improve their knowledge by ruling out any possibility that would not have resulted in price p^r.
In other words, after r rounds, the common knowledge set S^r is the intersection of S^{r−1} with the hyperplane defined by Equation (4).
It follows that S^r is contained in the intersection of this hyperplane with the k-dimensional linear space containing S^{r−1}.
If S^r is not equal to S^{r−1}, this intersection defines a linear subspace of dimension (k − 1) that contains S^r, and hence S^r has dimension at most (k − 1). ∎
Theorem 4. Let f be a weighted threshold function, and let P be an arbitrary prior probability distribution.
Then, after at most n rounds of trading, the price reaches its equilibrium value p^∞ = f(x).
Proof: Consider the sequence of common-knowledge sets S^0, S^1, ... , and let r be the minimum index such that S^r = S^{r−1}.
Then, the r-th round of trading does not improve any agent's knowledge, and thus we must have S^∞ = S^{r−1} and p^∞ = p^{r−1}.
Observing that dim(S^0) = n, and applying Lemma 1 to the first r − 1 rounds, we must have (r − 1) ≤ n.
Thus, the price reaches its equilibrium value within n rounds. ∎
Theorem 4 provides an upper bound of O(n) on the number of rounds required for convergence.
We now show that this bound is tight to within a factor of 2 by constructing a threshold function with 2n inputs and a prior distribution for which it takes n rounds to determine the value of f(x) in the worst case.
The functions we use are the carry-bit functions.
The function C_n takes 2n inputs; for convenience, we write the inputs as x_1, x_2, ... , x_n, y_1, y_2, ...
, y_n, or as a pair (x, y).
The function value is the value of the high-order carry bit when the binary numbers x_n x_{n−1} · · · x_1 and y_n y_{n−1} · · · y_1 are added together.
In weighted threshold form, this can be written as
C_n(x, y) = 1 iff Σ_{i=1}^n (x_i + y_i) / 2^{n+1−i} ≥ 1.
For this proof, let us call the agents A_1, A_2, ... , A_n, B_1, B_2, ... , B_n, where A_i holds input bit x_i, and B_i holds input bit y_i.
We first illustrate our technique by proving that computing C_2 requires 2 rounds in the worst case.
To do this, we construct a common prior P_2 as follows:
• The pair (x_1, y_1) takes on the values (0, 0), (0, 1), (1, 0), (1, 1) uniformly (i.e., with probability 1/4 each).
• We extend this to a distribution on (x_1, x_2, y_1, y_2) by specifying the conditional distribution of (x_2, y_2) given (x_1, y_1): if (x_1, y_1) = (1, 1), then (x_2, y_2) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/2, 1/6, 1/6, 1/6, respectively.
Otherwise, (x_2, y_2) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/6, 1/6, 1/6, 1/2, respectively.
Now, suppose x_1 turns out to be 1, and consider agent A_1's bid in the first round.
It is given by
b^1_{A_1} = P(C_2(x_1, x_2, y_1, y_2) = 1 | x_1 = 1)
= P(y_1 = 1 | x_1 = 1) · P((x_2, y_2) ≠ (0, 0) | x_1 = 1, y_1 = 1) + P(y_1 = 0 | x_1 = 1) · P((x_2, y_2) = (1, 1) | x_1 = 1, y_1 = 0)
= (1/2) · (1/2) + (1/2) · (1/2) = 1/2.
On the other hand, if x_1 turns out to be 0, agent A_1's bid would be given by
b^1_{A_1} = P(C_2(x_1, x_2, y_1, y_2) = 1 | x_1 = 0) = P((x_2, y_2) = (1, 1) | x_1 = 0) = 1/2.
Thus, irrespective of her bit, A_1 will bid 0.5 in the first round.
Note that the function and distribution are symmetric between x and y, and so the same argument shows that B_1 will also bid 0.5 in the first round.
Thus, the price p^1 announced at the end of the first round reveals no information about x_1 or y_1.
The reason this occurs is that, under this distribution, the second carry bit C_2 is statistically
independent of the first carry bit (x_1 ∧ y_1); we will use this trick again in the general construction.
Now, suppose that (x_2, y_2) is either (0, 1) or (1, 0).
Then, even if x_2 and y_2 are completely revealed by the first-round price, the value of C_2(x_1, x_2, y_1, y_2) is not revealed: it will be 1 if x_1 = y_1 = 1 and 0 otherwise.
Thus, we have shown that at least 2 rounds of trading are required to reveal the function value in this case.
We now extend this construction to show by induction that the function C_n takes n rounds to reach an equilibrium in the worst case.
Theorem 5. There is a function C_n with 2n inputs and a prior distribution P_n such that, in the worst case, the market takes n rounds to reveal the value of C_n(·).
Proof: We prove the theorem by induction on n.
The base case for n = 2 has already been shown to be true.
Starting from the distribution P_2 described above, we construct the distributions P_3, P_4, ... , P_n by inductively applying the following rule:
• Let x^{−n} denote the vector (x_1, x_2, ...
, x_{n−1}), and define y^{−n} similarly.
We extend the distribution P_{n−1} on (x^{−n}, y^{−n}) to a distribution P_n on (x, y) by specifying the conditional distribution of (x_n, y_n) given (x^{−n}, y^{−n}): if C_{n−1}(x^{−n}, y^{−n}) = 1, then (x_n, y_n) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/2, 1/6, 1/6, 1/6, respectively.
Otherwise, (x_n, y_n) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/6, 1/6, 1/6, 1/2, respectively.
Claim: Under distribution P_n, for all i < n, P(C_n(x, y) = 1 | x_i = 1) = P(C_n(x, y) = 1 | x_i = 0).
Proof of claim: A calculation similar to that used for C_2 above shows that the value of C_n(x, y) under this distribution is statistically independent of C_{n−1}(x^{−n}, y^{−n}).
For i < n, x_i can affect the value of C_n only through C_{n−1}.
Also, by construction of P_n, given the value of C_{n−1}, the distribution of C_n is independent of x_i.
It follows that C_n(x, y) is statistically independent of x_i as well.
Of course, a similar result holds for y_i by symmetry. ∎
Thus, in the first round, for all i = 1, 2, ...
, n − 1, the bids of agents A_i and B_i do not reveal anything about their private information.
Thus, the first-round price does not reveal any information about the value of (x^{−n}, y^{−n}).
On the other hand, agents A_n and B_n do have different expectations of C_n(x, y) depending on whether their input bit is a 0 or a 1; thus, the first-round price does reveal whether neither, one, or both of x_n and y_n are 1.
Now, consider a situation in which (x_n, y_n) takes on the value (1, 0) or (0, 1).
We show that, in this case, after one round we are left with the residual problem of computing the value of C_{n−1}(x^{−n}, y^{−n}) under the prior P_{n−1}.
Clearly, when x_n + y_n = 1, C_n(x, y) = C_{n−1}(x^{−n}, y^{−n}).
Further, according to the construction of P_n, the event (x_n + y_n = 1) has the same probability (1/3) for all values of (x^{−n}, y^{−n}).
Thus, conditioning on this fact does not alter the probability distribution over (x^{−n}, y^{−n}); it must still be P_{n−1}.
Finally, the inductive assumption tells us that solving this residual problem will take at least n − 1 more rounds in the worst case, and hence that finding the value of C_n(x, y) takes at least n rounds in the worst case. ∎
5. DISCUSSION
Our results have been derived in a simplified model of an information market.
In this section, we discuss the applicability of these results to more general trading models.
Assuming that agents bid truthfully, Theorem 2 holds in any model in which the price is a known stochastically monotone aggregate of agents' bids.
While it seems reasonable that the market price satisfies monotonicity properties, the exact form of the aggregate function may not be known if the volume of each user's trades is not observable; this depends on the details of the market process.
Theorems 3 and 5 hold more generally; they only require that an agent's strategy depend only on her conditional expectation of the security's
value.
Perhaps the most fragile result is Theorem 4, which relies on the linear form of the Shapley-Shubik clearing price (in addition to the conditions for Theorem 2); however, it seems plausible that a similar dimension-based bound will hold for other families of nonlinear clearing prices.
Up to this point, we have described the model with the same number of agents as bits of information.
However, all the results hold even if there is competition in the form of a known number of agents who know each bit of information.
Indeed, modeling such competition may help alleviate the strategic problems in our current model.
Another interesting approach to addressing the strategic issue is to consider alternative markets that are at least myopically incentive compatible.
One example is a market mechanism called a market scoring rule, suggested by Hanson [12].
These markets have the property that a risk-neutral agent's best myopic strategy is to truthfully bid her current expected value of the security.
Additionally, the number of securities involved in each trade is fixed and publicly known.
If the market structure is such that, for example, the current scoring rule is posted publicly after each agent's trade, then in equilibrium there is common knowledge of all agents' expectations, and hence Theorem 2 holds.
Theorem 3 also applies in this case, and hence we have the same characterization of the set of computable Boolean functions.
This suggests that the problem of eliciting truthful responses may be orthogonal to the problem of computing the desired aggregate, reminiscent of the revelation principle [18].
In this paper, we have restricted our attention to the simplest possible aggregation problem: computing Boolean functions of Boolean inputs.
The proofs of Theorems 3 and 5 also hold if we consider Boolean functions of real inputs, where each agent's private information is a real number.
Further, Theorem 2 also holds provided the market reaches
equilibrium. With real inputs and arbitrary prior distributions, however, it is not clear that the market will reach an equilibrium in a finite number of steps.

6. CONCLUSION

6.1 Summary

We have framed the process of information aggregation in markets as a computation on distributed information. We have developed a simplified model of an information market that we believe captures many of the important aspects of real agent interaction in an information market. Within this model, we prove several results characterizing precisely what the market can compute and how quickly. Specifically, we show that the market is guaranteed to converge to the true rational expectations equilibrium if and only if the security payoff function is a weighted threshold function. We prove that the process whereby agents reveal their information over time and learn from the resulting announced prices takes at most n rounds to converge to the correct full-information price in the worst case. We show that this bound is tight within a factor of two.

6.2 Future work

We view this paper as a first step towards understanding the computational power of information markets. Some interesting and important next steps include gaining a better understanding of the following:

• The effect of price accuracy and precision: We have assumed that the clearing price is known with unlimited precision; in practice, this will not be true. Further, we have neglected influences on the market price other than from rational traders; the market price may also be influenced by other factors, such as misinformed or irrational traders. It is interesting to ask what aggregates can be computed even in the presence of noisy prices.

• Incremental updates: If the agents have computed the value of the function and a small number of input bits are switched, can the new value of the function be computed incrementally and quickly?

• Distributed computation: In our model, distributed information is
aggregated through a centralized market computation. In a sense, some of the computation itself is distributed among the participating agents, but can the market computation also be distributed? For example, can we find a good distributed-computational model of a decentralized market?

• Agents' computation: We have not accounted for the complexity of the computations that agents must do to accurately update their beliefs after each round.

• Strategic market models: For reasons of simplicity and tractability, we have directly assumed that agents bid truthfully. A more satisfying approach would be to assume only rationality and solve for the resulting game-theoretic solution strategy, either in our current computational model or another model of an information market.

• The common-prior assumption: Can we say anything about the market behavior when agents' priors are only approximately the same or when they differ greatly?

• Average-case analysis: Our negative results (Theorems 3 and 5) examine worst-case scenarios, and thus involve very specific prior probability distributions. It is interesting to ask whether we would get very different results for generic prior distributions.

• Information market design: Non-threshold functions can be implemented by layering two or more threshold functions together. What is the minimum number of threshold securities required to implement a given function? This is exactly the problem of minimizing the size of a neural network, a well-studied problem known to be NP-hard [15]. What configuration of securities can best approximate a given function? Are there ways to define and configure securities to speed up convergence to equilibrium? What is the relationship between machine learning (e.g., neural-network learning) and information-market design?

Acknowledgments

We thank Joe Kilian for many helpful discussions. We thank Robin Hanson and the anonymous reviewers for useful insights and
pointers.

7. REFERENCES

[1] K. J. Arrow. The role of securities in the optimal allocation of risk-bearing. Review of Economic Studies, 31(2):91-96, 1964.
[2] J. Bergin and A. Brandenburger. A simple characterization of stochastically monotone functions. Econometrica, 58(5):1241-1243, Sept. 1990.
[3] S. Debnath, D. M. Pennock, C. L. Giles, and S. Lawrence. Information incorporation in online in-game sports betting markets. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC'03), June 2003.
[4] P. Dubey, J. Geanakoplos, and M. Shubik. The revelation of information in strategic market games: A critique of rational expectations equilibrium. Journal of Mathematical Economics, 16:105-137, 1987.
[5] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning About Knowledge. MIT Press, Cambridge, MA, 1996.
[6] R. Forsythe and R. Lundholm. Information aggregation in an experimental market. Econometrica, 58(2):309-347, 1990.
[7] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright. Anatomy of an experimental political stock market. American Economic Review, 82(5):1142-1161, 1992.
[8] R. Forsythe, T. A. Rietz, and T. W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.
[9] J. M. Gandar, W. H. Dare, C. R. Brown, and R. A. Zuber. Informed traders and price variations in the betting market for professional basketball games. Journal of Finance, LIII(1):385-401, 1998.
[10] J. Geanakoplos and H. Polemarchakis. We can't disagree forever. Journal of Economic Theory, 28(1):192-200, 1982.
[11] S. J. Grossman. An introduction to the theory of rational expectations under asymmetric information. Review of Economic Studies, 48(4):541-559, 1981.
[12] R. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1), 2002.
[13] M. Jackson and J. Peck. Asymmetric information in a strategic market game: Reexamining the implications of rational expectations. Economic Theory, 13:603-628, 1999.
[14] J. C. Jackwerth and M. Rubinstein. Recovering probability distributions from options prices. Journal of Finance, 51(5):1611-1631, Dec. 1996.
[15] J.-H. Lin and J. S. Vitter. Complexity results on learning by neural nets. Machine Learning, 6:211-230, 1991.
[16] R. E. Lucas. Expectations and the neutrality of money. Journal of Economic Theory, 4(2):103-124, 1972.
[17] M. Magill and M. Quinzii. Theory of Incomplete Markets, Vol. 1. MIT Press, 1996.
[18] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[19] R. D. McKelvey and T. Page. Common knowledge, consensus, and aggregate information. Econometrica, 54(1):109-127, 1986.
[20] P. Milgrom and N. Stokey. Information, trade, and common knowledge. Journal of Economic Theory, 26:17-27, 1982.
[21] L. T. Nielsen, A. Brandenburger, J. Geanakoplos, R. McKelvey, and T. Page. Common knowledge of an aggregate of expectations. Econometrica, 58(5):1235-1238, 1990.
[22] D. M. Pennock, S. Debnath, E. J. Glover, and C. L. Giles. Modeling information incorporation in markets, with application to detecting and explaining events. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, 2002.
[23] D. M. Pennock, S. Lawrence, C. L. Giles, and F. Å. Nielsen. The real power of artificial markets. Science, 291:987-988, February 2001.
[24] D. M. Pennock, S. Lawrence, F. Å. Nielsen, and C. L. Giles. Extracting collective probabilistic forecasts from web games. In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 174-183, 2001.
[25] C. R. Plott and S. Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56(5):1085-1118, 1988.
[26] C. R. Plott, J. Wit, and W. C. Yang. Parimutuel betting markets as information aggregation devices: Experimental results. Technical Report Social Science Working Paper 986, California Institute of Technology, Apr. 1997.
[27] C. Schmidt and A. Werwatz. How accurate do markets predict the outcome of an event? The Euro 2000 soccer championships experiment. Technical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.
[28] L. Shapley and M. Shubik. Trade using one commodity as a means of payment. Journal of Political Economy, 85:937-968, 1977.
[29] Y. Shoham and M. Tennenholtz. Rational computation and the communication complexity of auctions. Games and Economic Behavior, 35(1-2):197-211, 2001.
[30] R. H. Thaler and W. T. Ziemba. Anomalies: Parimutuel betting markets: Racetracks and lotteries. Journal of Economic Perspectives, 2(2):161-174, 1988.
[31] H. R.
Varian. The arbitrage principle in financial economics. Journal of Economic Perspectives, 1(2):55-72, 1987.

Computation in a Distributed Information Market*

ABSTRACT

According to economic theory--supported by empirical and laboratory evidence--the equilibrium price of a financial security reflects all of the information regarding the security's value. We investigate the computational process on the path toward equilibrium, where information distributed among traders is revealed step-by-step over time and incorporated into the market price. We develop a simplified model of an information market, along with trading strategies, in order to formalize the computational properties of the process. We show that securities whose payoffs cannot be expressed as weighted threshold functions of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic
theory. On the other hand, securities whose payoffs are threshold functions are guaranteed to converge, for all prior probability distributions. Moreover, these threshold securities converge in at most n rounds, where n is the number of bits of distributed information. We also prove a lower bound, showing a type of threshold security that requires at least n/2 rounds to converge in the worst case.

* This work was supported by the DoD University Research Initiative (URI) administered by the Office of Naval Research under Grant N00014-01-1-0795.
† Supported in part by ONR grant N00014-01-0795 and NSF grants CCR-0105337, CCR-TC-0208972, ANI-0207399, and ITR-0219018.
‡ This work was conducted while at NEC Laboratories America, Princeton, NJ.

1. INTRODUCTION

The strong form of the efficient markets hypothesis states that market prices nearly instantly incorporate all information available to all traders. As a result, market prices encode the best forecasts of future outcomes given all information, even if that information is distributed across many sources. Supporting evidence can be found in empirical studies of options markets [14], political stock markets [7, 8, 22], sports betting markets [3, 9, 27], horse-racing markets [30], market games [23, 24], and laboratory investigations of experimental markets [6, 25, 26].

The process of information incorporation is, at its essence, a distributed computation. Each trader begins with his or her own information. As trades are made, summary information is revealed through market prices. Traders learn or infer what information others are likely to have by observing prices, then update their own beliefs based on their observations. Over time, if the process works as advertised, all information is revealed, and all traders converge to the same information state. At this point, the market is in what is called a rational expectations equilibrium [11, 16, 19]. All information available to all traders is
now reflected in the going prices, and no further trades are desirable until some new information becomes available.

While most markets are not designed with information aggregation as a primary motivation--for example, derivatives markets are intended mainly for risk management and sports betting markets for entertainment--recently, some markets have been created solely for the purpose of aggregating information on a topic of interest. The Iowa Electronic Market1 is a prime example, operated by the University of Iowa Tippie College of Business for the purpose of investigating how information about political elections distributed among traders gets reflected in securities prices whose payoffs are tied to actual election outcomes [7, 8].

In this paper, we investigate the nature of the computational process whereby distributed information is revealed and combined over time into the prices in information markets. To do so, in Section 3, we propose a model of an information market that is tractable for theoretical analysis and, we believe, captures much of the important essence of real information markets. In Section 4, we present our main theoretical results concerning this model. We prove that only Boolean securities whose payoffs can be expressed as threshold functions of the distributed input bits of information are guaranteed to converge as predicted by rational expectations theory. Boolean securities with more complex payoffs may not converge under some prior distributions. We also provide upper and lower bounds on the convergence time for these threshold securities. We show that, for all prior distributions, the price of a threshold security converges to its rational expectations equilibrium price in at most n rounds, where n is the number of bits of distributed information. We show that this worst-case bound is tight within a factor of two by illustrating a situation in which a threshold security requires n/2 rounds to converge.

2. RELATIONSHIP TO RELATED WORK

As mentioned, there is a great deal of documented evidence supporting the notion that markets are able to aggregate information in a number of scenarios using a variety of market mechanisms. The theoretically ideal mechanism requires what is called a complete market. A complete market contains enough linearly independent securities to span the entire state space of interest [1, 31]. That is, the dimensionality of the available securities equals the dimensionality of the event space over which information is to be aggregated.2 In this ideal case, all private information becomes common knowledge in equilibrium, and thus any function of the private information can be directly evaluated by any agent or observer. However, this theoretical ideal is almost never achievable in practice, because it generally requires a number of securities exponential in the number of random variables of interest.

When available securities form an incomplete market [17] in relation to the desired information space--as is usually the case--aggregation may be partial. Not all private information is revealed in equilibrium, and prices may not convey enough information to recover the complete joint probability distribution over all events. Still, it is generally assumed that aggregation does occur along the dimensions represented in the market; that is, prices do reflect a consistent projection of the entire joint distribution onto the smaller-dimensional space spanned by securities. In this paper, we investigate cases in which even this partial aggregation fails. For example, even though there is enough private information to determine completely the price of a security in the market, the equilibrium price may in fact reveal no information at all! So characterizations of when a rational expectations equilibrium is fully revealing do not immediately apply to our problem. We are not asking whether all possible functions of private information can be evaluated, but
whether a particular target function can be evaluated. We show that properties of the function itself play a major role, not just the relative dimensionalities of the information and security spaces. Our second main contribution is examining the dynamics of information aggregation before equilibrium, in particular proving upper and lower bounds on the time to convergence in those cases in which aggregation succeeds.

Shoham and Tennenholtz [29] define a rationally computable function as a function of agents' valuations (types) that can be computed by a market, assuming agents follow rational equilibrium strategies. The authors mainly consider auctions of goods as their basic mechanistic unit and examine the communication complexity involved in computing various functions of agents' valuations of goods. For example, they give auction mechanisms that can compute the maximum, minimum, and kth-highest of the agents' valuations of a single good using 1, 1, and n − k + 1 bits of communication, respectively. They also examine the potential tradeoff between communication complexity and revenue.

3. MODEL OF AN INFORMATION MARKET

To investigate the properties and limitations of the process whereby an information market converges toward its rational-expectations equilibrium, we formulate a representative model of the market. In designing the model, our goals were two-fold: (1) to make the model rich enough to be realistic, and (2) to make the model simple enough to admit meaningful analysis. Any modeling decisions must trade off these two generally conflicting goals, and the decision process is as much an art as a science. Nonetheless, we believe that our model captures enough of the essence of real information markets to lend credence to the results that follow. In this section, we present our modeling assumptions and justifications in detail. Section 3.1 describes the initial information state of the system, Section 3.2 covers the market mechanism, and
Section 3.3 presents the agents' strategies.

3.1 Initial information state

There are n agents (traders) in the system, each of whom is privy to one bit of information, denoted x_i. The vector of all n bits is denoted x = (x_1, x_2, ..., x_n). In the initial state, each agent is aware only of her own bit of information. All agents have a common prior regarding the joint distribution of bits among agents, but none has any specific information about the actual value of bits held by others. Note that this common-prior assumption--typical in the economics literature--does not imply that all agents agree. To the contrary, because each agent has different information, the initial state of the system is in general a state of disagreement. Nearly any disagreement that could be modeled by assuming different priors can instead be modeled by assuming a common prior with different information, and so the common-prior assumption is not as severe as it may seem.

3.2 Market mechanism

The security being traded by the agents is a financial instrument whose payoff is a function f(x) of the agents' bits. The form of f (the description of the security) is common knowledge3 among agents. We sometimes refer to the x_i as the input bits. At some time in the future after trading is completed, the true value of f(x) is revealed,4 and every owner of the security is paid an amount f(x) in cash per unit owned. If an agent ends with a negative quantity of the security (by selling short), then the agent must pay the amount f(x) in cash per unit. Note that if someone were to have complete knowledge of all input bits x, then that person would know the true value f(x) of the security with certainty, and so would be willing to buy it at any price lower than f(x) and (short) sell it at any price higher than f(x).5

Following Dubey, Geanakoplos, and Shubik [4], and Jackson and Peck [13], we model the market-price formation process as a multiperiod Shapley-Shubik market game [28]. The
Shapley-Shubik process operates as follows: The market proceeds in synchronous rounds. In each round, each agent i submits a bid b_i and a quantity q_i. The semantics are that agent i is supplying a quantity q_i of the security and an amount b_i of money to be traded in the market. For simplicity, we assume that there are no restrictions on credit or short sales, and so an agent's trade is not constrained by her possessions. The market clears in each round by settling at a single price that balances the trade in that round: the clearing price is p = Σ_i b_i / Σ_i q_i. At the end of the round, agent i holds a quantity q'_i proportional to the money she bid: q'_i = b_i / p. In addition, she is left with an amount of money b'_i that reflects her net trade at price p: b'_i = b_i − p(q'_i − q_i) = p q_i. Note that agent i's net trade in the security is a purchase if p < b_i / q_i and a sale if p > b_i / q_i. After each round, the clearing price p is publicly revealed. Agents then revise their beliefs according to any information garnered from the new price. The next round proceeds as the previous. The process continues until an equilibrium is reached, meaning that prices and bids do not change from one round to the next.

In this paper, we make a further simplifying restriction on the trading in each round: We assume that q_i = 1 for each agent i. This modeling assumption serves two analytical purposes. First, it ensures that there is forced trade in every round. Classic results in economics show that perfectly rational and risk-neutral agents will never trade with each other for purely speculative reasons (even if they have differing information) [20]. There are many factors that can induce rational agents to trade, such as differing degrees of risk aversion, the presence of other traders who are trading for liquidity reasons rather than speculative gain, or a market maker who is pumping money into the market through a subsidy. We sidestep this issue by simply assuming that the informed
agents will trade (for unspecified reasons). Second, forcing q_i = 1 for all i means that the total volume of trade and the impact of any one trader on the clearing price are common knowledge; the clearing price p is a simple function of the agents' bids, p = Σ_i b_i / n. We will discuss the implications of alternative market models in Section 5.

3.3 Agent strategies

In order to draw formal conclusions about the price evolution process, we need to make some assumptions about how agents behave. Essentially, we assume that agents are risk-neutral, myopic,6 and bid truthfully: Each agent in each round bids his or her current valuation of the security, which is that agent's estimation of the expected payoff of the security. Expectations are computed according to each agent's probability distribution, which is updated via Bayes' rule when new information (revealed via the clearing prices) becomes available. We also assume that it is common knowledge that all the agents behave in the specified manner.

Would rational agents actually behave according to this strategy? It's hard to say. Certainly, we do not claim that this is an equilibrium strategy in the game-theoretic sense. Furthermore, it is clear that we are ignoring some legitimate tactics, e.g., bidding falsely in one round in order to affect other agents' judgments in the following rounds (non-myopic reasoning). However, we believe that the strategy outlined is a reasonable starting point for analysis. Solving for a true game-theoretic equilibrium strategy in this setting seems extremely difficult. Our assumptions seem reasonable when there are enough agents in the system that extremely complex meta-reasoning is not likely to improve upon simply bidding one's true expected value. In this case, according to the Shapley-Shubik mechanism, if the clearing price is below an agent's expected value, that agent will end up buying (increasing expected profit); otherwise, if the clearing price is above the
agent's expected value, the agent will end up selling (also increasing expected profit).\n4.\nCOMPUTATIONAL PROPERTIES\nIn this section, we study the computational power of information markets for a very simple class of aggregation functions: Boolean functions of n variables.\nWe characterize the set of Boolean functions that can be computed in our market model for all prior distributions and then prove upper and lower bounds on the worst-case convergence time for these markets.\nThe information structure we assume is as follows: There are n agents, and each agent i has a single bit of private information xi.\nWe use X to denote the vector (x1,..., xn) of inputs.\nAll the agents also have a common prior probability distribution P: {0, 1}^n \u2192 [0, 1] over the values of X.\nWe define a Boolean aggregate function f (X): {0, 1}^n \u2192 {0, 1} that we would like the market to compute.\nNote that X, and hence f (X), is completely determined by the combination of all the agents' information, but it is not known to any one agent.\nThe agents trade in a Boolean security F, which pays off $1 if f (X) = 1 and $0 if f (X) = 0.\nSo an omniscient agent with access to all the agents' bits would know the true value of security F--either exactly $1 or exactly $0.\n(Footnote 6: Risk neutrality implies that each agent's utility for the security is linearly related to his or her subjective estimation of the expected payoff of the security.\nMyopic behavior means that agents treat each round as if it were the final round: They do not reason about how their bids may affect the bids of other agents in future rounds.)\nIn reality, risk-neutral agents with limited information will value F according to their expectation of its payoff, or Ei [f (x)], where Ei is the expectation operator applied according to agent i's probability distribution.\nFor any function f, trading in F may happen to converge to the true value of f (x) by coincidence if the prior probability distribution is sufficiently
degenerate.\nMore interestingly, we would like to know for which functions f the price of the security F always converges to f (x) for all prior probability distributions P.7\nIn Section 4.2, we prove a necessary and sufficient condition that guarantees convergence.\nIn Section 4.3, we address the natural follow-up question by deriving upper and lower bounds on the worst-case number of rounds of trading required for the value of f (x) to be revealed.\n4.1 Equilibrium price characterization\nOur analysis builds on a characterization of the equilibrium price of F that follows from a powerful result on common knowledge of aggregates due to McKelvey and Page [19], later extended by Nielsen et al. [21].\nInformation markets aim to aggregate the knowledge of all the agents.\nProcedurally, this occurs because the agents learn from the markets: The price of the security conveys information to each agent about the knowledge of other agents.\nWe can model the flow of information through prices as follows.\nLet \u2126 = {0, 1}^n be the set of possible values of x; we say that \u2126 denotes the set of possible \"states of the world.\"\nThe prior P defines everyone's initial belief about the likelihood of each state.\nAs trading proceeds, some possible states can be logically ruled out, but the relative likelihoods among the remaining states are fully determined by the prior P.\nSo the common knowledge after any stage is completely described by the set of states that an external observer--with no information beyond the sequence of prices observed--considers possible (along with the prior).\nSimilarly, the knowledge of agent i at any point is also completely described by the set of states she considers possible.\nWe use the notation S^r to denote the common-knowledge possibility set after round r, and S^r_i to denote the set of states that agent i considers possible after round r.
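The basic bidding primitive of this model, an agent's conditional expectation of the security's payoff given her private bit, can be written down directly. A minimal Python sketch follows; the uniform common prior over {0, 1}^n and the two-bit OR used for illustration are assumptions of this sketch, not part of the analysis:

```python
from itertools import product

def first_round_bid(f, i, xi, n):
    """Agent i's first-round bid: E[f(x) | x_i = xi].

    Assumes a uniform common prior over {0,1}^n; a non-uniform prior
    consistent with x could be substituted by weighting the states."""
    S_i = [y for y in product([0, 1], repeat=n) if y[i] == xi]
    return sum(f(y) for y in S_i) / len(S_i)

# Illustration with a two-bit OR: observing x_0 = 0 leaves the payoff
# uncertain (bid 1/2), while observing x_0 = 1 determines it (bid 1).
or2 = lambda y: y[0] | y[1]
print(first_round_bid(or2, 0, 0, 2))  # 0.5
print(first_round_bid(or2, 0, 1, 2))  # 1.0
```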
Initially, the only common knowledge is that the input vector x is in \u2126; in other words, the set of states considered possible by an external observer before trading has occurred is the set S^0 = \u2126.\nHowever, each agent i also knows the value of her bit xi; thus, her knowledge set S^0_i is the set {y \u2208 \u2126 | yi = xi}.\nAgent i's first-round bid is her conditional expectation of the event f (x) = 1 given that x \u2208 S^0_i.\nAll the agents' bids are processed, and the clearing price p1 is announced.\nAn external observer could predict agent i's bid if he knew the value of xi.\nThus, if he knew the value of x, he could predict the value of p1.\nIn other words, the external observer knows the function price1 (x) that relates the first-round price to the true state x. Of course, he does not know the value of x; however, he can rule out any vector x that would have resulted in a different clearing price from the observed price p1.\n(Footnote 7: We assume that the common prior is consistent with x in the sense that it assigns a non-zero probability to the actual value of x.)
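The refinement loop just described (bid, announce the clearing price, rule out the states inconsistent with it, repeat until nothing changes) can be simulated in a few lines. The sketch below assumes a uniform common prior and the simplified mean-of-bids clearing rule p = \u03a3i bi\/n; the two-bit OR is an illustrative choice:

```python
from itertools import product

def equilibrium_price(f, x, n):
    """Iterate rounds of the price process until the common-knowledge
    possibility set stops shrinking; return the equilibrium price.

    Sketch assumptions: uniform common prior over {0,1}^n and the
    simplified clearing rule p = (sum of bids) / n."""
    def bid(S, i, xi):
        # Agent i's conditional expectation of f given S and her bit.
        S_i = [y for y in S if y[i] == xi]
        return sum(f(y) for y in S_i) / len(S_i)

    def price(S, y):
        # Clearing price an external observer would predict in state y.
        return sum(bid(S, i, y[i]) for i in range(n)) / n

    S = list(product([0, 1], repeat=n))  # S^0 = Omega
    while True:
        p = price(S, x)  # announced clearing price for the true state x
        S_next = [y for y in S if abs(price(S, y) - p) < 1e-9]
        if S_next == S:  # no state ruled out: equilibrium reached
            return p
        S = S_next

# Two-bit OR with true input (0, 1): the price converges to f(x) = 1
# (round 1 clears at 0.75, round 2 at 1.0).
print(equilibrium_price(lambda y: y[0] | y[1], (0, 1), 2))  # 1.0
```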
Thus, the common knowledge after round 1 is the set S^1 = {y \u2208 S^0 | price1 (y) = p1}.\nAgent i knows the common knowledge and, in addition, knows the value of bit xi.\nHence, after every round r, the knowledge of agent i is given by S^r_i = {y \u2208 S^r | yi = xi}.\nNote that, because knowledge can only improve over time, we must always have S^r_i \u2286 S^{r-1}_i and S^r \u2286 S^{r-1}.\nThus, only a finite number of changes in each agent's knowledge are possible, and so eventually we must converge to an equilibrium after which no player learns any further information.\nWe use S' to denote the common knowledge at this point, and S'_i to denote agent i's knowledge at this point.\nLet p' denote the clearing price at equilibrium.\nInformally, McKelvey and Page [19] show that, if n people with common priors but different information about the likelihood of some event A agree about a \"suitable\" aggregate of their individual conditional probabilities, then their individual conditional probabilities of event A's occurring must be identical.\n(The precise definition of \"suitable\" is described below.)\nThere is a strong connection to rational expectations equilibria in markets, which was noted in the original McKelvey-Page paper: The market price of a security is common knowledge at the point of equilibrium.\nThus, if the price is a \"suitable\" aggregate of the conditional expectations of all the agents, then in equilibrium they must have identical conditional expectations of the event that the security will pay off.\n(Note that their information may still be different.)\nBergin and Brandenburger [2] proved that this simple definition of stochastically monotone functions is equivalent to the original definition in McKelvey-Page [19].\nDEFINITION 2.\nA function g: R^n \u2192 R is called stochastically regular if it can be written in the form g = h \u2218 g', where g' is stochastically monotone and h is invertible on the range of g'.\nWe can now state the McKelvey-Page result, as generalized by Nielsen
et al. [21].\nIn our context, the following simple theorem statement suffices; more general versions of this theorem can be found in [19, 21].\nTHEOREM 1.\n(Nielsen et al. [21]) Suppose that, at equilibrium, the n agents have a common prior, but possibly different information, about the value of a random variable F, as described above.\nFor all i, let p'_i = E (F | x \u2208 S'_i).\nIf g is a stochastically regular function and g (p'_1, p'_2,..., p'_n) is common knowledge, then it must be the case that p'_1 = p'_2 = \u00b7 \u00b7 \u00b7 = p'_n.\nIn one round of our simplified Shapley-Shubik trading model, the announced price is the mean of the conditional expectations of the n agents.\nThe mean is a stochastically regular function; hence, Theorem 1 shows that, at equilibrium, all agents have identical conditional expectations of the payoff of the security.\nIt follows that the equilibrium price p' must be exactly this common conditional expectation of all agents at equilibrium.\nTheorem 1 does not in itself say how the equilibrium is reached.\nMcKelvey and Page, extending an argument due to Geanakoplos and Polemarchakis [10], show that repeated announcement of the aggregate will eventually result in common knowledge of the aggregate.\nIn our context, this is achieved by announcing the current price at the end of each round; this will ultimately converge to a state in which all agents bid the same price p'.\nHowever, reaching an equilibrium price is not sufficient for the purposes of information aggregation.\nWe also want the price to reveal the actual value of f (x).\nIt is possible that the equilibrium price p' of the security F will not be either 0 or 1, and so we cannot infer the value of f (x) from it.\nExample 1: Consider two agents 1 and 2 with private input bits x1 and x2 respectively.\nSuppose the prior probability distribution is uniform, i.e., x = (x1, x2) takes the values (0, 0), (0, 1), (1, 0), and (1, 1) each with probability 1\/4.\nNow, suppose the aggregate function we want to compute is the XOR function,
f (x) = x1 \u2295 x2.\nTo this end, we design a market to trade in a Boolean security F, which will eventually pay off $1 iff x1 \u2295 x2 = 1.\nIf agent 1 observes x1 = 1, she estimates the expected value of F to be the probability that x2 = 0 (given x1 = 1), which is 1\/2.\nIf she observes x1 = 0, her expectation of the value of F is the conditional probability that x2 = 1, which is also 1\/2.\nThus, in either case, agent 1 will bid 0.5 for F in the first round.\nSimilarly, agent 2 will also always bid 0.5 in the first round.\nHence, the first round of trading ends with a clearing price of 0.5.\nFrom this, agent 2 can infer that agent 1 bid 0.5, but this gives her no information about the value of x1--it is still equally likely to be 0 or 1.\nAgent 1 also gains no information from the first round of trading, and hence neither agent changes her bid in the following rounds.\nThus, the market reaches equilibrium at this point.\nAs predicted by Theorem 1, both agents have the same conditional expectation (0.5) at equilibrium.\nHowever, the equilibrium price of the security F does not reveal the value of f (x1, x2), even though the combination of the agents' information is enough to determine it precisely.\n4.2 Characterizing computable aggregates\nWe now give a necessary and sufficient characterization of the class of functions f such that, for any prior distribution on x, the equilibrium price of F will reveal the true value of f.\nWe show that this is exactly the class of weighted threshold functions:\nTHEOREM 2.\nIf f is a weighted threshold function, then, for any prior probability distribution P, the equilibrium price of F is equal to f (x).\nProof:\nLet S'_i denote the possibility set of agent i at equilibrium.\nAs before, we use p' to denote the final trading price at this point.\nNote that, by Theorem 1, p' is exactly agent i's conditional expectation of the value of f (x), given her final possibility set S'_i.\nFirst, observe that if p' is 0 or 1, then we must have f
(x) = p', regardless of the form of f. For instance, if p' = 1, this means that E (f (y) | y \u2208 S') = 1.\nAs f (\u00b7) can only take the values 0 or 1, it follows that P (f (y) = 1 | y \u2208 S') = 1.\nThe actual value x is always in the final possibility set S', and, furthermore, it must have non-zero prior probability, because it actually occurred.\nHence, it follows that f (x) = 1 in this case.\nAn identical argument shows that if p' = 0, f (x) = 0.\nHence, it is enough to show that, if f is a weighted threshold function, then p' is either 0 or 1.\nWe prove this by contradiction.\nLet f (\u00b7) be a weighted threshold function corresponding to weights {wi}, and assume that 0 < p' < 1.\nLet X denote the data matrix in RKHS, X = (\u03c6(x1), \u03c6(x2), \u00b7 \u00b7 \u00b7 , \u03c6(xm)).\nSimilarly, we define Z = (\u03c6(z1), \u03c6(z2), \u00b7 \u00b7 \u00b7 , \u03c6(zk)).\nThus, the optimization problem in RKHS can be written as follows:\nmin_Z Tr( X^T (ZZ^T + \u03bb1 XLX^T + \u03bb2 I)^{-1} X ) (13)\nSince the mapping function \u03c6 is generally unknown, there is no direct way to solve problem (13).\nIn the following, we apply kernel tricks to solve this optimization problem.\nLet X^{-1} be the Moore-Penrose inverse (also known as the pseudo-inverse) of X.
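The kernelized objective and the sequential greedy selection that the next subsection derives from it can be sketched with numpy. In this sketch, the RBF kernel, the choice of nearest-neighbor graph, and restricting the candidate pool to the data points themselves are all assumptions for illustration, not prescribed by the derivation:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K(x, y) = exp(-gamma * ||x - y||^2), an assumed kernel choice.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def knn_laplacian(K, p=5):
    # Graph Laplacian L = D - W of an (assumed) p-nearest-neighbor graph,
    # with neighbors taken by kernel similarity and 0/1 edge weights.
    m = K.shape[0]
    W = np.zeros((m, m))
    for i in range(m):
        idx = np.argsort(-K[i])[:p + 1]  # self plus p nearest neighbors
        W[i, idx] = 1.0
        W[i, i] = 0.0
    W = np.maximum(W, W.T)  # symmetrize
    return np.diag(W.sum(1)) - W

def greedy_lod(X, k, lam1=1e-3, lam2=1e-5, gamma=1.0):
    """Greedily pick k design points minimizing
    Tr(Kxx (A + K_Xz K_zX)^{-1} Kxx), with A accumulating selections."""
    m = X.shape[0]
    Kxx = rbf_kernel(X, X, gamma)
    L = knn_laplacian(Kxx)
    A = lam1 * Kxx @ L @ Kxx + lam2 * Kxx
    selected = []
    for _ in range(k):
        best, best_val = None, np.inf
        for j in range(m):          # candidate pool = data points (assumed)
            if j in selected:
                continue
            kz = Kxx[:, j:j + 1]    # column kernel vector K_Xz
            val = np.trace(Kxx @ np.linalg.inv(A + kz @ kz.T) @ Kxx)
            if val < best_val:
                best, best_val = j, val
        selected.append(best)
        kz = Kxx[:, best:best + 1]
        A = A + kz @ kz.T           # rank-one update after each selection
    return selected
```

With a linear kernel in place of the RBF, the same loop reduces to the non-kernel variant, mirroring the reduction noted in the text.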
Thus, we have:\nX^T (ZZ^T + \u03bb1 XLX^T + \u03bb2 I)^{-1} X\n= X^T X X^{-1} (ZZ^T + \u03bb1 XLX^T + \u03bb2 I)^{-1} (X^T)^{-1} X^T X\n= X^T X (ZZ^T X + \u03bb1 XLX^T X + \u03bb2 X)^{-1} (X^T)^{-1} X^T X\n= X^T X (X^T ZZ^T X + \u03bb1 X^T XLX^T X + \u03bb2 X^T X)^{-1} X^T X\n= KXX (KXZ KZX + \u03bb1 KXX L KXX + \u03bb2 KXX)^{-1} KXX\nwhere KXX is an m \u00d7 m matrix (KXX,ij = K(xi, xj)), KXZ is an m \u00d7 k matrix (KXZ,ij = K(xi, zj)), and KZX is a k \u00d7 m matrix (KZX,ij = K(zi, xj)).\nThus, the Kernel Laplacian Optimal Design can be defined as follows:\nDefinition 2.\nKernel Laplacian Optimal Design:\nmin_{Z=(z1,\u00b7\u00b7\u00b7 ,zk)} Tr( KXX (KXZ KZX + \u03bb1 KXX L KXX + \u03bb2 KXX)^{-1} KXX ) (14)\n4.2 Optimization Scheme\nIn this subsection, we discuss how to solve the optimization problems (11) and (14).\nParticularly, if we select a linear kernel for KLOD, then it reduces to LOD.\nTherefore, we will focus on problem (14) in the following.\nIt can be shown that the optimization problem (14) is NP-hard.\nWe therefore develop a simple sequential greedy approach to solve (14).\nSuppose n points have been selected, denoted by a matrix Zn = (z1, \u00b7 \u00b7 \u00b7 , zn).\nThe (n + 1)-th point zn+1 can be selected by solving the following optimization problem:\nmin_{Zn+1=(Zn, zn+1)} Tr( KXX (KXZn+1 KZn+1X + \u03bb1 KXX L KXX + \u03bb2 KXX)^{-1} KXX ) (15)\nThe kernel matrices KXZn+1 and KZn+1X can be written in block form:\nKXZn+1 = (KXZn, KXzn+1), KZn+1X = (KZnX; Kzn+1X)\nThus, we have:\nKXZn+1 KZn+1X = KXZn KZnX + KXzn+1 Kzn+1X\nWe define:\nA = KXZn KZnX + \u03bb1 KXX L KXX + \u03bb2 KXX\nA depends only on X and Zn.\nThus, the (n + 1)-th point zn+1 is given by:\nzn+1 = arg min_{zn+1} Tr( KXX (A + KXzn+1 Kzn+1X)^{-1} KXX ) (16)\nEach time we select a new point zn+1, the matrix A is updated by:\nA \u2190 A + KXzn+1 Kzn+1X\nIf the kernel function is chosen as the inner product K(x, y) = \u27e8x, y\u27e9, then HK is a linear functional space and the algorithm reduces to LOD.\n5.\nCONTENT-BASED IMAGE RETRIEVAL USING LAPLACIAN OPTIMAL
DESIGN\nIn this section, we describe how to apply Laplacian Optimal Design to CBIR.\nWe begin with a brief description of image representation using low-level visual features.\n5.1 Low-Level Image Representation\nLow-level image representation is a crucial problem in CBIR.\nGeneral visual features include color, texture, shape, etc.\nColor and texture features are the most extensively used visual features in CBIR.\nCompared with color and texture features, shape features are usually described only after images have been segmented into regions or objects.\nSince robust and accurate image segmentation is difficult to achieve, the use of shape features for image retrieval has been limited to special applications where objects or regions are readily available.\nIn this work, we combine a 64-dimensional color histogram and the 64-dimensional Color Texture Moment (CTM, [15]) to represent the images.\nThe color histogram is calculated using 4 \u00d7 4 \u00d7 4 bins in HSV space.\nThe Color Texture Moment was proposed by Yu et al.
[15], which integrates the color and texture characteristics of the image in a compact form.\nCTM adopts the local Fourier transform as a texture representation scheme and derives eight characteristic maps to describe different aspects of the co-occurrence relations of image pixels in each channel of the (SVcosH, SVsinH, V) color space.\nCTM then calculates the first and second moments of these maps as a representation of the natural color image pixel distribution.\nPlease see [15] for details.\n5.2 Relevance Feedback Image Retrieval\nRelevance feedback is one of the most important techniques for narrowing the gap between low-level visual features and high-level semantic concepts [12].\nTraditionally, the user's relevance feedback is used to update the query vector or adjust the weighting of different dimensions.\nThis process can be viewed as an on-line learning process in which the image retrieval system acts as a learner and the user acts as a teacher.\nThe typical retrieval process is outlined as follows: 1.\nThe user submits a query image example to the system.\nThe system ranks the images in the database according to some pre-defined distance metric and presents the top-ranked images to the user.\n2.\nThe system selects some images from the database and requests the user to label them as relevant or irrelevant.\n3.\nThe system uses the user's provided information to re-rank the images in the database and returns the top images to the user.\nGo to step 2 until the user is satisfied.\nOur Laplacian Optimal Design algorithm is applied in the second step for selecting the most informative images.\nOnce we get the labels for the images selected by LOD, we apply Laplacian Regularized Regression (LRR, [2]) to solve the optimization problem (3) and build the classifier.\nThe classifier is then used to re-rank the images in the database.\nNote that, in order to reduce the computational complexity, we do not use all the unlabeled images in the database but only those within the top 500
returns of the previous iteration.\n6.\nEXPERIMENTAL RESULTS\nIn this section, we evaluate the performance of our proposed algorithm on a large image database.\nTo demonstrate the effectiveness of our proposed LOD algorithm, we compare it with Laplacian Regularized Regression (LRR, [2]), Support Vector Machine (SVM), Support Vector Machine Active Learning (SVMactive) [14], and A-Optimal Design (AOD).\nSVMactive, AOD, and LOD are all active learning algorithms, while LRR and SVM are standard classification algorithms.\nSVM only makes use of the labeled images, while LRR is a semi-supervised learning algorithm which makes use of both labeled and unlabeled images.\nFor SVMactive, AOD, and LOD, 10 training images are selected by the algorithms themselves at each iteration, while for LRR and SVM we use the top 10 images as training data.\nIt is important to note that SVMactive is based on the ordinary SVM, LOD is based on LRR, and AOD is based on ordinary regression.\nThe parameters \u03bb1 and \u03bb2 in our LOD algorithm are empirically set to 0.001 and 0.00001.\nFor both the LRR and LOD algorithms, we use the same graph structure (see Eq. 4) and set the value of p (the number of nearest neighbors) to 5.\nWe begin with a simple synthetic example to give some intuition about how LOD works.\n6.1 Simple Synthetic Example\nA simple synthetic example is given in Figure 1.\nThe data set contains two circles.\nEight points are selected by AOD and LOD.\nAs can be seen, all the points selected by AOD are from the big circle, while LOD selects four points from the big circle and four from the small circle.\nThe numbers beside the selected points denote the order in which they were selected.\nClearly, the points selected by our LOD algorithm better represent the original data set.\nWe did not compare our algorithm with SVMactive in this case because SVMactive cannot be applied due to the lack of labeled points.\n6.2 Image Retrieval Experimental Design\nThe image database we
used consists of 7,900 images of 79 semantic categories from the COREL data set.\nIt is a large and heterogeneous image set.\nEach image is represented as a 128-dimensional vector as described in Section 5.1.\nFigure 2 shows some sample images.\nTo exhibit the advantages of using our algorithm, we need a reliable way of evaluating the retrieval performance and the comparisons with other algorithms.\nWe list different aspects of the experimental design below.\n6.2.1 Evaluation Metrics\nWe use the precision-scope curve and precision rate [10] to evaluate the effectiveness of the image retrieval algorithms.\nThe scope is specified by the number (N) of top-ranked images presented to the user.\nThe precision is the ratio of the number of relevant images presented to the user to the scope N.\n[Figure 1: Data selection by active learning algorithms: (a) Data set, (b) AOD, (c) LOD.\nThe numbers (1-8) beside the selected points denote the order in which they were selected.\nClearly, the points selected by our LOD algorithm better represent the original data set.\nNote that the SVMactive algorithm cannot be applied in this case due to the lack of labeled points.]\n[Figure 2: Sample images from the categories bead, elephant, and ship.]\nThe precision-scope curve describes the precision at various scopes and thus gives an overall performance evaluation of the algorithms.\nOn the other hand, the precision rate emphasizes the precision at a particular value of scope.\nIn general, it is appropriate to present 20 images on a screen.\nPutting more images on a screen may affect the quality of the presented images.\nTherefore, the precision at top 20 (N = 20) is especially important.\nIn real-world image retrieval systems, the query image is usually not in the image database.\nTo simulate such an environment, we use five-fold cross validation to evaluate the algorithms.\nMore precisely, we divide the whole image database into five subsets of equal size.\nThus, there are
20 images per category in each subset.\nAt each run of cross validation, one subset is selected as the query set, and the other four subsets are used as the database for retrieval.\nThe precision-scope curve and precision rate are computed by averaging the results from the five-fold cross validation.\n6.2.2 Automatic Relevance Feedback Scheme\nWe designed an automatic feedback scheme to model the retrieval process.\nFor each submitted query, our system retrieves and ranks the images in the database.\n10 images are selected from the database for user labeling, and the label information is used by the system for re-ranking.\nNote that the images which have been selected at previous iterations are excluded from later selections.\nFor each query, the automatic relevance feedback mechanism is performed for four iterations.\nIt is important to note that the automatic relevance feedback scheme used here is different from the ones described in [8], [11].\nIn [8], [11], the top four relevant and irrelevant images were selected as the feedback images.\nHowever, this may not be practical.\nIn real-world image retrieval systems, it is possible that most of the top-ranked images are relevant (or irrelevant).\nThus, it may be difficult for the user to find both four relevant and four irrelevant images.\nIt is more reasonable for the user to provide feedback information only on the 10 images selected by the system.\n6.3 Image Retrieval Performance\nIn the real world, it is not practical to require the user to provide many rounds of feedback.\nThe retrieval performance after the first two rounds of feedback (especially the first round) is therefore more important.\nFigure 3 shows the average precision-scope curves of the different algorithms for the first two feedback iterations.\nAt the beginning of retrieval, the Euclidean distances in the original 128-dimensional space are used to rank the images in the database.\nAfter the user provides relevance feedback, the LRR, SVM, SVMactive, AOD, and LOD
algorithms are then applied to re-rank the images.\nIn order to reduce the time complexity of the active learning algorithms, we did not select the most informative images from the whole database but from the top 500 images.\nFor LRR and SVM, the user is required to label the top 10 images.\nFor SVMactive, AOD, and LOD, the user is required to label the 10 most informative images selected by these algorithms.\nNote that SVMactive can only be applied when a classifier has already been built.\nTherefore, it cannot be applied at the first round, and we use the standard SVM to build the initial classifier.\n[Figure 3: The average precision-scope curves of the different algorithms for the first two feedback iterations: (a) Feedback Iteration 1, (b) Feedback Iteration 2.\nThe LOD algorithm performs the best over the entire scope.\nNote that at the first round of feedback the SVMactive algorithm cannot be applied; the ordinary SVM is used to build the initial classifier.]\n[Figure 4: Performance evaluation of the five learning algorithms for relevance feedback image retrieval: (a) Precision at top 10, (b) Precision at top 20, (c) Precision at top 30.\nAs can be seen, our LOD algorithm consistently outperforms the other four algorithms.]\nAs can be seen, our LOD algorithm outperforms the other four algorithms over the entire scope.\nAlso, the LRR algorithm performs better than SVM.\nThis is because the LRR algorithm makes efficient use of the unlabeled images by incorporating a locality-preserving regularizer into the ordinary regression objective function.\nThe AOD algorithm performs the worst.\nAs the scope gets larger, the performance difference between these algorithms gets smaller.\nBy iteratively adding the user's feedback, the corresponding precision results (at top 10, top 20, and top 30) of the five algorithms are respectively shown in Figure 4.\nAs can be seen, our LOD algorithm performs the best
in all the cases, and the LRR algorithm performs the second best.\nBoth of these algorithms make use of the unlabeled images.\nThis shows that the unlabeled images are helpful for discovering the intrinsic geometrical structure of the image space and therefore enhance the retrieval performance.\nIn the real world, the user may not be willing to provide too many rounds of relevance feedback.\nTherefore, the retrieval performance in the first two rounds is especially important.\nAs can be seen, our LOD algorithm achieves a 6.8% performance improvement for the top 10 results, 5.2% for the top 20 results, and 4.1% for the top 30 results, compared with the second best algorithm (LRR), after the first two rounds of relevance feedback.\n6.4 Discussion\nSeveral experiments on the Corel database have been systematically performed.\nWe would like to highlight several interesting points: 1.\nIt is clear that the use of active learning is beneficial in the image retrieval domain.\nThere is a significant increase in performance from using the active learning methods.\nIn particular, among the three active learning methods (SVMactive, AOD, LOD), our proposed LOD algorithm performs the best.\n2.\nIn many real-world applications like relevance feedback image retrieval, there are generally two ways of reducing the labor-intensive manual labeling task.\nOne is active learning, which selects the most informative samples to label; the other is semi-supervised learning, which makes use of the unlabeled samples to enhance the learning performance.\nBoth of these strategies have been studied extensively in the past [14], [7], [5], [8].\nThe work presented in this paper is focused on active learning, but it also takes advantage of recent progress in semi-supervised learning [2].\nSpecifically, we incorporate a locality-preserving regularizer into the standard regression framework and find the most informative samples with respect to the new objective function.\nIn this way, the active learning and
semi-supervised learning techniques are seamlessly unified for learning an optimal classifier.\n3.\nThe relevance feedback technique is crucial to image retrieval.\nFor all five algorithms, the retrieval performance improves as more feedback is provided by the user.\n7.\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a novel active learning algorithm, called Laplacian Optimal Design, to enable more effective relevance feedback image retrieval.\nOur algorithm is based on an objective function which simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space.\nUsing techniques from experimental design, our algorithm finds the most informative images to label.\nThese labeled images and the unlabeled images in the database are used to learn a classifier.\nThe experimental results on the Corel database show that both active learning and semi-supervised learning can significantly improve the retrieval performance.\nIn this paper, we consider the image retrieval problem on a small, static, and closed-domain image data set.\nA much more challenging domain is the World Wide Web (WWW).\nFor Web image search, it is possible to collect a large amount of user click information.\nThis information can be naturally used to construct the affinity graph in our algorithm.\nHowever, the computational complexity in the Web scenario may become a crucial issue.\nAlso, although our primary interest in this paper is focused on relevance feedback image retrieval, our results may also be of interest to researchers in pattern recognition and machine learning, especially when a large amount of data is available but only a limited number of samples can be labeled.\n8.\nREFERENCES\n[1] A. C. Atkinson and A. N. Donev.\nOptimum Experimental Designs.\nOxford University Press, 2002.\n[2] M. Belkin, P. Niyogi, and V. Sindhwani.\nManifold regularization: A geometric framework for learning from examples.\nJournal of Machine Learning Research, 7:2399-2434, 2006.\n[3] F. R.
K. Chung.\nSpectral Graph Theory, volume 92 of Regional Conference Series in Mathematics.\nAMS, 1997.\n[4] D. A. Cohn, Z. Ghahramani, and M. I. Jordan.\nActive learning with statistical models.\nJournal of Artificial Intelligence Research, 4:129-145, 1996.\n[5] A. Dong and B. Bhanu.\nA new semi-supervised EM algorithm for image retrieval.\nIn IEEE Conf. on Computer Vision and Pattern Recognition, Madison, WI, 2003.\n[6] P. Flaherty, M. I. Jordan, and A. P. Arkin.\nRobust design of biological experiments.\nIn Advances in Neural Information Processing Systems 18, Vancouver, Canada, 2005.\n[7] K.-S. Goh, E. Y. Chang, and W.-C. Lai.\nMultimodal concept-dependent active learning for image retrieval.\nIn Proceedings of the ACM Conference on Multimedia, New York, October 2004.\n[8] X. He.\nIncremental semi-supervised subspace learning for image retrieval.\nIn Proceedings of the ACM Conference on Multimedia, New York, October 2004.\n[9] S. C. Hoi and M. R. Lyu.\nA semi-supervised active learning framework for image retrieval.\nIn IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, 2005.\n[10] D. P. Huijsmans and N. Sebe.\nHow to complete performance graphs in content-based image retrieval: Add generality and normalize scope.\nIEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2):245-251, 2005.\n[11] Y.-Y. Lin, T.-L. Liu, and H.-T. Chen.\nSemantic manifold learning for image retrieval.\nIn Proceedings of the ACM Conference on Multimedia, Singapore, November 2005.\n[12] Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra.\nRelevance feedback: A power tool for interactive content-based image retrieval.\nIEEE Transactions on Circuits and Systems for Video Technology, 8(5), 1998.\n[13] A. W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain.\nContent-based image retrieval at the end of the early years.\nIEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349-1380, 2000.\n[14] S. Tong and E.
Chang.\nSupport vector machine active learning for image retrieval.\nIn Proceedings of the Ninth ACM International Conference on Multimedia, pages 107-118, 2001.\n[15] H. Yu, M. Li, H.-J. Zhang, and J. Feng.\nColor texture moments for content-based image retrieval.\nIn International Conference on Image Processing, pages 24-28, 2002.\n[16] K. Yu, J. Bi, and V. Tresp.\nActive learning via transductive experimental design.\nIn Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006.","lvl-3":"Laplacian Optimal Design for Image Retrieval\nABSTRACT\nRelevance feedback is a powerful technique to enhance Content-Based Image Retrieval (CBIR) performance.\nIt solicits the user's relevance judgments on the retrieved images returned by the CBIR system.\nThe user's labeling is then used to learn a classifier to distinguish between relevant and irrelevant images.\nHowever, the top returned images may not be the most informative ones.\nThe challenge is thus to determine which unlabeled images would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.\nIn this paper, we propose a novel active learning algorithm, called Laplacian Optimal Design (LOD), for relevance feedback image retrieval.\nOur algorithm is based on a regression model which minimizes the least squares error on the measured (or labeled) images and simultaneously preserves the local geometrical structure of the image space.\nSpecifically, we assume that if two images are sufficiently close to each other, then their measurements (or labels) are close as well.\nBy constructing a nearest neighbor graph, the geometrical structure of the image space can be described by the graph Laplacian.\nWe discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images which gives us the greatest amount of information.\nExperimental results on the Corel database suggest that the
proposed approach achieves higher precision in relevance feedback image retrieval.\n1.\nINTRODUCTION\nIn many machine learning and information retrieval tasks, there is no shortage of unlabeled data, but labels are expensive.\nThe challenge is thus to determine which unlabeled samples would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.\nThis problem is typically called active learning [4].\nHere the task is to minimize an overall cost, which depends both on the classifier accuracy and the cost of data collection.\nMany real-world applications can be cast into the active learning framework.\nIn particular, we consider the problem of relevance feedback driven Content-Based Image Retrieval (CBIR) [13].\nContent-Based Image Retrieval has attracted substantial interest in the last decade [13].\nIt is motivated by the fast growth of digital image databases which, in turn, require efficient search schemes.\nRather than describing an image using text, in these systems an image query is described using one or more example images.\nThe low-level visual features (color, texture, shape, etc.) 
are automatically extracted to represent the images.\nHowever, the low-level features may not accurately characterize the high-level semantic concepts.\nTo narrow down the semantic gap, relevance feedback is introduced into CBIR [12].\nIn many of the current relevance feedback driven CBIR systems, the user is required to provide his\/her relevance judgments on the top images returned by the system.\nThe labeled images are then used to train a classifier to separate images that match the query concept from those that do not.\nHowever, in general the top returned images may not be the most informative ones.\nIn the worst case, all the top images labeled by the user may be positive, and thus the standard classification techniques cannot be applied due to the lack of negative examples.\nUnlike standard classification problems, where the labeled samples are given in advance, in relevance feedback image retrieval the system can actively select the images to label.\nThus active learning can be naturally introduced into image retrieval.\nAmong the many existing active learning techniques, Support Vector Machine (SVM) active learning [14] and regression-based active learning [1] have received the most interest.\nBased on the observation that the closer an image is to the SVM boundary, the less reliable its classification is, SVM active learning selects those unlabeled images closest to the boundary to solicit user feedback, so as to achieve maximal refinement of the hyperplane between the two classes.\nThe major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough.\nMoreover, it cannot be applied at the beginning of the retrieval, when there are no labeled images.\nSome other SVM-based active learning algorithms can be found in [7], [9].\nIn statistics, the problem of selecting samples to label is typically referred to as experimental design.\nThe sample x is referred to as an experiment, and its label y is referred to as a measurement.\nThe study 
of optimal experimental design (OED) [1] is concerned with the design of experiments that are expected to minimize variances of a parameterized model.\nThe intent of optimal experimental design is usually to maximize confidence in a given model, minimize parameter variances for system identification, or minimize the model's output variance.\nClassical experimental design approaches include A-Optimal Design, D-Optimal Design, and E-Optimal Design.\nAll of these approaches are based on a least-squares regression model.\nCompared to SVM-based active learning algorithms, experimental design approaches are much more computationally efficient.\nHowever, these approaches take only measured (or, labeled) data into account in their objective functions, while the unmeasured (or, unlabeled) data is ignored.\nBenefiting from recent progress in optimal experimental design and semi-supervised learning, in this paper we propose a novel active learning algorithm for image retrieval, called Laplacian Optimal Design (LOD).\nUnlike traditional experimental design methods, whose loss functions are defined only on the measured points, the loss function of our proposed LOD algorithm is defined on both measured and unmeasured points.\nSpecifically, we introduce a locality-preserving regularizer into the standard least-squares loss function.\nThe new loss function aims to find a classifier which is locally as smooth as possible.\nIn other words, if two points are sufficiently close to each other in the input space, then they are expected to share the same label.\nOnce the loss function is defined, we can select the most informative data points, which are presented to the user for labeling.\nIt is important to note that the most informative images may not be the top returned images.\nThe rest of the paper is organized as follows.\nIn Section 2, we provide a brief description of the related work.\nOur proposed Laplacian Optimal Design algorithm is introduced in Section 
3.\nIn Section 4, we compare our algorithm with the state-of-the-art algorithms and present the experimental results on image retrieval.\nFinally, we provide some concluding remarks and suggestions for future work in Section 5.\n2.\nRELATED WORK\nSince our proposed algorithm is based on a regression framework, the most related work is optimal experimental design [1], including A-Optimal Design, D-Optimal Design, and E-Optimal Design.\nIn this section, we give a brief description of these approaches.\n2.1 The Active Learning Problem\nThe generic problem of active learning is the following.\nGiven a set of points A = {x1, x2, \u00b7 \u00b7 \u00b7, xm} in Rd, find a subset B = {z1, z2, \u00b7 \u00b7 \u00b7, zk} \u2282 A which contains the most informative points.\nIn other words, the points zi (i = 1, \u00b7 \u00b7 \u00b7, k) can improve the classifier the most if they are labeled and used as training points.\n2.2 Optimal Experimental Design\nWe consider a linear regression model\ny = wT x + \u03b5\nwhere y is the observation, x is the independent variable, w is the weight vector, and \u03b5 is an unknown error with zero mean.\nDifferent observations have errors that are independent, but with equal variances \u03c32.\nWe define f (x) = wT x to be the learner's output given input x and the weight vector w. 
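The linear model just defined, together with the three classical optimality criteria discussed in this related-work section, can be made concrete with a short sketch. This is our own illustration, not code from the paper: the function names and the tiny ridge term (used only to guard against a singular Hessian) are our assumptions.

```python
import numpy as np

def fit_least_squares(Z, y, reg=1e-10):
    """Maximum likelihood estimate of w for y_i = w^T z_i + noise.
    Z is d x k, with columns the selected points z_i."""
    H = Z @ Z.T + reg * np.eye(Z.shape[0])  # Hessian of the squared error
    return np.linalg.solve(H, Z @ y)

def design_scores(Z, reg=1e-10):
    """Classical scalar criteria for a candidate design Z: the parameter
    covariance is sigma^2 * inv(H), and a good design makes it 'small'."""
    H = Z @ Z.T + reg * np.eye(Z.shape[0])
    cov = np.linalg.inv(H)                   # covariance up to sigma^2
    return {
        "A": np.trace(cov),                  # A-optimal: minimize the trace
        "D": np.linalg.det(cov),             # D-optimal: minimize the determinant
        "E": np.linalg.eigvalsh(cov).max(),  # E-optimal: minimize the largest eigenvalue
    }
```

Under any of the three criteria, a design whose columns spread across directions of the input space yields a smaller covariance than one whose columns cluster; the A-criterion is the cheapest to evaluate because a trace avoids determinants and eigendecompositions, which matches the efficiency remark made in this section.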
Suppose we have a set of labeled sample points (z1, y1), \u00b7 \u00b7 \u00b7, (zk, yk), where yi is the label of zi.\nThus, the maximum likelihood estimate for the weight vector, \u02c6w, is that which minimizes the sum squared error\nJsse (w) = \u03a3i (wT zi \u2212 yi)2\nBy the Gauss-Markov theorem, we know that \u02c6w \u2212 w has a zero mean and a covariance matrix given by \u03c32Hsse\u22121, where Hsse is the Hessian of Jsse (w):\nHsse = ZZT\nwhere Z = (z1, z2, \u00b7 \u00b7 \u00b7, zk).\nThe three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design are:\n\u2022 D-optimal design: determinant of Hsse.\n\u2022 A-optimal design: trace of Hsse.\n\u2022 E-optimal design: maximum eigenvalue of Hsse.\nSince the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of the matrix trace, A-optimal design is more efficient than the other two.\nSome recent work on experimental design can be found in [6], [16].\n3.\nLAPLACIAN OPTIMAL DESIGN\n3.1 The Objective Function\n4.\nKERNEL LAPLACIAN OPTIMAL DESIGN\n4.1 Derivation of LOD in Reproducing Kernel Hilbert Space\n4.2 Optimization Scheme\n5.\nCONTENT-BASED IMAGE RETRIEVAL USING LAPLACIAN OPTIMAL DESIGN\n5.1 Low-Level Image Representation\n5.2 Relevance Feedback Image Retrieval\n6.\nEXPERIMENTAL RESULTS\n6.1 Simple Synthetic Example\n6.2 Image Retrieval Experimental Design\n6.2.1 Evaluation Metrics\n6.2.2 Automatic Relevance Feedback Scheme\n6.3 Image Retrieval Performance\n6.4 Discussion\n7.\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a novel active learning algorithm, called Laplacian Optimal Design, to enable more effective relevance feedback image retrieval.\nOur algorithm is based on an objective function which simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space.\nUsing techniques from experimental design, our algorithm finds the most informative images to label.\nThese labeled images and the unlabeled images 
in the database are used to learn a classifier.\nThe experimental results on the Corel database show that both active learning and semi-supervised learning can significantly improve the retrieval performance.\nIn this paper, we consider the image retrieval problem on a small, static, and closed-domain image data set.\nA much more challenging domain is the World Wide Web (WWW).\nFor Web image search, it is possible to collect a large amount of user click information.\nThis information can be naturally used to construct the affinity graph in our algorithm.\nHowever, the computational complexity in the Web scenario may become a crucial issue.\nAlso, although our primary interest in this paper is focused on relevance feedback image retrieval, our results may also be of interest to researchers in pattern recognition and machine learning, especially when a large amount of data is available but only a limited number of samples can be labeled.","lvl-4":"Laplacian Optimal Design for Image Retrieval\nABSTRACT\nRelevance feedback is a powerful technique to enhance Content-Based Image Retrieval (CBIR) performance.\nIt solicits the user's relevance judgments on the retrieved images returned by the CBIR system.\nThe user's labeling is then used to learn a classifier to distinguish between relevant and irrelevant images.\nHowever, the top returned images may not be the most informative ones.\nThe challenge is thus to determine which unlabeled images would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.\nIn this paper, we propose a novel active learning algorithm, called Laplacian Optimal Design (LOD), for relevance feedback image retrieval.\nOur algorithm is based on a regression model which minimizes the least-squares error on the measured (or, labeled) images and simultaneously preserves the local geometrical structure of the image space.\nSpecifically, we assume that if two images are sufficiently close to each other, then their 
measurements (or, labels) are close as well.\nBy constructing a nearest neighbor graph, the geometrical structure of the image space can be described by the graph Laplacian.\nWe discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images, which gives us the greatest amount of information.\nExperimental results on the Corel database suggest that the proposed approach achieves higher precision in relevance feedback image retrieval.\n1.\nINTRODUCTION\nIn many machine learning and information retrieval tasks, there is no shortage of unlabeled data, but labels are expensive.\nThe challenge is thus to determine which unlabeled samples would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.\nThis problem is typically called active learning [4].\nMany real-world applications can be cast into the active learning framework.\nIn particular, we consider the problem of relevance feedback driven Content-Based Image Retrieval (CBIR) [13].\nContent-Based Image Retrieval has attracted substantial interest in the last decade [13].\nIt is motivated by the fast growth of digital image databases which, in turn, require efficient search schemes.\nRather than describing an image using text, in these systems an image query is described using one or more example images.\nThe low-level visual features (color, texture, shape, etc.) 
are automatically extracted to represent the images.\nTo narrow down the semantic gap, relevance feedback is introduced into CBIR [12].\nIn many of the current relevance feedback driven CBIR systems, the user is required to provide his\/her relevance judgments on the top images returned by the system.\nThe labeled images are then used to train a classifier to separate images that match the query concept from those that do not.\nHowever, in general the top returned images may not be the most informative ones.\nIn the worst case, all the top images labeled by the user may be positive, and thus the standard classification techniques cannot be applied due to the lack of negative examples.\nUnlike standard classification problems, where the labeled samples are given in advance, in relevance feedback image retrieval the system can actively select the images to label.\nThus active learning can be naturally introduced into image retrieval.\nAmong the many existing active learning techniques, Support Vector Machine (SVM) active learning [14] and regression-based active learning [1] have received the most interest.\nThe major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough.\nMoreover, it cannot be applied at the beginning of the retrieval, when there are no labeled images.\nSome other SVM-based active learning algorithms can be found in [7], [9].\nIn statistics, the problem of selecting samples to label is typically referred to as experimental design.\nThe sample x is referred to as an experiment, and its label y is referred to as a measurement.\nThe study of optimal experimental design (OED) [1] is concerned with the design of experiments that are expected to minimize variances of a parameterized model.\nThe intent of optimal experimental design is usually to maximize confidence in a given model, minimize parameter variances for system identification, or minimize the model's output variance.\nClassical experimental design approaches include 
A-Optimal Design, D-Optimal Design, and E-Optimal Design.\nAll of these approaches are based on a least-squares regression model.\nCompared to SVM-based active learning algorithms, experimental design approaches are much more computationally efficient.\nHowever, these approaches take only measured (or, labeled) data into account in their objective functions, while the unmeasured (or, unlabeled) data is ignored.\nBenefiting from recent progress in optimal experimental design and semi-supervised learning, in this paper we propose a novel active learning algorithm for image retrieval, called Laplacian Optimal Design (LOD).\nUnlike traditional experimental design methods, whose loss functions are defined only on the measured points, the loss function of our proposed LOD algorithm is defined on both measured and unmeasured points.\nSpecifically, we introduce a locality-preserving regularizer into the standard least-squares loss function.\nThe new loss function aims to find a classifier which is locally as smooth as possible.\nIn other words, if two points are sufficiently close to each other in the input space, then they are expected to share the same label.\nOnce the loss function is defined, we can select the most informative data points, which are presented to the user for labeling.\nIt is important to note that the most informative images may not be the top returned images.\nThe rest of the paper is organized as follows.\nIn Section 2, we provide a brief description of the related work.\nOur proposed Laplacian Optimal Design algorithm is introduced in Section 3.\nIn Section 4, we compare our algorithm with the state-of-the-art algorithms and present the experimental results on image retrieval.\nFinally, we provide some concluding remarks and suggestions for future work in Section 5.\n2.\nRELATED WORK\nSince our proposed algorithm is based on a regression framework, the most related work is optimal experimental design [1], including A-Optimal 
Design, D-Optimal Design, and E-Optimal Design.\nIn this section, we give a brief description of these approaches.\n2.1 The Active Learning Problem\nThe generic problem of active learning is the following.\nIn other words, the points zi (i = 1, \u00b7 \u00b7 \u00b7, k) can improve the classifier the most if they are labeled and used as training points.\n2.2 Optimal Experimental Design\nWe consider a linear regression model\ny = wT x + \u03b5\nDifferent observations have errors that are independent, but with equal variances \u03c32.\nThus, the maximum likelihood estimate for the weight vector, \u02c6w, is that which minimizes the sum squared error\nJsse (w) = \u03a3i (wT zi \u2212 yi)2\nThe three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design are:\n\u2022 D-optimal design: determinant of Hsse.\n\u2022 A-optimal design: trace of Hsse.\n\u2022 E-optimal design: maximum eigenvalue of Hsse.\nSince the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of the matrix trace, A-optimal design is more efficient than the other two.\nSome recent work on experimental design can be found in [6], [16].\n7.\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a novel active learning algorithm, called Laplacian Optimal Design, to enable more effective relevance feedback image retrieval.\nOur algorithm is based on an objective function which simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space.\nUsing techniques from experimental design, our algorithm finds the most informative images to label.\nThese labeled images and the unlabeled images in the database are used to learn a classifier.\nThe experimental results on the Corel database show that both active learning and semi-supervised learning can significantly improve the retrieval performance.\nIn this paper, we consider the image retrieval problem on a small, static, and closed-domain image data set.\nFor Web image search, it is possible 
to collect a large amount of user click information.\nThis information can be naturally used to construct the affinity graph in our algorithm.","lvl-2":"Laplacian Optimal Design for Image Retrieval\nABSTRACT\nRelevance feedback is a powerful technique to enhance Content-Based Image Retrieval (CBIR) performance.\nIt solicits the user's relevance judgments on the retrieved images returned by the CBIR system.\nThe user's labeling is then used to learn a classifier to distinguish between relevant and irrelevant images.\nHowever, the top returned images may not be the most informative ones.\nThe challenge is thus to determine which unlabeled images would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.\nIn this paper, we propose a novel active learning algorithm, called Laplacian Optimal Design (LOD), for relevance feedback image retrieval.\nOur algorithm is based on a regression model which minimizes the least-squares error on the measured (or, labeled) images and simultaneously preserves the local geometrical structure of the image space.\nSpecifically, we assume that if two images are sufficiently close to each other, then their measurements (or, labels) are close as well.\nBy constructing a nearest neighbor graph, the geometrical structure of the image space can be described by the graph Laplacian.\nWe discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images, which gives us the greatest amount of information.\nExperimental results on the Corel database suggest that the proposed approach achieves higher precision in relevance feedback image retrieval.\n1.\nINTRODUCTION\nIn many machine learning and information retrieval tasks, there is no shortage of unlabeled data, but labels are expensive.\nThe challenge is thus to determine which unlabeled samples would be the most informative (i.e., improve the classifier the most) if they were labeled and 
used as training samples.\nThis problem is typically called active learning [4].\nHere the task is to minimize an overall cost, which depends both on the classifier accuracy and the cost of data collection.\nMany real-world applications can be cast into the active learning framework.\nIn particular, we consider the problem of relevance feedback driven Content-Based Image Retrieval (CBIR) [13].\nContent-Based Image Retrieval has attracted substantial interest in the last decade [13].\nIt is motivated by the fast growth of digital image databases which, in turn, require efficient search schemes.\nRather than describing an image using text, in these systems an image query is described using one or more example images.\nThe low-level visual features (color, texture, shape, etc.) are automatically extracted to represent the images.\nHowever, the low-level features may not accurately characterize the high-level semantic concepts.\nTo narrow down the semantic gap, relevance feedback is introduced into CBIR [12].\nIn many of the current relevance feedback driven CBIR systems, the user is required to provide his\/her relevance judgments on the top images returned by the system.\nThe labeled images are then used to train a classifier to separate images that match the query concept from those that do not.\nHowever, in general the top returned images may not be the most informative ones.\nIn the worst case, all the top images labeled by the user may be positive, and thus the standard classification techniques cannot be applied due to the lack of negative examples.\nUnlike standard classification problems, where the labeled samples are given in advance, in relevance feedback image retrieval the system can actively select the images to label.\nThus active learning can be naturally introduced into image retrieval.\nAmong the many existing active learning techniques, Support Vector Machine (SVM) active learning [14] and regression-based active learning [1] have received the most 
interest.\nBased on the observation that the closer an image is to the SVM boundary, the less reliable its classification is, SVM active learning selects those unlabeled images closest to the boundary to solicit user feedback, so as to achieve maximal refinement of the hyperplane between the two classes.\nThe major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough.\nMoreover, it cannot be applied at the beginning of the retrieval, when there are no labeled images.\nSome other SVM-based active learning algorithms can be found in [7], [9].\nIn statistics, the problem of selecting samples to label is typically referred to as experimental design.\nThe sample x is referred to as an experiment, and its label y is referred to as a measurement.\nThe study of optimal experimental design (OED) [1] is concerned with the design of experiments that are expected to minimize variances of a parameterized model.\nThe intent of optimal experimental design is usually to maximize confidence in a given model, minimize parameter variances for system identification, or minimize the model's output variance.\nClassical experimental design approaches include A-Optimal Design, D-Optimal Design, and E-Optimal Design.\nAll of these approaches are based on a least-squares regression model.\nCompared to SVM-based active learning algorithms, experimental design approaches are much more computationally efficient.\nHowever, these approaches take only measured (or, labeled) data into account in their objective functions, while the unmeasured (or, unlabeled) data is ignored.\nBenefiting from recent progress in optimal experimental design and semi-supervised learning, in this paper we propose a novel active learning algorithm for image retrieval, called Laplacian Optimal Design (LOD).\nUnlike traditional experimental design methods, whose loss functions are defined only on the measured points, the loss function of our proposed LOD algorithm is defined on both 
measured and unmeasured points.\nSpecifically, we introduce a locality-preserving regularizer into the standard least-squares loss function.\nThe new loss function aims to find a classifier which is locally as smooth as possible.\nIn other words, if two points are sufficiently close to each other in the input space, then they are expected to share the same label.\nOnce the loss function is defined, we can select the most informative data points, which are presented to the user for labeling.\nIt is important to note that the most informative images may not be the top returned images.\nThe rest of the paper is organized as follows.\nIn Section 2, we provide a brief description of the related work.\nOur proposed Laplacian Optimal Design algorithm is introduced in Section 3.\nIn Section 4, we compare our algorithm with the state-of-the-art algorithms and present the experimental results on image retrieval.\nFinally, we provide some concluding remarks and suggestions for future work in Section 5.\n2.\nRELATED WORK\nSince our proposed algorithm is based on a regression framework, the most related work is optimal experimental design [1], including A-Optimal Design, D-Optimal Design, and E-Optimal Design.\nIn this section, we give a brief description of these approaches.\n2.1 The Active Learning Problem\nThe generic problem of active learning is the following.\nGiven a set of points A = {x1, x2, \u00b7 \u00b7 \u00b7, xm} in Rd, find a subset B = {z1, z2, \u00b7 \u00b7 \u00b7, zk} \u2282 A which contains the most informative points.\nIn other words, the points zi (i = 1, \u00b7 \u00b7 \u00b7, k) can improve the classifier the most if they are labeled and used as training points.\n2.2 Optimal Experimental Design\nWe consider a linear regression model\ny = wT x + \u03b5\nwhere y is the observation, x is the independent variable, w is the weight vector, and \u03b5 is an unknown error with zero mean.\nDifferent observations have errors that are independent, but with equal variances 
\u03c32.\nWe define f (x) = wT x to be the learner's output given input x and the weight vector w. Suppose we have a set of labeled sample points (z1, y1), \u00b7 \u00b7 \u00b7, (zk, yk), where yi is the label of zi.\nThus, the maximum likelihood estimate for the weight vector, \u02c6w, is that which minimizes the sum squared error\nJsse (w) = \u03a3i (wT zi \u2212 yi)2\nBy the Gauss-Markov theorem, we know that \u02c6w \u2212 w has a zero mean and a covariance matrix given by \u03c32Hsse\u22121, where Hsse is the Hessian of Jsse (w):\nHsse = ZZT\nwhere Z = (z1, z2, \u00b7 \u00b7 \u00b7, zk).\nThe three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design are:\n\u2022 D-optimal design: determinant of Hsse.\n\u2022 A-optimal design: trace of Hsse.\n\u2022 E-optimal design: maximum eigenvalue of Hsse.\nSince the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of the matrix trace, A-optimal design is more efficient than the other two.\nSome recent work on experimental design can be found in [6], [16].\n3.\nLAPLACIAN OPTIMAL DESIGN\nSince the covariance matrix Hsse used in traditional approaches depends only on the measured samples, i.e., 
the zi's, these approaches fail to evaluate the expected errors on the unmeasured samples.\nIn this section, we introduce a novel active learning algorithm called Laplacian Optimal Design (LOD) which makes efficient use of both measured (labeled) and unmeasured (unlabeled) samples.\n3.1 The Objective Function\nIn many machine learning problems, it is natural to assume that if two points xi, xj are sufficiently close to each other, then their measurements (f (xi), f (xj)) are close as well.\nLet S be a similarity matrix.\nThus, a new loss function which respects the geometrical structure of the data space can be defined as follows:\nJ0 (w) = \u03a3i (wT zi \u2212 yi)2 + (\u03bb1\/2) \u03a3i, j (wT xi \u2212 wT xj)2 Sij + \u03bb2 ||w||2 (3)\nwhere yi is the measurement (or, label) of zi.\nNote that the loss function (3) is essentially the same as the one used in Laplacian Regularized Regression (LRR, [2]).\nHowever, LRR is a passive learning algorithm where the training data is given.\nIn this paper, we are focused on how to select the most informative data for training.\nThe loss function with our choice of symmetric weights Sij (Sij = Sji) incurs a heavy penalty if neighboring points xi and xj are mapped far apart.\nTherefore, minimizing J0 (w) is an attempt to ensure that if xi and xj are close, then f (xi) and f (xj) are close as well.\nThere are many choices of the similarity matrix S.\nA simple definition is as follows:\nSij = 1 if xi is among the p nearest neighbors of xj, or xj is among the p nearest neighbors of xi; Sij = 0 otherwise. (4)\nLet D be a diagonal matrix, Dii = \u03a3j Sij, and L = D \u2212 S.\nThe matrix L is called the graph Laplacian in spectral graph theory [3].\nLet y = (y1, \u00b7 \u00b7 \u00b7, yk) T and X = (x1, \u00b7 \u00b7 \u00b7, xm).\nFollowing some simple algebraic steps, we see that:\nJ0 (w) = wT Hw \u2212 2wT Zy + yT y, with H = ZZT + \u039b\nwhere I is an identity matrix and \u039b = \u03bb1XLXT + \u03bb2I.\nClearly, H is of full rank.\nRequiring that the gradient of J0 (w) with respect to w vanish gives the optimal estimate \u02c6w:\n\u02c6w = H\u22121Zy = (ZZT + \u03bb1XLXT + \u03bb2I)\u22121Zy\nThe following proposition states the bias and variance properties of the estimator for the coefficient vector w.\nFor any x, let \u02c6y = \u02c6wT x be its predicted observation.\nThe expected 
squared prediction error is E (y \u2212 \u02c6y)2.\nIn some cases, the matrix ZZT + \u03bbXLXT is singular (e.g. if m .\n4.\nKERNEL LAPLACIAN OPTIMAL DESIGN\n4.1 Derivation of LOD in Reproducing Kernel Hilbert Space\nLet X denote the data matrix in RKHS, X = (\u03c6 (x1), \u03c6 (x2), \u00b7 \u00b7 \u00b7, \u03c6 (xm)).\nSimilarly, we define Z = (\u03c6 (z1), \u03c6 (z2), \u00b7 \u00b7 \u00b7, \u03c6 (zk)).\nThus, the optimization problem in RKHS can be written as follows:\nSince the mapping function \u03c6 is generally unknown, there is no direct way to solve problem (13).\nIn the following, we apply kernel tricks to solve this optimization problem.\nLet X\u22121 be the Moore-Penrose inverse (also known as the pseudo-inverse) of X. Thus, we have:\nwhere KXX is an m \u00d7 m matrix (KXX, ij = K (xi, xj)), KXZ is an m \u00d7 k matrix (KXZ, ij = K (xi, zj)), and KZX is a k \u00d7 m matrix (KZX, ij = K (zi, xj)).\nThus, the Kernel Laplacian Optimal Design can be defined as follows:\n4.2 Optimization Scheme\nIn this subsection, we discuss how to solve the optimization problems (11) and (14).\nIn particular, if we select a linear kernel for KLOD, then it reduces to LOD.\nTherefore, we will focus on problem (14) in the following.\nIt can be shown that the optimization problem (14) is NP-hard.\nIn this subsection, we develop a simple sequential greedy approach to solve (14).\nSuppose n points have been selected, denoted by a matrix Zn = (z1, \u00b7 \u00b7 \u00b7, zn).\nThe (n + 1)-th point zn+1 can be selected by solving the following optimization problem:\nIf the kernel function is chosen as the inner product K (x, y) = \u27e8x, y\u27e9, then WK is a linear functional space and the algorithm reduces to LOD.\n5.\nCONTENT-BASED IMAGE RETRIEVAL USING LAPLACIAN OPTIMAL DESIGN\nIn this section, we describe how to apply Laplacian Optimal Design to CBIR.\nWe begin with a brief description of image representation using low-level visual features.\n5.1 Low-Level Image Representation\nLow-level image representation is a crucial problem in CBIR.\nGeneral visual features include color, 
texture, shape, etc.\nColor and texture features are the most extensively used visual features in CBIR.\nCompared with color and texture features, shape features are usually described after images have been segmented into regions or objects.\nSince robust and accurate image segmentation is difficult to achieve, the use of shape features for image retrieval has been limited to special applications where objects or regions are readily available.\nIn this work, we combine a 64-dimensional color histogram and a 64-dimensional Color Texture Moment (CTM, [15]) to represent the images.\nThe color histogram is calculated using 4 \u00d7 4 \u00d7 4 bins in HSV space.\nThe Color Texture Moment, proposed by Yu et al. [15], integrates the color and texture characteristics of the image in a compact form.\nCTM adopts the local Fourier transform as a texture representation scheme and derives eight characteristic maps to describe different aspects of the co-occurrence relations of image pixels in each channel of the (SVcosH, SVsinH, V) color space.\nCTM then calculates the first and second moments of these maps as a representation of the natural color image pixel distribution.\nPlease see [15] for details.\n5.2 Relevance Feedback Image Retrieval\nRelevance feedback is one of the most important techniques for narrowing down the gap between low-level visual features and high-level semantic concepts [12].\nTraditionally, the user's relevance feedback is used to update the query vector or adjust the weighting of different dimensions.\nThis process can be viewed as an on-line learning process in which the image retrieval system acts as a learner and the user acts as a teacher.\nThe typical retrieval process is outlined as follows:\n1.\nThe user submits a query image example to the system.\nThe system ranks the images in the database according to some pre-defined distance metric and presents the top-ranked images to the user.\n2.\nThe system selects some images from the database and requests the user to 
label them as "relevant" or "irrelevant".
3. The system uses the information provided by the user to re-rank the images in the database and returns the top images to the user. Go to step 2 until the user is satisfied.
Our Laplacian Optimal Design algorithm is applied in the second step to select the most informative images.
Once we obtain the labels for the images selected by LOD, we apply Laplacian Regularized Regression (LRR, [2]) to solve the optimization problem (3) and build the classifier.
The classifier is then used to re-rank the images in the database.
Note that, in order to reduce the computational complexity, we do not use all the unlabeled images in the database but only those within the top 500 returns of the previous iteration.
6. EXPERIMENTAL RESULTS
In this section, we evaluate the performance of our proposed algorithm on a large image database.
To demonstrate the effectiveness of our proposed LOD algorithm, we compare it with Laplacian Regularized Regression (LRR, [2]), Support Vector Machines (SVM), Support Vector Machine Active Learning (SVMactive) [14], and A-Optimal Design (AOD).
SVMactive, AOD, and LOD are all active learning algorithms, while LRR and SVM are standard classification algorithms.
SVM makes use only of the labeled images, while LRR is a semi-supervised learning algorithm that makes use of both labeled and unlabeled images.
For SVMactive, AOD, and LOD, 10 training images are selected by the algorithms themselves at each iteration, while for LRR and SVM we use the top 10 images as training data.
It is important to note that SVMactive is based on the ordinary SVM, LOD is based on LRR, and AOD is based on ordinary regression.
The parameters λ1 and λ2 in our LOD algorithm are empirically set to 0.001 and 0.00001.
For both the LRR and LOD algorithms, we use the same graph structure (see Eq. 4) and set the value of p (the number of nearest neighbors) to 5.
We begin with a simple synthetic example to give some
intuition about how LOD works.
6.1 Simple Synthetic Example
A simple synthetic example is given in Figure 1.
The data set contains two circles.
Eight points are selected by AOD and LOD.
As can be seen, all the points selected by AOD are from the big circle, while LOD selects four points from the big circle and four from the small circle.
The numbers beside the selected points denote the order in which they were selected.
Clearly, the points selected by our LOD algorithm better represent the original data set.
We did not compare our algorithm with SVMactive because SVMactive cannot be applied in this case due to the lack of labeled points.
6.2 Image Retrieval Experimental Design
The image database we used consists of 7,900 images from 79 semantic categories of the COREL data set.
It is a large and heterogeneous image set.
Each image is represented as a 128-dimensional vector as described in Section 5.1.
Figure 2 shows some sample images.
To exhibit the advantages of our algorithm, we need a reliable way of evaluating the retrieval performance and comparing it with other algorithms.
We list the different aspects of the experimental design below.
6.2.1 Evaluation Metrics
We use the precision-scope curve and the precision rate [10] to evaluate the effectiveness of the image retrieval algorithms.
The scope is specified by the number N of top-ranked images presented to the user.
The precision is the ratio of the number of relevant images presented to the user to the scope N.
Figure 1: Data selection by active learning algorithms. The numbers beside the selected points denote the order in which they were selected. Clearly, the points selected by our LOD algorithm better represent the original data set. Note that the SVMactive algorithm cannot be applied in this case due to the lack of labeled points.
Figure 2: Sample images from the categories bead, elephant, and ship.
The precision-scope curve describes the precision at various scopes and thus gives an overall
performance evaluation of the algorithms.
The precision rate, on the other hand, emphasizes the precision at a particular value of the scope.
In general, it is appropriate to present 20 images on a screen; putting more images on a screen may degrade their presentation quality.
Therefore, the precision at top 20 (N = 20) is especially important.
In real-world image retrieval systems, the query image is usually not in the image database.
To simulate such an environment, we use five-fold cross-validation to evaluate the algorithms.
More precisely, we divide the whole image database into five subsets of equal size, so there are 20 images per category in each subset.
At each run of cross-validation, one subset is selected as the query set, and the other four subsets are used as the database for retrieval.
The precision-scope curve and precision rate are computed by averaging the results over the five folds.
6.2.2 Automatic Relevance Feedback Scheme
We designed an automatic feedback scheme to model the retrieval process.
For each submitted query, our system retrieves and ranks the images in the database.
Ten images are selected from the database for user labeling, and the label information is used by the system for re-ranking.
Note that images which have been selected in previous iterations are excluded from later selections.
For each query, the automatic relevance feedback mechanism is performed for four iterations.
It is important to note that the automatic relevance feedback scheme used here is different from the ones described in [8], [11], where the top four relevant and the top four irrelevant images were selected as the feedback images.
However, this may not be practical: in real-world image retrieval systems, it is possible that most of the top-ranked images are relevant (or irrelevant), making it difficult for the user to find four relevant and four irrelevant images.
It is more reasonable for the user to provide
feedback information only on the 10 images selected by the system.
6.3 Image Retrieval Performance
In the real world, it is not practical to require the user to provide many rounds of feedback; the retrieval performance after the first two rounds (especially the first round) is the most important.
Figure 3 shows the average precision-scope curves of the different algorithms for the first two feedback iterations.
At the beginning of retrieval, the Euclidean distances in the original 128-dimensional space are used to rank the images in the database.
After the user provides relevance feedback, the LRR, SVM, SVMactive, AOD, and LOD algorithms are applied to re-rank the images.
In order to reduce the time complexity of the active learning algorithms, we did not select the most informative images from the whole database but only from the top 500 images.
For LRR and SVM, the user is required to label the top 10 images.
For SVMactive, AOD, and LOD, the user is required to label the 10 most informative images selected by these algorithms.
Note that SVMactive can only be applied once a classifier has already been built; therefore, it cannot be applied at the first round, and we use the standard SVM to build the initial classifier.
Figure 3: The average precision-scope curves of different algorithms for the first two feedback iterations. The LOD algorithm performs the best over the entire scope. Note that at the first round of feedback the SVMactive algorithm cannot be applied; it uses the ordinary SVM to build the initial classifier.
Figure 4: Performance evaluation of the five learning algorithms for relevance feedback image retrieval. (a) Precision at top 10, (b) precision at top 20, and (c) precision at top 30. As can be seen, our LOD algorithm consistently outperforms the other four algorithms.
As can be seen, our LOD algorithm outperforms the other four algorithms over the entire scope.
Also, the LRR algorithm performs better than SVM.
This is because
the LRR algorithm makes efficient use of the unlabeled images by incorporating a locality-preserving regularizer into the ordinary regression objective function.
The AOD algorithm performs the worst.
As the scope gets larger, the performance difference between these algorithms gets smaller.
As the user's feedback is added iteratively, the corresponding precision results (at top 10, top 20, and top 30) of the five algorithms are shown in Figure 4.
As can be seen, our LOD algorithm performs the best in all cases, and the LRR algorithm performs second best.
Both of these algorithms make use of the unlabeled images.
This shows that the unlabeled images are helpful for discovering the intrinsic geometrical structure of the image space, and therefore enhance the retrieval performance.
In the real world, the user may not be willing to provide many rounds of relevance feedback, so the retrieval performance in the first two rounds is especially important.
As can be seen, our LOD algorithm achieves a 6.8% performance improvement for top 10 results, 5.2% for top 20 results, and 4.1% for top 30 results, compared to the second-best algorithm (LRR) after the first two rounds of relevance feedback.
6.4 Discussion
Several experiments on the Corel database have been systematically performed.
We would like to highlight several interesting points:
1. It is clear that the use of active learning is beneficial in the image retrieval domain: there is a significant increase in performance from using the active learning methods. In particular, of the three active learning methods (SVMactive, AOD, LOD), our proposed LOD algorithm performs the best.
2. In many real-world applications like relevance feedback image retrieval, there are generally two ways of reducing the labor-intensive manual labeling task. One is active learning, which selects the most informative samples to label; the other is semi-supervised learning, which makes use of the unlabeled
samples to enhance the learning performance. Both of these strategies have been studied extensively in the past [14], [7], [5], [8]. The work presented in this paper is focused on active learning, but it also takes advantage of recent progress on semi-supervised learning [2]. Specifically, we incorporate a locality-preserving regularizer into the standard regression framework and find the most informative samples with respect to the new objective function. In this way, the active learning and semi-supervised learning techniques are seamlessly unified for learning an optimal classifier.
3. The relevance feedback technique is crucial to image retrieval. For all five algorithms, the retrieval performance improves as the user provides more feedback.
7. CONCLUSIONS AND FUTURE WORK
This paper describes a novel active learning algorithm, called Laplacian Optimal Design, to enable more effective relevance feedback image retrieval.
Our algorithm is based on an objective function which simultaneously minimizes the empirical error and preserves the local geometrical structure of the data space.
Using techniques from experimental design, our algorithm finds the most informative images to label.
These labeled images and the unlabeled images in the database are used to learn a classifier.
The experimental results on the Corel database show that both active learning and semi-supervised learning can significantly improve the retrieval performance.
In this paper, we consider the image retrieval problem on a small, static, closed-domain image collection.
A much more challenging domain is the World Wide Web (WWW).
For Web image search, it is possible to collect a large amount of user click information, which can naturally be used to construct the affinity graph in our algorithm.
However, the computational complexity in the Web scenario may become a crucial issue.
Also, although our primary interest in this paper is focused on relevance
feedback image retrieval, our results may also be of interest to researchers in pattern recognition and machine learning, especially when a large amount of data is available but only a limited number of samples can be labeled.

Computing Good Nash Equilibria in Graphical Games
Edith Elkind, Hebrew University of Jerusalem, Israel, and University of Southampton, Southampton, SO17 1BJ, U.K.
Leslie Ann Goldberg, University of Liverpool, Liverpool L69 3BX, U.K.
Paul Goldberg, University of Liverpool, Liverpool L69 3BX, U.K.
ABSTRACT
This paper addresses the problem of fair equilibrium selection in graphical games.
Our approach is based on the data structure called the best response policy, which was proposed by Kearns et al. [13] as a way to represent all Nash equilibria of a graphical game.
In [9], it was shown that the best response policy has polynomial size as long as the underlying graph is a path.
In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size, then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants.
Another attractive solution concept is a Nash equilibrium that maximizes the social welfare.
We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size.
These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.
Categories and Subject Descriptors: F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences - Economics
General Terms: Algorithms, Economics, Theory
1. INTRODUCTION
In a large community of agents, an agent's behavior is not likely to have a direct effect on most other agents: rather, it is just the agents who are close enough to him that will be affected.
However, as these agents respond by adapting their behavior, more agents will feel the consequences, and eventually the choices made by a single agent will propagate throughout the entire community.
This is the intuition behind graphical games, which were introduced by Kearns, Littman and Singh in [13] as a compact representation scheme for games with many players.
In an n-player
graphical game, each player is associated with a vertex of an underlying graph G, and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph.
If the maximum degree of G is Δ, and each player has two actions available to him, then the game can be represented using n2^{Δ+1} numbers.
In contrast, we need n2^n numbers to represent a general n-player 2-action game, which is only practical for small values of n. For graphical games with constant Δ, the size of the game is linear in n.
One of the most natural problems for a graphical game is that of finding a Nash equilibrium, the existence of which follows from Nash's celebrated theorem (as graphical games are just a special case of n-player games).
The first attempt to tackle this problem was made in [13], where the authors consider graphical games with two actions per player in which the underlying graph is a bounded-degree tree.
They propose a generic algorithm for finding Nash equilibria that can be specialized in two ways: an exponential-time algorithm for finding an (exact) Nash equilibrium, and a fully polynomial time approximation scheme (FPTAS) for finding an approximation to a Nash equilibrium.
For any ε > 0, this algorithm outputs an ε-Nash equilibrium, which is a strategy profile in which no player can improve his payoff by more than ε by unilaterally changing his strategy.
While ε-Nash equilibria are often easier to compute than exact Nash equilibria, this solution concept has several drawbacks.
First, the players may be sensitive to a small loss in payoffs, so a strategy profile that is an ε-Nash equilibrium will not be stable.
This will be the case even if there is only a small subset of players who are extremely price-sensitive, and for a large population of players it may be difficult to choose a value of ε that will satisfy everyone.
Second, the strategy profiles that are close to being Nash equilibria may be much better with respect to the
properties under consideration than exact Nash equilibria.
Therefore, the (approximation to the) value of the best solution that corresponds to an ε-Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium.
This is especially important if the purpose of the approximate solution is to provide a good benchmark for a system of selfish agents, as the benchmark implied by an ε-Nash equilibrium may be unrealistic.
For these reasons, in this paper we focus on the problem of computing exact Nash equilibria.
Building on ideas of [14], Elkind et al. [9] showed how to find an (exact) Nash equilibrium in polynomial time when the underlying graph has degree 2 (that is, when the graph is a collection of paths and cycles).
By contrast, finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable: it has been shown (see [5, 12, 7]) to be complete for the complexity class PPAD.
[9] extends this hardness result to the case in which the underlying graph has bounded pathwidth.
A graphical game may not have a unique Nash equilibrium; indeed, it may have exponentially many.
Moreover, some Nash equilibria are more desirable than others.
Rather than having an algorithm which merely finds some Nash equilibrium, we would like to have algorithms for finding Nash equilibria with various socially desirable properties, such as maximizing overall payoff or distributing profit fairly.
A useful property of the data structure of [13] is that it simultaneously represents the set of all Nash equilibria of the underlying game.
If this representation has polynomial size (as is the case for paths, as shown in [9]), one may hope to extract from it a Nash equilibrium with the desired properties.
In fact, in [13] the authors mention that this is indeed possible if one is interested in finding an (approximate) ε-Nash equilibrium.
The goal of this paper is to extend this to exact Nash equilibria.
1.1 Our Results
In
this paper, we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [13] has size poly(n).
We focus on the problem of finding exact Nash equilibria with certain socially desirable properties.
In particular, we show how to find a Nash equilibrium that (nearly) maximizes the social welfare, i.e., the sum of the players' payoffs, and we show how to find a Nash equilibrium that (nearly) satisfies prescribed payoff bounds for all players.
Graphical games on bounded-degree trees have a simple algebraic structure.
One attractive feature, which follows from [13], is that every such game has a Nash equilibrium in which the strategy of every player is a rational number.
Section 3 studies the algebraic structure of those Nash equilibria that maximize social welfare.
We show (Theorems 1 and 2) that, surprisingly, the set of Nash equilibria that maximize social welfare is more complex.
In fact, for any algebraic number α ∈ [0, 1] of degree at most n, we exhibit a graphical game on a path of length O(n) such that, in the unique social welfare-maximizing Nash equilibrium of this game, one of the players plays the mixed strategy α.¹
This result shows that it may be difficult to represent an optimal Nash equilibrium.
It seems to be a novel feature of the setting we consider here that an optimal Nash equilibrium is hard to represent, in a situation where it is easy to find and represent a Nash equilibrium.
As the social welfare-maximizing Nash equilibrium may be hard to represent efficiently, we have to settle for an approximation.
However, the crucial difference between our approach and that of previous papers [13, 16, 19] is that we require our algorithm to output an exact Nash equilibrium, though not necessarily the optimal one with respect to our criteria.
In Section 4, we describe an algorithm that satisfies this requirement.
Namely, we propose an algorithm that for any ε > 0 finds a Nash equilibrium whose
total payoff is within ε of optimal.
It runs in polynomial time (Theorems 3 and 4) for any graphical game on a bounded-degree tree for which the data structure proposed by [13] (the so-called best response policy, defined below) is of size poly(n) (note that, as shown in [9], this is always the case when the underlying graph is a path).
More precisely, the running time of our algorithm is polynomial in n, Pmax, and 1/ε, where Pmax is the maximum absolute value of an entry of a payoff matrix, i.e., it is a pseudopolynomial algorithm, though it is fully polynomial with respect to ε.
¹ A related result in a different context was obtained by Datta [8], who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games.
We show (Section 4.1) that under some restrictions on the payoff matrices, the algorithm can be transformed into a (truly) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 − ε factor of the optimal.
In Section 5, we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti.
Using the idea from Section 4, we give (Theorem 5) a fully polynomial time approximation scheme for this problem.
The running time of the algorithm is bounded by a polynomial in n, Pmax, and 1/ε.
If the instance has a Nash equilibrium satisfying the prescribed thresholds, then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti − ε.
In Section 6, we introduce other natural criteria for selecting a good Nash equilibrium, and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria.
In particular, in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare, while guaranteeing
that each individual payoff is close to a prescribed threshold.
In Section 6.2 we show how to find a Nash equilibrium that (nearly) maximizes the minimum individual payoff.
Finally, in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each other.
1.2 Related Work
Our approximation scheme (Theorems 3 and 4) shows a contrast between the games that we study and two-player n-action games, for which the corresponding problems are usually intractable.
For two-player n-action games, the problem of finding Nash equilibria with special properties is typically NP-hard.
In particular, this is the case for Nash equilibria that maximize the social welfare [11, 6].
Moreover, it is likely to be intractable even to approximate such equilibria.
In particular, Chen, Deng and Teng [4] show that there exists some ε, inverse polynomial in n, for which computing an ε-Nash equilibrium in 2-player games with n actions per player is PPAD-complete.
Lipton and Markakis [15] study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to solve them.
Note that these algorithms are not polynomial-time in general.
The games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, but an optimal Nash equilibrium may necessarily include mixed strategies of high algebraic degree.
A correlated equilibrium (CE) (introduced by Aumann [2]) is a distribution over vectors of players' actions with the property that if any player is told his own action (the value of his own component) from a vector generated by that distribution, then he cannot increase his expected payoff by changing his action.
Any Nash equilibrium is a CE, but the converse does not hold in general.
In contrast with Nash equilibria, correlated equilibria can be found for low-degree graphical games (as well as other classes of
concisely represented multiplayer games) in polynomial time [17].
However, for graphical games it is NP-hard to find a correlated equilibrium that maximizes the total payoff [18].
These NP-hardness results apply to more general games than the ones we consider here; in particular, the graphs are not trees.
From [2] it is also known that there exist 2-player, 2-action games for which the expected total payoff of the best correlated equilibrium is higher than that of the best Nash equilibrium, and we discuss this issue further in Section 7.
2. PRELIMINARIES AND NOTATION
We consider graphical games in which the underlying graph G is an n-vertex tree, in which each vertex has at most Δ children.
Each vertex has two actions, which are denoted by 0 and 1.
A mixed strategy of a player V is represented as a single number v ∈ [0, 1], which denotes the probability that V selects action 1.
For the purposes of the algorithm, the tree is rooted arbitrarily.
For convenience, we assume without loss of generality that the root has a single child, and that its payoff is independent of the action chosen by the child.
This can be achieved by first choosing an arbitrary root of the tree, and then adding a dummy parent of this root, giving the new parent a constant payoff function, e.g., 0.
Given an edge (V, W) of the tree G, and a mixed strategy w for W, let G_{(V,W),W=w} be the instance obtained from G by (1) deleting all nodes Z which are separated from V by W (i.e., all nodes Z such that the path from Z to V passes through W), and (2) restricting the instance so that W is required to play mixed strategy w.
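The restriction G_{(V,W),W=w} keeps exactly the nodes on V's side of the edge (V, W). As a minimal illustrative sketch (the function name and the adjacency-list representation are our own choices; payoffs and the clamped strategy w are omitted), the surviving node set can be computed by a traversal from V that never crosses W:

```python
def nodes_of_restricted_instance(adj, v, w):
    """Return the nodes kept in G_{(V,W),W=w}: everything reachable from v
    in the tree without passing through w. The clamped neighbor w itself is
    omitted here; it survives only as V's boundary node with a fixed strategy."""
    keep, stack = {v}, [v]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y != w and y not in keep:
                keep.add(y)
                stack.append(y)
    return keep

# On the path A - B - C - D, restricting on edge (B, C) with C clamped
# keeps only A and B.
path = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
```

For instance, `nodes_of_restricted_instance(path, "B", "C")` yields `{"A", "B"}`, while restricting from the other side, `nodes_of_restricted_instance(path, "C", "B")`, yields `{"C", "D"}`.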
Definition 1. Suppose that (V, W) is an edge of the tree, that v is a mixed strategy for V, and that w is a mixed strategy for W. We say that v is a potential best response to w (denoted by v ∈ pbr_V(w)) if there is an equilibrium in the instance G_{(V,W),W=w} in which V has mixed strategy v. We define the best response policy for V, given W, as B(W, V) = {(w, v) | v ∈ pbr_V(w), w ∈ [0, 1]}.
The upstream pass of the generic algorithm of [13] considers every node V (other than the root) and computes the best response policy for V given its parent.
With the above assumptions about the root, the downstream pass is straightforward.
The root selects a mixed strategy w for the root W and a mixed strategy v ∈ B(W, V) for each child V of W, and instructs each child V to play v.
The remainder of the downstream pass is recursive.
When a node V is instructed by its parent to adopt mixed strategy v, it does the following for each child U: it finds a pair (v, u) ∈ B(V, U) (with the same v value that it was given by its parent) and instructs U to play u.
The best response policy for a vertex U given its parent V can be represented as a union of rectangles, where a rectangle is defined by a pair of closed intervals (I_V, I_U) and consists of all points in I_V × I_U; it may be the case that one or both of the intervals I_V and I_U consist of a single point.
In order to perform computations on B(V, U), and to bound the number of rectangles, [9] used the notion of an event point, which is defined as follows.
For any set A ⊆ [0, 1]² that is represented as a union of a finite number of rectangles, we say that a point u ∈ [0, 1] on the U-axis is a U-event point of A if u = 0 or u = 1 or the representation of A contains a rectangle of the form I_V × I_U such that u is an endpoint of I_U; V-event points are defined similarly.
For many games considered in this paper, the underlying graph is an n-vertex path, i.e., a graph G = (V, E) with
V = {V1, …, Vn} and E = {(V1, V2), …, (Vn−1, Vn)}.
In [9], it was shown that for such games, the best response policy has only polynomially many rectangles.
The proof that the number of rectangles in B(Vj+1, Vj) is polynomial proceeds by first showing that the number of event points in B(Vj+1, Vj) cannot exceed the number of event points in B(Vj, Vj−1) by more than 2, and then using this fact to bound the number of rectangles in B(Vj+1, Vj).
Let P0(V) and P1(V) be the expected payoffs to V when it plays 0 and 1, respectively.
Both P0(V) and P1(V) are multilinear functions of the strategies of V's neighbors.
In what follows, we will frequently use the following simple observation.
CLAIM 1. For a vertex V with a single child U and parent W, given any A, B, C, D ∈ Q and A', B', C', D' ∈ Q, one can select the payoffs to V so that P0(V) = Auw + Bu + Cw + D and P1(V) = A'uw + B'u + C'w + D'. Moreover, if A, B, C, D, A', B', C', D' are all integers, the payoffs to V are integers as well.
PROOF. We will give the proof for P0(V); the proof for P1(V) is similar.
For i, j = 0, 1, let Pij be the payoff to V when U plays i, V plays 0, and W plays j.
We have P0(V) = P00(1 − u)(1 − w) + P10·u(1 − w) + P01(1 − u)w + P11·uw.
We have to select the values of Pij so that P00 − P10 − P01 + P11 = A, −P00 + P10 = B, −P00 + P01 = C, and P00 = D.
It is easy to see that the unique solution is given by P00 = D, P01 = C + D, P10 = B + D, P11 = A + B + C + D.
The input to all algorithms considered in this paper includes the payoff matrices for each player.
We assume that all elements of these matrices are integers.
Let Pmax be the greatest absolute value of any element of any payoff matrix.
Then the input consists of at most n2^{Δ+1} numbers, each of which can be represented using log Pmax bits.
3. NASH EQUILIBRIA THAT MAXIMIZE THE SOCIAL WELFARE: SOLUTIONS IN R \ Q
From the point of view of social
welfare, the best Nash equilibrium is the one that maximizes the sum of the players' expected payoffs.
Unfortunately, it turns out that computing such a strategy profile exactly is not possible: in this section, we show that even if all players' payoffs are integers, the strategy profile that maximizes the total payoff may have irrational coordinates; moreover, it may involve algebraic numbers of arbitrary degree.
3.1 Warm-up: Quadratic Irrationalities
We start by providing an example of a graphical game on a path of length 3 with integer payoffs such that, in the Nash equilibrium that maximizes the total payoff, one of the players has a strategy in R \ Q.
In the next subsection, we will extend this example to algebraic numbers of arbitrary degree n; to do so, we have to consider paths of length O(n).
THEOREM 1. There exists an integer-payoff graphical game G on a 3-vertex path UVW such that, in any Nash equilibrium of G that maximizes social welfare, the strategy u of player U and the total payoff p satisfy u, p ∈ R \ Q.
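The proof below pins down every Nash equilibrium of the constructed game as (u, 1/2, f(u)) with f(u) = (u + 1)/(u + 2), where U earns 0, V earns (3 − u)f(u), and W earns 1/2. Before the formal argument, a small numeric sketch (our own function name; the grid step is arbitrary) can locate the welfare-maximizing u, which turns out to be the quadratic irrationality √5 − 2:

```python
def total_payoff(u):
    # Total equilibrium payoff from the proof's construction:
    # U earns 0, V earns (3 - u) * f(u), W earns 1/2, with f(u) = (u+1)/(u+2).
    f = (u + 1.0) / (u + 2.0)
    return (3.0 - u) * f + 0.5

# Grid search over u in [0, 1]; step size 1e-5 is an arbitrary choice.
best_u = max((i * 1e-5 for i in range(100_001)), key=total_payoff)
```

The maximizer found this way agrees with √5 − 2 ≈ 0.23607 to within the grid resolution, consistent with the theorem's claim that the optimal u is irrational.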
PROOF. The payoffs to the players in G are specified as follows. The payoff to U is identically 0, i.e., P0(U) = P1(U) = 0. Using Claim 1, we select the payoffs to V so that P0(V) = −uw + 3w and P1(V) = P0(V) + w(u + 2) − (u + 1), where u and w are the (mixed) strategies of U and W, respectively. It follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = (u + 1)/(u + 2). Observe that for any u ∈ [0, 1] we have f(u) ∈ [0, 1]. The payoff to W is 0 if it selects the same action as V and 1 otherwise.

CLAIM 2. All Nash equilibria of the game G are of the form (u, 1/2, f(u)). That is, in any Nash equilibrium, V plays v = 1/2 and W plays w = f(u). Moreover, for any value of u, the vector of strategies (u, 1/2, f(u)) constitutes a Nash equilibrium.

PROOF. It is easy to check that for any u ∈ [0, 1], the vector (u, 1/2, f(u)) is a Nash equilibrium. Indeed, U is content to play any mixed strategy u no matter what V and W do. Furthermore, V is indifferent between 0 and 1 as long as w = f(u), so it can play 1/2. Finally, if V plays 0 and 1 with equal probability, W is indifferent between 0 and 1, so it can play f(u).

Conversely, suppose that v > 1/2. Then W strictly prefers to play 0, i.e., w = 0. Then for V we have P1(V) = P0(V) − (u + 1), i.e., P1(V) < P0(V), which implies v = 0, a contradiction. Similarly, if v < 1/2, player W prefers to play 1, so we have w = 1. Hence, P1(V) = P0(V) + (u + 2) − (u + 1), i.e., P1(V) > P0(V), which implies v = 1, a contradiction. Finally, if v = 1/2 but w ≠ f(u), player V is not indifferent between 0 and 1, so he would deviate from playing 1/2. This completes the proof of Claim 2.

By Claim 2, the total payoff in any Nash equilibrium of this game is a function of u. More specifically, the payoff to U is 0, the payoff to V is −uf(u) + 3f(u), and the payoff to W is 1/2. Therefore, the Nash equilibrium with the maximum total
payoff corresponds to the value of u that maximizes

g(u) = −u(u + 1)/(u + 2) + 3(u + 1)/(u + 2) = −(u − 3)(u + 1)/(u + 2).

To find the extrema of g(u), we compute h(u) = −(d/du) g(u). We have

h(u) = [(2u − 2)(u + 2) − (u − 3)(u + 1)]/(u + 2)^2 = (u^2 + 4u − 1)/(u + 2)^2.

Hence, h(u) = 0 if and only if u ∈ {−2 + √5, −2 − √5}. Note that −2 + √5 ∈ [0, 1]. The function g(u) changes sign at −2, −1, and 3. We have g(u) < 0 for u > 3 and g(u) > 0 for u < −2, so the extremum of g(u) that lies between −1 and 3, i.e., u = −2 + √5, is a local maximum. We conclude that the social welfare-maximizing Nash equilibrium for this game is given by the vector of strategies (−2 + √5, 1/2, (5 − √5)/5). The respective total payoff is

0 − (√5 − 5)(√5 − 1)/√5 + 1/2 = 13/2 − 2√5.

This concludes the proof of Theorem 1.

3.2 Strategies of arbitrary degree

We have shown that in the social welfare-maximizing Nash equilibrium, some players' strategies can be quadratic irrationalities, and so can the total payoff. In this subsection, we extend this result to show that we can construct an integer-payoff graphical game on a path whose social welfare-maximizing Nash equilibrium involves arbitrary algebraic numbers in [0, 1].

THEOREM 2. For any degree-n algebraic number α ∈ [0, 1], there exists an integer-payoff graphical game on a path of length O(n) such that, in all social welfare-maximizing Nash equilibria of this game, one of the players plays α.

PROOF. Our proof consists of two steps. First, we construct a rational expression R(x) and a segment [x′, x″] such that x′, x″ ∈ Q and α is the only maximum of R(x) on [x′, x″]. Second, we construct a graphical game whose Nash equilibria can be parameterized by u ∈ [x′, x″], so that at the equilibrium that corresponds to u the total payoff
is R(u) and, moreover, some player's strategy is u. It follows that to achieve the payoff-maximizing Nash equilibrium, this player has to play α. The details follow.

LEMMA 1. Given an algebraic number α ∈ [0, 1] with deg(α) = n, there exist K2, ... , K2n+2 ∈ Q and x′, x″ ∈ (0, 1) ∩ Q such that α is the only maximum of

R(x) = K2/(x + 2) + · · · + K2n+2/(x + 2n + 2)

on [x′, x″].

PROOF. Let P(x) be the minimal polynomial of α, i.e., a polynomial of degree n with rational coefficients whose leading coefficient is 1 such that P(α) = 0. Let A = {α1, ... , αn} be the set of all roots of P(x). Consider the polynomial Q1(x) = −P^2(x). It has the same roots as P(x), and moreover, for any x ∉ A we have Q1(x) < 0. Hence, A is the set of all maxima of Q1(x). Now, set R(x) = Q1(x)/[(x + 2) · · · (x + 2n + 1)(x + 2n + 2)]. Observe that R(x) ≤ 0 for all x ∈ [0, 1], and R(x) = 0 if and only if Q1(x) = 0. Hence, the set A is also the set of all maxima of R(x) on [0, 1]. Let d = min{|αi − α| : αi ∈ A, αi ≠ α}, and set α′ = max{α − d/2, 0}, α″ = min{α + d/2, 1}. Clearly, α is the only zero (and hence, the only maximum) of R(x) on [α′, α″]. Let x′ and x″ be rational numbers in (α′, α) and (α, α″), respectively; note that by excluding the endpoints of the intervals we ensure that x′, x″ ≠ 0, 1. As [x′, x″] ⊂ [α′, α″], we have that α is the only maximum of R(x) on [x′, x″]. As R(x) is a proper rational expression and all roots of its denominator are simple, by the partial fraction decomposition theorem, R(x) can be represented as

R(x) = K2/(x + 2) + · · · + K2n+2/(x + 2n + 2),

where K2, ... , K2n+2 are rational numbers.

Consider a graphical game on the path U−1V−1U0V0U1V1 ...
Uk−1Vk−1Uk, where k = 2n + 2. Intuitively, we want each triple (Ui−1, Vi−1, Ui) to behave similarly to the players U, V, and W from the game described in the previous subsection. More precisely, we define the payoffs to the players in the following way.

• The payoff to U−1 is 0 no matter what everyone else does.

• The expected payoff to V−1 is 0 if it plays 0, and u0 − (x″ − x′)u−1 − x′ if it plays 1, where u0 and u−1 are the strategies of U0 and U−1, respectively.

• The expected payoff to V0 is 0 if it plays 0, and u1(u0 + 1) − u0 if it plays 1, where u0 and u1 are the strategies of U0 and U1, respectively.

• For each i = 1, ... , k − 1, the expected payoff to Vi when it plays 0 is P0(Vi) = Ai·ui·ui+1 − Ai·ui+1, and the expected payoff to Vi when it plays 1 is P1(Vi) = P0(Vi) + ui+1(2 − ui) − 1, where Ai = −Ki+1 and ui+1 and ui are the strategies of Ui+1 and Ui, respectively.

• For each i = 0, ...
, k, the payoff to Ui does not depend on Vi and is 1 if Ui and Vi−1 select different actions and 0 otherwise.

We will now characterize the Nash equilibria of this game using a sequence of claims.

CLAIM 3. In all Nash equilibria of this game, V−1 plays 1/2, and the strategies u−1 and u0 satisfy u0 = (x″ − x′)u−1 + x′. Consequently, in all Nash equilibria we have u0 ∈ [x′, x″].

PROOF. The proof is similar to that of Claim 2. Let f(u−1) = (x″ − x′)u−1 + x′. Clearly, the player V−1 is indifferent between playing 0 and 1 if and only if u0 = f(u−1). Suppose that v−1 < 1/2. Then U0 strictly prefers to play 1, i.e., u0 = 1, so we have P1(V−1) = P0(V−1) + 1 − (x″ − x′)u−1 − x′. As 1 − x″ ≤ 1 − (x″ − x′)u−1 − x′ ≤ 1 − x′ for u−1 ∈ [0, 1], and x″ < 1, we have P1(V−1) > P0(V−1), so V−1 prefers to play 1, a contradiction. Similarly, if v−1 > 1/2, the player U0 strictly prefers to play 0, i.e., u0 = 0, so we have P1(V−1) = P0(V−1) − (x″ − x′)u−1 − x′. As x′ < x″ and x′ > 0, we have P1(V−1) < P0(V−1), so V−1 prefers to play 0, a contradiction. Finally, if V−1 plays 1/2 but u0 ≠ f(u−1), player V−1 is not indifferent between 0 and 1, so he would deviate from playing 1/2. Also, note that f(0) = x′, f(1) = x″, and, moreover, f(u−1) ∈ [x′, x″] if and only if u−1 ∈ [0, 1]. Hence, in all Nash equilibria of this game we have u0 ∈ [x′, x″].

CLAIM 4. In all Nash equilibria of this game, for each i = 0, ...
, k − 1, we have vi = 1/2, and the strategies of the players Ui and Ui+1 satisfy ui+1 = fi(ui), where f0(u) = u/(u + 1) and fi(u) = 1/(2 − u) for i > 0.

PROOF. The proof of this claim is also similar to that of Claim 2. We use induction on i to prove that the statement of the claim is true and, additionally, that ui ≠ 1 for i > 0.

For the base case i = 0, note that u0 ≠ 0 by the previous claim (recall that x′, x″ are selected so that x′, x″ ≠ 0, 1) and consider the triple (U0, V0, U1). Let v0 be the strategy of V0. First, suppose that v0 > 1/2. Then U1 strictly prefers to play 0, i.e., u1 = 0. Then for V0 we have P1(V0) = P0(V0) − u0. As u0 ≠ 0, we have P1(V0) < P0(V0), which implies v0 = 0, a contradiction. Similarly, if v0 < 1/2, player U1 prefers to play 1, so we have u1 = 1. Hence, P1(V0) = P0(V0) + 1. It follows that P1(V0) > P0(V0), which implies v0 = 1, a contradiction. Finally, if v0 = 1/2 but u1 ≠ u0/(u0 + 1), player V0 is not indifferent between 0 and 1, so he would deviate from playing 1/2. Moreover, as u1 = u0/(u0 + 1) and u0 ∈ [0, 1], we have u1 ≠ 1.

The argument for the inductive step is similar. Namely, suppose that the statement is proved for all i′ < i and consider the triple (Ui, Vi, Ui+1). Let vi be the strategy of Vi. First, suppose that vi > 1/2. Then Ui+1 strictly prefers to play 0, i.e., ui+1 = 0. Then for Vi we have P1(Vi) = P0(Vi) − 1, i.e., P1(Vi) < P0(Vi), which implies vi = 0, a contradiction. Similarly, if vi < 1/2, player Ui+1 prefers to play 1, so we have ui+1 = 1. Hence, P1(Vi) = P0(Vi) + 1 − ui. By the inductive hypothesis, we have ui < 1. Consequently, P1(Vi) > P0(Vi), which implies vi = 1, a contradiction. Finally, if vi = 1/2 but ui+1 ≠ 1/(2 − ui), player Vi is not indifferent between 0 and 1, so he would deviate from playing 1/2. Moreover, as ui+1 = 1/(2 − ui) and ui < 1, we have ui+1 < 1.

CLAIM 5. Any strategy profile of the form (u−1,
1/2, u0, 1/2, u1, 1/2, ... , uk−1, 1/2, uk), where u−1 ∈ [0, 1], u0 = (x″ − x′)u−1 + x′, u1 = u0/(u0 + 1), and ui+1 = 1/(2 − ui) for i ≥ 1, constitutes a Nash equilibrium.

PROOF. First, the player U−1's payoffs do not depend on the other players' actions, so he is free to play any strategy in [0, 1]. As long as u0 = (x″ − x′)u−1 + x′, player V−1 is indifferent between 0 and 1, so he is content to play 1/2; a similar argument applies to players V0, ... , Vk−1. Finally, for each i = 0, ... , k, the payoffs of player Ui only depend on the strategy of player Vi−1. In particular, as long as vi−1 = 1/2, player Ui is indifferent between playing 0 and 1, so he can play any mixed strategy ui ∈ [0, 1]. To complete the proof, note that (x″ − x′)u−1 + x′ ∈ [0, 1] for all u−1 ∈ [0, 1], u0/(u0 + 1) ∈ [0, 1] for all u0 ∈ [0, 1], and 1/(2 − ui) ∈ [0, 1] for all ui ∈ [0, 1], so we have ui ∈ [0, 1] for all i = 0, ... , k.

Now, let us compute the total payoff under a strategy profile of the form given in Claim 5. The payoff to U−1 is 0, and the expected payoff to each of the Ui, i = 0, ... , k, is 1/2. The expected payoffs to V−1 and V0 are 0. Finally, for any i = 1, ... , k − 1, the expected payoff to Vi is Ti = Ai·ui·ui+1 − Ai·ui+1. It follows that to find a Nash equilibrium with the highest total payoff, we have to maximize Σ_{i=1}^{k−1} Ti subject to the conditions u−1 ∈ [0, 1], u0 = (x″ − x′)u−1 + x′, u1 = u0/(u0 + 1), and ui+1 = 1/(2 − ui) for i = 1, ... , k − 1. We would like to express Σ_{i=1}^{k−1} Ti as a function of u0. To simplify notation, set u = u0.

LEMMA 2. For i = 1, ... , k, we have ui = (u + i − 1)/(u + i).

PROOF. The proof is by induction on i.
For i = 1, we have u1 = u/(u + 1). Now, for i ≥ 2, suppose that ui−1 = (u + i − 2)/(u + i − 1). We have ui = 1/(2 − ui−1) = (u + i − 1)/(2u + 2i − 2 − u − i + 2) = (u + i − 1)/(u + i).

It follows that for i = 1, ... , k − 1 we have

Ti = Ai · [(u + i − 1)/(u + i)] · [(u + i)/(u + i + 1)] − Ai · (u + i)/(u + i + 1) = −Ai/(u + i + 1) = Ki+1/(u + i + 1).

Observe that as u−1 varies from 0 to 1, u varies from x′ to x″. Therefore, to maximize the total payoff, we have to choose u ∈ [x′, x″] so as to maximize

K2/(u + 2) + · · · + Kk/(u + k) = R(u).

By construction, the only maximum of R(u) on [x′, x″] is α. It follows that in the payoff-maximizing Nash equilibrium of our game, U0 plays α.

Finally, note that the payoffs in our game are rational rather than integer. However, it is easy to see that we can multiply all payoffs to a player by the least common denominator of these payoffs without affecting his strategy. In the resulting game, all payoffs are integer. This concludes the proof of Theorem 2.

4. APPROXIMATING THE SOCIALLY OPTIMAL NASH EQUILIBRIUM

We have seen that the Nash equilibrium that maximizes the social welfare may involve strategies that are not in Q.
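For the warm-up game of Section 3.1 this is easy to confirm numerically; the sketch below brute-forces the payoff curve g(u) from the proof of Theorem 1 over a fine grid:

```python
import math

def g(u):
    # V's payoff in the equilibrium (u, 1/2, f(u)) of the Section 3.1 game;
    # U's payoff is 0 and W's is 1/2, so the total payoff is g(u) + 1/2.
    return -(u - 3) * (u + 1) / (u + 2)

u_star = -2 + math.sqrt(5)                     # the claimed irrational maximizer
u_best = max((i / 10**6 for i in range(10**6 + 1)), key=g)   # grid search on [0, 1]

assert abs(u_best - u_star) < 1e-5             # grid optimum agrees with -2 + sqrt(5)
assert abs((g(u_star) + 0.5) - (13 / 2 - 2 * math.sqrt(5))) < 1e-12   # total payoff
```

The maximizer −2 + √5 and the optimal total payoff 13/2 − 2√5 match the closed forms derived in Section 3.1.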
Hence, in this section we focus on finding a Nash equilibrium that is almost optimal from the social welfare perspective. We propose an algorithm that for any ε > 0 finds a Nash equilibrium whose total payoff is within ε of optimal. The running time of this algorithm is polynomial in 1/ε, n, and Pmax (recall that Pmax is the maximum absolute value of an entry of a payoff matrix).

While the negative result of the previous section is for graphical games on paths, our algorithm applies to a wider range of scenarios. Namely, it runs in polynomial time on bounded-degree trees as long as the best response policy of each vertex, given its parent, can be represented as a union of a polynomial number of rectangles. Note that path graphs always satisfy this condition: in [9] we showed how to compute such a representation, given a graph with maximum degree 2. Consequently, for path graphs the running time of our algorithm is guaranteed to be polynomial. (Note that [9] exhibits a family of graphical games on bounded-degree trees for which the best response policies of some of the vertices, given their parents, have exponential size when represented as unions of rectangles.)

Due to space restrictions, in this version of the paper we present the algorithm for the case where the graph underlying the graphical game is a path. We then state our result for the general case; the proof can be found in the full version of this paper [10].

Suppose that s is a strategy profile for a graphical game G, i.e., s assigns a mixed strategy to each vertex of G. Let EPV(s) be the expected payoff of player V under s, and let EP(s) = Σ_V EPV(s). Let M(G) = max{EP(s) | s is a Nash equilibrium for G}.

THEOREM 3. Suppose that G is a graphical game on an n-vertex path. Then for any ε > 0 there is an algorithm that constructs a Nash equilibrium s′ for G that satisfies EP(s′) ≥ M(G) − ε. The running time of the algorithm is O(n^4·Pmax^3/ε^3).

PROOF. Let {V1, ...
, Vn} be the set of all players. We start by constructing the best response policies for all Vi, i = 1, ... , n − 1. As shown in [9], this can be done in time O(n^3).

Let N > 5n be a parameter to be selected later, set δ = 1/N, and define X = {jδ | j = 0, ... , N}. We say that v is an event point for a player Vi if it is a Vi-event point for B(Vi, Vi−1) or B(Vi+1, Vi). For each player Vi, consider a finite set of strategies Xi given by Xi = X ∪ {v | v is an event point for Vi}. It has been shown in [9] that for any i = 2, ... , n, the best response policy B(Vi, Vi−1) has at most 2n + 4 Vi-event points. As we require N > 5n, we have |Xi| ≤ 2N; assume without loss of generality that |Xi| = 2N. Order the elements of Xi in increasing order as x^1_i = 0 < x^2_i < · · · < x^{2N}_i. We will refer to the strategies in Xi as discrete strategies of player Vi; a strategy profile in which each player has a discrete strategy will be referred to as a discrete strategy profile. We will now show that even if we restrict each player Vi to strategies from Xi, the players can still achieve a Nash equilibrium, and moreover, the best such Nash equilibrium (with respect to the social welfare) has total payoff at least M(G) − ε, as long as N is large enough.

Let s be a strategy profile that maximizes social welfare. That is, let s = (s1, ... , sn), where si is the mixed strategy of player Vi and EP(s) = M(G). For i = 1, ... , n, let ti = max{x^j_i | x^j_i ≤ si}. First, we will show that the strategy profile t = (t1, ... , tn) is a Nash equilibrium for G. Fix any i, 1 < i ≤ n, and let R = [v1, v2]×[u1, u2] be the rectangle in B(Vi, Vi−1) that contains (si, si−1). As v1 is a Vi-event point of B(Vi, Vi−1), we have v1 ≤ ti, so the point (ti, si−1) is inside R.
Similarly, the point u1 is a Vi−1-event point of B(Vi, Vi−1), so we have u1 ≤ ti−1, and therefore the point (ti, ti−1) is inside R. This means that for any i, 1 < i ≤ n, we have ti−1 ∈ pbrVi−1(ti), which implies that t = (t1, ... , tn) is a Nash equilibrium for G.

Now, let us estimate the expected loss in social welfare caused by playing t instead of s.

LEMMA 3. For any pair of strategy profiles t, s such that |ti − si| ≤ δ for all i, we have |EPVi(s) − EPVi(t)| ≤ 24Pmaxδ for any i = 1, ... , n.

PROOF. Let P^i_{klm} be the payoff of the player Vi when Vi−1 plays k, Vi plays l, and Vi+1 plays m. Fix i ∈ {1, ... , n} and for k, l, m ∈ {0, 1}, set

t^{klm} = t_{i−1}^k (1 − t_{i−1})^{1−k} · t_i^l (1 − t_i)^{1−l} · t_{i+1}^m (1 − t_{i+1})^{1−m},
s^{klm} = s_{i−1}^k (1 − s_{i−1})^{1−k} · s_i^l (1 − s_i)^{1−l} · s_{i+1}^m (1 − s_{i+1})^{1−m}.

We have

|EPVi(s) − EPVi(t)| ≤ Σ_{k,l,m=0,1} |P^i_{klm}(t^{klm} − s^{klm})| ≤ 8Pmax · max_{klm} |t^{klm} − s^{klm}|.

We will now show that for any k, l, m ∈ {0, 1} we have |t^{klm} − s^{klm}| ≤ 3δ; clearly, this implies the lemma. Indeed, fix k, l, m ∈ {0, 1} and set

x = t_{i−1}^k (1 − t_{i−1})^{1−k}, x′ = s_{i−1}^k (1 − s_{i−1})^{1−k},
y = t_i^l (1 − t_i)^{1−l}, y′ = s_i^l (1 − s_i)^{1−l},
z = t_{i+1}^m (1 − t_{i+1})^{1−m}, z′ = s_{i+1}^m (1 − s_{i+1})^{1−m}.

Observe that if k = 0 then x − x′ = (1 − t_{i−1}) − (1 − s_{i−1}), and if k = 1 then x − x′ = t_{i−1} − s_{i−1}, so |x − x′| ≤ δ. A similar argument shows |y − y′| ≤ δ and |z − z′| ≤ δ. Also, we have x, x′, y, y′, z, z′ ∈ [0, 1]. Hence,

|t^{klm} − s^{klm}| = |xyz − x′y′z′| = |xyz − x′yz + x′yz − x′y′z + x′y′z − x′y′z′| ≤ |x − x′|yz + |y − y′|x′z + |z − z′|x′y′ ≤ 3δ.

Lemma 3 implies Σ_{i=1}^n |EPVi(s)
− EPVi(t)| ≤ 24nPmaxδ, so by choosing δ < ε/(24nPmax), or, equivalently, setting N > 24nPmax/ε, we can ensure that the total expected payoff of the strategy profile t is within ε of optimal.

We will now show that we can find the best discrete Nash equilibrium (with respect to the social welfare) using dynamic programming. As t is a discrete strategy profile, this means that the strategy profile found by our algorithm will be at least as good as t. Define m^{l,k}_i to be the maximum total payoff that V1, ... , Vi−1 can achieve if each Vj, j ≤ i, chooses a strategy from Xj; for each j < i the strategy of Vj is a potential best response to the strategy of Vj+1; and, moreover, Vi−1 plays x^l_{i−1} and Vi plays x^k_i. If there is no way to choose the strategies for V1, ... , Vi−1 so as to satisfy these conditions, we set m^{l,k}_i = −∞. The values m^{l,k}_i, i = 1, ... , n; k, l = 1, ... , N, can be computed inductively, as follows. We have m^{l,k}_1 = 0 for k, l = 1, ... , N. Now, suppose that we have already computed m^{l,k}_j for all j < i; k, l = 1, ... , N.
To compute m^{l,k}_i, we first check if (x^k_i, x^l_{i−1}) ∈ B(Vi, Vi−1). If this is not the case, we have m^{l,k}_i = −∞. Otherwise, consider the set Y = Xi−2 ∩ pbrVi−2(x^l_{i−1}), i.e., the set of all discrete strategies of Vi−2 that are potential best responses to x^l_{i−1}. The proof of Theorem 1 in [9] implies that the set pbrVi−2(x^l_{i−1}) is non-empty: the player Vi−2 has a potential best response to any strategy of Vi−1, in particular, to x^l_{i−1}. By construction of the set Xi−2, this implies that Y is not empty. For each x^j_{i−2} ∈ Y, let p_{jlk} be the payoff that Vi−1 receives when Vi−2 plays x^j_{i−2}, Vi−1 plays x^l_{i−1}, and Vi plays x^k_i. Clearly, p_{jlk} can be computed in constant time. Then we have m^{l,k}_i = max{m^{j,l}_{i−1} + p_{jlk} | x^j_{i−2} ∈ Y}.

Finally, suppose that we have computed m^{l,k}_n for l, k = 1, ... , N. We still need to take into account the payoff of player Vn. Hence, we consider all pairs (x^k_n, x^l_{n−1}) that satisfy x^l_{n−1} ∈ pbrVn−1(x^k_n), and pick the one that maximizes the sum of m^{l,k}_n and the payoff of Vn when he plays x^k_n and Vn−1 plays x^l_{n−1}. This yields the maximum total payoff the players can achieve in a Nash equilibrium using discrete strategies; the actual strategy profile that produces this payoff can be reconstructed using standard dynamic programming techniques.

It is easy to see that each m^{l,k}_i can be computed in time O(N), i.e., all of them can be computed in time O(nN^3). Recall that we have to select N ≥ (24nPmax)/ε to ensure that the strategy profile we output has total payoff within ε of optimal. We conclude that we can compute an ε-approximation to the best Nash equilibrium in time O(n^4·Pmax^3/ε^3). This completes the proof of Theorem 3.

To state our result for the general case (i.e., when the underlying graph is a bounded-degree tree rather
than a path), we need additional notation. If G has n players, let q(n) be an upper bound on the number of event points in the representation of any best response policy. That is, we assume that for any vertex U with parent V, B(V, U) has at most q(n) event points. We will be interested in the situation in which q(n) is polynomial in n.

THEOREM 4. Let G be an n-player graphical game on a tree in which each node has at most Δ children. Suppose we are given a set of best response policies for G in which each best response policy B(V, U) is represented by a set of rectangles with at most q(n) event points. For any ε > 0, there is an algorithm that constructs a Nash equilibrium s′ for G that satisfies EP(s′) ≥ M(G) − ε. The running time of the algorithm is polynomial in n, Pmax, and ε^{−1} provided that the tree has bounded degree (that is, Δ = O(1)) and q(n) is polynomial in n. In particular, if N = max((Δ + 1)q(n) + 1, n·2^{Δ+2}(Δ + 2)Pmax·ε^{−1}) and Δ > 1, then the running time is O(nΔ(2N)^Δ).

For the proof of this theorem, see [10].

4.1 A polynomial-time algorithm for multiplicative approximation

The running time of our algorithm is pseudopolynomial rather than polynomial, because it includes a factor which is polynomial in Pmax, the maximum (in absolute value) entry of any payoff matrix. If we are interested in a multiplicative approximation rather than an additive one, the running time can be improved to polynomial.

First, note that we cannot expect a multiplicative approximation for all inputs. That is, we cannot hope to have an algorithm that computes a Nash equilibrium with total payoff at least (1 − ε)M(G). If we had such an algorithm, then for graphical games G with M(G) = 0, the algorithm would be required to output the optimal solution. To show that this is infeasible, observe that we can use the techniques of Section 3.2 to construct two integer-coefficient graphical games on paths of length O(n) such that
for some X ∈ R the maximal total payoff in the first game is X, the maximal total payoff in the second game is −X, and for both games, the strategy profiles that achieve the maximal total payoffs involve algebraic numbers of degree n. By combining the two games so that the first vertex of the second game becomes connected to the last vertex of the first game, while the payoffs of all players do not change, we obtain a graphical game in which the best Nash equilibrium has total payoff 0, yet the strategies that lead to this payoff have high algebraic complexity.

However, we can achieve a multiplicative approximation when all entries of the payoff matrices are positive and the ratio between any two entries is polynomially bounded. Recall that we assume that all payoffs are integer, and let Pmin > 0 be the smallest entry of any payoff matrix. In this case, for any strategy profile the payoff to player i is at least Pmin, so the total payoff in the social welfare-maximizing Nash equilibrium s satisfies M(G) ≥ nPmin. Moreover, Lemma 3 implies that by choosing δ < εPmin/(24Pmax), we can ensure that the Nash equilibrium t produced by our algorithm satisfies

Σ_{i=1}^n EPVi(s) − Σ_{i=1}^n EPVi(t) ≤ 24Pmaxδn ≤ εnPmin ≤ εM(G),

i.e., for this value of δ we have Σ_{i=1}^n EPVi(t) ≥ (1 − ε)M(G). Recall that the running time of our algorithm is O(nN^3), where N has to be selected to satisfy N > 5n and N = 1/δ. It follows that if Pmin > 0 and Pmax/Pmin = poly(n), we can choose N so that our algorithm provides a multiplicative approximation guarantee and runs in time polynomial in n and 1/ε.

5. BOUNDED PAYOFF NASH EQUILIBRIA

Another natural way to define what is a good Nash equilibrium is to require that each player's expected payoff exceed a certain threshold. These thresholds do not have to be the same for all players. In this case, in addition to the payoff matrices of the n players, we are given n numbers T1, ...
, Tn, and our goal is to find a Nash equilibrium in which the payoff of player i is at least Ti, or to report that no such Nash equilibrium exists. It turns out that we can design an FPTAS for this problem using the same techniques as in the previous section.

THEOREM 5. Given a graphical game G on an n-vertex path and n rational numbers T1, ... , Tn, suppose that there exists a strategy profile s such that s is a Nash equilibrium for G and EPVi(s) ≥ Ti for i = 1, ... , n. Then for any ε > 0 we can find, in time O(max{nPmax^3/ε^3, n^4/ε^3}), a strategy profile s′ such that s′ is a Nash equilibrium for G and EPVi(s′) ≥ Ti − ε for i = 1, ... , n.

PROOF. The proof is similar to that of Theorem 3. First, we construct the best response policies for all players, choose N > 5n, and construct the sets Xi, i = 1, ... , n, as described in the proof of Theorem 3. Consider a strategy profile s such that s is a Nash equilibrium for G and EPVi(s) ≥ Ti for i = 1, ... , n. We construct the strategy profile t given by ti = max{x^j_i | x^j_i ≤ si} and use the same argument as in the proof of Theorem 3 to show that t is a Nash equilibrium for G. By Lemma 3, we have |EPVi(s) − EPVi(t)| ≤ 24Pmaxδ, so choosing δ < ε/(24Pmax), or, equivalently, N > max{5n, 24Pmax/ε}, we can ensure EPVi(t) ≥ Ti − ε for i = 1, ... , n.

Now, we will use dynamic programming to find a discrete Nash equilibrium that satisfies EPVi(t) ≥ Ti − ε for i = 1, ... , n. As t is a discrete strategy profile, our algorithm will succeed whenever there is a strategy profile s with EPVi(s) ≥ Ti for i = 1, ... , n. Let z^{l,k}_i = 1 if there is a discrete strategy profile such that for any j < i the strategy of the player Vj is a potential best response to the strategy of Vj+1, the expected payoff of Vj is at least Tj − ε, and, moreover, Vi−1 plays x^l_{i−1} and Vi plays x^k_i. Otherwise, let z^{l,k}_i = 0. We can compute z^{l,k}_i, i = 1, ...
, n; k, l = 1, ... , N, inductively, as follows. We have z^{l,k}_1 = 1 for k, l = 1, ... , N. Now, suppose that we have already computed z^{l,k}_j for all j < i; k, l = 1, ... , N. To compute z^{l,k}_i, we first check if (x^k_i, x^l_{i−1}) ∈ B(Vi, Vi−1). If this is not the case, clearly, z^{l,k}_i = 0. Otherwise, consider the set Y = Xi−2 ∩ pbrVi−2(x^l_{i−1}), i.e., the set of all discrete strategies of Vi−2 that are potential best responses to x^l_{i−1}. It has been shown in the proof of Theorem 3 that Y ≠ ∅. For each x^j_{i−2} ∈ Y, let p_{jlk} be the payoff that Vi−1 receives when Vi−2 plays x^j_{i−2}, Vi−1 plays x^l_{i−1}, and Vi plays x^k_i. Clearly, p_{jlk} can be computed in constant time. If there exists an x^j_{i−2} ∈ Y such that z^{j,l}_{i−1} = 1 and p_{jlk} ≥ Ti−1 − ε, we set z^{l,k}_i = 1. Otherwise, we set z^{l,k}_i = 0.

Having computed z^{l,k}_n for l, k = 1, ... , N, we check whether z^{l,k}_n = 1 for some pair (l, k). If such a pair of indices exists, we instruct Vn to play x^k_n and use dynamic programming techniques (or, equivalently, the downstream pass of the algorithm of [13]) to find a Nash equilibrium s′ that satisfies EPVi(s′) ≥ Ti − ε for i = 1, ... , n (recall that Vn is a dummy player, i.e., we assume Tn = 0 and EPVn(s′) = 0 for any choice of s′). If z^{l,k}_n = 0 for all l, k = 1, ... , N, there is no discrete Nash equilibrium s′ that satisfies EPVi(s′) ≥ Ti − ε for i = 1, ... , n, and hence no Nash equilibrium s (not necessarily discrete) such that EPVi(s) ≥ Ti for i = 1, ...
, n.

The running time analysis is similar to that for Theorem 3; we conclude that the running time of our algorithm is O(nN^3) = O(max{nPmax^3/ε^3, n^4/ε^3}).

REMARK 1. Theorem 5 can be extended to trees of bounded degree in the same way as Theorem 4.

5.1 Exact Computation

Another approach to finding Nash equilibria with bounded payoffs is based on inductively computing subsets of the best response policies of all players so as to exclude the points that do not provide sufficient payoffs to some of the players. Formally, we say that a strategy v of the player V is a potential best response to a strategy w of its parent W with respect to a threshold vector T = (T1, ... , Tn) (denoted by v ∈ pbrV(w, T)) if there is an equilibrium in the instance G(V,W),W=w in which V plays the mixed strategy v and the payoff to any player Vi downstream of V (including V) is at least Ti. The best response policy for V with respect to a threshold vector T is defined as B(W, V, T) = {(w, v) | v ∈ pbrV(w, T), w ∈ [0, 1]}.

It is easy to see that if any of the sets B(Vj, Vj−1, T), j = 1, ... , n, is empty, then it is impossible to provide all players with the expected payoffs prescribed by T. Otherwise, one can apply the downstream pass of the original algorithm of [13] to find a Nash equilibrium. As we assume that Vn is a dummy vertex whose payoff is identically 0, the Nash equilibrium with these payoffs exists as long as Tn ≤ 0 and B(Vn, Vn−1, T) is not empty. Using the techniques developed in [9], it is not hard to show that for any j = 1, ...
, n, the set B(Vj, Vj−1, T) consists of a finite number of rectangles, and one can compute B(Vj+1, Vj, T) given B(Vj, Vj−1, T). The advantage of this approach is that it allows us to represent all Nash equilibria that provide the required payoffs to the players. However, it is not likely to be practical, since it turns out that the rectangles that appear in the representation of B(Vj, Vj−1, T) may have irrational coordinates.

CLAIM 6. There exists a graphical game G on a 3-vertex path UVW and a vector T = (T1, T2, T3) such that B(W, V, T) cannot be represented as a union of a finite number of rectangles with rational coordinates.

PROOF. We define the payoffs to the players in G as follows. The payoff to U is identically 0, i.e., P0(U) = P1(U) = 0. Using Claim 1, we select the payoffs to V so that P0(V) = uw and P1(V) = P0(V) + w − .8u − .1, where u and w are the (mixed) strategies of U and W, respectively. It follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = .8u + .1; observe that for any u ∈ [0, 1] we have f(u) ∈ [0, 1]. It is not hard to see that we have B(W, V) = [0, .1]×{0} ∪ [.1, .9]×[0, 1] ∪ [.9, 1]×{1}. The payoffs to W are not important for our construction; for example, set P0(W) = P1(W) = 0.

Now, set T = (0, 1/8, 0), i.e., we are interested in Nash equilibria in which V's expected payoff is at least 1/8. Suppose w ∈ [0, 1]. The player V can play a mixed strategy v when W is playing w as long as U plays u = f^{−1}(w) = 5w/4 − 1/8 (to ensure that V is indifferent between 0 and 1) and P0(V) = P1(V) = uw = w(5w/4 − 1/8) ≥ 1/8. The latter condition is satisfied if w ≤ (1 − √41)/20 < 0 or w ≥ (1 + √41)/20. Note that we have .1 < (1 + √41)/20 < .9. For any other value of w, any strategy of U either makes V prefer one of the pure strategies or does not provide it with a
sufficient expected payoff. There are also some values of w for which V can play a pure strategy (0 or 1) as a potential best response to W and guarantee itself an expected payoff of at least 1/8; it can be shown that these values of w form a finite number of segments in [0, 1]. We conclude that any representation of B(W, V, T) as a union of a finite number of rectangles must contain a rectangle of the form [(1 + √41)/20, w′]×[v′, v″] for some w′, v′, v″ ∈ [0, 1]. On the other hand, it can be shown that for any integer payoff matrices and threshold vectors and any j = 1, ..., n − 1, the sets B(Vj+1, Vj, T) contain no rectangles of the form [u′, u″]×{v} or {v}×[w′, w″], where v ∈ R\Q. This means that if B(Vn, Vn−1, T) is non-empty, i.e., there is a Nash equilibrium with the payoffs prescribed by T, then the downstream pass of the algorithm of [13] can always pick a strategy profile that forms a Nash equilibrium, provides a payoff of at least Ti to each player Vi, and has no irrational coordinates. Hence, unlike in the case of the Nash equilibrium that maximizes the social welfare, working with irrational numbers is not necessary, and the fact that the algorithm discussed in this section has to do so can be seen as an argument against using this approach.

6. OTHER CRITERIA FOR SELECTING A NASH EQUILIBRIUM
In this section, we consider several other criteria that can be useful in selecting a Nash equilibrium.

6.1 Combining welfare maximization with bounds on payoffs
In many real-life scenarios, we want to maximize the social welfare subject to certain restrictions on the payoffs to individual players. For example, we may want to ensure that no player gets a negative expected payoff, or that the expected payoff to player i is at least Pmax,i − ξ, where Pmax,i is the maximum entry of i's payoff matrix and ξ is a fixed parameter. Formally, given a graphical game G and a vector T1, ...
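The algebra in the proof of Claim 6 can be double-checked numerically. The following sketch (our illustration, not part of the original argument) verifies that u = f⁻¹(w) is the strategy of U that keeps V indifferent, and that V's payoff uw reaches the threshold 1/8 exactly at w = (1 + √41)/20, the positive root of 10w² − w − 1 = 0:

```python
import math

f = lambda u: 0.8 * u + 0.1          # V is indifferent iff w = f(u)
f_inv = lambda w: 5 * w / 4 - 1 / 8  # u = f^{-1}(w)
payoff_V = lambda w: w * f_inv(w)    # P0(V) = P1(V) = u*w on the indifference curve

w_star = (1 + math.sqrt(41)) / 20    # root of 10 w^2 - w - 1 = 0

# f_inv really inverts f on [0, 1]
print(abs(f(f_inv(0.37)) - 0.37) < 1e-12)  # True

# the threshold 1/8 is met exactly at w*, which lies strictly inside (.1, .9)
print(round(payoff_V(w_star), 12))   # 0.125
print(0.1 < w_star < 0.9)            # True
print(payoff_V(w_star - 0.01) < 1/8 < payoff_V(w_star + 0.01))  # True
```

Since payoff_V is increasing near w*, the threshold is violated just below w* and satisfied just above it, which is why any rectangle covering mixed responses must start at the irrational point (1 + √41)/20.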
, Tn, let S be the set of all Nash equilibria s of G that satisfy Ti ≤ EPVi(s) for i = 1, ..., n, and let ŝ = argmax_{s∈S} EP(s). If the set S is non-empty, we can find a Nash equilibrium that is ε-close to satisfying the payoff bounds and is within ε of ŝ with respect to the total payoff, by combining the algorithms of Section 4 and Section 5. Namely, for a given ε > 0, choose δ as in the proof of Theorem 3, and let Xi be the set of all discrete strategies of player Vi (for a formal definition, see the proof of Theorem 3). Combining the proofs of Theorem 3 and Theorem 5, we can see that the strategy profile t̂ given by t̂i = max{x_i^j | x_i^j ≤ ŝi} satisfies EPVi(t̂) ≥ Ti − ε and |EP(ŝ) − EP(t̂)| ≤ ε. Define m̂_i^{l,k} to be the maximum total payoff that V1, ..., Vi−1 can achieve if each Vj, j ≤ i, chooses a strategy from Xj; for each j < i the strategy of Vj is a potential best response to the strategy of Vj+1 and the payoff to player Vj is at least Tj − ε; and, moreover, Vi−1 plays x_{i−1}^l and Vi plays x_i^k. If there is no way to choose the strategies for V1, ...
, Vi−1 to satisfy these conditions, we set m̂_i^{l,k} = −∞. The m̂_i^{l,k} can be computed by dynamic programming, similarly to the m_i^{l,k} and z_i^{l,k} in the proofs of Theorems 3 and 5. Finally, as in the proof of Theorem 3, we use the m̂_n^{l,k} to select the best discrete Nash equilibrium subject to the payoff constraints. Even more generally, we may want to maximize the total payoff to a subset of players (who are assumed to be able to redistribute the profits fairly among themselves) while guaranteeing certain expected payoffs to (a subset of) the other players. This problem can be handled similarly.

6.2 A minimax approach
A more egalitarian measure of the quality of a Nash equilibrium is the minimal expected payoff to a player. The optimal solution with respect to this measure is a Nash equilibrium in which the minimal expected payoff to a player is maximal. To find an approximation to such a Nash equilibrium, we can combine the algorithm of Section 5 with binary search on the space of potential lower bounds. Note that the expected payoff to any player Vi under a strategy profile s always satisfies −Pmax ≤ EPVi(s) ≤ Pmax. For a fixed ε > 0, we start by setting T′ = −Pmax, T″ = Pmax, and T∗ = (T′ + T″)/2. We then run the algorithm of Section 5 with T1 = ··· = Tn = T∗. If the algorithm succeeds in finding a Nash equilibrium s′ that satisfies EPVi(s′) ≥ T∗ − ε for all i = 1, ...
, n, we set T′ = T∗ and T∗ = (T′ + T″)/2; otherwise, we set T″ = T∗ and T∗ = (T′ + T″)/2, and loop. We repeat this process until |T″ − T′| ≤ ε. It is not hard to check that for any p ∈ R, if there is a Nash equilibrium s such that min_{i=1,...,n} EPVi(s) ≥ p, then our algorithm outputs a Nash equilibrium s′ that satisfies min_{i=1,...,n} EPVi(s′) ≥ p − 2ε. The running time of our algorithm is O(max{n·Pmax³ log ε⁻¹/ε³, n⁴ log ε⁻¹/ε³}).

6.3 Equalizing the payoffs
When the players' payoff matrices are not very different, it is reasonable to demand that the expected payoffs to the players do not differ by much either. We will now show that Nash equilibria in this category can be approximated in polynomial time as well. Indeed, observe that the algorithm of Section 5 can easily be modified to deal with upper bounds on individual payoffs rather than lower bounds. Moreover, we can efficiently compute an approximation to a Nash equilibrium that satisfies both an upper bound and a lower bound for each player. More precisely, suppose that we are given a graphical game G, 2n rational numbers T1, ..., Tn, T′1, ..., T′n, and ε > 0. Then, if there exists a strategy profile s such that s is a Nash equilibrium for G and Ti ≤ EPVi(s) ≤ T′i for i = 1, ..., n, we can find a strategy profile s′ such that s′ is a Nash equilibrium for G and Ti − ε ≤ EPVi(s′) ≤ T′i + ε for i = 1, ...
, n. The modified algorithm also runs in time O(max{n·Pmax³/ε³, n⁴/ε³}). This observation allows us to approximate Nash equilibria in which all players' expected payoffs differ by at most ξ, for any fixed ξ > 0. Given an ε > 0, we set T1 = ··· = Tn = −Pmax and T′1 = ··· = T′n = −Pmax + ξ + ε, and run the modified version of the algorithm of Section 5. If it fails to find a solution, we increment all Ti and T′i by ε and loop. We continue until the algorithm finds a solution, or until Ti ≥ Pmax. Suppose that there exists a Nash equilibrium s that satisfies |EPVi(s) − EPVj(s)| ≤ ξ for all i, j = 1, ..., n. Set r = min_{i=1,...,n} EPVi(s); we have r ≤ EPVi(s) ≤ r + ξ for all i = 1, ..., n. There exists a k ≥ 0 such that −Pmax + (k − 1)ε ≤ r ≤ −Pmax + kε. During the kth step of the algorithm, we set T1 = ··· = Tn = −Pmax + (k − 1)ε, i.e., we have r − ε ≤ Ti ≤ r and r + ξ ≤ T′i ≤ r + ξ + ε. That is, the Nash equilibrium s satisfies Ti ≤ r ≤ EPVi(s) ≤ r + ξ ≤ T′i, which means that when Ti is set to −Pmax + (k − 1)ε, our algorithm is guaranteed to output a Nash equilibrium t that satisfies r − 2ε ≤ Ti − ε ≤ EPVi(t) ≤ T′i + ε ≤ r + ξ + 2ε. We conclude that whenever such a Nash equilibrium s exists, our algorithm outputs a Nash equilibrium t that satisfies |EPVi(t) − EPVj(t)| ≤ ξ + 4ε for all i, j = 1, ...
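The sweep just described can be sketched in a few lines. The bounded-payoff solver of Section 5 is far too large for a short example, so it appears here as a hypothetical oracle `solve_with_bounds(lo, hi)` that either returns an equilibrium whose payoffs lie in [lo − ε, hi + ε] or reports failure; the loop itself follows the text above.

```python
def equalize(solve_with_bounds, p_max, xi, eps):
    """Search for a Nash equilibrium whose payoffs differ by at most xi
    (up to the 4*eps slack derived in the text), by sweeping a window
    [T, T + xi + eps] upward in steps of eps.

    `solve_with_bounds(lo, hi)` is an assumed oracle: it returns an
    equilibrium with lo - eps <= payoffs <= hi + eps, or None."""
    t = -p_max
    while t < p_max:
        eq = solve_with_bounds(t, t + xi + eps)
        if eq is not None:
            return eq
        t += eps
    return None  # no equilibrium with payoffs within xi of each other
```

When some equilibrium has all payoffs within ξ of each other, the window [T, T + ξ + ε] eventually brackets them, and the oracle's ±ε slack accounts for the ξ + 4ε guarantee.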
, n. The running time of this algorithm is O(max{n·Pmax³/ε⁴, n⁴/ε⁴}). Note also that we can find the smallest ξ for which such a Nash equilibrium exists by combining this algorithm with binary search over the space ξ ∈ [0, 2Pmax]. This identifies an approximation to the fairest Nash equilibrium, i.e., one in which the players' expected payoffs differ by the smallest possible amount. Finally, note that all results in this section can be extended to bounded-degree trees.

7. CONCLUSIONS
We have studied the problem of equilibrium selection in graphical games on bounded-degree trees. We considered several criteria for selecting a Nash equilibrium, such as maximizing the social welfare, ensuring a lower bound on the expected payoff of each player, etc. First, we focused on the algebraic complexity of a social welfare-maximizing Nash equilibrium, and proved strong negative results for that problem. Namely, we showed that even for graphical games on paths, any algebraic number α ∈ [0, 1] may be the only strategy available to some player in all social welfare-maximizing Nash equilibria. This is in sharp contrast with the fact that graphical games on trees always possess a Nash equilibrium in which all players' strategies are rational numbers. We then provided approximation algorithms for selecting Nash equilibria with special properties. While the problem of finding approximate Nash equilibria for various classes of games has received a lot of attention in recent years, most of the existing work aims to find ε-Nash equilibria that satisfy (or are ε-close to satisfying) certain properties. Our approach is different in that we insist on outputting an exact Nash equilibrium, which is ε-close to satisfying a given requirement. As argued in the introduction, there are several reasons to prefer a solution that constitutes an exact Nash equilibrium. Our algorithms are fully polynomial time approximation schemes, i.e., their running time is
polynomial in the inverse of the approximation parameter ε, though they may be pseudopolynomial with respect to the input size. Under mild restrictions on the inputs, they can be modified to be truly polynomial. This is the strongest positive result one can derive for a problem whose exact solutions may be hard to represent, as is the case for many of the problems considered here. While we prove our results for games on a path, they can be generalized to any tree for which the best response policies have compact representations as unions of rectangles. In the full version of the paper we describe our algorithms for the general case. Further work in this vein could include extensions to the kinds of guarantees sought for Nash equilibria, such as guaranteeing total payoffs for subsets of players, or selecting equilibria in which some players receive significantly higher payoffs than their peers. At the moment, however, it is perhaps more important to investigate whether Nash equilibria of graphical games can be computed in a decentralized manner, in contrast to the algorithms we have introduced here. It is natural to ask if our results or those of [9] can be generalized to games with three or more actions. However, it seems that this would make the analysis significantly more difficult. In particular, note that one can view the bounded-payoff games as a very limited special case of games with three actions per player. Namely, given a two-action game with payoff bounds, consider a game in which each player Vi has a third action that guarantees him a payoff of Ti no matter what everyone else does. Then checking if there is a Nash equilibrium in which none of the players assigns nonzero probability to his third action is equivalent to checking if there exists a Nash equilibrium that satisfies the payoff bounds in the original game, and Section 5.1 shows that finding an exact solution to this problem requires new ideas. Alternatively, it may be
interesting to look for similar results in the context of correlated equilibria (CE), especially since the best CE may have a higher value (total expected payoff) than the best NE. The ratio between these values is called the mediation value in [1]. It is known from [1] that the mediation value of 2-player, 2-action games with non-negative payoffs is at most 4/3, and they exhibit a 3-player game for which it is infinite. Furthermore, a 2-player, 3-action example from [1] also has infinite mediation value.

8. REFERENCES
[1] I. Ashlagi, D. Monderer and M. Tennenholtz, On the Value of Correlation, Proceedings of Dagstuhl Seminar 05011 (2005)
[2] R. Aumann, Subjectivity and Correlation in Randomized Strategies, Journal of Mathematical Economics 1, pp. 67-96 (1974)
[3] B. Blum, C. R. Shelton, and D. Koller, A Continuation Method for Nash Equilibria in Structured Games, Proceedings of IJCAI'03
[4] X. Chen, X. Deng and S. Teng, Computing Nash Equilibria: Approximation and Smoothed Complexity, Proceedings of FOCS'06
[5] X. Chen and X. Deng, Settling the Complexity of 2-Player Nash Equilibrium, Proceedings of FOCS'06
[6] V. Conitzer and T. Sandholm, Complexity Results about Nash Equilibria, Proceedings of IJCAI'03
[7] C. Daskalakis, P. W. Goldberg and C. H. Papadimitriou, The Complexity of Computing a Nash Equilibrium, Proceedings of STOC'06
[8] R. S. Datta, Universality of Nash Equilibria, Mathematics of Operations Research 28:3 (2003)
[9] E. Elkind, L. A. Goldberg, and P. W. Goldberg, Nash Equilibria in Graphical Games on Trees Revisited, Proceedings of ACM EC'06
[10] E. Elkind, L. A. Goldberg, and P. W. Goldberg, Computing Good Nash Equilibria in Graphical Games, http://arxiv.org/abs/cs.GT/0703133
[11] I. Gilboa and E. Zemel, Nash and Correlated Equilibria: Some Complexity Considerations, Games and Economic Behavior 1, pp. 80-93 (1989)
[12] P. W. Goldberg and C. H. Papadimitriou, Reducibility Among Equilibrium Problems, Proceedings of STOC'06
[13] M. Kearns, M.
Littman, and S. Singh, Graphical Models for Game Theory, Proceedings of UAI'01
[14] M. Littman, M. Kearns, and S. Singh, An Efficient Exact Algorithm for Singly Connected Graphical Games, Proceedings of NIPS'01
[15] R. Lipton and E. Markakis, Nash Equilibria via Polynomial Equations, Proceedings of LATIN'04
[16] L. Ortiz and M. Kearns, Nash Propagation for Loopy Graphical Games, Proceedings of NIPS'03
[17] C. H. Papadimitriou, Computing Correlated Equilibria in Multi-Player Games, Proceedings of STOC'05
[18] C. H. Papadimitriou and T. Roughgarden, Computing Equilibria in Multi-Player Games, Proceedings of SODA'05
[19] D. Vickrey and D. Koller, Multi-agent Algorithms for Solving Graphical Games, Proceedings of AAAI'02
[13] as a way to represent all Nash equilibria of a graphical game.\nIn [9], it was shown that the best response policy has polynomial size as long as the underlying graph is a path.\nIn this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants.\nAnother attractive solution concept is a Nash equilibrium that maximizes the social welfare.\nWe show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size.\nThese two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.\n1.\nINTRODUCTION\nIn a large community of agents, an agent's behavior is not likely to have a direct effect on most other agents: rather, it is just the * Supported by the EPSRC research grants \"Algorithmics of Network-sharing Games\" and \"Discontinuous Behaviour in the Complexity of randomized Algorithms\".\nagents who are close enough to him that will be affected.\nHowever, as these agents respond by adapting their behavior, more agents will feel the consequences and eventually the choices made by a single agent will propagate throughout the entire community.\nThis is the intuition behind graphical games, which were introduced by Kearns, Littman and Singh in [13] as a compact representation scheme for games with many players.\nIn an n-player graphical game, each player is associated with a vertex of an underlying graph G, and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph.\nIf the maximum degree of G is \u0394, and each player has two actions available to him, then the game can be represented using n2\u0394 +1 
numbers.\nIn contrast, we need n2n numbers to represent a general n-player 2-action game, which is only practical for small values of n. For graphical games with constant \u0394, the size of the game is linear in n.\nOne of the most natural problems for a graphical game is that of finding a Nash equilibrium, the existence of which follows from Nash's celebrated theorem (as graphical games are just a special case of n-player games).\nThe first attempt to tackle this problem was made in [13], where the authors consider graphical games with two actions per player in which the underlying graph is a boundeddegree tree.\nThey propose a generic algorithm for finding Nash equilibria that can be specialized in two ways: an exponential-time algorithm for finding an (exact) Nash equilibrium, and a fully polynomial time approximation scheme (FPTAS) for finding an approximation to a Nash equilibrium.\nFor any e> 0 this algorithm outputs an e-Nash equilibrium, which is a strategy profile in which no player can improve his payoff by more than e by unilaterally changing his strategy.\nWhile e-Nash equilibria are often easier to compute than exact Nash equilibria, this solution concept has several drawbacks.\nFirst, the players may be sensitive to a small loss in payoffs, so the strategy profile that is an e-Nash equilibrium will not be stable.\nThis will be the case even if there is only a small subset of players who are extremely price-sensitive, and for a large population of players it may be difficult to choose a value of a that will satisfy everyone.\nSecond, the strategy profiles that are close to being Nash equilibria may be much better with respect to the properties under consideration than exact Nash equilibria.\nTherefore, the (approximation to the) value of the best solution that corresponds to an e-Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium.\nThis is especially important if the purpose of the approximate solution is to 
provide a good benchmark for a system of selfish agents, as the benchmark implied by an e-Nash equilibrium may be unrealistic.\nFor these reasons, in this paper we focus on the problem of computing exact Nash equilibria.\nBuilding on ideas of [14], Elkind et al. [9] showed how to find an (exact) Nash equilibrium in polynomial time when the underlying\ngraph has degree 2 (that is, when the graph is a collection of paths and cycles).\nBy contrast, finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable: it has been shown (see [5, 12, 7]) to be complete for the complexity class PPAD.\n[9] extends this hardness result to the case in which the underlying graph has bounded pathwidth.\nA graphical game may not have a unique Nash equilibrium, indeed it may have exponentially many.\nMoreover, some Nash equilibria are more desirable than others.\nRather than having an algorithm which merely finds some Nash equilibrium, we would like to have algorithms for finding Nash equilibria with various sociallydesirable properties, such as maximizing overall payoff or distributing profit fairly.\nA useful property of the data structure of [13] is that it simultaneously represents the set of all Nash equilibria of the underlying game.\nIf this representation has polynomial size (as is the case for paths, as shown in [9]), one may hope to extract from it a Nash equilibrium with the desired properties.\nIn fact, in [13] the authors mention that this is indeed possible if one is interested in finding an (approximate) a-Nash equilibrium.\nThe goal of this paper is to extend this to exact Nash equilibria.\n1.1 Our Results\nIn this paper, we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [13] has size poly (n).\nWe focus on the problem of finding exact Nash equilibria with certain socially-desirable properties.\nIn particular, we show how to find a Nash equilibrium that (nearly) maximizes the 
social welfare, i.e., the sum of the players' payoffs, and we show how to find a Nash equilibrium that (nearly) satisfies prescribed payoff bounds for all players.\nGraphical games on bounded-degree trees have a simple algebraic structure.\nOne attractive feature, which follows from [13], is that every such game has a Nash equilibrium in which the strategy of every player is a rational number.\nSection 3 studies the algebraic structure of those Nash equilibria that maximize social welfare.\nWe show (Theorems 1 and 2) that, surprisingly, the set of Nash equilibria that maximize social welfare is more complex.\nIn fact, for any algebraic number \u03b1 \u2208 [0, 1] with degree at most n, we exhibit a graphical game on a path of length O (n) such that, in the unique social welfare-maximizing Nash equilibrium of this game, one of the players plays the mixed strategy \u03b1 .1 This result shows that it may be difficult to represent an optimal Nash equilibrium.\nIt seems to be a novel feature of the setting we consider here, that an optimal Nash equilibrium is hard to represent, in a situation where it is easy to find and represent a Nash equilibrium.\nAs the social welfare-maximizing Nash equilibrium may be hard to represent efficiently, we have to settle for an approximation.\nHowever, the crucial difference between our approach and that of previous papers [13, 16, 19] is that we require our algorithm to output an exact Nash equilibrium, though not necessarily the optimal one with respect to our criteria.\nIn Section 4, we describe an algorithm that satisfies this requirement.\nNamely, we propose an algorithm that for any e> 0 finds a Nash equilibrium whose total payoff is within a of optimal.\nIt runs in polynomial time (Theorem 3,4) for any graphical game on a bounded-degree tree for which the data structure proposed by [13] (the so-called best response policy, defined below) is of size poly (n) (note that, as shown in [9], this is always the case when the underlying 
graph is a path).\nMore pre1A related result in a different context was obtained by Datta [8], who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games.\ncisely, the running time of our algorithm is polynomial in n, Pmax, and 1\/e, where Pmax is the maximum absolute value of an entry of a payoff matrix, i.e., it is a pseudopolynomial algorithm, though it is fully polynomial with respect to E.\nWe show (Section 4.1) that under some restrictions on the payoff matrices, the algorithm can be transformed into a (truly) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 \u2212 e factor from the optimal.\nIn Section 5, we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti.\nUsing the idea from Section 4 we give (Theorem 5) a fully polynomial time approximation scheme for this problem.\nThe running time of the algorithm is bounded by a polynomial in n, Pmax, and E.\nIf the instance has a Nash equilibrium satisfying the prescribed thresholds then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti \u2212 E.\nIn Section 6, we introduce other natural criteria for selecting a \"good\" Nash equilibrium and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria.\nIn particular, in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare, while guaranteeing that each individual payoff is close to a prescribed threshold.\nIn Section 6.2 we show how to find a Nash equilibrium that (nearly) maximizes the minimum individual payoff.\nFinally, in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each 
other.\n1.2 Related Work\nOur approximation scheme (Theorem 3 and Theorem 4) shows a contrast between the games that we study and two-player n-action games, for which the corresponding problems are usually intractable.\nFor two-player n-action games, the problem of finding Nash equilibria with special properties is typically NP-hard.\nIn particular, this is the case for Nash equilibria that maximize the social welfare [11, 6].\nMoreover, it is likely to be intractable even to approximate such equilibria.\nIn particular, Chen, Deng and Teng [4] show that there exists some e, inverse polynomial in n, for which computing an e-Nash equilibrium in 2-player games with n actions per player is PPAD-complete.\nLipton and Markakis [15] study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to solve them.\nNote that these algorithms are not polynomial-time in general.\nThe games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, but an optimal Nash equilibrium may necessarily include mixed strategies with high algebraic degree.\nA correlated equilibrium (CE) (introduced by Aumann [2]) is a distribution over vectors of players' actions with the property that if any player is told his own action (the value of his own component) from a vector generated by that distribution, then he cannot increase his expected payoff by changing his action.\nAny Nash equilibrium is a CE but the converse does not hold in general.\nIn contrast with Nash equilibria, correlated equilibria can be found for low-degree graphical games (as well as other classes of conciselyrepresented multiplayer games) in polynomial time [17].\nBut, for graphical games it is NP-hard to find a correlated equilibrium that maximizes total payoff [18].\nHowever, the NP-hardness results apply to more general games than the one we consider here, in particular the graphs are not 
trees.\nFrom [2] it is also known that there exist 2-player, 2-action games for which the expected total payoff\nof the best correlated equilibrium is higher than the best Nash equilibrium, and we discuss this issue further in Section 7.\n2.\nPRELIMINARIES AND NOTATION\n3.\nNASH EQUILIBRIA THAT MAXIMIZE THE SOCIAL WELFARE: SOLUTIONS IN R \\ Q\n3.1 Warm-up: quadratic irrationalities\n3.2 Strategies of arbitrary degree\n4.\nAPPROXIMATING THE SOCIALLY OPTIMAL NASH EQUILIBRIUM\nDefine ml, k\n4.1 A polynomial-time algorithm for multiplicative approximation\n5.\nBOUNDED PAYOFF NASH EQUILIBRIA\n5.1 Exact Computation\n6.\nOTHER CRITERIA FOR SELECTING A NASH EQUILIBRIUM\n6.1 Combining welfare maximization with bounds on payoffs\n6.2 A minimax approach\n6.3 Equalizing the payoffs\n7.\nCONCLUSIONS\nWe have studied the problem of equilibrium selection in graphical games on bounded-degree trees.\nWe considered several criteria for selecting a Nash equilibrium, such as maximizing the social welfare, ensuring a lower bound on the expected payoff of each player, etc. 
.\nFirst, we focused on the algebraic complexity of a social welfare-maximizing Nash equilibrium, and proved strong negative results for that problem.\nNamely, we showed that even for graphical games on paths, any algebraic number \u03b1 E [0, 1] may be the only strategy available to some player in all social welfaremaximizing Nash equilibria.\nThis is in sharp contrast with the fact that graphical games on trees always possess a Nash equilibrium in which all players' strategies are rational numbers.\nWe then provided approximation algorithms for selecting Nash equilibria with special properties.\nWhile the problem of finding approximate Nash equilibria for various classes of games has received a lot of attention in recent years, most of the existing work aims to find E-Nash equilibria that satisfy (or are E-close to satisfying) certain properties.\nOur approach is different in that we insist on outputting an exact Nash equilibrium, which is E-close to satisfying a given requirement.\nAs argued in the introduction, there are several reasons to prefer a solution that constitutes an exact Nash equilibrium.\nOur algorithms are fully polynomial time approximation schemes, i.e., their running time is polynomial in the inverse of the approximation parameter E, though they may be pseudopolynomial with respect to the input size.\nUnder mild restrictions on the inputs, they can be modified to be truly polynomial.\nThis is the strongest positive result one can derive for a problem whose exact solutions may be hard to represent, as is the case for many of the problems considered here.\nWhile we prove our results for games on a path, they can be generalized to any tree for which the best response policies have compact representations as unions of rectangles.\nIn the full version of the paper we describe our algorithms for the general case.\nFurther work in this vein could include extensions to the kinds of guarantees sought for Nash equilibria, such as guaranteeing total 
payoffs for subsets of players, selecting equilibria in which some players are receiving significantly higher payoffs than their peers, etc. .\nAt the moment however, it is perhaps more important to inves\ntigate whether Nash equilibria of graphical games can be computed in a decentralized manner, in contrast to the algorithms we have introduced here.\nIt is natural to ask if our results or those of [9] can be generalized to games with three or more actions.\nHowever, it seems that this will make the analysis significantly more difficult.\nIn particular, note that one can view the bounded payoff games as a very limited special case of games with three actions per player.\nNamely, given a two-action game with payoff bounds, consider a game in which each player Vi has a third action that guarantees him a payoff of Ti no matter what everyone else does.\nThen checking if there is a Nash equilibrium in which none of the players assigns a nonzero probability to his third action is equivalent to checking if there exists a Nash equilibrium that satisfies the payoff bounds in the original game, and Section 5.1 shows that finding an exact solution to this problem requires new ideas.\nAlternatively it may be interesting to look for similar results in the context of correlated equilibria (CE), especially since the best CE may have higher value (total expected payoff) than the best NE.\nThe ratio between these values is called the mediation value in [1].\nIt is known from [1] that the mediation value of 2-player, 2-action games with non-negative payoffs is at most 43, and they exhibit a 3-player game for which it is infinite.\nFurthermore, a 2-player, 3action example from [1] also has infinite mediation value.","lvl-4":"Computing Good Nash Equilibria in Graphical Games *\nABSTRACT\nThis paper addresses the problem of fair equilibrium selection in graphical games.\nOur approach is based on the data structure called the best response policy, which was proposed by Kearns et al. 
[13] as a way to represent all Nash equilibria of a graphical game.\nIn [9], it was shown that the best response policy has polynomial size as long as the underlying graph is a path.\nIn this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants.\nAnother attractive solution concept is a Nash equilibrium that maximizes the social welfare.\nWe show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size.\nThese two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.\n1.\nINTRODUCTION\nThis is the intuition behind graphical games, which were introduced by Kearns, Littman and Singh in [13] as a compact representation scheme for games with many players.\nIn an n-player graphical game, each player is associated with a vertex of an underlying graph G, and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph.\nIf the maximum degree of G is \u0394, and each player has two actions available to him, then the game can be represented using n2\u0394 +1 numbers.\nIn contrast, we need n2n numbers to represent a general n-player 2-action game, which is only practical for small values of n. 
For graphical games with constant Δ, the size of the game is linear in n.

One of the most natural problems for a graphical game is that of finding a Nash equilibrium, the existence of which follows from Nash's celebrated theorem (as graphical games are just a special case of n-player games). The first attempt to tackle this problem was made in [13], where the authors consider graphical games with two actions per player in which the underlying graph is a bounded-degree tree. They propose a generic algorithm for finding Nash equilibria that can be specialized in two ways: an exponential-time algorithm for finding an (exact) Nash equilibrium, and a fully polynomial time approximation scheme (FPTAS) for finding an approximation to a Nash equilibrium. For any ε > 0 this algorithm outputs an ε-Nash equilibrium, which is a strategy profile in which no player can improve his payoff by more than ε by unilaterally changing his strategy.

While ε-Nash equilibria are often easier to compute than exact Nash equilibria, this solution concept has several drawbacks. First, the players may be sensitive to a small loss in payoffs, so the strategy profile that is an ε-Nash equilibrium will not be stable. Second, the strategy profiles that are close to being Nash equilibria may be much better with respect to the properties under consideration than exact Nash equilibria. Therefore, the (approximation to the) value of the best solution that corresponds to an ε-Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium. This is especially important if the purpose of the approximate solution is to provide a good benchmark for a system of selfish agents, as the benchmark implied by an ε-Nash equilibrium may be unrealistic. For these reasons, in this paper we focus on the problem of computing exact Nash equilibria.

Building on ideas of [14], Elkind et al. [9] showed how to find an (exact) Nash equilibrium in polynomial time when the underlying graph has degree 2 (that is, when the graph is a collection of paths and cycles). By contrast, finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable: it has been shown (see [5, 12, 7]) to be complete for the complexity class PPAD. [9] extends this hardness result to the case in which the underlying graph has bounded pathwidth.

A graphical game may not have a unique Nash equilibrium; indeed, it may have exponentially many. Moreover, some Nash equilibria are more desirable than others. Rather than having an algorithm which merely finds some Nash equilibrium, we would like to have algorithms for finding Nash equilibria with various socially desirable properties, such as maximizing overall payoff or distributing profit fairly. A useful property of the data structure of [13] is that it simultaneously represents the set of all Nash equilibria of the underlying game. If this representation has polynomial size (as is the case for paths, as shown in [9]), one may hope to extract from it a Nash equilibrium with the desired properties. In fact, in [13] the authors mention that this is indeed possible if one is interested in finding an (approximate) ε-Nash equilibrium. The goal of this paper is to extend this to exact Nash equilibria.

1.1 Our Results

In this paper, we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [13] has size poly(n). We focus on the problem of finding exact Nash equilibria with certain socially desirable properties. In particular, we show how to find a Nash equilibrium that (nearly) maximizes the social welfare, i.e., the sum of the players' payoffs, and we show how to find a Nash equilibrium that (nearly) satisfies prescribed payoff bounds for all players.

Graphical games on bounded-degree trees have a simple algebraic structure. One attractive feature, which follows from [13], is that every such game has a Nash equilibrium in which the
strategy of every player is a rational number. Section 3 studies the algebraic structure of those Nash equilibria that maximize social welfare. We show (Theorems 1 and 2) that, surprisingly, the set of Nash equilibria that maximize social welfare is more complex. It seems to be a novel feature of the setting we consider here that an optimal Nash equilibrium is hard to represent, in a situation where it is easy to find and represent a Nash equilibrium.

As the social welfare-maximizing Nash equilibrium may be hard to represent efficiently, we have to settle for an approximation. However, the crucial difference between our approach and that of previous papers [13, 16, 19] is that we require our algorithm to output an exact Nash equilibrium, though not necessarily the optimal one with respect to our criteria. In Section 4, we describe an algorithm that satisfies this requirement. Namely, we propose an algorithm that for any ε > 0 finds a Nash equilibrium whose total payoff is within ε of optimal.

¹A related result in a different context was obtained by Datta [8], who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games.

We show (Section 4.1) that under some restrictions on the payoff matrices, the algorithm can be transformed into a (truly) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 − ε factor from the optimal. In Section 5, we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti. Using the idea from Section 4 we give (Theorem 5) a fully polynomial time approximation scheme for this problem. The running time of the algorithm is bounded by a polynomial in n, Pmax, and 1/ε. If the instance has a Nash equilibrium satisfying the prescribed thresholds then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti − ε.

In Section 6, we introduce other natural criteria for selecting a "good" Nash equilibrium and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria. In particular, in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare, while guaranteeing that each individual payoff is close to a prescribed threshold. In Section 6.2 we show how to find a Nash equilibrium that (nearly) maximizes the minimum individual payoff. Finally, in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each other.

1.2 Related Work

Our approximation scheme (Theorems 3 and 4) shows a contrast between the games that we study and two-player n-action games, for which the corresponding problems are usually intractable. For two-player n-action games, the problem of finding Nash equilibria with special properties is typically NP-hard. In particular, this is the case for Nash equilibria that maximize the social welfare [11, 6]. Moreover, it is likely to be intractable even to approximate such equilibria. In particular, Chen, Deng and Teng [4] show that there exists some ε, inverse polynomial in n, for which computing an ε-Nash equilibrium in 2-player games with n actions per player is PPAD-complete.

Lipton and Markakis [15] study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to solve them. Note that these algorithms are not polynomial-time in general. The games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, but an optimal Nash equilibrium may necessarily include mixed strategies with high algebraic degree. Any Nash equilibrium is a CE but the converse does not hold in
general. In contrast with Nash equilibria, correlated equilibria can be found for low-degree graphical games (as well as other classes of concisely represented multiplayer games) in polynomial time [17]. But for graphical games it is NP-hard to find a correlated equilibrium that maximizes total payoff [18]. However, the NP-hardness results apply to more general games than the one we consider here; in particular, the graphs are not trees. From [2] it is also known that there exist 2-player, 2-action games for which the expected total payoff of the best correlated equilibrium is higher than the best Nash equilibrium, and we discuss this issue further in Section 7.

7. CONCLUSIONS

We have studied the problem of equilibrium selection in graphical games on bounded-degree trees. We considered several criteria for selecting a Nash equilibrium, such as maximizing the social welfare, ensuring a lower bound on the expected payoff of each player, etc. First, we focused on the algebraic complexity of a social welfare-maximizing Nash equilibrium, and proved strong negative results for that problem. Namely, we showed that even for graphical games on paths, any algebraic number α ∈ [0, 1] may be the only strategy available to some player in all social welfare-maximizing Nash equilibria. This is in sharp contrast with the fact that graphical games on trees always possess a Nash equilibrium in which all players' strategies are rational numbers.

We then provided approximation algorithms for selecting Nash equilibria with special properties. While the problem of finding approximate Nash equilibria for various classes of games has received a lot of attention in recent years, most of the existing work aims to find ε-Nash equilibria that satisfy (or are ε-close to satisfying) certain properties. Our approach is different in that we insist on outputting an exact Nash equilibrium, which is ε-close to satisfying a given requirement. As argued in the introduction, there are several reasons to prefer a solution that constitutes an exact Nash equilibrium.

While we prove our results for games on a path, they can be generalized to any tree for which the best response policies have compact representations as unions of rectangles. In the full version of the paper we describe our algorithms for the general case. Further work in this vein could include extensions to the kinds of guarantees sought for Nash equilibria, such as guaranteeing total payoffs for subsets of players, selecting equilibria in which some players are receiving significantly higher payoffs than their peers, etc. At the moment, however, it is perhaps more important to investigate whether Nash equilibria of graphical games can be computed in a decentralized manner, in contrast to the algorithms we have introduced here.

It is natural to ask if our results or those of [9] can be generalized to games with three or more actions. However, it seems that this will make the analysis significantly more difficult. In particular, note that one can view the bounded-payoff games as a very limited special case of games with three actions per player. Namely, given a two-action game with payoff bounds, consider a game in which each player Vi has a third action that guarantees him a payoff of Ti no matter what everyone else does. Then checking if there is a Nash equilibrium in which none of the players assigns a nonzero probability to his third action is equivalent to checking if there exists a Nash equilibrium that satisfies the payoff bounds in the original game, and Section 5.1 shows that finding an exact solution to this problem requires new ideas.

Alternatively, it may be interesting to look for similar results in the context of correlated equilibria (CE), especially since the best CE may have higher value (total expected payoff) than the best NE. It is known from [1] that the mediation value of 2-player, 2-action games with non-negative payoffs is at most 4/3, and they exhibit a 3-player game for which it is infinite.

Computing Good Nash Equilibria in Graphical Games *

ABSTRACT

This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy, which was proposed by Kearns et al. [13] as a way to represent all Nash equilibria of a graphical game. In [9], it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.

1. INTRODUCTION

* Supported by the EPSRC research grants "Algorithmics of Network-sharing Games" and "Discontinuous Behaviour in the Complexity of randomized Algorithms".

In a large community of agents, an agent's behavior is not likely to have a direct effect on most other agents: rather, it is just the agents who are close enough to him that will be affected. However, as these agents respond by adapting their behavior, more agents will feel the consequences and eventually the choices made by a single agent will propagate throughout the entire community. This is the intuition behind graphical games, which were introduced by Kearns, Littman and Singh in [13] as a compact representation scheme for games with many players. In an n-player graphical
game, each player is associated with a vertex of an underlying graph G, and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph. If the maximum degree of G is Δ, and each player has two actions available to him, then the game can be represented using n · 2^(Δ+1) numbers. In contrast, we need n · 2^n numbers to represent a general n-player 2-action game, which is only practical for small values of n. For graphical games with constant Δ, the size of the game is linear in n.

One of the most natural problems for a graphical game is that of finding a Nash equilibrium, the existence of which follows from Nash's celebrated theorem (as graphical games are just a special case of n-player games). The first attempt to tackle this problem was made in [13], where the authors consider graphical games with two actions per player in which the underlying graph is a bounded-degree tree. They propose a generic algorithm for finding Nash equilibria that can be specialized in two ways: an exponential-time algorithm for finding an (exact) Nash equilibrium, and a fully polynomial time approximation scheme (FPTAS) for finding an approximation to a Nash equilibrium. For any ε > 0 this algorithm outputs an ε-Nash equilibrium, which is a strategy profile in which no player can improve his payoff by more than ε by unilaterally changing his strategy.

While ε-Nash equilibria are often easier to compute than exact Nash equilibria, this solution concept has several drawbacks. First, the players may be sensitive to a small loss in payoffs, so the strategy profile that is an ε-Nash equilibrium will not be stable. This will be the case even if there is only a small subset of players who are extremely price-sensitive, and for a large population of players it may be difficult to choose a value of ε that will satisfy everyone. Second, the strategy profiles that are close to being Nash equilibria may be much better with respect to the properties under consideration than exact Nash equilibria. Therefore, the (approximation to the) value of the best solution that corresponds to an ε-Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium. This is especially important if the purpose of the approximate solution is to provide a good benchmark for a system of selfish agents, as the benchmark implied by an ε-Nash equilibrium may be unrealistic. For these reasons, in this paper we focus on the problem of computing exact Nash equilibria.

Building on ideas of [14], Elkind et al. [9] showed how to find an (exact) Nash equilibrium in polynomial time when the underlying graph has degree 2 (that is, when the graph is a collection of paths and cycles). By contrast, finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable: it has been shown (see [5, 12, 7]) to be complete for the complexity class PPAD. [9] extends this hardness result to the case in which the underlying graph has bounded pathwidth.

A graphical game may not have a unique Nash equilibrium; indeed, it may have exponentially many. Moreover, some Nash equilibria are more desirable than others. Rather than having an algorithm which merely finds some Nash equilibrium, we would like to have algorithms for finding Nash equilibria with various socially desirable properties, such as maximizing overall payoff or distributing profit fairly. A useful property of the data structure of [13] is that it simultaneously represents the set of all Nash equilibria of the underlying game. If this representation has polynomial size (as is the case for paths, as shown in [9]), one may hope to extract from it a Nash equilibrium with the desired properties. In fact, in [13] the authors mention that this is indeed possible if one is interested in finding an (approximate) ε-Nash equilibrium. The goal of this paper is to extend this to exact Nash equilibria.

1.1 Our Results

In
this paper, we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [13] has size poly(n). We focus on the problem of finding exact Nash equilibria with certain socially desirable properties. In particular, we show how to find a Nash equilibrium that (nearly) maximizes the social welfare, i.e., the sum of the players' payoffs, and we show how to find a Nash equilibrium that (nearly) satisfies prescribed payoff bounds for all players.

Graphical games on bounded-degree trees have a simple algebraic structure. One attractive feature, which follows from [13], is that every such game has a Nash equilibrium in which the strategy of every player is a rational number. Section 3 studies the algebraic structure of those Nash equilibria that maximize social welfare. We show (Theorems 1 and 2) that, surprisingly, the set of Nash equilibria that maximize social welfare is more complex. In fact, for any algebraic number α ∈ [0, 1] with degree at most n, we exhibit a graphical game on a path of length O(n) such that, in the unique social welfare-maximizing Nash equilibrium of this game, one of the players plays the mixed strategy α.¹ This result shows that it may be difficult to represent an optimal Nash equilibrium. It seems to be a novel feature of the setting we consider here that an optimal Nash equilibrium is hard to represent, in a situation where it is easy to find and represent a Nash equilibrium.

As the social welfare-maximizing Nash equilibrium may be hard to represent efficiently, we have to settle for an approximation. However, the crucial difference between our approach and that of previous papers [13, 16, 19] is that we require our algorithm to output an exact Nash equilibrium, though not necessarily the optimal one with respect to our criteria. In Section 4, we describe an algorithm that satisfies this requirement. Namely, we propose an algorithm that for any ε > 0 finds a Nash equilibrium whose total payoff is within ε of optimal. It runs in polynomial time (Theorems 3 and 4) for any graphical game on a bounded-degree tree for which the data structure proposed by [13] (the so-called best response policy, defined below) is of size poly(n) (note that, as shown in [9], this is always the case when the underlying graph is a path). More precisely, the running time of our algorithm is polynomial in n, Pmax, and 1/ε, where Pmax is the maximum absolute value of an entry of a payoff matrix, i.e., it is a pseudopolynomial algorithm, though it is fully polynomial with respect to ε.

¹A related result in a different context was obtained by Datta [8], who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games.

We show (Section 4.1) that under some restrictions on the payoff matrices, the algorithm can be transformed into a (truly) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 − ε factor from the optimal.

In Section 5, we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti. Using the idea from Section 4 we give (Theorem 5) a fully polynomial time approximation scheme for this problem. The running time of the algorithm is bounded by a polynomial in n, Pmax, and 1/ε. If the instance has a Nash equilibrium satisfying the prescribed thresholds then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti − ε.

In Section 6, we introduce other natural criteria for selecting a "good" Nash equilibrium and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria. In particular, in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare,
while guaranteeing that each individual payoff is close to a prescribed threshold. In Section 6.2 we show how to find a Nash equilibrium that (nearly) maximizes the minimum individual payoff. Finally, in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each other.

1.2 Related Work

Our approximation scheme (Theorems 3 and 4) shows a contrast between the games that we study and two-player n-action games, for which the corresponding problems are usually intractable. For two-player n-action games, the problem of finding Nash equilibria with special properties is typically NP-hard. In particular, this is the case for Nash equilibria that maximize the social welfare [11, 6]. Moreover, it is likely to be intractable even to approximate such equilibria. In particular, Chen, Deng and Teng [4] show that there exists some ε, inverse polynomial in n, for which computing an ε-Nash equilibrium in 2-player games with n actions per player is PPAD-complete.

Lipton and Markakis [15] study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to solve them. Note that these algorithms are not polynomial-time in general. The games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, but an optimal Nash equilibrium may necessarily include mixed strategies with high algebraic degree.

A correlated equilibrium (CE) (introduced by Aumann [2]) is a distribution over vectors of players' actions with the property that if any player is told his own action (the value of his own component) from a vector generated by that distribution, then he cannot increase his expected payoff by changing his action. Any Nash equilibrium is a CE, but the converse does not hold in general. In contrast with Nash equilibria, correlated equilibria can be found for low-degree graphical games (as well as other classes of concisely represented multiplayer games) in polynomial time [17]. But for graphical games it is NP-hard to find a correlated equilibrium that maximizes total payoff [18]. However, the NP-hardness results apply to more general games than the one we consider here; in particular, the graphs are not trees. From [2] it is also known that there exist 2-player, 2-action games for which the expected total payoff of the best correlated equilibrium is higher than the best Nash equilibrium, and we discuss this issue further in Section 7.

2. PRELIMINARIES AND NOTATION

We consider graphical games in which the underlying graph G is an n-vertex tree, in which each vertex has at most Δ children. Each vertex has two actions, which are denoted by 0 and 1. A mixed strategy of a player V is represented as a single number v ∈ [0, 1], which denotes the probability that V selects action 1. For the purposes of the algorithm, the tree is rooted arbitrarily. For convenience, we assume without loss of generality that the root has a single child, and that its payoff is independent of the action chosen by the child. This can be achieved by first choosing an arbitrary root of the tree, and then adding a dummy "parent" of this root, giving the new parent a constant payoff function, e.g., 0.

Given an edge (V, W) of the tree G, and a mixed strategy w for W, let G(V,W),W=w be the instance obtained from G by (1) deleting all nodes Z which are separated from V by W (i.e., all nodes Z such that the path from Z to V passes through W), and (2) restricting the instance so that W is required to play mixed strategy w. Suppose that v is a mixed strategy for V and that w is a mixed strategy for W. We say that v is a potential best response to w (denoted by v ∈ pbrV(w)) if there is an equilibrium in the instance G(V,W),W=w in which V has mixed strategy v. We define the best response policy for V, given W, as B(W, V) = {(w, v) | v ∈ pbrV(w), w ∈ [0, 1]}. The upstream
pass of the generic algorithm of [13] considers every node V (other than the root) and computes the best response policy for V given its parent. With the above assumptions about the root, the downstream pass is straightforward. The root W selects a mixed strategy w and, for each child V, a mixed strategy v such that (w, v) ∈ B(W, V). It instructs each child V to play v. The remainder of the downward pass is recursive. When a node V is instructed by its parent to adopt mixed strategy v, it does the following for each child U: it finds a pair (v, u) ∈ B(V, U) (with the same v value that it was given by its parent) and instructs U to play u.

The best response policy for a vertex U given its parent V can be represented as a union of rectangles, where a rectangle is defined by a pair of closed intervals (IV, IU) and consists of all points in IV × IU; it may be the case that one or both of the intervals IV and IU consists of a single point. In order to perform computations on B(V, U), and to bound the number of rectangles, [9] used the notion of an event point, which is defined as follows. For any set A ⊆ [0, 1]² that is represented as a union of a finite number of rectangles, we say that a point u ∈ [0, 1] on the U-axis is a U-event point of A if u = 0 or u = 1 or the representation of A contains a rectangle of the form IV × IU and u is an endpoint of IU; V-event points are defined similarly.

For many games considered in this paper, the underlying graph is an n-vertex path, i.e., a graph G = (V, E) with V = {V1, ..., Vn} and E = {(V1, V2), ..., (Vn−1, Vn)}. In [9], it was shown that for such games, the best response policy has only polynomially many rectangles. The proof that the number of rectangles in B(Vj+1, Vj) is polynomial proceeds by first showing that the number of event points in B(Vj+1, Vj) cannot exceed the number of event points in B(Vj, Vj−1) by more than 2, and using this fact to bound the number of rectangles in B(Vj+1, Vj).

Let P0
(V) and P1(V) be the expected payoffs to V when it plays 0 and 1, respectively. Both P0(V) and P1(V) are multilinear functions of the strategies of V's neighbors. In what follows, we will frequently use the following simple observation.

CLAIM 1. Suppose that V has two neighbors U and W with mixed strategies u and w. For any A, B, C, D, the payoffs to V can be selected so that P0(V) = Auw + Bu + Cw + D; the same holds for P1(V).

PROOF. We will give the proof for P0(V); the proof for P1(V) is similar. For i, j = 0, 1, let Pij be the payoff to V when U plays i, V plays 0 and W plays j. We have P0(V) = P00(1 − u)(1 − w) + P10 u(1 − w) + P01(1 − u)w + P11 uw. We have to select the values of Pij so that P00 − P10 − P01 + P11 = A, −P00 + P10 = B, −P00 + P01 = C, P00 = D. It is easy to see that the unique solution is given by P00 = D, P01 = C + D, P10 = B + D, P11 = A + B + C + D.

The input to all algorithms considered in this paper includes the payoff matrices for each player. We assume that all elements of these matrices are integer. Let Pmax be the greatest absolute value of any element of any payoff matrix. Then the input consists of at most n · 2^(Δ+1) numbers, each of which can be represented using ⌈log Pmax⌉ bits.

3. NASH EQUILIBRIA THAT MAXIMIZE THE SOCIAL WELFARE: SOLUTIONS IN R \ Q

From the point of view of social welfare, the best Nash equilibrium is the one that maximizes the sum of the players' expected payoffs. Unfortunately, it turns out that computing such a strategy profile exactly is not possible: in this section, we show that even if all players' payoffs are integers, the strategy profile that maximizes the total payoff may have irrational coordinates; moreover, it may involve algebraic numbers of an arbitrary degree.

3.1 Warm-up: quadratic irrationalities

We start by providing an example of a graphical game on a path of length 3 with integer payoffs such that in the Nash equilibrium that maximizes the total payoff, one of the players has a strategy in R \ Q. In the next subsection, we will extend this example to algebraic numbers of arbitrary degree n; to do so, we have to consider paths of length O(n).

THEOREM 1. There exists an integer-payoff graphical game G on a 3-vertex path UVW such that, in any Nash equilibrium of G that maximizes social welfare, the strategy, u, of the player U and the total payoff, p, satisfy u, p ∈ R \ Q.

PROOF. The payoffs to the players in G are specified as follows. The payoff to U is identically 0, i.e., P0(U) = P1(U) = 0. Using Claim 1, we select the payoffs to V so that P0(V) = −uw + 3w and P1(V) = P0(V) + w(u + 2) − (u + 1), where u and w are the (mixed) strategies of U and W, respectively. It follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = (u + 1)/(u + 2). Observe that for any u ∈ [0, 1] we have f(u) ∈ [0, 1]. The payoff to W is 0 if it selects the same action as V and 1 otherwise.

CLAIM 2. A strategy profile of this game is a Nash equilibrium if and only if v = 1/2 and w = f(u). PROOF. Since the payoff to U is identically 0, U can play any mixed strategy u no matter what V and W do. Furthermore, V is indifferent between 0 and 1 as long as w = f(u), so it can play 1/2. Finally, if V plays 0 and 1 with equal probability, W is indifferent between 0 and 1, so it can play f(u). Conversely, suppose that v > 1/2. Then W strictly prefers to play 0, i.e., w = 0. Then for V we have P1(V) = P0(V) − (u + 1), i.e., P1(V) < P0(V), which implies v = 0, a contradiction; the case v < 1/2 is symmetric. Finally, if v = 1/2 but w ≠ f(u), player V is not indifferent between 0 and 1, so he would deviate from playing 1/2. This completes the proof of Claim 2.

By Claim 2, the total payoff in any Nash equilibrium of this game is a function of u. More specifically, the payoff to U is 0, the payoff to V is −uf(u) + 3f(u), and the payoff to W is 1/2. Therefore, the Nash equilibrium with the maximum total payoff corresponds to the value of u that maximizes p(u) = −uf(u) + 3f(u) + 1/2 = (3 − u)(u + 1)/(u + 2) + 1/2; on [0, 1] this maximum is attained at u = −2 + √5 ∈ R \ Q. This concludes the proof of Theorem 1.

3.2 Strategies of arbitrary degree

We have shown that in the social welfare-maximizing Nash equilibrium, some players' strategies can be quadratic irrationalities, and so can the total payoff. In this subsection, we will extend this result
to show that we can construct an integer-payoff graphical game on a path whose social welfare-maximizing Nash equilibrium involves arbitrary algebraic numbers in [0, 1].\nTHEOREM 2.\nFor any degree-n algebraic number \u03b1 E [0, 1], there exists an integer payoff graphical game on a path of length O (n) such that, in all social welfare-maximizing Nash equilibria of this game, one of the players plays \u03b1.\nPROOF.\nOur proof consists of two steps.\nFirst, we construct a rational expression R (x) and a segment [x ~, x ~ ~] such that x ~, x ~ ~ E Q and \u03b1 is the only maximum of R (x) on [x ~, x ~ ~].\nSecond, we construct a graphical game whose Nash equilibria can be parameterized by u E [x ~, x ~ ~], so that at the equilibrium that corresponds to u the total payoff is R (u) and, moreover, some player's strategy is u.\nIt follows that to achieve the payoff-maximizing Nash equilibrium, this player has to play \u03b1.\nThe details follow.\nLEMMA 1.\nGiven an algebraic number \u03b1 E [0, 1], deg (\u03b1) = n, there exist K2,..., K2n +2 E Q and x ~, x ~ ~ E (0, 1) n Q such\nPROOF.\nLet P (x) be the minimal polynomial of \u03b1, i.e., a polynomial of degree n with rational coefficients whose leading coefficient is 1 such that P (\u03b1) = 0.\nLet A = {\u03b11,..., \u03b1n1 be the set of all roots of P (x).\nConsider the polynomial Q1 (x) =--P2 (x).\nIt has the same roots as P (x), and moreover, for any x E ~ A we have Q1 (x) <0.\nHence, A is the set of all maxima of Q1 (x).\nNow, set\nx E [0, 1] and R (x) = 0 if and only if Q1 (x) = 0.\nHence, the set A is also the set of all maxima of R (x) on [0, 1].\nLet d = min {l\u03b1i--\u03b1l l \u03b1i E A, \u03b1i = ~ \u03b11, and set \u03b1 ~ = max {\u03b1--d\/2, 01, \u03b1 ~ ~ = min {\u03b1 + d\/2, 11.\nClearly, \u03b1 is the only zero (and hence, the only maximum) of R (x) on [\u03b1 ~, \u03b1 ~ ~].\nLet x ~ and x ~ ~ be some rational numbers in (\u03b1 ~, \u03b1) and (\u03b1, \u03b1 ~ ~), respectively; note that by 
excluding the endpoints of the intervals we ensure that x′, x′′ ≠ 0, 1. As [x′, x′′] ⊂ [α′, α′′], we have that α is the only maximum of R(x) on [x′, x′′]. As R(x) is a proper rational expression and all roots of its denominator are simple, by the partial fraction decomposition theorem, R(x) can be represented as R(x) = K2/(x + 2) + · · · + K2n+2/(x + 2n + 2), where K2, ..., K2n+2 are rational numbers.

Consider a graphical game on the path U−1 V−1 U0 V0 U1 V1 · · · Vk−1 Uk, where k = 2n + 2. Intuitively, we want each triple (Ui−1, Vi−1, Ui) to behave similarly to the players U, V, and W from the game described in the previous subsection. More precisely, we define the payoffs to the players in the following way.
• The payoff to U−1 is 0 no matter what everyone else does.
• The expected payoff to V−1 is 0 if it plays 0 and u0 − (x′′ − x′)u−1 − x′ if it plays 1, where u0 and u−1 are the strategies of U0 and U−1, respectively.
• The expected payoff to V0 is 0 if it plays 0 and u1(u0 + 1) − u0 if it plays 1, where u0 and u1 are the strategies of U0 and U1, respectively.
• For each i = 1, ..., k − 1, the expected payoff to Vi when it plays 0 is P0(Vi) = Ai·ui·ui+1 − Ai·ui+1, and the expected payoff to Vi when it plays 1 is P1(Vi) = P0(Vi) + ui+1(2 − ui) − 1, where Ai = −Ki+1 and ui+1 and ui are the strategies of Ui+1 and Ui, respectively.
• For each i = 0, ..., k, the payoff to Ui does not depend on Vi and is 1 if Ui and Vi−1 select different actions and 0 otherwise.

We will now characterize the Nash equilibria of this game using a sequence of claims.

CLAIM 3. In all Nash equilibria of this game V−1 plays 1/2, and the strategies u−1 and u0 of U−1 and U0 satisfy u0 = (x′′ − x′)u−1 + x′. Consequently, in all Nash equilibria we have u0 ∈ [x′, x′′].

Returning to the computation in the proof of Theorem 1: the function g(u) changes sign at −2, −1, and 3. We have g(u) < 0 for u > 3 and g(u) > 0 for u < −2, so the extremum of g(u) that lies between −1 and 3, i.e., u = −2 + √5, is a local
maximum. We conclude that the social welfare-maximizing Nash equilibrium for this game is given by the vector of strategies (−2 + √5, 1/2, (5 − √5)/5). The respective total payoff is 13/2 − 2√5.

PROOF. The proof is similar to that of Claim 2. Let f(u−1) = (x′′ − x′)u−1 + x′. Clearly, the player V−1 is indifferent between playing 0 and 1 if and only if u0 = f(u−1). Suppose that v−1 < 1/2. Then U0 strictly prefers to play 1, i.e., u0 = 1, so we have P1(V−1) = 1 − (x′′ − x′)u−1 − x′ ≥ 1 − x′′. As x′′ < 1, we have P1(V−1) > 0 = P0(V−1), which implies v−1 = 1, a contradiction; the case v−1 > 1/2 is symmetric.

CLAIM 4. In all Nash equilibria of this game we have vi = 1/2 for i = 0, ..., k − 1, u1 = u0/(u0 + 1), and ui+1 = 1/(2 − ui) for i = 1, ..., k − 1.

PROOF. The proof of this claim is also similar to that of Claim 2. We use induction on i to prove that the statement of the claim is true and, additionally, ui ≠ 1 for i > 0. For the base case i = 0, note that u0 ≠ 0 by the previous claim (recall that x′, x′′ are selected so that x′, x′′ ≠ 0, 1) and consider the triple (U0, V0, U1). Let v0 be the strategy of V0. First, suppose that v0 > 1/2. Then U1 strictly prefers to play 0, i.e., u1 = 0. Then for V0 we have P1(V0) = P0(V0) − u0. As u0 ≠ 0, we have P1(V0) < P0(V0), which implies v0 = 0, a contradiction. Finally, if v0 = 1/2 but u1 ≠ u0/(u0 + 1), player V0 is not indifferent between 0 and 1, so he would deviate from playing 1/2. Moreover, as u1 = u0/(u0 + 1) and u0 ∈ [0, 1], we have u1 ≠ 1. The argument for the inductive step is similar. Namely, suppose that the statement is proved for all smaller i, and suppose that vi > 1/2. Then Ui+1 strictly prefers to play 0, i.e., ui+1 = 0. Then for Vi we have P1(Vi) = P0(Vi) − 1, i.e., P1(Vi) < P0(Vi), which implies vi = 0, a contradiction. Finally, if vi = 1/2 but ui+1 ≠ 1/(2 − ui), player Vi is not indifferent between 0 and 1, so he would deviate from playing 1/2. Moreover, as ui+1 = 1/(2 − ui) and ui < 1, we have ui+1 < 1.

CLAIM 5. Any strategy profile in which each of V−1, V0, ..., Vk−1 plays 1/2, u−1 ∈ [0, 1], u0 = (x′′ − x′)u−1 + x′, u1 = u0/(u0 + 1), and ui+1 = 1/(2 − ui) for i ≥ 1 constitutes a
Nash equilibrium.

PROOF. First, the player U−1's payoffs do not depend on other players' actions, so he is free to play any strategy in [0, 1]. As long as u0 = (x′′ − x′)u−1 + x′, player V−1 is indifferent between 0 and 1, so he is content to play 1/2; a similar argument applies to players V0, ..., Vk−1. Finally, for each i = 0, ..., k, the payoffs of player Ui only depend on the strategy of player Vi−1. In particular, as long as vi−1 = 1/2, player Ui is indifferent between playing 0 and 1, so he can play any mixed strategy ui ∈ [0, 1]. To complete the proof, note that (x′′ − x′)u−1 + x′ ∈ [0, 1] for all u−1 ∈ [0, 1], u0/(u0 + 1) ∈ [0, 1] for all u0 ∈ [0, 1], and 1/(2 − ui) ∈ [0, 1] for all ui ∈ [0, 1], so we have ui ∈ [0, 1] for all i = 0, ..., k.

Now, let us compute the total payoff under a strategy profile of the form given in Claim 5. The payoff to U−1 is 0, and the expected payoff to each of the Ui, i = 0, ..., k, is 1/2. The expected payoffs to V−1 and V0 are 0. Finally, for any i = 1, ..., k − 1, the expected payoff to Vi is Ti = Ai·ui·ui+1 − Ai·ui+1. It follows that to find a Nash equilibrium with the highest total payoff, we have to maximize the sum T1 + · · · + Tk−1. We would like to express this sum as a function of u = u0. Observe that as u−1 varies from 0 to 1, u varies from x′ to x′′. Moreover, the recurrences of Claim 5 give ui = (u + i − 1)/(u + i) for i ≥ 1, and hence Ti = Ki+1/(u + i + 1), so that T1 + · · · + Tk−1 = K2/(u + 2) + · · · + K2n+2/(u + 2n + 2) = R(u). Therefore, to maximize the total payoff, we have to choose u ∈ [x′, x′′] so as to maximize R(u). By construction, the only maximum of R(u) on [x′, x′′] is α. It follows that in the payoff-maximizing Nash equilibrium of our game U0 plays α. Finally, note that the payoffs in our game are rational rather than integer. However, it is easy to see that we can multiply all payoffs to a player by a common denominator of these payoffs without affecting his strategy. In the resulting game, all payoffs are integer. This concludes
the proof of Theorem 2.

4. APPROXIMATING THE SOCIALLY OPTIMAL NASH EQUILIBRIUM

We have seen that the Nash equilibrium that maximizes the social welfare may involve strategies that are not in Q. Hence, in this section we focus on finding a Nash equilibrium that is almost optimal from the social welfare perspective. We propose an algorithm that for any ε > 0 finds a Nash equilibrium whose total payoff is within ε of optimal. The running time of this algorithm is polynomial in 1/ε, n and |Pmax| (recall that Pmax is the maximum absolute value of an entry of a payoff matrix). While the negative result of the previous section is for graphical games on paths, our algorithm applies to a wider range of scenarios. Namely, it runs in polynomial time on bounded-degree trees as long as the best response policy of each vertex, given its parent, can be represented as a union of a polynomial number of rectangles. Note that path graphs always satisfy this condition: in [9] we showed how to compute such a representation, given a graph with maximum degree 2. Consequently, for path graphs the running time of our algorithm is guaranteed to be polynomial. (Note that [9] exhibits a family of graphical games on bounded-degree trees for which the best response policies of some of the vertices, given their parents, have exponential size when represented as unions of rectangles.) Due to space restrictions, in this version of the paper we present the algorithm for the case where the graph underlying the graphical game is a path. We then state our result for the general case; the proof can be found in the full version of this paper [10]. Suppose that s is a strategy profile for a graphical game G. That is, s assigns a mixed strategy to each vertex of G.
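Before the formal details, the effect of such a grid discretization can be illustrated on the 3-vertex game from Theorem 1. The sketch below is a hypothetical illustration, not the paper's algorithm: it brute-forces U's strategy over a grid X = {jδ | j = 0, ..., N} and evaluates the total equilibrium payoff 0 + (−u·f(u) + 3·f(u)) + 1/2, with f(u) = (u + 1)/(u + 2), as derived in Section 3.1; the function names and the exhaustive search are assumptions made for the demo.

```python
# Hypothetical sanity check (not from the paper): grid-search the total payoff
# of the 3-player path game of Theorem 1 over a discrete grid X = {j*delta}.
# In every equilibrium, V plays 1/2 and W plays f(u) = (u+1)/(u+2), so the
# total payoff reduces to payoff(U) + payoff(V) + payoff(W)
#   = 0 + (-u*f(u) + 3*f(u)) + 1/2.

def f(u):
    # W's equilibrium strategy as a function of U's strategy u
    return (u + 1.0) / (u + 2.0)

def total_payoff(u):
    # payoff(U) = 0, payoff(V) = -u*f(u) + 3*f(u) = (3-u)*f(u), payoff(W) = 1/2
    return (3.0 - u) * f(u) + 0.5

def best_on_grid(n_points):
    # exhaustive search over the grid {j/n_points : j = 0..n_points}
    best_u, best_p = 0.0, total_payoff(0.0)
    for j in range(n_points + 1):
        u = j / n_points
        p = total_payoff(u)
        if p > best_p:
            best_u, best_p = u, p
    return best_u, best_p

u_star, p_star = best_on_grid(1_000_000)
# The exact optimum is u = sqrt(5) - 2 with total payoff 13/2 - 2*sqrt(5):
# both irrational, so any grid point can only approximate them.
print(u_star, p_star)
```

On a grid of a million points the search lands within one grid step of the irrational optimum, which is exactly the trade-off the approximation algorithm of this section exploits: finer grids cost more time but shrink the additive payoff error.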
Let EPV(s) be the expected payoff of player V under s, and let EP(s) = ΣV EPV(s) be the total payoff under s. Let M(G) denote the maximum of EP(s) over all Nash equilibria s of G.

THEOREM 3. Given a graphical game G on an n-vertex path, for any ε > 0 we can find in time O(n^4 Pmax^3/ε^3) a strategy profile s′ such that s′ is a Nash equilibrium for G and EP(s′) > M(G) − ε.

PROOF. Let {V1, ..., Vn} be the set of all players. We start by constructing the best response policies for all Vi, i = 1, ..., n − 1. As shown in [9], this can be done in time O(n^3). Let N > 5n be a parameter to be selected later, set δ = 1/N, and define X = {jδ | j = 0, ..., N}. We say that a point v ∈ [0, 1] is an event point for a player Vi if it is a Vi-event point for B(Vi, Vi−1) or B(Vi+1, Vi). For each player Vi, consider the finite set of strategies Xi given by X together with all of Vi's event points. It has been shown in [9] that for any i = 2, ..., n, the best response policy B(Vi, Vi−1) has at most 2n + 4 Vi-event points. As we require N > 5n, we have |Xi| ≤ 2N; assume without loss of generality that |Xi| = 2N. Order the elements of Xi in increasing order as x_i^1 = 0 < x_i^2 < · · · < x_i^{2N} = 1. Given a social welfare-maximizing Nash equilibrium s, we construct a discrete strategy profile t by setting ti = max{x_i^j | x_i^j ≤ si} for each i; by choosing N > 24nPmax/ε, we can ensure that the total expected payoff for the strategy profile t is within ε from optimal. We will now show that we can find the best discrete Nash equilibrium (with respect to the social welfare) using dynamic programming. As t is a discrete strategy profile, this means that the strategy profile found by our algorithm will be at least as good as t. Define m_i^{l,k} to be the maximum total payoff that V1, ..., Vi−1 can achieve if each Vj, j ≤ i, chooses a strategy from Xj, Vi−1 plays x_{i−1}^l, Vi plays x_i^k, and no player Vj, j < i, wants to deviate. These quantities can be computed inductively along the path in time O(nN^3), and we choose N > (24nPmax)/ε to ensure that the strategy profile we output has total payoff that is within ε from optimal. We conclude that we can compute an ε-approximation to the best Nash equilibrium in time O(n^4 Pmax^3/ε^3). This completes the proof of Theorem 3.

To state our result for the general case (i.e., when the underlying graph is a bounded-degree tree rather than a path), we need additional notation. If G has n players, let q(n) be an upper bound on the number of event points in the representation of any best response policy. That is, we assume that for any vertex U with parent V, B(V, U) has at most q(n) event points. We will be
interested in the situation in which q(n) is polynomial in n.

THEOREM 4. Let G be an n-player graphical game on a tree in which each node has at most Δ children. Suppose we are given a set of best-response policies for G in which each best-response policy B(V, U) is represented by a set of rectangles with at most q(n) event points. For any ε > 0, there is an algorithm that constructs a Nash equilibrium s′ for G that satisfies EP(s′) > M(G) − ε. The running time of the algorithm is polynomial in n, Pmax and ε^−1 provided that the tree has bounded degree (that is, Δ = O(1)) and q(n) is a polynomial in n. In particular, if Δ > 1 then the running time is O(nΔ(2N)^Δ). For the proof of this theorem, see [10].

4.1 A polynomial-time algorithm for multiplicative approximation

The running time of our algorithm is pseudopolynomial rather than polynomial, because it includes a factor which is polynomial in Pmax, the maximum (in absolute value) entry in any payoff matrix. If we are interested in a multiplicative approximation rather than an additive one, this can be improved to polynomial. First, note that we cannot expect a multiplicative approximation for all inputs. That is, we cannot hope to have an algorithm that computes a Nash equilibrium with total payoff at least (1 − ε)M(G). If we had such an algorithm, then for graphical games G with M(G) = 0, the algorithm would be required to output the optimal solution. To show that this is infeasible, observe that we can use the techniques of Section 3.2 to construct two integer-coefficient graphical games on paths of length O(n) such that for some X ∈ R the maximal total payoff in the first game is X, the maximal total payoff in the second game is −X, and for both games, the strategy profiles that achieve the maximal total payoffs involve algebraic numbers of degree n.
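The Section 3.2 machinery invoked here can be sanity-checked numerically. The sketch below is a hypothetical instance, not from the paper: it takes α = √2 − 1, whose minimal polynomial is P(x) = x² + 2x − 1, and uses Q1(x) = −P²(x) directly as the objective. As in Lemma 1, Q1 is nonpositive and vanishes exactly at the roots of P, so its unique maximizer on a small rational interval around α is α itself (the paper's R(x) only divides Q1 by a denominator that is positive on [0, 1], which does not move the maximizers there); the interval [0.3, 0.5] and the grid search are assumptions made for the demo.

```python
# Hypothetical instance of the Lemma 1 idea: alpha = sqrt(2) - 1 is a
# degree-2 algebraic number with minimal polynomial P(x) = x^2 + 2x - 1.
# Q1(x) = -P(x)^2 satisfies Q1 <= 0 everywhere and Q1(x) = 0 exactly at
# the roots of P, so on a rational interval enclosing only the root alpha,
# the unique maximum of Q1 is attained at alpha.

def P(x):
    return x * x + 2.0 * x - 1.0

def Q1(x):
    return -P(x) ** 2

def argmax_on_grid(lo, hi, steps):
    # brute-force maximization of Q1 on a uniform grid over [lo, hi]
    best_x, best_v = lo, Q1(lo)
    for j in range(steps + 1):
        x = lo + (hi - lo) * j / steps
        v = Q1(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

alpha = 2 ** 0.5 - 1      # about 0.41421; the other root of P is negative
x1, x2 = 0.3, 0.5         # rational endpoints enclosing only this root
found = argmax_on_grid(x1, x2, 1_000_000)
print(found)
```

The grid maximizer converges to the irrational α as the grid is refined, which is precisely why only the additive (or, under the positivity assumption below, multiplicative) approximation is achievable by a polynomial-time search.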
By combining the two games so that the first vertex of the second game becomes connected to the last vertex of the first game, while the payoffs of all players do not change, we obtain a graphical game in which the best Nash equilibrium has total payoff 0, yet the strategies that lead to this payoff have high algebraic complexity. However, we can achieve a multiplicative approximation when all entries of the payoff matrices are positive and the ratio between any two entries is polynomially bounded. Recall that we assume that all payoffs are integer, and let Pmin > 0 be the smallest entry of any payoff matrix. In this case, for any strategy profile the payoff to player i is at least Pmin, so the total payoff in the social-welfare maximizing Nash equilibrium s satisfies M(G) ≥ nPmin. Moreover, Lemma 3 implies that by choosing δ small enough, we can find a Nash equilibrium whose total payoff is at least (1 − ε)M(G). Recall that the running time of our algorithm is O(nN^3), where N has to be selected to satisfy N > 5n, N = 1/δ. It follows that if Pmin > 0 and Pmax/Pmin = poly(n), we can choose N so that our algorithm provides a multiplicative approximation guarantee and runs in time polynomial in n and 1/ε.

5. BOUNDED PAYOFF NASH EQUILIBRIA

Another natural way to define what is a "good" Nash equilibrium is to require that each player's expected payoff exceed a certain threshold. These thresholds do not have to be the same for all players. In this case, in addition to the payoff matrices of the n players, we are given n numbers T1, ..., Tn, and our goal is to find a Nash equilibrium in which the payoff of player i is at least Ti, or report that no such Nash equilibrium exists. It turns out that we can design an FPTAS for this problem using the same techniques as in the previous section.

THEOREM 5. Given a graphical game G on an n-vertex path and n rational numbers T1, ..., Tn, suppose that there exists a strategy profile s such that s is a Nash equilibrium for G and EPVi(s) > Ti for i = 1, ..., n. Then for any ε > 0
we can find in time O(max{nPmax^3/ε^3, n^4/ε^3}) a strategy profile s′ such that s′ is a Nash equilibrium for G and EPVi(s′) > Ti − ε for i = 1, ..., n.

PROOF. The proof is similar to that of Theorem 3. First, we construct the best response policies for all players, choose N > 5n, and construct the sets Xi, i = 1, ..., n, as described in the proof of Theorem 3. Consider a strategy profile s such that s is a Nash equilibrium for G and EPVi(s) > Ti for i = 1, ..., n. We construct a strategy profile t by setting ti = max{x_i^j | x_i^j ≤ si}. By choosing N > max{5n, 24Pmax/ε}, we can ensure EPVi(t) > Ti − ε for i = 1, ..., n. Now, we will use dynamic programming to find a discrete Nash equilibrium that satisfies EPVi(t) > Ti − ε for i = 1, ..., n. As t is a discrete strategy profile, our algorithm will succeed whenever there is a strategy profile s with EPVi(s) > Ti for i = 1, ..., n. Let z_i^{l,k} = 1 if there is a discrete strategy profile in which each Vj, j ≤ i, chooses a strategy from Xj, Vi−1 plays x_{i−1}^l, Vi plays x_i^k, no player Vj, j < i, wants to deviate, and EPVj > Tj − ε for all j < i; let z_i^{l,k} = 0 otherwise. These quantities can be computed inductively, and there is a discrete Nash equilibrium s′ with EPVi(s′) > Ti − ε for i = 1, ..., n if and only if z_n^{l,k} = 1 for some l, k (recall that Vn is a dummy player, i.e., we assume Tn = 0, EPVn(s′) = 0 for any choice of s′). If z_n^{l,k} = 0 for all l, k = 1, ..., 2N, there is no discrete Nash equilibrium s′ that satisfies EPVi(s′) > Ti − ε for i = 1, ..., n, and hence no Nash equilibrium s (not necessarily discrete) such that EPVi(s) > Ti for i = 1, ..., n. The running time analysis is similar to that for Theorem 3; we conclude that the running time of our algorithm is O(nN^3) = O(max{nPmax^3/ε^3, n^4/ε^3}).

REMARK 1. Theorem 5 can be extended to trees of bounded degree in the same way as Theorem 4.

5.1 Exact Computation

Another approach to finding Nash equilibria with bounded payoffs is based on inductively computing subsets of the best response policies of all players so as to exclude the points that do not provide sufficient payoffs to some of the players. Formally, we say that a strategy v of the player V is a potential best response to a strategy w of its parent W with respect to a threshold vector T = (T1, ..., Tn) (denoted by v ∈ pbrV(w, T)) if there is an equilibrium in the instance G
(V, W), W = w, in which V plays mixed strategy v and the payoff to any player Vi downstream of V (including V) is at least Ti. The best response policy for V with respect to a threshold vector T is defined as B(W, V, T) = {(w, v) | v ∈ pbrV(w, T), w ∈ [0, 1]}. It is easy to see that if any of the sets B(Vj, Vj−1, T), j = 1, ..., n, is empty, then it is impossible to provide all players with the expected payoffs prescribed by T. Otherwise, one can apply the downstream pass of the original algorithm of [13] to find a Nash equilibrium. As we assume that Vn is a dummy vertex whose payoff is identically 0, the Nash equilibrium with these payoffs exists as long as Tn ≤ 0 and B(Vn, Vn−1, T) is not empty. Using the techniques developed in [9], it is not hard to show that for any j = 1, ..., n, the set B(Vj, Vj−1, T) consists of a finite number of rectangles, and one can compute B(Vj+1, Vj, T) given B(Vj, Vj−1, T). The advantage of this approach is that it allows us to represent all Nash equilibria that provide the required payoffs to the players. However, it is not likely to be practical, since it turns out that the rectangles that appear in the representation of B(Vj, Vj−1, T) may have irrational coordinates.

CLAIM 6. There exists a graphical game G on a 3-vertex path UVW and a vector T = (T1, T2, T3) such that B(W, V, T) cannot be represented as a union of a finite number of rectangles with rational coordinates.

PROOF. We define the payoffs to the players in G as follows. The payoff to U is identically 0, i.e., P0(U) = P1(U) = 0. Using Claim 1, we select the payoffs to V so that P0(V) = uw and P1(V) = P0(V) + w − .8u − .1, where u and w are the (mixed) strategies of U and W, respectively. It follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = .8u + .1; observe that for any u ∈ [0, 1] we have f(u) ∈ [0, 1]. It is not hard to see that we have B(W, V) = [0, .1] × {0} ∪ [.1, .9] × [0, 1] ∪ [.9, 1] × {1}. The
payoffs to W are not important for our construction; for example, set P0(W) = P1(W) = 0. Now, set T = (0, 1/8, 0), i.e., we are interested in Nash equilibria in which V's expected payoff is at least 1/8. Suppose w ∈ [0, 1]. The player V can play a mixed strategy v when W is playing w as long as U plays u = f^{−1}(w) = 5w/4 − 1/8 (to ensure that V is indifferent between 0 and 1) and P0(V) = P1(V) = uw = w(5w/4 − 1/8) ≥ 1/8. The latter condition is satisfied if w ≤ (1 − √41)/20 or w ≥ (1 + √41)/20; note that we have (1 − √41)/20 < 0 and .1 < (1 + √41)/20 < .9. For any other value of w, any strategy of U either makes V prefer one of the pure strategies or does not provide it with a sufficient expected payoff. There are also some values of w for which V can play a pure strategy (0 or 1) as a potential best response to W and guarantee itself an expected payoff of at least 1/8; it can be shown that these values of w form a finite number of segments in [0, 1]. We conclude that any representation of B(W, V, T) as a union of a finite number of rectangles must contain a rectangle of the form [(1 + √41)/20, w″] × [v′, v″] for some w″, v′, v″ ∈ [0, 1]. On the other hand, it can be shown that for any integer payoff matrices and threshold vectors and any j = 1, ..., n − 1, the sets B(Vj+1, Vj, T) contain no rectangles of the form [u′, u″] × {v} or {v} × [w′, w″], where v ∈ R \ Q. This means that if B(Vn, Vn−1, T) is non-empty, i.e., there is a Nash equilibrium with payoffs prescribed by T, then the downstream pass of the algorithm of [13] can always pick a strategy profile that forms a Nash equilibrium, provides a payoff of at least Ti to each player Vi, and has no irrational coordinates. Hence, unlike in the case of the Nash equilibrium that maximizes the social welfare, working with irrational numbers is not necessary, and the fact that the algorithm discussed in this section has to do so can be seen as an argument
against using this approach.

6. OTHER CRITERIA FOR SELECTING A NASH EQUILIBRIUM

In this section, we consider several other criteria that can be useful in selecting a Nash equilibrium.

6.1 Combining welfare maximization with bounds on payoffs

In many real-life scenarios, we want to maximize the social welfare subject to certain restrictions on the payoffs to individual players. For example, we may want to ensure that no player gets a negative expected payoff, or that the expected payoff to player i is at least Pi max − ξ, where Pi max is the maximum entry of i's payoff matrix and ξ is a fixed parameter. Formally, given a graphical game G and a vector T1, ..., Tn, let S be the set of all Nash equilibria s of G that satisfy EPVi(s) ≥ Ti for i = 1, ..., n; the goal is to find a strategy profile ŝ ∈ S that maximizes the total payoff. Given ε > 0, choose δ as in the proof of Theorem 3, and let Xi be the set of all discrete strategies of player Vi (for a formal definition, see the proof of Theorem 3). Combining the proofs of Theorem 3 and Theorem 5, we can see that the strategy profile t̂ given by t̂i = max{x_i^j | x_i^j ≤ ŝi} satisfies EPVi(t̂) > Ti − ε and |EP(ŝ) − EP(t̂)| ≤ ε, and that a discrete strategy profile with these properties can be found by dynamic programming.

6.2 Maximizing the minimal payoff

Given ε > 0, we start by setting T′ = −Pmax, T′′ = Pmax, and T* = (T′ + T′′)/2. We then run the algorithm of Section 5 with T1 = · · · = Tn = T*. If the algorithm succeeds in finding a Nash equilibrium s′ that satisfies EPVi(s′) > T* − ε for all i = 1, ..., n, we set T′ = T*, T* = (T′ + T′′)/2; otherwise, we set T′′ = T*, T* = (T′ + T′′)/2, and loop. We repeat this process until |T′ − T′′| < ε. If the maximal value of mini=1,...,n EPVi(s) over all Nash equilibria s of G is p, then our algorithm outputs a Nash equilibrium s′ that satisfies mini=1,...,n EPVi(s′) > p − 2ε. The running time of our algorithm is O(max{nPmax^3 log(1/ε)/ε^3, n^4 log(1/ε)/ε^3}).

6.3 Equalizing the payoffs

When the players' payoff matrices are not very different, it is reasonable to demand that the expected payoffs to the players do not differ by much either. We will now show that
Nash equilibria in this category can be approximated in polynomial time as well. Indeed, observe that the algorithm of Section 5 can be easily modified to deal with upper bounds on individual payoffs rather than lower bounds. Moreover, we can efficiently compute an approximation to a Nash equilibrium that satisfies both the upper bound and the lower bound for each player. More precisely, suppose that we are given a graphical game G, 2n rational numbers T1, ..., Tn, T′1, ..., T′n, and ε > 0. Then if there exists a strategy profile s such that s is a Nash equilibrium for G and Ti ≤ EPVi(s) ≤ T′i for i = 1, ..., n, we can find a Nash equilibrium that satisfies these payoff constraints to within ε. Now, suppose that we want the players' expected payoffs to differ by at most ξ for some ξ > 0. Given an ε > 0, we set T1 = · · · = Tn = −Pmax, T′1 = · · · = T′n = −Pmax + ξ + ε, and run the modified version of the algorithm of Section 5. If it fails to find a solution, we increment all Ti, T′i by ε and loop. We continue until the algorithm finds a solution, or Ti > Pmax. Suppose that there exists a Nash equilibrium s that satisfies |EPVi(s) − EPVj(s)| < ξ for all i, j = 1, ..., n.
Set r = mini=1,...,n EPVi(s); we have r ≥ −Pmax, so there exists an integer k ≥ 0 such that −Pmax + (k − 1)ε ≤ r < −Pmax + kε.

foreach child in children {
    if ((child->sent / total_sent) < child->sending_factor)
        target_child = child;
}
if (!senddata(target_child->addr, msg, size, key)) { // send succeeded
    target_child->sent++;
    target_child->child_filter.insert(got_key);
    sent_packet = 1;
}
foreach child in children {
    should_send = 0;
    if (!sent_packet)        // transfer ownership
        should_send = 1;
    else                     // test for available bandwidth
        if (key % (1.0/child->limiting_factor) == 0)
            should_send = 1;
    if (should_send) {
        if (!senddata(child->addr, msg, size, key)) {
            if (!sent_packet)    // i received ownership
                child->sent++;
            else
                increase(child->limiting_factor);
            child->child_filter.insert(got_key);
            sent_packet = 1;
        } else                   // send failed
            if (sent_packet)     // was for extra bw
                decrease(child->limiting_factor);
    }
}

Figure 5: Pseudo code for Bullet's disjoint data send routine

While making data more difficult to recover, Bullet still allows for recovery of such data to its children. The sending node will cache the data packet and serve it to its requesting peers. This process allows its children to potentially recover the packet from one of their own peers, to whom additional bandwidth may be available. Once a packet has been successfully sent to the owning child, the node attempts to send the packet to all other children depending on the limiting factors lfi. For each child i, a node attempts to forward the packet deterministically if the packet's sequence number modulo 1/lfi is zero. Essentially, this identifies which lfi fraction of packets of the received data stream should be forwarded to each child to make use of the available bandwidth to each. If the packet transmission is successful, lfi is increased such that one more packet is to be sent per epoch. If the transmission fails, lfi is decreased by the same amount. This allows children's limiting factors to be continuously adjusted in response to changing network conditions. It is
important to realize that by maintaining limiting factors, we are essentially using feedback from children (by observing transport behavior) to determine the best data to stop sending during times when a child cannot handle the entire parent stream. In one extreme, if the sum of children bandwidths is not enough to receive the entire parent stream, each child will receive a completely disjoint data stream of packets it owns. In the other extreme, if each child has ample bandwidth, it will receive the entire parent stream as each lfi would settle on 1.0. In the general case, our owning strategy attempts to make data disjoint among children subtrees with the guiding premise that, as much as possible, the expected number of nodes receiving a packet is the same across all packets.

3.4 Improving the Bullet Mesh

Bullet allows a maximum number of peering relationships. That is, a node can have up to a certain number of receivers and a certain number of senders (each defaults to 10 in our implementation). A number of considerations can make the current peering relationships sub-optimal at any given time: i) the probabilistic nature of RanSub means that a node may not have been exposed to a sufficiently appropriate peer, ii) receivers greedily choose peers, and iii) network conditions are constantly changing. For example, a sender node may wind up being unable to provide a node with very much useful (non-duplicate) data. In such a case, it would be advantageous to remove that sender as a peer and find some other peer that offers better utility. Each node periodically (every few RanSub epochs) evaluates the bandwidth performance it is receiving from its sending peers. A node will drop a peer if it is sending too many duplicate packets when compared to the total number of packets received. This threshold is set to 50% by default. If no such wasteful sender is found, a node will drop the sender that is delivering the least amount of useful data to it. It will
replace this sender with some other sending peer candidate, essentially reserving a trial slot in its sender list. In this way, we are assured of keeping the best senders seen so far and will eliminate senders whose performance deteriorates with changing network conditions. Likewise, a Bullet sender will periodically evaluate its receivers. Each receiver updates its senders with the total bandwidth it is receiving. The sender, knowing the amount of data it has sent to each receiver, can determine which receiver is benefiting the least by peering with this sender. This corresponds to the receiver acquiring the smallest portion of its bandwidth through this sender. The sender drops this receiver, creating an empty slot for some other trial receiver. This is similar to the concept of weans presented in [24].

4. EVALUATION

We have evaluated Bullet's performance in real Internet environments as well as in the ModelNet [37] IP emulation framework. While the bulk of our experiments use ModelNet, we also report on our experience with Bullet on the PlanetLab Internet testbed [31]. In addition, we have implemented a number of underlying overlay network trees upon which Bullet can execute. Because Bullet performs well over a randomly created overlay tree, we present results with Bullet running over such a tree compared against an offline greedy bottleneck bandwidth tree algorithm using global topological information, described in Section 4.1. All of our implementations leverage a common development infrastructure called MACEDON [33] that allows for the specification of overlay algorithms in a simple domain-specific language. It enables the reuse of the majority of common functionality in these distributed systems, including probing infrastructures, thread management, message passing, and the debugging environment. As a result, we believe that our comparisons qualitatively show algorithmic differences rather than implementation intricacies. Our implementation of the core Bullet
logic is under 1000 lines of code in this infrastructure. Our ModelNet experiments make use of 50 2GHz Pentium 4s running Linux 2.4.20, interconnected with 100 Mbps and 1 Gbps Ethernet switches. For the majority of these experiments, we multiplex one thousand instances (overlay participants) of our overlay applications across the 50 Linux nodes (20 per machine). In ModelNet, packet transmissions are routed through emulators responsible for accurately emulating the hop-by-hop delay, bandwidth, and congestion of a network topology. In our evaluations, we used four 1.4GHz Pentium IIIs running FreeBSD-4.7 as emulators. This platform supports approximately 2-3 Gbps of aggregate simultaneous communication among end hosts. For most of our ModelNet experiments, we use 20,000-node INET-generated topologies [10]. We randomly assign our participant nodes to act as clients connected to one-degree stub nodes in the topology. We randomly select one of these participants to act as the source of the data stream. Propagation delays in the network topology are calculated based on the relative placement of the network nodes in the plane by INET. Based on the classification in [8], we classify network links as being Client-Stub, Stub-Stub, Transit-Stub, and Transit-Transit depending on their location in the network. We restrict topological bandwidth by setting the bandwidth for each link depending on its type. Each type of link has an associated bandwidth range from which the bandwidth is chosen uniformly at random. By changing these ranges, we vary bandwidth constraints in our topologies. For our experiments, we created three different ranges corresponding to low, medium, and high bandwidths relative to our typical streaming rates of 600-1000 Kbps, as specified in Table 1. While the presented ModelNet results are restricted to two topologies with varying bandwidth constraints, the results of experiments with additional topologies all show qualitatively similar
behavior. We do not implement any particular coding scheme for our experiments. Rather, we assume that either each sequence number directly specifies a particular data block and the block offset for each packet, or we are distributing data within the same block for LT Codes, e.g., when distributing a file.

4.1 Offline Bottleneck Bandwidth Tree

One of our goals is to determine Bullet's performance relative to the best possible bandwidth-optimized tree for a given network topology. This allows us to quantify the possible improvements of an overlay mesh constructed using Bullet relative to the best possible tree. While we have not yet proven this, we believe that this problem is NP-hard. Thus, in this section we present a simple greedy offline algorithm to determine the connectivity of a tree likely to deliver a high level of bandwidth. In practice, we are not aware of any scalable online algorithms that are able to deliver the bandwidth of an offline algorithm. At the same time, trees constructed by our algorithm tend to be long and skinny, making them less resilient to failures and inappropriate for delay-sensitive applications (such as multimedia streaming). In addition to any performance comparisons, a Bullet mesh has much lower depth than the bottleneck tree and is more resilient to failure, as discussed in Section 4.6.

                   Client-Stub   Stub-Stub   Transit-Stub   Transit-Transit
Low bandwidth      300-600       500-1000    1000-2000      2000-4000
Medium bandwidth   800-2800      1000-4000   1000-4000      5000-10000
High bandwidth     1600-5600     2000-8000   2000-8000      10000-20000

Table 1: Bandwidth ranges for link types used in our topologies, expressed in Kbps.

Specifically, we consider the following problem: given complete knowledge of the topology (individual link latencies, bandwidth, and packet loss rates), what is the overlay tree that will deliver the highest bandwidth to a set of predetermined overlay nodes? We assume that the
throughput of the slowest overlay link (the bottleneck link) determines the throughput of the entire tree. We are, therefore, trying to find the directed overlay tree with the maximum bottleneck link; accordingly, we refer to this problem as the overlay maximum bottleneck tree (OMBT) problem. In a simplified case, assuming that congestion exists only on access links and that there are no lossy links, an optimal algorithm exists [23]. In the more general case of contention on any physical link, and when the system is allowed to choose the routing path between the two endpoints, the problem is known to be NP-hard [12], even in the absence of link losses. For the purposes of this paper, our goal is to determine a good overlay streaming tree that provides each overlay participant with substantial bandwidth while avoiding overlay links with high end-to-end loss rates. We make the following assumptions:
1. The routing path between any two overlay participants is fixed. This closely models the existing overlay network model with IP unicast routing.
2. The overlay tree will use TCP-friendly unicast connections to transfer data point-to-point.
3. In the absence of other flows, we can estimate the throughput of a TCP-friendly flow using a steady-state formula [27].
4. When several (n) flows share the same bottleneck link, each flow can achieve throughput of at most c/n, where c is the physical capacity of the link.
Given these assumptions, we concentrate on estimating the throughput available between two participants in the overlay. We start by calculating the throughput using the steady-state formula. We then route the flow in the network and consider the physical links one at a time. On each physical link, we compute the fair share of each of the competing flows. The throughput of an overlay link is then approximated by the minimum of the fair shares along the routing path and the formula rate. If some flow does not require the same share of the
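The per-overlay-link throughput estimate described above (formula rate capped by the fair share c/n on every physical link along the routing path) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function names, the short form of the steady-state formula, and the units are our own assumptions.

```python
import math

# Hedged sketch of the overlay-link throughput estimate: start from the
# TCP steady-state formula rate, then cap it by the fair share c/n on
# each physical link along the routing path. Rates and capacities are
# assumed to share the same units.

def tcp_steady_state_rate(rtt, loss, mss_bits=12000):
    """Short-form steady-state TCP rate ~ MSS / (rtt * sqrt(2p/3)).
    The full formula in [27] also models retransmission timeouts;
    that term is omitted here for brevity."""
    if loss == 0:
        return float("inf")  # the formula is undefined at zero loss
    return mss_bits / (rtt * math.sqrt(2 * loss / 3))

def overlay_link_throughput(path_links, capacity, flows_on, rtt, loss):
    """path_links: physical links the overlay link is routed over.
    capacity[e]: physical capacity of link e; flows_on[e]: number of
    competing overlay flows sharing e (including this one).
    Returns min(formula rate, per-link fair shares)."""
    rate = tcp_steady_state_rate(rtt, loss)
    for e in path_links:
        rate = min(rate, capacity[e] / flows_on[e])
    return rate
```

For example, a two-link path with capacities 2 Mbps and 800 Kbps, the first shared by two flows, is estimated at the minimum of the formula rate, 1 Mbps, and 800 Kbps.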
bottleneck link as other competing flows (i.e., its throughput might be limited by losses elsewhere in the network), then the other flows might end up with a greater share than the one we compute. We do not account for this, as the major goal of this estimate is simply to avoid lossy and highly congested physical links.
More formally, we define the problem as follows:
Overlay Maximum Bottleneck Tree (OMBT). Given a physical network represented as a graph $G = (V, E)$, a set of overlay participants $P \subset V$, a source node $s \in P$, bandwidth $B : E \to \mathbb{R}^+$, loss rate $L : E \to [0, 1]$, and propagation delay $D : E \to \mathbb{R}^+$ of each link, a set of possible overlay links $O = \{(v, w) \mid v, w \in P, v \neq w\}$, and a routing table $RT : O \times E \to \{0, 1\}$, find the overlay tree $T = \{o \mid o \in O\}$ (with $|T| = |P| - 1$ and, for every $v \in P$, a path $o_v = s \rightsquigarrow v$) that maximizes
$$\min_{o \in T} \left( \min\left( f(o),\ \min_{e \in o} \frac{b(e)}{|\{p \mid p \in T,\ e \in p\}|} \right) \right)$$
where $f(o)$ is the TCP steady-state sending rate, computed from the round-trip time $d(o) = \sum_{e \in o} d(e) + \sum_{e \in o'} d(e)$ (given overlay link $o = (v, w)$, $o' = (w, v)$) and the loss rate $l(o) = 1 - \prod_{e \in o} (1 - l(e))$. We write $e \in o$ to express that link $e$ is included in $o$'s routing path ($RT(o, e) = 1$).
Assuming that we can estimate the throughput of a flow, we proceed to formulate a greedy OMBT algorithm. This algorithm is non-optimal, but a similar approach was found to perform well [12]. Our algorithm is similar to the Widest Path Heuristic (WPH) [12] and, more generally, to Prim's MST algorithm [32]. During its execution, we maintain the set of nodes already in the tree and the set of remaining nodes. To grow the tree, we consider all the overlay links leading from the nodes in the tree to the remaining nodes. We greedily pick the node with the highest-throughput overlay link. Using this overlay link might cause us to route traffic over physical links traversed by some
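The greedy OMBT construction just described can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes precomputed throughput estimates `est` for every candidate overlay link, whereas the full heuristic recomputes fair shares as physical links become shared.

```python
# Hedged sketch of the greedy OMBT heuristic (akin to the Widest Path
# Heuristic and Prim's MST algorithm): repeatedly attach the remaining
# node reachable over the highest-throughput overlay link.

def greedy_ombt(participants, source, est):
    """est[(v, w)]: estimated throughput of overlay link v -> w.
    Returns the list of directed tree edges, in the order added."""
    in_tree = {source}
    remaining = set(participants) - {source}
    tree = []
    while remaining:
        # consider all overlay links from tree nodes to remaining nodes
        v, w = max(((v, w) for v in in_tree for w in remaining),
                   key=lambda link: est[link])
        tree.append((v, w))
        in_tree.add(w)
        remaining.remove(w)
    return tree
```

For instance, with estimates {(s,a): 5, (s,b): 3, (a,b): 4}, the heuristic attaches a directly to s and then reaches b through a rather than directly from s.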
other tree flows. Since we do not re-examine the throughput of nodes that are already in the tree, they might end up being connected to the tree by slower overlay links than initially estimated. However, by attaching the node with the highest residual bandwidth at every step, we hope to lessen the effects of after-the-fact physical link sharing. With the synthetic topologies we use for our emulation environment, we have not found this inaccuracy to severely impact the quality of the tree.
4.2 Bullet vs. Streaming
We have implemented a simple streaming application that is capable of streaming data over any specified tree. In our implementation, we are able to stream data through overlay trees using UDP, TFRC, or TCP. Figure 6 shows the average bandwidth that each of 1000 nodes receives via this streaming as time progresses on the x-axis. In this example, we use TFRC to stream 600 Kbps over our offline bottleneck bandwidth tree and over a random tree (other random trees exhibit qualitatively similar behavior). In these experiments, streaming begins 100 seconds into each run. While the random tree delivers an achieved bandwidth of under 100 Kbps, our offline algorithm's overlay delivers approximately 400 Kbps of data. For this experiment, bandwidths were set to the medium range from Table 1.
Figure 6: Achieved bandwidth over time for TFRC streaming over the bottleneck bandwidth tree and a random tree.
We believe that any degree-constrained online bandwidth overlay tree algorithm would exhibit performance similar to (or lower than) our bandwidth-optimized overlay. Hence, Bullet's goal is to overcome this bandwidth limit by allowing for the perpendicular reception of data and by utilizing disjoint data flows in an attempt to match or exceed the performance of our offline algorithm. To evaluate Bullet's ability to
exceed the bandwidth achievable via tree distribution overlays, we compare Bullet running over a random overlay tree to the streaming behavior shown in Figure 6. Figure 7 shows the average bandwidth received by each node (labeled Useful total), with standard deviation. The graph also plots the total amount of data received and the amount of data a node receives from its parent. For this topology and bandwidth setting, Bullet was able to achieve an average bandwidth of 500 Kbps, five times that achieved by the random tree and more than 25% higher than the offline bottleneck bandwidth algorithm. Further, the total bandwidth (including redundant data) received by each node is only slightly higher than the useful content, meaning that Bullet achieves high bandwidth while wasting few network resources. Bullet's use of TFRC in this example ensures that the overlay is TCP-friendly throughout. The average per-node control overhead is approximately 30 Kbps.
By tracing certain packets as they move through the system, we are able to acquire link stress estimates for our system. Though the link stress can differ for each packet, since each can take a different path through the overlay mesh, we average the link stress due to each traced packet. For this experiment, Bullet has an average link stress of approximately 1.5, with an absolute maximum link stress of 22. The standard deviation in most of our runs is fairly high because of the limited bandwidth randomly assigned to some Client-Stub and Stub-Stub links. We feel that this is consistent with real Internet behavior, where clients have widely varying network connectivity. Figure 8 shows a time slice plotting the CDF of the instantaneous bandwidths that the nodes receive. The graph shows that few client nodes receive inadequate bandwidth even though they are bandwidth constrained. The distribution rises sharply starting at approximately 500 Kbps. The vast majority of nodes receive a stream
of 500-600 Kbps.
Figure 7: Achieved bandwidth over time for Bullet over a random tree.
Figure 8: CDF of instantaneous achieved bandwidth at time 430 seconds.
We have evaluated Bullet under a number of bandwidth constraints to determine how Bullet performs relative to the available bandwidth of the underlying topology. Table 1 describes representative bandwidth settings for our streaming rate of 600 Kbps. The intent of these settings is to show a scenario where more than enough bandwidth is available to achieve the target rate even with traditional tree streaming, one where the available bandwidth is slightly insufficient, and one in which the available bandwidth is quite restricted. Figure 9 shows achieved bandwidths over time for Bullet and the bottleneck bandwidth tree, generated from topologies with bandwidths in each range. In all of our experiments, Bullet outperforms the bottleneck bandwidth tree by a factor of up to 100%, depending on how constrained the bandwidth of the underlying topology is. In one extreme, with more than ample bandwidth, Bullet and the bottleneck bandwidth tree are both able to stream at the requested rate (600 Kbps in our example). In the other extreme, heavily constrained topologies allow Bullet to achieve twice the bandwidth achievable via the bottleneck bandwidth tree. For all other topologies, Bullet's benefits fall somewhere in between. In our example, Bullet running over our medium-constrained bandwidth topology outperforms the bottleneck bandwidth tree by 25%.
Figure 9: Achieved bandwidth for Bullet and the bottleneck tree over time for high, medium, and low bandwidth topologies.
Further, we stress that we believe it would be extremely difficult for any online tree-based algorithm to exceed the bandwidth achievable by our offline bottleneck algorithm, which makes use of global topological information. For instance, we built a simple bandwidth-optimizing overlay tree construction based on Overcast [21]. The resulting dynamically constructed trees never achieved more than 75% of the bandwidth of our own offline algorithm.
4.3 Creating Disjoint Data
Bullet's ability to deliver high bandwidth levels to nodes depends on its disjoint transmission strategy. That is, when bandwidth to a child is limited, Bullet attempts to send the correct portions of data so that recovery of the lost data is facilitated. A Bullet parent sends different data to its children in the hope that each data item will be readily available to nodes spread throughout its subtree. It does so by assigning ownership of data objects to children in a manner that makes the expected number of nodes holding a particular data object equal for all data objects it transmits. Figure 10 shows the resulting bandwidth over time for the non-disjoint strategy, in which a node (and, more importantly, the root of the tree) attempts to send all data to each of its children (subject to independent losses at individual child links). Because the children's transports throttle the sending rate at each parent, some data is inherently sent disjointly (by chance). By not explicitly choosing which data to send each child, this approach deprives Bullet of 25% of its bandwidth capability compared to the case when our disjoint strategy is enabled, as in Figure 7.
4.4 Epidemic Approaches
In this section, we explore how Bullet compares to data dissemination approaches
that use some form of epidemic routing. We implemented a form of gossiping in which a node forwards non-duplicate packets to a randomly chosen subset of the nodes in its local view. This technique does not use a tree for dissemination and is similar to lpbcast [14] (recently improved to incorporate retrieval of data objects [13]). We do not disseminate packets every T seconds; instead, we forward them as soon as they arrive.
Figure 10: Achieved bandwidth over time using non-disjoint data transmission.
We also implemented a pbcast-like [2] approach for retrieving data missing from a data distribution tree. The idea here is that nodes are expected to obtain most of their data from their parent; nodes then attempt to retrieve any missing data items through gossiping with random peers. Instead of using gossiping with a fixed number of rounds for each packet, we use anti-entropy with a FIFO Bloom filter to attempt to locate peers that hold any locally missing data items. To make our evaluation conservative, we assume that nodes employing gossip and anti-entropy recovery are able to maintain full group membership. While this might be difficult in practice, we assume that RanSub [24] could also be applied to these ideas, specifically in the case of anti-entropy recovery that employs an underlying tree. Further, we allow both techniques to reuse other aspects of our implementation: Bloom filters, TFRC transport, etc. To reduce the number of duplicate packets, we use fewer peers in each round (5) than Bullet does (10). For our configuration, we experimentally found that 5 peers results in the best performance with the lowest overhead; in our experiments, increasing the number of peers did not improve the average bandwidth achieved throughout the system. To allow TFRC enough time to ramp up to the appropriate
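One way the Bloom-filter digest exchange for anti-entropy recovery could look is sketched below. This is purely illustrative: the class name, filter size, and SHA-256-based hashing are our own assumptions rather than Bullet's implementation, and the FIFO eviction of old entries is omitted for brevity.

```python
import hashlib

# Hedged sketch of an anti-entropy digest: a Bloom filter summarizing
# the sequence numbers a node has received. A peer queries the digest
# to decide which of its own items the sender appears to be missing.

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # derive k bit positions from salted SHA-256 hashes of the item
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

def missing_at_peer(digest, local_items):
    """Items we hold that the digest's sender appears to lack. Bloom
    filters admit false positives but never false negatives, so we never
    offer back an item the sender actually holds; rarely, a false
    positive makes us skip an item the sender is in fact missing."""
    return [x for x in local_items if x not in digest]
```

A node would ship such a digest to a peer, and the peer would respond with (a share of) the items `missing_at_peer` identifies.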
TCP-friendly sending rate, we set the epoch length for anti-entropy recovery to 20 seconds.
For these experiments, we use a 5000-node INET topology with no explicit physical link losses. We set link bandwidths according to the medium range from Table 1 and randomly assign 100 overlay participants. The randomly chosen root either streams at 900 Kbps (over a random tree for Bullet, and over the greedy bottleneck tree for anti-entropy recovery) or, for gossiping, sends packets at that rate to randomly chosen nodes. Figure 11 shows the resulting bandwidth over time achieved by Bullet and the two epidemic approaches. As expected, Bullet comes close to providing the target bandwidth to all participants, achieving approximately 60 percent more than gossiping and streaming with anti-entropy. The two epidemic techniques send an excessive number of duplicates, effectively reducing the useful bandwidth provided to each node. More importantly, both approaches assign equal significance to other peers, regardless of the available bandwidth and the similarity ratio.
Figure 11: Achieved bandwidth over time for Bullet and epidemic approaches.
Bullet, on the other hand, establishes long-term connections with peers that provide good bandwidth and disjoint content, and avoids most of the duplicates by requesting disjoint data from each node's peers.
4.5 Bullet on a Lossy Network
To evaluate Bullet's performance under more lossy network conditions, we have modified the 20,000-node topologies used in our previous experiments to include random packet losses. ModelNet allows the specification of a packet loss rate in the description of a network link. Our goal in modifying these loss rates is to simulate queuing behavior when the network is under load due to background network traffic. To
effect this behavior, we first modify all non-transit links in each topology to have a packet loss rate chosen uniformly at random from [0, 0.003], resulting in a maximum loss rate of 0.3%. Transit links are likewise modified, but with a maximum loss rate of 0.1%. Similar to the approach in [28], we randomly designated 5% of the links in the topologies as overloaded and set their loss rates uniformly at random from [0.05, 0.1], resulting in a maximum packet loss rate of 10%.
Figure 12 shows achieved bandwidths for streaming over Bullet and over our greedy offline bottleneck bandwidth tree. Because losses adversely affect the bandwidth achievable over TCP-friendly transport, and since bandwidths are strictly monotonically decreasing over a streaming tree, tree-based algorithms perform considerably worse than Bullet when used on a lossy network. In all cases, Bullet delivers at least twice as much bandwidth as the bottleneck bandwidth tree. Additionally, losses in the low-bandwidth topology essentially prevent the bottleneck bandwidth tree from delivering any data, an artifact that Bullet avoids.
Figure 12: Achieved bandwidths for Bullet and the bottleneck bandwidth tree over a lossy network topology.
4.6 Performance Under Failure
In this section, we discuss Bullet's behavior in the face of node failure. In contrast to streaming distribution trees, which must quickly detect failures and make tree transformations to overcome them, Bullet's failure resilience rests on its ability to maintain a higher level of achieved bandwidth by virtue of perpendicular (peer) streaming. While all nodes under a failed node in a distribution tree will experience a temporary disruption in service, Bullet
nodes are able to compensate by receiving data from peers throughout the outage. Because Bullet and, more importantly, RanSub make use of an underlying tree overlay, part of Bullet's failure recovery properties depends on the failure recovery behavior of the underlying tree. For the purposes of this discussion, we simply assume the worst-case scenario, in which the underlying tree has no failure recovery.
In our failure experiments, we fail one of the root's children (with 110 of the total 1000 nodes as descendants) 250 seconds after data streaming has started. By failing one of the root's children, we are able to show Bullet's worst-case performance under a single node failure. In our first scenario, we disable failure detection in RanSub, so that after a failure occurs, Bullet nodes request data only from their current peers. That is, at this point RanSub stops functioning and no new peer relationships are created for the remainder of the run. Figure 13 shows Bullet's achieved bandwidth over time for this case. While the average achieved rate drops from 500 Kbps to 350 Kbps, most nodes (including the descendants of the failed root child) are able to recover a large portion of the data rate.
Next, we enable RanSub failure detection, which recognizes a node's failure when a RanSub epoch has lasted longer than a predetermined maximum (5 seconds for this test). In this case, the root simply initiates the next distribute phase upon RanSub timeout. The net result is that nodes that are not descendants of the failed node continue to receive updated random subsets, allowing them to peer with appropriate nodes reflecting the new network conditions. As shown in Figure 14, the failure causes a negligible disruption in performance. With RanSub failure detection enabled, nodes quickly learn of other nodes from which to receive data. Once such recovery completes, the descendants of the failed node use their already established peer relationships to
compensate for their ancestor's failure. Hence, because Bullet is an overlay mesh, its reliability characteristics far exceed those of typical overlay distribution trees.
Figure 13: Bandwidth over time with a worst-case node failure and no RanSub recovery.
Figure 14: Bandwidth over time with a worst-case node failure and RanSub recovery enabled.
4.7 PlanetLab
This section contains results from the deployment of Bullet over the PlanetLab [31] wide-area network testbed. For our first experiment, we chose 47 nodes for our deployment, with no two machines deployed at the same site. Since there is currently ample bandwidth available throughout the PlanetLab overlay (a characteristic not necessarily representative of the Internet at large), we designed this experiment to show that Bullet can achieve higher bandwidth than an overlay tree when the source is constrained, for instance in cases of congestion on its outbound access link or of overload by a flash crowd. We did this by choosing a root in Europe connected to PlanetLab with fairly low bandwidth. The node we selected was in Italy (cs.unibo.it), and we had 10 other overlay nodes in Europe. Without global knowledge of the topology of PlanetLab (and the Internet), we are, of course, unable to produce our greedy bottleneck bandwidth tree for comparison. We ran Bullet over a random overlay tree for 300 seconds while attempting to stream at a rate of 1.5 Mbps, waiting 50 seconds before starting to stream data to allow nodes to successfully join the tree. We compare the performance of Bullet to data streaming over multiple handcrafted trees. Figure 15 shows our results for two such trees. The
good tree has all nodes in Europe located high in the tree, close to the root. We used pathload [20] to measure the available bandwidth between the root and all other nodes; nodes with high bandwidth measurements were placed close to the root. In this case, we are able to achieve a bandwidth of approximately 300 Kbps.
Figure 15: Achieved bandwidth over time for Bullet and TFRC streaming over different trees on PlanetLab with a root in Europe.
The worst tree was created by setting the root's children to be the three nodes with the worst bandwidth characteristics from the root, as measured by pathload; all subsequent levels of the tree were set in this fashion. For comparison, we replaced all nodes in Europe in our topology with nodes in the US, creating a topology that included only US nodes with high-bandwidth characteristics. As expected, Bullet was able to achieve the full 1.5 Mbps rate in this case. A well-constructed tree over this high-bandwidth topology yielded slightly less than 1.5 Mbps, verifying that our approach does not sacrifice performance under high-bandwidth conditions while improving performance under constrained-bandwidth scenarios.
5. RELATED WORK
Snoeren et al.
[36] use an overlay mesh to achieve reliable and timely delivery of mission-critical data. In this system, every node chooses n parents from which to receive duplicate packet streams. Since its foremost emphasis is reliability, the system does not attempt to improve the bandwidth delivered to the overlay participants by sending disjoint data at each level. Further, during recovery from parent failure, it limits an overlay router's choice of parents to nodes with a level number less than its own.
The power of perpendicular downloads is perhaps best illustrated by Kazaa [22], the popular peer-to-peer file-swapping network. Kazaa nodes are organized into a scalable, hierarchical structure. Individual users search for desired content in the structure and proceed to simultaneously download potentially disjoint pieces from nodes that already have it. Since Kazaa does not address the multicast communication model, a large fraction of users downloading the same file would consume more bandwidth than nodes organized into the Bullet overlay structure. Kazaa does not use erasure coding; therefore, it may take considerable time to locate the last few bytes.
BitTorrent [3] is another example of a file distribution system currently deployed on the Internet. It utilizes trackers that direct downloaders to random subsets of machines that already have portions of the file. The tracker poses a scalability limit, as it continuously updates the system-wide distribution of the file. Lowering the tracker communication rate could hurt overall system performance, as information might be out of date. Further, BitTorrent does not employ any strategy to disseminate data to different regions of the network, potentially making it more difficult to recover data depending on client access patterns. Similar to Bullet, BitTorrent incorporates the notion of choking at each node, with the goal of identifying receivers that benefit the most by downloading from
that particular source.
FastReplica [11] addresses the problem of reliable and efficient file distribution in content distribution networks (CDNs). In the basic algorithm, nodes are organized into groups of fixed size (n), with full group membership information at each node. To distribute a file, a node splits it into n equal-sized portions, sends the portions to the other group members, and instructs them to download the missing pieces in parallel from the other group members. Since only a fixed portion of the file is transmitted along each of the overlay links, the impact of congestion is smaller than in the case of tree distribution. However, since it treats all paths equally, FastReplica does not take full advantage of high-bandwidth overlay links in the system. Since it requires the file store-and-forward logic at each level of the hierarchy that is necessary for scaling the system, it may not be applicable to high-bandwidth streaming.
There are numerous protocols that aim to add reliability to IP multicast. In Scalable Reliable Multicast (SRM) [16], nodes multicast retransmission requests for missed packets. Two techniques attempt to improve the scalability of this approach: probabilistic choice of retransmission timeouts, and organization of receivers into hierarchical local recovery groups. However, it is difficult to find appropriate timer values and local scoping settings (via the TTL field) for a wide range of topologies, numbers of receivers, etc.
even when adaptive techniques are used. One recent study [2] shows that SRM may have significant overhead due to retransmission requests.
Bullet is closely related to efforts that use epidemic data propagation techniques to recover from losses in a non-reliable IP-multicast tree. In pbcast [2], a node has global group membership and periodically chooses a random subset of peers to which it sends a digest of its received packets. A node that receives the digest responds to the sender with the missing packets in a last-in, first-out fashion. Lpbcast [14] addresses pbcast's scalability issues (associated with global knowledge) by constructing, in a decentralized fashion, a partial group membership view at each node. The average size of the views is engineered to allow a message to reach all participants with high probability. Since lpbcast does not require an underlying tree for data distribution and relies on the push-gossiping model, its network overhead can be quite high.
Compared to the reliable multicast efforts, Bullet behaves favorably in terms of network overhead because nodes do not blindly request retransmissions from their peers. Instead, Bullet uses the summary views it obtains through RanSub to guide its actions toward nodes with disjoint content. Further, a Bullet node splits the retransmission load among all of its peers. We note that pbcast nodes contain a mechanism to rate-limit retransmitted packets and to send different packets in response to the same digest. However, this does not guarantee that packets received in parallel from multiple peers will not be duplicates. More importantly, the multicast recovery methods are limited by the bandwidth through the tree, while Bullet strives to provide more bandwidth to all receivers by making data deliberately disjoint throughout the tree.
Narada [19] builds a delay-optimized mesh interconnecting all participating nodes and actively measures the available bandwidth on overlay links. It then
runs a standard routing protocol on top of the overlay mesh to construct forwarding trees using each node as a possible source. Narada nodes maintain global knowledge about all group participants, limiting system scalability to several tens of nodes. Further, the bandwidth available through a Narada tree is still limited to the bandwidth available from each parent. The fundamental goal of Bullet, on the other hand, is to increase bandwidth through the download of disjoint data from multiple peers.
Overcast [21] is an example of a bandwidth-efficient overlay tree construction algorithm. In this system, all nodes join at the root and migrate down to the point in the tree where they are still able to maintain some minimum level of bandwidth. Bullet is expected to be more resilient to node departures than any tree, including Overcast. Instead of waiting to get the missed data from a new parent, a node can start getting data from its perpendicular peers. This transition is seamless, as the node that is disconnected from its parent will start demanding more missing packets from its peers during the standard round of refreshing its filters. Overcast convergence time is limited by probes to immediate siblings and ancestors. Bullet is able to provide approximately the target bandwidth without having a fully converged tree.
In parallel to our own work, SplitStream [9] also has the goal of achieving high-bandwidth data dissemination. It operates by splitting the multicast stream into k stripes and transmitting each stripe along a separate multicast tree built using Scribe [34]. The key design goal of the tree construction mechanism is to have each node be an intermediate node in at most one tree (while observing both inbound and outbound node bandwidth constraints), thereby reducing the impact of a single node's sudden departure on the rest of the system. The join procedure can potentially sacrifice the interior-node-disjointness achieved by Scribe. Perhaps
more importantly, SplitStream assumes that there is enough available bandwidth to carry each stripe on every link of the tree, including the links between the data source and the roots of the individual stripe trees independently chosen by Scribe. To some extent, Bullet and SplitStream are complementary; for instance, Bullet could run on each of the stripes to maximize the bandwidth delivered to each node along each stripe.
CoopNet [29] considers live content streaming in a peer-to-peer environment subject to high node churn. Consequently, the system favors resilience over network efficiency. It uses a centralized approach for constructing either random or deterministic node-disjoint (similar to SplitStream) trees, and it includes an MDC [17] adaptation framework, based on scalable receiver feedback, that attempts to maximize the signal-to-noise ratio perceived by receivers. In the case of on-demand streaming, CoopNet [30] addresses the flash-crowd problem at the central server by redirecting incoming clients to a fixed number of nodes that have previously retrieved portions of the same content. Compared to CoopNet, Bullet provides nodes with a uniformly random subset of the system-wide distribution of the file.
6. CONCLUSIONS
Typically, high-bandwidth overlay data streaming takes place over a distribution tree. In this paper, we argue that an overlay mesh is, in fact, able to deliver fundamentally higher bandwidth. Of course, a number of difficult challenges must be overcome to ensure that nodes in the mesh do not repeatedly receive the same data from peers. This paper presents the design and implementation of Bullet, a scalable and efficient overlay construction algorithm that overcomes this challenge to deliver significant bandwidth improvements relative to traditional tree structures. Specifically, this paper makes the following contributions:
• We present the design and analysis of Bullet, an overlay construction algorithm that creates a mesh
over any distribution tree and allows overlay participants to achieve higher bandwidth throughput than traditional data streaming. As a related benefit, we eliminate the overhead required to probe for available bandwidth in traditional distributed tree construction techniques.

• We provide a technique for recovering missing data from peers in a scalable and efficient manner. RanSub periodically disseminates summaries of the data sets received by a changing, uniformly random subset of global participants.

• We propose a mechanism for making data disjoint and then distributing it in a uniform way that makes the probability of finding a peer containing missing data equal for all nodes.

• A large-scale evaluation of 1000 overlay participants running in an emulated 20,000-node network topology, as well as experimentation on top of the PlanetLab Internet testbed, shows that Bullet running over a random tree can achieve twice the throughput of streaming over a traditional bandwidth tree.

Acknowledgments

We would like to thank David Becker for his invaluable help with our ModelNet experiments and Ken Yocum for his help with ModelNet emulation optimizations. In addition, we thank our shepherd Barbara Liskov and our anonymous reviewers, who provided excellent feedback.

7. REFERENCES

[1] Suman Banerjee, Bobby Bhattacharjee, and Christopher Kommareddy. Scalable Application Layer Multicast. In Proceedings of ACM SIGCOMM, August 2002.
[2] Kenneth Birman, Mark Hayden, Oznur Ozkasap, Zhen Xiao, Mihai Budiu, and Yaron Minsky. Bimodal Multicast. ACM Transactions on Computer Systems, 17(2), May 1999.
[3] BitTorrent. http://bitconjurer.org/BitTorrent.
[4] Burton Bloom. Space/Time Trade-offs in Hash Coding with Allowable Errors. Communications of the ACM, 13(7):422-426, July 1970.
[5] Andrei Broder. On the Resemblance and Containment of Documents. In Proceedings of Compression and Complexity of Sequences (SEQUENCES '97), 1997.
[6] John W.
Byers, Jeffrey Considine, Michael Mitzenmacher, and Stanislav Rost. Informed Content Delivery Across Adaptive Overlay Networks. In Proceedings of ACM SIGCOMM, August 2002.
[7] John W. Byers, Michael Luby, Michael Mitzenmacher, and Ashutosh Rege. A Digital Fountain Approach to Reliable Distribution of Bulk Data. In Proceedings of ACM SIGCOMM, pages 56-67, 1998.
[8] Ken Calvert, Matt Doar, and Ellen W. Zegura. Modeling Internet Topology. IEEE Communications Magazine, June 1997.
[9] Miguel Castro, Peter Druschel, Anne-Marie Kermarrec, Animesh Nandi, Antony Rowstron, and Atul Singh. SplitStream: High-Bandwidth Content Distribution in Cooperative Environments. In Proceedings of the 19th ACM Symposium on Operating Systems Principles, October 2003.
[10] Hyunseok Chang, Ramesh Govindan, Sugih Jamin, Scott Shenker, and Walter Willinger. Towards Capturing Representative AS-Level Internet Topologies. In Proceedings of ACM SIGMETRICS, June 2002.
[11] Ludmila Cherkasova and Jangwon Lee. FastReplica: Efficient Large File Distribution within Content Delivery Networks. In 4th USENIX Symposium on Internet Technologies and Systems, March 2003.
[12] Reuven Cohen and Gideon Kaempfer. A Unicast-based Approach for Streaming Multicast. In INFOCOM, pages 440-448, 2001.
[13] Patrick Eugster, Sidath Handurukande, Rachid Guerraoui, Anne-Marie Kermarrec, and Petr Kouznetsov. Lightweight Probabilistic Broadcast. To appear in ACM Transactions on Computer Systems.
[14] Patrick Eugster, Sidath Handurukande, Rachid Guerraoui, Anne-Marie Kermarrec, and Petr Kouznetsov. Lightweight Probabilistic Broadcast. In Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2001.
[15] Sally Floyd, Mark Handley, Jitendra Padhye, and Jorg Widmer. Equation-Based Congestion Control for Unicast Applications. In SIGCOMM 2000, pages 43-56, Stockholm, Sweden, August 2000.
[16] Sally Floyd, Van Jacobson, Ching-Gung Liu, Steven McCanne, and Lixia Zhang. A Reliable Multicast
Framework for Light-weight Sessions and Application Level Framing. IEEE/ACM Transactions on Networking, 5(6):784-803, 1997.
[17] Vivek K. Goyal. Multiple Description Coding: Compression Meets the Network. IEEE Signal Processing Magazine, pages 74-93, May 2001.
[18] Yang-hua Chu, Sanjay Rao, and Hui Zhang. A Case for End System Multicast. In Proceedings of the ACM SIGMETRICS 2000 International Conference on Measurement and Modeling of Computer Systems, June 2000.
[19] Yang-hua Chu, Sanjay G. Rao, Srinivasan Seshan, and Hui Zhang. Enabling Conferencing Applications on the Internet Using an Overlay Multicast Architecture. In Proceedings of ACM SIGCOMM, August 2001.
[20] Manish Jain and Constantinos Dovrolis. End-to-End Available Bandwidth: Measurement Methodology, Dynamics, and Relation with TCP Throughput. In Proceedings of SIGCOMM 2002, New York, August 19-23, 2002.
[21] John Jannotti, David K. Gifford, Kirk L. Johnson, M. Frans Kaashoek, and James W. O'Toole, Jr. Overcast: Reliable Multicasting with an Overlay Network. In Proceedings of Operating Systems Design and Implementation (OSDI), October 2000.
[22] Kazaa media desktop. http://www.kazaa.com.
[23] Min Sik Kim, Simon S. Lam, and Dong-Young Lee. Optimal Distribution Tree for Internet Streaming Media. Technical Report TR-02-48, Department of Computer Sciences, University of Texas at Austin, September 2002.
[24] Dejan Kostić, Adolfo Rodriguez, Jeannie Albrecht, Abhijeet Bhirud, and Amin Vahdat. Using Random Subsets to Build Scalable Network Services. In Proceedings of the USENIX Symposium on Internet Technologies and Systems, March 2003.
[25] Michael Luby. LT Codes. In The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002.
[26] Michael G. Luby, Michael Mitzenmacher, M. Amin Shokrollahi, Daniel A.
Spielman, and Volker Stemann. Practical Loss-Resilient Codes. In Proceedings of the 29th Annual ACM Symposium on the Theory of Computing (STOC '97), pages 150-159, New York, May 1997. Association for Computing Machinery.
[27] Jitendra Padhye, Victor Firoiu, Don Towsley, and Jim Kurose. Modeling TCP Throughput: A Simple Model and its Empirical Validation. In ACM SIGCOMM '98 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 303-314, Vancouver, Canada, 1998.
[28] Venkata N. Padmanabhan, Lili Qiu, and Helen J. Wang. Server-Based Inference of Internet Link Lossiness. In Proceedings of IEEE INFOCOM, San Francisco, CA, USA, 2003.
[29] Venkata N. Padmanabhan, Helen J. Wang, and Philip A. Chou. Resilient Peer-to-Peer Streaming. In Proceedings of the 11th ICNP, Atlanta, Georgia, USA, 2003.
[30] Venkata N. Padmanabhan, Helen J. Wang, Philip A. Chou, and Kunwadee Sripanidkulchai. Distributing Streaming Media Content Using Cooperative Networking. In ACM/IEEE NOSSDAV, 2002.
[31] Larry Peterson, Tom Anderson, David Culler, and Timothy Roscoe. A Blueprint for Introducing Disruptive Technology into the Internet. In Proceedings of ACM HotNets-I, October 2002.
[32] R. C.
Prim.\nShortest Connection Networks and Some Generalizations.\nIn Bell Systems Technical Journal, pages 1389-1401, November 1957.\n[33] Adolfo Rodriguez, Sooraj Bhat, Charles Killian, Dejan Kosti\u00b4c, and Amin Vahdat.\nMACEDON: Methodology for Automatically Creating, Evaluating, and Designing Overlay Networks.\nTechnical Report CS-2003-09, Duke University, July 2003.\n[34] Antony Rowstron, Anne-Marie Kermarrec, Miguel Castro, and Peter Druschel.\nSCRIBE: The Design of a Large-scale Event Notification Infrastructure.\nIn Third International Workshop on Networked Group Communication, November 2001.\n[35] Stefan Savage.\nSting: A TCP-based Network Measurement Tool.\nIn Proceedings of the 2nd USENIX Symposium on Internet Technologies and Systems (USITS-99), pages 71-80, Berkeley, CA, October 11-14 1999.\nUSENIX Association.\n[36] Alex C. Snoeren, Kenneth Conley, and David K. Gifford.\nMesh-Based Content Routing Using XML.\nIn Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP ``01), October 2001.\n[37] Amin Vahdat, Ken Yocum, Kevin Walsh, Priya Mahadevan, Dejan Kosti\u00b4c, Jeff Chase, and David Becker.\nScalability and Accuracy in a Large-Scale Network Emulator.\nIn Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI), December 2002.\n297","lvl-3":"Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh\nABSTRACT\nIn recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet.\nTypically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network.\nIn this paper, we target high-bandwidth data distribution from a single source to a large number of receivers.\nApplications include large-file transfers and real-time multimedia streaming.\nFor these applications, we argue that an overlay mesh, rather than a 
tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures.\nThis paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh.\nWe construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network.\nIndividual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.\nKey contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances.\nIn addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.\nIn a tree, it is critical that a node's parent delivers a high rate of application data to each child.\nIn Bullet however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.\n1.\nINTRODUCTION\nIn this paper, we consider the following general problem.\nGiven a sender and a large set of interested receivers spread across the Internet, how can we maximize the amount of bandwidth delivered to receivers?\nOur problem domain includes software or video distribution and real-time multimedia streaming.\nTraditionally, native IP multicast has been the preferred method for delivering content to a set of receivers in a scalable fashion.\nHowever, a number of considerations, including scale, reliability, and congestion control, have limited the 
wide-scale deployment of IP multicast.\nEven if all these problems were to be addressed, IP multicast does not consider bandwidth when constructing its distribution tree.\nMore recently, overlays have emerged as a promising alternative to multicast for network-efficient point to multipoint data delivery.\nTypical overlay structures attempt to mimic the structure of multicast routing trees.\nIn network-layer multicast however, interior nodes consist of high speed routers with limited processing power and extensibility.\nOverlays, on the other hand, use programmable (and hence extensible) end hosts as interior nodes in the overlay tree, with these hosts acting as repeaters to multiple children down the tree.\nOverlays have shown tremendous promise for multicast-style applications.\nHowever, we argue that a tree structure has fundamental limitations both for high bandwidth multicast and for high reliability.\nOne difficulty with trees is that bandwidth is guaranteed to be monotonically decreasing moving down the tree.\nAny loss high up the tree will reduce the bandwidth available to receivers lower down the tree.\nA number of techniques have been proposed to recover from losses and hence improve the available bandwidth in an overlay tree [2, 6].\nHowever, fundamentally, the bandwidth available to any host is limited by the bandwidth available from that node's single parent in the tree.\nThus, our work operates on the premise that the model for high-bandwidth multicast data dissemination should be re-examined.\nRather than sending identical copies of the same data stream to all nodes in a tree and designing a scalable mechanism for recovering from loss, we propose that participants in a multicast overlay cooperate to strategically\ntransmit disjoint data sets to various points in the network.\nHere, the sender splits data into sequential blocks.\nBlocks are further subdivided into individual objects which are in turn transmitted to different points in the 
network.\nNodes still receive a set of objects from their parents, but they are then responsible for locating peers that hold missing data objects.\nWe use a distributed algorithm that aims to make the availability of data items uniformly spread across all overlay participants.\nIn this way, we avoid the problem of locating the \"last object\", which may only be available at a few nodes.\nOne hypothesis of this work is that, relative to a tree, this model will result in higher bandwidth--leveraging the bandwidth from simultaneous parallel downloads from multiple sources rather than a single parent--and higher reliability--retrieving data from multiple peers reduces the potential damage from a single node failure.\nTo illustrate Bullet's behavior, consider a simple three node overlay with a root R and two children A and B. R has 1 Mbps of available (TCP-friendly) bandwidth to each of A and B. However, there is also 1 Mbps of available bandwidth between A and B.\nIn this example, Bullet would transmit a disjoint set of data at 1 Mbps to each of A and B.\nA and B would then each independently discover the availability of disjoint data at the remote peer and begin streaming data to one another, effectively achieving a retrieval rate of 2 Mbps.\nOn the other hand, any overlay tree is restricted to delivering at most 1 Mbps even with a scalable technique for recovering lost data.\nAny solution for achieving the above model must maintain a number of properties.\nFirst, it must be TCP friendly [15].\nNo flow should consume more than its fair share of the bottleneck bandwidth and each flow must respond to congestion signals (losses) by reducing its transmission rate.\nSecond, it must impose low control overhead.\nThere are many possible sources of such overhead, including probing for available bandwidth between nodes, locating appropriate nodes to \"peer\" with for data retrieval and redundantly receiving the same data objects from multiple sources.\nThird, the algorithm 
should be decentralized and scalable to thousands of participants.\nNo node should be required to learn or maintain global knowledge, for instance global group membership or the set of data objects currently available at all nodes.\nFinally, the approach must be robust to individual failures.\nFor example, the failure of a single node should result only in a temporary reduction in the bandwidth delivered to a small subset of participants; no single failure should result in the complete loss of data for any significant fraction of nodes, as might be the case for a single node failure \"high up\" in a multicast overlay tree.\nIn this context, this paper presents the design and evaluation of Bullet, an algorithm for constructing an overlay mesh that attempts to maintain the above properties.\nBullet nodes begin by self-organizing into an overlay tree, which can be constructed by any of a number of existing techniques [1, 18, 21, 24, 34].\nEach Bullet node, starting with the root of the underlying tree, then transmits a disjoint set of data to each of its children, with the goal of maintaining uniform representativeness of each data item across all participants.\nThe level of disjointness is determined by the bandwidth available to each of its children.\nBullet then employs a scalable and efficient algorithm to enable nodes to quickly locate multiple peers capable of transmitting missing data items to the node.\nThus, Bullet layers a high-bandwidth mesh on top of an arbitrary overlay tree.\nDepending on the type of data being transmitted, Bullet can optionally employ a variety of encoding schemes, for instance Erasure codes [7, 26, 25] or Multiple Description Coding (MDC) [17], to efficiently disseminate data, adapt to variable bandwidth, and recover from losses.\nFinally, we use TFRC [15] to transfer data both down the overlay tree and among peers.\nThis ensures that the entire overlay behaves in a congestion-friendly manner, adjusting its transmission rate on a 
per-connection basis based on prevailing network conditions.\nOne important benefit of our approach is that the bandwidth delivered by the Bullet mesh is somewhat independent of the bandwidth available through the underlying overlay tree.\nOne significant limitation to building high bandwidth overlay trees is the overhead associated with the tree construction protocol.\nIn these trees, it is critical that each participant locates a parent via probing with a high level of available bandwidth because it receives data from only a single source (its parent).\nThus, even once the tree is constructed, nodes must continue their probing to adapt to dynamically changing network conditions.\nWhile bandwidth probing is an active area of research [20, 35], accurate results generally require the transfer of a large amount of data to gain confidence in the results.\nOur approach with Bullet allows receivers to obtain high bandwidth in aggregate using individual transfers from peers spread across the system.\nThus, in Bullet, the bandwidth available from any individual peer is much less important than in any bandwidthoptimized tree.\nFurther, all the bandwidth that would normally be consumed probing for bandwidth can be reallocated to streaming data across the Bullet mesh.\nWe have completed a prototype of Bullet running on top of a number of overlay trees.\nOur evaluation of a 1000-node overlay running across a wide variety of emulated 20,000 node network topologies shows that Bullet can deliver up to twice the bandwidth of a bandwidth-optimized tree (using an offline algorithm and global network topology information), all while remaining TCP friendly.\nWe also deployed our prototype across the PlanetLab [31] wide-area testbed.\nFor these live Internet runs, we find that Bullet can deliver comparable bandwidth performance improvements.\nIn both cases, the overhead of maintaining the Bullet mesh and locating the appropriate disjoint data is limited to 30 Kbps per node, acceptable 
for our target high-bandwidth, large-scale scenarios.\nThe remainder of this paper is organized as follows.\nSection 2 presents Bullet's system components including RanSub, informed content delivery, and TFRC.\nSection 3 then details Bullet, an efficient data distribution system for bandwidth intensive applications.\nSection 4 evaluates Bullet's performance for a variety of network topologies, and compares it to existing multicast techniques.\nSection 5 places our work in the context of related efforts and Section 6 presents our conclusions.\n2.\nSYSTEM COMPONENTS\n2.1 Data Encoding\n2.2 RanSub\n2.3 Informed Content Delivery Techniques\n2.4 TCP Friendly Rate Control\n3.\nBULLET\n3.1 Finding Overlay Peers\n3.2 Recovering Data From Peers\n3.3 Making Data Disjoint\n3.4 Improving the Bullet Mesh\n4.\nEVALUATION\n4.1 Offline Bottleneck Bandwidth Tree\n4.2 Bullet vs. Streaming\n4.3 Creating Disjoint Data\n4.4 Epidemic Approaches\n4.5 Bullet on a Lossy Network\n4.6 Performance Under Failure\n4.7 PlanetLab\n5.\nRELATED WORK\nSnoeren et al. 
[36] use an overlay mesh to achieve reliable and timely delivery of mission-critical data.\nIn this system, every node chooses n \"parents\" from which to receive duplicate packet streams.\nSince its foremost emphasis is reliability, the system does not attempt to improve the bandwidth delivered to the overlay participants by sending disjoint data at each level.\nFurther, during recovery from parent failure, it limits an overlay router's choice of parents to nodes with a level number that is less than its own level number.\nThe power of \"perpendicular\" downloads is perhaps best illustrated by Kazaa [22], the popular peer-to-peer file swapping network.\nKazaa nodes are organized into a scalable, hierarchical structure.\nIndividual users search for desired content in the structure and proceed to simultaneously download potentially disjoint pieces from nodes that already have it.\nSince Kazaa does not address the multicast communication model, a large fraction of users downloading the same file would consume more bandwidth than nodes organized into the Bullet overlay structure.\nKazaa does not use erasure coding; therefore it may take considerable time to locate \"the last few bytes.\"\nBitTorrent [3] is another example of a file distribution system currently deployed on the Internet.\nIt utilizes trackers that direct downloaders to random subsets of machines that already have portions of the file.\nThe tracker poses a scalability limit, as it continuously updates the systemwide distribution of the file.\nLowering the tracker communication rate could hurt the overall system performance, as information might be out of date.\nFurther, BitTorrent does not employ any strategy to disseminate data to different regions of the network, potentially making it more difficult to recover data depending on client access patterns.\nSimilar to Bullet, BitTorrent incorporates the notion of \"choking\" at each node with the goal of identifying receivers that benefit the most by 
downloading from that particular source.\nFastReplica [11] addresses the problem of reliable and efficient file distribution in content distribution networks (CDNs).\nIn the basic algorithm, nodes are organized into groups of fixed size (n), with full group membership information at each node.\nTo distribute the file, a node splits it into n equal-sized portions, sends the portions to other group members, and instructs them to download the missing pieces in parallel from other group members.\nSince only a fixed portion of the file is transmitted along each of the overlay links, the impact of congestion is smaller than in the case of tree distribution.\nHowever, since it treats all paths equally, FastReplica does not take full advantage of highbandwidth overlay links in the system.\nSince it requires file store-and-forward logic at each level of the hierarchy necessary for scaling the system, it may not be applicable to high-bandwidth streaming.\nThere are numerous protocols that aim to add reliability to IP multicast.\nIn Scalable Reliable Multicast (SRM) [16], nodes multicast retransmission requests for missed packets.\nTwo techniques attempt to improve the scalability of this approach: probabilistic choice of retransmission timeouts, and organization of receivers into hierarchical local recovery groups.\nHowever, it is difficult to find appropriate timer values and local scoping settings (via the TTL field) for a wide range of topologies, number of receivers, etc. 
even when adaptive techniques are used.\nOne recent study [2] shows that SRM may have significant overhead due to retransmission requests.\nBullet is closely related to efforts that use epidemic data propagation techniques to recover from losses in the nonreliable IP-multicast tree.\nIn pbcast [2], a node has global group membership, and periodically chooses a random subset of peers to send a digest of its received packets.\nA node that receives the digest responds to the sender with the missing packets in a last-in, first-out fashion.\nLbpcast [14] addresses pbcast's scalability issues (associated with global knowledge) by constructing, in a decentralized fashion, a partial group membership view at each node.\nThe average size of the views is engineered to allow a message to reach all participants with high probability.\nSince lbpcast does not require an underlying tree for data distribution and relies on the push-gossiping model, its network overhead can be quite high.\nCompared to the reliable multicast efforts, Bullet behaves favorably in terms of the network overhead because nodes do not \"blindly\" request retransmissions from their peers.\nInstead, Bullet uses the summary views it obtains through RanSub to guide its actions toward nodes with disjoint content.\nFurther, a Bullet node splits the retransmission load between all of its peers.\nWe note that pbcast nodes contain a mechanism to rate-limit retransmitted packets and to send different packets in response to the same digest.\nHowever, this does not guarantee that packets received in parallel from multiple peers will not be duplicates.\nMore importantly, the multicast recovery methods are limited by the bandwidth through the tree, while Bullet strives to provide more bandwidth to all receivers by making data deliberately disjoint throughout the tree.\nNarada [19] builds a delay-optimized mesh interconnecting all participating nodes and actively measures the available bandwidth on overlay links.\nIt then 
runs a standard routing protocol on top of the overlay mesh to construct forwarding trees using each node as a possible source.\nNarada nodes maintain global knowledge about all group participants, limiting system scalability to several tens of nodes.\nFurther, the bandwidth available through a Narada tree is still limited to the bandwidth available from each parent.\nOn the other hand, the fundamental goal of Bullet is to increase bandwidth through download of disjoint data from multiple peers.\nOvercast [21] is an example of a bandwidth-efficient overlay tree construction algorithm.\nIn this system, all nodes join at the root and migrate down to the point in the tree where they are still able to maintain some minimum level of bandwidth.\nBullet is expected to be more resilient to node departures than any tree, including Overcast.\nInstead of a node waiting to get the data it missed from a new parent, a node can start getting data from its perpendicular peers.\nThis transition is seamless, as the node that is disconnected from its parent will start demanding more missing packets from its peers during the standard round of refreshing its filters.\nOvercast convergence time is limited by probes to immediate siblings and ancestors.\nBullet is able to provide approximately a target bandwidth without having a fully converged tree.\nIn parallel to our own work, SplitStream [9] also has the goal of achieving high bandwidth data dissemination.\nIt operates by splitting the multicast stream into k stripes, transmitting each stripe along a separate multicast tree built using Scribe [34].\nThe key design goal of the tree construction mechanism is to have each node be an intermediate node in at most one tree (while observing both inbound and outbound node bandwidth constraints), thereby reducing the impact of a single node's sudden departure on the rest of the system.\nThe join procedure can potentially sacrifice the interior-node-disjointness achieved by Scribe.\nPerhaps 
more importantly, SplitStream assumes that there is enough available bandwidth to carry each stripe on every link of the tree, including the links between the data source and the roots of individual stripe trees independently chosen by Scribe.\nTo some extent, Bullet and SplitStream are complementary.\nFor instance, Bullet could run on each of the stripes to maximize the bandwidth delivered to each node along each stripe.\nCoopNet [29] considers live content streaming in a peerto-peer environment, subject to high node churn.\nConsequently, the system favors resilience over network efficiency.\nIt uses a centralized approach for constructing either random or deterministic node-disjoint (similar to SplitStream) trees, and it includes an MDC [17] adaptation framework based on scalable receiver feedback that attempts to maximize the signal-to-noise ratio perceived by receivers.\nIn the case of on-demand streaming, CoopNet [30] addresses\nthe flash-crowd problem at the central server by redirecting incoming clients to a fixed number of nodes that have previously retrieved portions of the same content.\nCompared to CoopNet, Bullet provides nodes with a uniformly random subset of the system-wide distribution of the file.\n6.\nCONCLUSIONS\nTypically, high bandwidth overlay data streaming takes place over a distribution tree.\nIn this paper, we argue that, in fact, an overlay mesh is able to deliver fundamentally higher bandwidth.\nOf course, a number of difficult challenges must be overcome to ensure that nodes in the mesh do not repeatedly receive the same data from peers.\nThis paper presents the design and implementation of Bullet, a scalable and efficient overlay construction algorithm that overcomes this challenge to deliver significant bandwidth improvements relative to traditional tree structures.\nSpecifically, this paper makes the following contributions: 9 We present the design and analysis of Bullet, an overlay construction algorithm that creates a mesh over any 
distribution tree and allows overlay participants to achieve a higher bandwidth throughput than traditional data streaming.\nAs a related benefit, we eliminate the overhead required to probe for available bandwidth in traditional distributed tree construction techniques.\n9 We provide a technique for recovering missing data from peers in a scalable and efficient manner.\nRanSub periodically disseminates summaries of data sets received by a changing, uniformly random subset of global participants.\n9 We propose a mechanism for making data disjoint and then distributing it in a uniform way that makes the probability of finding a peer containing missing data equal for all nodes.\n9 A large-scale evaluation of 1000 overlay participants running in an emulated 20,000 node network topology, as well as experimentation on top of the PlanetLab Internet testbed, shows that Bullet running over a random tree can achieve twice the throughput of streaming over a traditional bandwidth tree.","lvl-4":"Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh\nABSTRACT\nIn recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet.\nTypically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network.\nIn this paper, we target high-bandwidth data distribution from a single source to a large number of receivers.\nApplications include large-file transfers and real-time multimedia streaming.\nFor these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures.\nThis paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh.\nWe construct Bullet around the insight that data should be 
distributed in a disjoint manner to strategic points in the network.\nIndividual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.\nKey contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances.\nIn addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.\nIn a tree, it is critical that a node's parent delivers a high rate of application data to each child.\nIn Bullet however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.\n1.\nINTRODUCTION\nIn this paper, we consider the following general problem.\nGiven a sender and a large set of interested receivers spread across the Internet, how can we maximize the amount of bandwidth delivered to receivers?\nOur problem domain includes software or video distribution and real-time multimedia streaming.\nTraditionally, native IP multicast has been the preferred method for delivering content to a set of receivers in a scalable fashion.\nHowever, a number of considerations, including scale, reliability, and congestion control, have limited the wide-scale deployment of IP multicast.\nEven if all these problems were to be addressed, IP multicast does not consider bandwidth when constructing its distribution tree.\nMore recently, overlays have emerged as a promising alternative to multicast for network-efficient point to multipoint data delivery.\nTypical overlay 
structures attempt to mimic the structure of multicast routing trees.\nIn network-layer multicast however, interior nodes consist of high speed routers with limited processing power and extensibility.\nOverlays, on the other hand, use programmable (and hence extensible) end hosts as interior nodes in the overlay tree, with these hosts acting as repeaters to multiple children down the tree.\nOverlays have shown tremendous promise for multicast-style applications.\nHowever, we argue that a tree structure has fundamental limitations both for high bandwidth multicast and for high reliability.\nOne difficulty with trees is that bandwidth is guaranteed to be monotonically decreasing moving down the tree.\nAny loss high up the tree will reduce the bandwidth available to receivers lower down the tree.\nA number of techniques have been proposed to recover from losses and hence improve the available bandwidth in an overlay tree [2, 6].\nHowever, fundamentally, the bandwidth available to any host is limited by the bandwidth available from that node's single parent in the tree.\nThus, our work operates on the premise that the model for high-bandwidth multicast data dissemination should be re-examined.\nRather than sending identical copies of the same data stream to all nodes in a tree and designing a scalable mechanism for recovering from loss, we propose that participants in a multicast overlay cooperate to strategically transmit disjoint data sets to various points in the network.\nHere, the sender splits data into sequential blocks.\nBlocks are further subdivided into individual objects which are in turn transmitted to different points in the network.\nNodes still receive a set of objects from their parents, but they are then responsible for locating peers that hold missing data objects.\nWe use a distributed algorithm that aims to make the availability of data items uniformly spread across all overlay participants.\nIn this way, we avoid the problem of locating the \"last 
object\", which may only be available at a few nodes.\nTo illustrate Bullet's behavior, consider a simple three node overlay with a root R and two children A and B. R has 1 Mbps of available (TCP-friendly) bandwidth to each of A and B. However, there is also 1 Mbps of available bandwidth between A and B.\nIn this example, Bullet would transmit a disjoint set of data at 1 Mbps to each of A and B.\nA and B would then each independently discover the availability of disjoint data at the remote peer and begin streaming data to one another, effectively achieving a retrieval rate of 2 Mbps.\nOn the other hand, any overlay tree is restricted to delivering at most 1 Mbps even with a scalable technique for recovering lost data.\nAny solution for achieving the above model must maintain a number of properties.\nFirst, it must be TCP friendly [15].\nSecond, it must impose low control overhead.\nThere are many possible sources of such overhead, including probing for available bandwidth between nodes, locating appropriate nodes to \"peer\" with for data retrieval and redundantly receiving the same data objects from multiple sources.\nThird, the algorithm should be decentralized and scalable to thousands of participants.\nNo node should be required to learn or maintain global knowledge, for instance global group membership or the set of data objects currently available at all nodes.\nFinally, the approach must be robust to individual failures.\nFor example, the failure of a single node should result only in a temporary reduction in the bandwidth delivered to a small subset of participants; no single failure should result in the complete loss of data for any significant fraction of nodes, as might be the case for a single node failure \"high up\" in a multicast overlay tree.\nIn this context, this paper presents the design and evaluation of Bullet, an algorithm for constructing an overlay mesh that attempts to maintain the above properties.\nBullet nodes begin by self-organizing 
into an overlay tree, which can be constructed by any of a number of existing techniques [1, 18, 21, 24, 34].\nEach Bullet node, starting with the root of the underlying tree, then transmits a disjoint set of data to each of its children, with the goal of maintaining uniform representativeness of each data item across all participants.\nThe level of disjointness is determined by the bandwidth available to each of its children.\nBullet then employs a scalable and efficient algorithm to enable nodes to quickly locate multiple peers capable of transmitting missing data items to the node.\nThus, Bullet layers a high-bandwidth mesh on top of an arbitrary overlay tree.\nFinally, we use TFRC [15] to transfer data both down the overlay tree and among peers.\nOne important benefit of our approach is that the bandwidth delivered by the Bullet mesh is somewhat independent of the bandwidth available through the underlying overlay tree.\nOne significant limitation to building high bandwidth overlay trees is the overhead associated with the tree construction protocol.\nIn these trees, it is critical that each participant locates a parent via probing with a high level of available bandwidth because it receives data from only a single source (its parent).\nThus, even once the tree is constructed, nodes must continue their probing to adapt to dynamically changing network conditions.\nWhile bandwidth probing is an active area of research [20, 35], accurate results generally require the transfer of a large amount of data to gain confidence in the results.\nOur approach with Bullet allows receivers to obtain high bandwidth in aggregate using individual transfers from peers spread across the system.\nThus, in Bullet, the bandwidth available from any individual peer is much less important than in any bandwidth-optimized tree.\nFurther, all the bandwidth that would normally be consumed probing for bandwidth can be reallocated to streaming data across the Bullet mesh.\nWe have completed a 
prototype of Bullet running on top of a number of overlay trees.\nOur evaluation of a 1000-node overlay running across a wide variety of emulated 20,000 node network topologies shows that Bullet can deliver up to twice the bandwidth of a bandwidth-optimized tree (using an offline algorithm and global network topology information), all while remaining TCP friendly.\nWe also deployed our prototype across the PlanetLab [31] wide-area testbed.\nFor these live Internet runs, we find that Bullet can deliver comparable bandwidth performance improvements.\nIn both cases, the overhead of maintaining the Bullet mesh and locating the appropriate disjoint data is limited to 30 Kbps per node, acceptable for our target high-bandwidth, large-scale scenarios.\nThe remainder of this paper is organized as follows.\nSection 2 presents Bullet's system components including RanSub, informed content delivery, and TFRC.\nSection 3 then details Bullet, an efficient data distribution system for bandwidth-intensive applications.\nSection 4 evaluates Bullet's performance for a variety of network topologies, and compares it to existing multicast techniques.\nSection 5 places our work in the context of related efforts and Section 6 presents our conclusions.\n5.\nRELATED WORK\nSnoeren et al. 
[36] use an overlay mesh to achieve reliable and timely delivery of mission-critical data.\nIn this system, every node chooses n \"parents\" from which to receive duplicate packet streams.\nSince its foremost emphasis is reliability, the system does not attempt to improve the bandwidth delivered to the overlay participants by sending disjoint data at each level.\nFurther, during recovery from parent failure, it limits an overlay router's choice of parents to nodes with a level number that is less than its own level number.\nIn the Kazaa peer-to-peer file-sharing network, nodes are organized into a scalable, hierarchical structure.\nIndividual users search for desired content in the structure and proceed to simultaneously download potentially disjoint pieces from nodes that already have it.\nSince Kazaa does not address the multicast communication model, a large fraction of users downloading the same file would consume more bandwidth than nodes organized into the Bullet overlay structure.\nBitTorrent [3] is another example of a file distribution system currently deployed on the Internet.\nBitTorrent's centralized tracker poses a scalability limit, as it continuously updates the system-wide distribution of the file.\nSimilar to Bullet, BitTorrent incorporates the notion of \"choking\" at each node with the goal of identifying receivers that benefit the most by downloading from that particular source.\nFastReplica [11] addresses the problem of reliable and efficient file distribution in content distribution networks (CDNs).\nIn the basic algorithm, nodes are organized into groups of fixed size (n), with full group membership information at each node.\nTo distribute the file, a node splits it into n equal-sized portions, sends the portions to other group members, and instructs them to download the missing pieces in parallel from other group members.\nSince only a fixed portion of the file is transmitted along each of the overlay links, the impact of congestion is smaller than in the case of tree distribution.\nHowever, since it 
treats all paths equally, FastReplica does not take full advantage of high-bandwidth overlay links in the system.\nThere are numerous protocols that aim to add reliability to IP multicast.\nIn Scalable Reliable Multicast (SRM) [16], nodes multicast retransmission requests for missed packets.\nBullet is closely related to efforts that use epidemic data propagation techniques to recover from losses in the non-reliable IP-multicast tree.\nIn pbcast [2], a node has global group membership, and periodically chooses a random subset of peers to send a digest of its received packets.\nA node that receives the digest responds to the sender with the missing packets in a last-in, first-out fashion.\nSince lpbcast, a related gossip protocol, does not require an underlying tree for data distribution and relies on the push-gossiping model, its network overhead can be quite high.\nCompared to the reliable multicast efforts, Bullet behaves favorably in terms of the network overhead because nodes do not \"blindly\" request retransmissions from their peers.\nInstead, Bullet uses the summary views it obtains through RanSub to guide its actions toward nodes with disjoint content.\nFurther, a Bullet node splits the retransmission load between all of its peers.\nWe note that pbcast nodes contain a mechanism to rate-limit retransmitted packets and to send different packets in response to the same digest.\nHowever, this does not guarantee that packets received in parallel from multiple peers will not be duplicates.\nMore importantly, the multicast recovery methods are limited by the bandwidth through the tree, while Bullet strives to provide more bandwidth to all receivers by making data deliberately disjoint throughout the tree.\nNarada [19] builds a delay-optimized mesh interconnecting all participating nodes and actively measures the available bandwidth on overlay links.\nIt then runs a standard routing protocol on top of the overlay mesh to construct forwarding trees using each node as a possible source.\nNarada 
nodes maintain global knowledge about all group participants, limiting system scalability to several tens of nodes.\nFurther, the bandwidth available through a Narada tree is still limited to the bandwidth available from each parent.\nOn the other hand, the fundamental goal of Bullet is to increase bandwidth through download of disjoint data from multiple peers.\nOvercast [21] is an example of a bandwidth-efficient overlay tree construction algorithm.\nIn this system, all nodes join at the root and migrate down to the point in the tree where they are still able to maintain some minimum level of bandwidth.\nBullet is expected to be more resilient to node departures than any tree, including Overcast.\nInstead of waiting to get the missed data from a new parent, a node can start retrieving data from its perpendicular peers.\nOvercast convergence time is limited by probes to immediate siblings and ancestors.\nBullet is able to provide approximately its target bandwidth even without a fully converged tree.\nIn parallel to our own work, SplitStream [9] also has the goal of achieving high bandwidth data dissemination.\nIt operates by splitting the multicast stream into k stripes, transmitting each stripe along a separate multicast tree built using Scribe [34].\nImportantly, SplitStream assumes that there is enough available bandwidth to carry each stripe on every link of the tree, including the links between the data source and the roots of individual stripe trees independently chosen by Scribe.\nTo some extent, Bullet and SplitStream are complementary.\nFor instance, Bullet could run on each of the stripes to maximize the bandwidth delivered to each node along each stripe.\nCoopNet [29] considers live content streaming in a peer-to-peer environment, subject to high node churn.\nConsequently, the system favors resilience over network efficiency.\nIn the case of on-demand streaming, CoopNet [30] addresses the flash-crowd problem at the central server by 
redirecting incoming clients to a fixed number of nodes that have previously retrieved portions of the same content.\nCompared to CoopNet, Bullet provides nodes with a uniformly random subset of the system-wide distribution of the file.\n6.\nCONCLUSIONS\nTypically, high bandwidth overlay data streaming takes place over a distribution tree.\nIn this paper, we argue that, in fact, an overlay mesh is able to deliver fundamentally higher bandwidth.\nOf course, a number of difficult challenges must be overcome to ensure that nodes in the mesh do not repeatedly receive the same data from peers.\nThis paper presents the design and implementation of Bullet, a scalable and efficient overlay construction algorithm that overcomes this challenge to deliver significant bandwidth improvements relative to traditional tree structures.\nSpecifically, this paper makes the following contributions: • We present the design and analysis of Bullet, an overlay construction algorithm that creates a mesh over any distribution tree and allows overlay participants to achieve a higher bandwidth throughput than traditional data streaming.\nAs a related benefit, we eliminate the overhead required to probe for available bandwidth in traditional distributed tree construction techniques.\n• We provide a technique for recovering missing data from peers in a scalable and efficient manner.\nRanSub periodically disseminates summaries of data sets received by a changing, uniformly random subset of global participants.\n• We propose a mechanism for making data disjoint and then distributing it in a uniform way that makes the probability of finding a peer containing missing data equal for all nodes.\n• A large-scale evaluation of 1000 overlay participants running in an emulated 20,000 node network topology, as well as experimentation on top of the PlanetLab Internet testbed, shows that Bullet running over a random tree can achieve twice the throughput of streaming over a traditional bandwidth 
tree.","lvl-2":"Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh\nABSTRACT\nIn recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet.\nTypically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network.\nIn this paper, we target high-bandwidth data distribution from a single source to a large number of receivers.\nApplications include large-file transfers and real-time multimedia streaming.\nFor these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures.\nThis paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh.\nWe construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network.\nIndividual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.\nKey contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet, running across the Internet and in a large-scale emulation environment, that reveals up to a factor-of-two bandwidth improvement under a variety of circumstances.\nIn addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.\nIn a tree, it is critical that a node's parent delivers a high rate of application data to each child.\nIn Bullet however, nodes simultaneously receive data from multiple sources 
in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.\n1.\nINTRODUCTION\nIn this paper, we consider the following general problem.\nGiven a sender and a large set of interested receivers spread across the Internet, how can we maximize the amount of bandwidth delivered to receivers?\nOur problem domain includes software or video distribution and real-time multimedia streaming.\nTraditionally, native IP multicast has been the preferred method for delivering content to a set of receivers in a scalable fashion.\nHowever, a number of considerations, including scale, reliability, and congestion control, have limited the wide-scale deployment of IP multicast.\nEven if all these problems were to be addressed, IP multicast does not consider bandwidth when constructing its distribution tree.\nMore recently, overlays have emerged as a promising alternative to multicast for network-efficient point to multipoint data delivery.\nTypical overlay structures attempt to mimic the structure of multicast routing trees.\nIn network-layer multicast however, interior nodes consist of high speed routers with limited processing power and extensibility.\nOverlays, on the other hand, use programmable (and hence extensible) end hosts as interior nodes in the overlay tree, with these hosts acting as repeaters to multiple children down the tree.\nOverlays have shown tremendous promise for multicast-style applications.\nHowever, we argue that a tree structure has fundamental limitations both for high bandwidth multicast and for high reliability.\nOne difficulty with trees is that bandwidth is guaranteed to be monotonically decreasing moving down the tree.\nAny loss high up the tree will reduce the bandwidth available to receivers lower down the tree.\nA number of techniques have been proposed to recover from losses and hence improve the available bandwidth in an overlay tree [2, 6].\nHowever, fundamentally, the bandwidth available to 
any host is limited by the bandwidth available from that node's single parent in the tree.\nThus, our work operates on the premise that the model for high-bandwidth multicast data dissemination should be re-examined.\nRather than sending identical copies of the same data stream to all nodes in a tree and designing a scalable mechanism for recovering from loss, we propose that participants in a multicast overlay cooperate to strategically transmit disjoint data sets to various points in the network.\nHere, the sender splits data into sequential blocks.\nBlocks are further subdivided into individual objects which are in turn transmitted to different points in the network.\nNodes still receive a set of objects from their parents, but they are then responsible for locating peers that hold missing data objects.\nWe use a distributed algorithm that aims to make the availability of data items uniformly spread across all overlay participants.\nIn this way, we avoid the problem of locating the \"last object\", which may only be available at a few nodes.\nOne hypothesis of this work is that, relative to a tree, this model will result in higher bandwidth--leveraging the bandwidth from simultaneous parallel downloads from multiple sources rather than a single parent--and higher reliability--retrieving data from multiple peers reduces the potential damage from a single node failure.\nTo illustrate Bullet's behavior, consider a simple three node overlay with a root R and two children A and B. R has 1 Mbps of available (TCP-friendly) bandwidth to each of A and B. 
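The block-and-object model described above can be sketched with a small set-based toy. The object count, the two-child topology, and the round-robin split are illustrative assumptions, not details taken from the paper:

```python
# Toy sketch of disjoint dissemination: a sender splits a block into
# packet-sized objects, sends disjoint halves to two children, and each
# child then recovers its missing objects from the other via peering.

block = set(range(10))                     # one block = 10 objects

# The parent transmits disjoint object sets down the tree.
child_a = {o for o in block if o % 2 == 0}
child_b = block - child_a
assert child_a.isdisjoint(child_b)

# Perpendicular peering: each child requests exactly what it is missing,
# so no bandwidth is wasted on duplicate objects.
a_missing = block - child_a                # served by the peer B
b_missing = block - child_b                # served by the peer A
child_a |= a_missing
child_b |= b_missing

print(child_a == block and child_b == block)
```

Both children end up with the complete block even though each tree link carried only half of it, mirroring the doubled retrieval rate that the three-node example illustrates.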
However, there is also 1 Mbps of available bandwidth between A and B.\nIn this example, Bullet would transmit a disjoint set of data at 1 Mbps to each of A and B.\nA and B would then each independently discover the availability of disjoint data at the remote peer and begin streaming data to one another, effectively achieving a retrieval rate of 2 Mbps.\nOn the other hand, any overlay tree is restricted to delivering at most 1 Mbps even with a scalable technique for recovering lost data.\nAny solution for achieving the above model must maintain a number of properties.\nFirst, it must be TCP friendly [15].\nNo flow should consume more than its fair share of the bottleneck bandwidth and each flow must respond to congestion signals (losses) by reducing its transmission rate.\nSecond, it must impose low control overhead.\nThere are many possible sources of such overhead, including probing for available bandwidth between nodes, locating appropriate nodes to \"peer\" with for data retrieval and redundantly receiving the same data objects from multiple sources.\nThird, the algorithm should be decentralized and scalable to thousands of participants.\nNo node should be required to learn or maintain global knowledge, for instance global group membership or the set of data objects currently available at all nodes.\nFinally, the approach must be robust to individual failures.\nFor example, the failure of a single node should result only in a temporary reduction in the bandwidth delivered to a small subset of participants; no single failure should result in the complete loss of data for any significant fraction of nodes, as might be the case for a single node failure \"high up\" in a multicast overlay tree.\nIn this context, this paper presents the design and evaluation of Bullet, an algorithm for constructing an overlay mesh that attempts to maintain the above properties.\nBullet nodes begin by self-organizing into an overlay tree, which can be constructed by any of a number of 
existing techniques [1, 18, 21, 24, 34].\nEach Bullet node, starting with the root of the underlying tree, then transmits a disjoint set of data to each of its children, with the goal of maintaining uniform representativeness of each data item across all participants.\nThe level of disjointness is determined by the bandwidth available to each of its children.\nBullet then employs a scalable and efficient algorithm to enable nodes to quickly locate multiple peers capable of transmitting missing data items to the node.\nThus, Bullet layers a high-bandwidth mesh on top of an arbitrary overlay tree.\nDepending on the type of data being transmitted, Bullet can optionally employ a variety of encoding schemes, for instance Erasure codes [7, 26, 25] or Multiple Description Coding (MDC) [17], to efficiently disseminate data, adapt to variable bandwidth, and recover from losses.\nFinally, we use TFRC [15] to transfer data both down the overlay tree and among peers.\nThis ensures that the entire overlay behaves in a congestion-friendly manner, adjusting its transmission rate on a per-connection basis based on prevailing network conditions.\nOne important benefit of our approach is that the bandwidth delivered by the Bullet mesh is somewhat independent of the bandwidth available through the underlying overlay tree.\nOne significant limitation to building high bandwidth overlay trees is the overhead associated with the tree construction protocol.\nIn these trees, it is critical that each participant locates a parent via probing with a high level of available bandwidth because it receives data from only a single source (its parent).\nThus, even once the tree is constructed, nodes must continue their probing to adapt to dynamically changing network conditions.\nWhile bandwidth probing is an active area of research [20, 35], accurate results generally require the transfer of a large amount of data to gain confidence in the results.\nOur approach with Bullet allows receivers to 
obtain high bandwidth in aggregate using individual transfers from peers spread across the system.\nThus, in Bullet, the bandwidth available from any individual peer is much less important than in any bandwidth-optimized tree.\nFurther, all the bandwidth that would normally be consumed probing for bandwidth can be reallocated to streaming data across the Bullet mesh.\nWe have completed a prototype of Bullet running on top of a number of overlay trees.\nOur evaluation of a 1000-node overlay running across a wide variety of emulated 20,000 node network topologies shows that Bullet can deliver up to twice the bandwidth of a bandwidth-optimized tree (using an offline algorithm and global network topology information), all while remaining TCP friendly.\nWe also deployed our prototype across the PlanetLab [31] wide-area testbed.\nFor these live Internet runs, we find that Bullet can deliver comparable bandwidth performance improvements.\nIn both cases, the overhead of maintaining the Bullet mesh and locating the appropriate disjoint data is limited to 30 Kbps per node, acceptable for our target high-bandwidth, large-scale scenarios.\nThe remainder of this paper is organized as follows.\nSection 2 presents Bullet's system components including RanSub, informed content delivery, and TFRC.\nSection 3 then details Bullet, an efficient data distribution system for bandwidth-intensive applications.\nSection 4 evaluates Bullet's performance for a variety of network topologies, and compares it to existing multicast techniques.\nSection 5 places our work in the context of related efforts and Section 6 presents our conclusions.\n2.\nSYSTEM COMPONENTS\nOur approach to high bandwidth data dissemination centers around the techniques depicted in Figure 1.\nFirst, we split the target data stream into blocks which are further subdivided into individual (typically packet-sized) objects.\nDepending on the requirements of the target applications, objects may be encoded [17, 26] to make data 
recovery more efficient.\nNext, we purposefully disseminate disjoint objects to different clients at a rate determined by the available bandwidth to each client.\nFigure 1: High-level view of Bullet's operation.\nWe use the equation-based TFRC protocol to communicate among all nodes in the overlay in a congestion-responsive and TCP friendly manner.\nGiven the above techniques, data is spread across the overlay tree at a rate commensurate with the available bandwidth in the overlay tree.\nOur overall goal however is to deliver more bandwidth than would otherwise be available through any tree.\nThus, at this point, nodes require a scalable technique for locating and retrieving disjoint data from their peers.\nIn essence, these perpendicular links across the overlay form a mesh to augment the bandwidth available through the tree.\nIn Figure 1, node D only has sufficient bandwidth to receive 3 objects per time unit from its parent.\nHowever, it is able to locate two peers, C and E, who are able to transmit \"missing\" data objects, in this example increasing delivered bandwidth from 3 objects per time unit to 6 data objects per time unit.\nLocating appropriate remote peers cannot require global state or global communication.\nThus, we propose the periodic dissemination of changing, uniformly random subsets of global state to each overlay node once per configurable time period.\nThis random subset contains summary tickets of the objects available at a subset of the nodes in the system.\nEach node uses this information to request data objects from remote nodes that have significant divergence in object membership.\nIt then attempts to establish a number of these peering relationships with the goals of minimizing overlap in the objects received from each peer and maximizing the total useful bandwidth delivered to it.\nIn the remainder of this section, we provide brief background on each of the techniques that we employ as fundamental building blocks for our 
work.\nSection 3 then presents the details of the entire Bullet architecture.\n2.1 Data Encoding\nDepending on the type of data being distributed through the system, a number of data encoding schemes can improve system efficiency.\nFor instance, if multimedia data is being distributed to a set of heterogeneous receivers with variable bandwidth, MDC [17] allows receivers obtaining different subsets of the data to still maintain a usable multimedia stream.\nFor dissemination of a large file among a set of receivers, erasure codes free receivers from having to retrieve every transmitted data packet.\nRather, after obtaining a threshold minimum number of packets, receivers are able to decode the original data stream.\nOf course, Bullet is amenable to a variety of other encoding schemes or even the \"null\" encoding scheme, where the original data stream is transmitted best-effort through the system.\nIn this paper, we focus on the benefits of a special class of erasure-correcting codes used to implement the \"digital fountain\" [7] approach.\nRedundant Tornado [26] codes are created by performing XOR operations on a selected number of original data packets, and are then transmitted along with the original data packets.\nTornado codes require any (1 + e)k correctly received packets to reconstruct the original k data packets, with a typically low reception overhead (e) of 0.03-0.05.\nIn return, they provide significantly faster encoding and decoding times.\nAdditionally, the decoding algorithm can run in real-time, and the reconstruction process can start as soon as sufficiently many packets have arrived.\nTornado codes require a predetermined stretch factor (n\/k, where n is the total number of encoded packets), and their encoding time is proportional to n. 
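The XOR intuition behind such codes can be shown with a deliberately tiny single-parity sketch. Real Tornado and LT codes layer many random XOR equations to tolerate larger losses, so everything below (the packet contents, k = 4, and the single parity packet) is an illustrative assumption rather than the actual construction:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    # Append one XOR parity packet: stretch factor n/k = (k + 1)/k.
    return list(packets) + [reduce(xor_bytes, packets)]

def recover(received):
    # With a single parity packet, XOR-ing the k surviving packets
    # reconstructs the one packet that was lost.
    survivors = [p for p in received if p is not None]
    return reduce(xor_bytes, survivors)

data = [b"aaaa", b"bbbb", b"cccc", b"dddd"]  # k = 4 original packets
coded = encode(data)                          # n = 5 encoded packets
coded[2] = None                               # lose packet 2 in transit
print(recover(coded))                         # reconstructs b"cccc"
```

Here any four of the five packets suffice, i.e. the reception overhead e is zero; Tornado codes accept a small nonzero e in exchange for fast encoding and decoding at much larger values of k.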
LT codes [25] remove these two limitations, while maintaining a low reception overhead of 0.05.\n2.2 RanSub\nTo address the challenge of locating disjoint content within the system, we use RanSub [24], a scalable approach to distributing changing, uniform random subsets of global state to all nodes of an overlay tree.\nRanSub assumes the presence of some scalable mechanism for efficiently building and maintaining the underlying tree.\nA number of such techniques are described in [1, 18, 21, 24, 34].\nRanSub distributes random subsets of participating nodes throughout the tree using collect and distribute messages.\nCollect messages start at the leaves and propagate up the tree, leaving state at each node along the path to the root.\nDistribute messages start at the root and travel down the tree, using the information left at the nodes during the previous collect round to distribute uniformly random subsets to all participants.\nUsing the collect and distribute messages, RanSub distributes a random subset of participants to each node once per epoch.\nThe lower bound on the length of an epoch is determined by the time it takes to propagate data up then back down the tree, or roughly twice the height of the tree.\nFor appropriately constructed trees, the minimum epoch length will grow with the logarithm of the number of participants, though this is not required for correctness.\nAs part of the distribute message, each participant sends a uniformly random subset of remote nodes, called a distribute set, down to its children.\nThe contents of the distribute set are constructed using the collect set gathered during the previous collect phase.\nDuring this phase, each participant sends a collect set consisting of a random subset of its descendant nodes up the tree to the root along with an estimate of its total number of descendants.\nAfter the root receives all collect sets and the collect phase completes, the distribute phase begins again in a new epoch.\nOne of the key 
features of RanSub is the Compact operation. This is the process used to ensure that membership in a collect set propagated by a node to its parent is both random and uniformly representative of all members of the sub-tree rooted at that node. Compact takes multiple fixed-size subsets and the total population represented by each subset as input, and generates a new fixed-size subset. The members of the resulting set are uniformly random representatives of the input subset members.
RanSub offers several ways of constructing distribute sets. For our system, we choose the RanSub-nondescendants option. In this case, each node receives a random subset consisting of all nodes excluding its descendants. This is appropriate for our download structure, where descendants are expected to have less content than an ancestor node in most cases. A parent creates RanSub-nondescendants distribute sets for each child by compacting collect sets from that child's siblings and its own distribute set. The result is a distribute set that contains a random subset representing all nodes in the tree except for those rooted at that particular child. We depict an example of RanSub's collect-distribute process in Figure 2. In the figure, AS stands for node A's state.
Figure 2: This example shows the two phases of the RanSub protocol that occur in one epoch. The collect phase is shown on the left, where the collect sets are traveling up the overlay to the root. The distribute phase on the right shows the distribute sets traveling down the overlay to the leaf nodes.
2.3 Informed Content Delivery Techniques
Assuming we can enable a node to locate a peer with disjoint content using RanSub, we need a method for reconciling the differences in the data. Additionally, we require a bandwidth-efficient method with low computational overhead. We chose to implement the approximate reconciliation techniques proposed in [6] for these tasks in Bullet. To describe the content, nodes maintain 
working sets. The working set contains sequence numbers of packets that have been successfully received by each node over some period of time. We need the ability to quickly discern the resemblance between working sets from two nodes and decide whether a fine-grained reconciliation is beneficial. Summary tickets, or min-wise sketches [5], serve this purpose. The main idea is to create a summary ticket that is an unbiased random sample of the working set. A summary ticket is a small fixed-size array. Each entry in this array is maintained by a specific permutation function. The goal is to have each entry populated by the element with the smallest permuted value. To insert a new element into the summary ticket, we apply the permutation functions in order and update array values as appropriate. The permutation function can be thought of as a specialized hash function. The choice of permutation functions is important, as the quality of the summary ticket depends directly on the randomness properties of the permutation functions. Since we require them to have low computational overhead, we use simple permutation functions, such as Pj(x) = (ax + b) mod |U|, where U is the universe size (dependent on the data encoding scheme). To compute the resemblance between two working sets, we compute the number of summary ticket entries that have the same value, and divide it by the total number of entries in the summary tickets. Figure 3 shows the way the permutation functions are used to populate the summary ticket.
Figure 3: Example showing a sample summary ticket being constructed from the working set.
To perform approximate fine-grain reconciliation, a peer A sends its digest to peer B and expects to receive packets not described in the digest. For this purpose, we use a Bloom filter [4], a bit array of size m with k independent associated hash functions. An element s from the set of received keys S = {s0, s1, ..., sn−1} is inserted into the filter by 
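The summary-ticket construction admits a short sketch. This is a minimal illustration under assumed parameters: the universe size U and the (a, b) coefficient pairs below are hypothetical choices (a is kept odd so that each Pj permutes a power-of-two universe), not values from the system.

```python
U = 1 << 16                                  # assumed universe size
PERMS = [(3, 7), (5, 11), (17, 1), (23, 5)]  # hypothetical (a, b) pairs, a odd

def summary_ticket(working_set):
    # one entry per permutation function: the element of the working set
    # with the smallest permuted value (a * x + b) mod U
    return [min(working_set, key=lambda x: (a * x + b) % U) for a, b in PERMS]

def resemblance(t1, t2):
    # the fraction of matching entries estimates the set resemblance
    # |A intersect B| / |A union B|
    return sum(e1 == e2 for e1, e2 in zip(t1, t2)) / len(t1)
```

Identical working sets always yield identical tickets (resemblance 1.0); mostly-disjoint sets agree on few entries, which is exactly the signal a Bullet receiver wants when choosing a peer with disjoint content.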
computing the hash values h0, h1, ..., hk−1 of s and setting the bits in the array that correspond to the hashed values. To check whether an element x is in the Bloom filter, we hash it using the hash functions and check whether all positions in the bit array are set. If at least one is not set, we know that the Bloom filter does not contain x. When using Bloom filters, the insertion of different elements might cause all the positions in the bit array corresponding to an element that is not in the set to be nonzero. In this case, we have a false positive. Therefore, it is possible that peer B will not send a packet to peer A even though A is missing it. On the other hand, a node will never send a packet that is described in the Bloom filter, i.e., there are no false negatives. The probability of getting a false positive pf on the membership query can be expressed as a function of the ratio m/n and the number of hash functions k: pf = (1 − e^(−kn/m))^k. We can therefore choose the size of the Bloom filter and the number of hash functions that will yield a desired false positive ratio.
2.4 TCP Friendly Rate Control
Although most traffic in the Internet today is best served by TCP, applications that require a smooth sending rate and that have a higher tolerance for loss often find TCP's reaction to a single dropped packet to be unnecessarily severe. TCP Friendly Rate Control, or TFRC, targets unicast streaming multimedia applications with a need for less drastic responses to single packet losses [15]. TCP halves the sending rate as soon as one packet loss is detected. In contrast, TFRC is an equation-based congestion control protocol that is based on loss events, which consist of multiple packets being dropped within one round-trip time. Unlike TCP, the goal of TFRC is not to find and use all available bandwidth, but instead to maintain a relatively steady sending rate while still being responsive to congestion. To guarantee fairness with TCP, TFRC uses the 
response function that describes the steady-state sending rate of TCP to determine the transmission rate in TFRC. The formula of the TCP response function [27] used in TFRC to describe the sending rate is:
T = s / (R * sqrt(2p/3) + tRTO * (3 * sqrt(3p/8)) * p * (1 + 32p^2))
This is the expression for the sending rate T in bytes/second, as a function of the round-trip time R in seconds, loss event rate p, packet size s in bytes, and TCP retransmit value tRTO in seconds. TFRC senders and receivers must cooperate to achieve a smooth transmission rate. The sender is responsible for computing the weighted round-trip time estimate R between sender and receiver, as well as determining a reasonable retransmit timeout value tRTO. In most cases, using the simple formula tRTO = 4R provides the necessary fairness with TCP. The sender is also responsible for adjusting the sending rate T in response to new values of the loss event rate p reported by the receiver. The sender obtains a new measure for the loss event rate each time a feedback packet is received from the receiver. Until the first loss is reported, the sender doubles its transmission rate each time it receives feedback, just as TCP does during slow-start. The main role of the receiver is to send feedback to the sender once per round-trip time and to calculate the loss event rate included in the feedback packets. To obtain the loss event rate, the receiver maintains a loss interval array that contains values for the last eight loss intervals. A loss interval is defined as the number of packets received correctly between two loss events. The array is continually updated as losses are detected. A weighted average of the loss intervals is computed, and its inverse is the reported loss event rate, p. When implementing Bullet, we used an unreliable version of TFRC. We wanted a transport protocol that was congestion aware and TCP friendly. Lost packets were more easily recovered from other sources than by waiting for a retransmission 
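The receiver-side loss event rate calculation can be sketched as follows. This is a minimal illustration: the weight vector follows the convention of the TFRC specification (most recent interval first), and the function name is ours.

```python
# weights from the TFRC specification, most recent loss interval first
WEIGHTS = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]

def loss_event_rate(loss_intervals):
    """loss_intervals: packets received between consecutive loss events,
    most recent first, at most eight entries.  Returns the rate p reported
    to the sender (0.0 before any loss has been observed)."""
    if not loss_intervals:
        return 0.0
    w = WEIGHTS[:len(loss_intervals)]
    # weighted average interval length; its inverse is the loss event rate
    avg_interval = sum(wi * si for wi, si in zip(w, loss_intervals)) / sum(w)
    return 1.0 / avg_interval
```

For example, with eight equal intervals of 100 packets the weighted average is 100 and the receiver reports p = 0.01; the weighting makes p respond to recent loss history while smoothing over individual loss events.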
from the initial sender. Hence, we eliminate retransmissions from TFRC. Further, TFRC does not aggressively seek newly available bandwidth like TCP, a desirable trait in an overlay tree where there might be multiple competing flows sharing the same links. For example, if a leaf node in the tree tried to aggressively seek out new bandwidth, it could create congestion all the way up to the root of the tree. By using TFRC we were able to avoid these scenarios.
3. BULLET
Bullet is an efficient data distribution system for bandwidth-intensive applications. While many current overlay network distribution algorithms use a distribution tree to deliver data from the tree's root to all other nodes, Bullet layers a mesh on top of an original overlay tree to increase overall bandwidth to all nodes in the tree. Hence, each node receives a parent stream from its parent in the tree and some number of perpendicular streams from chosen peers in the overlay. This has significant bandwidth impact when a single node in the overlay is unable to deliver adequate bandwidth to a receiving node. Bullet requires an underlying overlay tree for RanSub to deliver random subsets of participants' state to nodes in the overlay, informing them of a set of nodes that may be good candidates for retrieving data not available from any of the node's current peers and parent. While we also use the underlying tree for baseline streaming, this is not critical to Bullet's ability to efficiently deliver data to nodes in the overlay. As a result, Bullet is capable of functioning on top of essentially any overlay tree. In our experiments, we have run Bullet over random and bandwidth-optimized trees created offline (with global topological knowledge). Bullet registers itself with the underlying overlay tree so that it is informed when the overlay changes as nodes come and go or make performance transformations in the overlay. As with streaming overlay trees, Bullet can use standard transports 
such as TCP and UDP as well as our implementation of TFRC. For the remainder of this paper, we assume the use of TFRC since we primarily target streaming high-bandwidth content and we do not require reliable or in-order delivery. For simplicity, we assume that packets originate at the root of the tree and are tagged with increasing sequence numbers. Each node receiving a packet will optionally forward it to each of its children, depending on a number of factors relating to the child's bandwidth and its relative position in the tree.
3.1 Finding Overlay Peers
RanSub periodically delivers subsets of uniformly randomly selected nodes to each participant in the overlay. Bullet receivers use these lists to locate remote peers able to transmit missing data items with good bandwidth. RanSub messages contain a set of summary tickets that include a small (120 bytes) summary of the data that each node contains. RanSub delivers subsets of these summary tickets to nodes every configurable epoch (5 seconds by default). Each node in the tree maintains a working set of the packets it has received thus far, indexed by sequence numbers. Nodes associate each working set with a Bloom filter that maintains a summary of the packets received thus far. Since the Bloom filter does not exceed a specific size (m) and we would like to limit the rate of false positives, Bullet periodically cleans up the Bloom filter by removing lower sequence numbers from it. This allows us to keep the Bloom filter population n from growing at an unbounded rate. The net effect is that a node will attempt to recover packets for a finite amount of time depending on the packet arrival rate. Similarly, Bullet removes older items that are not needed for data reconstruction from its working set and summary ticket. We use the collect and distribute phases of RanSub to carry Bullet summary tickets up and down the tree. In our current implementation, we use a set size of 10 
summary tickets, allowing each collect and distribute to fit well within the size of a non-fragmented IP packet. Though Bullet supports larger set sizes, we expect this parameter to be tunable to specific applications' needs. In practice, our default size of 10 yields favorable results for a variety of overlays and network topologies. In essence, during an epoch a node receives a summarized partial view of the system's state at that time. Upon receiving a random subset each epoch, a Bullet node may choose to peer with the node having the lowest similarity ratio when compared to its own summary ticket. This is done only when the node has sufficient space in its sender list to accept another sender (senders with lackluster performance are removed from the current sender list as described in Section 3.4). Once a node has chosen the best candidate, it sends that node a peering request containing the requesting node's Bloom filter. Such a request is accepted by the potential sender if it has sufficient space in its receiver list for the incoming receiver. Otherwise, the request is rejected (space is periodically created in the receiver lists as further described in Section 3.4).
3.2 Recovering Data From Peers
Assuming it has space for the new peer, a recipient of the peering request installs the received Bloom filter and will periodically transmit keys not present in the Bloom filter to the requesting node. The requesting node will refresh its installed Bloom filters at each of its sending peers periodically. Along with the fresh filter, a receiving node will also assign a portion of the sequence space to each of its senders. In this way, a node is able to reduce the likelihood that two peers simultaneously transmit the same key to it, wasting network resources. A node divides the sequence space in its current working set among each of its senders uniformly. As illustrated in Figure 4, a Bullet receiver views the data space as a matrix of packet sequences 
containing s rows, where s is its current number of sending peers. A receiver periodically (every 5 seconds by default) updates each sender with its current Bloom filter and the range of sequences covered in its Bloom filter. This identifies the range of packets that the receiver is currently interested in recovering. Over time, this range shifts as depicted in Figure 4(b). In addition, the receiving node assigns to each sender a row from the matrix, labeled mod. A sender will forward packets to the receiver that have a sequence number x such that x modulo s equals the mod number. In this fashion, receivers register to receive disjoint data from their sending peers.
Figure 4: A Bullet receiver views data as a matrix of sequenced packets with rows equal to the number of peer senders it currently has. It requests data within the range (Low, High) of sequence numbers based on what it has received. a) The receiver requests a specific row in the sequence matrix from each sender. b) As it receives more data, the range of sequences advances and the receiver requests different rows from senders.
By specifying ranges and matrix rows, a receiver is unlikely to receive duplicate data items, which would result in wasted bandwidth. A duplicate packet, however, may be received when a parent recovers a packet from one of its peers and relays the packet to its children (and descendants). In this case, a descendant would receive the packet out of order and may have already recovered it from one of its peers. In practice, this wasteful reception of duplicate packets is tolerable; less than 10% of all received packets are duplicates in our experiments.
3.3 Making Data Disjoint
We now provide details of Bullet's mechanisms to increase the ease by which nodes can find disjoint data not provided by parents. We operate on the premise that the main challenge in recovering lost data packets transmitted over an overlay distribution tree lies in finding the peer node 
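The row assignment above can be sketched concretely. This is an illustrative simplification: `have` stands in for the receiver's Bloom filter (here a plain set, so there are no false positives), and the function name is ours.

```python
def packets_for_sender(have, low, high, s, row):
    """Sequence numbers a sender assigned matrix row `row` should forward:
    in the advertised range (low, high), owned by that row (x mod s == row),
    and not already summarized in the receiver's filter `have`."""
    return [x for x in range(low, high + 1)
            if x % s == row and x not in have]
```

With two senders, rows 0 and 1 partition the even and odd sequence numbers, so no in-range packet is requested from both peers, which is the disjointness property the matrix scheme is after.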
housing the data to recover. Many systems take a hierarchical approach to this problem, propagating repair requests up the distribution tree until the request can be satisfied. This ultimately leads to scalability issues at higher levels in the hierarchy, particularly when overlay links are bandwidth-constrained. Bullet, on the other hand, attempts to recover lost data from any non-descendant node, not just ancestors, thereby increasing overall system scalability. In traditional overlay distribution trees, packets are lost by the transmission transport and/or the network. Nodes attempt to stream data as fast as possible to each child and have essentially no control over which portions of the data stream are dropped by the transport or network. As a result, the streaming subsystem has no control over how many nodes in the system will ultimately receive a particular portion of the data. If few nodes receive a particular range of packets, recovering these pieces of data becomes more difficult, requiring increased communication costs and leading to scalability problems. In contrast, Bullet nodes are aware of the bandwidth achievable to each of their children using the underlying transport. If a child is unable to receive the streaming rate that the parent receives, the parent consciously decides which portion of the data stream to forward to the constrained child. In addition, because nodes recover data from participants chosen uniformly at random from the set of non-descendants, it is advantageous to make each transmitted packet recoverable from approximately the same number of participant nodes. That is, given a randomly chosen subset of peer nodes, each node holds a particular data packet with approximately the same probability. While not explicitly proven here, we believe that this approach maximizes the probability that a lost data packet can be recovered, regardless of which packet is lost. To this end, Bullet distributes incoming packets among one or more 
children in hopes that the expected number of nodes receiving each packet is approximately the same. A node p maintains for each child i a limiting and a sending factor, lfi and sfi. These factors determine the proportion of p's received data rate that it will forward to each child. The sending factor sfi is the portion of the parent stream (rate) that each child should "own" based on the number of descendants the child has. The more descendants a child has, the larger the portion of received data it should own. The limiting factor lfi represents the proportion of the parent rate beyond the sending factor that each child can handle. For example, a child with one descendant but high bandwidth would have a low sending factor, but a very high limiting factor. Though the child is responsible for owning a small portion of the received data, it actually can receive a large portion of it. Because RanSub collects descendant counts di for each child i, Bullet simply makes a call into RanSub when sending data to determine the current sending factors of its children. For each child i out of k total, we set the sending factor to be:
sfi = di / (d1 + d2 + ... + dk)
In addition, a node tracks the data successfully transmitted via the transport. That is, Bullet data transport sockets are non-blocking; successful transmissions are send attempts that are accepted by the non-blocking transport. If the transport would block on a send (i.e., transmission of the packet would exceed the TCP-friendly fair share of network resources), the send fails and is counted as an unsuccessful send attempt. When a data packet is received by a parent, it calculates the proportion of the total data stream that has been sent to each child thus far in this epoch. It then assigns ownership of the current packet to the child whose sending proportion is farthest below its sfi, as illustrated in Figure 5. Having chosen the target of a particular packet, the parent attempts to forward the packet to the child. If the 
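The sending-factor computation and owner selection can be sketched as follows; this is a minimal illustration with hypothetical function names, not the routine of Figure 5.

```python
def sending_factors(descendants):
    """sfi = di / sum of dj: each child owns a share of the parent stream
    proportional to its descendant count (as reported by RanSub)."""
    total = sum(descendants.values())
    return {c: d / total for c, d in descendants.items()}

def choose_owner(sf, sent, total_sent):
    """Pick the child whose achieved share of the stream so far lags its
    sending factor by the largest margin."""
    return max(sf, key=lambda c: sf[c] - sent[c] / max(total_sent, 1))
```

A child with three descendants and a sibling with one get sending factors 0.75 and 0.25; if the first has so far received only 60% of the packets sent this epoch, it lags its target and owns the next packet.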
send is not successful, the node must find an alternate child to own the packet. This occurs when a child's bandwidth is not adequate to fulfill its responsibilities based on its descendants (sfi). To compensate, the node attempts to deterministically find a child that can own the packet (as evidenced by its transport accepting the packet). The net result is that children with more than adequate bandwidth will own more than their share of packets, while those with inadequate bandwidth will own less. In the event that no child can accept a packet, it must be dropped, corresponding to the case where the sum of all children's bandwidths is inadequate to serve the received stream.
Figure 5: Pseudo code for Bullet's disjoint data send routine.
While this makes the data more difficult to recover, Bullet still allows its children to recover it: the sending node will cache the data packet and serve it to its requesting peers. This process allows its children to potentially recover the packet from one of their own peers, to whom additional bandwidth may be available. Once a packet has been successfully sent to the owning child, the node attempts to send the packet to all other children, depending on the limiting factors lfi. For each child i, a node attempts to forward the packet deterministically if the packet's sequence number modulo 1/lfi is zero. Essentially, this identifies which lfi fraction of packets of the received data stream should be forwarded to each child to make use of the available bandwidth to each. If the packet transmission is successful, lfi is increased such that one more packet is to be sent per epoch. If the transmission fails, lfi is decreased by the same amount. This allows children's limiting factors to be continuously adjusted in response to changing network conditions. It is important to realize that by maintaining limiting factors, we are essentially using feedback from children (by observing transport behavior) to determine the best data to stop 
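The limiting-factor logic above can be sketched compactly. This is an illustration under simplifying assumptions: a fixed additive step replaces the paper's "one more packet per epoch" adjustment, and both function names are ours.

```python
def should_forward(seq, lf):
    """Forward roughly an lf fraction of the stream to this child:
    every (1/lf)-th packet by sequence number, for lf in (0, 1]."""
    return lf > 0 and seq % max(1, round(1.0 / lf)) == 0

def adjust(lf, success, step=0.01):
    """Additive adjustment driven by transport feedback: grow the child's
    share after a successful send, shrink it by the same amount after a
    failed (would-block) send, clamped to [0, 1]."""
    return min(1.0, lf + step) if success else max(0.0, lf - step)
```

With lf = 0.5 every second packet is attempted; with lf = 1.0 the child receives the entire parent stream, matching the ample-bandwidth extreme described below.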
sending during times when a child cannot handle the entire parent stream. In one extreme, if the sum of the children's bandwidths is not enough to receive the entire parent stream, each child will receive a completely disjoint stream of the packets it owns. In the other extreme, if each child has ample bandwidth, it will receive the entire parent stream, as each lfi would settle on 1.0. In the general case, our owning strategy attempts to make data disjoint among children's subtrees, with the guiding premise that, as much as possible, the expected number of nodes receiving a packet is the same across all packets.
3.4 Improving the Bullet Mesh
Bullet allows a maximum number of peering relationships. That is, a node can have up to a certain number of receivers and a certain number of senders (each defaults to 10 in our implementation). A number of considerations can make the current peering relationships sub-optimal at any given time: i) the probabilistic nature of RanSub means that a node may not have been exposed to a sufficiently appropriate peer, ii) receivers greedily choose peers, and iii) network conditions are constantly changing. For example, a sender node may wind up being unable to provide a node with very much useful (non-duplicate) data. In such a case, it would be advantageous to remove that sender as a peer and find some other peer that offers better utility. Each node periodically (every few RanSub epochs) evaluates the bandwidth performance it is receiving from its sending peers. A node will drop a peer if it is sending too many duplicate packets when compared to the total number of packets received. This threshold is set to 50% by default. If no such wasteful sender is found, a node will drop the sender that is delivering the least amount of useful data to it. It will replace this sender with some other sending peer candidate, essentially reserving a trial slot in its sender list. In this way, we are assured of keeping the best senders seen 
so far and will eliminate senders whose performance deteriorates with changing network conditions. Likewise, a Bullet sender will periodically evaluate its receivers. Each receiver updates its senders with the total bandwidth it is receiving. The sender, knowing the amount of data it has sent to each receiver, can determine which receiver is benefiting the least by peering with this sender. This corresponds to the receiver acquiring the smallest portion of its bandwidth through this sender. The sender drops this receiver, creating an empty slot for some other trial receiver. This is similar to the concept of weans presented in [24].
4. EVALUATION
We have evaluated Bullet's performance in real Internet environments as well as in the ModelNet [37] IP emulation framework. While the bulk of our experiments use ModelNet, we also report on our experience with Bullet on the PlanetLab Internet testbed [31]. In addition, we have implemented a number of underlying overlay network trees upon which Bullet can execute. Because Bullet performs well over a randomly created overlay tree, we present results with Bullet running over such a tree compared against an offline greedy bottleneck bandwidth tree algorithm using global topological information (described in Section 4.1). All of our implementations leverage a common development infrastructure called MACEDON [33] that allows for the specification of overlay algorithms in a simple domain-specific language. It enables the reuse of the majority of common functionality in these distributed systems, including probing infrastructures, thread management, message passing, and debugging environments. As a result, we believe that our comparisons qualitatively show algorithmic differences rather than implementation intricacies. Our implementation of the core Bullet logic is under 1000 lines of code in this infrastructure. Our ModelNet experiments make use of 50 2 GHz Pentium 4 machines running Linux 2.4.20, interconnected with 100 Mbps and 1 Gbps 
Ethernet switches. For the majority of these experiments, we multiplex one thousand instances (overlay participants) of our overlay applications across the 50 Linux nodes (20 per machine). In ModelNet, packet transmissions are routed through emulators responsible for accurately emulating the hop-by-hop delay, bandwidth, and congestion of a network topology. In our evaluations, we used four 1.4 GHz Pentium III machines running FreeBSD 4.7 as emulators. This platform supports approximately 2-3 Gbps of aggregate simultaneous communication among end hosts. For most of our ModelNet experiments, we use 20,000-node INET-generated topologies [10]. We randomly assign our participant nodes to act as clients connected to one-degree stub nodes in the topology. We randomly select one of these participants to act as the source of the data stream. Propagation delays in the network topology are calculated based on the relative placement of the network nodes in the plane by INET. Based on the classification in [8], we classify network links as being Client-Stub, Stub-Stub, Transit-Stub, and Transit-Transit depending on their location in the network. We restrict topological bandwidth by setting the bandwidth for each link depending on its type. Each type of link has an associated bandwidth range from which the bandwidth is chosen uniformly at random. By changing these ranges, we vary bandwidth constraints in our topologies. For our experiments, we created three different ranges corresponding to low, medium, and high bandwidths relative to our typical streaming rates of 600-1000 Kbps, as specified in Table 1. While the presented ModelNet results are restricted to two topologies with varying bandwidth constraints, the results of experiments with additional topologies all show qualitatively similar behavior. We do not implement any particular coding scheme for our experiments. Rather, we assume that either each sequence number directly specifies a particular data block and the 
block offset for each packet, or we are distributing data within the same block for LT Codes, e.g., when distributing a file.\n4.1 Offline Bottleneck Bandwidth Tree\nOne of our goals is to determine Bullet's performance relative to the best possible bandwidth-optimized tree for a given network topology.\nThis allows us to quantify the possible improvements of an overlay mesh constructed using Bullet relative to the best possible tree.\nWhile we have not yet proven this, we believe that this problem is NP-hard.\nThus, in this section we present a simple greedy offline algorithm to determine the connectivity of a tree likely to deliver a high level of bandwidth.\nIn practice, we are not aware of any scalable online algorithms that are able to deliver the bandwidth of an offline algorithm.\nAt the same time, trees constructed by our algorithm tend to be \"long and skinny\" making them less resilient to failures and inappropriate for delay sensitive applications (such as multimedia streaming).\nIn addition to any performance comparisons, a Bullet mesh has much lower depth than the bottleneck tree and is more resilient to failure, as discussed in Section 4.6.\nTable 1: Bandwidth ranges for link types used in our topologies expressed in Kbps.\nSpecifically, we consider the following problem: given complete knowledge of the topology (individual link latencies, bandwidth, and packet loss rates), what is the overlay tree that will deliver the highest bandwidth to a set of predetermined overlay nodes?\nWe assume that the throughput of the slowest overlay link (the bottleneck link) determines the throughput of the entire tree.\nWe are, therefore, trying to find the directed overlay tree with the maximum bottleneck link.\nAccordingly, we refer to this problem as the overlay maximum bottleneck tree (OMBT).\nIn a simplified case, assuming that congestion only exists on access links and there are no lossy links, there exists an optimal algorithm [23].\nIn the more general case of 
contention on any physical link, and when the system is allowed to choose the routing path between the two endpoints, this problem is known to be NP-hard [12], even in the absence of link losses. For the purposes of this paper, our goal is to determine a "good" overlay streaming tree that provides each overlay participant with substantial bandwidth, while avoiding overlay links with high end-to-end loss rates. We make the following assumptions:
1. The routing path between any two overlay participants is fixed. This closely models the existing overlay network model with IP used for unicast routing.
2. The overlay tree will use TCP-friendly unicast connections to transfer data point-to-point.
3. In the absence of other flows, we can estimate the throughput of a TCP-friendly flow using a steady-state formula [27].
4. When several (n) flows share the same bottleneck link, each flow can achieve throughput of at most c/n, where c is the physical capacity of the link.
Given these assumptions, we concentrate on estimating the throughput available between two participants in the overlay. We start by calculating the throughput using the steady-state formula. We then "route" the flow in the network, and consider the physical links one at a time. On each physical link, we compute the fair share for each of the competing flows. The throughput of an overlay link is then approximated by the minimum of the fair shares along the routing path, and the formula rate. If some flow does not require the same share of the bottleneck link as other competing flows (i.e., its throughput might be limited by losses elsewhere in the network), then the other flows might end up with a greater share than the one we compute. We do not account for this, as the major goal of this estimate is simply to avoid lossy and highly congested physical links. More formally, we define the problem as follows: Overlay Maximum Bottleneck Tree (OMBT). Given a physical network represented as a 
graph G = (V, E), a set of overlay participants P ⊆ V, a source node s ∈ P, bandwidth B: E → R+, loss rate L: E → [0, 1], and propagation delay D: E → R+ of each link, a set of possible overlay links O = {(v, w) | v, w ∈ P, v ≠ w}, and a routing table RT: O × E → {0, 1}, find the overlay tree rooted at s that maximizes the minimum throughput f(o) over its overlay links o, where f(o) is the TCP steady-state sending rate, computed from the round-trip time d(o) = Σe∈o D(e) + Σe∈o' D(e) (given overlay link o = (v, w), o' = (w, v)) and the end-to-end loss rate l(o) = 1 − Πe∈o (1 − L(e)). We write e ∈ o to express that link e is included in o's routing path (RT(o, e) = 1). Assuming that we can estimate the throughput of a flow, we proceed to formulate a greedy OMBT algorithm. This algorithm is non-optimal, but a similar approach was found to perform well [12]. Our algorithm is similar to the Widest Path Heuristic (WPH) [12], and more generally to Prim's MST algorithm [32]. During its execution, we maintain the set of nodes already in the tree, and the set of remaining nodes. To grow the tree, we consider all the overlay links leading from the nodes in the tree to the remaining nodes. We greedily pick the node with the highest-throughput overlay link. Using this overlay link might cause us to route traffic over physical links traversed by some other tree flows. Since we do not re-examine the throughput of nodes that are already in the tree, they might end up being connected to the tree with slower overlay links than initially estimated. However, by attaching the node with the highest residual bandwidth at every step, we hope to lessen the effects of after-the-fact physical link sharing. With the synthetic topologies we use for our emulation environment, we have not found this inaccuracy to severely impact the quality of the tree.
4.2 Bullet vs. 
Streaming\nWe have implemented a simple streaming application that is capable of streaming data over any specified tree.\nIn our implementation, we are able to stream data through overlay trees using UDP, TFRC, or TCP.\nFigure 6 shows the average bandwidth that each of 1000 nodes receives via this streaming as time progresses on the x-axis.\nIn this example, we use TFRC to stream 600 Kbps over our offline bottleneck bandwidth tree and a random tree (other random trees exhibit qualitatively similar behavior).\nIn these experiments, streaming begins 100 seconds into each run.\nWhile the random tree delivers an achieved bandwidth of under 100 Kbps, our offline algorithm overlay delivers approximately 400 Kbps of data.\nFor this experiment, bandwidths were set to the medium range from Table 1.\nWe believe that any degree-constrained online bandwidth overlay tree algorithm would exhibit similar (or lower) behavior to our bandwidth-optimized overlay.\nFigure 6: Achieved bandwidth over time for TFRC streaming over the bottleneck bandwidth tree and a random tree.\nHence, Bullet's goal is to overcome this bandwidth limit by allowing for the perpendicular reception of data and by utilizing disjoint data flows in an attempt to match or exceed the performance of our offline algorithm.\nTo evaluate Bullet's ability to exceed the bandwidth achievable via tree distribution overlays, we compare Bullet running over a random overlay tree to the streaming behavior shown in Figure 6.\nFigure 7 shows the average bandwidth received by each node (labeled Useful total) with standard deviation.\nThe graph also plots the total amount of data received and the amount of data a node receives from its parent.\nFor this topology and bandwidth setting, Bullet was able to achieve an average bandwidth of 500 Kbps, five times that achieved by the random tree and more than 25% higher than the offline bottleneck bandwidth algorithm.\nFurther, the total bandwidth (including redundant data) received by 
each node is only slightly higher than the useful content, meaning that Bullet is able to achieve high bandwidth while wasting few network resources.\nBullet's use of TFRC in this example ensures that the overlay is TCP-friendly throughout.\nThe average per-node control overhead is approximately 30 Kbps.\nBy tracing certain packets as they move through the system, we are able to acquire link stress estimates of our system.\nThough the link stress can be different for each packet since each can take a different path through the overlay mesh, we average link stress due to each traced packet.\nFor this experiment, Bullet has an average link stress of approximately 1.5 with an absolute maximum link stress of 22.\nThe standard deviation in most of our runs is fairly high because of the limited bandwidth randomly assigned to some Client-Stub and Stub-Stub links.\nWe feel that this is consistent with real Internet behavior, where clients have widely varying network connectivity.\nA time slice is shown in Figure 8 that plots the CDF of instantaneous bandwidths that each node receives.\nThe graph shows that only a few client nodes receive inadequate bandwidth, even though they are bandwidth constrained.\nThe distribution rises sharply starting at approximately 500 Kbps.\nThe vast majority of nodes receive a stream of 500-600 Kbps.\nWe have evaluated Bullet under a number of bandwidth constraints to determine how Bullet performs relative to the available bandwidth of the underlying topology.\nFigure 7: Achieved bandwidth over time for Bullet over a random tree.\nFigure 8: CDF of instantaneous achieved bandwidth at time 430 seconds.\nTable 1 describes representative bandwidth settings for our streaming rate of 600 Kbps.\nThe intent of these settings is to show a scenario where more than enough bandwidth is available to achieve a target rate even with traditional tree streaming, an example of where it is slightly insufficient, and one in which the available bandwidth is quite 
restricted.\nFigure 9 shows achieved bandwidths for Bullet and the bottleneck bandwidth tree over time generated from topologies with bandwidths in each range.\nIn all of our experiments, Bullet outperforms the bottleneck bandwidth tree by a factor of up to 100%, depending on how constrained the bandwidth is in the underlying topology.\nAt one extreme, with more than ample bandwidth, Bullet and the bottleneck bandwidth tree are both able to stream at the requested rate (600 Kbps in our example).\nAt the other extreme, heavily constrained topologies allow Bullet to achieve twice the bandwidth achievable via the bottleneck bandwidth tree.\nFor all other topologies, Bullet's benefits are somewhere in between.\nIn our example, Bullet running over our medium-constrained bandwidth topology is able to outperform the bottleneck bandwidth tree by a factor of 25%.\nFurther, we stress that we believe it would be extremely difficult for any online tree-based algorithm to exceed the bandwidth achievable by our offline bottleneck algorithm that makes use of global topological information.\nFigure 9: Achieved bandwidth for Bullet and bottleneck tree over time for high, medium, and low bandwidth topologies.\nFor instance, we built a simple bandwidth-optimizing overlay tree construction based on Overcast [21].\nThe resulting dynamically constructed trees never achieved more than 75% of the bandwidth of our own offline algorithm.\n4.3 Creating Disjoint Data\nBullet's ability to deliver high bandwidth levels to nodes depends on its disjoint transmission strategy.\nThat is, when bandwidth to a child is limited, Bullet attempts to send the \"correct\" portions of data so that recovery of the lost data is facilitated.\nA Bullet parent sends different data to its children in hopes that each data item will be readily available to nodes spread throughout its subtree.\nIt does so by assigning ownership of data objects to children in a manner that makes the expected number of nodes 
holding a particular data object equal for all data objects it transmits.\nFigure 10 shows the resulting bandwidth over time for the non-disjoint strategy, in which a node (and more importantly, the root of the tree) attempts to send all data to each of its children (subject to independent losses at individual child links).\nBecause the child transports throttle the sending rate at each parent, some data is inherently sent disjointly (by chance).\nBy not explicitly choosing which data to send to its child, this approach deprives Bullet of 25% of its bandwidth capability, compared to the case in Figure 7, where our disjoint strategy is enabled.\n4.4 Epidemic Approaches\nIn this section, we explore how Bullet compares to data dissemination approaches that use some form of epidemic routing.\nWe implemented a form of \"gossiping\", where a node forwards non-duplicate packets to a randomly chosen number of nodes in its local view.\nThis technique does not use a tree for dissemination, and is similar to lpbcast [14] (recently improved to incorporate retrieval of data objects [13]).\nWe do not disseminate packets every T seconds; instead we forward them as soon as they arrive.\nFigure 10: Achieved bandwidth over time using non-disjoint data transmission.\nWe also implemented a pbcast-like [2] approach for retrieving data missing from a data distribution tree.\nThe idea here is that nodes are expected to obtain most of their data from their parent.\nNodes then attempt to retrieve any missing data items through gossiping with random peers.\nInstead of using gossiping with a fixed number of rounds for each packet, we use anti-entropy with a FIFO Bloom filter to attempt to locate peers that hold any locally missing data items.\nTo make our evaluation conservative, we assume that nodes employing gossip and anti-entropy recovery are able to maintain full group membership.\nWhile this might be difficult in practice, we assume that RanSub [24] could also be applied to these 
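The anti-entropy recovery just described relies on compact Bloom-filter summaries of the data items a node holds, which peers compare to find items they can supply. The following is a minimal illustrative sketch only: the hash construction and parameters are our assumptions, and the FIFO eviction of old packet ids is omitted.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter summarizing a set of data-item ids.
    Membership tests have no false negatives, so any id reported
    absent is definitely missing from the summarized set."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        # Derive k bit positions by salting a cryptographic hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def items_peer_lacks(local_ids, peer_summary):
    """Anti-entropy step: of our locally held ids, return those that do
    not appear in the peer's Bloom summary (modulo rare false positives,
    which would only suppress an offer, never fabricate one)."""
    return [i for i in local_ids if i not in peer_summary]
```

A node would periodically exchange such summaries with peers and request (or offer) only the ids the comparison flags, rather than gossiping blindly.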
ideas, specifically in the case of anti-entropy recovery that employs an underlying tree.\nFurther, we also allow both techniques to reuse other aspects of our implementation: Bloom filters, TFRC transport, etc.\nTo reduce the number of duplicate packets, we use fewer peers in each round (5) than Bullet (10).\nFor our configuration, we experimentally found that 5 peers yields the best performance with the lowest overhead.\nIn our experiments, increasing the number of peers did not improve the average bandwidth achieved throughout the system.\nTo allow TFRC enough time to ramp up to the appropriate TCP-friendly sending rate, we set the epoch length for anti-entropy recovery to 20 seconds.\nFor these experiments, we use a 5000-node INET topology with no explicit physical link losses.\nWe set link bandwidths according to the medium range from Table 1, and randomly assign 100 overlay participants.\nThe randomly chosen root either streams at 900 Kbps (over a random tree for Bullet and greedy bottleneck tree for anti-entropy recovery), or sends packets at that rate to randomly chosen nodes for gossiping.\nFigure 11 shows the resulting bandwidth over time achieved by Bullet and the two epidemic approaches.\nAs expected, Bullet comes close to providing the target bandwidth to all participants, achieving approximately 60 percent more than gossiping and streaming with anti-entropy.\nThe two epidemic techniques send an excessive number of duplicates, effectively reducing the useful bandwidth provided to each node.\nMore importantly, both approaches assign equal significance to other peers, regardless of the available bandwidth and the similarity ratio.\nFigure 11: Achieved bandwidth over time for Bullet and epidemic approaches.\nBullet, on the other hand, establishes long-term connections with peers that provide good bandwidth and disjoint content, and avoids most of the duplicates by requesting disjoint data from each node's peers.\n4.5 Bullet on a Lossy Network\nTo 
evaluate Bullet's performance under more lossy network conditions, we have modified our 20,000-node topologies used in our previous experiments to include random packet losses.\nModelNet allows the specification of a packet loss rate in the description of a network link.\nOur goal by modifying these loss rates is to simulate queuing behavior when the network is under load due to background network traffic.\nTo effect this behavior, we first modify all non-transit links in each topology to have a packet loss rate chosen uniformly at random from [0, 0.003], resulting in a maximum loss rate of 0.3%.\nTransit links are likewise modified, but with a maximum loss rate of 0.1%.\nSimilar to the approach in [28], we randomly designated 5% of the links in the topologies as overloaded and set their loss rates uniformly at random from [0.05, 0.1], resulting in a maximum packet loss rate of 10%.\nFigure 12 shows achieved bandwidths for streaming over Bullet and using our greedy offline bottleneck bandwidth tree.\nBecause losses adversely affect the bandwidth achievable over TCP-friendly transport, and since bandwidths are strictly monotonically decreasing over a streaming tree, tree-based algorithms perform considerably worse than Bullet when used on a lossy network.\nIn all cases, Bullet delivers at least twice as much bandwidth as the bottleneck bandwidth tree.\nAdditionally, losses in the low bandwidth topology essentially keep the bottleneck bandwidth tree from delivering any data, an artifact that is avoided by Bullet.\n4.6 Performance Under Failure\nIn this section, we discuss Bullet's behavior in the face of node failure.\nIn contrast to streaming distribution trees that must quickly detect and make tree transformations to overcome failure, Bullet's failure resilience rests on its ability to maintain a higher level of achieved bandwidth by virtue of perpendicular (peer) streaming.\nWhile all nodes under a failed node in a distribution tree will experience a temporary disruption in service, Bullet nodes are able to compensate for this by receiving data from peers throughout the outage.\nFigure 12: Achieved bandwidths for Bullet and bottleneck bandwidth tree over a lossy network topology.\nBecause Bullet and, more importantly, RanSub make use of an underlying tree overlay, part of Bullet's failure recovery properties will depend on the failure recovery behavior of the underlying tree.\nFor the purposes of this discussion, we simply assume the worst-case scenario where an underlying tree has no failure recovery.\nIn our failure experiments, we fail one of the root's children (with 110 of the total 1000 nodes as descendants) 250 seconds after data streaming is started.\nBy failing one of the root's children, we are able to show Bullet's worst-case performance under a single node failure.\nIn our first scenario, we disable failure detection in RanSub so that after a failure occurs, Bullet nodes request data only from their current peers.\nThat is, at this point, RanSub stops functioning and no new peer relationships are created for the remainder of the run.\nFigure 13 shows Bullet's achieved bandwidth over time for this case.\nWhile the average achieved rate drops from 500 Kbps to 350 Kbps, most nodes (including the descendants of the failed root child) are able to recover a large portion of the data rate.\nNext, we enable RanSub failure detection that recognizes a node's failure when a RanSub epoch has lasted longer than the predetermined maximum (5 seconds for this test).\nIn this case, the root simply initiates the next distribute phase upon RanSub timeout.\nThe net result is that nodes that are not descendants of the failed node will continue to receive updated random subsets, allowing them to peer with appropriate nodes reflecting the new network conditions.\nAs shown in Figure 14, the failure causes a negligible disruption in performance.\nWith RanSub failure detection enabled, nodes quickly learn of other nodes from which to receive data.\nOnce 
such recovery completes, the descendants of the failed node use their already established peer relationships to compensate for their ancestor's failure.\nHence, because Bullet is an overlay mesh, its reliability characteristics far exceed those of typical overlay distribution trees.\n4.7 PlanetLab\nThis section contains results from the deployment of Bullet over the PlanetLab [31] wide-area network testbed.\nFigure 13: Bandwidth over time with a worst-case node failure and no RanSub recovery.\nFigure 14: Bandwidth over time with a worst-case node failure and RanSub recovery enabled.\nFor our first experiment, we chose 47 nodes for our deployment, with no two machines being deployed at the same site.\nSince there is currently ample bandwidth available throughout the PlanetLab overlay (a characteristic not necessarily representative of the Internet at large), we designed this experiment to show that Bullet can achieve higher bandwidth than an overlay tree when the source is constrained, for instance in cases of congestion on its outbound access link, or overload by a flash crowd.\nWe did this by choosing a root in Europe connected to PlanetLab with fairly low bandwidth.\nThe node we selected was in Italy (cs.unibo.it) and we had 10 other overlay nodes in Europe.\nWithout global knowledge of the topology in PlanetLab (and the Internet), we are, of course, unable to produce our greedy bottleneck bandwidth tree for comparison.\nWe ran Bullet over a random overlay tree for 300 seconds while attempting to stream at a rate of 1.5 Mbps.\nWe waited 50 seconds before starting to stream data to allow nodes to successfully join the tree.\nWe compare the performance of Bullet to data streaming over multiple handcrafted trees.\nFigure 15 shows our results for two such trees.\nThe \"good\" tree has all nodes in Europe located high in the tree, close to the root.\nWe used pathload [20] to measure the available bandwidth between the root and all other nodes.\nFigure 15: Achieved bandwidth over time for Bullet and TFRC streaming over different trees on PlanetLab with a root in Europe.\nNodes with high bandwidth measurements were placed close to the root.\nIn this case, we are able to achieve a bandwidth of approximately 300 Kbps.\nThe \"worst\" tree was created by setting the root's children to be the three nodes with the worst bandwidth characteristics from the root as measured by pathload.\nAll subsequent levels in the tree were set in this fashion.\nFor comparison, we replaced all nodes in Europe from our topology with nodes in the US, creating a topology that only included US nodes with high bandwidth characteristics.\nAs expected, Bullet was able to achieve the full 1.5 Mbps rate in this case.\nA well-constructed tree over this high-bandwidth topology yielded slightly less than 1.5 Mbps, verifying that our approach does not sacrifice performance under high bandwidth conditions and improves performance under constrained bandwidth scenarios.\n5.\nRELATED WORK\nSnoeren et al. 
[36] use an overlay mesh to achieve reliable and timely delivery of mission-critical data.\nIn this system, every node chooses n \"parents\" from which to receive duplicate packet streams.\nSince its foremost emphasis is reliability, the system does not attempt to improve the bandwidth delivered to the overlay participants by sending disjoint data at each level.\nFurther, during recovery from parent failure, it limits an overlay router's choice of parents to nodes with a level number that is less than its own level number.\nThe power of \"perpendicular\" downloads is perhaps best illustrated by Kazaa [22], the popular peer-to-peer file-swapping network.\nKazaa nodes are organized into a scalable, hierarchical structure.\nIndividual users search for desired content in the structure and proceed to simultaneously download potentially disjoint pieces from nodes that already have it.\nSince Kazaa does not address the multicast communication model, a large fraction of users downloading the same file would consume more bandwidth than nodes organized into the Bullet overlay structure.\nKazaa does not use erasure coding; therefore it may take considerable time to locate \"the last few bytes.\"\nBitTorrent [3] is another example of a file distribution system currently deployed on the Internet.\nIt utilizes trackers that direct downloaders to random subsets of machines that already have portions of the file.\nThe tracker poses a scalability limit, as it continuously updates the system-wide distribution of the file.\nLowering the tracker communication rate could hurt the overall system performance, as information might be out of date.\nFurther, BitTorrent does not employ any strategy to disseminate data to different regions of the network, potentially making it more difficult to recover data depending on client access patterns.\nSimilar to Bullet, BitTorrent incorporates the notion of \"choking\" at each node with the goal of identifying receivers that benefit the most by 
downloading from that particular source.\nFastReplica [11] addresses the problem of reliable and efficient file distribution in content distribution networks (CDNs).\nIn the basic algorithm, nodes are organized into groups of fixed size (n), with full group membership information at each node.\nTo distribute the file, a node splits it into n equal-sized portions, sends the portions to other group members, and instructs them to download the missing pieces in parallel from other group members.\nSince only a fixed portion of the file is transmitted along each of the overlay links, the impact of congestion is smaller than in the case of tree distribution.\nHowever, since it treats all paths equally, FastReplica does not take full advantage of high-bandwidth overlay links in the system.\nSince it requires store-and-forward logic for the file at each level of the hierarchy (necessary for scaling the system), it may not be applicable to high-bandwidth streaming.\nThere are numerous protocols that aim to add reliability to IP multicast.\nIn Scalable Reliable Multicast (SRM) [16], nodes multicast retransmission requests for missed packets.\nTwo techniques attempt to improve the scalability of this approach: probabilistic choice of retransmission timeouts, and organization of receivers into hierarchical local recovery groups.\nHowever, it is difficult to find appropriate timer values and local scoping settings (via the TTL field) for a wide range of topologies, number of receivers, etc. 
even when adaptive techniques are used.\nOne recent study [2] shows that SRM may have significant overhead due to retransmission requests.\nBullet is closely related to efforts that use epidemic data propagation techniques to recover from losses in the nonreliable IP-multicast tree.\nIn pbcast [2], a node has global group membership, and periodically chooses a random subset of peers to send a digest of its received packets.\nA node that receives the digest responds to the sender with the missing packets in a last-in, first-out fashion.\nlpbcast [14] addresses pbcast's scalability issues (associated with global knowledge) by constructing, in a decentralized fashion, a partial group membership view at each node.\nThe average size of the views is engineered to allow a message to reach all participants with high probability.\nSince lpbcast does not require an underlying tree for data distribution and relies on the push-gossiping model, its network overhead can be quite high.\nCompared to the reliable multicast efforts, Bullet behaves favorably in terms of network overhead because nodes do not \"blindly\" request retransmissions from their peers.\nInstead, Bullet uses the summary views it obtains through RanSub to guide its actions toward nodes with disjoint content.\nFurther, a Bullet node splits the retransmission load between all of its peers.\nWe note that pbcast nodes contain a mechanism to rate-limit retransmitted packets and to send different packets in response to the same digest.\nHowever, this does not guarantee that packets received in parallel from multiple peers will not be duplicates.\nMore importantly, the multicast recovery methods are limited by the bandwidth through the tree, while Bullet strives to provide more bandwidth to all receivers by making data deliberately disjoint throughout the tree.\nNarada [19] builds a delay-optimized mesh interconnecting all participating nodes and actively measures the available bandwidth on overlay links.\nIt then 
runs a standard routing protocol on top of the overlay mesh to construct forwarding trees using each node as a possible source.\nNarada nodes maintain global knowledge about all group participants, limiting system scalability to several tens of nodes.\nFurther, the bandwidth available through a Narada tree is still limited to the bandwidth available from each parent.\nOn the other hand, the fundamental goal of Bullet is to increase bandwidth through download of disjoint data from multiple peers.\nOvercast [21] is an example of a bandwidth-efficient overlay tree construction algorithm.\nIn this system, all nodes join at the root and migrate down to the point in the tree where they are still able to maintain some minimum level of bandwidth.\nBullet is expected to be more resilient to node departures than any tree, including Overcast.\nInstead of waiting to get the data it missed from a new parent, a node can start getting data from its perpendicular peers.\nThis transition is seamless, as the node that is disconnected from its parent will start demanding more missing packets from its peers during the standard round of refreshing its filters.\nOvercast convergence time is limited by probes to immediate siblings and ancestors.\nBullet is able to provide approximately the target bandwidth without having a fully converged tree.\nIn parallel to our own work, SplitStream [9] also has the goal of achieving high bandwidth data dissemination.\nIt operates by splitting the multicast stream into k stripes, transmitting each stripe along a separate multicast tree built using Scribe [34].\nThe key design goal of the tree construction mechanism is to have each node be an intermediate node in at most one tree (while observing both inbound and outbound node bandwidth constraints), thereby reducing the impact of a single node's sudden departure on the rest of the system.\nThe join procedure can potentially sacrifice the interior-node-disjointness achieved by Scribe.\nPerhaps 
more importantly, SplitStream assumes that there is enough available bandwidth to carry each stripe on every link of the tree, including the links between the data source and the roots of individual stripe trees independently chosen by Scribe.\nTo some extent, Bullet and SplitStream are complementary.\nFor instance, Bullet could run on each of the stripes to maximize the bandwidth delivered to each node along each stripe.\nCoopNet [29] considers live content streaming in a peer-to-peer environment, subject to high node churn.\nConsequently, the system favors resilience over network efficiency.\nIt uses a centralized approach for constructing either random or deterministic node-disjoint (similar to SplitStream) trees, and it includes an MDC [17] adaptation framework based on scalable receiver feedback that attempts to maximize the signal-to-noise ratio perceived by receivers.\nIn the case of on-demand streaming, CoopNet [30] addresses the flash-crowd problem at the central server by redirecting incoming clients to a fixed number of nodes that have previously retrieved portions of the same content.\nCompared to CoopNet, Bullet provides nodes with a uniformly random subset of the system-wide distribution of the file.\n6.\nCONCLUSIONS\nTypically, high bandwidth overlay data streaming takes place over a distribution tree.\nIn this paper, we argue that, in fact, an overlay mesh is able to deliver fundamentally higher bandwidth.\nOf course, a number of difficult challenges must be overcome to ensure that nodes in the mesh do not repeatedly receive the same data from peers.\nThis paper presents the design and implementation of Bullet, a scalable and efficient overlay construction algorithm that overcomes this challenge to deliver significant bandwidth improvements relative to traditional tree structures.\nSpecifically, this paper makes the following contributions:\n\u2022 We present the design and analysis of Bullet, an overlay construction algorithm that creates a mesh over any 
distribution tree and allows overlay participants to achieve a higher bandwidth throughput than traditional data streaming.\nAs a related benefit, we eliminate the overhead required to probe for available bandwidth in traditional distributed tree construction techniques.\n\u2022 We provide a technique for recovering missing data from peers in a scalable and efficient manner.\nRanSub periodically disseminates summaries of data sets received by a changing, uniformly random subset of global participants.\n\u2022 We propose a mechanism for making data disjoint and then distributing it in a uniform way that makes the probability of finding a peer containing missing data equal for all nodes.\n\u2022 A large-scale evaluation of 1000 overlay participants running in an emulated 20,000-node network topology, as well as experimentation on top of the PlanetLab Internet testbed, shows that Bullet running over a random tree can achieve twice the throughput of streaming over a traditional bandwidth tree.","keyphrases":["bullet","bandwidth","data dissemin","ip multicast","multipoint commun","high-bandwidth data distribut","bandwidth probe","overlai mesh","overlai network","larg-file transfer","real-time multimedia stream","peer-to-peer","ransub","content deliveri","tfrc","overlai"],"prmu":["P","P","P","P","P","P","P","M","M","M","M","U","U","U","U","U"]} {"id":"J-28","title":"Approximately-Strategyproof and Tractable Multi-Unit Auctions","abstract":"We present an approximately-efficient and approximately-strategyproof auction mechanism for a single-good multi-unit allocation problem. The bidding language in our auctions allows marginal-decreasing piecewise constant curves. First, we develop a fully polynomial-time approximation scheme for the multi-unit allocation problem, which computes a (1 + \u03b5)-approximation in worst-case time T = O(n3 \/ \u03b5), given n bids each with a constant number of pieces. 
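For intuition about the allocation problem just described: in the easy special case with no capacity or minimum-lot-size constraints, marginal-decreasing piecewise-constant bids can be allocated optimally by a simple greedy pass over bid segments in order of marginal value. The sketch below uses an assumed bid representation and covers only this unconstrained special case, not the paper's FPTAS for the constrained (knapsack-like) problem.

```python
def greedy_allocate(bids, M):
    """Allocate M units across marginal-decreasing piecewise-constant
    bid curves by taking units in order of marginal value.
    bids: {bidder: [(num_units, marginal_value_per_unit), ...]} with
    pieces listed in decreasing marginal value (assumed representation).
    Returns (per-bidder allocation, total surplus)."""
    # Flatten every bid curve into (value, bidder, quantity) segments.
    segments = [(value, bidder, qty)
                for bidder, pieces in bids.items()
                for qty, value in pieces]
    segments.sort(reverse=True)          # highest marginal value first
    allocation = {bidder: 0 for bidder in bids}
    surplus, remaining = 0.0, M
    for value, bidder, qty in segments:
        take = min(qty, remaining)       # greedily consume this segment
        allocation[bidder] += take
        surplus += take * value
        remaining -= take
        if remaining == 0:
            break
    return allocation, surplus
```

Adding per-bidder capacity or minimum-lot-size constraints breaks this greedy argument, which is why the constrained problem requires the dynamic-programming approximation scheme.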
Second, we embed this approximation scheme within a Vickrey-Clarke-Groves (VCG) mechanism and compute payments to n agents for an asymptotic cost of O(T log n).\nThe maximal possible gain from manipulation to a bidder in the combined scheme is bounded by \u03b5\/(1+\u03b5)V, where V is the total surplus in the efficient outcome.","lvl-1":"Approximately-Strategyproof and Tractable Multi-Unit Auctions Anshul Kothari\u2217 David C. Parkes\u2020 Subhash Suri\u2217 ABSTRACT We present an approximately-efficient and approximately-strategyproof auction mechanism for a single-good multi-unit allocation problem.\nThe bidding language in our auctions allows marginal-decreasing piecewise constant curves.\nFirst, we develop a fully polynomial-time approximation scheme for the multi-unit allocation problem, which computes a (1 + \u03b5)-approximation in worst-case time T = O(n3 \/ \u03b5), given n bids each with a constant number of pieces.\nSecond, we embed this approximation scheme within a Vickrey-Clarke-Groves (VCG) mechanism and compute payments to n agents for an asymptotic cost of O(T log n).\nThe maximal possible gain from manipulation to a bidder in the combined scheme is bounded by \u03b5\/(1+\u03b5)V, where V is the total surplus in the efficient outcome.\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics.\nGeneral Terms Algorithms, Economics.\n1.\nINTRODUCTION In this paper we present a fully polynomial-time approximation scheme for the single-good multi-unit auction problem.\nOur scheme is both approximately efficient and approximately strategyproof.\nThe auction settings considered in our paper are motivated by recent trends in electronic commerce; for instance, corporations are increasingly using auctions for their strategic sourcing.\nWe consider both a reverse auction variation and a forward auction variation, and propose a compact and expressive bidding language 
that allows marginal-decreasing piecewise constant curves.\nIn the reverse auction, we consider a single buyer with a demand for M units of a good and n suppliers, each with a marginal-decreasing piecewise-constant cost function.\nIn addition, each supplier can also express an upper bound, or capacity constraint, on the number of units she can supply.\nThe reverse variation models, for example, a procurement auction to obtain raw materials or other services (e.g. circuit boards, power suppliers, toner cartridges), with flexible-sized lots.\nIn the forward auction, we consider a single seller with M units of a good and n buyers, each with a marginal-decreasing piecewise-constant valuation function.\nA buyer can also express a lower bound, or minimum lot size, on the number of units she demands.\nThe forward variation models, for example, an auction to sell excess inventory in flexible-sized lots.\nWe consider the computational complexity of implementing the Vickrey-Clarke-Groves [22, 5, 11] mechanism for the multi-unit auction problem.\nThe Vickrey-Clarke-Groves (VCG) mechanism has a number of interesting economic properties in this setting, including strategyproofness, such that truthful bidding is a dominant strategy for buyers in the forward auction and sellers in the reverse auction, and allocative efficiency, such that the outcome maximizes the total surplus in the system.\nHowever, as we discuss in Section 2, the application of the VCG-based approach is limited in the reverse direction to instances in which the total payments to the sellers are less than the value of the outcome to the buyer.\nOtherwise, either the auction must run at a loss in these instances, or the buyer cannot be expected to voluntarily choose to participate.\nThis is an example of the budget-deficit problem that often occurs in efficient mechanism design [17].\nThe computational problem is interesting, because even with marginal-decreasing bid curves, the underlying allocation problem turns 
out to be (weakly) intractable; for instance, the classic 0/1 knapsack is a special case of this problem.¹ We model the allocation problem as a novel and interesting generalization of the classic knapsack problem, and develop a fully polynomial-time approximation scheme, computing a (1 + ε)-approximation in worst-case time T = O(n³/ε), where each bid has a fixed number of piecewise-constant pieces. Given this scheme, a straightforward computation of the VCG payments to all n agents requires time O(nT). We compute approximate VCG payments in worst-case time O(αT log(αn/ε)), where α is a constant that quantifies a reasonable no-monopoly assumption. Specifically, in the reverse auction, suppose that C(I) is the minimal cost for procuring M units with all sellers I, and C(I \ i) is the minimal cost without seller i. Then, the constant α is defined as an upper bound on the ratio C(I \ i)/C(I), over all sellers i. This upper bound tends to 1 as the number of sellers increases.

¹However, the problem can be solved easily by a greedy scheme if we remove all capacity constraints from the sellers and all minimum-lot-size constraints from the buyers.

The approximate VCG mechanism is (ε/(1 + ε))-strategyproof for an approximation to within (1 + ε) of the optimal allocation. This means that a bidder can gain at most (ε/(1 + ε))V from a non-truthful bid, where V is the total surplus from the efficient allocation. As such, this is an example of a computationally tractable ε-dominance result.² In practice, we can have good confidence that bidders without good information about the bidding strategies of other participants will have little to gain from attempts at manipulation.

Section 2 formally defines the forward and reverse auctions, and defines the VCG mechanisms. We also prove our claims about ε-strategyproofness. Section 3 provides the generalized knapsack formulation for the multi-unit allocation problems and introduces the fully polynomial-time
approximation scheme. Section 4 defines the approximation scheme for the payments in the VCG mechanism. Section 5 concludes.

1.1 Related Work

There has been considerable interest in recent years in characterizing polynomial-time or approximable special cases of the general combinatorial allocation problem, in which there are multiple different items. The combinatorial allocation problem (CAP) is both NP-complete and inapproximable (e.g. [6]). Although some polynomial-time cases have been identified for the CAP [6, 20], introducing an expressive exclusive-or bidding language quickly breaks these special cases. We identify a non-trivial but approximable allocation problem with an expressive exclusive-or bidding language: the bid taker in our setting is allowed to accept at most one point on the bid curve.

The idea of using approximations within mechanisms, while retaining either full strategyproofness or ε-dominance, has received some previous attention. For instance, Lehmann et al.
[15] propose a greedy and strategyproof approximation to a single-minded combinatorial auction problem. Nisan & Ronen [18] discussed approximate VCG-based mechanisms, but either appealed to particular maximal-in-range approximations to retain full strategyproofness, or to resource-bounded agents with information or computational limitations on the ability to compute strategies.

²However, this may not be an example of what Feigenbaum & Shenker refer to as a tolerably-manipulable mechanism [8], because we have not tried to bound the effect of such a manipulation on the efficiency of the outcome. VCG mechanisms do have a natural self-correcting property, though, because a useful manipulation for an agent is a reported value that improves the total value of the allocation based on the reports of the other agents and the agent's own value.

Feigenbaum & Shenker [8] have defined the concept of strategically faithful approximations, and proposed the study of approximations as an important direction for algorithmic mechanism design. Schummer [21] and Parkes et al. [19] have previously considered ε-dominance, in the context of economic impossibility results, for example in combinatorial exchanges. Eso et al. [7] have studied a similar procurement problem, but for a different volume-discount model. This earlier work formulates the problem as a general mixed-integer linear program, and gives some empirical results on simulated data. Kalagnanam et al.
[12] address double auctions, where multiple buyers and sellers trade a divisible good. The focus of that paper is also different: it investigates equilibrium prices using the demand and supply curves, whereas our focus is on efficient mechanism design. Ausubel [1] has proposed an ascending-price multi-unit auction for buyers with marginal-decreasing values, with an interpretation as a primal-dual algorithm [2].

2. APPROXIMATELY-STRATEGYPROOF VCG AUCTIONS

In this section, we first describe the marginal-decreasing piecewise bidding language that is used in our forward and reverse auctions. Continuing, we introduce the VCG mechanism for the problem and the ε-dominance results for approximations to VCG outcomes. We also discuss the economic properties of VCG mechanisms in these forward and reverse multi-unit auction settings.

2.1 Marginal-Decreasing Piecewise Bids

We provide a piecewise-constant and marginal-decreasing bidding language. This bidding language is expressive for a natural class of valuation and cost functions: fixed unit prices over intervals of quantities. See Figure 1 for an example. In addition, we slightly relax the marginal-decreasing requirement to allow: a bidder in the forward auction to state a minimal purchase amount, such that she has zero value for quantities smaller than that amount; and a seller in the reverse auction to state a capacity constraint, such that she has an effectively infinite cost to supply quantities in excess of a particular amount.

Figure 1: Marginal-decreasing, piecewise constant bids. In the forward auction bid, the bidder offers $10 per unit for quantity in the range [5, 10), $8 per unit in the range [10, 20), and $7 in the range [20, 25]. Her valuation is zero for quantities outside the range [10, 25]. In the reverse auction bid, the cost of the seller is ∞ outside the range [10,
25].

In detail, in a forward auction, a bid from buyer i can be written as a list of (quantity-range, unit-price) tuples, ((u_i^1, p_i^1), (u_i^2, p_i^2), ..., (u_i^{m_i−1}, p_i^{m_i−1})), with an upper bound u_i^{m_i} on the quantity. The interpretation is that the bidder's valuation in the (semi-open) quantity range [u_i^j, u_i^{j+1}) is p_i^j for each unit. Additionally, it is assumed that the valuation is 0 for quantities less than u_i^1 as well as for quantities more than u_i^{m_i}. This is implemented by adding two dummy bid tuples, with zero prices in the ranges [0, u_i^1) and (u_i^{m_i}, ∞). We interpret the bid list as defining a price function, p_{bid,i}(q) = q·p_i^j if u_i^j ≤ q < u_i^{j+1}, where j = 1, 2, ..., m_i − 1. In order to resolve the boundary condition, we assume that the bid price for the upper-bound quantity u_i^{m_i} is p_{bid,i}(u_i^{m_i}) = u_i^{m_i}·p_i^{m_i−1}.

A seller's bid is similarly defined in the reverse auction. The interpretation is that the bidder's cost in the (semi-open) quantity range [u_i^j, u_i^{j+1}) is p_i^j for each unit. Additionally, it is assumed that the cost is ∞ for quantities less than u_i^1 as well as for quantities more than u_i^{m_i}. Equivalently, the unit prices in the ranges [0, u_i^1) and (u_i^{m_i}, ∞) are infinite. We interpret the bid list as defining a price function, p_{ask,i}(q) = q·p_i^j if u_i^j ≤ q < u_i^{j+1}.

2.2 VCG-Based Multi-Unit Auctions

We construct the tractable and approximately-strategyproof multi-unit auctions around a VCG mechanism. We assume that all agents have quasilinear utility functions; that is, u_i(q, p) = v_i(q) − p for a buyer i with valuation v_i(q) for q units at price p, and u_i(q, p) = p − c_i(q) for a seller i with cost c_i(q) at price p. This is a standard assumption in the auction literature, equivalent to assuming risk-neutral agents [13]. We will use the term payoff interchangeably with utility.

In the forward auction, there is a seller with M units to
sell. We assume that this seller has no intrinsic value for the items. Given a set of bids from I agents, let V(I) denote the maximal revenue to the seller, given that at most one point on the bid curve can be selected from each agent and no more than M units of the item can be sold. Let x* = (x*_1, ..., x*_n) denote the solution to this winner-determination problem, where x*_i is the number of units sold to agent i. Similarly, let V(I \ i) denote the maximal revenue to the seller without bids from agent i. The VCG mechanism is defined as follows:

1. Receive piecewise-constant bid curves and capacity constraints from all the buyers.
2. Implement the outcome x* that solves the winner-determination problem with all buyers.
3. Collect payment p_{vcg,i} = p_{bid,i}(x*_i) − [V(I) − V(I \ i)] from each buyer, and pass the payments to the seller.

In this forward auction, the VCG mechanism is strategyproof for buyers, which means that truthful bidding is a dominant strategy, i.e.
utility-maximizing whatever the bids of the other buyers. In addition, the VCG mechanism is allocatively efficient, and the payments from each buyer are always positive.³ Moreover, each buyer pays less than its value, and receives payoff V(I) − V(I \ i) in equilibrium; this is precisely the marginal value that buyer i contributes to the economic efficiency of the system.

³In fact, the VCG mechanism maximizes the expected payoff to the seller across all efficient mechanisms, even allowing for Bayesian-Nash implementations [14].

In the reverse auction, there is a buyer with M units to buy, and n suppliers. We assume that the buyer has value V > 0 to purchase all M units, but zero value otherwise. To simplify the mechanism design problem, we assume that the buyer will truthfully announce this value to the mechanism.⁴

⁴Without this assumption, the Myerson-Satterthwaite [17] impossibility result would already imply that we should not expect an efficient trading mechanism in this setting.

The winner-determination problem in the reverse auction is to determine the allocation, x*, that minimizes the cost to the buyer, or forfeits trade if the minimal cost is greater than the value, V. Let C(I) denote the minimal cost given bids from all sellers, and let C(I \ i) denote the minimal cost without bids from seller i. We can assume, without loss of generality, that there is an efficient trade and V ≥ C(I). Otherwise, the efficient outcome is no trade, and the outcome of the VCG mechanism is no trade and no payments. The VCG mechanism implements the outcome x* that minimizes cost based on bids from all sellers, and then provides payment p_{vcg,i} = p_{ask,i}(x*_i) + [V − C(I) − max(0, V − C(I \ i))] to each seller. The total payment is collected from the buyer. Again, in equilibrium each seller's payoff is exactly the marginal value that the seller contributes to the economic efficiency of the system; in the simple case that V ≥ C(I
\ i) for all sellers i, this is precisely C(I \ i) − C(I).

Although the VCG mechanism remains strategyproof for sellers in the reverse direction, its applicability is limited to cases in which the total payments to the sellers are less than the buyer's value. Otherwise, there will be instances in which the buyer will not choose to voluntarily participate in the mechanism, based on its own value and its beliefs about the costs of sellers. This leads to a loss in efficiency when the buyer chooses not to participate, because efficient trades are missed. This problem with the size of the payments does not occur in simple single-item reverse auctions, or even in multi-unit reverse auctions with a buyer that has a constant marginal valuation for each additional item that she procures.⁵

Intuitively, the problem occurs in the reverse multi-unit setting because the buyer demands a fixed number of items, and has zero value without them. This leads to the possibility of the trade being contingent on the presence of particular, so-called pivotal sellers. Define a seller i as pivotal if C(I) ≤ V but C(I \ i) > V. In words, there would be no efficient trade without the seller. Any time there is a pivotal seller, the VCG payments to that seller allow her to extract all of the surplus, and the payments are too large to sustain with the buyer's value unless this is the only winning seller. Concretely, we have this participation problem in the reverse auction when the total payoff to the sellers, in equilibrium, exceeds the total payoff from the efficient allocation:

Σ_i [V − C(I) − max(0, V − C(I \ i))] > V − C(I)

To avoid this, first notice that we require V > C(I \ i) for all sellers i. In other words, there must be no pivotal sellers. Given this, it is then necessary and sufficient that:

V − C(I) ≥ Σ_i (C(I \ i) − C(I))    (1)

⁵To make the reverse auction symmetric with the forward direction, we would need a buyer
with a constant marginal value to buy the first M units, and zero value for additional units. The payments to the sellers would never exceed the buyer's value in this case. Conversely, to make the forward auction symmetric with the reverse auction, we would need a seller with a constant (and high) marginal cost to sell anything less than the first M units, and then a low (or zero) marginal cost. The total payments received by the seller can be less than the seller's cost for the outcome in this case.

In words, the surplus of the efficient allocation must be greater than the total marginal surplus provided by each seller.⁶ Consider an example with 3 agents {1, 2, 3}, and V = 150 and C(123) = 50. Condition (1) holds when C(12) = C(23) = 70 and C(13) = 100, but not when C(12) = C(23) = 80 and C(13) = 100. In the first case, the agent payoffs π = (π₀, π₁, π₂, π₃), where 0 is the buyer, are (10, 20, 50, 20). In the second case, the payoffs are π = (−10, 30, 50, 30). One thing we do know, because the VCG mechanism will maximize the payoff to the buyer across all efficient mechanisms [14], is that whenever Eq. 1 is not satisfied there can be no efficient auction mechanism.⁷

2.3 ε-Strategyproofness

We now consider the same VCG mechanism, but with an approximation scheme for the underlying allocation problem. We derive an ε-strategyproofness result that bounds the maximal gain in payoff that an agent can expect to achieve through a unilateral deviation from following a simple truth-revealing strategy. We describe the result for the forward auction direction, but it is quite a general observation.

As before, let V(I) denote the value of the optimal solution to the allocation problem with truthful bids from all agents, and V(I \ i) denote the value of the optimal solution computed without bids from agent i.
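Before continuing, the participation condition and three-agent example from Section 2.2 above can be checked numerically. A minimal Python sketch, using the paper's reverse-auction payment rule (the helper name and argument layout are ours, not the paper's):

```python
# Sketch: equilibrium payoffs in the reverse VCG auction, following the
# payment rule p_i = ask_i(x*_i) + [V - C(I) - max(0, V - C(I \ i))].
# In equilibrium, seller i's payoff is V - C(I) - max(0, V - C(I \ i));
# the buyer keeps V - C(I) minus the total paid out above cost.

def vcg_reverse_payoffs(V, cost_all, cost_without):
    """V: buyer's value for M units; cost_all: C(I);
    cost_without[i]: C(I \\ i), the efficient cost without seller i."""
    sellers = {i: V - cost_all - max(0, V - c_i)
               for i, c_i in cost_without.items()}
    buyer = (V - cost_all) - sum(sellers.values())
    return buyer, sellers

# First case from the text: C(12) = C(23) = 70, C(13) = 100.
buyer, sellers = vcg_reverse_payoffs(150, 50, {1: 70, 2: 100, 3: 70})
# -> buyer payoff 10, seller payoffs (20, 50, 20): condition (1) holds.

# Second case: C(12) = C(23) = 80 pushes the buyer's payoff negative.
buyer2, _ = vcg_reverse_payoffs(150, 50, {1: 80, 2: 100, 3: 80})
# -> buyer payoff -10: the buyer would not voluntarily participate.
```

The sketch assumes no pivotal sellers, so each seller's equilibrium payoff reduces to her marginal contribution C(I \ i) − C(I), matching the payoff vectors given in the example.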
Let V̂(I) and V̂(I \ i) denote the values of the allocations computed with an approximation scheme, and assume that the approximation satisfies (1 + ε)·V̂(I) ≥ V(I), for some ε > 0. We provide such an approximation scheme for our setting later in the paper. Let x̂ denote the allocation implemented by the approximation scheme. The payoff to agent i, for announcing valuation v̂_i, is:

v_i(x̂_i) + Σ_{j≠i} v̂_j(x̂_j) − V̂(I \ i)

The final term is independent of the agent's announced value, and can be ignored in an incentive analysis. However, agent i can try to improve its payoff through the effect of its announced value on the allocation x̂ implemented by the mechanism. In particular, agent i wants the mechanism to select x̂ to maximize the sum of its true value, v_i(x̂_i), and the reported value of the other agents, Σ_{j≠i} v̂_j(x̂_j). If the mechanism's allocation algorithm is optimal, then all the agent needs to do is truthfully state its value and the mechanism will do the rest. However, faced with an approximate allocation algorithm, the agent can try to improve its payoff by announcing a value that corrects for the approximation, and causes the approximation algorithm to implement the allocation that exactly maximizes the total reported value of the other agents together with its own actual value [18].

⁶This condition is implied by the "agents are substitutes" requirement [3], which has received some attention in the combinatorial auction literature because it characterizes the case in which VCG payments can be supported in a competitive equilibrium. Useful characterizations of conditions that satisfy agents-are-substitutes, in terms of the underlying valuations of agents, have proved quite elusive.

⁷Moreover, although there is a small literature on maximally-efficient mechanisms subject to requirements of voluntary participation and budget-balance (i.e.
with the mechanism neither introducing nor removing money), analytic results are only known for simple problems (e.g. [16, 4]).

We can now analyze the best possible gain from manipulation to an agent in our setting. We first assume that the other agents are truthful, and then relax this. In both cases, the maximal benefit to agent i occurs when the initial approximation is worst-case. With truthful reports from other agents, this occurs when the value of the chosen allocation x̂ is V(I)/(1 + ε). Then, an agent could hope to receive an improved payoff of:

V(I) − V(I)/(1 + ε) = (ε/(1 + ε))·V(I)

This is possible if the agent is able to select a reported type that corrects the approximation algorithm, and makes the algorithm implement the allocation with value V(I). Thus, if other agents are truthful, then with a (1 + ε)-approximation scheme for the allocation problem, no agent can improve its payoff by more than a factor ε/(1 + ε) of the value of the optimal solution. The analysis is very similar when the other agents are not truthful. In this case, an individual agent can improve its payoff by no more than a factor ε/(1 + ε) of the value of the optimal solution given the values reported by the other agents. Let V in the following theorem denote the total value of the efficient allocation, given the reported values of agents j ≠ i, and the true value of agent i.
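The worst-case gain derived above is easy to sanity-check numerically; a small sketch under the stated assumptions (the function name is ours):

```python
# Sketch: maximal payoff gain from manipulation under a (1 + eps)-approximate
# allocator. The approximation may stop at an allocation of value V/(1 + eps);
# a corrective non-truthful bid can at best restore the full value V, so the
# gain is bounded by V - V/(1 + eps) = (eps / (1 + eps)) * V.

def max_manipulation_gain(V, eps):
    worst_case_value = V / (1.0 + eps)   # value of the approximate allocation
    return V - worst_case_value          # best possible improvement

gain = max_manipulation_gain(100.0, 0.05)
# For V = 100 and eps = 0.05, the bound is (0.05 / 1.05) * 100, under 5% of V.
```

For small ε the bound is roughly ε·V, which is why a tight approximation factor translates directly into a small incentive to manipulate.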
THEOREM 1. A VCG-based mechanism with a (1 + ε)-approximation allocation algorithm is (ε/(1 + ε))V-strategyproof for agent i: agent i can gain at most (ε/(1 + ε))V in payoff through some non-truthful strategy.

Notice that we did not need to bound the error on the allocation problems without each agent, because the ε-strategyproofness result follows from the accuracy of the first term in the VCG payment and is independent of the accuracy of the second term. However, the accuracy of the solution to the problem without each agent is important to implement a good approximation to the revenue properties of the VCG mechanism.

3. THE GENERALIZED KNAPSACK PROBLEM

In this section, we design a fully polynomial-time approximation scheme for the generalized knapsack problem, which models the winner-determination problem for the VCG-based multi-unit auctions. We describe our results for the reverse auction variation, but the formulation is completely symmetric for the forward auction. In describing our approximation scheme, we begin with a simple property (the Anchor property) of an optimal knapsack solution. We use this property to develop an O(n²)-time 2-approximation for the generalized knapsack. In turn, we use this basic approximation to develop our fully polynomial-time approximation scheme (FPTAS).

One of the major appeals of our piecewise bidding language is its compact representation of the bidders' valuation functions. We strive to preserve this, and present an approximation scheme that depends only on the number of bidders, and not on the maximum quantity, M, which can be very large in realistic procurement settings. The FPTAS implements a (1 + ε)-approximation to the optimal solution x*, in worst-case time T = O(n³/ε), where n is the number of bidders, and where we assume that the piecewise bid for each bidder has O(1) pieces. The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time can be derived
by substituting nc for each occurrence of n.

3.1 Preliminaries

Before we begin, let us recall the classic 0/1 knapsack problem: we are given a set of n items, where item i has value v_i and size s_i, and a knapsack of capacity M; all sizes are integers. The goal is to determine a subset of items of maximum value with total size at most M. Since we want to focus on a reverse auction, the equivalent knapsack problem is to choose a set of items of minimum value (i.e., cost) whose size exceeds M. The generalized knapsack problem of interest to us can be defined as follows:

Generalized Knapsack:

Instance: A target M, and a set of n lists, where the ith list has the form B_i = ((u_i^1, p_i^1), ..., (u_i^{m_i−1}, p_i^{m_i−1}), (u_i^{m_i}, ∞)), where the u_i^j are increasing in j and the p_i^j are decreasing in j, and u_i^j, p_i^j, M are positive integers.

Problem: Determine a set of integers x_i^j such that:
1. (One per list) At most one x_i^j is non-zero for any i,
2. (Membership) x_i^j ≠ 0 implies x_i^j ∈ [u_i^j, u_i^{j+1}),
3. (Target) Σ_i Σ_j x_i^j ≥ M, and
4. (Objective) Σ_i Σ_j p_i^j x_i^j is minimized.

This generalized knapsack formulation is a clear generalization of the classic 0/1 knapsack. In the latter, each list consists of a single point (s_i, v_i).⁸

The connection between the generalized knapsack and our auction problem is transparent. Each list encodes a bid, representing multiple mutually exclusive quantity intervals, and one can choose any quantity in an interval, but at most one interval can be selected. Choosing interval [u_i^j, u_i^{j+1}) has cost p_i^j per unit. The goal is to procure at least M units of the good at minimum possible cost. The problem has some flavor of the continuous knapsack problem. However, there are two major differences that make our problem significantly more difficult: (1) intervals have boundaries, so to choose interval [u_i^j, u_i^{j+1}) requires that at least u_i^j and at most u_i^{j+1}
units must be taken; and (2) unlike the classic knapsack, we cannot sort the items (bids) by value/size, since different intervals in one list have different unit costs.

3.2 A 2-Approximation Scheme

We begin with a definition. Given an instance of the generalized knapsack, we call each tuple t_i^j = (u_i^j, p_i^j) an anchor. Recall that these tuples represent the breakpoints in the piecewise-constant curve bids. We say that the size of an anchor t_i^j is u_i^j, the minimum number of units available at this anchor's price p_i^j. The cost of the anchor t_i^j is defined to be the minimum total price associated with this tuple, namely, cost(t_i^j) = p_i^j·u_i^j if j < m_i, and cost(t_i^{m_i}) = p_i^{m_i−1}·u_i^{m_i}.

⁸In fact, because of the one-per-list constraint, the generalized problem is closer in spirit to the multiple-choice knapsack problem [9], where the underlying set of items is partitioned into disjoint subsets U1, U2, ..., Uk, and one can choose at most one item from each subset. PTAS do exist for this problem [10], and indeed, one can convert our problem into a huge instance of the multiple-choice knapsack problem by creating one group for each list and putting a (quantity, price) tuple (x, p) into a bidder's group for each possible quantity. However, this conversion explodes the problem size, making it infeasible for all but the most trivial instances.

In a feasible solution {x_1, x_2, ...
, x_n} of the generalized knapsack, we say that an element x_i ≠ 0 is an anchor if x_i = u_i^j for some anchor u_i^j. Otherwise, we say that x_i is midrange. We observe that an optimal knapsack solution can always be constructed so that at most one solution element is midrange. If there are two midrange elements x and x′, for bids from two different agents, with x ≤ x′, then we can increment one and decrement the other until one of them becomes an anchor. See Figure 2 for an example.

LEMMA 1. [Anchor Property] There exists an optimal solution of the generalized knapsack problem with at most one midrange element. All other elements are anchors.

Figure 2: (i) An optimal solution with more than one bid not anchored (2, 3); (ii) an optimal solution with only one bid (3) not anchored.

We use the anchor property to first obtain a polynomial-time 2-approximation scheme. We do this by solving several instances of a restricted generalized-knapsack problem, which we call iKnapsack, in which one element is forced to be midrange for a particular interval. Specifically, suppose element x_ℓ for agent ℓ is forced to lie in its jth range, [u_ℓ^j, u_ℓ^{j+1}), while all other elements, x_1, ...
, x_{ℓ−1}, x_{ℓ+1}, ..., x_n, are required to be anchors, or zero. This corresponds to the restricted problem iKnapsack(ℓ, j), in which the goal is to obtain at least M − u_ℓ^j units with minimum cost. Element x_ℓ is assumed to have already contributed u_ℓ^j units. The value of a solution to iKnapsack(ℓ, j) represents the minimal additional cost to purchase the rest of the units.

We create n − 1 groups of potential anchors, where the ith group contains all the anchors of list i in the generalized knapsack. The group for agent ℓ contains a single element that represents the interval [0, u_ℓ^{j+1} − u_ℓ^j), with the associated unit price p_ℓ^j. This interval represents the excess number of units that can be taken from agent ℓ in iKnapsack(ℓ, j), in addition to the u_ℓ^j units that have already been committed. In any other group, we can choose at most one anchor. The following pseudo-code describes our algorithm for this restriction of the generalized knapsack problem. U is the union of all the tuples in the n groups, including a tuple t_ℓ for agent ℓ. The size of this special tuple is defined as u_ℓ^{j+1} − u_ℓ^j, and its cost is defined as p_ℓ^j·(u_ℓ^{j+1} − u_ℓ^j). R is the number of units that remain to be acquired. S is the set of tuples accepted in the current tentative solution. Best is the best solution found so far. Variable Skip is only used in the proof of correctness.

Algorithm Greedy(ℓ, j):
1. Sort all tuples of U in ascending order of unit price; in case of ties, sort in ascending order of unit quantities.
2. Set mark(i) = 0 for all lists i = 1, 2, ..., n. Initialize R = M − u_ℓ^j, and S = Best = Skip = ∅.
3. Scan the tuples in U in the sorted order. Suppose the next tuple is t_i^k, i.e.
the kth anchor from agent i. If mark(i) = 1, ignore this tuple; otherwise do the following steps:
• if size(t_i^k) > R and i = ℓ, return min{cost(S) + R·p_ℓ^j, cost(Best)};
• if size(t_i^k) > R and cost(t_i^k) ≤ cost(S), return min{cost(S) + cost(t_i^k), cost(Best)};
• if size(t_i^k) > R and cost(t_i^k) > cost(S), add t_i^k to Skip, and set Best to S ∪ {t_i^k} if the cost improves;
• if size(t_i^k) ≤ R, then add t_i^k to S; set mark(i) = 1; subtract size(t_i^k) from R.

The approximation algorithm is very similar to the approximation algorithm for the classic knapsack. Since we wish to minimize the total cost, we consider the tuples in order of increasing unit cost. If the size of tuple t_i^k is smaller than R, then we add it to S, update R, and delete from U all the tuples that belong to the same group as t_i^k. If size(t_i^k) is greater than R, then S along with t_i^k forms a feasible solution. However, this solution can be far from optimal if the size of t_i^k is much larger than R. If the total cost of S and t_i^k is smaller than that of the current best solution, we update Best. One exception to this rule is the tuple t_ℓ. Since this tuple can be taken fractionally, we update Best if the sum of S's cost and the fractional cost of t_ℓ is an improvement. The algorithm terminates in either of the first two cases, or when all tuples have been scanned. In particular, it terminates whenever we find a t_i^k such that size(t_i^k) is greater than R but cost(t_i^k) is less than cost(S), or when we reach the tuple representing agent ℓ and it gives a feasible solution.

LEMMA 2. Suppose A* is an optimal solution of the generalized knapsack, and suppose that element (ℓ, j) is midrange in the optimal solution. Then the cost V(ℓ, j) returned by Greedy(ℓ, j) satisfies: V(ℓ, j) + cost(t_ℓ^j) ≤ 2·cost(A*).

PROOF. Let V(ℓ, j) be the value returned by Greedy(ℓ, j) and let V*(ℓ, j) be the value of an optimal solution for iKnapsack(ℓ, j). Consider the set Skip at the termination
of Greedy(ℓ, j). There are two cases to consider: either some tuple t ∈ Skip is also in V*(ℓ, j), or no tuple in Skip is in V*(ℓ, j).

In the first case, let S_t be the tentative solution S at the time t was added to Skip. Because t ∈ Skip, size(t) > R, so S_t together with t forms a feasible solution, and we have V(ℓ, j) ≤ cost(Best) ≤ cost(S_t) + cost(t). Again, because t ∈ Skip, cost(t) > cost(S_t), and we have V(ℓ, j) < 2·cost(t). On the other hand, since t is included in V*(ℓ, j), we have V*(ℓ, j) ≥ cost(t). These two inequalities imply the desired bound: V*(ℓ, j) ≤ V(ℓ, j) < 2·V*(ℓ, j).

In the second case, imagine a modified instance of iKnapsack(ℓ, j) that excludes all the tuples of the set Skip. Since none of these tuples were included in V*(ℓ, j), the optimal solution for the modified problem is the same as that for the original. Suppose our approximation algorithm returns the value V′(ℓ, j) for this modified instance. Let t′ be the last tuple considered by the approximation algorithm before termination on the modified instance, and let S_{t′} be the corresponding tentative solution set in that step. Since we consider tuples in order of increasing unit price, and none of the tuples are going to be placed in the set Skip, we must have cost(S_{t′}) < V*(ℓ, j), because S_{t′} is the optimal way to obtain size(S_{t′}) units. We also have cost(t′) ≤ cost(S_{t′}), and the following inequalities:

V(ℓ, j) ≤ V′(ℓ, j) ≤ cost(S_{t′}) + cost(t′) < 2·V*(ℓ, j)

The inequality V(ℓ, j) ≤ V′(ℓ, j) follows from the fact that a tuple in the Skip list can only affect Best, not the tentative solutions. Therefore, dropping the tuples in the set Skip can only make the solution worse.

The above argument has shown that the value returned by Greedy(ℓ, j) is within a factor 2 of the optimal solution for iKnapsack(ℓ, j). We now show that the value V(ℓ
, j) plus cost(t_ℓ^j) is a 2-approximation of the original generalized knapsack problem. Let A* be an optimal solution of the generalized knapsack, and suppose that element x_ℓ is midrange, lying in its jth range. Let x_{−ℓ} be the set of the remaining elements, either zero or anchors, in this solution. Furthermore, define x′_ℓ = x_ℓ − u_ℓ^j. Thus,

cost(A*) = cost(x′_ℓ) + cost(t_ℓ^j) + cost(x_{−ℓ})

It is easy to see that (x_{−ℓ}, x′_ℓ) is an optimal solution for iKnapsack(ℓ, j). Since V(ℓ, j) is a 2-approximation of this optimal solution, we have the following inequalities:

V(ℓ, j) + cost(t_ℓ^j) ≤ cost(t_ℓ^j) + 2·(cost(x′_ℓ) + cost(x_{−ℓ}))
≤ 2·(cost(x′_ℓ) + cost(t_ℓ^j) + cost(x_{−ℓ}))
≤ 2·cost(A*)

This completes the proof of Lemma 2.

It is easy to see that, after an initial sorting of the tuples in U, the algorithm Greedy(ℓ, j) takes O(n) time. We have our first polynomial approximation algorithm.

THEOREM 2. A 2-approximation of the generalized knapsack problem can be found in time O(n²), where n is the number of item lists (each of constant length).

PROOF. We run the algorithm Greedy(ℓ, j) once for each tuple (ℓ, j) as a candidate for midrange. There are O(n) tuples, and it suffices to sort them once, so the total cost of the algorithm is O(n²). By Lemma 1, there is an optimal solution with at most one midrange element, so our algorithm will find a 2-approximation, as claimed. The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time is O((nc)²).

3.3 An Approximation Scheme

We now use the 2-approximation algorithm presented in the preceding section to develop a fully polynomial-time approximation scheme (FPTAS) for the generalized knapsack problem. The high-level idea is fairly standard, but the details require technical care. We use a dynamic programming algorithm to solve iKnapsack(ℓ, j) for each possible midrange element, with the 2-approximation algorithm providing an upper bound on the value of
the solution and enabling the use of scaling on the cost dimension of the dynamic programming (DP) table.

Consider, for example, the case in which the midrange element is x_l, falling in the range [u_l^j, u_l^{j+1}). In our FPTAS, rather than using the greedy approximation algorithm to solve iKnapsack(l, j), we construct a dynamic programming table to compute the minimum cost at which at least M − u_l^{j+1} units can be obtained using the remaining n − 1 lists of the generalized knapsack. Let G[i, r] denote the maximum number of units that can be obtained at cost at most r using only the first i lists. Then the following recurrence relation describes how to construct the table:

    G[0, r] = 0
    G[i, r] = max( G[i−1, r],  max_{j ∈ β(i,r)} { G[i−1, r − cost(t_i^j)] + u_i^j } )

where β(i, r) = { j : 1 ≤ j ≤ m_i, cost(t_i^j) ≤ r } is the set of affordable anchors for agent i. By convention, agent i indexes the row and cost r indexes the column.

This dynamic programming algorithm is only pseudo-polynomial, since the number of columns in the table depends on the total cost. However, we can convert it into an FPTAS by scaling the cost dimension. Let A denote the 2-approximation to the generalized knapsack problem, with total cost cost(A), and let ε denote the desired approximation factor. We compute the scaled cost of a tuple t_i^j, denoted scost(t_i^j), as

    scost(t_i^j) = ⌈ n · cost(t_i^j) / (ε · cost(A)) ⌉    (2)

This scaling improves the running time of the algorithm because the number of columns in the modified table is at most n/ε, independent of the total cost. However, the computed solution might not be an optimal solution for the original problem. We show that the error introduced is within a factor ε of the optimal solution. As a prelude to our approximation guarantee, we first show that if two different solutions to the iKnapsack
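problem have equal scaled cost, their true costs differ by very little.

The scaled table itself can be sketched directly from the recurrence. A minimal sketch, assuming the same list-of-(units, cost)-anchors data model as before; the ceiling in the scaling step is an assumption consistent with the bounds used in the analysis below.

```python
import math

def build_dp_table(other_agents, eps, cost_a):
    """Scaled DP table for iKnapsack(l, j).  other_agents[i] lists the
    (units, cost) anchor tuples of each agent except the midrange agent,
    so there are n - 1 lists.  G[i][r] is the maximum number of units
    obtainable from the first i lists at scaled cost at most r, where a
    tuple's cost is scaled to ceil(n * cost / (eps * cost_a)); cost_a is
    the 2-approximation's upper bound on the optimal cost."""
    n = len(other_agents) + 1                 # count the midrange agent too
    scale = lambda c: math.ceil(n * c / (eps * cost_a))
    max_r = math.ceil(n / eps)                # columns after scaling
    G = [[0] * (max_r + 1)]
    for tuples in other_agents:
        prev = G[-1]
        row = prev[:]                         # option: clear this agent at zero
        for (units, cost) in tuples:          # option: clear it at an anchor
            sc = scale(cost)
            for r in range(sc, max_r + 1):
                row[r] = max(row[r], prev[r - sc] + units)
        G.append(row)
    return G
```

As a prelude to the approximation guarantee, we must relate scaled and true costs: Lemma 3 shows that if two different solutions to the iKnapsack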
problem have equal scaled cost, then their original (unscaled) costs cannot differ by more than εcost(A).

LEMMA 3. Let x and y be two distinct feasible solutions of iKnapsack(l, j), excluding their midrange elements. If x and y have equal scaled costs, then their unscaled costs cannot differ by more than εcost(A).

PROOF. Let I_x and I_y denote the indicator functions associated with the anchor vectors x and y; there is a 1 in position I_x[i, k] if x_i^k > 0. Since x and y have equal scaled cost,

    Σ_{i≠l} Σ_k scost(t_i^k) I_x[i, k] = Σ_{i≠l} Σ_k scost(t_i^k) I_y[i, k]    (3)

However, by (2), the scaled costs satisfy the following inequalities:

    (scost(t_i^k) − 1) · εcost(A)/n  ≤  cost(t_i^k)  ≤  scost(t_i^k) · εcost(A)/n    (4)

Substituting the upper bound from (4) for cost(x), the lower bound from (4) for cost(y), and using equality (3) to simplify, we have:

    cost(x) − cost(y) ≤ (εcost(A)/n) Σ_{i≠l} Σ_k I_y[i, k] ≤ εcost(A).

The last inequality uses the fact that at most n components of an indicator vector are non-zero; that is, any feasible solution contains at most n tuples.

Finally, given the dynamic programming table for iKnapsack(l, j), we consider all the entries in the last row of this table, G[n−1, r]. These entries correspond to optimal solutions over all agents except l, at different levels of cost. In particular, we consider the entries that provide at least M − u_l^{j+1} units. Together with a contribution from agent l, we choose the entry in this set that minimizes the total cost, defined as follows:

    cost(G[n−1, r]) + max{ u_l^j, M − G[n−1, r] } · p_l^j,

where cost(·) is the original, unscaled cost associated with entry G[n−1, r]. It is worth noting that, unlike the 2-approximation scheme for iKnapsack(l, j), the value computed with this FPTAS includes the cost of acquiring u_l^j units from l. The following lemma shows that we achieve a (1 + 2ε)-approximation.

LEMMA
4. Suppose A∗ is an optimal solution of the generalized knapsack problem, and suppose that element (l, j) is midrange in this optimal solution. Then the solution A(l, j) obtained by running the scaled dynamic-programming algorithm on iKnapsack(l, j) satisfies

    cost(A(l, j)) ≤ (1 + 2ε) cost(A∗).

PROOF. Let x− denote the vector of the elements in solution A∗ without element l. Then, by definition, cost(A∗) = cost(x−) + p_l^j x_l^j. Let r = scost(x−) be the scaled cost associated with the vector x−. Now consider the dynamic programming table constructed for iKnapsack(l, j), and in particular its entry G[n−1, r]. Let A denote the 2-approximation to the generalized knapsack problem, and let A(l, j) denote the solution returned by the dynamic-programming algorithm. Suppose y− is the solution associated with this entry of our dynamic program; the components of the vector y− are the quantities from the different lists. Since x− and y− have equal scaled costs, by Lemma 3 their unscaled costs are within εcost(A) of each other; that is,

    cost(y−) − cost(x−) ≤ εcost(A).

Now define y_l^j = max{ u_l^j, M − Σ_{i≠l} Σ_k y_i^k }; this is the contribution needed from l to make (y−, y_l^j) a feasible solution. Among all equal-scaled-cost solutions, our dynamic programming table chooses the one with maximum units. Therefore,

    Σ_{i≠l} Σ_k y_i^k ≥ Σ_{i≠l} Σ_k x_i^k,

and so it must be the case that y_l^j ≤ x_l^j. Because (y_l^j, y−) is also a feasible solution, if our algorithm returns a solution with cost cost(A(l, j)), then we must have

    cost(A(l, j)) ≤ cost(y−) + p_l^j y_l^j
                 ≤ cost(x−) + εcost(A) + p_l^j x_l^j
                 ≤ (1 + 2ε) cost(A∗),

where the last step uses the fact that cost(A) ≤ 2 cost(A∗).

Putting this together, our approximation scheme for the generalized knapsack problem will iterate the scheme described above for each choice of the midrange element (l, j),
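and choose the best of the resulting O(n) candidates.

The selection rule over the last DP row, including the midrange contribution, can be sketched as follows. A hedged sketch: `unscaled_cost` stands in for the bookkeeping the real table keeps alongside each cell, and the argument names are our own.

```python
def best_with_midrange(last_row, unscaled_cost, M, u_j, u_j_next, p_j):
    """Scan the last DP row G[n-1][.] for iKnapsack(l, j): among entries
    supplying at least M - u_{j+1} units, minimize
        cost(G[n-1][r]) + max(u_j, M - G[n-1][r]) * p_j,
    the unscaled cost of the entry plus the midrange agent's
    contribution.  unscaled_cost(r) returns the true cost recorded for
    column r (a stand-in for the table's per-cell bookkeeping)."""
    best = None
    for r, units in enumerate(last_row):
        if units < M - u_j_next:
            continue          # the midrange element cannot cover the rest
        total = unscaled_cost(r) + max(u_j, M - units) * p_j
        if best is None or total < best:
            best = total
    return best
```

To summarize: as stated above, our approximation scheme will iterate the scaled dynamic program for each choice of the midrange element (l, j),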
and choose the best solution from among these O(n) candidates. For a given midrange element, the most expensive step in the algorithm is the construction of the dynamic programming table, which can be done in O(n²/ε) time assuming a constant number of intervals per list. Thus, we have the following result.

THEOREM 3. We can compute a (1 + ε)-approximation to the solution of the generalized knapsack problem in worst-case time O(n³/ε).

The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time can be derived by substituting cn for each occurrence of n.

4. COMPUTING VCG PAYMENTS

We now consider the related problem of computing the VCG payments for all the agents. A naive approach requires solving the allocation problem n times, removing each agent in turn. In this section, we show that our approximation scheme for the generalized knapsack can be extended to determine all n payments in total time O(αT log(αn/ε)), where 1 ≤ C(I \ i)/C(I) ≤ α for a constant upper bound α, and T is the complexity of solving the allocation problem once. This α-bound can be justified as a no-monopoly condition, because it bounds the marginal value that a single seller brings to the auction. Similarly, in the forward variation we can compute the VCG payments to each buyer in time O(αT log(αn/ε)), with α bounding the corresponding ratio for all i.

Our overall strategy is to build two dynamic programming tables, forward and backward, for each midrange element (l, j) once. The forward table is built by considering the agents in the order of their indices, whereas the backward table is built by considering them in reverse order. The optimal solution corresponding to C(I \ i) can be broken into two parts: one corresponding to the first (i−1) agents and the other corresponding to the last (n−i) agents. As the (i−1)th row of the forward table
corresponds to the sellers with the first (i−1) indices, an approximation to the first part is contained in the (i−1)th row of the forward table. Similarly, the (n−i)th row of the backward table contains an approximation for the second part. We first present a simple but inefficient way of computing the approximate value of C(I \ i), which illustrates the main idea of our algorithm. We then present an improved scheme, which uses the fact that the elements in the rows are sorted to compute the approximate value more efficiently. In the following, we concentrate on computing an allocation with x_l^j midrange and some agent i ≠ l removed. This is a component in computing an approximation to C(I \ i), the value of the solution to the generalized knapsack without bids from agent i. We begin with the simple scheme.

4.1 A Simple Approximation Scheme

We implement the scaled dynamic programming algorithm for iKnapsack(l, j) with two alternate orderings over the other sellers k ≠ l: one with sellers ordered 1, 2, . . . , n, and one with sellers ordered n, n − 1, . . .
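, 1.

Since both passes run the same scaled dynamic program, they can share one routine. A sketch, assuming the same list-of-(units, cost)-anchors data model as in the earlier sketches; `scale` is the cost-scaling function and `max_r` the number of columns.

```python
def dp_rows(agent_lists, max_r, scale):
    """One pass of the scaled DP: rows[i][r] is the maximum number of
    units obtainable from the first i lists at scaled cost at most r."""
    rows = [[0] * (max_r + 1)]
    for tuples in agent_lists:
        prev = rows[-1]
        row = prev[:]                       # clear this agent at zero...
        for (units, cost) in tuples:        # ...or at one of its anchors
            sc = scale(cost)
            for r in range(sc, max_r + 1):
                row[r] = max(row[r], prev[r - sc] + units)
        rows.append(row)
    return rows

def forward_backward(anchors, l, max_r, scale):
    """Build both tables for midrange agent l: one pass over the other
    agents in index order, one over the same agents in reverse order."""
    others = [a for i, a in enumerate(anchors) if i != l]
    return dp_rows(others, max_r, scale), dp_rows(others[::-1], max_r, scale)
```

In the notation of the text, these two passes are run once per midrange choice, one with sellers ordered 1, 2, . . . , n, and one with sellers ordered n, n − 1, . . .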
, 1. We call the first table the forward table, denoted F_l, and the second the backward table, denoted B_l. The subscript reminds us that agent l is midrange.⁹ In building these tables, we use the same scaling factor as before; namely, the cost of a tuple t_i^j is scaled as

    scost(t_i^j) = ⌈ n · cost(t_i^j) / (ε · cost(A)) ⌉,

where cost(A) is the upper bound on C(I) given by our 2-approximation scheme. In this case, because C(I \ i) can be α times C(I), the scaled value of C(I \ i) can be as large as nα/ε. Therefore, the cost dimension of our dynamic program's tables will be nα/ε.

[Figure 3: Computing VCG payments. The forward table F_l and backward table B_l, each with m = nα/ε columns.]

Now, suppose we want to compute a (1 + ε)-approximation to the generalized knapsack problem restricted to element (l, j) midrange, and further restricted to remove bids from some seller i ≠ l. Call this problem iKnapsack−i(l, j). Recall that the ith row of our DP table stores the best solutions possible using only the first i agents excluding agent l, each of them either cleared at zero or at an anchor. These first i agents form a different subset of agents in the forward and the backward tables. By carefully combining one row of F_l with one row of B_l we can compute an approximation to iKnapsack−i(l, j). We consider the row of F_l that corresponds to solutions constructed from agents {1, 2, . . . , i−1}, skipping agent l. We consider the row of B_l that corresponds to solutions constructed from agents {i+1, i+2, . . .
, n}, again skipping agent l. These rows are labeled F_l(i−1) and B_l(n−i) respectively.¹⁰ The scaled costs for acquiring these units are the column indices of the entries. To solve iKnapsack−i(l, j) we choose one entry from row F_l(i−1) and one from row B_l(n−i) such that their total quantity exceeds M − u_l^{j+1} and their combined cost is minimum over all such combinations. Formally, let g ∈ F_l(i−1) and h ∈ B_l(n−i) denote entries in each row, with size(g) and size(h) denoting the numbers of units, and cost(g) and cost(h) the unscaled costs associated with the entries. Subject to the condition that g and h satisfy size(g) + size(h) > M − u_l^{j+1}, we compute:

    min_{g ∈ F_l(i−1), h ∈ B_l(n−i)} [ cost(g) + cost(h) + p_l^j · max{ u_l^j, M − size(g) − size(h) } ]    (5)

⁹ We could label the tables with both l and j, to indicate that the jth tuple is forced to be midrange, but omit j to avoid clutter.
¹⁰ To be precise, the indices of the rows are (i−2) and (n−i) for F_l and B_l when l < i, and (i−1) and (n−i−1), respectively, when l > i.

LEMMA 5. Suppose A−i is an optimal solution of the generalized knapsack problem without bids from agent i, and suppose that element (l, j) is the midrange element in this optimal solution. Then the expression in Eq. 5, for the restricted problem iKnapsack−i(l, j), computes a (1 + ε)-approximation to A−i.

PROOF. From earlier, we define cost(A−i) = C(I \ i). We can split the optimal solution A−i into three disjoint parts: x_l corresponds to the midrange seller, x_i corresponds to the first i−1 sellers (skipping agent l if l < i), and x−i corresponds to the last n−i sellers (skipping agent l if l > i). We have:

    cost(A−i) = cost(x_i) + cost(x−i) + p_l^j x_l^j.

Let r_i = scost(x_i) and r−i = scost(x−i). Let y_i and y−i be the solution vectors
corresponding to scaled costs r_i and r−i in F_l(i−1) and B_l(n−i), respectively. From Lemma 3 we conclude that

    cost(y_i) + cost(y−i) − cost(x_i) − cost(x−i) ≤ εcost(A),

where cost(A) is the upper bound on C(I) computed with the 2-approximation. Among all equal scaled-cost solutions, our dynamic program chooses the one with maximum units. Therefore we also have

    size(y_i) ≥ size(x_i)   and   size(y−i) ≥ size(x−i),

where we use the shorthand size(x) to denote the total number of units over all tuples in x. Now, define y_l^j = max{ u_l^j, M − size(y_i) − size(y−i) }. From the preceding inequalities, we have y_l^j ≤ x_l^j. Since (y_l^j, y_i, y−i) is also a feasible solution to the generalized knapsack problem without agent i, the value returned by Eq. 5 is at most

    cost(y_i) + cost(y−i) + p_l^j y_l^j ≤ C(I \ i) + εcost(A)
                                        ≤ C(I \ i) + 2ε cost(A∗)
                                        ≤ C(I \ i) + 2ε C(I \ i),

using cost(A) ≤ 2 cost(A∗) and cost(A∗) = C(I) ≤ C(I \ i). This completes the proof.

A naive implementation of this scheme is inefficient because it might check (nα/ε)² pairs of entries for any particular choice of (l, j) and of dropped agent i. In the next section, we present an efficient way to compute Eq. 5, and eventually the VCG payments.

4.2 Improved Approximation Scheme

Our improved approximation scheme for the winner-determination problem without agent i uses the fact that the entries of F_l(i−1) and B_l(n−i) are sorted; specifically, both unscaled cost and quantity (i.e.
size) increase from left to right. As before, let g and h denote generic entries in F_l(i−1) and B_l(n−i) respectively. To compute Eq. 5, we consider all pairs of entries that satisfy the condition size(g) + size(h) > M − u_l^{j+1} and divide them into two disjoint sets; for each set we compute the best solution, and then take the better of the two.

[Case I: size(g) + size(h) ≥ M − u_l^j] The problem reduces to

    min_{g ∈ F_l(i−1), h ∈ B_l(n−i)} [ cost(g) + cost(h) + p_l^j u_l^j ]    (6)

We define a pair (g, h) to be feasible if size(g) + size(h) ≥ M − u_l^j. To compute Eq. 6, we perform simultaneous forward and backward walks on F_l(i−1) and B_l(n−i), respectively: we start from the smallest index of F_l(i−1) and move right, and from the highest index of B_l(n−i) and move left. Let (g, h) be the current pair. If (g, h) is feasible, we decrement B_l's pointer (move backward); otherwise we increment F_l's pointer (move forward). The feasible pairs found during the walk are used to compute Eq. 6. The complexity of this step is linear in the size of F_l(i−1), which is O(nα/ε).

[Case II: M − u_l^{j+1} ≤ size(g) + size(h) ≤ M − u_l^j] The problem reduces to

    min_{g ∈ F_l(i−1), h ∈ B_l(n−i)} [ cost(g) + cost(h) + p_l^j (M − size(g) − size(h)) ]

To compute the above, we transform the problem using a modified cost, defined as:

    mcost(g) = cost(g) − p_l^j · size(g)
    mcost(h) = cost(h) − p_l^j · size(h)

The new problem is to compute

    min_{g ∈ F_l(i−1), h ∈ B_l(n−i)} [ mcost(g) + mcost(h) + p_l^j M ]    (7)

The modified cost simplifies the problem, but unfortunately the entries of F_l(i−1) and B_l(n−i) are no longer sorted with respect to mcost. However, the entries are still sorted in quantity, and we use this property to compute Eq. 7. Call a pair (g,
h) feasible if M − u_l^{j+1} ≤ size(g) + size(h) ≤ M − u_l^j. Define the feasible set of g as the set of entries h ∈ B_l(n−i) that are feasible given g. As the entries are sorted by quantity, the feasible set of g is a contiguous subset of B_l(n−i), and it shifts left as g increases.

[Figure 4: The feasible set of g = 3, defined on B_l(n−i), is {2, 3, 4} when M − u_l^{j+1} = 50 and M − u_l^j = 60. Begin and End represent the start and end pointers of the feasible set.]

We can therefore compute Eq. 7 by simultaneous forward and backward walks on F_l(i−1) and B_l(n−i), respectively. We walk on B_l(n−i), starting from the highest index, using two pointers, Begin and End, to mark the start and end of the current feasible set. We maintain the feasible set as a min-heap keyed on modified cost. To update the feasible set when we increment F_l's pointer (move forward), we walk left on B_l, first using End to remove entries that are no longer feasible and then using Begin to add newly feasible entries. For a given g, the only entry of g's feasible set that we need to consider is the one with minimum modified cost, which can be read off the min-heap in constant time. Thus the main cost of the computation lies in the heap updates. Since any entry is added or deleted at most once, there are O(nα/ε) heap updates, and the time complexity of this step is O((nα/ε) log(nα/ε)).

4.3 Collecting the Pieces

The algorithm works as follows. First, using the 2-approximation algorithm, we compute an upper bound on C(I). We use this bound to scale down the tuple costs. Using the scaled costs, we build the forward and backward tables corresponding to each tuple (l, j). The forward tables are used to compute C(I). To compute C(I \ i), we iterate over all the
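possible midrange tuples, combining rows of the forward and backward tables as described.

The case I combination (Eq. 6) is the simultaneous two-pointer walk described above. A sketch, representing each row entry as a (units, unscaled_cost) pair, nondecreasing in both coordinates from left to right; the flat-list row representation is our own illustration.

```python
def best_case1_pair(F_row, B_row, needed):
    """Two-pointer walk for case I of Eq. 6: F_row and B_row are lists
    of (units, unscaled_cost) entries, both sorted so that units and
    cost increase left to right.  Returns the minimum cost(g) + cost(h)
    over pairs with size(g) + size(h) >= needed, or None if no pair is
    feasible."""
    best = None
    f, b = 0, len(B_row) - 1
    while f < len(F_row) and b >= 0:
        gu, gc = F_row[f]
        hu, hc = B_row[b]
        if gu + hu >= needed:      # feasible: record, then try a cheaper h
            if best is None or gc + hc < best:
                best = gc + hc
            b -= 1
        else:                      # infeasible: need more units from F's side
            f += 1
    return best
```

Case II replaces the running minimum with the min-heap on modified cost, with Begin and End maintaining the feasible window as above. Returning to the overall algorithm: to compute C(I \ i), we iterate over all the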
possible midrange tuples and use the corresponding forward and backward tables to compute the locally optimal solution using the above scheme. Among all the locally optimal solutions we choose the one with minimum total cost.

The most expensive step in the algorithm is the computation of C(I \ i). The time complexity of this step is O((n²α/ε) log(nα/ε)), as we must iterate over all O(n) choices of t_l^j, for all l ≠ i, each time using the above scheme to compute Eq. 5. In the worst case, we might need to compute C(I \ i) for all n sellers, in which case the final complexity of the algorithm is O((n³α/ε) log(nα/ε)).

THEOREM 4. We can compute an (ε/(1+ε))-strategyproof approximation to the VCG mechanism in the forward and reverse multi-unit auctions in worst-case time O((n³α/ε) log(nα/ε)).

It is interesting to recall that T = O(n³/ε) is the time complexity of the FPTAS to the generalized knapsack problem with all agents. Our combined scheme computes an approximation to the complete VCG mechanism, including payments to O(n) agents, in time O(T log(n/ε)), taking the no-monopoly parameter α as a constant. Thus, our algorithm performs much better than the naive scheme, which computes the VCG payment for each agent by solving a new instance of the generalized knapsack problem. The speedup comes from the way we solve iKnapsack−i(l, j): computing it by creating a new dynamic programming table would take O(n²/ε) time, but by using the forward and backward tables the complexity is reduced to O((n/ε) log(n/ε)).

We can further improve the time complexity of our algorithm by computing Eq. 5 more efficiently. Currently, the algorithm uses a heap, which has logarithmic update time; in the worst case there can be two heap updates per entry, which makes the time complexity superlinear. If we can compute Eq. 5 in
linear time, then the complexity of computing the VCG payments will be the same as the complexity of solving a single generalized knapsack problem.

5. CONCLUSIONS

We presented a fully polynomial-time approximation scheme for the single-good multi-unit auction problem, using a marginal-decreasing, piecewise-constant bidding language. Our scheme is both approximately efficient and approximately strategyproof within any specified factor ε > 0. As such, it is an example of a computationally tractable ε-dominance result, as well as an example of a non-trivial but approximable allocation problem. It is particularly interesting that we are able to compute the payments to n agents in a VCG-based mechanism in worst-case time O(T log n), where T is the time complexity of computing the solution to a single allocation problem.

6. REFERENCES
[1] L. M. Ausubel and P. R. Milgrom. Ascending auctions with package bidding. Frontiers of Theoretical Economics, 1:1-42, 2002.
[2] S. Bikchandani, S. de Vries, J. Schummer, and R. V. Vohra. Linear programming and Vickrey auctions. Technical report, Anderson Graduate School of Management, U.C.L.A., 2001.
[3] S. Bikchandani and J. M. Ostroy. The package assignment model. Journal of Economic Theory, 2002. Forthcoming.
[4] K. Chatterjee and W. Samuelson. Bargaining under incomplete information. Operations Research, 31:835-851, 1983.
[5] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.
[6] S. de Vries and R. V. Vohra. Combinatorial auctions: A survey. Informs Journal on Computing, 2002. Forthcoming.
[7] M. Eso, S. Ghosh, J. R. Kalagnanam, and L. Ladanyi. Bid evaluation in procurement auctions with piece-wise linear supply curves. Technical report, IBM T.J. Watson Research Center, 2001. In preparation.
[8] J. Feigenbaum and S. Shenker. Distributed algorithmic mechanism design: Recent results and future directions. In Proceedings of the 6th International Workshop on Discrete Algorithms and Methods for Mobile
Computing and Communications, pages 1-13, 2002.
[9] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.
[10] G. V. Gens and E. V. Levner. Computational complexity of approximation algorithms for combinatorial problems. In Mathematical Foundations of Computer Science, pages 292-300, 1979.
[11] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[12] J. R. Kalagnanam, A. J. Davenport, and H. S. Lee. Computational aspects of clearing continuous call double auctions with assignment constraints and indivisible demand. Electronic Commerce Journal, 1(3):221-238, 2001.
[13] V. Krishna. Auction Theory. Academic Press, 2002.
[14] V. Krishna and M. Perry. Efficient mechanism design. Technical report, Pennsylvania State University, 1998. Available at: http://econ.la.psu.edu/~vkrishna/vcg18.ps.
[15] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. JACM, 49(5):577-602, September 2002.
[16] R. B. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.
[17] R. B. Myerson and M. A. Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 28:265-281, 1983.
[18] N. Nisan and A. Ronen. Computationally feasible VCG mechanisms. In ACM-EC, pages 242-252, 2000.
[19] D. C. Parkes, J. R. Kalagnanam, and M. Eso. Achieving budget-balance with Vickrey-based payment schemes in exchanges. In IJCAI, 2001.
[20] M. H. Rothkopf, A. Pekeč, and R. M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44(8):1131-1147, 1998.
[21] J. Schummer. Almost dominant strategy implementation. Technical report, MEDS Department, Kellogg Graduate School of Management, 2001.
[22] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.
\u03b5), where each bid has a fixed number of piecewise constant pieces.\nGiven this scheme, a straightforward computation of the VCG payments to all n agents requires time O (nT).\nThis upper-bound tends to 1 as the number of sellers increases.\nThe approximate VCG mechanism is (\u03b5 1 + \u03b5) - strategyproof for an approximation to within (1 + ~) of the optimal allocation.\nThis means that a bidder can gain at most (\u03b5 1 + \u03b5) V from a nontruthful bid, where V is the total surplus from the efficient allocation.\nSection 2 formally defines the forward and reverse auctions, and defines the VCG mechanisms.\nWe also prove our claims about \u03b5-strategyproofness.\nSection 3 provides the generalized knapsack formulation for the multi-unit allocation problems and introduces the fully polynomial time approximation scheme.\nSection 4 defines the approximation scheme for the payments in the VCG mechanism.\nSection 5 concludes.\n1.1 Related Work\nThere has been considerable interest in recent years in characterizing polynomial-time or approximable special cases of the general combinatorial allocation problem, in which there are multiple different items.\nThe combinatorial allocation problem (CAP) is both NP-complete and inapproximable (e.g. [6]).\nWe identify a non-trivial but approximable allocation problem with an expressive exclusiveor bidding language--the bid taker in our setting is allowed to accept at most one point on the bid curve.\nThe idea of using approximations within mechanisms, while retaining either full-strategyproofness or \u03b5-dominance has received some previous attention.\nFor instance, Lehmann et al. 
[15] propose a greedy and strategyproof approximation to a single-minded combinatorial auction problem.\nFeigenminimum-lot size constraints from the buyers.\nbaum & Shenker [8] have defined the concept of strategically faithful approximations, and proposed the study of approximations as an important direction for algorithmic mechanism design.\nEso et al. [7] have studied a similar procurement problem, but for a different volume discount model.\nThis earlier work formulates the problem as a general mixed integer linear program, and gives some empirical results on simulated data.\nKalagnanam et al. [12] address double auctions, where multiple buyers and sellers trade a divisible good.\nThe focus of this paper is also different: it investigates the equilibrium prices using the demand and supply curves, whereas our focus is on efficient mechanism design.\nAusubel [1] has proposed an ascending-price multi-unit auction for buyers with marginal-decreasing values [1], with an interpretation as a primal-dual algorithm [2].\n5.\nCONCLUSIONS\nWe presented a fully polynomial-time approximation scheme for the single-good multi-unit auction problem, using marginal decreasing piecewise constant bidding language.\nOur scheme is both approximately efficient and approximately strategyproof within any specified factor \u03b5> 0.\nAs such it is an example of computationally tractable \u03b5-dominance result, as well as an example of a non-trivial but approximable allocation problem.\nIt is particularly interesting that we are able to compute the payments to n agents in a VCG-based mechanism in worst-case time O (T log n), where T is the time complexity to compute the solution to a single allocation problem.","lvl-2":"Approximately-Strategyproof and Tractable Multi-Unit Auctions\nABSTRACT\nWe present an approximately-efficient and approximatelystrategyproof auction mechanism for a single-good multi-unit allocation problem.\nThe bidding language in our auctions allows 
marginal-decreasing piecewise constant curves. First, we develop a fully polynomial-time approximation scheme for the multi-unit allocation problem, which computes a (1 + ε)-approximation in worst-case time T = O(n³/ε), given n bids each with a constant number of pieces. Second, we embed this approximation scheme within a Vickrey-Clarke-Groves (VCG) mechanism and compute payments to n agents for an asymptotic cost of O(T log n). The maximal possible gain from manipulation to a bidder in the combined scheme is bounded by (ε/(1 + ε))V, where V is the total surplus in the efficient outcome.

1. INTRODUCTION

In this paper we present a fully polynomial-time approximation scheme for the single-good multi-unit auction problem. Our scheme is both approximately efficient and approximately strategyproof. The auction settings considered in our paper are motivated by recent trends in electronic commerce; for instance, corporations are increasingly using auctions for their strategic sourcing. We consider both a reverse auction variation and a forward auction variation, and propose a compact and expressive bidding language that allows marginal-decreasing piecewise constant curves.

In the reverse auction, we consider a single buyer with a demand for M units of a good and n suppliers, each with a marginal-decreasing piecewise-constant cost function. In addition, each supplier can also express an upper bound, or capacity constraint, on the number of units she can supply. The reverse variation models, for example, a procurement auction to obtain raw materials or other services (e.g.
circuit boards, power supplies, toner cartridges), with flexible-sized lots. In the forward auction, we consider a single seller with M units of a good and n buyers, each with a marginal-decreasing piecewise-constant valuation function. A buyer can also express a lower bound, or minimum lot size, on the number of units she demands. The forward variation models, for example, an auction to sell excess inventory in flexible-sized lots.

We consider the computational complexity of implementing the Vickrey-Clarke-Groves [22, 5, 11] mechanism for the multi-unit auction problem. The Vickrey-Clarke-Groves (VCG) mechanism has a number of interesting economic properties in this setting, including strategyproofness, such that truthful bidding is a dominant strategy for buyers in the forward auction and sellers in the reverse auction, and allocative efficiency, such that the outcome maximizes the total surplus in the system. However, as we discuss in Section 2, the application of the VCG-based approach is limited in the reverse direction to instances in which the total payments to the sellers are less than the value of the outcome to the buyer. Otherwise, either the auction must run at a loss in these instances, or the buyer cannot be expected to voluntarily choose to participate. This is an example of the budget-deficit problem that often occurs in efficient mechanism design [17].

The computational problem is interesting because, even with marginal-decreasing bid curves, the underlying allocation problem turns out to be (weakly) intractable. For instance, the classic 0/1 knapsack is a special case of this problem.¹

¹However, the problem can be solved easily by a greedy scheme if we remove all capacity constraints from the sellers and all minimum-lot-size constraints from the buyers.

We model the allocation problem as a novel and interesting generalization of the classic knapsack problem, and develop a fully polynomial-time approximation scheme, computing a (1 + ε)-approximation in worst-case time T = O(n³/
ε), where each bid has a fixed number of piecewise-constant pieces. Given this scheme, a straightforward computation of the VCG payments to all n agents requires time O(nT). We compute approximate VCG payments in worst-case time O(αT log(αn/ε)), where α is a constant that quantifies a reasonable "no-monopoly" assumption. Specifically, in the reverse auction, suppose that C(I) is the minimal cost for procuring M units with the set I of all sellers, and C(I\i) is the minimal cost without seller i. Then, the constant α is defined as an upper bound on the ratio C(I\i)/C(I), over all sellers i. This upper bound tends to 1 as the number of sellers increases.

The approximate VCG mechanism is (ε/(1 + ε))-strategyproof for an approximation to within (1 + ε) of the optimal allocation. This means that a bidder can gain at most (ε/(1 + ε))V from a non-truthful bid, where V is the total surplus from the efficient allocation. As such, this is an example of a computationally-tractable ε-dominance result.² In practice, we can have good confidence that bidders without good information about the bidding strategies of other participants will have little to gain from attempts at manipulation.

Section 2 formally defines the forward and reverse auctions, and defines the VCG mechanisms. We also prove our claims about ε-strategyproofness. Section 3 provides the generalized knapsack formulation for the multi-unit allocation problems and introduces the fully polynomial-time approximation scheme. Section 4 defines the approximation scheme for the payments in the VCG mechanism. Section 5 concludes.

1.1 Related Work

There has been considerable interest in recent years in characterizing polynomial-time or approximable special cases of the general combinatorial allocation problem, in which there are multiple different items. The combinatorial allocation problem (CAP) is both NP-complete and
inapproximable (e.g. [6]). Although some polynomial-time cases have been identified for the CAP [6, 20], introducing an expressive exclusive-or bidding language quickly breaks these special cases. We identify a non-trivial but approximable allocation problem with an expressive exclusive-or bidding language: the bid taker in our setting is allowed to accept at most one point on the bid curve.

The idea of using approximations within mechanisms, while retaining either full strategyproofness or ε-dominance, has received some previous attention. For instance, Lehmann et al. [15] propose a greedy and strategyproof approximation to a single-minded combinatorial auction problem. Nisan & Ronen [18] discussed approximate VCG-based mechanisms, but either appealed to particular maximal-in-range approximations to retain full strategyproofness, or to resource-bounded agents with information or computational limitations on the ability to compute strategies.

²However, this may not be an example of what Feigenbaum & Shenker refer to as a tolerably-manipulable mechanism [8], because we have not tried to bound the effect of such a manipulation on the efficiency of the outcome. VCG mechanisms do have a natural "self-correcting" property, though, because a useful manipulation to an agent is a reported value that improves the total value of the allocation based on the reports of other agents and the agent's own value.

Feigenbaum & Shenker [8] have defined the concept of strategically faithful approximations, and proposed the study of approximations as an important direction for algorithmic mechanism design. Schummer [21] and Parkes et al. [19] have previously considered ε-dominance, in the context of economic impossibility results, for example in combinatorial exchanges. Eso et al.
[7] have studied a similar procurement problem, but for a different volume discount model. This earlier work formulates the problem as a general mixed-integer linear program, and gives some empirical results on simulated data. Kalagnanam et al. [12] address double auctions, where multiple buyers and sellers trade a divisible good. The focus of that paper is also different: it investigates equilibrium prices using the demand and supply curves, whereas our focus is on efficient mechanism design. Ausubel [1] has proposed an ascending-price multi-unit auction for buyers with marginal-decreasing values, with an interpretation as a primal-dual algorithm [2].

2. APPROXIMATELY-STRATEGYPROOF VCG AUCTIONS

In this section, we first describe the marginal-decreasing piecewise bidding language that is used in our forward and reverse auctions. Continuing, we introduce the VCG mechanism for the problem and the ε-dominance results for approximations to VCG outcomes. We also discuss the economic properties of VCG mechanisms in these forward and reverse multi-unit auction settings.

2.1 Marginal-Decreasing Piecewise Bids

We provide a piecewise-constant and marginal-decreasing bidding language. This bidding language is expressive for a natural class of valuation and cost functions: fixed unit prices over intervals of quantities. See Figure 1 for an example. In addition, we slightly relax the marginal-decreasing requirement to allow: a bidder in the forward auction to state a minimal purchase amount, such that she has zero value for quantities smaller than that amount; and a seller in the reverse auction to state a capacity constraint, such that she has an effectively infinite cost to supply quantities in excess of a particular amount.

Figure 1: Marginal-decreasing, piecewise constant bids. In the forward auction bid, the bidder offers $10 per unit for quantity in the range [5, 10), $8 per unit in the range [10, 20), and $7 in the range [20, 25]. Her valuation
is zero for quantities outside the range [5, 25]. In the reverse auction bid, the cost of the seller is ∞ outside the range [10, 25].

In detail, in a forward auction, a bid from buyer i can be written as a list of (quantity-range, unit-price) tuples, ((u_i^1, p_i^1), ..., (u_i^{m-1}, p_i^{m-1})), with an upper bound u_i^m on the quantity. The interpretation is that the bidder's valuation in the (semi-open) quantity range [u_i^j, u_i^{j+1}) is p_i^j for each unit. Additionally, it is assumed that the valuation is 0 for quantities less than u_i^1 as well as for quantities more than u_i^m. This is implemented by adding two dummy bid tuples, with zero prices in the range [0, u_i^1) and (u_i^m, ∞). We interpret the bid list as defining a price function, p_{bid,i}(q) = q·p_i^j, if u_i^j ≤ q < u_i^{j+1}.

In the reverse auction, the buyer has value V > 0 to purchase all M units, but zero value otherwise. To simplify the mechanism design problem we assume that the buyer will truthfully announce this value to the mechanism.⁴ The winner-determination problem in the reverse auction is to determine the allocation, x*, that minimizes the cost to the buyer, or forfeits trade if the minimal cost is greater than the value, V.

Let C(I) denote the minimal cost given bids from all sellers, and let C(I\i) denote the minimal cost without bids from seller i. We can assume, without loss of generality, that there is an efficient trade and V > C(I). Otherwise, the efficient outcome is no trade, and the outcome of the VCG mechanism is no trade and no payments. The VCG mechanism implements the outcome x* that minimizes cost based on bids from all sellers, and then provides payment p_{vcg,i} = p_{ask,i}(x*_i) + [V − C(I) − max(0, V − C(I\i))] to each seller. The total payment is collected from the buyer. Again, in equilibrium each seller's payoff is exactly the marginal value that the seller contributes to the economic efficiency of the system; in the simple case that V > C(I\i) for all sellers i, this is precisely C(I\i)
− C(I). Although the VCG mechanism remains strategyproof for sellers in the reverse direction, its applicability is limited to cases in which the total payments to the sellers are less than the buyer's value. Otherwise, there will be instances in which the buyer will not choose to voluntarily participate in the mechanism, based on its own value and its beliefs about the costs of sellers. This leads to a loss in efficiency when the buyer chooses not to participate, because efficient trades are missed.

This problem with the size of the payments does not occur in simple single-item reverse auctions, or even in multi-unit reverse auctions with a buyer that has a constant marginal valuation for each additional item that she procures.⁵ Intuitively, the problem occurs in the reverse multi-unit setting because the buyer demands a fixed number of items, and has zero value without them. This leads to the possibility of the trade being contingent on the presence of particular, so-called "pivotal" sellers. Define a seller i as pivotal if C(I) ≤ V but C(I\i) > V. In words, there would be no efficient trade without the seller. Any time there is a pivotal seller, the VCG payments to that seller allow her to extract all of the surplus, and the payments are too large to sustain with the buyer's value unless this is the only winning seller.

Concretely, we have this participation problem in the reverse auction when the total payoff to the sellers, in equilibrium, exceeds the total payoff from the efficient allocation. As stated above, first notice that we require V > C(I\i) for all sellers i. In other words, there must be no pivotal sellers. Given this, it is then necessary and sufficient that:

V − C(I) ≥ Σ_i [C(I\i) − C(I)]    (1)

⁵To make the reverse auction symmetric with the forward direction, we would need a buyer with a constant marginal value to buy the first M units, and zero value for additional units. The payments to the sellers would never exceed the buyer's value in this case. Conversely, to make
the forward auction symmetric with the reverse auction, we would need a seller with a constant (and high) marginal cost to sell anything less than the first M units, and then a low (or zero) marginal cost. The total payments received by the seller can be less than the seller's cost for the outcome in this case.

In words, the surplus of the efficient allocation must be greater than the total marginal surplus provided by each seller.⁶ Consider an example with 3 agents {1, 2, 3}, and V = 150 and C(123) = 50. Condition (1) holds when C(12) = C(23) = 70 and C(13) = 100, but not when C(12) = C(23) = 80 and C(13) = 100. In the first case, the agent payoffs π = (π_0, π_1, π_2, π_3), where 0 is the buyer, are (10, 20, 50, 20). In the second case, the payoffs are π = (−10, 30, 50, 30). One thing we do know, because the VCG mechanism will maximize the payoff to the buyer across all efficient mechanisms [14], is that whenever Eq. 1 is not satisfied there can be no efficient auction mechanism.⁷

2.3 ε-Strategyproofness

We now consider the same VCG mechanism, but with an approximation scheme for the underlying allocation problem. We derive an ε-strategyproofness result, which bounds the maximal gain in payoff that an agent can expect to achieve through a unilateral deviation from a simple truth-revealing strategy. We describe the result for the forward auction direction, but it is quite a general observation. As before, let V(Z) denote the value of the optimal solution to the allocation problem with truthful bids from all agents, and V(Z\i) denote the value of the optimal solution computed without bids from agent i.
Let V̂(Z) and V̂(Z\i) denote the values of the allocations computed with an approximation scheme, and assume that the approximation satisfies:

(1 + ε)V̂(Z) ≥ V(Z) and (1 + ε)V̂(Z\i) ≥ V(Z\i),

for some ε > 0. We provide such an approximation scheme for our setting later in the paper. Let x̂ denote the allocation implemented by the approximation scheme. The payoff to agent i, for announcing valuation v̂_i, is:

v_i(x̂_i) + Σ_{j≠i} v̂_j(x̂_j) − V̂(Z\i).

The final term is independent of the agent's announced value, and can be ignored in an incentive analysis. However, agent i can try to improve its payoff through the effect of its announced value on the allocation x̂ implemented by the mechanism. In particular, agent i wants the mechanism to select x̂ to maximize the sum of its true value, v_i(x̂_i), and the reported value of the other agents, Σ_{j≠i} v̂_j(x̂_j). If the mechanism's allocation algorithm is optimal, then all the agent needs to do is truthfully state its value and the mechanism will do the rest. However, faced with an approximate allocation algorithm, the agent can try to improve its payoff by announcing a value that corrects for the approximation, and causes the approximation algorithm to implement the allocation that exactly maximizes the total reported value of the other agents together with its own actual value [18].

⁶This condition is implied by the "agents are substitutes" requirement [3], which has received some attention in the combinatorial auction literature because it characterizes the case in which VCG payments can be supported in a competitive equilibrium. Useful characterizations of conditions that satisfy agents-are-substitutes, in terms of the underlying valuations of agents, have proved quite elusive.

⁷Moreover, although there is a small literature on maximally-efficient mechanisms subject to requirements of voluntary participation and budget balance (i.e. with the mechanism neither introducing nor removing money), analytic results are only known for simple problems (e.g.
[16, 4]).

We can now analyze the best possible gain from manipulation to an agent in our setting. We first assume that the other agents are truthful, and then relax this. In both cases, the maximal benefit to agent i occurs when the initial approximation is worst-case. With truthful reports from other agents, this occurs when the value of the choice x̂ is V(Z)/(1 + ε). Then, an agent could hope to receive an improved payoff of:

V(Z) − V(Z)/(1 + ε) = (ε/(1 + ε)) V(Z).

This is possible if the agent is able to select a reported type that corrects the approximation algorithm, and makes the algorithm implement the allocation with value V(Z). Thus, if other agents are truthful, then with a (1 + ε)-approximation scheme for the allocation problem, no agent can improve its payoff by more than a factor ε/(1 + ε) of the value of the optimal solution. The analysis is very similar when the other agents are not truthful. In this case, an individual agent can improve its payoff by no more than a factor ε/(1 + ε) of the value of the optimal solution given the values reported by the other agents. Let V in the following theorem denote the total value of the efficient allocation, given the reported values of agents j ≠ i and the true value of agent i.

Notice that we did not need to bound the error on the allocation problems without each agent, because the ε-strategyproofness result follows from the accuracy of the first term in the VCG payment and is independent of the accuracy of the second term. However, the accuracy of the solution to the problem without each agent is important to implement a good approximation to the revenue properties of the VCG mechanism.

3. THE GENERALIZED KNAPSACK PROBLEM

In this section, we design a fully polynomial-time approximation scheme for the generalized knapsack problem, which models the winner-determination problem for the VCG-based multi-unit auctions. We describe our results for the reverse auction variation, but the formulation is
completely symmetric for the forward auction. In describing our approximation scheme, we begin with a simple property (the Anchor property) of an optimal knapsack solution. We use this property to develop an O(n²)-time 2-approximation for the generalized knapsack. In turn, we use this basic approximation to develop our fully polynomial-time approximation scheme (FPTAS).

One of the major appeals of our piecewise bidding language is its compact representation of the bidders' valuation functions. We strive to preserve this, and present an approximation scheme that depends only on the number of bidders, and not on the maximum quantity, M, which can be very large in realistic procurement settings. The FPTAS implements a (1 + ε)-approximation to the optimal solution x* in worst-case time T = O(n³/ε), where n is the number of bidders, and where we assume that the piecewise bid for each bidder has O(1) pieces. The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time can be derived by substituting nc for each occurrence of n.

3.1 Preliminaries

Before we begin, let us recall the classic 0/1 knapsack problem: we are given a set of n items, where item i has value v_i and size s_i, and a knapsack of capacity M; all sizes are integers. The goal is to determine a subset of items of maximum value with total size at most M. Since we want to focus on a reverse auction, the equivalent knapsack problem will be to choose a set of items with minimum value (i.e.
cost) whose size exceeds M. The generalized knapsack problem of interest to us can be defined as follows:

Generalized Knapsack:

Instance: A target M, and a set of n lists, where the ith list has the form ((u_i^1, p_i^1), ..., (u_i^m, p_i^m)), where the u_i^j are increasing with j and the p_i^j are decreasing with j, and u_i^j, p_i^j, M are positive integers.

Problem: Determine a set of integers x_i^j such that:
1. (One per list) At most one x_i^j is non-zero for any i;
2. (Membership) x_i^j ≠ 0 implies x_i^j ∈ [u_i^j, u_i^{j+1});
3. (Target) Σ_i Σ_j x_i^j ≥ M; and
4. (Objective) Σ_i Σ_j p_i^j x_i^j is minimized.

This generalized knapsack formulation is a clear generalization of the classic 0/1 knapsack. In the latter, each list consists of a single point (s_i, v_i).⁸ The connection between the generalized knapsack and our auction problem is transparent. Each list encodes a bid, representing multiple mutually exclusive quantity intervals, and one can choose any quantity in an interval, but at most one interval can be selected. Choosing interval [u_i^j, u_i^{j+1}) has cost p_i^j per unit. The goal is to procure at least M units of the good at minimum possible cost.

The problem has some flavor of the continuous knapsack problem. However, there are two major differences that make our problem significantly more difficult: (1) intervals have boundaries, and so choosing interval [u_i^j, u_i^{j+1}) requires that at least u_i^j and at most u_i^{j+1} units be taken; (2) unlike the classic knapsack, we cannot sort the items (bids) by value/size, since different intervals in one list have different unit costs.

3.2 A 2-Approximation Scheme

We begin with a definition. Given an instance of the generalized knapsack, we call each tuple t_i^j = (u_i^j, p_i^j) an anchor. Recall that these tuples represent the breakpoints in the piecewise-constant curve bids. We say that the size of an anchor t_i^j is u_i^j,

⁸In fact, because of the "one per list" constraint, the generalized problem is closer in spirit to the multiple-choice knapsack problem [9], where the
underlying set of items is partitioned into disjoint subsets U_1, U_2, ..., U_k, and one can choose at most one item from each subset. A PTAS does exist for this problem [10], and indeed, one can convert our problem into a huge instance of the multiple-choice knapsack problem by creating one group for each list: put a (quantity, price) tuple (x, p) for each possible quantity for a bidder into her group (subset). However, this conversion explodes the problem size, making it infeasible for all but the most trivial instances.

the minimum number of units available at this anchor's price p_i^j. The cost of the anchor t_i^j is defined to be the minimum total price associated with this tuple, namely, cost(t_i^j) = p_i^j u_i^j if j R, and S_t together with t forms a feasible solution, and we have:

In the second case, imagine a modified instance of iKnapsack(l, j) which excludes all the tuples of the set Skip. Since none of these tuples were included in V*(l, j), the optimal solution for the modified problem should be the same as that for the original. Suppose our approximation algorithm returns the value V′(l, j) for this modified instance. Let t′ be the last tuple considered by the approximation algorithm before termination on the modified instance, and let S_{t′} be the corresponding tentative solution set at that step. Since we consider tuples in order of increasing per-unit price, and none of the tuples are going to be placed in the set Skip, we must have cost(S_{t′}) 0. Suppose x and y have equal scaled cost. By (2), the scaled costs satisfy the following inequalities: Substituting the upper bound on scaled cost from (4) for cost(x), the lower bound on scaled cost from (4) for cost(y), and using equality (3) to simplify, we have: The last inequality uses the fact that at most n components of an indicator vector are non-zero; that is, any feasible solution contains at most n tuples.

Finally, given the dynamic programming table for iKnapsack(l, j), we
consider all the entries in the last row of this table, G[n − 1, r]. These entries correspond to optimal solutions with all agents except ℓ, for different levels of cost. In particular, we consider the entries that provide at least M − u_ℓ^{j+1} units. Together with a contribution from agent ℓ, we choose the entry in this set that minimizes the total cost, defined as follows:
PROOF. Let x_{−ℓ} denote the vector of the elements in solution A* without element ℓ. Then, by definition, cost(A*) = cost(x_{−ℓ}) + p_ℓ^j x_ℓ^j. Let r = scost(x_{−ℓ}) be the scaled cost associated with the vector x_{−ℓ}. Now consider the dynamic programming table constructed for iKnapsack(ℓ, j), and consider its entry G[n − 1, r]. Let A denote the 2-approximation to the generalized knapsack problem, and A(ℓ, j) denote the solution from the dynamic-programming algorithm. Suppose y_{−ℓ} is the solution associated with this entry in our dynamic program; the components of the vector y_{−ℓ} are the quantities from the different lists. Since both x_{−ℓ} and y_{−ℓ} have equal scaled costs, by Lemma 3 their unscaled costs are within ε·cost(A) of each other; that is, |cost(x_{−ℓ}) − cost(y_{−ℓ})| ≤ ε·cost(A).
Now, define y_ℓ^j = max{u_ℓ^j, M − Σ_{i≠ℓ} y_i^{j_i}}; this is the contribution needed from ℓ to make (y_{−ℓ}, y_ℓ^j) a feasible solution. Among all the equal cost solutions, our dynamic programming table chooses the one with maximum units. Therefore, ... where cost(·) is the original, unscaled cost associated with entry G[n − 1, r]. It is worth noting that, unlike the 2-approximation scheme for iKnapsack(ℓ, j), the value computed with this FPTAS includes the cost to acquire u_ℓ^j units from ℓ. The following lemma shows that we achieve a (1 + ε)-approximation.
LEMMA 4. Suppose A* is an optimal solution of the generalized knapsack problem, and suppose that element (ℓ, j) is midrange in the optimal solution. Then, the solution A(ℓ, j) from running the scaled dynamic-programming algorithm on
iKnapsack(ℓ, j) satisfies cost(A(ℓ, j)) ≤ (1 + ε) cost(A*).
Σ_{i≠ℓ} Σ_k scost(t_i^k) I_x[i, k] = Σ_{i≠ℓ} Σ_k scost(t_i^k) I_y[i, k]   (3)
Σ_{i≠ℓ} y_i^{j_i} ≥ Σ_{i≠ℓ} x_i^{j_i}
Therefore, it must be the case that y_ℓ^j ≤ x_ℓ^j. We have:
Let r_i = scost(x_i) and r_{−i} = scost(x_{−i}). Let y_i and y_{−i} be the solution vectors corresponding to scaled costs r_i and r_{−i} in F_ℓ(i − 1) and B_ℓ(n − i), respectively. From Lemma 3 we conclude that ... where cost(A) is the upper bound on C(Z) computed with the 2-approximation. Among all equal scaled cost solutions, our dynamic program chooses the one with maximum units. Therefore we also have size(y_i) ≥ size(x_i) and size(y_{−i}) ≥ size(x_{−i}), where we use the shorthand size(x) to denote the total number of units in all tuples in x. Now, define y_ℓ^j = max(u_ℓ^j, M − size(y_i) − size(y_{−i})). From the preceding inequalities, we have y_ℓ^j ...
... ℓ into two disjoint sets. For each set we compute the best solution, and then take the best between the two sets. We define a pair (g, h) to be feasible if size(g) + size(h) > M − u_ℓ^j. Now, to compute Eq. 6, we do a forward and a backward walk on F_ℓ(i − 1) and B_ℓ(n − i), respectively. We start from the smallest index of F_ℓ(i − 1) and move right, and from the highest index of B_ℓ(n − i) and move left. Let (g, h) be the current pair. If (g, h) is feasible, we decrement B's pointer (that is, move backward); otherwise we increment F's pointer. The feasible pairs found during the walk are used to compute Eq. 6. The complexity of this step is linear in the size of F_ℓ(i − 1), which is O(nα/ε). To compute the above equation, we transform the problem to another problem using a modified cost, which is defined as:
The modified cost simplifies the problem, but unfortunately the elements in F_ℓ(i − 1) and B_ℓ(n − i) are no longer sorted with respect to mcost. However, the elements are still sorted in quantity, and we use this property to compute Eq. 7. Call a pair (g, h) feasible if M − u_ℓ^{j+1} ... 0. As such it is an example of
a computationally tractable ε-dominance result, as well as an example of a non-trivial but approximable allocation problem. It is particularly interesting that we are able to compute the payments to n agents in a VCG-based mechanism in worst-case time O(T log n), where T is the time complexity to compute the solution to a single allocation problem.

An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior

Fernando C. Colón Osorio
Wireless System Security Research Laboratory (W.S.S.R.L.)
420 Lakeside Avenue, Marlboro, Massachusetts 01752
fcco@cs.wpi.edu

Zachi Klopman
Wireless System Security Research Laboratory (W.S.S.R.L.)
420 Lakeside Avenue, Marlboro, Massachusetts 01752
zachi@cs.wpi.edu

ABSTRACT
The Slammer, which is currently the fastest computer worm in recorded history, was observed to infect 90 percent of all vulnerable Internet hosts within 10 minutes. Although the main action that the Slammer worm takes is a relatively unsophisticated replication of itself, it still spreads so quickly that human response was ineffective. Most proposed countermeasure strategies are based primarily on rate detection and limiting algorithms. However, such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer. In our work, we put forth the hypothesis that next-generation worms will be radically different, and that such techniques will potentially prove ineffective. Specifically, we propose to study a new generation of worms called Swarm Worms, whose behavior is predicated on the concept of emergent intelligence. Emergent Intelligence is the behavior of
systems, very much like biological systems such as ants or bees, where simple local interactions of autonomous members, with simple primitive actions, give rise to complex and intelligent global behavior. In this manuscript we will introduce the basic principles behind the idea of Swarm Worms, as well as the basic structure required in order to be considered a swarm worm. In addition, we will present preliminary results on the propagation speeds of one such swarm worm, called the ZachiK worm. We will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities.

Categories and Subject Descriptors
C.2.4 [Distributed Systems]: Intrusion Detection; D.4.6 [Security and Protection]: Invasive software

General Terms
Experimentation, Security

1. INTRODUCTION AND PREVIOUS WORK
In the early morning hours (05:30 GMT) of January 25, 2003, the fastest computer worm in recorded history began spreading throughout the Internet. Within 10 minutes after the first infected host (patient zero), 90 percent of all vulnerable hosts had been compromised, creating significant disruption to the global Internet infrastructure. Vern Paxson of the International Computer Science Institute and Lawrence Berkeley National Laboratory, in his analysis of Slammer, commented: "The Slammer worm spread so quickly that human response was ineffective"; see [4]. The interesting part, from our perspective, about the spread of Slammer is that it was a relatively unsophisticated worm with benign behavior, namely self-reproduction. Since Slammer, researchers have explored the behaviors of fast-spreading worms and have designed countermeasure strategies based primarily on rate detection and limiting algorithms. For example, Zou et al. [2] proposed a scheme where a Kalman filter is used to detect the early propagation of a worm. Other researchers have proposed the use of detectors where rates of Destination Unreachable messages are
monitored by firewalls, where a significant increase beyond normal alerts the organization to the potential presence of a worm. However, such strategies suffer from the "fighting the last war" syndrome. That is, systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer. In the work described here, we put forth the hypothesis that next-generation worms will be different, and therefore such techniques may have some significant limitations. Specifically, we propose to study a new generation of worms called Swarm Worms, whose behavior is predicated on the concept of emergent intelligence. The concept of emergent intelligence was first studied in association with biological systems. In such studies, early researchers discovered a variety of interesting insect or animal behaviors in the wild. A flock of birds sweeps across the sky. A group of ants forages for food. A school of fish swims, turns, and flees together away from a predator, and so forth. In general, this kind of aggregate motion has been called swarm behavior. Biologists and computer scientists in the field of artificial intelligence have studied such biological swarms and attempted to create models that explain how the elements of a swarm interact, achieve goals, and evolve. Moreover, in recent years the study of swarm intelligence has become increasingly important in the fields of robotics, the design of Mobile Ad-hoc Networks (MANETs), the design of Intrusion Detection Systems, the study of traffic patterns in transportation systems, in military applications, and other areas; see [3]. The basic concepts that have been developed over the last decade to explain swarms and swarm behavior include four basic components. These are:
1. Simplicity of logic & actions: A swarm is composed of N agents whose intelligence is limited. Agents in the swarm use simple local rules to govern their actions. Some models call these primitive actions
or behaviors;
2. Local Communication Mechanisms: Agents interact with other members of the swarm via simple local communication mechanisms. For example, a bird in a flock senses the position of adjacent birds and applies a simple rule of avoidance and follow.
3. Distributed control: Autonomous agents interact with their environment, which probably consists of other agents, but act relatively independently from all other agents. There is no central command or leader, and certainly there is no global plan.
4. Emergent Intelligence: Aggregate behavior of autonomous agents results in complex intelligent behaviors, including self-organization.
In order to fully understand the behavior of such swarms, it is necessary to construct a model that explains the behavior of what we will call generic worms. This model, which extends the work by Weaver [5], is presented here in section 2. In addition, we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms. Swarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence. Such Swarm Worms are potentially more harmful than their generic counterparts. Specifically, the first instance, to our knowledge, of such a learning worm was created, called ZachiK. ZachiK is a simple password-cracking swarm worm that incorporates different learning and information-sharing strategies. Such a swarm worm was deployed in both a local area network of thirty (30) hosts, as well as simulated in a 10,000-node topology. Preliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterparts. The rest of this manuscript is structured as follows. In section 2, an abstract model of both generic worms and swarm worms is presented. This model is used in section 2.6 to describe the
first instance of a swarm worm, ZachiK. In section 4, preliminary results from both empirical measurements and simulation are presented. Finally, in section 5, our conclusions and insights into future work are presented.

2. WORM MODELING
In order to study the behavior of swarm worms in general, it is necessary to create a model that realistically reflects the structure of worms and is not necessarily tied to a specific instance. In this section, we describe such a model, in which a general worm is described as having four (4) basic components or subfunctions. By definition, a worm is a self-contained, self-propagating program. Thus, in simple terms, it has two main functions: that which propagates and that which does other things. We propose that there is a third broad functionality of a worm, that of self-preservation. We also propose that the other functionality of a worm may be more appropriately categorized as Goal-Based Actions (GBA), as whatever functionality is included in a worm will naturally be dependent on whatever goals (and subgoals) the author has. The work presented by Weaver et al.
in [5] provides us with a mainly action- and technique-based taxonomy of computer worms, which we utilize and further extend here.

2.1 Propagation
The propagation function itself may be broken down into three actions: acquire target, send scan, and infect target. Acquiring the target simply means picking a host to attack next. Sending a scan involves checking to see if that host is receptive to an infection attempt, since the IP space is sparsely populated. This may involve a simple ping to check if the host is alive, or a full-out vulnerability assessment. Infecting the target is the actual method used to send the worm code to the new host. In algorithm form:

propagate() {
    host := acquire_target()
    success := send_scan(host)
    if( success ) then
        infect(host)
    endif
}

In the case of a simple worm which does not first check to see if the host is available or susceptible (such as Slammer), the scan method is dropped:

propagate() {
    host := acquire_target()
    infect(host)
}

Each of these actions may have an associated cost to its inclusion and execution, such as increased worm size and CPU or network load. Depending on the author's needs or requirements, these become limiting factors in what may be included in the worm's actions. This is discussed further after expanding upon these actions below.

2.2 Target Acquisition
The Target Acquisition phase of our worm algorithm is built directly off of the Target Discovery section in [5]. Weaver et al.
taxonomize this task into 5 separate categories. Here, we further extend their work through parameterization.
Scanning: Scanning may be considered an equation-based method for choosing a host. Any type of equation may be used to arrive at an IP address, but there are three main types seen thus far: sequential, random, and local preference. Sequential scanning is exactly as it sounds: start at an IP address and increment through all the IP space. This could carry with it the options of which IP to start with (user-chosen value, random, or based on the IP of the infected host) and how many times to increment (continuous, chosen value, or subnet-based). Random scanning is completely at random (depending on the chosen PRNG method and its seed value). Local preference scanning is a variant of either sequential or random scanning, whereby it has a greater probability of choosing a local IP address over a remote one (for example, the traditional 80/20 split).
Pre-generated Target Lists: Pre-generated target lists, or so-called hit-lists, could include the options for percentage of total population and percentage wrong, or just the number of IPs to include. Implicit to this type is the fact that the list is divided among a parent and its children, avoiding the problem of every instance hitting the exact same machines.
Externally Generated Target Lists: Externally generated target lists depend on one or more external sources that can be queried for host data. This will involve either servers that are normally publicly available, such as gaming meta-servers, or ones explicitly set up by the worm or worm author. The normally available meta-servers could have parameters for rates of change, such as many popping up at night or leaving in the morning. Each server could also have a maximum rate of queries/second that it would be able to handle. The worm would also need a way of finding these servers, either hard-coded or through scanning.
Internal Target Lists: Internal target lists are
highly dependent on the infected host. This method could parameterize the choice of how much information is on the host, such as all machines in the subnet, all Windows boxes in the subnet, particular servers, the number of internal/external hosts, or some combination.
Passive: Passive methods are determined by normal interactions between hosts. Parameters may include a rate of interaction with particular machines, an internal/external rate of interaction, or a subnet-based rate of interaction.
Any of these methods may also be combined to produce different types of target acquisition strategies. For example, a worm may begin with an initial hit-list of 100 different hosts or subnets. Once it has exhausted its search using the hit-list, it may then proceed to perform random scanning with a 50% local bias. It is important to note, however, that the resource consumption of each method is not the same. Different methods may require the worm to be larger, such as the extra bytes required by a hit-list, or to take more processing time, such as by searching the host for addresses of other vulnerable hosts. Further research and analysis should be performed in this area to determine the associated costs of using each method. The costs could then be used in determining the design tradeoffs that worm authors engage in. For example, hit-lists provide a high rate of infection, but at a high cost in worm payload size.

2.2.1 Sending a Scan
The send scan function tests to see if the host is available for infection. This can be as simple as checking if the host is up on the network, or as complex as checking if the host is vulnerable to the exploit which will be used. Sending a scan before an attempted infection can increase the scanning rate if the cost of failing an infection is greater than the cost of failing a scan (or of sending a scan plus an infection), and failures are more frequent than successes. One important parameter to this would be the choice of transport protocol (TCP/UDP), or just
simply the time for one successful scan and the time for one failed scan. Also relevant is whether it tests only for the host to be up, or performs a full test for the vulnerability (or for multiple vulnerabilities).

2.2.2 Infection Vector (IV)
The particular infection vector used to access the remote host is mainly dependent on the particular vulnerability chosen to exploit. In a non-specific sense, it is dependent on the transport protocol chosen and the message size to be sent. Section 3 of [5] also proposes three particular classes of IV: self-carried, second channel, and embedded.

2.3 Self-Preservation
The self-preservation actions of a worm may take many forms. In the wild, worms have been observed to disable anti-virus software or to avoid sending themselves to certain addresses known to belong to anti-virus vendors. They have also been seen to attempt to disable other worms which may be contending for the same system. We also believe that time-based throttled scanning may help the worm to slip under the radar. We also propose a decoy method, whereby a worm will release a few children that cause a lot of noise so that the parent is not noticed. It has also been proposed [5] that a worm cause damage to its host if, and only if, it is disturbed in some way. This module could contain parameters for: probability of success in disabling anti-virus or other software updates, probability of being noticed and thus removed, or hardening of the host against other worms.

2.4 Goal-Based Actions
A worm's GBA functionality depends on the author's goal list. The Payloads section of [5] provides some useful suggestions for such a module. The opening of a back-door can make the host susceptible to more attacks. This would involve a probability of the back-door being used and any associated traffic utilization. It could also provide a list of other worms this host is now susceptible to, or a list of vulnerabilities this host now has. Spam relays and HTTP proxies of course have an
associated bandwidth consumption or traffic pattern. Internet DoS attacks would have a set time of activation, a target, and a traffic pattern. Data damage would have an associated probability that the host dies because of the damage. This general model of a worm is summarized in Figure 1. Please note that in this model there is no learning, little or no sharing of information between worm instances, and certainly no coordination of actions. In the next section we expand the model to include such mechanisms, and hence arrive at the general model of a swarm worm.

2.5 Swarms - General Model
As described in section 1, the basic characteristics that distinguish swarm behavior from what appears to be simply collective coordinated behavior are four basic attributes. These are:
1. Simplicity of logic & actions;
2. Local Communication Mechanisms;
3. Distributed control; and
4. Emergent Intelligence, including self-organization.

  Structure | Function/Example
  Infection, Infection Vector | Executable is run
  Protection & Stealthiness | Disable McAfee (Staying Alive)
  Propagation | Send email to everyone in address book
  Goal Based Action (GBA); everything else, often called payload | DDoS www.sco.com
Figure 1: General Worm Model

In this work we aggregate all of these attributes under the general title of Learning, Communication, and Distributed Control. The presence of these attributes distinguishes swarm worms from otherwise regular worms, or from other types of malware such as Zombies. In Figure 2, the generic model of a worm is expanded to include this set of actions. Within this context, then, a worm like Slammer cannot be categorized as a swarm worm, due to the fact that new instances of the worm do not coordinate their actions or share information. On the other hand, Zombies and many other forms of DDoS, which at first glance may be considered swarm worms, are not. This is simply due to the fact that in the case of Zombies, control is not distributed but
rather centralized, and no emergent behaviors arise. The latter, the potential emergence of intelligence or new behaviors, is what makes swarm worms so potentially dangerous. Finally, when one considers the majority of recent disruptions to the Internet infrastructure in light of our description of swarm attacks, said disruptions can easily be categorized as precursors to true swarm behavior. Specifically:
• DDoS - Large numbers of compromised hosts send useless packets requiring processing (Stacheldraht, http://www.cert.org/incident_notes/IN-99-04.html). DDoS attacks are the early precursors to swarm attacks due to the large number of agents involved.
• Code Red CrV1, Code Red II, Nimda - Exhibit early notions of swarm attacks, including a backdoor communication channel.
• Staniford & Paxson, in "How to Own the Internet in Your Spare Time", explore modifications to CrV1 and Code Red I and II with a swarm-like type of behavior. For example, they speculate on new worms which employ direct worm-to-worm communication and programmable updates, such as the Warhol worm and permutation-scanning (self-coordinating) worms.

2.6 Swarm Worm: the details
In considering the creation of what we believe to be the first Swarm Worm in existence, we wanted to adhere as closely as possible to the general model presented in section 2.5, while at the same time facilitating large-scale analysis, both empirical and through simulations, of the behavior of the swarm. For this reason, we selected as the first instance

  Structure | Function/Example
  Infection, Infection Vector | Executable is run
  Protection & Stealthiness | Disable McAfee (Staying Alive)
  Propagation | Send email to everyone in address book
  Learning, Communication, and Distributed Control | Pheromones/Flags (test if worm is already present), time bombs, learning algorithms, IRC channel
  Goal Based Action (GBA); everything else, often called payload | DDoS www.sco.com
Figure 2:
General Model of a Swarm Worm

of the swarm a simple password-cracking worm. The objective of this worm is simply to infect a host by sequentially attempting to log into the host using well-known passwords (a dictionary attack), passwords that have been discovered previously by any member of the swarm, and random passwords. Once a host is infected, the worm will create communication channels with both its known neighbors at that time, as well as with any offspring that it successfully generates. In this context, a successful generation of an offspring simply means infecting a new host and replicating an exact copy of itself on that host. We call this swarm worm the ZachiK worm in honor of one of its creators. As can be seen from this description, the ZachiK worm exhibits all of the elements described before. In the following sections, we describe in detail each of the elements of the ZachiK worm.

2.7 Infection Vector
The infection vector used for the ZachiK worm is the secure shell protocol, SSH. A modified client which is capable of receiving passwords from the command line was written and integrated with a script that supplies it with various passwords: known and random. When a password is found for an appropriate target, the infection process begins. After the root password of a host is discovered, the worm infects the target host and replicates itself. The worm creates a new directory on the target host and copies over the modified ssh client, the script, the communications server, and updated versions of the data files (a list of known passwords and a list of current neighbors). It then runs the modified script on the newly infected host, which spawns the communications server, notifies the neighbors, and starts looking for new targets. It could be argued, correctly, that the ZachiK worm can be easily defeated by current countermeasure techniques present on most systems today, such as disallowing direct root logins from the network. Within this
context, ZachiK can quickly be discarded as a very simple and harmless worm that does not require further study. However, the reader should consider the following:
1. ZachiK can easily be modified to include a variety of infection vectors. For example, it could be programmed to guess common user names and their passwords, gain access to a system, and then guess the root password or use other well-known vulnerabilities to gain root privileges;
2. ZachiK is a proof-of-concept worm. The importance of ZachiK is that it incorporates all of the behaviors of a swarm worm, including, but not restricted to, distributed control, communication amongst agents, and learning;
3. ZachiK is composed of a large collection of agents operating independently, which lends itself naturally to parallel algorithms such as a parallel search of the IPv4 address space. Within this context, Slammer does incorporate a parallel search capability over potentially susceptible addresses. However, unlike in ZachiK, the knowledge discovered by the search is never shared amongst the agents.
For these reasons, and many others, one should not discard the potential of this new class of worms, but rather embrace its study.

2.8 Self-Preservation
In the case of the ZachiK worm, the main self-preservation technique used is simply keeping the payload small. In this context, this simply means restricting the number of passwords that an offspring inherits, masquerading worm messages as common HTTP requests, and restricting the number of neighbors to a maximum of five (5).

2.9 Propagation
Choosing the next target(s) in an efficient manner requires thought. In the past, known and proposed worms (see [5]) have applied propagation techniques that varied. These include: strictly random selection of a potentially vulnerable host; target lists of vulnerable hosts; locally biased random selection (select a target host at random from a local subnet); and a combination of some or all of the above. In our test and
simulation environments, we apply a combination of locally biased and totally random selection of potentially vulnerable hosts. However, because the ZachiK worm is a swarm worm, address discovery information (that is, knowledge of which addresses exist and which do not) is shared amongst members of the swarm. The infection and propagation threads repeatedly perform the following set of activities:
• Choose an address
• Check the validity of the address
• Choose a set of passwords
• Try infecting the selected host with this set of passwords
As described earlier, choosing an address makes use of a combination of random selection, local bias, and target lists. Specifically, to choose an address, the instance may either:
• Generate a new random address
• Generate an address on the local network
• Pick an address from a handoff list
The choice is made randomly among these options, and can be varied to test the dependency of propagation on particular choices. Passwords are either chosen from the list of known passwords or newly generated. When an infection of a valid address fails, the address is added to a list of handoffs, which is sent to the neighbors to work on.

2.10 Learning, Communication and Distributed Control
2.10.1 Communication
The concept of a swarm is based on the transfer of information amongst neighbors, which relay their new incoming messages to their neighbors, and so on, until every worm instance in the swarm is aware of these messages. There are two classes of messages: data or information messages, and commands. The command messages are meant for an external user (a.k.a. hackers and/or crackers) to control the actions of the instances, and are currently not implemented. The information messages are currently of three kinds: new member, passwords, and exploitable addresses (handoffs). The new member messages are messages that a new instance sends to the neighbors on its (short) list of initial neighbors. The
neighbors then register these instances in their neighbor list.\nThese are the messages that form the multi-connectivity of the swarm; without them, the topology would be a treelike structure, where eliminating a single node would cause the instances beneath it to be inaccessible.\nThe passwords messages inform instances of newly discovered passwords; by informing all instances, the swarm as a whole collects this information, which allows it to infect new instances more effectively.\nThe handoffs messages inform instances of valid addresses that could not be compromised (that is, where breaking the password for the root account failed).\nSince the address space is rather sparse, it takes a relatively long time (i.e., many trials) to discover a valid address.\nTherefore, by handing off discovered valid addresses, the swarm is (a) conserving energy by not re-discovering the same addresses, and (b) attacking more effectively.\nIn a way, this is a simple instance of the coordinated activity of a swarm.\n2.10.2 Coordination When a worm instance is born, it relays its existence to all neighbors on its list.\nThe main thread then spawns a few infection threads, and continues to handle incoming messages (registering neighbors, adding new passwords, receiving addresses, and relaying these messages).\n2.10.3 Distributed Control Control in the ZachiK worm is distributed in the sense that each instance of the worm performs a set of actions independently of every other instance while at the same time benefiting from the learning achieved by its immediate neighbors.\n2.11 Goal Based Actions The first instantiation of the ZachiK worm has two basic goals.\nThese are: (1) propagate, and (2) discover and share with members of the swarm new root passwords.\n3.\nEXPERIMENTAL DESIGN In order to verify our hypothesis that Swarm Worms are more capable, and therefore more dangerous, than other well-known worms, a network testbed was created, and a simulator, capable of simulating large-scale Internet-like topologies
(IPV4 space), was developed.\nThe network testbed consisted of a local area network of 30 Linux-based computers.\nThe simulator was written in C++.\nThe simple swarm worm described in section 2.6 was used to infect patient-zero, and then the swarm worm was allowed to propagate via its own mechanisms of propagation, distributed control, and swarm behaviors.\nIn the case of the simple local area network of 30 computers, six-(6) different root passwords out of a password space of 4 digits (10,000 options) were selected.\nAt the start of the experiment, a single password is known, that of patient-zero.\nAll shared passwords are distributed randomly across all nodes.\nSimilarly, in the case of the simulation, a network topology of 10,000 hosts, whose addresses were selected randomly across the IPV4 space, was constructed.\nWithin that space, a total of 200 shared passwords were selected and distributed either randomly or targeted to specific subnets of the network topology.\nFor example, in one of our simulation runs, the network topology consisted of 200 subnets, each containing 50 hosts.\nIn such a topology, shared passwords were distributed so that a varying percentage of passwords was shared across subnets.\nThe percentages of shared passwords used reflect early empirical studies, in which up to 39.7% of common passwords were found to be shared.\n4.\nRESULTS In Figure 3, the results comparing Swarm Attack behavior versus that of a typical Malform Worm for a 30-node LAN are presented.\nIn this set of empirical runs, six-(6) shared passwords were distributed at random across all nodes from a space of 10,000 possible passwords.\nThe data presented reflect the behaviors of a total of three-(3) distinct classes of worms or swarm worms.\nThe classes of worms presented are as follows: \u2022 I-NS-NL:= Generic worm, independent (I), no learning\/memoryless (NL), and no sharing of information with neighbors or offspring (NS); \u2022 S-L-SP:= Swarm
Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords (SP) across nearest neighbors and offspring; and \u2022 S-L-SP&A:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords and existent addresses (SP&A) across nearest neighbors and offspring.\nAs shown in Figure 3, the results validate our original hypothesis that swarm worms are significantly more efficient and dangerous than generic worms.\nIn this set of experiments, the sharing of passwords provides an order of magnitude improvement over a memoryless random worm.\nSimilarly, a swarm worm that shares passwords and addresses is approximately two orders of magnitude more efficient than its generic counterpart.\nIn Figure 3, a series of discontinuities can be observed.\nThese discontinuities are an artifact of the small sample space used for this experiment.\nBasically, as soon as a password is broken, all nodes sharing that specific password are infected within a few seconds.\nNote that it is trivial for a swarm worm to scan and discover a small shared password space.\nIn Figure 4, the simulation results comparing Swarm Attack Behavior versus that of a Generic Malform Worm are presented.\nIn this set of simulation runs, a network topology of 10,000 hosts, whose addresses were selected randomly across the IPV4 space, was constructed.\nWithin that space, a total of 200 shared passwords were selected and distributed either randomly or targeted to specific subnets of the network topology.\nThe data presented reflect the behaviors of three-(3) distinct classes of worms or swarm worms and two-(2) different target host selection scanning strategies (random scanning and local bias).\nThe amount of local bias was varied across multiple simulation runs.\nThe results presented are aggregate behaviors.\nIn general, the following classes of Generic Worms and Swarm Worms were simulated.\nAddress Scanning: \u2022 Random:= addresses are selected at random from
a subset of the IPV4 space, namely, a 2^24 address space; and \u2022 Local Bias:= addresses are selected at random from either a local subnet (256 addresses) or from a subset of the IPV4 space, namely, a 2^24 address space.\nThe percentage of local bias is varied across multiple runs.\nLearning, Communication & Distributed Control \u2022 I-NL-NS:= Generic worm, independent (I), no learning\/memoryless (NL), and no sharing of information with neighbors or offspring (NS); \u2022 I-L-OOS:= Generic worm, independent (I), learning (L), and one-time sharing of information with offspring only (OOS); \u2022 S-L-SP:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords (SP) across nearest neighbors and offspring; \u2022 S-L-S&AOP:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of addresses with neighbors and offspring, shares passwords one time only (at creation) with offspring (S&AOP); \u2022 S-L-SP&A:= Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords and existent addresses (SP&A) across nearest neighbors and offspring.\nAs shown in Figure 4, the results are consistent with our set of empirical results.\nIn addition, the following observations can be made.\n1.\nLocal preference is remarkably effective.\n2.\nShort address handoffs are more effective than long ones.\nWe varied the size of the list allowed in the sharing of addresses; the overhead associated with a long address list is detrimental to the performance of the swarm worm, as well as to its stealthiness; 3.\nFor the local bias case, sharing valid addresses of susceptible hosts via the S-L-S&AOP worm (recall, the S-L-S&AOP swarm shares passwords one time only with offspring, at creation time) is more effective than sharing passwords in the case of the S-L-SP swarm.\nIn this case, we can think of the swarm as launching a distributed dictionary attack: different segments of the swarm use different
passwords to try to break into susceptible uninfected hosts.\nIn the local bias mode, early in the life of the swarm, address-sharing is more effective than password-sharing, until most subnets are discovered.\nThen, the targeting of local addresses assists in discovering the susceptible hosts, while the swarm members must waste time rediscovering passwords; and 4.\nInfecting the last 0.5% of nodes takes a very long time in non-local bias mode.\nBasically, the shared password list across subnets has been exhausted, and the swarm reverts to a simple random password-discovery algorithm.\nFigure 3: Swarm Attack Behavior vs. Malform Worm: Empirical Results, 30-node LAN Figure 4: Swarm Attack Behavior vs. Malform Worm: Simulation Results 5.\nSUMMARY AND FUTURE WORK In this manuscript, we have presented an abstract model, similar in some aspects to that of Weaver [5], that helps explain the generic nature of worms.\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their generic counterparts.\nIn addition, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password-cracking swarm worm that incorporates different learning and information sharing strategies.\nSuch a swarm worm was deployed in both a local area network of thirty-(30) hosts, as well as simulated in a 10,000-node topology.\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterparts while retaining stealth capabilities.\nThis work opens up a new area of interesting problems.\nSome of the most interesting and pressing problems to be considered are as follows: \u2022 Is it possible to apply some of the learning
concepts developed over the last ten years in the areas of swarm intelligence, agent systems, and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and Survivable systems effective against this new class of worms?\n\u2022 What techniques, if any, can be developed to create defenses against swarm worms?\n6.\nACKNOWLEDGMENTS This work was conducted as part of a larger effort in the development of next generation Intrusion Detection & CounterMeasure Systems at WSSRL.\nThe work was conducted under the auspices of Grant ACG-2004-06 by the Acumen Consulting Group, Inc., Marlboro, Massachusetts.\n7.\nREFERENCES [1] Zou, C.C., Gao, L., Gong, W., and Towsley, D. Monitoring and early warning for Internet worms.\nIn 10th ACM Conference on Computer and Communications Security, Washington, DC (October 2003).\n[2] Liu, S., and Passino, K. Swarm intelligence: Literature overview.\nTech. rep., Dept.
of Electrical Engineering, The Ohio State University, 2015 Neil Ave., Columbus, OH 43210 (2000).\n[3] Moore, D., Paxson, V., Savage, S., Shannon, C., Staniford, S., and Weaver, N.\nThe spread of the saphire\/slammer worm.\nTech.\nrep., A joint effort of CAIDA, ICSI, Silicon Defense, UC Berkeley EECS and UC San Diego CSE, 2003.\n[4] Weaver, N., Paxson, V., Staniford, S., and Cunningham, R.\nA taxonomy of computer worms.\nIn Proceedings of the ACM Workshop on Rapid Malware (WORM) (2003).\n329","lvl-3":"An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior\nABSTRACT\nThe Slammer, which is currently the fastest computer worm in recorded history, was observed to infect 90 percent of all vulnerable Internets hosts within 10 minutes.\nAlthough the main action that the Slammer worm takes is a relatively unsophisticated replication of itself, it still spreads so quickly that human response was ineffective.\nMost proposed countermeasures strategies are based primarily on rate detection and limiting algorithms.\nHowever, such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn our work, we put forth the hypothesis that next generation worms will be radically different, and potentially such techniques will prove ineffective.\nSpecifically, we propose to study a new generation of worms called\" Swarm Worms\", whose behavior is predicated on the concept of\" emergent intelligence\".\nEmergent Intelligence is the behavior of systems, very much like biological systems such as ants or bees, where simple local interactions of autonomous members, with simple primitive actions, gives rise to complex and intelligent global behavior.\nIn this manuscript we will introduce the basic principles behind the idea of\" Swarm Worms\", as well as the basic structure required in order to be considered a\" swarm worm\".\nIn addition, we will present preliminary results on the propagation speeds of one 
such swarm worm, called the ZachiK worm.\nWe will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities.\n1.\nINTRODUCTION AND PREVIOUS WORK\nIn the early morning hours (05:30 GMT) of January 25, 2003 the fastest computer worm in recorded history began spreading throughout the Internet.\nWithin 10 minutes after the first infected host (patient zero), 90 percent of all vulnerable hosts had been compromised creating significant disruption to the global Internet infrastructure.\nVern Paxson of the International Computer Science Institute and Lawrence Berkeley National Laboratory in its analysis of Slammer commented:\" The Slammer worm spread so quickly that human response was ineffective\", see [4] The interesting part, from our perspective, about the spread of Slammer is that it was a relatively unsophisticated worm with benign behavior, namely self-reproduction.\nSince Slammer, researchers have explored the behaviors of fast spreading worms, and have designed countermeasures strategies based primarily on rate detection and limiting algorithms.\nFor example, Zou, et al., [2], proposed a scheme where a Kalman filter is used to detect the early propagation of a worm.\nOther researchers have proposed the use of detectors where rates of\" Destination Unreachable\" messages are monitored by firewalls, and a significant increase beyond\" normal\", alerts the organization to the potential presence of a worm.\nHowever, such strategies suffer from the\" fighting the last War\" syndrome.\nThat is, systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn the work described here, we put forth the hypothesis that next generation worms will be different, and therefore such techniques may have some significant limitations.\nSpecifically, we propose to study a new generation of worms called\" Swarm Worms\", whose behavior is predicated on the 
concept of\" emergent intelligence\".\nThe concept of emergent intelligence was first studied in association with biological systems.\nIn such studies, early researchers discovered a variety of interesting insect or animal behaviors in the wild.\nA flock of birds sweeps across the sky.\nA group of ants forages for food.\nA school of fish swims, turns, flees together away from a predator, ands so forth.\nIn general, this kind of aggregate motion has been called\" swarm behavior.\"\nBiologists, and computer scientists in the field of artificial intelligence have studied such biological swarms, and\nattempted to create models that explain how the elements of a swarm interact, achieve goals, and evolve.\nMoreover, in recent years the study of\" swarm intelligence\" has become increasingly important in the fields of robotics, the design of Mobile Ad-Hoc Networks (MANETS), the design of Intrusion Detection Systems, the study of traffic patterns in transportation systems, in military applications, and other areas, see [3].\nThe basic concepts that have been developed over the last decade to explain\" swarms, and\" swarm behavior\" include four basic components.\nThese are:\n1.\nSimplicity of logic & actions: A swarm is composed of N agents whose intelligence is limited.\nAgents in the swarm use simple local rules to govern their actions.\nSome models called this primitive actions or behaviors; 2.\nLocal Communication Mechanisms: Agents interact with other members in the swarm via simple\" local\" communication mechanisms.\nFor example, a bird in a flock senses the position of adjacent bird and applies a simple rule of avoidance and follow.\n3.\nDistributed control: Autonomous agents interact with their environment, which probably consists of other agents, but act relatively independently from all other agents.\nThere is no central command or leader, and certainly there is no global plan.\n4.\"\nEmergent Intelligence\": Aggregate behavior of autonomous agents results in 
complex\" intelligent\" behaviors; including self-organization\".\nIn order to understand fully the behavior of such swarms it is necessary to construct a model that explains the behavior of what we will call generic worms.\nThis model, which extends the work by Weaver [5] is presented here in section 2.\nIn addition, we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their similar generic counterparts.\nSpecifically, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies.\nSuch a swarm worm was deployed in both a local area network of thirty - (30) hosts, as well as simulated in a 10,000 node topology.\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterpart.\nThe rest of this manuscript is structure as follows.\nIn section 2 an abstract model of both generic worms as well as swarm worms is presented.\nThis model is used in section 2.6 to described the first instance of a swarm worm, ZachiK.\nIn section 4, preliminary results via both empirical measurements as well as simulation is presented.\nFinally, in section 5 our conclusions and insights into future work are presented.\n2.\nWORM MODELING\n2.1 Propagation\n2.2 Target Acquisition:\n2.2.1 Sending a Scan\n2.2.2 Infection Vector (IV)\n2.3 Self Preservation\n2.4 Goal-Based Actions\n2.5 Swarms - General Model\n2.6 Swarm Worm: the details\n2.7 Infection Vector\n2.8 Self-Preservation\n2.9 Propagation\n2.10.1 Communication\n2.10.2 Coordination\n2.10.3 Distributed Control\n2.11 Goal 
Based Actions\n3.\nEXPERIMENTAL DESIGN\n4.\nRESULTS\nAddress Scanning:\n5.\nSUMMARY AND FUTURE WORK\nIn this manuscript, we have presented an abstract model, similar in some aspects to that of Weaver [5], that helps explain the generic nature of worms.\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their generic counterparts.\nIn addition, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies.\nSuch a swarm worm was deployed in both a local area network of thirty - (30) hosts, as well as simulated in a 10,000 node topology.\nPreliminary results showed that such worms is capable of compromising hosts a rates up to 2 orders of magnitude faster than its generic counterpart while retaining stealth capabilities.\nThis work opens up a new area of interesting problems.\nSome of the most interesting and pressing problems to be consider are as follows: \u2022 Is it possible to apply some of learning concepts developed over the last ten years in the areas of swarm intelligence, agent systems, and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and Survivable systems effective against this new class of worms?\n; and \u2022 What techniques, if any, can be developed to create defenses against swarm worms?","lvl-4":"An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior\nABSTRACT\nThe Slammer, which is currently the fastest computer worm in recorded history, 
was observed to infect 90 percent of all vulnerable Internets hosts within 10 minutes.\nAlthough the main action that the Slammer worm takes is a relatively unsophisticated replication of itself, it still spreads so quickly that human response was ineffective.\nMost proposed countermeasures strategies are based primarily on rate detection and limiting algorithms.\nHowever, such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn our work, we put forth the hypothesis that next generation worms will be radically different, and potentially such techniques will prove ineffective.\nSpecifically, we propose to study a new generation of worms called\" Swarm Worms\", whose behavior is predicated on the concept of\" emergent intelligence\".\nEmergent Intelligence is the behavior of systems, very much like biological systems such as ants or bees, where simple local interactions of autonomous members, with simple primitive actions, gives rise to complex and intelligent global behavior.\nIn this manuscript we will introduce the basic principles behind the idea of\" Swarm Worms\", as well as the basic structure required in order to be considered a\" swarm worm\".\nIn addition, we will present preliminary results on the propagation speeds of one such swarm worm, called the ZachiK worm.\nWe will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities.\n1.\nINTRODUCTION AND PREVIOUS WORK\nIn the early morning hours (05:30 GMT) of January 25, 2003 the fastest computer worm in recorded history began spreading throughout the Internet.\nSince Slammer, researchers have explored the behaviors of fast spreading worms, and have designed countermeasures strategies based primarily on rate detection and limiting algorithms.\nFor example, Zou, et al., [2], proposed a scheme where a Kalman filter is used to detect the early propagation of a 
worm.\nThat is, systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn the work described here, we put forth the hypothesis that next generation worms will be different, and therefore such techniques may have some significant limitations.\nSpecifically, we propose to study a new generation of worms called\" Swarm Worms\", whose behavior is predicated on the concept of\" emergent intelligence\".\nThe concept of emergent intelligence was first studied in association with biological systems.\nIn such studies, early researchers discovered a variety of interesting insect or animal behaviors in the wild.\nA flock of birds sweeps across the sky.\nIn general, this kind of aggregate motion has been called\" swarm behavior.\"\nBiologists, and computer scientists in the field of artificial intelligence have studied such biological swarms, and\nattempted to create models that explain how the elements of a swarm interact, achieve goals, and evolve.\nThe basic concepts that have been developed over the last decade to explain\" swarms, and\" swarm behavior\" include four basic components.\nThese are:\n1.\nSimplicity of logic & actions: A swarm is composed of N agents whose intelligence is limited.\nAgents in the swarm use simple local rules to govern their actions.\nSome models called this primitive actions or behaviors; 2.\nLocal Communication Mechanisms: Agents interact with other members in the swarm via simple\" local\" communication mechanisms.\nFor example, a bird in a flock senses the position of adjacent bird and applies a simple rule of avoidance and follow.\n3.\n4.\"\nEmergent Intelligence\": Aggregate behavior of autonomous agents results in complex\" intelligent\" behaviors; including self-organization\".\nIn order to understand fully the behavior of such swarms it is necessary to construct a model that explains the behavior of what we will call generic worms.\nThis model, which extends the work by 
Weaver [5] is presented here in section 2.\nIn addition, we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their similar generic counterparts.\nSpecifically, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies.\nSuch a swarm worm was deployed in both a local area network of thirty - (30) hosts, as well as simulated in a 10,000 node topology.\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterpart.\nThe rest of this manuscript is structure as follows.\nIn section 2 an abstract model of both generic worms as well as swarm worms is presented.\nThis model is used in section 2.6 to described the first instance of a swarm worm, ZachiK.\nIn section 4, preliminary results via both empirical measurements as well as simulation is presented.\nFinally, in section 5 our conclusions and insights into future work are presented.\n5.\nSUMMARY AND FUTURE WORK\nIn this manuscript, we have presented an abstract model, similar in some aspects to that of Weaver [5], that helps explain the generic nature of worms.\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their generic counterparts.\nIn addition, the first instance, to our knowledge, of such a learning worm was created, called 
ZachiK.\nZachiK is a simple password cracking swarm worm that incorporates different learning and information sharing strategies.\nSuch a swarm worm was deployed in both a local area network of thirty - (30) hosts, as well as simulated in a 10,000 node topology.\nPreliminary results showed that such worms is capable of compromising hosts a rates up to 2 orders of magnitude faster than its generic counterpart while retaining stealth capabilities.\nThis work opens up a new area of interesting problems.\nSome of the most interesting and pressing problems to be consider are as follows: \u2022 Is it possible to apply some of learning concepts developed over the last ten years in the areas of swarm intelligence, agent systems, and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and Survivable systems effective against this new class of worms?\n; and \u2022 What techniques, if any, can be developed to create defenses against swarm worms?","lvl-2":"An Initial Analysis and Presentation of Malware Exhibiting Swarm-Like Behavior\nABSTRACT\nThe Slammer, which is currently the fastest computer worm in recorded history, was observed to infect 90 percent of all vulnerable Internets hosts within 10 minutes.\nAlthough the main action that the Slammer worm takes is a relatively unsophisticated replication of itself, it still spreads so quickly that human response was ineffective.\nMost proposed countermeasures strategies are based primarily on rate detection and limiting algorithms.\nHowever, such strategies are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn our work, we put forth the hypothesis that next generation worms will be radically different, and potentially such techniques will prove ineffective.\nSpecifically, we propose to study a 
new generation of worms called\" Swarm Worms\", whose behavior is predicated on the concept of\" emergent intelligence\".\nEmergent Intelligence is the behavior of systems, very much like biological systems such as ants or bees, where simple local interactions of autonomous members, with simple primitive actions, gives rise to complex and intelligent global behavior.\nIn this manuscript we will introduce the basic principles behind the idea of\" Swarm Worms\", as well as the basic structure required in order to be considered a\" swarm worm\".\nIn addition, we will present preliminary results on the propagation speeds of one such swarm worm, called the ZachiK worm.\nWe will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities.\n1.\nINTRODUCTION AND PREVIOUS WORK\nIn the early morning hours (05:30 GMT) of January 25, 2003 the fastest computer worm in recorded history began spreading throughout the Internet.\nWithin 10 minutes after the first infected host (patient zero), 90 percent of all vulnerable hosts had been compromised creating significant disruption to the global Internet infrastructure.\nVern Paxson of the International Computer Science Institute and Lawrence Berkeley National Laboratory in its analysis of Slammer commented:\" The Slammer worm spread so quickly that human response was ineffective\", see [4] The interesting part, from our perspective, about the spread of Slammer is that it was a relatively unsophisticated worm with benign behavior, namely self-reproduction.\nSince Slammer, researchers have explored the behaviors of fast spreading worms, and have designed countermeasures strategies based primarily on rate detection and limiting algorithms.\nFor example, Zou, et al., [2], proposed a scheme where a Kalman filter is used to detect the early propagation of a worm.\nOther researchers have proposed the use of detectors where rates of\" Destination Unreachable\" messages are 
monitored by firewalls, and a significant increase beyond\" normal\", alerts the organization to the potential presence of a worm.\nHowever, such strategies suffer from the\" fighting the last War\" syndrome.\nThat is, systems are being designed and developed to effectively contain worms whose behaviors are similar to that of Slammer.\nIn the work described here, we put forth the hypothesis that next generation worms will be different, and therefore such techniques may have some significant limitations.\nSpecifically, we propose to study a new generation of worms called\" Swarm Worms\", whose behavior is predicated on the concept of\" emergent intelligence\".\nThe concept of emergent intelligence was first studied in association with biological systems.\nIn such studies, early researchers discovered a variety of interesting insect or animal behaviors in the wild.\nA flock of birds sweeps across the sky.\nA group of ants forages for food.\nA school of fish swims, turns, flees together away from a predator, ands so forth.\nIn general, this kind of aggregate motion has been called\" swarm behavior.\"\nBiologists, and computer scientists in the field of artificial intelligence have studied such biological swarms, and\nattempted to create models that explain how the elements of a swarm interact, achieve goals, and evolve.\nMoreover, in recent years the study of\" swarm intelligence\" has become increasingly important in the fields of robotics, the design of Mobile Ad-Hoc Networks (MANETS), the design of Intrusion Detection Systems, the study of traffic patterns in transportation systems, in military applications, and other areas, see [3].\nThe basic concepts that have been developed over the last decade to explain\" swarms, and\" swarm behavior\" include four basic components.\nThese are:\n1.\nSimplicity of logic & actions: A swarm is composed of N agents whose intelligence is limited.\nAgents in the swarm use simple local rules to govern their actions.\nSome models 
call these primitive actions or behaviors; 2.\nLocal Communication Mechanisms: Agents interact with other members of the swarm via simple \"local\" communication mechanisms.\nFor example, a bird in a flock senses the positions of adjacent birds and applies simple rules of avoidance and following.\n3.\nDistributed control: Autonomous agents interact with their environment, which probably consists of other agents, but act relatively independently of all other agents.\nThere is no central command or leader, and certainly there is no global plan.\n4.\n\"Emergent Intelligence\": Aggregate behavior of autonomous agents results in complex \"intelligent\" behaviors, including \"self-organization\".\nIn order to fully understand the behavior of such swarms it is necessary to construct a model that explains the behavior of what we will call generic worms.\nThis model, which extends the work by Weaver [5], is presented here in section 2.\nIn addition, we intend to extend said model in such a way that it clearly explains the behaviors of this new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their generic counterparts.\nSpecifically, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password-cracking swarm worm that incorporates different learning and information-sharing strategies.\nThis swarm worm was deployed in both a local area network of thirty (30) hosts and simulated in a 10,000-node topology.\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterparts.\nThe rest of this manuscript is structured as follows.\nIn section 2 an abstract model of both generic worms and swarm worms is presented.\nThis 
model is used in section 2.6 to describe the first instance of a swarm worm, ZachiK.\nIn section 4, preliminary results from both empirical measurements and simulation are presented.\nFinally, in section 5 our conclusions and insights into future work are presented.\n2.\nWORM MODELING\nIn order to study the behavior of swarm worms in general, it is necessary to create a model that realistically reflects the structure of worms and is not necessarily tied to a specific instance.\nIn this section, we describe such a model, in which a general worm is described as having four (4) basic components or subfunctions.\nBy definition, a worm is a self-contained, self-propagating program.\nThus, in simple terms, it has two main functions: that which propagates and that which does \"other\" things.\nWe propose that there is a third broad functionality of a worm, that of self-preservation.\nWe also propose that the \"other\" functionality of a worm may be more appropriately categorized as Goal-Based Actions (GBA), as whatever functionality is included in a worm will naturally be dependent on whatever goals (and subgoals) the author has.\nThe work presented by Weaver et al. 
in [5] provides us with a mainly action- and technique-based taxonomy of computer worms, which we utilize and further extend here.\n2.1 Propagation\nThe propagation function itself may be broken down into three actions: acquire target, send scan, and infect target.\nAcquiring the target simply means picking a host to attack next.\nSending a scan involves checking to see if that host is receptive to an infection attempt, since IP space is sparsely populated.\nThis may involve a simple ping to check if the host is alive or a full-out vulnerability assessment.\nInfecting the target is the actual method used to send the worm code to the new host.\nIn algorithm form:\nIn the case of a simple worm which does not first check to see if the host is available or susceptible (such as Slammer), the scan method is dropped:\nEach of these actions may have an associated cost to its inclusion and execution, such as increased worm size and CPU or network load.\nDepending on the author's needs or requirements, these become limiting factors in what may be included in the worm's actions.\nThis is discussed further after expanding upon these actions below.\n2.2 Target Acquisition:\nThe Target Acquisition phase of our worm algorithm is built directly off of the Target Discovery section in [5].\nWeaver et al. 
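The three propagation actions above (acquire target, send scan, infect target) can be sketched as a small loop. This is an illustrative Python sketch, not the manuscript's actual algorithm listing; all function names and the toy network are invented for demonstration.

```python
import random

def propagate(acquire_target, send_scan, infect_target, use_scan=True):
    """One propagation step of the generic worm model: acquire a target,
    optionally scan it, then attempt infection. With use_scan=False the
    scan is dropped, as in a Slammer-style worm. Returns True if an
    infection attempt was made."""
    target = acquire_target()            # acquire target: pick next host
    if use_scan and not send_scan(target):
        return False                     # send scan: host dead or immune
    infect_target(target)                # infect target: deliver worm code
    return True

# Toy run: a sparse "network" where only multiples of 4 are live hosts.
alive = {a for a in range(256) if a % 4 == 0}
infected = set()
for _ in range(50):
    propagate(lambda: random.randrange(256), lambda t: t in alive,
              infected.add)
```

Dropping the scan trades wasted infection attempts against dead addresses for a smaller, faster loop; the cost tradeoff is discussed in section 2.2.1.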
taxonomize this task into five separate categories.\nHere, we further extend their work through parameterization.\nScanning: Scanning may be considered an equation-based method for choosing a host.\nAny type of equation may be used to arrive at an IP address, but there are three main types seen thus far: sequential, random, and local preference.\nSequential scanning is exactly as it sounds: start at an IP address and increment through all the IP space.\nThis could carry with it the options of which IP to start with (user-chosen value, random, or based on the IP of the infected host) and how many times to increment (continuous, chosen value, or subnet-based).\nRandom scanning is completely at random (depending on the chosen PRNG method and its seed value).\nLocal preference scanning is a variant of either sequential or random scanning, whereby it has a greater probability of choosing a local IP address over a remote one (for example, the traditional 80\/20 split).\nPre-generated Target Lists: Pre-generated target lists, or so-called \"hit-lists\", could include options for the percentage of the total population and the percentage wrong, or just the number of IPs to include.\nImplicit to this type is the fact that the list is divided among a parent and its children, avoiding the problem of every instance hitting the exact same machines.\nExternally Generated Target Lists: Externally generated target lists depend on one or more external sources that can be queried for host data.\nThis will involve either servers that are normally publicly available, such as gaming meta-servers, or ones explicitly set up by the worm or worm author.\nThe normally available meta-servers could have parameters for rates of change, such as many hosts appearing at night or leaving in the morning.\nEach server could also have a maximum number of queries\/second that it would be able to handle.\nThe worm would also need a way of finding these servers, either hard-coded or through scanning.\nInternal Target Lists: Internal target lists are 
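The three scanning equations described earlier (sequential, random, local preference) are easy to state precisely. Below is an illustrative Python sketch, with IP addresses modeled as plain integers; the parameter values (address-space size, 80/20 split) are examples, not the paper's implementation.

```python
import random

def sequential_scan(start, address_space=2**24):
    """Sequential scanning: start at an address and increment through
    the space, wrapping around at the end."""
    addr = start
    while True:
        yield addr
        addr = (addr + 1) % address_space

def random_scan(address_space=2**24):
    """Random scanning: entirely determined by the PRNG and its seed."""
    return random.randrange(address_space)

def local_preference_scan(local_subnet, p_local=0.8, address_space=2**24):
    """Local-preference scanning: with probability p_local pick an
    address in the local /24 subnet, otherwise a random global address
    (the traditional 80/20 split)."""
    if random.random() < p_local:
        return local_subnet * 256 + random.randrange(256)
    return random.randrange(address_space)
```

Any of these can be composed with a hit-list phase, as the text notes: exhaust the list first, then fall back to (locally biased) random scanning.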
highly dependent on the infected host.\nThis method could parameterize the choice of how much information is on the host, such as \"all machines in subnet\", \"all Windows boxes in subnet\", particular servers, the number of internal\/external addresses, or some combination.\nPassive: Passive methods are determined by \"normal\" interactions between hosts.\nParameters may include a rate of interaction with particular machines, an internal\/external rate of interaction, or a subnet-based rate of interaction.\nAny of these methods may also be combined to produce different types of target acquisition strategies.\nFor example, a worm may begin with an initial hit-list of 100 different hosts or subnets.\nOnce it has exhausted its search using the hit-list, it may then proceed to perform random scanning with a 50% local bias.\nIt is important to note, however, that the resource consumption of each method is not the same.\nDifferent methods may require the worm to be larger, such as the extra bytes required by a hit-list, or to take more processing time, such as by searching the host for addresses of other vulnerable hosts.\nFurther research and analysis should be performed in this area to determine the associated costs of using each method.\nThe costs could then be used in determining the design tradeoffs that worm authors engage in.\nFor example, hit-lists provide a high rate of infection, but at a high cost in worm payload size.\n2.2.1 Sending a Scan\nThe send scan function tests to see if the host is available for infection.\nThis can be as simple as checking if the host is up on the network or as complex as checking if the host is vulnerable to the exploit which will be used.\nSending a scan before an attempted infection can increase the scanning rate if the cost of a failed infection is greater than the cost of a failed scan (or of a scan plus infection), and failures are more frequent than successes.\nOne important parameter to this would be the choice of transport protocol 
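The cost condition for scanning before infecting can be made concrete with a small expected-value calculation. The timing numbers below are invented purely for illustration; only the inequality structure comes from the text.

```python
def expected_time_per_target(p_alive, t_scan_fail, t_scan_ok,
                             t_infect, t_infect_fail):
    """Expected time spent per probed address, with and without a
    preliminary scan. Scanning pays off when failed infection attempts
    are expensive and most probed addresses are dead (sparse IP space)."""
    with_scan = (1 - p_alive) * t_scan_fail + p_alive * (t_scan_ok + t_infect)
    without_scan = (1 - p_alive) * t_infect_fail + p_alive * t_infect
    return with_scan, without_scan

# Sparse space: 1% of probed addresses are live; a failed infection
# attempt (e.g. a TCP timeout) costs far more than a failed scan.
ws, wos = expected_time_per_target(0.01, t_scan_fail=0.1, t_scan_ok=0.1,
                                   t_infect=2.0, t_infect_fail=5.0)
```

With these made-up numbers the scan-first strategy spends about 0.12 time units per address versus roughly 4.97 without scanning, matching the text's condition that failures must be both costly and frequent for scanning to win.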
(TCP\/UDP), or simply the time for one successful scan and the time for one failed scan.\nAnother parameter is whether the scan tests only that the host is up or is a full test for the vulnerability (or for multiple vulnerabilities).\n2.2.2 Infection Vector (IV)\nThe particular infection vector used to access the remote host is mainly dependent on the particular vulnerability chosen to exploit.\nIn a non-specific sense, it is dependent on the chosen transport protocol and the message size to be sent.\nSection 3 of [5] also proposes three particular classes of IV: self-carried, second channel, and embedded.\n2.3 Self Preservation\nThe Self Preservation actions of a worm may take many forms.\nIn the wild, worms have been observed to disable anti-virus software or to prevent sending themselves to certain addresses known to belong to anti-virus vendors.\nThey have also been seen to attempt to disable other worms which may be contending for the same system.\nWe also believe that time-based throttled scanning may help the worm to \"slip under the radar\".\nWe also propose a decoy method, whereby a worm will release a few children that \"cause a lot of noise\" so that the parent is not noticed.\nIt has also been proposed [5] that a worm cause damage to its host if, and only if, it is \"disturbed\" in some way.\nThis module could contain parameters for: the probability of success in disabling anti-virus or other software updates, the probability of being noticed and thus removed, or the \"hardening\" of the host against other worms.\n2.4 Goal-Based Actions\nA worm's GBA functionality depends on the author's goal list.\nThe Payloads section of [5] provides some useful suggestions for such a module.\nThe opening of a back-door can make the host susceptible to more attacks.\nThis would involve a probability of the back-door being used and any associated traffic utilization.\nIt could also provide a list of other worms this host is now susceptible to or a list of vulnerabilities this host now has.\nSpam relays and 
HTTP proxies of course have an associated bandwidth consumption or traffic pattern.\nInternet DoS attacks would have a set time of activation, a target, and a traffic pattern.\nData damage would have an associated probability that the host dies because of the damage.\nIn Figure 1, this general model of a worm is summarized.\nPlease note that in this model there is no learning; no, or very little, sharing of information between worm instances; and certainly no coordination of actions.\nIn the next section we expand the model to include such mechanisms, and hence arrive at the general model of a swarm worm.\n2.5 Swarms - General Model\nAs described in section 1, the basic characteristics that distinguish swarm behavior from what merely appears to be collective coordinated behavior are four basic attributes.\nThese are:\n1.\nSimplicity of logic & actions; 2.\nLocal Communication Mechanisms; 3.\nDistributed control; and 4.\nEmergent Intelligence, including \"self-organization\".\nFigure 1: General Worm Model\nIn this work we aggregate all of these attributes under the general title of \"Learning, Communication, and Distributed Control\".\nThe presence of these attributes distinguishes swarm worms from otherwise regular worms, or other types of malware such as Zombies.\nIn Figure 2, the generic model of a worm is expanded to include this set of actions.\nWithin this context, then, a worm like Slammer cannot be categorized as a swarm worm, due to the fact that new instances of the worm do not coordinate their actions or share information.\nOn the other hand, Zombies and many other forms of DDoS, which at first glance may be considered swarm worms, are not.\nThis is simply due to the fact that in the case of Zombies, control is not distributed but rather centralized, and no emergent behaviors arise.\nThe latter, the potential emergence of intelligence or new behaviors, is what makes swarm worms so potentially dangerous.\nFinally, when one considers the majority of recent 
disruptions to the Internet infrastructure in light of our description of swarm attacks, said disruptions can easily be categorized as precursors to truly swarm behavior.\nSpecifically,\n\u2022 DDoS - a large number of compromised hosts send useless packets requiring processing (Stacheldraht, http:\/\/www.cert.org\/incidentnotes\/IN-99-04.html).\nDDoS attacks are the early precursors to swarm attacks due to the large number of agents involved.\n\u2022 Code Red CrV1, Code Red II, Nimda - exhibit early notions of swarm attacks, including a backdoor communication channel.\n\u2022 Staniford & Paxson, in \"How to Own the Internet in Your Spare Time?\", explore modifications to CrV1 and Code Red I, II with a \"swarm\"-like type of behavior.\nFor example, they speculate on new worms which employ direct worm-to-worm communication and programmable updates, for example the Warhol worm and Permutation-Scanning (self-coordinating) worms.\n2.6 Swarm Worm: the details\nIn considering the creation of what we believed to be the first \"Swarm Worm\" in existence, we wanted to adhere as closely as possible to the general model presented in section 2.5, while at the same time facilitating large-scale analysis, both empirical and through simulations, of the behavior of the swarm.\nFor this reason, we selected as the first instance of the swarm a simple password-cracking worm.\nFigure 2: General Model of a Swarm Worm\nThe objective of this worm is simply to infect a host by sequentially attempting to log into the host using well-known passwords (a dictionary attack), passwords that have been discovered previously by any member of the swarm, and random passwords.\nOnce a host is infected, the worm will create communication channels with both its \"known neighbors\" at that time, as well as with any offspring that it successfully generates.\nIn this context a successful generation of an offspring means simply infecting a new host and replicating an 
exact copy of itself on such a host.\nWe call this swarm worm the ZachiK worm in honor of one of its creators.\nAs can be seen from this description, the ZachiK worm exhibits all of the elements described before.\nIn the following sections, we describe in detail each of the elements of the ZachiK worm.\n2.7 Infection Vector\nThe infection vector used for the ZachiK worm is the secure shell protocol SSH.\nA modified client which is capable of receiving passwords from the command line was written and integrated with a script that supplies it with various passwords: known and random.\nWhen a password is found for an appropriate target, the infection process begins.\nAfter the root password of a host is discovered, the worm infects the target host and replicates itself.\nThe worm creates a new directory on the target host and copies over the modified SSH client, the script, the communications servers, and the updated versions of the data files (the list of known passwords and the list of current neighbors).\nIt then runs the modified script on the newly infected host, which spawns the communications server, notifies the neighbors, and starts looking for new targets.\nIt could be argued, correctly, that the ZachiK worm can be easily defeated by countermeasure techniques present on most systems today, such as disallowing direct root logins from the network.\nWithin this context ZachiK can quickly be discarded as a very simple and harmless worm that does not require further study.\nHowever, the reader should consider the following: 1.\nZachiK can be easily modified to include a variety of infection vectors.\nFor example, it could be programmed to guess common user names and their passwords, gain access to a system, and then guess the root password or use other well-known vulnerabilities to gain root privileges;\n2.\nZachiK is a proof-of-concept worm.\nThe importance of ZachiK is that it incorporates all of the behaviors of a \"swarm worm\", including, but not restricted to, 
distributed control, communication amongst agents, and learning; 3.\nZachiK is composed of a large collection of agents operating independently, which lends itself naturally to parallel algorithms such as a parallel search of the IPv4 address space.\nWithin this context, Slammer does incorporate a parallel search capability of potentially susceptible addresses.\nHowever, unlike ZachiK, the knowledge discovered by the search is never shared amongst the agents.\nFor these reasons, and many others, one should not discard the potential of this new class of worms but rather embrace its study.\n2.8 Self-Preservation\nIn the case of the ZachiK worm, the main self-preservation technique used is simply keeping the payload small.\nIn this context, this means restricting the number of passwords that an offspring inherits, masquerading worm messages as common HTTP requests, and restricting the number of neighbors to a maximum of five (5).\n2.9 Propagation\nChoosing the next target(s) in an efficient manner requires thought.\nIn the past, known and proposed worms, see [5], have applied varied propagation techniques.\nThese include: strictly random selection of a potential vulnerable host; target lists of vulnerable hosts; locally biased random selection (select a target host at random from a local subnet); and a combination of some or all of the above.\nIn our test and simulation environments, we apply a combination of locally biased and totally random selection of potentially vulnerable hosts.\nHowever, due to the fact that the ZachiK worm is a swarm worm, address-discovery information (that is, knowledge of which addresses are valid) will be shared amongst members of the swarm.\nThe infection and propagation threads do the following set of activities repeatedly:\n\u2022 Choose an address \u2022 Check the validity of the address \u2022 Choose a set of passwords \u2022 Try infecting the selected host with this set of passwords\nAs described earlier, choosing 
an address makes use of a combination of random selection, local bias, and target lists.\nSpecifically, to choose an address, the instance may either:\n\u2022 Generate a new random address \u2022 Generate an address on the local network \u2022 Pick an address from a handoff list\nThe choice is made randomly among these options, and can be varied to test the dependency of propagation on particular choices.\nPasswords are either chosen from the list of known passwords or newly generated.\nWhen an infection of a valid address fails, the address is added to a list of handoffs, which is sent to the neighbors to work on.\n2.10.1 Communication\nThe concept of a swarm is based on the transfer of information amongst neighbors, which relay their new incoming messages to their neighbors, and so on until every worm instance in the swarm is aware of these messages.\nThere are two classes of messages: data or information messages, and commands.\nThe command messages are meant for an external user (a.k.a. hackers and\/or crackers) to control the actions of the instances, and are currently not implemented.\nThe information messages are currently of three kinds: new member, passwords, and exploitable addresses (\"handoffs\").\nThe new member messages are messages that a new instance sends to the neighbors on its (short) list of initial neighbors.\nThe neighbors then register these instances in their neighbor lists.\nThese are the messages that form the multi-connectivity of the swarm; without them, the topology would be a tree-like structure, where eliminating a single node would cause the instances beneath it to be inaccessible.\nThe passwords messages inform instances of newly discovered passwords, and by informing all instances, the swarm as a whole collects this information, which allows it to infect new hosts more effectively.\nThe handoffs messages inform instances of valid addresses that could not be compromised (i.e., the password for the root account could not be broken).\nSince the 
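The address-choice options and the handoff mechanism described in section 2.9 can be sketched as follows. This is an illustrative Python sketch with made-up strategy weights, not the worm's actual code; addresses are modeled as integers and neighbor channels as plain lists.

```python
import random

def choose_address(handoffs, local_subnet, weights=(0.4, 0.3, 0.3)):
    """Choose the next target: a fresh random address, an address on the
    local subnet, or an address from the handoff list (if non-empty).
    The strategy is picked at random per attempt; the weights are
    invented here and can be varied to test how propagation depends on
    the mix, as the text describes."""
    strategy = random.choices(["random", "local", "handoff"], weights)[0]
    if strategy == "handoff" and handoffs:
        return handoffs.pop()               # a known-valid, uncracked host
    if strategy == "local":
        return local_subnet * 256 + random.randrange(256)
    return random.randrange(2**24)

def on_failed_infection(address, handoffs, neighbor_queues):
    """A valid address whose root password resisted cracking is handed
    off: kept locally and sent to each neighbor to work on."""
    handoffs.append(address)
    for q in neighbor_queues:
        q.append(address)                   # the "handoffs" message
```

Because valid addresses are expensive to discover in a sparse space, recycling them through handoffs is where the swarm's coordination gain comes from.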
address space is rather sparse, it takes a relatively long time (i.e., many trials) to discover a valid address.\nTherefore, by handing off discovered valid addresses, the swarm is (a) conserving \"energy\" by not re-discovering the same addresses, and (b) attacking more effectively.\nIn a way, this is a simple instance of the coordinated activity of a swarm.\n2.10.2 Coordination\nWhen a worm instance is \"born\", it relays its existence to all neighbors on its list.\nThe main thread then spawns a few infection threads, and continues to handle incoming messages (registering neighbors, adding new passwords, receiving addresses, and relaying these messages).\n2.10.3 Distributed Control\nControl in the ZachiK worm is distributed in the sense that each instance of the worm performs a set of actions independently of every other instance, while at the same time benefiting from the learning achieved by its immediate neighbors.\n2.11 Goal Based Actions\nThe first instantiation of the ZachiK worm has two basic goals.\nThese are: (1) propagate, and (2) discover and share with members of the swarm new root passwords.\n3.\nEXPERIMENTAL DESIGN\nIn order to verify our hypothesis that Swarm Worms are more capable, and therefore more dangerous, than other well-known worms, a network testbed was created, and a simulator, capable of simulating large-scale \"Internet-like\" topologies (IPv4 space), was developed.\nThe network testbed consisted of a local area network of 30 Linux-based computers.\nThe simulator was written in C++.\nThe simple swarm worm described in section 2.6 was used to infect patient zero, and then the swarm worm was allowed to propagate via its own mechanisms of propagation, distributed control, and swarm behaviors.\nIn the case of the simple local area network of 30 computers, six (6) different root passwords out of a password space of 4 digits (10,000 options) were selected.\nAt the start of the experiment a single password is known, that of patient zero.\nAll shared 
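The neighbor-relay behavior of section 2.10.1 (each instance forwards messages it has not yet seen to all of its neighbors, until the whole swarm knows them) amounts to flooding with duplicate suppression. A minimal sketch, assuming a simple in-memory neighbor graph; node names and the example message are hypothetical.

```python
def relay(node, message, neighbors_of, seen):
    """Flood a message (e.g. a newly discovered password) through the
    swarm: an instance that has not seen the message records it and
    relays it to all of its neighbors. The per-node 'seen' sets
    suppress duplicates and keep the flood finite."""
    if message in seen[node]:
        return
    seen[node].add(message)
    for n in neighbors_of[node]:
        relay(n, message, neighbors_of, seen)

# A multi-connected 4-node swarm, as produced by "new member" messages;
# without those cross-links the topology would degenerate into a tree.
topology = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
seen = {n: set() for n in topology}
relay(0, "root-password:1234", topology, seen)
```

In this sketch every node ends up holding the password, illustrating why multi-connectivity matters: with a tree topology, removing one interior node would cut off its whole subtree.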
passwords are distributed randomly across all nodes.\nSimilarly, in the case of the simulation, a network topology of 10,000 hosts, whose addresses were selected randomly across the IPv4 space, was constructed.\nWithin that space, a total of 200 shared passwords were selected and distributed either randomly or targeted to specific subnets of the network topology.\nFor example, in one of our simulation runs, the network topology consisted of 200 subnets, each containing 50 hosts.\nIn such a topology, shared passwords were distributed across subnets, where a varying percentage of passwords were shared across subnets.\nThe percentages of shared passwords used were reflective of early empirical studies, where up to 39.7% of common passwords were found to be shared.\n4.\nRESULTS\nIn Figure 3, the results comparing Swarm Attack behavior versus that of a typical Malform Worm for a 30-node LAN are presented.\nIn this set of empirical runs, six (6) shared passwords were distributed at random across all nodes, from a space of 10,000 possible passwords.\nThe data presented reflect the behaviors of a total of three (3) distinct classes of worm or swarm worm.\nThe classes of worms presented are as follows:\n\u2022 I-NL-NS: Generic worm, independent (I), no learning\/memoryless (NL), and no sharing of information with neighbors or offspring (NS); \u2022 S-L-SP: Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords (SP) across nearest neighbors and offspring; and \u2022 S-L-SP&A: Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords and existent addresses (SP&A) across nearest neighbors and offspring.\nAs shown in Figure 3, the results validate our original hypothesis that swarm worms are significantly more efficient and dangerous than generic worms.\nIn this set of experiments, the sharing of passwords provides an order of magnitude improvement over a memoryless random worm.\nSimilarly, a 
swarm worm that shares passwords and addresses is approximately two orders of magnitude more efficient than its generic counterpart.\nIn Figure 3, a series of discontinuities can be observed.\nThese discontinuities are an artifact of the small sample space used for this experiment.\nBasically, as soon as a password is broken, all nodes sharing that specific password are infected within a few seconds.\nNote that it is trivial for a swarm worm to scan and discover a small shared password space.\nIn Figure 4, the simulation results comparing Swarm Attack behavior versus that of a Generic Malform Worm are presented.\nIn this set of simulation runs, a network topology of 10,000 hosts, whose addresses were selected randomly across the IPv4 space, was constructed.\nWithin that space, a total of 200 shared passwords were selected and distributed either randomly or targeted to specific subnets of the network topology.\nThe data presented reflect the behaviors of three (3) distinct classes of worm or swarm worm and two (2) different target host selection scanning strategies (random scanning and local bias).\nThe amount of local bias was varied across multiple simulation runs.\nThe results presented are aggregate behaviors.\nIn general, the following classes of Generic Worms and Swarm Worms were simulated.\nAddress Scanning:\n\u2022 Random: addresses are selected at random from a subset of the IPv4 space, namely, a 2^24 address space; and \u2022 Local Bias: addresses are selected at random from either a local subnet (256 addresses) or from a subset of the IPv4 space, namely, a 2^24 address space.\nThe percentage of local bias is varied across multiple runs.\nLearning, Communication & Distributed Control \u2022 I-NL-NS: Generic worm, independent (I), no learning\/memoryless (NL), and no sharing of information with neighbors or offspring (NS); \u2022 I-L-OOS: Generic worm, independent (I), learning (L), and one-time sharing of information with offspring 
only (OOS); \u2022 S-L-SP: Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords (SP) across nearest neighbors and offspring; \u2022 S-L-SA&OP: Swarm Worm (S), learning (L), keeps a list of learned passwords, sharing of addresses with neighbors and offspring, and shares passwords one time only (at creation) with offspring (SA&OP); \u2022 S-L-SP&A: Swarm Worm (S), learning (L), keeps a list of learned passwords, and sharing of passwords and existent addresses (SP&A) across nearest neighbors and offspring.\nAs shown in Figure 4, the results are consistent with our set of empirical results.\nIn addition, the following observations can be made.\n1.\nLocal preference is incredibly effective.\n2.\nShort address handoffs are more effective than long ones.\nWe varied the size of the list allowed in the sharing of addresses; the overhead associated with a long address list is detrimental to the performance of the swarm worm, as well as to its stealthiness; 3.\nFor the local bias case, sharing valid addresses of susceptible hosts, as the S-L-SA&OP worm does (recall, the S-L-SA&OP swarm shares passwords, one time only, with offspring at creation time), is more effective than sharing passwords as in the case of the S-L-SP swarm.\nIn this case, we can think of the swarm as launching a distributed dictionary attack: different segments of the swarm use different passwords to try to break into susceptible uninfected hosts.\nIn the local bias mode, early in the life of the swarm, address-sharing is more effective than password-sharing, until most subnets are discovered.\nThen the targeting of local addresses assists in discovering the susceptible hosts, while the swarm members need to waste time rediscovering passwords; and 4.\nInfecting the last 0.5% of nodes takes a very long time in non-local bias mode.\nBasically, the shared password list across subnets has been exhausted, and the swarm reverts to a simple random password-discovery 
algorithm.\nFigure 3: Swarm Attack Behavior vs. Malform Worm: Empirical Results, 30-node LAN Figure 4: Swarm Attack Behavior vs. Malform Worm: Simulation Results\n5.\nSUMMARY AND FUTURE WORK\nIn this manuscript, we have presented an abstract model, similar in some aspects to that of Weaver [5], that helps explain the generic nature of worms.\nThe model presented in section 2 was extended to incorporate a new class of potentially dangerous worms called Swarm Worms.\nSwarm Worms behave very much like biological swarms and exhibit a high degree of learning, communication, and distributed intelligence.\nSuch Swarm Worms are potentially more harmful than their generic counterparts.\nIn addition, the first instance, to our knowledge, of such a learning worm was created, called ZachiK.\nZachiK is a simple password-cracking swarm worm that incorporates different learning and information-sharing strategies.\nThis swarm worm was deployed in both a local area network of thirty (30) hosts and simulated in a 10,000-node topology.\nPreliminary results showed that such worms are capable of compromising hosts at rates up to two orders of magnitude faster than their generic counterparts while retaining stealth capabilities.\nThis work opens up a new area of interesting problems.\nSome of the most interesting and pressing problems to be considered are as follows: \u2022 Is it possible to apply some of the learning concepts developed over the last ten years in the areas of swarm intelligence, agent systems, and distributed control to the design of sophisticated swarm worms in such a way that true emergent behavior takes place?\n\u2022 Are the current techniques being developed in the design of Intrusion Detection & CounterMeasure Systems and survivable systems effective against this new class of worms?\nand \u2022 What techniques, if any, can be developed to create defenses against swarm worms?","keyphrases":["malwar","slammer worm","swarm worm","emerg intellig","zachik","local 
commun mechan","prng method","pre-gener target list","distribut intellig","intrus detect","countermeasur system","emerg behavior","internet worm","swarm intellig"],"prmu":["P","P","P","P","P","M","U","U","M","M","R","R","R","R"]} {"id":"C-19","title":"Service Interface: A New Abstraction for Implementing and Composing Protocols","abstract":"In this paper we compare two approaches to the design of protocol frameworks -- tools for implementing modular network protocols. The most common approach uses events as the main abstraction for a local interaction between protocol modules. We argue that an alternative approach, that is based on service abstraction, is more suitable for expressing modular protocols. It also facilitates advanced features in the design of protocols, such as dynamic update of distributed protocols. We then describe an experimental implementation of a service-based protocol framework in Java.","lvl-1":"Service Interface: A New Abstraction for Implementing and Composing Protocols\u2217 Olivier R\u00fctti Pawe\u0142 T. 
Wojciechowski Andr\u00e9 Schiper Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL) 1015 Lausanne, Switzerland {Olivier.Rutti, Pawel.Wojciechowski, Andre.Schiper}@epfl.ch ABSTRACT In this paper we compare two approaches to the design of protocol frameworks - tools for implementing modular network protocols.\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules.\nWe argue that an alternative approach, based on a service abstraction, is more suitable for expressing modular protocols.\nIt also facilitates advanced features in the design of protocols, such as dynamic update of distributed protocols.\nWe then describe an experimental implementation of a service-based protocol framework in Java.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Applications 1.\nINTRODUCTION Protocol frameworks, such as Cactus [5, 2], Appia [1, 16], Ensemble [12, 17], Eva [3], SDL [8] and Neko [6, 20], are programming tools for developing modular network protocols.\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together.\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications.\nMoreover, protocol modules can be plugged into the system dynamically.\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [14] - an important class of applications.\nMost protocol frameworks are based on events (all frameworks cited above are based on this abstraction).\nEvents are used for asynchronous communication between different modules on the same machine.\nHowever, the use of events raises some problems [4, 13].\nFor instance, the composition of modules may require connectors to route events, which introduces a burden for a protocol composer [4].\nProtocol frameworks such as Appia and Eva extend the 
event-based approach with channels.\nHowever, in our opinion, this solution is not satisfactory since composition of complex protocol stacks becomes more difficult.\nIn this paper, we propose a new approach for building modular protocols that is based on a service abstraction.\nWe compare this new approach with the common, event-based approach.\nWe show that protocol frameworks based on services have several advantages, e.g. they allow for fairly straightforward protocol composition, clear implementation, and better support of dynamic replacement of distributed protocols.\nTo validate our claims, we have implemented SAMOA - an experimental protocol framework that is purely based on the service-based approach to module composition and implementation.\nThe framework allowed us to compare the service- and event-based implementations of an adaptive group communication middleware.\nThe paper is organized as follows.\nSection 2 defines general notions.\nSection 3 presents the main characteristics of event-based frameworks, and features that are distinct for each framework.\nSection 4 describes our new approach, which is based on service abstraction.\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework.\nThe description of our experimental implementation is presented in Section 6.\nFinally, we conclude in Section 7.\n2.\nPROTOCOL FRAMEWORKS In this section, we describe notions that are common to all protocol frameworks.\nProtocols and Protocol Modules.\nA protocol is a distributed algorithm that solves a specific problem in a distributed system, e.g. the TCP protocol solves the reliable channel problem.\nA protocol is implemented as a set of identical protocol modules located on different machines.\nProtocol Stacks.\nA stack is a set of protocol modules (of different protocols) that are located on the same machine.\nNote that, despite its name, a stack is not strictly layered, i.e. 
a protocol module can interact with all other protocol modules in the same stack, not only with the protocol modules directly above and below.\nIn the remainder of this paper, we use the terms machine and stack interchangeably.\nFigure 1: Example of a protocol stack.\nIn Figure 1, we show an example protocol stack.\nWe represent protocol modules by capital letters indexed with a natural number, e.g. P1, Q1, R1 and S1.\nWe write Pi to denote the protocol module of a protocol P in stack i.\nWe use this notation throughout the paper.\nModules are represented as white boxes.\nArrows show module interactions.\nFor instance, protocol module P1 interacts with the protocol module Q1 and conversely (see Fig. 1).\nProtocol Module Interactions.\nBelow, we define the different kinds of interaction between protocol modules.\n\u2022 Requests are issued by protocol modules.\nA request by a protocol module Pi is an asynchronous call by Pi of another protocol module.\n\u2022 Replies are the results of a request.\nA single request can generate several replies.\nOnly protocol modules belonging to the same protocol as the module that issued the request are concerned by the corresponding replies.\nFor example, a request by Pi generates replies that concern only protocol modules Pj.\n\u2022 Notifications can be used by a protocol module to inform (possibly many) protocol modules in the same stack about the occurrence of a specific event.\nNotifications may also be the results of a request.\n3.\nEVENT-BASED PROTOCOL FRAMEWORK DESIGN Most existing protocol frameworks are event-based.\nExamples are Cactus [5, 2], Appia [1, 16] and Ensemble [12, 17].\nIn this section, we define the notion of an event in protocol frameworks.\nWe also explain how protocol modules are structured in event-based frameworks.\nEvents.\nAn event is a special object for indirect communication between protocol modules in the same stack.\nEvents may transport some information, e.g. 
a network message or some other data.\nWith events, the communication is indirect, i.e. a protocol module that triggers an event is not aware of the module(s) that handle the event.\nEvents enable one-to-many communication within a protocol stack.\nTriggering an event can be done either synchronously or asynchronously.\nIn the former case, the thread that triggers an event e is blocked until all protocol modules that handle e have terminated handling of event e.\nIn the latter case, the thread that triggers the event is not blocked.\nProtocol Modules.\nIn event-based protocol frameworks, a protocol module consists of a set of handlers.\nEach handler is dedicated to handling a specific event.\nHandlers of the same protocol module may share data.\nHandlers can be dynamically bound to events.\nHandlers can also be unbound dynamically.\nUpon triggering some event e, all handlers bound to e are executed.\nIf no handler is bound, the behavior is usually unspecified.\nFigure 2: Example of an event-based protocol stack.\nIn Figure 2, we show an example of an event-based stack.\nEvents are represented by small letters, e.g. e, f, ... The fact that a protocol module can trigger an event is represented by an arrow starting from the module.\nA white trapezoid inside a module box represents a handler defined by the protocol module.\nTo mark that some handler is bound to event e, we use an arrow pointing to the handler (the label on the arrow represents the event e).\nFor example, the protocol module P1 triggers event e and handles event f (see Fig. 
2).\nNote that the network is represented as a special protocol module that handles the send event (to send a message to another machine) and triggers the deliver event (upon receipt of a message from another machine).\nSpecific Features.\nSome protocol frameworks have unique features.\nBelow, we present the features that influence composition and implementation of protocol modules.\nIn Cactus [5, 2], the programmer can give a priority number to a handler upon binding it to an event.\nWhen an event is triggered, all handlers are executed following the order of priority.\nA handler h is also able to cancel the execution of an event trigger: all handlers that should be executed after h according to the priority are not executed.\nAppia [1, 16] and Eva [3] introduce the notion of channels.\nChannels allow routes of events to be built in protocol stacks.\nEach protocol module has to subscribe to one or more channels.\nAll events are triggered by specifying a channel they belong to.\nWhen a protocol module triggers an event e specifying channel c, all handlers bound to e that are part of a protocol that subscribes to c are executed (in the order prescribed by the definition of channel c).\n4.\nSERVICE-BASED PROTOCOL FRAMEWORK In this section, we describe our new approach for implementing and composing protocols that is based on services.\nWe show in Section 5 the advantages of service-based protocol frameworks over event-based protocol frameworks.\nService Interface.\nIn our service-based framework, protocol modules in the same stack communicate through objects called service interfaces.\nRequests, replies and notifications are all issued to service interfaces.\nProtocol Modules.\nA protocol module is a set of executers, listeners and interceptors.\nExecuters handle requests.\nAn executer can be dynamically bound to a service interface.\nIt can later be unbound.\nA request issued to a service interface si leads to the execution of the executer bound to si.\nIf no 
executer is bound to si, the request is delayed until some executer is bound to si.\nContrary to events, at most one executer can be bound to a service interface at any time on each machine.\nListeners handle replies and notifications.\nA listener can be dynamically bound and unbound to\/from a service interface si.\nA notification issued to a service interface si is handled by all listeners bound to si in the local stack.\nA reply issued to a service interface is handled by a single listener.\nTo ensure that a single listener handles a reply, a module Pi has to identify, each time it issues a request, the listener to handle the possible reply.\nIf the request and the reply occur, respectively, in stack i and in stack j, the service interface si on i communicates to the service interface si on j the listener that must handle the reply.\nIf the listener that must handle the reply does not exist, the reply is delayed until the listener is created.\nFigure 3: Example of a service-based protocol stack.\nIn Figure 3, we show an example of a service-based stack.\nWe denote a service interface by a small letter (e.g. t, u and nt) in a hexagonal box.\nThe fact that a module Pi can generate a request to a service interface si is represented by a dashed black arrow going from Pi to si.\nSimilarly, a dashed white arrow going from module Pi to service interface si represents the fact that Pi can generate a reply or a notification to si.\nWe represent executers with white boxes inside protocol modules and listeners with white boxes with a gray border.\nA connecting line between a service interface si and an executer e (resp.\na listener l) shows that e (resp.\nl) is bound to si.\nIn Figure 3, module Q1 contains an executer bound to service interface t and a listener bound to service interface u. Module Q1 can generate replies and notifications to service interface t and requests to service interface u. 
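The executer and listener mechanics described above can be sketched in Java, the paper's implementation language. This is an illustrative sketch only: the class and method names (ServiceInterface, bindExecuter, bindListener, request, notifyAllListeners) are invented for this example and are not the actual SAMOA API. It shows the two behaviors the text specifies: a request is delayed until an executer is bound, and a notification reaches all bound listeners.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of a service interface (names are illustrative, not SAMOA's API).
// Req is the type of request arguments, Resp the type of replies/notifications.
final class ServiceInterface<Req, Resp> {
    private Consumer<Req> executer;                       // at most one executer may be bound
    private final List<Consumer<Resp>> listeners = new ArrayList<>();
    private final List<Req> pending = new ArrayList<>();  // requests delayed while no executer is bound

    // Bind the single executer; any delayed requests are then handled.
    void bindExecuter(Consumer<Req> e) {
        if (executer != null) throw new IllegalStateException("an executer is already bound");
        executer = e;
        List<Req> delayed = new ArrayList<>(pending);
        pending.clear();
        delayed.forEach(e);
    }

    void bindListener(Consumer<Resp> l) { listeners.add(l); }

    // A request is handled by the bound executer, or delayed if none is bound yet.
    void request(Req r) {
        if (executer == null) pending.add(r); else executer.accept(r);
    }

    // A notification is handled by all listeners bound in the local stack.
    void notifyAllListeners(Resp resp) { listeners.forEach(l -> l.accept(resp)); }
}

public class ServiceInterfaceDemo {
    public static void main(String[] args) {
        ServiceInterface<String, String> t = new ServiceInterface<>();
        List<String> received = new ArrayList<>();

        t.request("req-1");                               // delayed: no executer bound yet
        t.bindExecuter(r -> t.notifyAllListeners("reply to " + r));
        t.bindListener(received::add);                    // a listener of some module, e.g. P1
        t.request("req-2");                               // handled immediately

        System.out.println(received);                     // prints [reply to req-2]
    }
}
```

Note the ordering in main: the delayed req-1 is flushed to the executer the moment it is bound, but no listener exists at that point, so only the reply to req-2 is recorded. Reply routing to one designated listener, and interceptors, are omitted from this sketch.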
Note that the service interface nt allows access to the network.\nFigure 4: Execution of protocol interactions with interceptors.\nAn interceptor plays a special role.\nSimilarly to executers, interceptors can be dynamically bound or unbound to a service interface.\nThey are activated each time a request, a reply or a notification is issued to the service interface they are bound to.\nThis is illustrated in Figure 4.\nIn the right part of the figure, the interceptor of the protocol module T1 is represented by a rounded box.\nThe interceptor is bound to service interface t.\nThe left part of the figure shows that an interceptor can be seen as an executer plus a listener.\nWhen P1 issues a request req to the service interface t, the executer-interceptor of T1 is executed.\nThen, module T1 may forward a request req' to the service interface t 1, where req' may differ from req.\nWhen module Q1 issues a reply or a notification, a similar mechanism is used, except that this time the listener-interceptor of T1 is executed.\nNote that a protocol module Ti that has an interceptor bound to a service interface is able to modify requests, replies and notifications.\nUpon requests, if several interceptors are bound to the same service interface, they are executed in the order of binding.\nUpon replies and notifications, the order is reversed.\n5.\nADVANTAGES OF SERVICE-BASED PROTOCOL FRAMEWORK DESIGN We show in this section the advantages of service-based protocol frameworks over event-based protocol frameworks.\nWe structure our discussion in three parts.\nFirstly, we present how protocol interactions are modeled in each of the protocol frameworks.\nThen, we discuss the composition of protocol modules in each of these frameworks.\nFinally, we present the problem of dynamic protocol replacement and the advantages of service interfaces for implementing it.\nThe discussion is summarized in Table 1.\n5.1 Protocol Module Interactions A natural model 
of protocol interactions (as presented in Section 2) facilitates the implementation of protocol modules.\nFor each protocol interaction, we show how it is modeled in both frameworks.\nWe also explain that an inadequate model may lead to problems.\nRequests.\nIn service-based frameworks, a request is generated to a service interface.\nEach request is handled by at most one executer, since we allow only one executer to be bound to a service interface at any time.\nOn the other hand, in event-based frameworks, a protocol module emulates a request by triggering an event.\nThere is no guarantee that this event is bound to only one handler, which may lead to programming errors.\n(Footnote 1: The two service interfaces t in the left part of Figure 4 represent the same service interface t; the duplication is only to make the figure readable.)\nReplies.\nWhen a protocol module generates a reply in a service-based framework, only the correct listener (identified at the time the corresponding request was issued) is executed.\nThis ensures that a request issued by some protocol module Qi leads to replies handled by protocol modules Qj (i.e. 
protocol modules of the same protocol).\nThis is not the case in event-based frameworks, as we now show.\nConsider protocol module Q1 in Figure 2 that triggers event g to emulate a request.\nModule S1 handles the request.\nWhen module Si triggers event h to emulate a reply (remember that a reply can occur in many stacks), both modules Qi and Ri will handle the reply (they both contain a handler bound to h).\nThis behavior is not correct: only protocol modules Qi should handle the reply.\nMoreover, as modules Ri are not necessarily implemented to interact with modules Qi, this behavior may lead to errors.\nSolutions to this problem exist.\nHowever, they introduce an unnecessary burden on the protocol programmers and the stack composer.\nFor instance, channels make it possible to route events to ensure that modules handle only events concerning them.\nHowever, the protocol programmer must take channels into account when implementing protocols.\nMoreover, the composition of complex stacks becomes more difficult due to the fact that the composer has to create many channels to ensure that modules handle events correctly.\nAdding special protocol modules (named connectors) for routing events is also not satisfactory, since it requires additional work from the composer and introduces overhead.\nNotifications.\nContrary to requests and replies, notifications are well modeled in event-based frameworks.\nThe reason is that notifications correspond to the one-to-many communication scheme provided by events.\nIn service-based frameworks, notifications are also well modeled.\nWhen a module generates a notification to a service interface si, all listeners bound to si are executed.\nNote that in this case, service interfaces provide the same pattern of communication as events.\n5.2 Protocol Module Composition Replies (and sometimes notifications) are the results of a request.\nThus, there is a semantic link between them.\nThe composer of protocol modules must preserve this 
link in order to compose correct stacks.\nWe now explain that service-based frameworks provide a mechanism to preserve this link, while in event-based frameworks the lack of such a mechanism leads to error-prone composition.\nIn service-based frameworks, requests, replies and notifications are issued to a service interface.\nThus, a service interface introduces a link between these interactions.\nTo compose a correct stack, the composer has to bind a listener to service interface si for each module that issues a request to si.\nSimilarly, an executer must be bound to si for a module that issues replies or notifications to si.\nApplying this simple methodology ensures that every request issued to a service interface si eventually results in several replies or notifications issued to the same service interface si.\nIn event-based frameworks, all protocol interactions are issued through different events: there is no explicit link between an event triggered upon requests and an event triggered upon the corresponding replies.\nThus, the composer of a protocol stack must know the meaning of each event in order to preserve the semantic link between replies (and notifications) and requests.\nMoreover, nothing prevents binding a handler that should handle a request to an event used to issue a reply.\nNote that these problems can be partially solved by typing events and handlers.\nHowever, this does not prevent errors if there are several instances of the same event type.\nNote that protocol composition is clearer in protocol frameworks that are based on services, rather than on events.\nThe reason is that several events that are used to model different protocol interactions can be modeled by a single service interface.\n5.3 Dynamic Replacement of Protocols Dynamic replacement of protocols consists of switching on-the-fly between protocols that solve the same problem.\nReplacement of a protocol P by a new protocol newP means that a protocol module Pi is 
replaced by newPi in every stack i.\nThis replacement is problematic since the local replacements (within stacks) must be synchronized in order to guarantee protocol correctness [21, 18].\nFigure 5: Dynamic replacement of protocol P.\nFor the synchronization algorithms to work, module interactions are intercepted in order to detect a time when Pi should be replaced by newPi.\n(Other solutions, e.g. in [11], are more complex.)\nIn Fig. 5, we show how this interception can be implemented in protocol frameworks that are based on services (in the left part of the figure) and events (in the right part of the figure).\nThe two-sided arrows point to the protocol modules P1 and newP1 that are switched.\nIt can be seen that the approach that uses the Service Interface mechanism has advantages.\nThe intercepting module Repl-P1 has an interceptor bound to service interface t that intercepts every request handled by module P1 and all replies and notifications issued by P1.\nThe code of the module P1 can therefore remain unchanged.\nIn event-based frameworks, the solution is to add an intermediate module Repl-P1 that intercepts the requests issued to P1 and also the replies and notifications issued by P1.\nAlthough this ad-hoc solution may seem similar to the service-based approach, there is an important difference.\nThe event-based solution requires slightly modifying the module P1 since instead of handling event g and triggering event h, P1 must now handle different events g' and h' (see Fig. 
5).\n6.\nIMPLEMENTATION We have implemented an experimental service-based protocol framework (called SAMOA) [7].\nOur implementation is light-weight: it consists of approximately 1200 lines of code in Java 1.5 (with generics).\nIn this section, we describe the two main classes of our implementation: Service (encoding the Service Interface) and Protocol (encoding protocol modules).\nTable 1 (Service-based vs. event-based): Protocol Interaction - an adequate representation (service-based) vs. an inadequate representation (event-based); Protocol Composition - clear and safe vs. complex and error-prone; Dynamic Replacement - an integrated mechanism vs. ad-hoc solutions.\nFinally, we present an example protocol stack that we have implemented to validate the service-based approach.\nThe Service Class.\nA Service object is characterized by the arguments of requests and the arguments of responses.\nA response is either a reply or a notification.\nA special argument, called message, determines the kind of interactions modeled by the response.\nA message represents a piece of information sent over the network.\nWhen a protocol module issues a request, it can give a message as an argument.\nThe message can specify the listener that must handle the reply.\nWhen a protocol module issues a response to a service interface, a reply is issued if one of the arguments of the response is a message specifying a listener.\nOtherwise, a notification is issued.\nExecuters, listeners and interceptors are encoded as inner classes of the Service class.\nThis allows us to provide type-safe protocol interactions.\nFor instance, executers can only be bound to the Service object they belong to.\nThus, the parameters passed to requests (that are verified statically) always correspond to the parameters accepted by the corresponding executers.\nThe type of a Service object is determined by the type of the arguments of requests and responses.\nA Service object t is compatible with another Service object s if the type of the 
arguments of requests (and responses) of t is a subtype of the arguments of requests (and responses) of s.\nIn practice, if a protocol module Pi can issue a request to a protocol UDP, then it may also issue a request to TCP (compatible with UDP) due to the subtyping relation on parameters of communicating modules.\nThe Protocol Class.\nA Protocol object consists of three sets of components, one set for each component type (a listener, an executer, and an interceptor).\nProtocol objects are characterized by names to retrieve them easily.\nMoreover, we have added some features to bind and unbind all executers or interceptors to\/from the corresponding Service objects.\nProtocol objects can be loaded into a stack dynamically.\nAll these features made it easy to implement dynamic replacement of network protocols.\nProtocol Stack Implementation.\nTo validate our ideas, we have developed an Adaptive Group Communication (AGC) middleware, adopting both the service- and the event-based approaches.\nFig. 6 shows the corresponding stacks of the AGC middleware.\nBoth stacks allow the Consensus and Atomic Broadcast protocols to be dynamically updated.\nThe architecture of our middleware, shown in Fig. 6, builds on the group communication stack described in [15].\nThe UDP and RP2P modules provide, respectively, unreliable and reliable point-to-point transport.\nThe FD module implements a failure detector; we assume that it ensures the properties of the \u25c7S failure detector [9].\nFigure 6: Adaptive Group Communication Middleware: service-based (left) vs. event-based (right).\nThe CT module provides a distributed consensus service using the Chandra-Toueg algorithm [10].\nThe ABc. module implements atomic broadcast - a group communication primitive that delivers messages to all processes in the same order.\nThe GM module provides a group membership service that maintains consistent membership data among group members (see [19] for details).\nThe Repl ABc. and the Repl CT modules implement the replacement algorithms [18] for, respectively, the ABc. and the CT protocol modules.\nNote that each arrow in the event-based architecture represents an event.\nWe do not name events in the figure for readability.\nThe left stack in Figure 6 shows the implementation of AGC with our service-based framework.\nThe right stack shows the same implementation with an event-based framework.\nPerformance Evaluation.\nTo evaluate the overhead of service interfaces, we compared performance of the service- and event-based implementations of the AGC middleware.\nThe latter implementation of AGC uses the Cactus protocol framework [5, 2].\nIn our experiment, we compared the average latency of Atomic Broadcast (ABcast), which is defined as follows.\nConsider a message m sent using ABcast.\nWe denote by ti(m) the time between the moment of sending m and the moment of delivering m on a machine (stack) i.\nWe define the average latency of m as the average of ti(m) for all machines (stacks) i within a group of stacks.\nPerformance tests were made using a cluster of PCs running Red Hat Linux 7.2, where each PC has a Pentium III 766 MHz processor and 128MB of RAM.\nAll PCs are interconnected by a 100 Base-TX duplex Ethernet hub.\nOur experiment involved 7 machines (stacks) that ABcast messages of 4Mb under a constant load, where the load is the number of messages per second.\nIn Figure 7, we show the results of our experiment for different loads.\nLatencies are shown on the vertical axis, while message 
loads are shown on the horizontal axis.\nThe solid line shows the results obtained with our service-based framework.\nThe dashed line shows the results obtained with the Cactus framework.\nFigure 7: Comparison between our service-based framework and Cactus (average latency [ms] vs. load [msg\/s]).\nThe overhead of the service-based framework is approximately 10%.\nThis can be explained as follows.\nFirstly, the service-based framework provides a higher-level abstraction, which has a small cost.\nSecondly, the AGC middleware was initially implemented and optimized for the event-based Cactus framework.\nHowever, it is possible to optimize the AGC middleware for the service-based framework.\n7.\nCONCLUSION In the paper, we proposed a new approach to protocol composition that is based on the notion of Service Interface, instead of events.\nWe believe that the service-based framework has several advantages over event-based frameworks.\nIt allows us to: (1) model protocol interactions accurately, (2) reduce the risk of errors during the composition phase, and (3) implement dynamic protocol updates simply.\nA prototype implementation allowed us to validate our ideas.\n8.\nREFERENCES [1] The Appia project.\nDocumentation available electronically at http:\/\/appia.di.fc.ul.pt\/.\n[2] Nina T. Bhatti, Matti A. Hiltunen, Richard D. Schlichting, and Wanda Chiu.\nCoyote: a system for constructing fine-grain configurable communication services.\nACM Transactions on Computer Systems, 16(4):321-366, November 1998.\n[3] Francisco Vilar Brasileiro, Fab\u00edola Greve, Fr\u00e9d\u00e9ric Tronel, Michel Hurfin, and Jean-Pierre Le Narzul.\nEva: An event-based framework for developing specialized communication protocols.\nIn Proceedings of the 1st IEEE International Symposium on Network Computing and Applications (NCA '01), 2001.\n[4] Daniel C. 
B\u00fcnzli, Sergio Mena, and Uwe Nestmann.\nProtocol composition frameworks: A header-driven model.\nIn Proceedings of the 4th IEEE International Symposium on Network Computing and Applications (NCA '05), July 2005.\n[5] The Cactus project.\nDocumentation available electronically at http:\/\/www.cs.arizona.edu\/cactus\/.\n[6] The Neko project.\nDocumentation available electronically at http:\/\/lsrwww.epfl.ch\/neko\/.\n[7] The SAMOA project.\nDocumentation available electronically at http:\/\/lsrwww.epfl.ch\/samoa\/.\n[8] The SDL project.\nDocumentation available electronically at http:\/\/www.sdl-forum.org\/SDL\/.\n[9] Tushar Deepak Chandra, Vassos Hadzilacos, and Sam Toueg.\nThe weakest failure detector for solving consensus.\nJournal of the ACM, 43(4):685-722, 1996.\n[10] Tushar Deepak Chandra and Sam Toueg.\nUnreliable failure detectors for reliable distributed systems.\nJournal of the ACM, 43(2):225-267, 1996.\n[11] Wen-Ke Chen, Matti A. Hiltunen, and Richard D. Schlichting.\nConstructing adaptive software in distributed systems.\nIn Proceedings of the 21st IEEE International Conference on Distributed Computing Systems (ICDCS '01), April 2001.\n[12] The Ensemble project.\nDocumentation available electronically at http:\/\/www.cs.cornell.edu\/Info\/Projects\/Ensemble\/.\n[13] Richard Ekwall, Sergio Mena, Stefan Pleisch, and Andr\u00e9 Schiper.\nTowards flexible finite-state-machine-based protocol composition.\nIn Proceedings of the 3rd IEEE International Symposium on Network Computing and Applications (NCA '04), August 2004.\n[14] Philip K. McKinley, Seyed Masoud Sadjadi, Eric P. Kasten, and Betty H.C. Cheng.\nComposing adaptive software.\nIEEE Computer, 37(7):56-64, 2004.\n[15] Sergio Mena, Andr\u00e9 Schiper, and Pawel T. 
Wojciechowski.\nA step towards a new generation of group communication systems.\nIn Proceedings of the 4th ACM\/IFIP\/USENIX International Middleware Conference (Middleware '03), LNCS 2672, June 2003.\n[16] Hugo Miranda, Alexandre Pinto, and Lu\u00eds Rodrigues.\nAppia, a flexible protocol kernel supporting multiple coordinated channels.\nIn Proceedings of the 21st IEEE International Conference on Distributed Computing Systems (ICDCS '01), April 2001.\n[17] Ohad Rodeh, Kenneth P. Birman, Mark Hayden, Zhen Xiao, and Danny Dolev.\nThe architecture and performance of security protocols in the Ensemble group communication system.\nTechnical Report TR-98-1703, Computer Science Department, Cornell University, September 1998.\n[18] Olivier R\u00fctti, Pawel T. Wojciechowski, and Andr\u00e9 Schiper.\nDynamic update of distributed agreement protocols.\nTR IC-2005-12, School of Computer and Communication Sciences, Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), March 2005.\n[19] Andr\u00e9 Schiper.\nDynamic Group Communication.\nTechnical Report IC-2003-27, School of Computer and Communication Sciences, Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), April 2003.\nTo appear in ACM Distributed Computing.\n[20] P\u00e9ter Urb\u00e1n, Xavier D\u00e9fago, and Andr\u00e9 Schiper.\nNeko: A single environment to simulate and prototype distributed algorithms.\nIn Proceedings of the 15th International Conference on Information Networking (ICOIN '01), February 2001.\n[21] Pawel T. 
Wojciechowski and Olivier R\u00fctti.\nOn correctness of dynamic protocol update.\nIn Proceedings of the 7th IFIP Conference on Formal Methods for Open Object-Based Distributed Systems (FMOODS '05), LNCS 3535.\nSpringer, June 2005.","lvl-3":"Service Interface: A New Abstraction for Implementing and Composing Protocols *\nABSTRACT\nIn this paper we compare two approaches to the design of protocol frameworks--tools for implementing modular network protocols.\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules.\nWe argue that an alternative approach, that is based on service abstraction, is more suitable for expressing modular protocols.\nIt also facilitates advanced features in the design of protocols, such as dynamic update of distributed protocols.\nWe then describe an experimental implementation of a service-based protocol framework in Java.\n1.\nINTRODUCTION\nProtocol frameworks, such as Cactus [5, 2], Appia [1, 16], Ensemble [12, 17], Eva [3], SDL [8] and Neko [6, 20], are programming tools for developing modular network protocols.\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together.\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications.\nMoreover, protocol modules can be plugged into the system dynamically.\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [14] - an important class of applications.\n* Research supported by the Swiss National Science Foundation under grant number 21-67715.02 and Hasler Stiftung under grant number DICS-1825.\nMost protocol frameworks are based on events (all frameworks cited above are based on this abstraction).\nEvents are used for asynchronous communication between different modules on the same machine.\nHowever, the use of events raises some problems [4, 
13].\nFor instance, the composition of modules may require connectors to route events, which introduces a burden for the protocol composer [4].\nProtocol frameworks such as Appia and Eva extend the event-based approach with channels.\nHowever, in our opinion, this solution is not satisfactory since composition of complex protocol stacks becomes more difficult.\nIn this paper, we propose a new approach for building modular protocols that is based on a service abstraction.\nWe compare this new approach with the common, event-based approach.\nWe show that protocol frameworks based on services have several advantages, e.g. they allow for fairly straightforward protocol composition, clear implementation, and better support of dynamic replacement of distributed protocols.\nTo validate our claims, we have implemented SAMOA--an experimental protocol framework that is purely based on the service-based approach to module composition and implementation.\nThe framework allowed us to compare the service- and event-based implementations of an adaptive group communication middleware.\nThe paper is organized as follows.\nSection 2 defines general notions.\nSection 3 presents the main characteristics of event-based frameworks, and features that are distinct for each framework.\nSection 4 describes our new approach, which is based on service abstraction.\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework.\nThe description of our experimental implementation is presented in Section 6.\nFinally, we conclude in Section 7.\n2.\nPROTOCOL FRAMEWORKS\n3.\nEVENT-BASED PROTOCOL FRAMEWORK DESIGN\n4.\nSERVICE-BASED PROTOCOL FRAMEWORK\n5.\nADVANTAGES OF SERVICE-BASED PROTOCOL FRAMEWORK DESIGN\n5.1 Protocol Module Interactions\n5.2 Protocol Module Composition\n5.3 Dynamic Replacement of Protocols\n6.\nIMPLEMENTATION\n7.\nCONCLUSION\nIn the paper, we proposed a new approach to protocol composition that is based on the notion of 
Service Interface, instead of events.\nWe believe that the service-based framework has several advantages over event-based frameworks.\nIt allows us to: (1) model accurately protocol interactions, (2) reduce the risk of errors during the composition phase, and (3) simply implement dynamic protocol updates.\nA prototype implementation allowed us to validate our ideas.","lvl-4":"Service Interface: A New Abstraction for Implementing and Composing Protocols *\nABSTRACT\nIn this paper we compare two approaches to the design of protocol frameworks--tools for implementing modular network protocols.\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules.\nWe argue that an alternative approach, that is based on service abstraction, is more suitable for expressing modular protocols.\nIt also facilitates advanced features in the design of protocols, such as dynamic update of distributed protocols.\nWe then describe an experimental implementation of a service-based protocol framework in Java.\n1.\nINTRODUCTION\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together.\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications.\nMoreover, protocol modules can be plugged in to the system dynamically.\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [14] - an important class of applications.\nMost protocol frameworks are based on events (all frameworks cited above are based on this abstraction).\nEvents are used for asynchronous communication between different modules on the same machine.\nFor instance, the composition of modules may require connectors to route events, which introduces burden for a protocol composer [4].\nProtocol frameworks such as Appia and Eva extend the event-based approach with channels.\nHowever, in our 
opinion, this solution is not satisfactory since composition of complex protocol stacks becomes more difficult.\nIn this paper, we propose a new approach for building modular protocols, that is based on a service abstraction.\nWe compare this new approach with the common, event-based approach.\nWe show that protocol frameworks based on services have several advantages, e.g. allow for a fairly straightforward protocol composition, clear implementation, and better support of dynamic replacement of distributed protocols.\nTo validate our claims, we have implemented SAMOA--an experimental protocol framework that is purely based on the service-based approach to module composition and implementation.\nThe framework allowed us to compare the service - and event-based implementations of an adaptive group communication middleware.\nSection 2 defines general notions.\nSection 3 presents the main characteristics of event-based frameworks, and features that are distinct for each framework.\nSection 4 describes our new approach, which is based on service abstraction.\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework.\nThe description of our experimental implementation is presented in Section 6.\nFinally, we conclude in Section 7.\n7.\nCONCLUSION\nIn the paper, we proposed a new approach to the protocol composition that is based on the notion of Service Interface, instead of events.\nWe believe that the service-based framework has several advantages over event-based frameworks.\nA prototype implementation allowed us to validate our ideas.","lvl-2":"Service Interface: A New Abstraction for Implementing and Composing Protocols *\nABSTRACT\nIn this paper we compare two approaches to the design of protocol frameworks--tools for implementing modular network protocols.\nThe most common approach uses events as the main abstraction for a local interaction between protocol modules.\nWe argue that an alternative 
approach, that is based on service abstraction, is more suitable for expressing modular protocols.\nIt also facilitates advanced features in the design of protocols, such as dynamic update of distributed protocols.\nWe then describe an experimental implementation of a service-based protocol framework in Java.\n1.\nINTRODUCTION\nProtocol frameworks, such as Cactus [5, 2], Appia [1, 16], Ensemble [12, 17], Eva [3], SDL [8] and Neko [6, 20], are programming tools for developing modular network protocols.\nThey allow complex protocols to be implemented by decomposing them into several modules cooperating together.\nThis approach facilitates code reuse and customization of distributed protocols in order to fit the needs of different applications.\nMoreover, protocol modules can be plugged into the system dynamically.\nAll these features of protocol frameworks make them an interesting enabling technology for implementing adaptable systems [14] - an important class of applications.\n* Research supported by the Swiss National Science Foundation under grant number 21-67715.02 and Hasler Stiftung under grant number DICS-1825.\nMost protocol frameworks are based on events (all frameworks cited above are based on this abstraction).\nEvents are used for asynchronous communication between different modules on the same machine.\nHowever, the use of events raises some problems [4, 13].\nFor instance, the composition of modules may require connectors to route events, which introduces a burden for a protocol composer [4].\nProtocol frameworks such as Appia and Eva extend the event-based approach with channels.\nHowever, in our opinion, this solution is not satisfactory since composition of complex protocol stacks becomes more difficult.\nIn this paper, we propose a new approach for building modular protocols, that is based on a service abstraction.\nWe compare this new approach with the common, event-based approach.\nWe show that protocol frameworks based on services have several
advantages, e.g. they allow for a fairly straightforward protocol composition, clear implementation, and better support of dynamic replacement of distributed protocols.\nTo validate our claims, we have implemented SAMOA--an experimental protocol framework that is purely based on the service-based approach to module composition and implementation.\nThe framework allowed us to compare the service- and event-based implementations of an adaptive group communication middleware.\nThe paper is organized as follows.\nSection 2 defines general notions.\nSection 3 presents the main characteristics of event-based frameworks, and features that are distinct for each framework.\nSection 4 describes our new approach, which is based on service abstraction.\nSection 5 discusses the advantages of a service-based protocol framework compared to an event-based protocol framework.\nThe description of our experimental implementation is presented in Section 6.\nFinally, we conclude in Section 7.\n2.\nPROTOCOL FRAMEWORKS\nIn this section, we describe notions that are common to all protocol frameworks.\nProtocols and Protocol Modules.\nA protocol is a distributed algorithm that solves a specific problem in a distributed system, e.g. a TCP protocol solves the reliable channel problem.\nA protocol is implemented as a set of identical protocol modules located on different machines.\nProtocol Stacks.\nA stack is a set of protocol modules (of different protocols) that are located on the same machine.\nNote that, despite its name, a stack is not strictly layered, i.e. a protocol module can interact with all other protocol modules in the same stack, not only with the protocol modules directly above and below.\nIn the remainder of this paper, we use the terms machine and stack interchangeably.\nFigure 1: Example of a protocol stack\nIn Figure 1, we show an example protocol stack.\nWe represent protocol modules by capital letters indexed with a natural number, e.g.
P1, Q1, R1 and S1.\nWe write Pi to denote the protocol module of a protocol P in stack i.\nWe use this notation throughout the paper.\nModules are represented as white boxes.\nArrows show module interactions.\nFor instance, protocol module P1 interacts with the protocol module Q1 and conversely (See Fig. 1).\nProtocol Module Interactions.\nBelow, we define the different kinds of interaction between protocol modules.\n\u2022 Requests are issued by protocol modules.\nA request by a protocol module Pi is an asynchronous call by Pi of another protocol module.\n\u2022 Replies are the results of a request.\nA single request can generate several replies.\nOnly protocol modules belonging to the same protocol as the module that has issued the request are concerned by the corresponding replies.\nFor example, a request by Pi generates replies that concern only protocol modules Pj.\n\u2022 Notifications can be used by a protocol module to inform (possibly many) protocol modules in the same stack about the occurrence of a specific event.\nNotifications may also be the results of a request.\n3.\nEVENT-BASED PROTOCOL FRAMEWORK DESIGN\nMost existing protocol frameworks are event-based.\nExamples are Cactus [5, 2], Appia [1, 16] and Ensemble [12, 17].\nIn this section, we define the notion of an event in protocol frameworks.\nWe also explain how protocol modules are structured in event-based frameworks.\nEvents.\nAn event is a special object for indirect communication between protocol modules in the same stack.\nEvents may transport some information, e.g. a network message or some other data.\nWith events, the communication is indirect, i.e. 
a protocol module that triggers an event is not aware of the module (s) that handle the event.\nEvents enable one-to-many communication within a protocol stack.\nTriggering an event can be done either synchronously or asynchronously.\nIn the former case, the thread that triggers an event e is blocked until all protocol modules that handle e have terminated handling of event e.\nIn the latter case, the thread that triggers the event is not blocked.\nProtocol Modules.\nIn event-based protocol frameworks, a protocol module consists of a set of handlers.\nEach handler is dedicated to handling of a specific event.\nHandlers of the same protocol module may share data.\nHandlers can be dynamically bound to events.\nHandlers can also be unbound dynamically.\nUpon triggering some event e, all handlers bound to e are executed.\nIf no handler is bound, the behavior is usually unspecified.\nFigure 2: Example of an event-based protocol stack\nIn Figure 2, we show an example of an event-based stack.\nEvents are represented by small letters, e.g. e, f,...The fact that a protocol module can trigger an event is represented by an arrow starting from the module.\nA white trapezoid inside a module box represents a handler defined by the protocol module.\nTo mark that some handler is bound to event e, we use an arrow pointing to the handler (the label on the arrow represents the event e).\nFor example, the protocol module P1 triggers event e and handles event f (see Fig. 
2).\nNote that the network is represented as a special protocol module that handles the send event (to send a message to another machine) and triggers the deliver event (upon receipt of a message from another machine).\nSpecific Features.\nSome protocol frameworks have unique features.\nBelow, we present the features that influence composition and implementation of protocol modules.\nIn Cactus [5, 2], the programmer can give a priority number to a handler upon binding it to an event.\nWhen an event is triggered, all handlers are executed following the order of priority.\nA handler h is also able to cancel the execution of an event trigger: all handlers that should be executed after h according to the priority are not executed.\nAppia [1, 16] and Eva [3] introduce the notion of channels.\nChannels allow to build routes of events in protocol stacks.\nEach protocol module has to subscribe to one or many channels.\nAll events are triggered by specifying a channel they belong to.\nWhen a protocol module triggers an event e specifying channel c, all handlers bound to e that are part of a protocol that subscribes to c are executed (in the order prescribed by the definition of channel c).\n4.\nSERVICE-BASED PROTOCOL FRAMEWORK\nIn this section, we describe our new approach for implementing and composing protocols that is based on services.\nWe show in Section 5 the advantages of service-based protocol frameworks over event-based protocol frameworks.\nService Interface.\nIn our service-based framework, protocol modules in the same stack communicate through objects called service interfaces.\nRequests, replies and notifications are all issued to service interfaces.\nProtocol Modules.\nA protocol module is a set of executers, listeners and interceptors.\nExecuters handle requests.\nAn executer can be dynamically bound to a service interface.\nIt can be later unbound.\nA request issued to a service interface si leads to the execution of the executer bound to si.\nIf no executer 
is bound to si, the request is delayed until some executer is bound to si.\nContrary to events, at most one executer at any time can be bound to a service interface on every machine.\nListeners handle replies and notifications.\nA listener can be dynamically bound and unbound to\/from a service interface si.\nA notification issued to a service interface si is handled by all listeners bound to si in the local stack.\nA reply issued to a service interface is handled by one single listener.\nTo ensure that one single listener handles a reply, a module Pi has to identify, each time it issues a request, the listener to handle the possible reply.\nIf the request and the reply occur, respectively, in stack i and in stack j, the service interface si on i communicates to the service interface si' on j the listener that must handle the reply.\nIf the listener that must handle the reply does not exist, the reply is delayed until the listener is created.\nFigure 3: Example of a service-based protocol stack\nIn Figure 3, we show an example of a service-based stack.\nWe denote a service interface by a small letter (e.g. t, u and nt) in a hexagonal box.\nThe fact that a module Pi can generate a request to a service interface si is represented by a dashed black arrow going from Pi to si.\nSimilarly, a dashed white arrow going from module Pi to service interface si represents the fact that Pi can generate a reply or a notification to si.\nWe represent executers with white boxes inside protocol modules and listeners with white boxes with a gray border.\nA connecting line between a service interface si and an executer e (resp.\na listener l) shows that e (resp.\nl) is bound to si.\nIn Figure 3, module Q1 contains an executer bound to service interface t and a listener bound to service interface u. Module Q1 can generate replies and notifications to service interface t and requests to service interface u.
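The routing just described, with a single bound executer per service interface (requests delayed while none is bound) and notifications fanned out to all bound listeners, can be sketched in Java, the paper's implementation language. This is a minimal illustration under stated assumptions, not the actual SAMOA API; the class and method names (ServiceInterface, bindExecuter, notifyListeners) are invented for the example, and the per-request listener routing of replies is omitted for brevity.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

// Minimal sketch of a service interface: requests go to the single bound
// executer (delayed while none is bound); notifications fan out to every
// listener bound in the local stack. Hypothetical names, not the SAMOA API.
class ServiceInterface {
    private Consumer<String> executer;                        // at most one executer
    private final List<Consumer<String>> listeners = new ArrayList<>();
    private final Queue<String> pending = new ArrayDeque<>(); // delayed requests

    void bindExecuter(Consumer<String> e) {
        if (executer != null)
            throw new IllegalStateException("an executer is already bound");
        executer = e;
        while (!pending.isEmpty()) executer.accept(pending.poll()); // flush delayed requests
    }

    void unbindExecuter() { executer = null; }

    void bindListener(Consumer<String> l) { listeners.add(l); }

    // A request is handled by the bound executer, or delayed until one is bound.
    void request(String req) {
        if (executer == null) pending.add(req);
        else executer.accept(req);
    }

    // A notification is handled by all listeners currently bound to this interface.
    void notifyListeners(String note) {
        for (Consumer<String> l : listeners) l.accept(note);
    }
}
```

A reply would additionally carry the identity of the single listener chosen when the corresponding request was issued; that cross-stack routing is left out of this local sketch.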
Note that the service interface nt allows access to the network.\nFigure 4: Execution of protocol interactions with interceptors\nAn interceptor plays a special role.\nSimilarly to executers, interceptors can be dynamically bound or unbound to a service interface.\nThey are activated each time a request, a reply or a notification is issued to the service interface they are bound to.\nThis is illustrated in Figure 4.\nIn the right part of the figure, the interceptor of the protocol module T1 is represented by a rounded box.\nThe interceptor is bound to service interface t.\nThe left part of the figure shows that an interceptor can be seen as an executer plus a listener.\nWhen P1 issues a request req to the service interface t, the executer-interceptor of T1 is executed.\nThen, module T1 may forward a request req' to the service interface t, where we can have req ≠ req'.\nWhen module Q1 issues a reply or a notification, a similar mechanism is used, except that this time the listener-interceptor of T1 is executed.\nNote that a protocol module Ti that has an interceptor bound to a service interface is able to modify requests, replies and notifications.\nUpon requests, if several interceptors are bound to the same service interface, they are executed in the order of binding.\nUpon replies and notifications, the order is reversed.\n5.\nADVANTAGES OF SERVICE-BASED PROTOCOL FRAMEWORK DESIGN\nWe show in this section the advantages of service-based protocol frameworks over event-based protocol frameworks.\nWe structure our discussion in three parts.\nFirstly, we present how protocol interactions are modeled in each of the protocol frameworks.\nThen, we discuss the composition of protocol modules in each of these frameworks.\nFinally, we present the problem of dynamic protocol replacement and the advantages of service interfaces in order to implement it.\nThe discussion is summarized in Table 1.\n5.1 Protocol Module Interactions\nA natural model of protocol
interactions (as presented in Section 2) facilitates the implementation of protocol modules.\nFor each protocol interaction, we show how it is modeled in both frameworks.\nWe also explain that an inadequate model may lead to problems.\nRequests.\nIn service-based frameworks, a request is generated to a service interface.\nEach request is handled by at most one executer, since we allow only one executer to be bound to a service interface at any time.\nOn the other hand, in event-based frameworks, a protocol module emulates a request by triggering an event.\nThere is no guarantee that this event is bound to only one handler, which may lead to programming errors.\nReplies.\nWhen a protocol module generates a reply in a service-based framework, only the correct listener (identified at the time the corresponding request was issued) is executed.\nThis ensures that a request issued by some protocol module Qi leads to replies handled by protocol modules Qj (i.e. protocol modules of the same protocol).\nThis is not the case in event-based frameworks, as we now show.\nConsider protocol module Q1 in Figure 2, which triggers event g to emulate a request.\nModule S1 handles the request.\nWhen a module Si triggers event h to emulate a reply (remember that a reply can occur in many stacks), both modules Qi and Ri will handle the reply (they both contain a handler bound to h).\nThis behavior is not correct: only protocol modules Qi should handle the reply.\nMoreover, as modules Ri are not necessarily implemented to interact with modules Qi, this behavior may lead to errors.\nSolutions to this problem exist.\nHowever, they introduce an unnecessary burden on the protocol programmers and the stack composer.\nFor instance, channels allow routing events to ensure that modules handle only events concerning them.\nHowever, the protocol programmer must take channels into account when implementing protocols.\nMoreover, the composition of complex stacks becomes more difficult due to
the fact that the composer has to create many channels to ensure that modules handle events correctly.\nAdding special protocol modules (named connectors) for routing events is also not satisfactory, since it requires additional work from the composer and introduces overhead.\nNotifications.\nContrary to requests and replies, notifications are well modeled in event-based frameworks.\nThe reason is that notifications correspond to the one-to-many communication scheme provided by events.\nIn service-based frameworks, notifications are also well modeled.\nWhen a module generates a notification to a service interface si, all listeners bound to si are executed.\nNote that in this case, service interfaces provide the same pattern of communication as events.\n5.2 Protocol Module Composition\nReplies (and sometimes notifications) are the results of a request.\nThus, there is a semantic link between them.\nThe composer of protocol modules must preserve this link in order to compose correct stacks.\nWe now explain that service-based frameworks provide a mechanism to preserve this link, while in event-based frameworks, the lack of such a mechanism leads to error-prone composition.\nIn service-based frameworks, requests, replies and notifications are issued to a service interface.\nThus, a service interface introduces a link between these interactions.\nTo compose a correct stack, the composer has to bind a listener to service interface si for each module that issues a request to si.\nThe same must be done for one executer that is part of a module that issues replies or notifications.\nApplying this simple methodology ensures that every request issued to a service interface si eventually results in several replies or notifications issued to the same service interface si.\nIn event-based frameworks, all protocol interactions are issued through different events: there is no explicit link between an event triggered upon requests and an event triggered upon the corresponding
replies.\nThus, the composer of a protocol stack must know the meaning of each event in order to preserve the semantic link between replies (and notifications) and requests.\nMoreover, nothing prevents binding a handler that should handle a request to an event used to issue a reply.\nNote that these problems can be partially solved by typing events and handlers.\nHowever, it does not prevent errors if there are several instances of the same event type.\nNote that protocol composition is clearer in the protocol frameworks that are based on services, rather than on events.\nThe reason is that several events that are used to model different protocol interactions can be modeled by a single service interface.\n5.3 Dynamic Replacement of Protocols\nDynamic replacement of protocols consists in switching on-the-fly between protocols that solve the same problem.\nReplacement of a protocol P by a new protocol newP means that a protocol module Pi is replaced by newPi in every stack i.\nThis replacement is problematic since the local replacements (within stacks) must be synchronized in order to guarantee protocol correctness [21, 18].\nFigure 5: Dynamic replacement of protocol P\nFor the synchronization algorithms to work, module interactions are intercepted in order to detect a time when Pi should be replaced by newPi.\n(Other solutions, e.g. in [11], are more complex.)\nIn Fig.
5, we show how this interception can be implemented in protocol frameworks that are based on services (in the left part of the figure) and events (in the right part of the figure).\nThe two-sided arrows point to the protocol modules Pi and newPi that are switched.\nIt can be seen that the approach that uses the Service Interface mechanism has advantages.\nThe intercepting module Repl-Pi has an interceptor bound to service interface t that intercepts every request handled by module Pi and all replies and notifications issued by Pi.\nThe code of the module Pi can therefore remain unchanged.\nIn event-based frameworks, the solution is to add an intermediate module Repl-Pi that intercepts the requests issued to Pi and also the replies and notifications issued by Pi.\nAlthough this ad-hoc solution may seem similar to the service-based approach, there is an important difference.\nThe event-based solution requires slightly modifying the module Pi since, instead of handling event g and triggering event h, Pi must now handle different events g' and h' (see Fig. 5).\n6.\nIMPLEMENTATION\nWe have implemented an experimental service-based protocol framework (called SAMOA) [7].\nOur implementation is light-weight: it consists of approximately 1200 lines of code in Java 1.5 (with generics).\nIn this section, we describe the two main classes of our implementation: Service (encoding the Service Interface) and\nTable 1: Service-based vs.
event-based\nProtocol (encoding protocol modules).\nFinally, we present an example protocol stack that we have implemented to validate the service-based approach.\nThe Service Class.\nA Service object is characterized by the arguments of requests and the arguments of responses.\nA response is either a reply or a notification.\nA special argument, called message, determines the kind of interactions modeled by the response.\nA message represents a piece of information sent over the network.\nWhen a protocol module issues a request, it can give a message as an argument.\nThe message can specify the listener that must handle the reply.\nWhen a protocol module issues a response to a service interface, a reply is issued if one of the arguments of the response is a message specifying a listener.\nOtherwise, a notification is issued.\nExecuters, listeners and interceptors are encoded as inner classes of the Service class.\nThis provides type-safe protocol interactions.\nFor instance, executers can only be bound to the Service object they belong to.\nThus, the parameters passed to requests (that are verified statically) always correspond to the parameters accepted by the corresponding executers.\nThe type of a Service object is determined by the type of the arguments of requests and responses.\nA Service object t is compatible with another Service object s if the type of the arguments of requests (and responses) of t is a subtype of the arguments of requests (and responses) of s.\nIn practice, if a protocol module Pi can issue a request to a protocol UDP, then it may also issue a request to TCP (compatible with UDP) due to the subtyping relation on parameters of communicating modules.\nThe Protocol Class.\nA Protocol object consists of three sets of components, one set for each component type (a listener, an executer, and an interceptor).\nProtocol objects are characterized by names to retrieve them easily.\nMoreover, we have added some features to bind and unbind
all executers or interceptors to\/from the corresponding Service objects.\nProtocol objects can be loaded into a stack dynamically.\nAll these features made it easy to implement dynamic replacement of network protocols.\nProtocol Stack Implementation.\nTo validate our ideas, we have developed an Adaptive Group Communication (AGC) middleware, adopting both the service- and the event-based approaches.\nFig. 6 shows the corresponding stacks of the AGC middleware.\nBoth stacks allow the Consensus and Atomic Broadcast protocols to be dynamically updated.\nThe architecture of our middleware, shown in Fig. 6, builds on the group communication stack described in [15].\nThe UDP and RP2P modules provide, respectively, unreliable and reliable point-to-point transport.\nThe FD module implements a failure detector; we assume that it ensures the properties of the ◇S failure detector [9].\nFigure 6: Adaptive Group Communication Middleware: service-based (left) vs. event-based (right)\nThe CT module provides a distributed consensus service using the Chandra-Toueg algorithm [10].\nThe ABc. module implements atomic broadcast--a group communication primitive that delivers messages to all processes in the same order.\nThe GM module provides a group membership service that maintains consistent membership data among group members (see [19] for details).\nThe Repl ABc. and the Repl CT modules implement the replacement algorithms [18] for, respectively, the ABc. and the CT protocol modules.\nNote that each arrow in the event-based architecture represents an event.\nWe do not name events in the figure for readability.\nThe left stack in Figure 6 shows the implementation of AGC with our service-based framework.\nThe right stack shows the same implementation with an event-based framework.\nPerformance Evaluation.\nTo evaluate the overhead of service interfaces, we compared performance of the service- and event-based implementations of the AGC middleware.\nThe latter implementation of
AGC uses the Cactus protocol framework [5, 2].\nIn our experiment, we compared the average latency of Atomic Broadcast (ABcast), which is defined as follows.\nConsider a message m sent using ABcast.\nWe denote by ti(m) the time between the moment of sending m and the moment of delivering m on a machine (stack) i.\nWe define the average latency of m as the average of ti(m) for all machines (stacks) i within a group of stacks.\nPerformance tests have been made using a cluster of PCs running Red Hat Linux 7.2, where each PC has a Pentium III 766 MHz processor and 128MB of RAM.\nAll PCs are interconnected by a 100 Base-TX duplex Ethernet hub.\nOur experiment has involved 7 machines (stacks) that ABcast messages of 4Mb under a constant load, where a load is a number of messages per second.\nIn Figure 7, we show the results of our experiment for different loads.\nLatencies are shown on the vertical axis, while message loads are shown on the horizontal axis.\nThe solid line shows the results obtained with our service-based framework.\nThe dashed line shows the results obtained with the Cactus framework.\nFigure 7: Comparison between our service-based framework and Cactus\nThe overhead of the service-based framework is approximately 10%.\nThis can be explained as follows.\nFirstly, the service-based framework provides a higher-level abstraction, which has a small cost.\nSecondly, the AGC middleware was initially implemented and optimized for the event-based Cactus framework.\nHowever, it is possible to optimize the AGC middleware for the service-based framework.\n7.\nCONCLUSION\nIn the paper, we proposed a new approach to the protocol composition that is based on the notion of Service Interface, instead of events.\nWe believe that the service-based framework has several advantages over event-based frameworks.\nIt allows us to: (1) model accurately protocol interactions, (2) reduce the risk of errors during the composition phase, and (3) simply implement
dynamic protocol updates.\nA prototype implementation allowed us to validate our ideas.","keyphrases":["servic interfac","protocol framework","modul","modular","network","distribut algorithm","distribut system","commun","event-base framework","stack","request","repli","dynam protocol replac"],"prmu":["P","P","P","P","P","M","M","U","M","U","U","U","M"]} {"id":"J-15","title":"Generalized Value Decomposition and Structured Multiattribute Auctions","abstract":"Multiattribute auction mechanisms generally either remain agnostic about traders' preferences, or presume highly restrictive forms, such as full additivity. Real preferences often exhibit dependencies among attributes, yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction. We develop such a structure using the theory of measurable value functions, a cardinal utility representation based on an underlying order over preference differences. A set of local conditional independence relations over such differences supports a generalized additive preference representation, which decomposes utility across overlapping clusters of related attributes. We introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations. When traders' preferences are consistent with the auction's generalized additive structure, the mechanism produces approximately optimal allocations, at approximate VCG prices.","lvl-1":"Generalized Value Decomposition and Structured Multiattribute Auctions Yagil Engel and Michael P. 
Wellman University of Michigan, Computer Science & Engineering 2260 Hayward St, Ann Arbor, MI 48109-2121, USA {yagil,wellman}@umich.edu ABSTRACT Multiattribute auction mechanisms generally either remain agnostic about traders' preferences, or presume highly restrictive forms, such as full additivity.\nReal preferences often exhibit dependencies among attributes, yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction.\nWe develop such a structure using the theory of measurable value functions, a cardinal utility representation based on an underlying order over preference differences.\nA set of local conditional independence relations over such differences supports a generalized additive preference representation, which decomposes utility across overlapping clusters of related attributes.\nWe introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations.\nWhen traders' preferences are consistent with the auction's generalized additive structure, the mechanism produces approximately optimal allocations, at approximate VCG prices.\nCategories and Subject Descriptors: J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms: Algorithms, Economics 1.\nINTRODUCTION Multiattribute trading mechanisms extend traditional, price-only mechanisms by facilitating the negotiation over a set of predefined attributes representing various non-price aspects of the deal.\nRather than negotiating over a fully defined good or service, a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified.\nFor example, a procurement department of a company may use a multiattribute auction to select a supplier of hard drives.\nSupplier offers may be evaluated not only over the price they offer, but also over various qualitative attributes
such as volume, RPM, access time, latency, transfer rate, and so on. In addition, suppliers may offer different contract conditions such as warranty, delivery time, and service.
In order to account for traders' preferences, the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations. Constructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes; therefore practical multiattribute auctions must either accommodate partial specifications, or support compact expression of preferences assuming some simplified form. By far the most popular multiattribute form to adopt is the simplest: an additive representation, where overall value is a linear combination of values associated with each attribute. For example, several recent proposals for iterative multiattribute auctions [2, 3, 8, 19] require additive preference representations. Such additivity reduces the complexity of preference specification exponentially (compared to the general discrete case), but precludes expression of any interdependencies among the attributes. In practice, however, interdependencies among natural attributes are quite common. For example, the buyer may exhibit complementary preferences for size and access time (since the performance effect is more salient if much data is involved), or may view a strong warranty as a good substitute for high reliability ratings. Similarly, the seller's production characteristics (for example, achieving low access time is harder for larger hard drives) can easily violate additivity. In such cases an additive value function may not be able to provide even a reasonable approximation of real preferences. On the other hand, fully general models are intractable, and it is reasonable to expect multiattribute preferences to exhibit some structure. Our goal, therefore, is to identify the subtler yet more widely applicable structured
representations, and exploit these properties of preferences in trading mechanisms. We propose an iterative auction mechanism based on just such a flexible preference structure.
Our approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences, due to Parkes and Kalagnanam (PK) [19]. PK propose two types of iterative auctions: the first (NLD) makes no assumptions about traders' preferences, and lets sellers bid on the full multidimensional attribute space. Because NLD maintains an exponential price structure, it is suitable only for small domains. The other auction (AD) assumes additive buyer valuation and seller cost functions. It collects sell bids per attribute level and for a single discount term. The price of a configuration is defined as the sum of the prices of the chosen attribute levels, minus the discount. The auction we propose also supports compact price spaces, albeit for levels of clusters of attributes rather than singletons. We employ a preference decomposition based on generalized additive independence (GAI), a model flexible enough to accommodate interdependencies to the exact degree of accuracy desired, yet providing a compact functional form to the extent that interdependence can be limited. Given its roots in multiattribute utility theory [13], the GAI condition is defined with respect to the expected utility function. To apply it for modeling values of certain outcomes, therefore, requires a reinterpretation for preference under certainty. To this end, we exploit the fact that auction outcomes are associated with continuous prices, which provide a natural scale for assessing magnitude of preference. We first lay out a representation framework for preferences that captures, in addition to simple orderings among attribute configuration values, the difference in willingness to pay (wtp) for each. That is, we should be able not only to compare outcomes, but also to decide whether
the difference in quality is worth a given difference in price. Next, we build a direct, formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function. After laying out this infrastructure, we employ this representation tool to develop a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format. We then study the auction's allocational, computational, and practical properties.
In Section 2 we present essential background on our representation framework, the measurable value function (MVF). Section 3 develops new multiattribute structures for MVF, supporting generalized additive decompositions. Next, we show the applicability of the theoretical framework to preferences in trading. The rest of the paper is devoted to the proposed auction mechanism.

2. MULTIATTRIBUTE PREFERENCES
As mentioned, most tools facilitating expression of multiattribute value for trading applications assume that agents' preferences can be represented in an additive form. By way of background, we start by introducing the formal prerequisites justifying the additive representation, as provided by multiattribute utility theory. We then present the generalized additive form, and develop the formal underpinnings for measurable value needed to extend this model to the case of choice under certainty.

2.1 Preferential Independence
Let Θ denote the space of possible outcomes, with a preference relation ⪰ (weak total order) over Θ. Let A = {a0, ...
, am} be a set of attributes describing Θ. Capital letters denote subsets of attributes, small letters (with or without numeric subscripts) denote specific attributes, and X̄ denotes the complement of X with respect to A. We indicate specific variable assignments with prime signs or superscripts. To represent an instantiation of subsets X, Y at the same time, we use a sequence of instantiation symbols, as in X′Y″.

DEFINITION 1. A set of attributes Y ⊂ A is preferentially independent (PI) of its complement Z = A \ Y if the conditional preference order over Y given a fixed level Z^0 of Z is the same regardless of the choice of Z^0. In other words, the preference order over the projection of A on the attributes in Y is the same for any instantiation of the attributes in Z.

DEFINITION 2. A = {a_1, ... , a_m} is mutually preferentially independent (MPI) if any subset of A is preferentially independent of its complement.

The preference relation when no uncertainty is modeled is usually represented by a value function v [17]. The following fundamental result greatly simplifies the value function representation.

THEOREM 1 ([9]). A preference order over a set of attributes A has an additive value function representation v(a_1, ...
, a_m) = Σ_{i=1}^{m} v_i(a_i) iff A is mutually preferentially independent.

Essentially, the additive forms used in trading mechanisms assume mutual preferential independence over the full set of attributes, including the money attribute. Intuitively, that means that willingness to pay for the value of an attribute or attributes cannot be affected by the value of other attributes.
A cardinal value function representing an ordering over certain outcomes need not in general coincide with the cardinal utility function that represents preference over lotteries, or expected utility (EU). Nevertheless, EU functions may possess structural properties analogous to those for value functions, such as additive decomposition. Since the present work does not involve decisions under uncertainty, we do not provide a full exposition of the EU concept. However, we do make frequent reference to the following additive independence relations.

DEFINITION 3. Let X, Y, Z be a partition of the set of attributes A. X and Y are conditionally additive independent given Z, denoted CAI(X, Y | Z), if preferences over lotteries on A depend only on their marginal conditional probability distributions over X and Y.

DEFINITION 4. Let I_1, ... , I_g ⊆ A such that ∪_{i=1}^{g} I_i = A. Then I_1, ... , I_g are called generalized additive independent (GAI) if preferences over lotteries on A depend only on their marginal distributions over I_1, ... , I_g.

An (expected) utility function u(·) can be decomposed additively according to its (possibly overlapping) GAI sub-configurations.

THEOREM 2 ([13]). Let I_1, ... , I_g be GAI. Then there exist functions f_1, ... , f_g such that u(a_1, ...
, a_m) = Σ_{r=1}^{g} f_r(I_r).  (1)

What is now known as the GAI condition was originally introduced by Fishburn [13] for EU, and was named GAI and brought to the attention of AI researchers by Bacchus and Grove [1]. Graphical models and elicitation procedures for GAI decomposable utility were developed for EU [4, 14, 6], for a cardinal representation of the ordinal value function [15], and for ordinal preference relations corresponding to a TCP-net structure by Brafman et al. [5]. Apart from the work on GAI in the context of preference handling discussed above, GAI has recently been used in the context of mechanism design by Hyafil and Boutilier [16], as an aid in direct revelation mechanisms.
As shown by Bacchus and Grove [1], GAI structure can be identified based on a set of CAI conditions, which are much easier to detect and verify. In general, utility functions may exhibit GAI structure not based on CAI. However, to date all proposals for reasoning about and eliciting utility in GAI form take advantage of the GAI structure primarily to the extent that it represents a collection of CAI conditions. For example, GAI trees [14] employ triangulation of the CAI map, and Braziunas and Boutilier's [6] conditional set C_j of a set I_j corresponds to the CAI separating set of I_j. Since the CAI condition is also defined based on preferences over lotteries, we cannot apply Bacchus and Grove's result without first establishing an alternative framework based on priced outcomes. We develop such a framework using the theory of measurable value functions, ultimately producing a GAI decomposition (Eq. 1) of the wtp function. Readers interested primarily in the multiattribute auction, and willing to grant the well-foundedness of the preference structure, may skip down to Section 5.

2.2 Measurable Value Functions
Trading decisions represent a special case of decisions under certainty, where choices involve multiattribute outcomes and corresponding monetary
payments. In such problems, the key decision often hinges on relative valuation of price differences compared to differences between alternative configurations of goods and services. Theoretically, price can be treated as just another attribute; however, such an approach fails to exploit the special character of the money dimension, and can add significantly to complexity due to the inherent continuity and typically wide range of possible monetary outcome values.
We build on the fundamental work of Dyer and Sarin [10, 11] on measurable value functions (MVFs). As we show below, wtp functions in a quasi-linear setting can be interpreted as MVFs. However, we first present the MVF framework in a more generic way, where the measurement is not necessarily monetary. We present the essential definitions, and refer to Dyer and Sarin for more detailed background and axiomatic treatment.
The key concept is that of preference difference. Let θ^1, θ^2, φ^1, φ^2 ∈ Θ such that θ^1 ⪯ θ^2 and φ^1 ⪯ φ^2. [θ^2, θ^1] denotes the preference difference between θ^2 and θ^1, interpreted as the strength, or degree, to which θ^2 is preferred over θ^1. Let ⪯* denote a preference order over Θ × Θ. We interpret the statement [θ^2, θ^1] ⪯* [φ^2, φ^1] as "the preference of φ^2 over φ^1 is at least as strong as the preference of θ^2 over θ^1". We use the symbol ∼* to represent equality of preference differences.

DEFINITION 5. u : D → R is a measurable value function (MVF) wrt ⪯* if for any θ^1, θ^2, φ^1, φ^2 ∈ D,
[θ^2, θ^1] ⪯* [φ^2, φ^1] ⇔ u(θ^2) − u(θ^1) ≤ u(φ^2) − u(φ^1).

Note that an MVF can also be used as a value function representing ⪯, since [θ′, θ] ⪯* [θ″, θ] iff θ′ ⪯ θ″.

DEFINITION 6 ([11]). Attribute set X ⊂ A is
called difference independent of X̄ if for any two assignments X^1 X̄′ ⪰ X^2 X̄′,
[X^1 X̄′, X^2 X̄′] ∼* [X^1 X̄″, X^2 X̄″] for any assignment X̄″.
Or, in words, the preference differences between assignments to X given a fixed level of X̄ do not depend on the particular level chosen for X̄. As with additive independence for EU, this condition is stronger than preferential independence of X. Also analogously to EU, mutual preferential independence combined with other conditions leads to additive decomposition of the MVF. Moreover, Dyer and Sarin [11] have defined analogs of utility independence [17] for MVF, and worked out a parallel set of decomposition results.

3. ADVANCED MVF STRUCTURES
3.1 Conditional Difference Independence
Our first step is to generalize Definition 6 to a conditional version.

DEFINITION 7. Let X, Y, Z be a partition of the set of attributes A. X is conditionally difference independent of Y given Z, denoted CDI(X, Y | Z), if for all instantiations Ẑ, X^1, X^2, Y^1, Y^2,
[X^1 Y^1 Ẑ, X^2 Y^1 Ẑ] ∼* [X^1 Y^2 Ẑ, X^2 Y^2 Ẑ].

Since the conditional set is always the complement, we sometimes leave it implicit, using the abbreviated notation CDI(X, Y). CDI leads to a decomposition similar to that obtained from CAI [17].

LEMMA 3. Let u(A) be an MVF representing preference differences. Then CDI(X, Y | Z) iff
u(A) = u(X^0, Y, Z) + u(X, Y^0, Z) − u(X^0, Y^0, Z).

To complete the analogy with CAI, we generalize Lemma 3 as follows.

PROPOSITION 4. CDI(X, Y | Z) iff there exist functions ψ_1(X, Z) and ψ_2(Y, Z) such that
u(X, Y, Z) = ψ_1(X, Z) + ψ_2(Y, Z).  (2)

An immediate result of Proposition 4 is that CDI is a symmetric relation. The conditional independence condition is much more applicable than the unconditional one. For example, if attributes a ∈ X and b ∉ X are complements or substitutes, X cannot be difference independent of
X̄. However, X \ {a} may still be CDI of X̄ given a.

3.2 GAI Structure for MVF
A single CDI condition decomposes the value function into two parts. We seek a finer-grain global decomposition of the utility function, similar to that obtained from mutual preferential independence. For this purpose we are now ready to employ the results of Bacchus and Grove [1], who establish that the CAI condition has a perfect map [20]; that is, there exists a graph whose nodes correspond to the set A, and whose node separation reflects exactly the complete set of CAI conditions on A. Moreover, they show that the utility function decomposes over the set of maximal cliques of the perfect map. Their proofs can be easily adapted to CDI, since they rely only on the decomposition property of CAI that is also implied by CDI according to Proposition 4.

THEOREM 5. Let G = (A, E) be a perfect map for the CDI conditions on A. Then
u(A) = Σ_{r=1}^{g} f_r(I_r),  (3)
where I_1, ... , I_g are (overlapping) subsets of A, each corresponding to a maximal clique of G.

Given Theorem 5, we can now identify an MVF GAI structure from a collection of CDI conditions. The CDI conditions, in turn, are particularly intuitive to detect when the preference differences carry a direct interpretation, as is the case with the monetary differences discussed below. Moreover, the assumption or detection of CDI conditions can be performed incrementally, until the MVF is decomposed to a reasonable dimension. This is in contrast with the fully additive decomposition of MVF, which requires mutual preferential independence [11].
Theorem 5 defines a decomposition structure, but to represent the actual MVF we need to specify the functions over the cliques. The next theorem establishes that the functional constituents of MVF are the same as those for GAI decompositions as defined by Fishburn [13] for EU. We adopt the following conventional notation. Let (a_1^0, ...
, a_m^0) be a predefined vector called the reference outcome. For any I ⊆ A, the function u([I]) stands for the projection of u(A) to I, with the rest of the attributes fixed at their reference levels.

THEOREM 6. Let G = (A, E) be a perfect map for the CDI conditions on A, and {I_1, ... , I_g} a set of maximal cliques as defined in Theorem 5. Then the functional decomposition from that theorem can be defined as f_1 = u([I_1]), and for r = 2, ... , g,
f_r = u([I_r]) + Σ_{k=1}^{r−1} (−1)^k Σ_{1≤i_1<···<i_k<r} u([I_r ∩ I_{i_1} ∩ ··· ∩ I_{i_k}]).  (4)

[...] p^1(θ_r) > f_{b,r}(θ_r) for all θ_r. The discount Δ is initialized to zero. The auction has the dynamics of a descending clock auction: at each round t, bids are collected at current prices, and then prices are reduced according to the price rules. A seller is considered active in a round if she submits at least one full bid. In round t > 1, only sellers who were active in round t − 1 are allowed to participate, and the auction terminates when no more than a single seller is active. We denote the set of sub-bids submitted by s_i by B_i^t, and the corresponding set of full bids is B̄_i^t = {θ = (θ_1, ...
, θ_g) ∈ Θ | ∀r. θ_r ∈ B_i^t}.
In our example, a seller could submit sub-bids on a set of sub-configurations such as a^1 b^1 and b^1 c^1, which combine to a full bid on a^1 b^1 c^1.
The auction proceeds in two phases. In the first phase (A), at each round t the auction computes a set of preferred sub-configurations M^t. Section 5.4 shows how to define M^t to ensure convergence, and Section 5.5 shows how to compute it efficiently. In phase A, the auction adjusts prices after each round, reducing the price of every sub-configuration that has received a bid but is not in the preferred set. Let ε be the prespecified price increment parameter. Specifically, the phase A price change rule is applied to all θ_r ∈ ∪_{i=1}^{n} B_i^t \ M^t:
p^{t+1}(θ_r) ← max(p^t(θ_r) − ε/g, f_{b,r}(θ_r)).  [A]
The maximum on the right-hand side ensures that prices do not get reduced below the buyer's valuation in phase A.
Let M̄^t denote the set of configurations that are consistent covers in M^t:
M̄^t = {θ = (θ_1, ... , θ_g) ∈ Θ | ∀r. θ_r ∈ M^t}.
The auction switches to phase B when all active sellers have at least one full bid in the buyer's preferred set: ∀i.
B̄_i^t = ∅ ∨ B̄_i^t ∩ M̄^t ≠ ∅.  [SWITCH]
Let T be the round at which [SWITCH] becomes true. At this point, the auction selects the buyer-optimal full bid η_i for each seller s_i:
η_i = arg max_{θ ∈ B̄_i^T} (u_b(θ) − p^T(θ)).  (6)
In phase B, s_i may bid only on η_i. The prices of sub-configurations are fixed at p^T(·) during this phase. The only adjustment in phase B is to Δ, which is increased in every round by ε. The auction terminates when at most one seller (if exactly one, designate it s_î) is active. There are four distinct cases:
1. All sellers drop out in phase A (i.e., before rule [SWITCH] holds). The auction returns with no allocation.
2. All active sellers drop out in the same round in phase B. The auction selects the best seller (s_î) from the preceding round, and applies the applicable case below.
3. The auction terminates in phase B with a final price above the buyer's valuation, p^T(η_î) − Δ > u_b(η_î). The auction offers the winner s_î an opportunity to supply η_î at price u_b(η_î).
4. The auction terminates in phase B with a final price p^T(η_î) − Δ ≤ u_b(η_î). This is the ideal situation, where the auction allocates the chosen configuration and seller at the resulting price.
Footnote 6: The discount term could be replaced with a uniform price reduction across all sub-configurations.
The overall auction is described by high-level pseudocode in Algorithm 1. As explained in Section 5.4, the role of phase A is to guide the traders to their efficient configurations. Phase B is a one-dimensional competition over the surplus that remaining seller candidates can provide to the buyer. In Section 5.5 we discuss the computational tasks associated with the auction, and Section 5.6 provides a detailed example.

Algorithm 1 GAI-based multiattribute auction
  collect a reported valuation v̂ from the
buyer
  set high initial prices p^1(θ_r) on each level θ_r, and set Δ = 0
  while not [SWITCH] do
    collect sub-bids from sellers
    compute M^t
    apply price change by [A]
  end while
  compute η_i
  while more than one active seller do
    increase Δ by ε
    collect bids on (η_i, Δ) from sellers
  end while
  implement allocation and payment to winning seller

5.4 Economic Analysis
When the optimal solution to MAP (5) provides negative welfare and sellers do not bid below their cost, the auction terminates in phase A, no trade occurs, and the auction is trivially efficient. We therefore assume throughout the analysis that the optimal (seller, configuration) pair provides non-negative welfare.
The buyer's profit from a configuration θ is defined as π_b(θ) = u_b(θ) − p(θ), and similarly π_i(θ) = p(θ) − c_i(θ) is the profit of s_i. In addition, for μ ⊆ {1, ... , g} we denote the corresponding set of sub-configurations by θ_μ, and define the profit from a configuration θ over the subset μ as
π_b(θ_μ) = Σ_{r∈μ} (f_{b,r}(θ_r) − p(θ_r)).
π_i(θ_μ) is defined similarly for s_i. Crucially, for any μ and its complement μ̄ and for any trader τ, π_τ(θ) = π_τ(θ_μ) + π_τ(θ_μ̄). The function σ_i : Θ → R represents the welfare, or surplus, function u_b(·) − c_i(·). For any price system p, σ_i(θ) = π_b(θ) + π_i(θ).
Footnote 7: We drop the t superscript in generic statements involving price and profit functions, understanding that all usage is with respect to the (currently) applicable prices.
Since we do not assume anything about the buyer's strategy, the analysis refers to profit and surplus with respect to the face value of the buyer's report. The functions π_i and σ_i refer to the true cost functions of s_i.

DEFINITION
10. A seller is called a straightforward bidder (SB) if at each round t she bids on B_i^t as follows: if max_{θ∈Θ} π_i^t(θ) < 0, then B_i^t = ∅. Otherwise, let
Ω_i^t ⊆ arg max_{θ∈Θ} π_i^t(θ),
B_i^t = {θ_r | θ ∈ Ω_i^t, r ∈ {1, ... , g}}.

Intuitively, an SB seller follows a myopic best response (MBR) strategy, meaning she bids myopically rather than strategically, optimizing her profit with respect to current prices. To calculate B_i^t, sellers need to optimize their current profit function, as discussed in Section 4.2. The following lemma bridges the apparent gap between the compact pricing and bid structure and the global optimization performed by the traders.

LEMMA 8. Let Ψ be a set of configurations, all maximizing profit for a trader τ (seller or buyer) at the relevant prices. Let Φ = {θ_r | θ ∈ Ψ, r ∈ {1, ... , g}}. Then any consistent cover in Φ is also a profit-maximizing configuration for τ.

Proof sketch (full proof in the online appendix): A source of an element θ_r is a configuration θ̃ ∈ Ψ from which it originated (meaning, θ̃_r = θ_r). Starting from the supposedly suboptimal cover θ^1, we build a series of covers θ^1, ...
, θ^L. At each θ^j we flip the values of a set of sub-configurations μ_j corresponding to a subtree, replacing them with the sub-configurations of the configuration θ̂^j ∈ Ψ which is the source of the parent γ_j of μ_j. That ensures that all elements in μ_j ∪ {γ_j} have a mutual source θ̂^j. We show that all θ^j are consistent and that they must all be suboptimal as well; since all elements of θ^L have a mutual source, meaning θ^L = θ̂^L ∈ Ψ, this contradicts the optimality of Ψ.

COROLLARY 9. For an SB seller s_i, ∀t, ∀θ′ ∈ B̄_i^t, π_i^t(θ′) = max_{θ∈Θ} π_i^t(θ).

Next we consider combinations of configurations that are only within some δ of optimality.

LEMMA 10. Let Ψ be a set of configurations, all within δ of maximizing profit for a trader τ at the prices, and Φ defined as in Lemma 8. Then any consistent cover in Φ is within δg of maximizing utility for τ. This bound is tight; that is, for any GAI tree and a non-trivial domain, we can construct a set Ψ as above in which there exists a consistent cover whose utility is exactly δg below the maximal.

Next we formally define M^t. For connected GAI trees, M^t is the set of sub-configurations that are part of a configuration within ε of optimal. When the GAI tree is in fact a forest, we apportion the error proportionally across the disconnected trees. Let G be comprised of trees G_1, ... , G_h. We use θ_j to denote the projection of a configuration θ on the tree G_j, and g_j denotes the number of GAI elements in G_j.
M_j^t = {θ_r | π_b^t(θ_j) ≥ max_{θ_j′ ∈ Θ_j} π_b^t(θ_j′) − ε g_j / g, r ∈ G_j}.
Then define M^t = ∪_{j=1}^{h} M_j^t.
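For small domains, the preferred set M^t just defined can be computed by plain enumeration. The following is a minimal illustrative sketch (function and variable names are ours, not the paper's): it scores each configuration of one GAI tree by buyer profit, and keeps every sub-configuration that appears in a configuration within the tree's allowance ε·g_j/g of the optimum. Proposition 20 below concerns computing this set without enumerating the full domain; this sketch ignores that efficiency concern.

```python
import itertools

def preferred_set(f_b, prices, domains, cliques, eps, g):
    """Enumerate one GAI tree; return sub-configurations of near-optimal configs."""
    gj = len(cliques)  # number of GAI elements in this tree
    scored = []
    for values in itertools.product(*domains.values()):
        theta = dict(zip(domains, values))
        subs = [tuple(theta[a] for a in cl) for cl in cliques]
        profit = sum(f_b[s] - prices[s] for s in subs)  # buyer profit pi_b
        scored.append((profit, subs))
    best = max(p for p, _ in scored)
    return {s for p, subs in scored if p >= best - eps * gj / g for s in subs}

# Toy tree with a single clique {a, b} out of g = 2 elements overall.
domains = {"a": ["a1", "a2"], "b": ["b1", "b2"]}
cliques = [("a", "b")]
f_b = {("a1", "b1"): 65, ("a2", "b1"): 50, ("a1", "b2"): 55, ("a2", "b2"): 70}
prices = {("a1", "b1"): 68, ("a2", "b1"): 60, ("a1", "b2"): 60, ("a2", "b2"): 70}

print(preferred_set(f_b, prices, domains, cliques, eps=4, g=2))  # {('a2', 'b2')}
```

With these prices the buyer profits are −3, −10, −5, 0, so only a2b2 survives the threshold 0 − 4·1/2 = −2; a larger ε would admit more sub-configurations.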
Let e_j = g_j − 1 denote the number of edges in G_j. We define the connectivity parameter e = max_{j=1,...,h} e_j. As shown below, this connectivity parameter is an important factor in the performance of the auction.

COROLLARY 11. ∀θ′ ∈ M̄^t, π_b^t(θ′) ≥ max_{θ∈Θ} π_b^t(θ) − (e + 1)ε.

In the fully additive case this loss of efficiency reduces to ε. On the other extreme, if the GAI network is connected, then e + 1 = g. We also note that without assuming any preference structure, meaning that the CDI map is fully connected, g = 1 and the efficiency loss is again ε.
Lemmas 12 through 15 show that through the price system, the choice of buyer-preferred configurations, and the price change rules, phase A leads the buyer and each of the sellers to their mutually efficient configuration.

LEMMA 12. max_{θ∈Θ} π_b^t(θ) does not change in any round t of phase A.

PROOF. We prove the lemma per tree G_j. The optimal values for disconnected components are independent of each other, hence if the maximal profit for each component does not change, the combined maximal profit does not change as well. If the price of θ_j′ was reduced during phase A, that is, p^{t+1}(θ_j′) = p^t(θ_j′) − δ, it must be the case that some w ≤ g_j sub-configurations of θ_j′ are not in M_j^t, and δ = wε/g. The definition of M_j^t ensures
π_b^t(θ_j′) < max_{θ_j ∈ Θ_j} π_b^t(θ_j) − ε g_j / g.
Therefore,
π_b^{t+1}(θ_j′) = π_b^t(θ_j′) + δ = π_b^t(θ_j′) + wε/g ≤ max_{θ_j ∈ Θ_j} π_b^t(θ_j).
This is true for any configuration whose profit improves, therefore the maximal buyer profit does not change during phase A.

LEMMA 13. The price of at least one sub-configuration must be reduced at every round in phase A.
PROOF. In each round t < T of phase A there exists an active seller i for whom B̄_i^t ∩ M̄^t = ∅. However, to be active in round t, B̄_i^t ≠ ∅. Let θ̂ ∈ B̄_i^t. If ∀r. θ̂_r ∈ M^t, then θ̂ ∈ M̄^t by definition of M̄^t. Therefore there must be some θ̂_r ∉ M^t. We need to prove that for at least one of these sub-configurations, π_b^t(θ̂_r) < 0, to ensure activation of rule [A]. Assume for contradiction that for any θ̂_r ∉ M^t, π_b^t(θ̂_r) ≥ 0. For simplicity we assume that for any θ_r, π_b^1(θ_r) is some multiple of ε/g (that can easily be done); this ensures that π_b^t(θ̂_r) = 0, because once the profit hits 0 it cannot increase by rule [A]. If θ̂_r ∉ M^t for all r = 1, ... , g, then π_b^t(θ̂) = 0. This contradicts Lemma 12, since we set high initial prices. Therefore some of the sub-configurations of θ̂ are in M^t; WLOG assume they are θ̂_1, ... , θ̂_k. To be in M^t, these k sub-configurations must have been part of some preferred full configuration, meaning there exists θ′ ∈ M̄^t such that θ′ = (θ̂_1, ... , θ̂_k, θ_{k+1}′, ... , θ_g′). Since θ̂ ∉ M̄^t, it must be the case that π_b^t(θ̂) < π_b^t(θ′). Therefore
π_b^t(θ_{k+1}′, ... , θ_g′) > π_b^t(θ̂_{k+1}, ... , θ̂_g) = 0.
Hence for at least one r ∈ {k + 1, ... , g}, π_b^t(θ_r′) > 0, contradicting rule [A].

LEMMA 14. When the solution to MAP provides positive surplus, and at least the best seller is SB, the auction must reach phase B.

PROOF. By Lemma 13, prices must go down in every round of phase A.
Rule [A] sets a lower bound on all prices, therefore the auction either terminates in phase A or must reach condition [SWITCH]. We set initial prices high such that max_{θ∈Θ} π_b^1(θ) < 0, and by Lemma 12, max_{θ∈Θ} π_b^t(θ) < 0 during phase A. We assume that the efficient allocation (θ*, i*) provides positive welfare, that is, σ_{i*}(θ*) = π_b^t(θ*) + π_{i*}^t(θ*) > 0. s_{i*} is SB, therefore she will leave the auction only when π_{i*}^t(θ*) < 0. This can happen only when π_b^t(θ*) > 0, therefore s_{i*} does not drop out in phase A, hence the auction cannot terminate before reaching condition [SWITCH].

LEMMA 15. For an SB seller s_i, η_i is (e + 1)ε-efficient.

PROOF. η_i is chosen to maximize the buyer's surplus out of B̄_i^T at the end of phase A. Since B̄_i^T ∩ M̄^T ≠ ∅, clearly η_i ∈ M̄^T. From Corollary 11 and Corollary 9, for any θ̃,
π_b^T(η_i) ≥ π_b^T(θ̃) − (e + 1)ε
π_i^T(η_i) ≥ π_i^T(θ̃)
⇒ σ_i(η_i) ≥ σ_i(θ̃) − (e + 1)ε.

This establishes the approximate bilateral efficiency of the results of phase A (at this point, under the assumption of SB). Based on phase B's simple role as a single-dimensional bidding competition over the discount, we next assert that the overall result is efficient under SB, which in turn proves to be an approximately ex-post equilibrium strategy in the two phases.

LEMMA 16. If sellers s_i and s_j are SB, and s_i is active at least as long as s_j is active in phase B, then
σ_i(η_i) ≥ max_{θ∈Θ} σ_j(θ) − (e + 2)ε.

THEOREM 17. Given a truthful buyer and SB sellers, the auction is (e + 2)ε-efficient: the surplus of the final allocation is within (e + 2)ε of the maximal surplus.

Following PK, we rely on an equivalence to the
one-sided VCG auction to establish incentive properties for the sellers. In the one-sided multiattribute VCG auction, buyer and sellers report valuation and cost functions û_b, ĉ_i, and the buyer pays the sell-side VCG payment to the winning seller.

DEFINITION 11. Let (θ*, i*) be the optimal solution to MAP. Let (θ̃, ĩ) be the best solution to MAP when i* does not participate. The sell-side VCG payment is
VCG(û_b, ĉ_i) = û_b(θ*) − (û_b(θ̃) − ĉ_ĩ(θ̃)).

It is well known that truthful bidding is a dominant strategy for sellers in the one-sided VCG auction. It is also shown by PK that the maximal regret for buyers from bidding truthfully in this mechanism is u_b(θ*) − c_{i*}(θ*) − (u_b(θ̃) − ĉ_ĩ(θ̃)), that is, the marginal product of the efficient seller.
Usually in iterative auctions the VCG outcome is only nearly achieved, with the deviation bounded by the minimal price change. We show a similar result, and therefore define δ-VCG payments.

DEFINITION 12. A sell-side δ-VCG payment for MAP is a payment p such that
VCG(û_b, ĉ_i) − δ ≤ p ≤ VCG(û_b, ĉ_i) + δ.

When the payment is guaranteed to be δ-VCG, sellers can only affect their payment within that range, therefore their gain from falsely reporting their cost is bounded by 2δ.

LEMMA 18. When sellers are SB, the payment at the end of the GAI auction is sell-side (e + 2)ε-VCG.

THEOREM 19. SB is a (3e + 5)ε ex-post Nash equilibrium for sellers in the GAI auction. That is, sellers cannot gain more than (3e + 5)ε by deviating.

In practice, however, sellers are unlikely to have the information that would let them exploit that potential gain. They are much more likely to lose from bidding on their less attractive configurations.

5.5 Computation and Complexity
The size of
the price space maintained in the auction is equal to the total number of sub-configurations, and is therefore exponential in max_r |I_r|. This is also equivalent to the tree-width (plus one) of the original CDI-map. For the purpose of the computational analysis, let d_j denote the domain of attribute a_j, and let I = ∪_{r=1..g} ∏_{j∈I_r} d_j denote the collection of all sub-configurations. The first purpose of this subsection is to show that the complexity of all the computations required for the auction depends only on |I|; that is, no computation depends on the size of the full exponential domain.
We are first concerned with the computation of M^t. Since M^t grows monotonically with t, a naive application of an optimization algorithm to generate the best outcomes sequentially might end up enumerating significant portions of the fully exponential domain. As shown below, however, this plain enumeration can be avoided.
PROPOSITION 20. The computation of M^t can be done in time O(|I|^2). Moreover, the total time spent on this task throughout the auction is O(|I|(|I| + T)).
The bounds are in practice significantly lower, based on results on similar problems from the probabilistic reasoning literature [18]. One of the benefits of the compact pricing structure is the compact representation it lends to bids: sellers submit only sub-bids, so the number of bids submitted and stored per seller is bounded by |I|. The computational tasks (testing B^t_i = ∅, applying rule [SWITCH], and choosing η_i) all involve the set B^t_i, and their performance depends only on its size, since they are all subsumed by the combinatorial optimization task over B^t_i or B^t_i ∩ M^t.
Next, we analyze the number of rounds it takes for the auction to terminate. Phase B requires ⌈max_{i=1,...,n} π^T_i(η_i)/ε⌉ rounds. Since this is equivalent to price-only auctions, the concern is only with the time complexity of phase A. Since prices cannot go below
f_{b,r}(θ_r), an upper bound on the number of rounds required is
  T ≤ Σ_{θ_r∈I} (p^1(θ_r) − f_{b,r}(θ_r)) · g/ε.
However, phase A may converge faster. Let the initial negative profit chosen by the auctioneer be m = max_{θ∈Θ} π^1_b(θ). In the worst case phase A needs to run until π_b(θ) = m for all θ ∈ Θ. This happens, for example, when p^t(θ_r) = f_{b,r}(θ_r) + m/g for all θ_r ∈ I. In general, the closer the initial prices reflect buyer valuation, the faster phase A converges. One extreme is to choose p^1(θ_r) = f_{b,r}(θ_r) + m/g. That would make phase A redundant, at the cost of full initial revelation of the buyer's valuation, as done in other mechanisms discussed below. Between this option and the other extreme, p^1(α) = p^1(α̂) for all α, α̂ ∈ I, the auctioneer has a range of choices to determine the right tradeoff between convergence time and information revelation. In the example below, the choice of a lower initial price for the domain of I_1 provides some speedup by revealing a harmless amount of information.

              I_1                       I_2
      a1b1  a2b1  a1b2  a2b2    b1c1  b2c1  b1c2  b2c2
 f_b   65    50    55    70      50    85    60    75
 f_1   35    20    30    70      65    65    70    61
 f_2   35    20    25    25      55   110    70    95

Table 1: GAI utility functions for the example domain. f_b represents the buyer's valuation, and f_1 and f_2 the costs of the sellers s1 and s2.

Another potential concern is the communication cost associated with the Japanese auction style: the sellers need to send their bids again at each round. A simple change can avoid much of this redundant communication: the auction can retain sub-bids from previous rounds on sub-configurations whose price did not change. Since combinations of sub-bids from different rounds can yield sub-optimal configurations, each sub-bid should be tagged with the number of the latest round in which it was submitted, and only consistent
combinations from the same round are considered to be full bids. With this implementation sellers need not resubmit their bids until the price of at least one sub-configuration has changed.
5.6 Example
We use the example settings introduced in Section 5.2. Recall that the GAI structure is I_1 = {a, b}, I_2 = {b, c} (note that e = 1). Table 1 shows the GAI utilities for the buyer and the two sellers s1 and s2. The efficient allocation is (s1, a1b2c1), with a surplus of 45. The maximal surplus of the second-best seller, s2, is 25, achieved by a1b1c1, a2b1c1, and a2b2c2. We set all initial prices over I_1 to 75, and all initial prices over I_2 to 90. We set ε = 8, meaning that the price reduction for sub-configurations is ε/g = 4. Though with these numbers it is not guaranteed by Theorem 17, we expect s1 to win on either the efficient allocation or on a1b2c2, which provides a surplus of 39. The reason is that these are the only two configurations within (e + 1)ε = 16 of being efficient for s1 (therefore one of them must be chosen by phase A), and both provide more than ε surplus over s2's most efficient configuration (and this is sufficient in order to win in phase B). Table 2 shows the progress of phase A.
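The headline numbers of this setup (the efficient allocation and the runner-up surplus) follow directly from Table 1. A minimal Python check confirms them; the dictionaries below are our own transcription of the table, not part of the mechanism:

```python
# Verify the example's key quantities from Table 1 (GAI structure
# I_1 = {a, b}, I_2 = {b, c}): f_b is the buyer's valuation,
# f_1 and f_2 are the sellers' costs, each a sum over the two clusters.
from itertools import product

fb = {("a1","b1"): 65, ("a2","b1"): 50, ("a1","b2"): 55, ("a2","b2"): 70,
      ("b1","c1"): 50, ("b2","c1"): 85, ("b1","c2"): 60, ("b2","c2"): 75}
f1 = {("a1","b1"): 35, ("a2","b1"): 20, ("a1","b2"): 30, ("a2","b2"): 70,
      ("b1","c1"): 65, ("b2","c1"): 65, ("b1","c2"): 70, ("b2","c2"): 61}
f2 = {("a1","b1"): 35, ("a2","b1"): 20, ("a1","b2"): 25, ("a2","b2"): 25,
      ("b1","c1"): 55, ("b2","c1"): 110, ("b1","c2"): 70, ("b2","c2"): 95}

def gai(table, theta):
    """GAI value of a full configuration: sum of its two cluster entries."""
    a, b, c = theta
    return table[(a, b)] + table[(b, c)]

configs = list(product(["a1", "a2"], ["b1", "b2"], ["c1", "c2"]))
surplus1 = {t: gai(fb, t) - gai(f1, t) for t in configs}  # trade with s1
surplus2 = {t: gai(fb, t) - gai(f2, t) for t in configs}  # trade with s2

best1 = max(surplus1, key=surplus1.get)
print(best1, surplus1[best1])   # ('a1', 'b2', 'c1') 45  -- efficient allocation
print(max(surplus2.values()))   # 25  -- best surplus achievable with s2
```

The same two-cluster sum is what the auction's pricing operates on: a configuration's price, value, and cost are always sums of its I_1 and I_2 entries.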
Initially all configurations have the same price (165), so each seller bids on her lowest-cost configuration, which is a2b1c1 for both (with profit 80 to s1 and 90 to s2); this translates into sub-bids on a2b1 and b1c1. M^1 contains the sub-configurations a2b2 and b2c1 of the highest-value configuration a2b2c1. The price is therefore decreased on a2b1 and b1c1. After the price change, s1 has a higher profit (74) on a1b2c2, and she therefore bids on a1b2 and b2c2. Now (round 2) their prices go down, reducing the profit on a1b2c2 to 66, so in round 3 s1 prefers a2b1c2 (profit 67). After the next price change, the configurations a1b2c1 and a1b2c2 both become optimal (profit 66), and the sub-bids a1b2, b2c1, and b2c2 capture the two. These configurations stay optimal for another round (5), with profit 62. At this point s1 has a full bid (in fact two full bids: a1b2c2 and a1b2c1) in M^5, and

                    I_1                                 I_2
 t | a1b1    | a2b1      | a1b2    | a2b2 | b1c1      | b2c1    | b1c2   | b2c2
 1 | 75      | 75 s1,s2  | 75      | 75 ∗ | 90 s1,s2  | 90 ∗    | 90     | 90
 2 | 75      | 71 s2     | 75 s1   | 75 ∗ | 86 s2     | 90 ∗    | 90     | 90 s1
 3 | 75      | 67 s1,s2  | 71      | 75 ∗ | 82 s2     | 90 ∗    | 90 s1  | 86 ∗
 4 | 75      | 63 s2     | 71 s1   | 75 ∗ | 78 s2     | 90 ∗,s1 | 86     | 86 ∗,s1
 5 | 75 s2   | 59        | 67 ∗,s1 | 75 ∗ | 74 s2     | 90 ∗,s1 | 86     | 86 ∗,s1
 6 | 71      | 59 s2     | 67 ∗,s1 | 75 ∗ | 70        | 90 ∗,s1 | 86 s2  | 86 ∗,s1
 7 | 71 s2   | 55        | 67 ∗,s1 | 75 ∗ | 70 s2     | 90 ∗,s1 | 82     | 86 ∗,s1
 8 | 67 ∗    | 55 s2     | 67 ∗,s1 | 75 ∗ | 66 ∗      | 90 ∗,s1 | 82 s2  | 86 ∗,s1
 9 | 67 ∗,s2 | 51        | 67 ∗,s1 | 75 ∗ | 66 ∗,s2   | 90 ∗,s1 | 78     | 86 ∗,s1

Table 2: Auction progression in phase A.
Sell bids and membership in M^t (marked ∗) are shown next to the price of each sub-configuration. s1 therefore no longer changes her bids, since the price of her optimal configurations does not decrease. s2 sticks to a2b1c1 during the first four rounds, switching to a1b1c1 in round 5. It takes four more rounds for s2 and M^t to converge (M^10 ∩ B^10_2 = {a1b1c1}). After round 9 the auction sets η_1 = a1b2c1 (which yields more buyer profit than a1b2c2) and η_2 = a1b1c1. For the next round (10), Δ = 8, increased by ε = 8 for each subsequent round. Note that p^9(a1b1c1) = 133 and c_2(a1b1c1) = 90, therefore π^T_2(η_2) = 43. In round 15, Δ = 48, meaning p^15(a1b1c1) = 85; that causes s2 to drop out, setting the final allocation to (s1, a1b2c1) with p^15(a1b2c1) = 157 − 48 = 109. That leaves the buyer with a profit of 31 and s1 with a profit of 14, less than ε below the VCG profit of 20. The welfare achieved in this case is optimal.
To illustrate how some efficiency loss could occur, consider the case that c_1(b2c2) = 60. In that case, in round 3 the configuration a1b2c2 provides the same profit (67) as a2b1c2, and s1 bids on both. While a2b1c2 is no longer optimal after the price change, a1b2c2 remains optimal on subsequent rounds because b2c2 ∈ M^t, and the price change of a1b2 affects both a1b2c2 and the efficient configuration a1b2c1. When phase A ends, B^10_1 ∩ M^10 = {a1b2c2}, so the auction terminates with this slightly suboptimal configuration, at surplus 40.
6. DISCUSSION
6.1 Preferential Assumptions
A key aspect in implementing GAI-based auctions is the choice of the preference structure, that is, the elements {I_1, ...
, I_g}. In some domains the structure can be more or less robust over time and across decision makers. When this is not the case, extracting a reliable structure from sellers (in the form of CDI conditions) is a serious challenge. This could have been a deal breaker for such domains, but in fact it can be overcome: the auction can run without any assumptions on sellers' preference structure. The only place this assumption is used in our analysis is in Lemma 8. If sellers whose preference structure does not agree with the one used by the auction are guided to submit only one full bid at each round, or a set of bids that does not yield undesired consistent combinations, all the properties of the auction still hold. Locally, such sellers can optimize their profit functions using the union of their own GAI structure with the auction's structure. It is therefore essential only that the buyer's preference structure be accurately modeled. Of course, capturing sellers' structures as well is still preferred, since it can speed up execution and let sellers take advantage of the compact bid representation. In both cases the choice of clusters may significantly affect the complexity of the price structure and the runtime of the auction. It is sometimes better to ignore some weaker interdependencies in order to reduce dimensionality. The complexity of the structure also affects the efficiency of the auction through the value of e.
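The dimensionality point can be made concrete: the price space maintained by the auctioneer has size |I| = Σ_r ∏_{j∈I_r} |d_j|, versus ∏_j |d_j| for the full joint domain. A toy computation illustrates the gap; the six-attribute domain and its sizes below are hypothetical, chosen only for illustration:

```python
# Compare the GAI price-space size |I| against the full joint domain.
# Attribute names and domain sizes are made up for this sketch.
from math import prod

domains = {f"a{j}": 5 for j in range(6)}  # six attributes, 5 values each

def price_space_size(clusters):
    """|I|: sum over clusters of the product of member domain sizes."""
    return sum(prod(domains[a] for a in c) for c in clusters)

full_joint = prod(domains.values())                 # one cluster = whole domain
chain = [(f"a{j}", f"a{j+1}") for j in range(5)]    # overlapping pairs

print(full_joint)               # 15625
print(price_space_size(chain))  # 125
```

Merging clusters to capture an additional interdependency grows |I| multiplicatively within the merged cluster, which is precisely the tradeoff against e discussed above.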
6.2 Information Revelation Properties
In considering the information properties of this mechanism, we compare it to the standard approach for iterative multiattribute auctions, which is based on the theoretical foundations of Che [7]. In most of these mechanisms the buyer reveals a scoring function, and the mechanism then solicits bids from the sellers [3, 22, 8, 21] (the mechanism suggested by Beil and Wein [2] is different, in that buyers can modify their scoring function each round, but the goal there is to maximize the buyer's profit). Whereas these iterative procurement mechanisms tend to relieve the sellers of the burden of information revelation, a major drawback is that the buyer's utility function must be revealed to the sellers before any commitment is received. In the mechanisms suggested by PK, and in our GAI auction above, buyer information is revealed only in exchange for sell commitments. In particular, sellers learn nothing (beyond the initial price upper bound, which can be arbitrarily loose) about the utility of configurations for which no bid was submitted. When bids are submitted for a configuration θ, sellers can infer its utility relative to the currently preferred configurations only after the price of θ has been driven down sufficiently to make it a preferred configuration as well.
6.3 Conclusions
We propose a novel exploitation of preference structure in multiattribute auctions. Rather than assuming full additivity, or no structure at all, we model preferences using the GAI decomposition. We developed an iterative auction mechanism relying directly on the decomposition, and also provided direct means of constructing the representation from relatively simple statements of willingness-to-pay. Our auction mechanism generalizes PK's preference modeling while, in essence, retaining their information revelation properties. It allows for a range of tradeoffs between accuracy of preference representation and both the complexity of the
pricing structure and the efficiency of the auction, as well as tradeoffs between the buyer's information revelation and the time required for convergence.
7. ACKNOWLEDGMENTS
This work was supported in part by NSF grants IIS-0205435 and IIS-0414710, and the STIET program under NSF IGERT grant 0114368. We are grateful for comments from anonymous reviewers.
8. REFERENCES
[1] F. Bacchus and A. Grove. Graphical models for preference and utility. In Eleventh Conference on Uncertainty in Artificial Intelligence, pages 3-10, Montreal, 1995.
[2] D. R. Beil and L. M. Wein. An inverse-optimization-based auction for multiattribute RFQs. Management Science, 49:1529-1545, 2003.
[3] M. Bichler. The Future of e-Markets: Multi-Dimensional Market Mechanisms. Cambridge University Press, 2001.
[4] C. Boutilier, F. Bacchus, and R. I. Brafman. UCP-networks: A directed graphical representation of conditional utilities. In Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 56-64, Seattle, 2001.
[5] R. I. Brafman, C. Domshlak, and T. Kogan. Compact value-function representations for qualitative preferences. In Twentieth Conference on Uncertainty in Artificial Intelligence, pages 51-59, Banff, 2004.
[6] D. Braziunas and C. Boutilier. Local utility elicitation in GAI models. In Twenty-first Conference on Uncertainty in Artificial Intelligence, pages 42-49, Edinburgh, 2005.
[7] Y.-K. Che. Design competition through multidimensional auctions. RAND Journal of Economics, 24(4):668-680, 1993.
[8] E. David, R. Azoulay-Schwartz, and S. Kraus. An English auction protocol for multi-attribute items. In Agent Mediated Electronic Commerce IV: Designing Mechanisms and Systems, volume 2531 of Lecture Notes in Artificial Intelligence, pages 52-68. Springer, 2002.
[9] G. Debreu. Topological methods in cardinal utility theory. In K. Arrow, S. Karlin, and P. Suppes, editors, Mathematical Methods in the Social Sciences. Stanford University Press, 1959.
[10] J. S.
Dyer and R. K. Sarin. An axiomatization of cardinal additive conjoint measurement theory. Working Paper 265, WMSI, UCLA, February 1977.
[11] J. S. Dyer and R. K. Sarin. Measurable multiattribute value functions. Operations Research, 27:810-822, 1979.
[12] Y. Engel, M. P. Wellman, and K. M. Lochner. Bid expressiveness and clearing algorithms in multiattribute double auctions. In Seventh ACM Conference on Electronic Commerce, pages 110-119, Ann Arbor, MI, 2006.
[13] P. C. Fishburn. Interdependence and additivity in multivariate, unidimensional expected utility theory. International Economic Review, 8:335-342, 1967.
[14] C. Gonzales and P. Perny. GAI networks for utility elicitation. In Ninth International Conference on the Principles of Knowledge Representation and Reasoning, pages 224-234, Whistler, BC, 2004.
[15] C. Gonzales and P. Perny. GAI networks for decision making under certainty. In IJCAI-05 Workshop on Advances in Preference Handling, Edinburgh, 2005.
[16] N. Hyafil and C. Boutilier. Regret-based incremental partial revelation mechanisms. In Twenty-first National Conference on Artificial Intelligence, pages 672-678, Boston, MA, 2006.
[17] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, 1976.
[18] D. Nilsson. An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8(2):159-173, 1998.
[19] D. C. Parkes and J. Kalagnanam. Models for iterative multiattribute procurement auctions. Management Science, 51:435-451, 2005.
[20] J. Pearl and A. Paz. Graphoids: A graph-based logic for reasoning about relevance relations. In B. Du Boulay, editor, Advances in Artificial Intelligence II. 1989.
[21] J. Shachat and J. T. Swarthout. Procurement auctions for differentiated goods. IBM Research Report RC22587, IBM T.J. Watson Research Laboratory, 2002.
[22] N. Vulkan and N. R.
Jennings. Efficient mechanisms for the supply of services in multi-agent environments. Decision Support Systems, 28:5-19, 2000.
on.\nIn addition, suppliers may offer different contract conditions such as warranty, delivery time, and service.\nIn order to account for traders' preferences, the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations.\nConstructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes, therefore practical multiattribute auctions must either accommodate partial specifications, or support compact expression of preferences assuming some simplified form.\nBy far the most popular multiattribute form to adopt is the simplest: an additive representation where overall value is a linear combination of values associated with each attribute.\nFor example, several recent proposals for iterative multiattribute auctions [2, 3, 8, 19] require additive preference representations.\nSuch additivity reduces the complexity of preference specification exponentially (compared to the general discrete case), but precludes expression of any interdependencies among the attributes.\nIn practice, however, interdependencies among natural attributes are quite common.\nFor example, the buyer may exhibit complementary preferences for size and access time (since the performance effect is more salient if much data is involved), or may view a strong warranty as a good substitute for high reliability ratings.\nSimilarly, the seller's production characteristics (such as \"increasing access time is harder for larger hard drives\") can easily violate additivity.\nIn such cases an additive value function may not be able to provide even a reasonable approximation of real preferences.\nOn the other hand, fully general models are intractable, and it is reasonable to expect multiattribute preferences to exhibit some structure.\nOur goal, therefore, is to identify the subtler yet more widely applicable structured representations, and exploit these properties of preferences in trading 
mechanisms.\nWe propose an iterative auction mechanism based on just such a flexible preference structure.\nOur approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences, due to Parkes and Kalagnanam (PK) [19].\nPK propose two types of iterative auctions: the first (NLD) makes no assumptions about traders' preferences, and lets sellers bid on the full multidimensional attribute space.\nBecause NLD maintains an exponential price structure, it is suitable only for small domains.\nThe other auction (AD) assumes additive buyer valuation and seller cost functions.\nIt collects sell bids per attribute level and for a single discount term.\nThe price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount.\nThe auction we propose also supports compact price spaces, albeit for levels of clusters of attributes rather than singletons.\nWe employ a preference decomposition based on generalized additive independence (GAI), a model flexible enough to accommodate interdependencies to the exact degree of accuracy desired, yet providing a compact functional form to the extent that interdependence can be limited.\nGiven its roots in multiattribute utility theory [13],\nthe GAI condition is defined with respect to the expected utility function.\nTo apply it for modeling values for certain outcomes, therefore, requires a reinterpretation for preference under certainty.\nTo this end, we exploit the fact that auction outcomes are associated with continuous prices, which provide a natural scale for assessing magnitude of preference.\nWe first lay out a representation framework for preferences that captures, in addition to simple orderings among attribute configuration values, the difference in the willingness to pay (wtp) for each.\nThat is, we should be able not only to compare outcomes but also decide whether the difference in quality is worth a given difference in price.\nNext, we 
build a direct, formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function.\nAfter laying out this infrastructure, we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format.\nWe then study the auction's allocational, computational, and practical properties.\nIn Section 2 we present essential background on our representation framework, the measurable value function (MVF).\nSection 3 develops new multiattribute structures for MVF, supporting generalized additive decompositions.\nNext, we show the applicability of the theoretical framework to preferences in trading.\nThe rest of the paper is devoted to the proposed auction mechanism.\n2.\nMULTIATTRIBUTE PREFERENCES\n2.1 Preferential Independence\n2.2 Measurable Value Functions\n3.\nADVANCED MVF STRUCTURES\n3.1 Conditional Difference Independence\n3.2 GAI Structure for MVF\n4.\nWILLINGNESS-TO-PAY AS AN MVF\n4.1 Construction\n4.2 Optimization\n5.\nGAI IN MULTIATTRIBUTE AUCTIONS\n5.1 The Multiattribute Procurement Problem\n5.2 GAI Trees\n5.3 The GAI Auction\n5.4 Economic Analysis\n5.5 Computation and Complexity\nQ\n5.6 Example","lvl-4":"Generalized Value Decomposition and Structured Multiattribute Auctions\nABSTRACT\nMultiattribute auction mechanisms generally either remain agnostic about traders' preferences, or presume highly restrictive forms, such as full additivity.\nReal preferences often exhibit dependencies among attributes, yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction.\nWe develop such a structure using the theory of measurable value functions, a cardinal utility representation based on an underlying order over preference differences.\nA set of local conditional independence relations over such differences supports a generalized 
additive preference representation, which decomposes utility across overlapping clusters of related attributes.\nWe introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations.\nWhen traders' preferences are consistent with the auction's generalized additive structure, the mechanism produces approximately optimal allocations, at approximate VCG prices.\n1.\nINTRODUCTION\nMultiattribute trading mechanisms extend traditional, price-only mechanisms by facilitating the negotiation over a set of predefined attributes representing various non-price aspects of the deal.\nRather than negotiating over a fully defined good or service, a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified.\nFor example, a procurement department of a company may use a multiattribute auction to select a supplier of hard drives.\nIn order to account for traders' preferences, the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations.\nConstructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes, therefore practical multiattribute auctions must either accommodate partial specifications, or support compact expression of preferences assuming some simplified form.\nBy far the most popular multiattribute form to adopt is the simplest: an additive representation where overall value is a linear combination of values associated with each attribute.\nFor example, several recent proposals for iterative multiattribute auctions [2, 3, 8, 19] require additive preference representations.\nSuch additivity reduces the complexity of preference specification exponentially (compared to the general discrete case), but precludes expression of any interdependencies among the attributes.\nIn practice, however, interdependencies among natural 
attributes are quite common.\nIn such cases an additive value function may not be able to provide even a reasonable approximation of real preferences.\nOn the other hand, fully general models are intractable, and it is reasonable to expect multiattribute preferences to exhibit some structure.\nOur goal, therefore, is to identify the subtler yet more widely applicable structured representations, and exploit these properties of preferences in trading mechanisms.\nWe propose an iterative auction mechanism based on just such a flexible preference structure.\nOur approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences, due to Parkes and Kalagnanam (PK) [19].\nPK propose two types of iterative auctions: the first (NLD) makes no assumptions about traders' preferences, and lets sellers bid on the full multidimensional attribute space.\nBecause NLD maintains an exponential price structure, it is suitable only for small domains.\nThe other auction (AD) assumes additive buyer valuation and seller cost functions.\nIt collects sell bids per attribute level and for a single discount term.\nThe price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount.\nThe auction we propose also supports compact price spaces, albeit for levels of clusters of attributes rather than singletons.\nGiven its roots in multiattribute utility theory [13],\nthe GAI condition is defined with respect to the expected utility function.\nTo apply it for modeling values for certain outcomes, therefore, requires a reinterpretation for preference under certainty.\nTo this end, we exploit the fact that auction outcomes are associated with continuous prices, which provide a natural scale for assessing magnitude of preference.\nWe first lay out a representation framework for preferences that captures, in addition to simple orderings among attribute configuration values, the difference in the willingness to 
pay (wtp) for each.\nNext, we build a direct, formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function.\nAfter laying out this infrastructure, we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format.\nWe then study the auction's allocational, computational, and practical properties.\nIn Section 2 we present essential background on our representation framework, the measurable value function (MVF).\nSection 3 develops new multiattribute structures for MVF, supporting generalized additive decompositions.\nNext, we show the applicability of the theoretical framework to preferences in trading.\nThe rest of the paper is devoted to the proposed auction mechanism.","lvl-2":"Generalized Value Decomposition and Structured Multiattribute Auctions\nABSTRACT\nMultiattribute auction mechanisms generally either remain agnostic about traders' preferences, or presume highly restrictive forms, such as full additivity.\nReal preferences often exhibit dependencies among attributes, yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction.\nWe develop such a structure using the theory of measurable value functions, a cardinal utility representation based on an underlying order over preference differences.\nA set of local conditional independence relations over such differences supports a generalized additive preference representation, which decomposes utility across overlapping clusters of related attributes.\nWe introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations.\nWhen traders' preferences are consistent with the auction's generalized additive structure, the mechanism produces approximately optimal allocations, at 
approximate VCG prices.\n1.\nINTRODUCTION\nMultiattribute trading mechanisms extend traditional, price-only mechanisms by facilitating the negotiation over a set of predefined attributes representing various non-price aspects of the deal.\nRather than negotiating over a fully defined good or service, a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified.\nFor example, a procurement department of a company may use a multiattribute auction to select a supplier of hard drives.\nSupplier offers may be evaluated not only over the price they offer, but also over various qualitative attributes such as volume, RPM, access time, latency, transfer rate, and so on.\nIn addition, suppliers may offer different contract conditions such as warranty, delivery time, and service.\nIn order to account for traders' preferences, the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations.\nConstructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes, therefore practical multiattribute auctions must either accommodate partial specifications, or support compact expression of preferences assuming some simplified form.\nBy far the most popular multiattribute form to adopt is the simplest: an additive representation where overall value is a linear combination of values associated with each attribute.\nFor example, several recent proposals for iterative multiattribute auctions [2, 3, 8, 19] require additive preference representations.\nSuch additivity reduces the complexity of preference specification exponentially (compared to the general discrete case), but precludes expression of any interdependencies among the attributes.\nIn practice, however, interdependencies among natural attributes are quite common.\nFor example, the buyer may exhibit complementary preferences for size and access time (since 
the performance effect is more salient if much data is involved), or may view a strong warranty as a good substitute for high reliability ratings. Similarly, the seller's production characteristics (such as "increasing access time is harder for larger hard drives") can easily violate additivity. In such cases an additive value function may not be able to provide even a reasonable approximation of real preferences. On the other hand, fully general models are intractable, and it is reasonable to expect multiattribute preferences to exhibit some structure. Our goal, therefore, is to identify the subtler yet more widely applicable structured representations, and exploit these properties of preferences in trading mechanisms.

We propose an iterative auction mechanism based on just such a flexible preference structure. Our approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences, due to Parkes and Kalagnanam (PK) [19]. PK propose two types of iterative auctions: the first (NLD) makes no assumptions about traders' preferences, and lets sellers bid on the full multidimensional attribute space. Because NLD maintains an exponential price structure, it is suitable only for small domains. The other auction (AD) assumes additive buyer valuation and seller cost functions. It collects sell bids per attribute level and for a single discount term. The price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount. The auction we propose also supports compact price spaces, albeit for levels of clusters of attributes rather than singletons.

We employ a preference decomposition based on generalized additive independence (GAI), a model flexible enough to accommodate interdependencies to the exact degree of accuracy desired, yet providing a compact functional form to the extent that interdependence can be limited. Given its roots in multiattribute utility theory
[13],\nthe GAI condition is defined with respect to the expected utility function.\nTo apply it for modeling values for certain outcomes, therefore, requires a reinterpretation for preference under certainty.\nTo this end, we exploit the fact that auction outcomes are associated with continuous prices, which provide a natural scale for assessing magnitude of preference.\nWe first lay out a representation framework for preferences that captures, in addition to simple orderings among attribute configuration values, the difference in the willingness to pay (wtp) for each.\nThat is, we should be able not only to compare outcomes but also decide whether the difference in quality is worth a given difference in price.\nNext, we build a direct, formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function.\nAfter laying out this infrastructure, we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format.\nWe then study the auction's allocational, computational, and practical properties.\nIn Section 2 we present essential background on our representation framework, the measurable value function (MVF).\nSection 3 develops new multiattribute structures for MVF, supporting generalized additive decompositions.\nNext, we show the applicability of the theoretical framework to preferences in trading.\nThe rest of the paper is devoted to the proposed auction mechanism.\n2.\nMULTIATTRIBUTE PREFERENCES\nAs mentioned, most tools facilitating expression of multiattribute value for trading applications assume that agents' preferences can be represented in an additive form.\nBy way of background, we start by introducing the formal prerequisites justifying the additive representation, as provided by multiattribute utility theory.\nWe then present the generalized additive form, and develop the formal 
underpinnings for measurable value needed to extend this model to the case of choice under certainty.

2.1 Preferential Independence

Let Θ denote the space of possible outcomes, with a preference relation ⪰ (a weak total order) over Θ. Let A = {a1, ..., am} be a set of attributes describing Θ. Capital letters denote subsets of attributes, lowercase letters (with or without numeric subscripts) denote individual attributes, and X̄ denotes the complement of X with respect to A. We indicate specific variable assignments with prime signs or superscripts; to represent a joint instantiation of subsets X and Y we use a sequence of instantiation symbols, as in X′Y′.

When no uncertainty is modeled, the preference relation is usually represented by a value function v [17]. The following fundamental result greatly simplifies the value function representation.

THEOREM 1 ([9]). A preference order ⪰ over an attribute set A has an additive value function representation

v(a1, ..., am) = Σ_{i=1}^m v_i(a_i)

iff the attributes of A are mutually preferentially independent.

Essentially, the additive forms used in trading mechanisms assume mutual preferential independence over the full set of attributes, including the money attribute. Intuitively, this means that the willingness to pay for the value of one attribute or set of attributes cannot be affected by the values of the other attributes.

A cardinal value function representing an ordering over certain outcomes need not in general coincide with the cardinal utility function that represents preference over lotteries, or expected utility (EU). Nevertheless, EU functions may possess structural properties analogous to those of value functions, such as additive decomposition. Since the present work does not involve decisions under uncertainty, we do not provide a full exposition of the EU concept. However, we do make frequent reference to the following additive independence relations. An (expected) utility function u(·) can be decomposed additively according to its (possibly overlapping) GAI sub-configurations:

u(a1, ..., am) = Σ_{r=1}^g f_r(I_r),    (1)

where I_1, ..., I_g are (possibly overlapping) subsets of A. What is now
known as the GAI condition was originally introduced by Fishburn [13] for EU, and was named GAI and brought to the attention of AI researchers by Bacchus and Grove [1]. Graphical models and elicitation procedures for GAI-decomposable utility were developed for EU [4, 14, 6], for a cardinal representation of the ordinal value function [15], and for ordinal preference relations corresponding to a TCP-net structure by Brafman et al. [5]. Apart from the work on GAI in the context of preference handling discussed above, GAI has recently been used in the context of mechanism design by Hyafil and Boutilier [16], as an aid in direct revelation mechanisms.

As shown by Bacchus and Grove [1], GAI structure can be identified based on a set of conditional additive independence (CAI) conditions, which are much easier to detect and verify. In general, utility functions may exhibit GAI structure not based on CAI. However, to date all proposals for reasoning about and eliciting utility in GAI form take advantage of the GAI structure primarily to the extent that it represents a collection of CAI conditions. For example, GAI trees [14] employ triangulation of the CAI map, and Braziunas and Boutilier's [6] conditional set C_j of a set I_j corresponds to the CAI separating set of I_j.

Since the CAI condition is also defined based on preferences over lotteries, we cannot apply Bacchus and Grove's result without first establishing an alternative framework based on priced outcomes. We develop such a framework using the theory of measurable value functions, ultimately producing a GAI decomposition (Eq. 1) of the wtp function. Readers interested primarily in the multiattribute auction, and willing to grant the well-foundedness of the preference structure, may skip ahead to Section 5.

2.2 Measurable Value Functions

Trading decisions represent a special case of decisions under certainty, where choices involve multiattribute outcomes and corresponding monetary payments. In such problems, the key decision often
hinges on relative valuations of price differences compared to differences among alternative configurations of goods and services. Theoretically, price can be treated as just another attribute; however, such an approach fails to exploit the special character of the money dimension, and can add significantly to complexity due to the inherent continuity and typically wide range of possible monetary outcome values.

We build on the fundamental work of Dyer and Sarin [10, 11] on measurable value functions (MVFs). As we show below, wtp functions in a quasi-linear setting can be interpreted as MVFs. However, we first present the MVF framework in a more generic way, where the measurement is not necessarily monetary. We present the essential definitions and refer to Dyer and Sarin for more detailed background and axiomatic treatment.

The key concept is that of preference difference. Let θ1, θ2, ϑ1, ϑ2 ∈ Θ such that θ2 ⪰ θ1 and ϑ2 ⪰ ϑ1. [θ2, θ1] denotes the preference difference between θ2 and θ1, interpreted as the strength, or degree, to which θ2 is preferred over θ1. Let ⪰* denote a preference order over Θ × Θ. We interpret the statement

[ϑ2, ϑ1] ⪰* [θ2, θ1]

as "the preference of ϑ2 over ϑ1 is at least as strong as the preference of θ2 over θ1". We use the symbol ∼* to represent equality of preference differences. A measurable value function (MVF) is a function u over Θ such that u(θ2) − u(θ1) ≥ u(ϑ2) − u(ϑ1) iff [θ2, θ1] ⪰* [ϑ2, ϑ1]. Note that an MVF can also be used as a value function representing ⪰, since [θ′, θ] ⪰* [θ″, θ] iff θ′ ⪰ θ″.

DEFINITION 6 ([11]). Attribute set X ⊂ A is called difference independent of X̄ if, for any two assignments X¹X̄′ ⪰ X²X̄′,

[X¹X̄′, X²X̄′] ∼* [X¹X̄″, X²X̄″]

for any assignment X̄″. In words, the preference differences between assignments to X, given a fixed level of X̄, do not depend on the particular level chosen for X̄.

As with additive
independence for EU, this condition is stronger than preferential independence of X. Also analogously to EU, mutual preferential independence combined with other conditions leads to an additive decomposition of the MVF. Moreover, Dyer and Sarin [11] have defined analogs of utility independence [17] for MVF, and worked out a parallel set of decomposition results.

3. ADVANCED MVF STRUCTURES

3.1 Conditional Difference Independence

Our first step is to generalize Definition 6 to a conditional version: X is conditionally difference independent of Y given Z, written CDI(X, Y | Z), when preference differences over assignments to X are unaffected by the level of Y whenever Z is held fixed. Since the conditional set is always the complement, we sometimes leave it implicit, using the abbreviated notation CDI(X, Y). CDI leads to a decomposition similar to that obtained from CAI [17].

LEMMA 3. Let u(A) be an MVF representing preference differences. Then CDI(X, Y | Z) iff u(A) decomposes as

u(X, Y, Z) = u_1(X, Z) + u_2(Y, Z)

for some functions u_1, u_2. To complete the analogy with CAI, we generalize Lemma 3 as follows. An immediate result of Proposition 4 is that CDI is a symmetric relation.

The conditional independence condition is much more widely applicable than the unconditional one. For example, if attributes a ∈ X and b ∉ X are complements or substitutes, X cannot be difference independent of X̄. However, X \ {a} may still be CDI of X̄ given a.

3.2 GAI Structure for MVF

A single CDI condition decomposes the value function into two parts. We seek a finer-grained global decomposition of the utility function, similar to that obtained from mutual preferential independence. For this purpose we are now ready to employ the results of Bacchus and Grove [1], who establish that the CAI condition has a perfect map [20]; that is, there exists a graph whose nodes correspond to the set A, and whose node separation reflects exactly the complete set of CAI conditions on A.
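As an illustration of how a GAI structure can be read off an independence map, consider the following Python sketch (the graph and attribute names are hypothetical, not from the paper): nodes are attributes, an edge joins attributes that are not conditionally independent, and the maximal cliques of the graph, enumerated here with a basic Bron–Kerbosch recursion, become the elements of the decomposition.

```python
# Sketch: from a (hypothetical) independence map to decomposition elements.
# Nodes are attributes; an edge means the pair is interdependent. The
# maximal cliques of this graph serve as the (overlapping) GAI elements.

def maximal_cliques(nodes, edges):
    """Enumerate maximal cliques with the Bron-Kerbosch recursion."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cliques = []

    def bk(r, p, x):
        # r: current clique; p: candidates; x: already-explored nodes
        if not p and not x:
            cliques.append(frozenset(r))
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    bk(set(), set(nodes), set())
    return cliques

# Attributes a, b, c with a single independence between a and c:
# the map keeps edges ab and bc only, so the cliques are {a,b} and {b,c}.
cliques = maximal_cliques("abc", [("a", "b"), ("b", "c")])
print(sorted(sorted(c) for c in cliques))  # [['a', 'b'], ['b', 'c']]
```

This mirrors the clique-based construction referenced above: detecting pairwise independence conditions yields the graph, and its maximal cliques determine how compactly the value function can be decomposed.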
Moreover, they show that the utility function decomposes over the set of maximal cliques of the perfect map. Their proofs can be easily adapted to CDI, since they rely only on the decomposition property of CAI, which is also implied by CDI according to Proposition 4.

THEOREM 5. Let G = (A, E) be a perfect map of the CDI conditions on A. Then

u(A) = Σ_{r=1}^g f_r(I_r),

where I_1, ..., I_g are (overlapping) subsets of A, each corresponding to a maximal clique of G.

Given Theorem 5, we can now identify an MVF GAI structure from a collection of CDI conditions. The CDI conditions, in turn, are particularly intuitive to detect when preference differences carry a direct interpretation, as is the case with the monetary differences discussed below. Moreover, the assumption or detection of CDI conditions can be performed incrementally, until the MVF is decomposed to a reasonable dimension. This is in contrast with the fully additive decomposition of MVF, which requires mutual preferential independence [11].

Theorem 5 defines a decomposition structure, but to represent the actual MVF we need to specify the functions over the cliques. The next theorem establishes that the functional constituents of the MVF are the same as those of GAI decompositions as defined by Fishburn [13] for EU. We adopt the following conventional notation. Let (a⁰1, ..., a⁰m) be a predefined vector called the reference outcome. For any I ⊆ A, the function u([I]) stands for the projection of u(A) to I, with the rest of the attributes fixed at their reference levels.

The proof directly shows that if graph G = (A, E) is a perfect map of CDI, u(A) decomposes into a sum over the functions defined in (4).¹ Thus this proof does not rely on the decomposition result of Theorem 5, only on the existence of the perfect map.

To summarize, the results of this section generalize additive MVF theory. In particular, they justify the application of methods recently developed under the EU framework [1, 4, 14, 6] to the representation of value under certainty.

4. WILLINGNESS-TO-PAY AS AN MVF

4.1 Construction

In this
section we apply measurable value to represent differences in willingness to pay for outcomes. We assume that the agent has a preference order over an outcome space represented by a set of attributes A, plus an attribute p representing monetary consequence. Note that in evaluating a purchase decision, p would correspond to the agent's money holdings net of the transaction (i.e., wealth after purchase), not the purchase price. An outcome in this space is represented, for example, by (θ′, p′), where θ′ is an instantiation of A and p′ is a value of p. We further assume that preferences are quasi-linear in p, that is, there exists a value function of the form v(A, p) = u(A) + L(p), where L is a positive linear function.² The quasi-linear form immediately qualifies money as a measure of preference differences, and establishes a monetary scale for u(A).

DEFINITION 8. Let v(A, p) = u(A) + L(p) represent ⪰, where p is the attribute representing money. We call u(A) a willingness-to-pay (wtp) function.

Note that wtp may also refer to the seller's "willingness to accept" function. The wtp function u(A) is a cardinal function, unique up to a positive linear transformation. Since

v(θ1, p1) ≥ v(θ2, p2) iff u(θ1) + L(p1) ≥ u(θ2) + L(p2)

(where θ1, θ2 ∈ Θ, the domain of A), the wtp function can be used to choose among priced outcomes.

¹This proof and most other proofs in this paper are omitted for space considerations, and are available in an online appendix.
²In many procurement applications, the deals in question are small relative to the enterprises involved, so the quasi-linearity assumption is warranted. This assumption can be relaxed to a condition called corresponding tradeoffs [17], which does not require the value of money to be linear. To simplify the presentation, however, we maintain the stronger assumption.

Naturally, elicitation of the wtp function is most intuitive when using direct monetary values. In other words, we elicit a function in which L(p) = p, so that v(A, p) = u(A) + p. We
define a reference outcome (θ⁰, p⁰), and assuming continuity of p, for any assignment θ̂ there exists a p̂ such that (θ̂, p̂) ∼ (θ⁰, p⁰). As v is normalized such that v(θ⁰, p⁰) = 0, p̂ is interpreted as the wtp for θ̂, or the reserve price of θ̂.

PROPOSITION 7. The wtp function is an MVF over differences in the reserve prices.

We note that the wtp function is used extensively in economics, and that all the development in Section 3 could be performed directly in terms of wtp, relying on quasi-linearity for preference measurement and without formalization via MVFs. This formalization, however, aligns this work with the fundamental difference independence theory of Dyer and Sarin.

In addition to facilitating the detection of GAI structure, the CDI condition supports elicitation using local queries, similar to the way CAI is used by Braziunas and Boutilier [6]. We adopt their definition of the conditional set of I_r, denoted here S_r, as the set of neighbors of attributes in I_r, not including the attributes of I_r themselves. Clearly, S_r is the separating set of I_r in the CDI map, hence CDI(I_r, V_r | S_r), where V_r = A \ (I_r ∪ S_r).

Eliciting the wtp function therefore amounts to eliciting the utility (wtp) of one full outcome (the reference outcome θ⁰), and then obtaining the function over each maximal clique using monetary differences between its possible assignments (a technique known as pricing out [17]), keeping the variables in the conditional set fixed. These ceteris paribus elicitation queries are local in the sense that the agent does not need to consider the values of the rest of the attributes. Furthermore, in eliciting MVFs we can avoid the global scaling step that is required for EU functions: since the preference differences are extracted with respect to specific amounts of the attribute p, the utility is already scaled according to that external measure. Hence, once the conditional utility functions u([I_j]) are
obtained, we can calculate u(A) according to (4). This last step may require (in the worst case) computation of a number of terms that is exponential in the number of maximal cliques. In practice, however, we do not expect the intersections of the cliques to go that deep; the intersection of more than just a few maximal cliques would normally be empty. To take advantage of this we can use the search algorithm suggested by Braziunas and Boutilier [6], which efficiently finds all the nonempty intersections for each clique.

4.2 Optimization

As shown, the wtp function can be used directly for pairwise comparisons of priced outcomes. Another preference query often treated in the literature is optimization, or choice of the best outcome, possibly under constraints. Typical decisions about exchange of a good or service exhibit what we call first-order preferential independence (FOPI), under which most or all single attributes have a natural ordering of quality, independent of the values of the rest.³ For example, when choosing a PC we always prefer more memory, a faster CPU, a longer warranty, and so on. Under FOPI, the unconstrained optimization of unpriced outcomes is trivial, hence we consider choice among attribute points with prices. Since any outcome can be best given enough monetary compensation, this problem is not well-defined unless the combinations are constrained somehow.

³This should not be mistaken for the highly demanding condition of mutual preferential independence, which requires all tradeoffs between attributes to be independent.

A particularly interesting optimization problem arises in the context of negotiation, where we consider the utility of both buyers and sellers. The multiattribute matching problem (MMP) [12] is concerned with finding an attribute point that maximizes the surplus of a trade, that is, the difference between the utilities of the buyer and the seller, u_b(A) − u_s(A). GAI, as an additive decomposition, has the property that if u_b and u_s are
in GAI form then u_b(A) − u_s(A) is in GAI form as well. We can therefore use combinatorial optimization procedures for GAI decompositions, based on well-studied variable elimination schemes (e.g., [15]), to find the best trading point. Similarly, this optimization can be performed to maximize the surplus between a trader's utility function and a pricing system that assigns a price to each level of each GAI element, and in this way guide traders to their optimal bidding points. In the rest of the paper we develop a multiattribute procurement auction that builds on this idea.

5. GAI IN MULTIATTRIBUTE AUCTIONS

5.1 The Multiattribute Procurement Problem

In the procurement setting, a single buyer wishes to procure a single good, in some configuration θ ∈ Θ, from one of the candidate sellers s_1, ..., s_n. The buyer has a private valuation function (wtp) u_b: Θ → R, and similarly each seller s_i has a private valuation function (willingness-to-accept). For compliance with the procurement literature we refer to seller s_i's valuation as a cost function, denoted c_i. The multiattribute allocation problem (MAP) [19] is the welfare optimization problem in procurement over a discrete domain, defined as

(i*, θ*) = arg max_{i,θ} (u_b(θ) − c_i(θ)).    (5)

To illustrate the need for a GAI price space, we consider the case of traders with non-additive preferences bidding in an additive price space, such as in PK's auction AD. If the buyer's preferences are not additive, choosing preferred levels per attribute (as in auction AD) admits undesired combinations and fails to guide the sellers to the efficient configurations. Non-additive sellers face an exposure problem, somewhat analogous to that of traders with complementary preferences who participate in simultaneous auctions. A value a_1 for attribute a may be optimal given that the value of another attribute b is b_1, and arbitrarily suboptimal given other values of b.
Therefore bidding a_1 and b_1 may result in a poor allocation if the seller is "outbid" on b_1 but left "holding" a_1.⁴

Instead of assuming full additivity, the auction designer can come up with a GAI preference structure that captures the set of common interdependencies between attributes. If traders could bid on clusters of interdependent attributes, it would solve the problems discussed above. For example, if a and b are interdependent (meaning CDI(a, b) does not hold), we should be able to bid on the cluster ab. If b in turn depends on c, we need another cluster bc. This is still better than a general pricing structure that solicits bids for the cluster abc. We stress that each trader may have a different set of interdependencies, and therefore to be completely general the GAI structure needs to account for all.⁵ In practice, however, many domains have natural dependencies that are mutual to traders.

⁴If only the sellers are non-additive, the auction design could potentially alleviate this problem by collecting a new set of bids each round and "forgetting" bids from previous rounds, and also by guiding non-additive sellers to bid on only one level per attribute in order to avoid undesired combinations.

Figure 1: (i) CDI map for {a, b, c}, reflecting the single condition CDI(a, c). (ii) The corresponding GAI network.

5.2 GAI Trees

Assume that the preferences of all traders are reflected in a GAI structure I_1, ..., I_g. We call each I_r a GAI element, and any assignment to I_r a sub-configuration. We use θ_r to denote the sub-configuration formed by projecting configuration θ onto element I_r.

DEFINITION 9. Let α be an assignment to I_r and β an assignment to I_r′. The sub-configurations α and β are consistent if, for any attribute a_j ∈ I_r ∩ I_r′, α and β agree on the value of a_j. A collection ν of sub-configurations is consistent if all pairs α, β ∈ ν are consistent. The collection is called a cover if it contains
exactly one sub-configuration α_r corresponding to each element I_r.

Note that a consistent cover {α_1, ..., α_g} represents a full configuration, which we denote by (α_1, ..., α_g). A GAI network is a graph G whose nodes correspond to the GAI elements I_1, ..., I_g, with an edge between I_r and I_r′ iff I_r ∩ I_r′ ≠ ∅. Equivalently, a GAI network is the clique graph of a CDI map.

In order to justify the compact pricing structure, we require that for any set of optimal configurations (with respect to a given utility function), with a corresponding collection of sub-configurations γ, all consistent covers in γ must be optimal configurations as well. To ensure this (see Lemmas 8 and 10), we assume a GAI decomposition in the form of a tree or a forest (the GAI tree). A tree structure can be achieved for any set of CDI conditions by triangulation of the CDI map prior to construction of the clique graph (GAI networks and GAI trees are defined by Gonzales and Perny [14], who also provide a triangulation algorithm).

Under GAI, the buyer's value function u_b and the sellers' cost functions c_i can be decomposed as in (1). We use f_{b,r} and f_{i,r} to denote the local functions of the buyer and sellers (respectively), according to (4). For example, consider the procurement of a good with three attributes, a, b, c.
Each attribute's domain has two values (e.g., {a_1, a_2} is the domain of a). Let the GAI structure be I_1 = {a, b}, I_2 = {b, c}. Figure 1 shows the simple CDI map and the corresponding GAI network, which is a GAI tree. Here, sub-configurations are assignments of the form a_1b_1, a_1b_2, b_1c_1, and so on. The set of sub-configurations {a_1b_1, b_1c_1} is a consistent cover, corresponding to the configuration a_1b_1c_1. In contrast, the set {a_1b_1, b_2c_1} is inconsistent.

5.3 The GAI Auction

We define an iterative multiattribute auction that maintains a GAI pricing structure: that is, a price p^t(·) corresponding to each sub-configuration of each GAI-tree element. The price of a configuration θ at time t is defined as

p^t(θ) = Σ_{r=1}^g p^t(θ_r) − Δ.

Bidders submit sub-bids on sub-configurations and on an additional global discount term Δ.⁶ Sub-bids are always submitted at current prices, and need to be resubmitted at each round; therefore they do not need to explicitly carry the price. The set of full bids of a seller contains all consistent covers that can be generated from that seller's current set of sub-bids. The existence of a full bid over a configuration θ represents the seller's willingness to accept the price p^t(θ) for supplying θ.

At the start of the auction, the buyer reports (to the auction, not to the sellers) her complete valuation in GAI form. The initial prices of sub-configurations are set at some level above the buyer's valuations, that is, p^1(θ_r) > f_{b,r}(θ_r) for all θ_r. The discount Δ is initialized to zero. The auction has the dynamics of a descending clock auction: at each round t, bids are collected at current prices, and then prices are reduced according to the price rules. A seller is considered active in a round if she submits at least one full bid. In round t > 1, only sellers who were active in round t − 1 are allowed to participate, and the auction terminates when no more than a single seller is active. We denote the set of
sub-bids submitted by s_i by B_i^t; the corresponding set of full bids is

B̃_i^t = {(α_1, ..., α_g) | α_1, ..., α_g ∈ B_i^t, {α_1, ..., α_g} is a consistent cover}.

In our example, a seller could submit sub-bids on a set of sub-configurations such as a_1b_1 and b_1c_1, which combine to a full bid on a_1b_1c_1.

The auction proceeds in two phases. In the first phase (A), at each round t the auction computes a set of preferred sub-configurations M^t. Section 5.4 shows how to define M^t to ensure convergence, and Section 5.5 shows how to compute it efficiently. In phase A, the auction adjusts prices after each round, reducing the price of every sub-configuration that has received a bid but is not in the preferred set. Let ε be the prespecified price increment parameter. Specifically, the phase A price change rule is applied to all θ_r ∈ ⋃_{i=1}^n B_i^t \ M^t:

[A]  p^{t+1}(θ_r) = max(p^t(θ_r) − ε, f_{b,r}(θ_r)).

The RHS maximum ensures that prices do not get reduced below the buyer's valuation in phase A. Let M̃^t denote the set of configurations that are consistent covers in M^t. The auction switches to phase B when all active sellers have at least one full bid in the buyer's preferred set:

[SWITCH]  B̃_i^t ∩ M̃^t ≠ ∅ for every active seller s_i.

Let T be the round at which [SWITCH] becomes true. At this point, the auction selects the buyer-optimal full bid η_i for each seller s_i. In phase B, s_i may bid only on η_i. The prices of sub-configurations are fixed at p^T(·) during this phase. The only adjustment in phase B is to Δ, which is increased in every round by ε. The auction terminates when at most one seller is active (if exactly one, designate it ŝ_i). There are four distinct cases:

2. All active sellers drop out in the same round in phase B. The auction selects the best seller (ŝ_i) from the preceding round, and applies the applicable case below.

3. The auction terminates in phase B with a final price above the buyer's valuation, p^T(η_î) − Δ > u_b(η_î). The auction offers the winner ŝ_i an opportunity to supply η_î at price u_b(η_î).

4. The auction terminates in phase
B with a final price p^T(η_î) − Δ ≤ u_b(η_î). This is the ideal situation, where the auction allocates the chosen configuration and seller at this resulting price.

The overall auction is described by high-level pseudocode in Algorithm 1. As explained in Section 5.4, the role of phase A is to guide the traders to their efficient configurations. Phase B is a one-dimensional competition over the surplus that the remaining seller candidates can provide to the buyer. In Section 5.5 we discuss the computational tasks associated with the auction, and Section 5.6 provides a detailed example.

5.4 Economic Analysis

When the optimal solution to MAP (5) provides negative welfare and sellers do not bid below their cost, the auction terminates in phase A; no trade occurs, and the auction is trivially efficient. We therefore assume throughout the analysis that the optimal (seller, configuration) pair provides non-negative welfare. The buyer's profit from a configuration θ is defined as⁷

π_b(θ) = u_b(θ) − p(θ),

and similarly π_i(θ) = p(θ) − c_i(θ) is the profit of s_i. In addition, for μ ⊆ {1, ..., g} we denote the corresponding set of sub-configurations by θ_μ, and define the profit from a configuration θ over the subset μ as the sum of the corresponding per-element profit terms. The function σ_i: Θ → R represents the welfare, or surplus, function u_b(·) − c_i(·). For any price system p,

σ_i(θ) = π_b(θ) + π_i(θ).

⁷We drop the t superscript in generic statements involving price and profit functions, understanding that all usage is with respect to the (currently) applicable prices.

Since we do not assume anything about the buyer's strategy, the analysis refers to profit and surplus with respect to the face value of the buyer's report. The functions π_i and σ_i refer to the true cost functions of s_i. Intuitively, an SB (straightforward bidding) seller follows a myopic best response strategy (MBR), meaning they bid myopically
rather than strategically, optimizing their profit with respect to current prices. To calculate B_i^t, sellers need to optimize their current profit function, as discussed in Section 4.2. The following lemma bridges the apparent gap between the compact pricing and bid structure and the global optimization performed by the traders.

LEMMA 8. Let Ψ be a set of configurations, all maximizing profit for a trader τ (seller or buyer) at the relevant prices. Let Φ = {θ_r | θ ∈ Ψ, r ∈ {1, ..., g}}. Then any consistent cover in Φ is also a profit-maximizing configuration for τ.

Proof sketch (full proof in the online appendix): A source of an element θ_r is a configuration θ̃ ∈ Ψ from which it originated (meaning θ̃_r = θ_r). Starting from a supposedly suboptimal cover θ¹, we build a series of covers θ¹, ..., θ^L. At each θ^j we replace the values of a set of sub-configurations μ_j, corresponding to a subtree, with the sub-configurations of the configuration θ̂^j ∈ Ψ which is the source of the parent γ_j of μ_j. This ensures that all elements in μ_j ∪ {γ_j} have a mutual source θ̂^j. We show that all θ^j are consistent and must all be suboptimal as well; and since all elements of θ^L have a mutual source, meaning θ^L = θ̂^L ∈ Ψ, this contradicts the optimality of Ψ.

Next we consider combinations of configurations that are only within some δ of optimality.

LEMMA 10. Let Ψ be a set of configurations, all within δ of maximizing profit for a trader τ at the prices, and let Φ be defined as in Lemma 8. Then any consistent cover in Φ is within δg of maximizing utility for τ. This bound is tight; that is, for any GAI tree and any non-trivial domain we can construct a set Ψ as above in which there exists a consistent cover whose utility is exactly δg below
the maximal.\nNext we formally define M^t.\nFor connected GAI trees, M^t is the set of sub-configurations that are part of a configuration within ε of optimal.\nWhen the GAI tree is in fact a forest, we apportion the error proportionally across the disconnected trees.\nLet G be comprised of trees G_1, …, G_h.\nWe use θ^j to denote the projection of a configuration θ on the tree G_j, and g_j denotes the number of GAI elements in G_j.\nLet e_j = g_j − 1 denote the number of edges in G_j.\nWe define the connectivity parameter e = max_{j=1,…,h} e_j.\nAs shown below, this connectivity parameter is an important factor in the performance of the auction.\nIn the fully additive case this loss of efficiency reduces to ε.\nOn the other extreme, if the GAI network is connected then e + 1 = g.\nWe also note that without assuming any preference structure, meaning that the CDI map is fully connected, g = 1 and the efficiency loss is again ε.\nLemmas 12 through 15 show that through the price system, the choice of buyer preferred configurations, and the price change rules, phase A leads the buyer and each of the sellers to their mutually efficient configuration.\nLEMMA 12.\nmax_{θ∈Θ} π_b^t(θ) does not change in any round t of phase A.\nPROOF.\nWe prove the lemma for each tree G_j.\nThe optimal values for disconnected components are independent of each other, hence if the maximal profit for each component does not change, the combined maximal profit does not change as well.\nIf the price of θ'_j was reduced during phase A, that is, p^{t+1}(θ'_j) = p^t(θ'_j) − δ, it must be the case that some w ≤ g_j sub-configurations of θ'_j are not in M_j^t, and δ is w times the per-sub-configuration decrement.\nThis is true for any configuration whose profit improves, therefore the maximal buyer profit does not change during phase A.\nLEMMA 13.\nThe price of at least one sub-configuration must be reduced at every round in phase A.
PROOF.\nIn each round t of phase A, if no sub-configuration price could be reduced, some configuration would have all its sub-configuration prices at their lower bounds, yielding π_b^t(θ) ≥ 0 and contradicting rule [A].\nFormally, M_j^t = {θ_r | π_b^t(θ_j) is within the apportioned error of max_{θ̃_j∈Θ_j} π_b^t(θ̃_j)}; then define M^t = ∪_{j=1}^h M_j^t.\nLEMMA 14.\nWhen the solution to MAP provides positive surplus, and at least the best seller is SB, the auction must reach phase B.\nPROOF.\nBy Lemma 13 prices must go down in every round of phase A.\nRule [A] sets a lower bound on all prices, therefore the auction either terminates in phase A or must reach condition [SWITCH].\nThe initial prices are set high enough that max_{θ∈Θ} π_b^1(θ) < 0, and by Lemma 12, max_{θ∈Θ} π_b^t(θ) < 0 during phase A.\nWe assume that the efficient allocation (θ*, i*) provides positive welfare, that is, σ_{i*}(θ*) = π_b^t(θ*) + π_{i*}^t(θ*) > 0.\ns_{i*} is SB, therefore she will leave the auction only when π_{i*}^t(θ*) < 0.\nThis can happen only when π_b^t(θ*) > 0, therefore s_{i*} does not drop out in phase A, hence the auction cannot terminate before reaching condition [SWITCH].\nLEMMA 15.\nFor an SB seller s_i, η_i is (e + 1)ε-efficient.\nPROOF.\nη_i is chosen to maximize the buyer's surplus out of B_i^t at the end of phase A.\nSince B_i^t ∩ M^t ≠ ∅, clearly η_i ∈ M^t.
From Corollary 11 and Corollary 9, for any θ̃, σ_i(η_i) ≥ σ_i(θ̃) − (e + 1)ε.\nThis establishes the approximate bilateral efficiency of the results of phase A (at this point under the assumption of SB).\nBased on phase B's simple role as a single-dimensional bidding competition over the discount, we next assert that the overall result is efficient under SB, which in turn proves to be an approximately ex-post equilibrium strategy in the two phases.\nLEMMA 16.\nIf sellers s_i and s_j are SB, and s_i is active at least as long as s_j is active in phase B, then s_i can provide the buyer at least as much surplus as s_j, up to the error accumulated through the price-change increments.\nFollowing PK, we rely on an equivalence to the one-sided VCG auction to establish incentive properties for the sellers.\nIn the one-sided multiattribute VCG auction, the buyer and sellers report valuation and cost functions û_b, ĉ_i, and the buyer pays the sell-side VCG payment to the winning seller.\nDEFINITION 11.\nLet (θ*, i*) be the optimal solution to MAP.\nLet (θ̃, ĩ) be the best solution to MAP when i* does not participate.\nThe sell-side VCG payment is p_VCG = û_b(θ*) − (û_b(θ̃) − ĉ_ĩ(θ̃)).\nIt is well known that truthful bidding is a dominant strategy for sellers in the one-sided VCG auction.\nIt is also shown by PK that the maximal regret for buyers from bidding truthfully in this mechanism is u_b(θ*) − c_{i*}(θ*) − (u_b(θ̃) − ĉ_ĩ(θ̃)), that is, the marginal product of the efficient seller.\nUsually in iterative auctions the VCG outcome is only nearly achieved, but the deviation is bounded by the minimal price change.\nWe show a similar result, and therefore define δ-VCG payments.\nWhen payment is guaranteed to be δ-VCG, sellers can only affect their payment within that range, therefore their gain from falsely reporting their cost is bounded by 2δ.\nIn practice, however, sellers are unlikely to have the information that would let them exploit that potential gain.\nThey are much more likely to lose from bidding on their less attractive configurations.\n5.5 Computation
and Complexity\nThe size of the price space maintained in the auction is equal to the total number of sub-configurations, meaning it is exponential in max_r |I_r|.\nThis is also equivalent to the tree-width (plus one) of the original CDI map.\nFor the purpose of the computational analysis, let d_j denote the domain of attribute a_j, and let I = ∪_{r=1}^{g} ∏_{j∈I_r} d_j denote the collection of all sub-configurations.\nThe first purpose of this subsection is to show that the complexity of all the computations required for the auction depends only on |I|, i.e., no computation depends on the size of the full exponential domain.\nWe are first concerned with the computation of M^t.\nSince M^t grows monotonically with t, a naive application of an optimization algorithm to generate the best outcomes sequentially might end up enumerating significant portions of the fully exponential domain.\nHowever, as shown below, this plain enumeration can be avoided.\nPROPOSITION 20.\nThe computation of M^t can be done in time O(|I|²).\nMoreover, the total time spent on this task throughout the auction is O(|I|(|I| + T)).\nThe bounds are in practice significantly lower, based on results on similar problems from the probabilistic reasoning literature [18].\nOne of the benefits of the compact pricing structure is the compact representation it lends to bids: sellers submit only sub-bids, and therefore the number of sub-bids submitted and stored per seller is bounded by |I|.\nSince the computational tasks (checking B_i^t ≠ ∅, applying rule [SWITCH], and choosing η_i) all involve the set B_i^t, their cost depends only on the size of B_i^t, since they are all subsumed by the combinatorial optimization over B_i^t or B_i^t ∩ M^t.
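The claim that no computation need touch the full exponential domain can be illustrated with a small sketch (ours, not the paper's implementation): for a two-element GAI structure I1 = {a, b}, I2 = {b, c}, as used later in Section 5.6, a best consistent configuration is found by conditioning only on the shared attribute b, so each sub-configuration in I is examined a constant number of times. The per-element values below are hypothetical.

```python
# Hypothetical per-element values for the GAI structure I1 = {a, b},
# I2 = {b, c}; attribute b is shared. (Illustrative numbers, not Table 1.)
f1 = {('a1', 'b1'): 30, ('a1', 'b2'): 50, ('a2', 'b1'): 20, ('a2', 'b2'): 45}
f2 = {('b1', 'c1'): 25, ('b1', 'c2'): 10, ('b2', 'c1'): 40, ('b2', 'c2'): 35}

def best_configuration(f1, f2):
    """Maximize f1(a,b) + f2(b,c) by conditioning only on the shared
    attribute b: the work is linear in the number of sub-configurations
    |I|, never in the size of the full domain of (a, b, c)."""
    best = None
    for b in {b for (_, b) in f1}:
        # best sub-configuration of each element consistent with this b
        a, v1 = max(((a, v) for (a, bb), v in f1.items() if bb == b),
                    key=lambda x: x[1])
        c, v2 = max(((c, v) for (bb, c), v in f2.items() if bb == b),
                    key=lambda x: x[1])
        if best is None or v1 + v2 > best[1]:
            best = ((a, b, c), v1 + v2)
    return best

print(best_configuration(f1, f2))  # -> (('a1', 'b2', 'c1'), 90)
```

For larger GAI trees the same idea generalizes to message passing over the tree, in the spirit of the probabilistic-reasoning results cited above.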
Next, we analyze the number of rounds it takes for the auction to terminate.\nPhase B requires on the order of max_{i=1,…,n} π_i^T(η_i)/ε rounds.\nSince this is equivalent to price-only auctions, the concern is only with the time complexity of phase A.\nSince prices cannot go below f_{b,r}(θ_r), an upper bound on the number of rounds required follows from the total gap between the initial prices and the buyer's valuation.\nHowever, phase A may converge faster.\nLet the initial negative profit chosen by the auctioneer be m = max_{θ∈Θ} π_b^1(θ).\nIn the worst case phase A needs to run until ∀θ ∈ Θ, π_b(θ) = m.\nThis happens, for example, when ∀θ_r ∈ I, p^t(θ_r) = f_{b,r}(θ_r) + |m|/g.\nIn general, the closer the initial prices reflect the buyer's valuation, the faster phase A converges.\nOne extreme is to choose p^1(θ_r) = f_{b,r}(θ_r) + |m|/g; that would make phase A redundant, at the cost of full initial revelation of the buyer's valuation, as done in other mechanisms discussed below.\nTable 1: GAI utility functions for the example domain.\nf_b represents the buyer's valuation, and f_1 and f_2 the costs of the sellers s_1 and s_2.\nBetween this option and the other extreme, which is ∀α, α̂ ∈ I, p^1(α) = p^1(α̂), the auctioneer has a range of choices to determine the right tradeoff between convergence time and information revelation.\nIn the example below the choice of a lower initial price for the domain of I1 provides some speedup by revealing a harmless amount of information.\nAnother potential concern is the communication cost associated with the Japanese auction style.\nThe sellers need to send their bids over and over again at each round.\nA simple change can be made to avoid much of this redundant communication: the auction can retain sub-bids from previous rounds on sub-configurations whose price did not change.\nSince combinations of sub-bids from different rounds can yield sub-optimal configurations, each sub-bid should be tagged with the number of the latest round
in which it was submitted, and only consistent combinations from the same round are considered to be full bids.\nWith this implementation sellers need not resubmit their bids until the price of at least one sub-configuration has changed.\n5.6 Example\nWe use the example settings introduced in Section 5.2.\nRecall that the GAI structure is I1 = {a, b}, I2 = {b, c} (note that e = 1).\nTable 1 shows the GAI utilities for the buyer and the two sellers s1, s2.\nThe efficient allocation is (s1, a1b2c1), with a surplus of 45.\nThe maximal surplus of the second-best seller, s2, is 25, achieved by a1b1c1, a2b1c1, and a2b2c2.\nWe set all initial prices over I1 to 75, and all initial prices over I2 to 90.\nWe set ε = 8, meaning that the price reduction for sub-configurations is 4.\nThough with these numbers it is not guaranteed by Theorem 17, we expect s1 to win on either the efficient allocation or on a1b2c2, which provides a surplus of 39.\nThe reason is that these are the only two configurations which are within (e + 1)ε = 16 of being efficient for s1 (therefore one of them must be chosen by phase A), and both provide more than ε surplus over s2's most efficient configuration (and this is sufficient in order to win in phase B).\nTable 2 shows the progress of phase A.
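The round mechanics underlying Table 2 can be sketched in Python under a simplified, illustrative reading of the phase-A rules: sellers place sub-bids on a myopically preferred configuration, M^t collects the sub-configurations of buyer-preferred configurations, and every sub-configuration that was bid on but lies outside M^t has its price cut (by ε/2 here, matching the per-sub-configuration cut of 4 when ε = 8). The buyer values f_b below are hypothetical, not those of Table 1.

```python
from itertools import product

EPS = 8  # price parameter; each cut is EPS/2 = 4, as in the example

A, B, C = ['a1', 'a2'], ['b1', 'b2'], ['c1', 'c2']

# Hypothetical buyer values per sub-configuration (NOT the paper's Table 1).
f_b = {('a1', 'b1'): 50, ('a1', 'b2'): 70, ('a2', 'b1'): 55, ('a2', 'b2'): 80,
       ('b1', 'c1'): 60, ('b1', 'c2'): 50, ('b2', 'c1'): 75, ('b2', 'c2'): 65}

# Initial prices from the example: 75 on the domain of I1, 90 on I2.
prices = {s: 75 for s in product(A, B)} | {s: 90 for s in product(B, C)}

def buyer_profit(a, b, c):
    return (f_b[(a, b)] - prices[(a, b)]) + (f_b[(b, c)] - prices[(b, c)])

def m_set(tol):
    """Sub-configurations of configurations within `tol` of the buyer's
    maximal profit (a simplified stand-in for M^t)."""
    best = max(buyer_profit(a, b, c) for a, b, c in product(A, B, C))
    m = set()
    for a, b, c in product(A, B, C):
        if buyer_profit(a, b, c) >= best - tol:
            m |= {(a, b), (b, c)}
    return m

# Suppose both sellers myopically bid on a2b1c1, i.e. sub-bids a2b1, b1c1.
bids = {('a2', 'b1'), ('b1', 'c1')}
for sub in bids - m_set(EPS):   # bid-on sub-configurations outside M^t
    prices[sub] -= EPS // 2     # are reduced by EPS/2

print(prices[('a2', 'b1')], prices[('b1', 'c1')])  # -> 71 86
```

Iterating this update until a seller holds a full bid inside M^t is what drives the convergence traced round by round below.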
Initially all configurations have the same cost (165), so sellers bid on their lowest-cost configuration, which is a2b1c1 for both (with profit 80 to s1 and 90 to s2), and that translates to sub-bids on a2b1 and b1c1.\nM^1 contains the sub-configurations a2b2 and b2c1 of the highest-value configuration a2b2c1.\nPrice is therefore decreased on a2b1 and b1c1.\nAfter the price change, s1 has higher profit (74) on a1b2c2, and she therefore bids on a1b2 and b2c2.\nNow (round 2) their prices go down, reducing the profit on a1b2c2 to 66, and therefore in round 3 s1 prefers a2b1c2 (profit 67).\nAfter the next price change the configurations a1b2c1 and a1b2c2 both become optimal (profit 66), and the sub-bids a1b2, b2c1 and b2c2 capture the two.\nThese configurations stay optimal for another round (5), with profit 62.\nAt this point s1 has a full bid (in fact two full bids: a1b2c2 and a1b2c1) in M^5, and therefore she no longer changes her bids, since the price of her optimal configurations does not decrease.\nTable 2: Auction progression in phase A. Sell bids and designation of M^t (using *) are shown below the price of each sub-configuration.\ns2 sticks to a2b1c1 during the first four rounds, switching to a1b1c1 in round 5.\nIt takes four more rounds for s2 and M^t to converge (M^10 ∩ B_2^10 = {a1b1c1}).\nAfter round 9 the auction sets η1 = a1b2c1 (which yields more buyer profit than a1b2c2) and η2 = a1b1c1.\nFor the next round (10), Δ = 8, increased by 8 for each subsequent round.\nNote that p^9(a1b1c1) = 133 and c2(a1b1c1) = 90, therefore π_2^T(η2) = 43.\nIn round 15, Δ = 48, meaning p^15(a1b1c1) = 85, and that causes s2 to drop out, setting the final allocation to (s1, a1b2c1) and p^15(a1b2c1) = 157 − 48 = 109.\nThat leaves the buyer with a profit of 31 and s1 with a profit of 14, less than ε below the VCG profit of 20.\nThe welfare achieved in this case is optimal.\nTo illustrate how some efficiency loss could occur, consider the case that c1(b2c2) =
60.\nIn that case, in round 3 the configuration a1b2c2 provides the same profit (67) as a2b1c2, and s1 bids on both.\nWhile a2b1c2 is no longer optimal after the price change, a1b2c2 remains optimal in subsequent rounds because b2c2 ∈ M^t, and the price change of a1b2 affects both a1b2c2 and the efficient configuration a1b2c1.\nWhen phase A ends, B_1^10 ∩ M^10 = {a1b2c2}, so the auction terminates with the slightly suboptimal configuration and surplus 40.","keyphrases":["multiattribut auction","auction","iter auction mechan","prefer handl","measur valu function theori","mvf","gau","gai base auction"],"prmu":["P","P","P","M","R","U","U","M"]} {"id":"C-31","title":"Apocrita: A Distributed Peer-to-Peer File Sharing System for Intranets","abstract":"Many organizations are required to author documents for various purposes, and such documents may need to be accessible by all members of the organization. This access may be needed for editing or simply viewing a document. In some cases these documents are shared between authors, via email, to be edited. This can easily cause an incorrect version to be sent, or conflicts to be created between multiple users trying to make amendments to a document. There may even be multiple different documents in the process of being edited. The user may be required to search for a particular document; search tools such as Google Desktop may be a solution for local documents, but they will not find a document on another user's machine. Another problem arises when a document is made available on a user's machine and that user is offline, in which case the document is no longer accessible. In this paper we present Apocrita, a revolutionary distributed P2P file sharing system for Intranets.","lvl-1":"Apocrita: A Distributed Peer-to-Peer File Sharing System for Intranets Joshua J. Reynolds, Robbie McLeod, Qusay H.
Mahmoud Distributed Computing and Wireless & Telecommunications Technology University of Guelph-Humber Toronto, ON, M9W 5L7 Canada {jreyno04,rmcleo01,qmahmoud}@uoguelph.ca ABSTRACT Many organizations are required to author documents for various purposes, and such documents may need to be accessible by all members of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are shared between authors, via email, to be edited.\nThis can easily cause an incorrect version to be sent, or conflicts to be created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document; search tools such as Google Desktop may be a solution for local documents, but they will not find a document on another user's machine.\nAnother problem arises when a document is made available on a user's machine and that user is offline, in which case the document is no longer accessible.\nIn this paper we present Apocrita, a revolutionary distributed P2P file sharing system for Intranets.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications.\nGeneral Terms Design, Experimentation, Performance.\n1.\nINTRODUCTION The Peer-to-Peer (P2P) computing paradigm is becoming a completely new form of mutual resource sharing over the Internet.\nWith increasingly commonplace broadband Internet access, P2P technology has finally become a viable way to share documents and media files.\nThere are already programs on the market that enable P2P file sharing.\nThese programs enable millions of users to share files among themselves.\nWhile the utilization of P2P clients is already a gigantic step forward compared to downloading files off websites, using such programs is not without problems.\nThe downloaded files still require a lot of
manual management by the user.\nThe user still needs to put the files in the proper directory, manage files with multiple versions, and delete the files when they are no longer wanted.\nWe strive to make the process of sharing documents within an Intranet easier.\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all members of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are sent between authors, via email, to be edited.\nThis can easily cause an incorrect version to be sent, or conflicts to be created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document; search tools such as Google Desktop may be a solution for local documents, but they will not find a document on another user's machine.\nFurthermore, some organizations do not have a file sharing server or the necessary network infrastructure to enable one.\nIn this paper we present Apocrita, which is a cost-effective distributed P2P file sharing system for such organizations.\nThe rest of this paper is organized as follows.\nIn Section 2, we present Apocrita.\nThe distributed indexing mechanism and protocol are presented in Section 3.\nSection 4 presents the peer-to-peer distribution model.\nA proof-of-concept prototype is presented in Section 5, and performance evaluations are discussed in Section 6.\nRelated work is presented in Section 7, and finally conclusions and future work are discussed in Section 8.\n2.\nAPOCRITA Apocrita is a distributed peer-to-peer file sharing system, and has been designed to make finding documents easier in an Intranet environment.\nCurrently, it is possible for documents to be located on a user's machine or on a remote machine.\nIt is even possible that different revisions could reside on each
node on the Intranet.\nThis means there must be a manual process to maintain document versions.\nApocrita solves this problem using two approaches.\nFirst, due to the inherent nature of Apocrita, the document will only reside in a single logical location.\nSecond, Apocrita provides a method of reverting to previous document versions.\nPermission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.\nTo copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and\/or a fee.\nACMSE'07, MARCH 23-24, 2007, WINSTON-SALEM, NC, USA.\nCOPYRIGHT 2007 ACM 978-1-59593-629-5\/07\/0003 ...$5.00.\nApocrita will also distribute documents across multiple machines to ensure high availability of important documents.\nFor example, if a machine contains an important document and the machine is currently inaccessible, the system is capable of maintaining availability of the document through this distribution mechanism.\nIt provides a simple interface for searching and accessing files that may exist either locally or remotely.\nThe distributed nature of the documents is transparent to the user.\nApocrita supports a decentralized network model where the peers use a discovery protocol to locate other peers.\nApocrita is intended for network users on an Intranet.\nThe main focus is organizations that may not have a network large enough to require a file server and supporting infrastructure.\nIt eliminates the need for documents to be manually shared between users while being edited, and reduces the possibility of conflicting versions being distributed.\nThe system also provides some redundancy, and in the event of a single machine failure, no important documents will be lost.\nIt is operating system independent, and easy
to access through a web browser or through a standalone application.\nTo decrease the time required for indexing a large number of documents, the indexing process is distributed across available idle nodes.\nLocal and remote files should be easily accessible through a virtual mountable file system, providing transparency for users.\n3.\nDISTRIBUTED INDEXING Apocrita uses a distributed index for all the documents that are available on the Intranet.\nEach node will contain part of the full index, and be aware of which part of the index each other node holds.\nA node will be able to contact each node that contains a unique portion of the index.\nIn addition, each node has a separate local index of its own documents.\nHowever, as discussed later, in the current implementation each node has a copy of the entire index.\nIndexing of the documents is distributed.\nTherefore, if a node is in the process of indexing many documents, it will break up the work over the nodes.\nOnce a node's local index is updated with the new documents, the distributed index will then be updated.\nThe current distributed indexing system consists of three separate modules: NodeController, FileSender, and NodeIndexer.\nThe responsibility of each module is discussed later in this section.\n3.1 Indexing Protocol The protocol we have designed for the distributed indexing is depicted in Figure 1.\nFigure 1.\nApocrita distributed indexing protocol.\nIDLE QUERY: The IDLE QUERY is sent out from the initiating node to determine which other nodes may be able to help with the overall indexing process.\nThere are no parameters sent with the command.\nThe receiving node will respond with either a BUSY or IDLE command.\nIf the IDLE command is received, the initiating node will add the responding node to a list of available distributed indexing helpers.\nIn the case of a BUSY command being received, the responding node is ignored.\nBUSY: Once a node receives an IDLE QUERY, it will determine whether it can be
considered a candidate for distributed indexing.\nThis determination is based on the overall CPU usage of the node.\nIf the node is using most of its CPU for other processes, the node will respond to the IDLE QUERY with a BUSY command.\nIDLE: As with the case of the BUSY response, the node receiving the IDLE QUERY will determine its eligibility for distributed indexing.\nTo be considered a candidate for distributed indexing, the overall CPU usage must be low enough to allow for dedicated indexing of the distributed documents.\nIf this is the case, the node will respond with an IDLE command.\nINCOMING FILE: Once the initiating node assembles a set of idle nodes to assist with the distributed indexing, it will divide the documents to be sent among the nodes.\nTo do this, it sends an INCOMING FILE message, which contains the name of the file as well as its size in bytes.\nAfter the INCOMING FILE command has been sent, the initiating node will begin to stream the file to the other node.\nThe initiating node will loop through the files that are to be sent to the other node, each file stream being preceded by the INCOMING FILE command with the appropriate parameters.\nINDEX FILE: Once the indexing node has completed the indexing process for its set of files, it must send the resultant index back to the initiating node.\nThe index is comprised of multiple files, which exist on the file system of the indexing node.\nAs with the INCOMING FILE command, the indexing node streams each index file after sending an INDEX FILE command.\nThe INDEX FILE command has two parameters: the first is the name of the index, and the second is the size of the file in bytes.\nSEND COMPLETE: When sending the sets of files for both the index and the files to be indexed, the node must notify the corresponding node when the process is complete.\nOnce the initiating node is finished sending the set of documents to be indexed, it will then send a SEND COMPLETE command indicating to the indexing node
that there are no more files and the node can proceed with indexing the files.\nIn the case of the initiating node sending the index files, the indexing node will complete the transfer with the SEND COMPLETE command, indicating to the initiating node that there are no more index files to be sent and that the initiating node can then assemble those index files into the main index.\nThe NodeController is responsible for setting up connections with nodes in the idle state to distribute the indexing process.\nUsing JXTA [5], the node controller will obtain a set of nodes.\nThis set of nodes is iterated, and each one is sent the IDLE QUERY command.\nThe nodes that respond with IDLE are then collected.\nThe set of idle nodes includes the node initiating the distributed indexing process, referred to as the local node.\nOnce the collection of idle nodes is obtained, the node updates the set of controllers and evenly divides the set of documents that are to be indexed.\nFor example, if there are 100 documents and 10 nodes (including the local node), then each node will have 10 documents to index.\nFor each indexing node an instance of the FileSender object is created.\nThe FileSender is aware of the set of documents that node is responsible for.\nOnce a FileSender object has been created for each node, the NodeController waits for each FileSender to complete.\nWhen the FileSender objects have completed, the NodeController will take the resultant indexes from each node and pass them to an instance of the IndexCompiler, which maintains the index and the list of FileSenders.\nOnce the IndexCompiler has completed, it will return to the idle state and activate the directory scanner to monitor the locally owned set of documents for changes that may require reindexing.\nThe NodeIndexer is responsible for receiving documents sent to it by the initiating node and then indexing them using the Lucene engine [7].\nOnce the indexing is complete, the resulting index is streamed back to the
initiating node as well as compiled into the indexer node's own local index.\nBefore initiating the indexing process it must be sent an IDLE QUERY message.\nThis is the first command that sets off the indexing process.\nThe indexer node will determine whether it is considered idle based on the current CPU usage.\nAs outlined in the protocol section, if the node is not being used and has a low overall CPU usage percentage, it will return IDLE in response to the IDLE QUERY command.\nIf the indexer node's CPU usage is above 50% for a specified amount of time, it is then considered to be busy and will respond to the IDLE QUERY command with BUSY.\nIf a node is determined busy, it returns to its listening state, waiting for another IDLE QUERY from another initiating node.\nIf the node is determined to be idle, it will enter the state where it will receive the files from the initiating node that it is responsible for indexing.\nOnce all of the files are received from the initiating node, indicated by a SEND COMPLETE message, it starts an instance of the Lucene indexing engine.\nThe files are stored in a temporary directory separate from the node's local documents that it is responsible for maintaining an index of.\nThe Lucene index writer then indexes all of the transferred files.\nThe index is stored on the drive within a temporary directory separate from the current index.\nAfter the indexing of the files completes, the indexer node enters the state where the index files are sent back to the initiating node.\nThe indexer node loops through all of the files created by Lucene's IndexWriter and streams them to the initiating node.\nOnce these files are sent back, that index is then merged into the indexer node's own full index of the existing files.\nIt then enters the idle state, where it will listen for any other nodes that require distributing the indexing process.\nThe FileSender object is the initiating-node equivalent of the indexer node.\nIt initiates the communication between the initiating
node and the node that will assist in the distributed indexing.\nThe initiating node runs many instances of the FileSender object, one for each node it has determined to be idle.\nUpon instantiation, the FileSender is passed the node that it is responsible for contacting and the set of files that must be sent.\nThe FileSender's first job is to send the files that are to be indexed by the other idle node.\nThe files are streamed one at a time to the other node.\nIt sends each file using the INCOMING FILE command.\nWith that command it sends the name of the file being sent and its size in bytes.\nOnce all files have been sent, the FileSender sends the SEND COMPLETE command.\nThe FileSender creates an instance of Lucene's IndexWriter and prepares to create the index in a temporary directory on the file system.\nThe FileSender will begin to receive the files that are to be saved within the index.\nIt receives an INDEX FILE command with the name of the file and its size in bytes.\nThis file is then streamed into the temporary index directory on the FileSender node.\nAfter the transfer of the index files has been completed, the FileSender notifies the instance of the IndexCompiler that it is ready to combine the index.\nEach instance of the FileSender has its own unique section of temporary space to store the index that has been transferred back from the indexing node.\nWhen notifying the IndexCompiler, it will also pass the location of that particular FileSender's index directory.\n4.\nPEER-TO-PEER DISTRIBUTION Apocrita uses a peer-to-peer distribution model in order to distribute files.\nFiles are distributed solely from a serving node to a client node without regard for the availability of file pieces from other clients in the network.\nThis means that the file transfers will be fast and efficient and should not severely affect the usability of serving nodes from the point of view of a local user.\nThe JXTA framework [5] is used in order to
implement peer-to-peer functionality.\nThis has been decided due to the extremely short timeline of the project, which allows us to take advantage of over five years of testing and development and support from many large organizations employing JXTA in their own products.\nWe are not concerned with any potential quality problems because JXTA is considered to be the most mature and stable peer-to-peer framework available.\nUsing JXTA terminology, there are three types of peers used in node classification.\nEdge peers are typically low-bandwidth, non-dedicated nodes.\nDue to these characteristics, edge peers are not used with Apocrita.\nRelay peers are typically higher-bandwidth, dedicated nodes.\nThis is the classification of all nodes in the Apocrita network and, as such, is the default classification used.\nRendezvous peers are used to coordinate message passing between nodes in the Apocrita network.\nThis means that a minimum of one rendezvous peer per subnet is required.\n4.1 Peer Discovery The Apocrita server subsystem uses the JXTA Peer Discovery Protocol (PDP) in order to find participating peers within the network, as shown in Figure 2.\nFigure 2.\nApocrita peer discovery process.\nThe PDP listens for peer advertisements from other nodes in the Apocrita swarm.\nIf a peer advertisement is detected, the server will attempt to join the peer group and start actively contributing to the network.\nIf no peers are found by the discovery service, the server will create a new peer group and start advertising this peer group.\nThis new peer group will be periodically advertised on the network; any new peers joining the network will attach to this peer group.\nA distinct advantage of using the JXTA PDP is that Apocrita does not have to be sensitive to particular networking nuances such as Maximum Transmission Unit (MTU).\nIn addition, Apocrita does not have to support one-to-many packet delivery methods such as multicast and instead can rely on JXTA for this
support.\n4.2 Index Query Operation All nodes in the Apocrita swarm have a complete and up-to-date copy of the network index stored locally.\nThis makes querying the index for search results trivial.\nUnlike the Gnutella protocol, a query does not have to propagate throughout the network.\nThis also means that query results are returned very quickly, much faster than with protocols that rely on nodes to forward the query throughout the network and then wait for results.\nThis is demonstrated in Figure 3.\nFigure 3.\nApocrita query operation.\nEach document in the swarm has a unique document identification number (ID).\nA node queries the index and receives a result containing both the document ID and a list of peers holding a copy of the matched document.\nIt is then the responsibility of the searching peer to contact the peers in the list to negotiate the file transfer between client and server.\n5.\nPROTOTYPE IMPLEMENTATION Apocrita uses the Lucene framework [7], a project under development by the Apache Software Foundation.\nApache Lucene is a high-performance, full-featured text search engine library written entirely in Java.\nIn the current implementation, Apocrita is only capable of indexing plain text documents.\nApocrita uses the JXTA framework [5] as a peer-to-peer transport library between nodes.\nJXTA is used to pass both messages and files between nodes in the search network.\nBy using JXTA, Apocrita takes advantage of a reliable and proven peer-to-peer transport mechanism.\nIt uses the pipe facility to pass messages and files between nodes.\nThe pipe facility provides several types of pipe advertisements, including an unsecured unicast pipe, a secured unicast pipe, and a propagated unsecured pipe.\nMessage passing is used to exchange status messages between nodes in order to aid in indexing, searching, and retrieval.\nFor example, a node attempting to find an idle node to participate in 
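Because every node holds the full index locally, the query path described above amounts to a local lookup that returns matching document IDs together with the peers holding each document, with no network round trip. A minimal sketch with hypothetical field and method names:

```java
import java.util.*;

// Sketch of the local query path: term -> document IDs, document ID -> peers.
// The searching peer then contacts one of the listed peers for the transfer.
public class LocalIndex {
    private final Map<String, Set<Integer>> termToDocs = new HashMap<>();
    private final Map<Integer, List<String>> docToPeers = new HashMap<>();

    // Register a document, the peers holding it, and its indexed terms.
    public void add(int docId, List<String> peers, String... terms) {
        docToPeers.put(docId, peers);
        for (String t : terms)
            termToDocs.computeIfAbsent(t, k -> new TreeSet<>()).add(docId);
    }

    // Result: document ID plus the peer list a client should contact.
    public Map<Integer, List<String>> query(String term) {
        Map<Integer, List<String>> hits = new LinkedHashMap<>();
        for (int id : termToDocs.getOrDefault(term, Set.of()))
            hits.put(id, docToPeers.get(id));
        return hits;
    }
}
```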
indexing will query nodes via the message facility.\nIdle nodes will reply with a status message to indicate they are available to start indexing.\nFile passing is used within Apocrita for file transfer.\nAfter a file has been searched for and located within the peer group, a JXTA socket is opened and the file transfer takes place.\nA JXTA socket is similar to a standard Java socket, except that it uses JXTA pipes as the underlying network transport.\nFile passing uses an unsecured unicast pipe to transfer data.\nFile passing is also used within Apocrita for index transfer.\nIndex transfer works exactly like a file transfer; in fact, the index is passed as a file.\nHowever, there is one key difference between file transfer and index transfer.\nIn the case of file transfer, a socket is created between only two nodes.\nIn the case of index transfer, a socket must be created between all nodes in the network in order to pass the index, which allows all nodes to have a full and complete index of the entire network.\nTo facilitate this transfer efficiently, index transfer uses an unsecured propagated pipe to communicate with all nodes in the Apocrita network.\n6.\nPERFORMANCE EVALUATION It is difficult to objectively benchmark the results obtained with Apocrita because no other system currently available has the same goals.\nWe have, however, evaluated the performance of the critical sections of the system, which were determined to be the most time-intensive processes.\nThe evaluation was completed on standard lab computers on a 100Mb\/s Ethernet LAN; the machines run Windows XP with a Pentium 4 CPU running at 2.4GHz and 512 MB of RAM.\nIndexing time was measured against two collections: the Time Magazine collection [8], which contains 432 documents and 83 queries with their most relevant results, and the NPL collection [8], which has a total of 11,429 documents and 
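The distinction between the two transfer modes can be sketched directly: a file moves over a single point-to-point socket between two nodes, while index files are propagated to every node in the group so that each keeps a complete copy. The Node type and method names below are illustrative, not Apocrita's actual classes:

```java
import java.util.*;

// Sketch of the two transfer modes: unicast delivers to one node,
// propagation delivers the index files to every node in the group.
public class IndexPropagation {
    public static class Node {
        public final String id;
        public final Map<String, byte[]> indexFiles = new HashMap<>();
        public Node(String id) { this.id = id; }
    }

    // File transfer, sketched: a single point-to-point delivery.
    public static void unicast(String name, byte[] data, Node target) {
        target.indexFiles.put(name, data);
    }

    // Index transfer over a propagated pipe, sketched: one send reaches all
    // nodes, so every node ends up with the full index.
    public static void propagateIndex(Map<String, byte[]> indexFiles, List<Node> group) {
        for (Node n : group) n.indexFiles.putAll(indexFiles);
    }
}
```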
93 queries with expected results.\nEach document ranges in size between 4KB and 8KB.\nAs Figure 4 demonstrates, the number of nodes involved in the indexing process affects the time taken to complete the indexing process, sometimes even drastically.\nFigure 4.\nNode vs. index time.\nThe difference in going from one indexing node to two is the most drastic, equating to an indexing time 37% faster than with a single indexing node.\nThe difference between two indexing nodes and three is still significant, representing a 16% faster time than with two indexing nodes.\nAs the number of indexing nodes increases, the results are less dramatic.\nThis can be attributed to the time overhead associated with having many nodes perform indexing.\nThe time needed to communicate with a node is constant, so as the number of nodes increases, this constant becomes more prevalent.\nJoining the partial indexing results is also a complex operation, and it is complicated further as the number of indexing nodes increases.\nSocket performance is also a very important part of Apocrita.\nBenchmarks were performed using a 65MB file on a system with both the client and server running locally.\nThis was done to isolate possible network issues.\nAlthough less drastic, similar results were observed when the client and server ran on independent hardware.\nTo mitigate possible unexpected errors, each test was run 10 times.\nFigure 5.\nJava sockets vs. 
JXTA sockets.\nAs Figure 5 demonstrates, the performance of JXTA sockets is abysmal compared to that of standard Java sockets.\nThe minimum transfer rate obtained using Java sockets is 81,945KB\/s, while the minimum transfer rate obtained using JXTA sockets is much lower at 3,805KB\/s.\nThe maximum transfer rate obtained using Java sockets is 97,412KB\/s, while the maximum transfer rate obtained using JXTA sockets is 5,530KB\/s.\nFinally, the average transfer rate using Java sockets is 87,540KB\/s, while the average transfer rate using JXTA sockets is 4,293KB\/s.\nThe major problem revealed by these benchmarks is that the underlying network transport mechanism does not perform as quickly or efficiently as expected.\nTo gain a performance increase, the JXTA framework would need to be substituted with a more traditional approach.\nThe indexing time is also a bottleneck and will likewise need to be improved for the overall quality of Apocrita to improve.\n7.\nRELATED WORK Several decentralized P2P systems [1, 2, 3] exist today, and Apocrita shares some of their functionality.\nHowever, Apocrita also has novel searching and indexing features that make this system unique.\nFor example, Majestic-12 [4] is a distributed search and indexing project designed for searching the Internet.\nEach user installs a client, which is responsible for indexing a portion of the web.\nA central area for querying the index is available on the Majestic-12 web page.\nThe index itself is not distributed; only the act of indexing is distributed.\nThe distributed indexing aspect of this project most closely relates to Apocrita's goals.\nYaCy [6] is a peer-to-peer web search application.\nYaCy consists of a web crawler, an indexer, a built-in database engine, and a p2p index exchange protocol.\nYaCy is designed to maintain a distributed index of the Internet, using a distributed hash table (DHT) to maintain the index.\nThe local node is used to query, but all results that 
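As a quick sanity check on the benchmark figures above, the average rates imply that plain Java sockets moved data roughly 20 times faster than JXTA sockets in this test:

```java
// Ratio of the average transfer rates reported in the socket benchmark
// (87,540 KB/s for Java sockets vs. 4,293 KB/s for JXTA sockets).
public class Throughput {
    public static double ratio(double javaKBps, double jxtaKBps) {
        return javaKBps / jxtaKBps;
    }

    public static void main(String[] args) {
        double r = ratio(87540, 4293);  // averages from the benchmark text
        System.out.printf("Java sockets averaged %.1fx the JXTA throughput%n", r);  // ~20.4x
    }
}
```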
are returned are accessible on the Internet.\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT.\nYaCy, however, is designed as a web search engine and, as such, solves a much different problem than Apocrita.\n8.\nCONCLUSIONS AND FUTURE WORK We presented Apocrita, a distributed P2P searching and indexing system intended for network users on an Intranet.\nIt can help organizations with no network file server or the necessary network infrastructure to share documents.\nIt eliminates the need for documents to be manually shared among users while being edited and reduces the possibility of conflicting versions being distributed.\nA proof-of-concept prototype has been constructed, but the results from measuring the network transport mechanism and the indexing time were not as impressive as initially envisioned.\nDespite these shortcomings, the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems.\nFor future work, Apocrita will have a smart content distribution model in which a single instance of a file can intelligently and transparently replicate throughout the network, ensuring that a copy of every important file is always available regardless of the availability of specific nodes.\nIn addition, we plan to integrate a revision control system into the content distribution portion of Apocrita so that users can update an existing file they have found, with the old revision maintained and the new revision propagated.\nFinally, the current implementation has some overhead and redundancy because the entire index is maintained on each individual node; to address this, we plan to design a distributed index.\n9.\nREFERENCES [1] Rodrigues, R., Liskov, B., Shrira, L.: The Design of a Robust Peer-to-Peer System.\nAvailable online: 
http:\/\/www.pmg.lcs.mit.edu\/~rodrigo\/ew02-robust.pdf.\n[2] Chawathe, Y., Ratnasamy, S., Breslau, L., Lanham, N., and Chenker, S.: Making Gnutella-like P2P Systems Scalable.\nIn Proceedings of SIGCOMM``03, Karlsruhe, Germany.\n[3] Harvest: A Distributed Search System: http:\/\/harvest.sourceforge.net.\n[4] Majestic-12: Distributed Search Engine: http:\/\/www.majestic12.co.uk.\n[5] JXTA: http:\/\/www.jxta.org.\n[6] YaCy: Distributed P2P-based Web Indexing: http:\/\/www.yacy.net\/yacy.\n[7] Lucene Search Engine Library: http:\/\/lucene.apache.org.\n[8] Test Collections (Time Magazine and NPL): www.dcs.gla.ac.uk\/idom\/ir_resources\/test_collections.\n178","lvl-3":"Apocrita: A Distributed Peer-to-Peer File Sharing System for Intranets\nABSTRACT\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all member of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are shared between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user's machine.\nAnother problem arises when a document is made available on a user's machine and that user is offline, in which case the document is no longer accessible.\nIn this paper we present Apocrita, a revolutionary distributed P2P file sharing system for Intranets.\n1.\nINTRODUCTION\nThe Peer-to-Peer (P2P) computing paradigm is becoming a completely new form of mutual resource sharing over the Internet.\nWith the increasingly common place broadband Internet access, P2P technology has finally become a viable way to 
share documents and media files.\nThere are already programs on the market that enable P2P file sharing.\nThese programs enable millions of users to share files among themselves.\nWhile the utilization of P2P clients is already a gigantic step forward compared to downloading files off websites, using such programs are not without their problems.\nThe downloaded files still require a lot of manual management by the user.\nThe user still needs to put the files in the proper directory, manage files with multiple versions, delete the files when they are no longer wanted.\nWe strive to make the process of sharing documents within an Intranet easier.\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all members of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are sent between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user's machine.\nFurthermore, some organizations do not have a file sharing server or the necessary network infrastructure to enable one.\nIn this paper we present Apocrita, which is a cost-effective distributed P2P file sharing system for such organizations.\nThe rest of this paper is organized as follows.\nIn section 2, we present Apocrita.\nThe distributed indexing mechanism and protocol are presented in Section 3.\nSection 4 presents the peer-topeer distribution model.\nA proof of concept prototype is presented in Section 5, and performance evaluations are discussed in Section 6.\nRelated work is presented is Section 7, 
and finally conclusions and future work are discussed in Section 8.\n2.\nAPOCRITA\n3.\nDISTRIBUTED INDEXING\n3.1 Indexing Protocol\n4.\nPEER-TO-PEER DISTRIBUTION\n4.1 Peer Discovery\n4.2 Index Query Operation\n5.\nPROTOTYPE IMPLEMENTATION\n6.\nPERFORMANCE EVALUATION\n7.\nRELATED WORK\nSeveral decentralized P2P systems [1, 2, 3] exist today that Apocrita features some of their functionality.\nHowever, Apocrita also has unique novel searching and indexing features that make this system unique.\nFor example, Majestic-12 [4] is a distributed search and indexing project designed for searching the Internet.\nEach user would install a client, which is responsible for indexing a portion of the web.\nA central area for querying the index is available on the Majestic-12 web page.\nThe index itself is not distributed, only the act of indexing is distributed.\nThe distributed indexing aspect of this project most closely relates Apocrita goals.\nYaCy [6] is a peer-to-peer web search application.\nYaCy consists of a web crawler, an indexer, a built-in database engine, and a p2p index exchange protocol.\nYaCy is designed to maintain a distributed index of the Internet.\nIt used a distributed hash table (DHT) to maintain the index.\nThe local node is used to query but all results that are returned are accessible on the Internet.\nYaCy used many peers and DHT to maintain a distributed index.\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT.\nYaCy however, is designed as a web search engine and, as such solves a much different problem than Apocrita.\n8.\nCONCLUSIONS AND FUTURE WORK\nWe presented Apocrita, a distributed P2P searching and indexing system intended for network users on an Intranet.\nIt can help organizations with no network file server or necessary network infrastructure to share documents.\nIt eliminates the need for documents to be manually shared among users while being edited and reduce the 
possibility of conflicting versions being distributed.\nA proof of concept prototype has been constructed, but the results from measuring the network transport mechanism and the indexing time were not as impressive as initially envisioned.\nDespite these shortcomings, the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems.\nFor future work, Apocrita will have a smart content distribution model in which a single instance of a file can intelligently and transparently replicate throughout the network to ensure a copy of every important file will always be available regardless of the availability of specific nodes in the network.\nIn addition, we plan to integrate a revision control system into the content distribution portion of Apocrita so that users could have the ability to update an existing file that they found and have the old revision maintained and the new revision propagated.\nFinally, the current implementation has some overhead and redundancy due to the fact that the entire index is maintained on each individual node, we plan to design a distributed index.","lvl-4":"Apocrita: A Distributed Peer-to-Peer File Sharing System for Intranets\nABSTRACT\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all member of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are shared between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user's machine.\nAnother 
problem arises when a document is made available on a user's machine and that user is offline, in which case the document is no longer accessible.\nIn this paper we present Apocrita, a revolutionary distributed P2P file sharing system for Intranets.\n1.\nINTRODUCTION\nThe Peer-to-Peer (P2P) computing paradigm is becoming a completely new form of mutual resource sharing over the Internet.\nWith the increasingly common place broadband Internet access, P2P technology has finally become a viable way to share documents and media files.\nThere are already programs on the market that enable P2P file sharing.\nThese programs enable millions of users to share files among themselves.\nThe downloaded files still require a lot of manual management by the user.\nThe user still needs to put the files in the proper directory, manage files with multiple versions, delete the files when they are no longer wanted.\nWe strive to make the process of sharing documents within an Intranet easier.\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all members of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are sent between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user's machine.\nFurthermore, some organizations do not have a file sharing server or the necessary network infrastructure to enable one.\nIn this paper we present Apocrita, which is a cost-effective distributed P2P file sharing system for such organizations.\nIn section 2, we present 
Apocrita.\nThe distributed indexing mechanism and protocol are presented in Section 3.\nSection 4 presents the peer-topeer distribution model.\nA proof of concept prototype is presented in Section 5, and performance evaluations are discussed in Section 6.\nRelated work is presented is Section 7, and finally conclusions and future work are discussed in Section 8.\n7.\nRELATED WORK\nSeveral decentralized P2P systems [1, 2, 3] exist today that Apocrita features some of their functionality.\nHowever, Apocrita also has unique novel searching and indexing features that make this system unique.\nFor example, Majestic-12 [4] is a distributed search and indexing project designed for searching the Internet.\nEach user would install a client, which is responsible for indexing a portion of the web.\nA central area for querying the index is available on the Majestic-12 web page.\nThe index itself is not distributed, only the act of indexing is distributed.\nThe distributed indexing aspect of this project most closely relates Apocrita goals.\nYaCy [6] is a peer-to-peer web search application.\nYaCy is designed to maintain a distributed index of the Internet.\nIt used a distributed hash table (DHT) to maintain the index.\nThe local node is used to query but all results that are returned are accessible on the Internet.\nYaCy used many peers and DHT to maintain a distributed index.\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT.\nYaCy however, is designed as a web search engine and, as such solves a much different problem than Apocrita.\n8.\nCONCLUSIONS AND FUTURE WORK\nWe presented Apocrita, a distributed P2P searching and indexing system intended for network users on an Intranet.\nIt can help organizations with no network file server or necessary network infrastructure to share documents.\nIt eliminates the need for documents to be manually shared among users while being edited and reduce the possibility 
of conflicting versions being distributed.\nDespite these shortcomings, the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems.","lvl-2":"Apocrita: A Distributed Peer-to-Peer File Sharing System for Intranets\nABSTRACT\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all member of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are shared between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user's machine.\nAnother problem arises when a document is made available on a user's machine and that user is offline, in which case the document is no longer accessible.\nIn this paper we present Apocrita, a revolutionary distributed P2P file sharing system for Intranets.\n1.\nINTRODUCTION\nThe Peer-to-Peer (P2P) computing paradigm is becoming a completely new form of mutual resource sharing over the Internet.\nWith the increasingly common place broadband Internet access, P2P technology has finally become a viable way to share documents and media files.\nThere are already programs on the market that enable P2P file sharing.\nThese programs enable millions of users to share files among themselves.\nWhile the utilization of P2P clients is already a gigantic step forward compared to downloading files off websites, using such programs are not without their problems.\nThe downloaded files still require a lot of manual management by the user.\nThe user still needs to put 
the files in the proper directory, manage files with multiple versions, delete the files when they are no longer wanted.\nWe strive to make the process of sharing documents within an Intranet easier.\nMany organizations are required to author documents for various purposes, and such documents may need to be accessible by all members of the organization.\nThis access may be needed for editing or simply viewing a document.\nIn some cases these documents are sent between authors, via email, to be edited.\nThis can easily cause incorrect version to be sent or conflicts created between multiple users trying to make amendments to a document.\nThere may even be multiple different documents in the process of being edited.\nThe user may be required to search for a particular document, which some search tools such as Google Desktop may be a solution for local documents but will not find a document on another user's machine.\nFurthermore, some organizations do not have a file sharing server or the necessary network infrastructure to enable one.\nIn this paper we present Apocrita, which is a cost-effective distributed P2P file sharing system for such organizations.\nThe rest of this paper is organized as follows.\nIn section 2, we present Apocrita.\nThe distributed indexing mechanism and protocol are presented in Section 3.\nSection 4 presents the peer-topeer distribution model.\nA proof of concept prototype is presented in Section 5, and performance evaluations are discussed in Section 6.\nRelated work is presented is Section 7, and finally conclusions and future work are discussed in Section 8.\n2.\nAPOCRITA\nApocrita is a distributed peer-to-peer file sharing system, and has been designed to make finding documents easier in an Intranet environment.\nCurrently, it is possible for documents to be located on a user's machine or on a remote machine.\nIt is even possible that different revisions could reside on each node on the Intranet.\nThis means there must be a manual 
process to maintain document versions.\nApocrita solves this problem using two approaches.\nFirst, due to the inherent nature of Apocrita, the document will only reside on a single logical location.\nSecond, Apocrita provides a method of reverting to previous document versions.\nApocrita\nwill also distribute documents across multiple machines to ensure high availability of important documents.\nFor example, if a machine contains an important document and the machine is currently inaccessible, the system is capable of maintaining availability of the document through this distribution mechanism.\nIt provides a simple interface for searching and accessing files that may exist either locally or remotely.\nThe distributed nature of the documents is transparent to the user.\nApocrita supports a decentralized network model where the peers use a discovery protocol to determine peers.\nApocrita is intended for network users on an Intranet.\nThe main focus is organizations that may not have a network large enough to require a file server and supporting infrastructure.\nIt eliminates the need for documents to be manually shared between users while being edited and reduces the possibility of conflicting versions being distributed.\nThe system also provides some redundancy and in the event of a single machine failure, no important documents will be lost.\nIt is operating system independent, and easy to access through a web browser or through a standalone application.\nTo decrease the time required for indexing a large number of documents, the indexing process is distributed across available idle nodes.\nLocal and remote files should be easily accessible through a virtual mountable file system, providing transparency for users.\n3.\nDISTRIBUTED INDEXING\nApocrita uses a distributed index for all the documents that are available on the Intranet.\nEach node will contain part of the full index, and be aware of what part of the index each other node has.\nA node will be able to 
contact each node that contains a unique portion of the index.\nIn addition, each node has a separate local index of its own documents.\nBut as discussed later, in the current implementation, each node has a copy of the entire index.\nIndexing of the documents is distributed.\nTherefore, if a node is in the process of indexing many documents, it will break up the work over the nodes.\nOnce a node's local index is updated with the new documents, the distributed index will then be updated.\nThe current distributed indexing system consists of three separate modules: NodeController, FileSender, and NodeIndexer.\nThe responsibility of each module is discussed later in this section.\n3.1 Indexing Protocol\nThe protocol we have designed for the distributed indexing is depicted in Figure 1.\nFigure 1.\nApocrita distributed indexing protocol.\nIDLE QUERY: The IDLE QUERY is sent out from the initiating node to determine which other nodes may be able to help with the overall indexing process.\nThere are no parameters sent with the command.\nThe receiving node will respond with either a BUSY or IDLE command.\nIf the IDLE command is received, the initiating node will add the responding node to a list of available distributed indexing helpers.\nIn the case of a BUSY command being received, the responding node is ignored.\nBUSY: Once a node received an IDL QUERY, it will determine whether it can be considered a candidate for distributed indexing.\nThis determination is based on the overall CPU usage of the node.\nIf the node is using most of its CPU for other processes, the node will respond to the IDLE QUERY with a BUSY command.\nIDLE: As with the case of the BUSY response, the node receiving the IDLE QUERY will determine its eligibility for distributed indexing.\nTo be considered a candidate for distributed indexing, the overall CPU usage must be at a minimum to all for dedicated indexing of the distributed documents.\nIf this is the case, the node will respond with an IDLE 
command.\nINCOMING FILE: Once the initiating node assembles a set of idle nodes to assist with the distributed indexing, it will divide the documents to be sent to the nodes.\nTo do this, it sends an INCOMING FILE message, which contains the name of the file as well as the size in bytes.\nAfter the INCOMING FILE command has been sent, the initiating node will begin to stream the file to the other node.\nThe initiating node will loop through the files that are to be sent to the other node; each file stream being preceded by the INCOMING FILE command with the appropriate parameters.\nINDEX FILE: Once the indexing node has completed the indexing process of the set of files, it must send the resultant index back to the initiating node.\nThe index is comprised of multiple files, which exist on the file system of the indexing node.\nAs with the INCOMING FILE command, the indexing node streams each index file after sending an INDEX FILE command.\nThe INDEX FILE command has two parameters: the first being the name of the index, and the second is the size of the file in bytes.\nSEND COMPLETE: When sending the sets of files for both the index and the files to be indexed, the node must notify the corresponding node when the process is complete.\nOnce the initiating node is finished sending the set of documents to be indexed, it will then send a SEND COMPLETE command indicating to the indexing node that there are no more files and the node can proceed with indexing the files.\nIn the case of the initiating node sending the index files, the indexing node will complete the transfer with the SEND COMPLETE command indicating to the initiating node that there are no more index files to be sent and the initiating node can then assemble those index files into the main index.\nThe NodeController is responsible for setting up connections with nodes in the idle state to distribute the indexing process.\nUsing JXTA [5], the node controller will obtain a set of nodes.\nThis set of nodes 
is iterated and each one is sent the IDLE QUERY command.\nThe nodes that respond with idle are then collected.\nThe set of idle nodes includes the node initiating the distributed indexing process, referred to as the local node.\nOnce the collection of idle nodes is obtained, the node updates the set of controllers and evenly divides the set of documents that are to be indexed.\nFor example, if there are 100 documents and 10 nodes (including the local node) then each node will have 10 documents to index.\nFor each indexing node an instance of the FileSender object is created.\nThe FileSender is aware of the set of documents that node is responsible for.\nOnce a FileSender object has been created for each node, the NodeController waits for each FileSender to complete.\nWhen the FileSender objects have completed the NodeController will take the resultant indexes from\neach node and pass them to an instance of the IndexCompiler, which maintains the index and the list of FileSenders.\nOnce the IndexCompiler has completed it will return to the idle state and activate the directory scanner to monitor the locally owned set of documents for changes that may require reindexing.\nThe NodeIndexer is responsible for receiving documents sent to it by the initiating node and then indexing them using the Lucene engine [7].\nOnce the indexing is complete the resulting index is streamed back to the initiating node as well as compiled in the indexer nodes own local index.\nBefore initiating the indexing process it must be sent an IDLE QUERY message.\nThis is the first command that sets off the indexing process.\nThe indexer node will determine whether it is considered idle based on the current CPU usage.\nAs outlined in the protocol section if the node is not being used and has a low overall CPU usage percentage it will return IDLE to the IDLE QUERY command.\nIf the indexer nodes CPU usage is above 50% for a specified amount of time it is then considered to be busy and will respond 
to the IDLE QUERY command with BUSY.\nIf a node is determined busy, it returns to its listening state, waiting for another IDLE QUERY from another initiating node.\nIf the node is determined to be idle, it enters the state in which it receives the files from the initiating node that it is responsible for indexing.\nOnce all of the files are received from the initiating node, indicated by a SEND COMPLETE message, it starts an instance of the Lucene indexing engine.\nThe files are stored in a temporary directory, separate from the node's local documents that it is responsible for maintaining an index of.\nThe Lucene index writer then indexes all of the transferred files.\nThe index is stored on the drive within a temporary directory, separate from the current index.\nAfter the indexing of the files completes, the indexer node enters the state in which the index files are sent back to the initiating node.\nThe indexer node loops through all of the files created by Lucene's IndexWriter and streams them to the initiating node.\nOnce these files are sent back, that index is merged into the indexer node's own full index of the existing files.\nIt then enters the idle state, where it listens for any other nodes that need to distribute the indexing process.\nThe FileSender object is the initiating node's counterpart to the indexer node.\nIt initiates the communication between the initiating node and the node that will assist in the distributed indexing.\nThe initiating node runs many instances of the FileSender, one for each node it has determined to be idle.\nUpon instantiation, the FileSender is passed the node that it is responsible for contacting and the set of files that must be sent.\nThe FileSender's first job is to send the files that are to be indexed by the other idle node.\nThe files are streamed one at a time to the other node.\nIt sends each file using the INCOMING FILE command.\nWith that command it sends the name of the file being sent and the
size in bytes.\nOnce all files have been sent, the FileSender sends the SEND COMPLETE command.\nThe FileSender then creates an instance of Lucene's IndexWriter and prepares to create the index in a temporary directory on the file system.\nThe FileSender begins to receive the files that are to be saved within the index.\nIt receives an INDEX FILE command with the name of the file and its size in bytes.\nThis file is then streamed into the temporary index directory on the FileSender node.\nAfter the transfer of the index files has been completed, the FileSender notifies the instance of the index compiler that it is ready to combine the index.\nEach instance of the FileSender has its own unique section of temporary space to store the index that has been transferred back from the indexing node.\nWhen notifying the IndexCompiler, it also passes the location of that particular FileSender's index directory.\n4.\nPEER-TO-PEER DISTRIBUTION\nApocrita uses a peer-to-peer distribution model in order to distribute files.\nFiles are distributed solely from a serving node to a client node, without regard for the availability of file pieces from other clients in the network.\nThis means that file transfers will be fast and efficient and should not severely affect the usability of serving nodes from the point of view of a local user.\nThe JXTA framework [5] is used to implement peer-to-peer functionality.\nThis decision was made due to the extremely short timeline of the project, which allows us to take advantage of over five years of testing, development, and support from many large organizations employing JXTA in their own products.\nWe are not concerned with potential quality problems because JXTA is considered to be the most mature and stable peer-to-peer framework available.\nUsing JXTA terminology, there are three types of peers used in node classification.\nEdge peers are typically low-bandwidth, non-dedicated nodes.\nDue to these
characteristics, edge peers are not used with Apocrita.\nRelay peers are typically higher-bandwidth, dedicated nodes.\nThis is the classification of all nodes in the Apocrita network and, as such, it is the default classification used.\nRendezvous peers are used to coordinate message passing between nodes in the Apocrita network.\nThis means that a minimum of one rendezvous peer per subnet is required.\n4.1 Peer Discovery\nThe Apocrita server subsystem uses the JXTA Peer Discovery Protocol (PDP) to find participating peers within the network, as shown in Figure 2.\nFigure 2.\nApocrita peer discovery process.\nThe PDP listens for peer advertisements from other nodes in the Apocrita swarm.\nIf a peer advertisement is detected, the server attempts to join the peer group and start actively contributing to the network.\nIf no peers are found by the discovery service, the server creates a new peer group and starts advertising it.\nThis new peer group is periodically advertised on the network; any new peers joining the network attach to this peer group.\nA distinct advantage of using the JXTA PDP is that Apocrita does not have to be sensitive to particular networking nuances such as the Maximum Transmission Unit (MTU).\nIn addition, Apocrita does not have to support one-to-many packet delivery methods such as multicast and can instead rely on JXTA for this support.\n4.2 Index Query Operation\nAll nodes in the Apocrita swarm have a complete and up-to-date copy of the network index stored locally.\nThis makes querying the index for search results trivial.\nUnlike the Gnutella protocol, a query does not have to propagate throughout the network.\nThis also means that the time to return query results is very fast--much faster than with protocols that rely on nodes passing the query throughout the network and then waiting for results.\nThis is demonstrated in Figure 3.\nFigure 3.\nApocrita query operation.\nEach document in the swarm has
a unique document identification number (ID).\nA node queries the index, and a result is returned with both the document ID number and a list of peers holding a copy of the matched document.\nIt is then the responsibility of the searching peer to contact the peers in the list to negotiate file transfer between the client and server.\n5.\nPROTOTYPE IMPLEMENTATION\nApocrita uses the Lucene framework [7], which is a project under development by the Apache Software Foundation.\nApache Lucene is a high-performance, full-featured text search engine library written entirely in Java.\nIn the current implementation, Apocrita is only capable of indexing plain text documents.\nApocrita uses the JXTA framework [5] as a peer-to-peer transport library between nodes.\nJXTA is used to pass both messages and files between nodes in the search network.\nBy using JXTA, Apocrita takes advantage of a reliable and proven peer-to-peer transport mechanism.\nIt uses the pipe facility to pass messages and files between nodes.\nThe pipe facility provides many different types of pipe advertisements, including an unsecured unicast pipe, a secured unicast pipe, and a propagated unsecured pipe.\nMessage passing is used to pass status messages between nodes in order to aid in indexing, searching, and retrieval.\nFor example, a node attempting to find an idle node to participate in indexing queries nodes via the message facility.\nIdle nodes reply with a status message to indicate that they are available to start indexing.\nFile passing is used within Apocrita for file transfer.\nAfter a file has been searched for and located within the peer group, a JXTA socket is opened and file transfer takes place.\nA JXTA socket is similar to a standard Java socket; however, a JXTA socket uses JXTA pipes as the underlying network transport.\nFile passing uses an unsecured unicast pipe to transfer data.\nFile passing is also used within Apocrita for index
transfer.\nIndex transfer works exactly like file transfer; in fact, the index is actually passed as a file.\nHowever, there is one key difference between file transfer and index transfer.\nIn the case of file transfer, a socket is created between only two nodes.\nIn the case of index transfer, a socket must be created between all nodes in the network in order to pass the index, which allows all nodes to have a full and complete index of the entire network.\nTo facilitate this transfer efficiently, index transfer uses an unsecured propagated pipe to communicate with all nodes in the Apocrita network.\n6.\nPERFORMANCE EVALUATION\nIt is difficult to objectively benchmark the results obtained with Apocrita because no other system currently available has the same goals.\nWe have, however, evaluated the performance of the critical sections of the system.\nThe critical sections were determined to be the most time-intensive processes.\nThe evaluation was completed on standard lab computers on a 100Mb\/s Ethernet LAN; the machines run Windows XP with a Pentium 4 CPU running at 2.4 GHz and 512 MB of RAM.\nIndexing time was measured against both the Time Magazine collection [8], which contains 432 documents and 83 queries with their most relevant results, and the NPL collection [8], which has a total of 11,429 documents and 93 queries with expected results.\nEach document ranges in size between 4KB and 8KB.\nAs Figure 4 demonstrates, the number of nodes involved in the indexing process affects the time taken to complete the indexing process--sometimes even drastically.\nFigure 4.\nNode vs.
index time.\nThe difference in going from one indexing node to two is the most drastic and equates to an indexing time 37% faster than with a single indexing node.\nThe difference between two\nindexing nodes and three is still significant and represents a time 16% faster than with two indexing nodes.\nAs the number of indexing nodes increases, the results become less dramatic.\nThis can be attributed to the time overhead associated with having many nodes perform indexing.\nThe time needed to communicate with a node is constant, so as the number of nodes increases, this constant becomes more prevalent.\nAlso, merging the index results is a complex operation that is complicated further as the number of indexing nodes increases.\nSocket performance is also a very important part of Apocrita.\nBenchmarks were performed using a 65MB file on a system with both the client and server running locally.\nThis was done to isolate possible network issues.\nAlthough less drastic, similar results were observed when the client and server ran on independent hardware.\nTo mitigate possible unexpected errors, each test was run 10 times.\nFigure 5.\nJava sockets vs.
JXTA sockets.\nAs Figure 5 demonstrates, the performance of JXTA sockets is abysmal compared to the performance of standard Java sockets.\nThe minimum transfer rate obtained using Java sockets is 81,945 KB\/s, while the minimum transfer rate obtained using JXTA sockets is much lower at 3,805 KB\/s.\nThe maximum transfer rate obtained using Java sockets is 97,412 KB\/s, while the maximum transfer rate obtained using JXTA sockets is 5,530 KB\/s.\nFinally, the average transfer rate using Java sockets is 87,540 KB\/s, while the average transfer rate using JXTA sockets is 4,293 KB\/s.\nThe major problem found in these benchmarks is that the underlying network transport mechanism does not perform as quickly or efficiently as expected.\nIn order to garner a performance increase, the JXTA framework needs to be substituted with a more traditional approach.\nThe indexing time is also a bottleneck and will need to be improved for the overall quality of Apocrita to be improved.\n7.\nRELATED WORK\nSeveral decentralized P2P systems [1, 2, 3] exist today, and Apocrita shares some of their functionality.\nHowever, Apocrita also has novel searching and indexing features that make the system unique.\nFor example, Majestic-12 [4] is a distributed search and indexing project designed for searching the Internet.\nEach user installs a client, which is responsible for indexing a portion of the web.\nA central area for querying the index is available on the Majestic-12 web page.\nThe index itself is not distributed; only the act of indexing is distributed.\nThe distributed indexing aspect of this project most closely relates to Apocrita's goals.\nYaCy [6] is a peer-to-peer web search application.\nYaCy consists of a web crawler, an indexer, a built-in database engine, and a p2p index exchange protocol.\nYaCy is designed to maintain a distributed index of the Internet.\nIt uses a distributed hash table (DHT) to maintain the index.\nThe local node is used to query, but all results
that are returned are accessible on the Internet.\nYaCy uses many peers and a DHT to maintain a distributed index.\nApocrita will also use a distributed index in future implementations and may benefit from using an implementation of a DHT.\nYaCy, however, is designed as a web search engine and, as such, solves a much different problem than Apocrita.\n8.\nCONCLUSIONS AND FUTURE WORK\nWe presented Apocrita, a distributed P2P searching and indexing system intended for network users on an Intranet.\nIt can help organizations with no network file server or the necessary network infrastructure to share documents.\nIt eliminates the need for documents to be manually shared among users while being edited and reduces the possibility of conflicting versions being distributed.\nA proof-of-concept prototype has been constructed, but the results from measuring the network transport mechanism and the indexing time were not as impressive as initially envisioned.\nDespite these shortcomings, the experience gained from the design and implementation of Apocrita has given us more insight into building challenging distributed systems.\nFor future work, Apocrita will have a smart content distribution model in which a single instance of a file can intelligently and transparently replicate throughout the network, ensuring that a copy of every important file is always available regardless of the availability of specific nodes in the network.\nIn addition, we plan to integrate a revision control system into the content distribution portion of Apocrita so that users can update an existing file they found, with the old revision maintained and the new revision propagated.\nFinally, the current implementation has some overhead and redundancy because the entire index is maintained on each individual node; we plan to design a distributed index.","keyphrases":["apocrita","file share system","file
share","intranet","author","document","p2p","peer-to-peer","jxta","distribut index","peer-to-peer distribut model","idl queri","index file","incom file","p2p search"],"prmu":["P","P","P","P","P","P","P","U","U","M","M","U","M","M","R"]} {"id":"H-10","title":"Regularized Clustering for Documents","abstract":"In recent years, document clustering has been receiving more and more attention as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering. In this paper, we propose a novel method for clustering documents using regularization. Unlike traditional globally regularized clustering methods, our method first constructs a local regularized linear label predictor for each document vector, and then combines all those local regularizers with a global smoothness regularizer. So we call our algorithm Clustering with Local and Global Regularization (CLGR). We will show that the cluster memberships of the documents can be achieved by the eigenvalue decomposition of a sparse symmetric matrix, which can be efficiently solved by iterative methods. Finally, our experimental evaluations on several datasets are presented to show the superiority of CLGR over traditional document clustering methods.","lvl-1":"Regularized Clustering for Documents \u2217 Fei Wang, Changshui Zhang State Key Lab of Intelligent Tech.\nand Systems Department of Automation, Tsinghua University Beijing, China, 100084 feiwang03@gmail.com Tao Li School of Computer Science Florida International University Miami, FL 33199, U.S.A.
taoli@cs.fiu.edu ABSTRACT In recent years, document clustering has been receiving more and more attention as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nIn this paper, we propose a novel method for clustering documents using regularization.\nUnlike traditional globally regularized clustering methods, our method first constructs a local regularized linear label predictor for each document vector, and then combines all those local regularizers with a global smoothness regularizer.\nSo we call our algorithm Clustering with Local and Global Regularization (CLGR).\nWe will show that the cluster memberships of the documents can be achieved by the eigenvalue decomposition of a sparse symmetric matrix, which can be efficiently solved by iterative methods.\nFinally, our experimental evaluations on several datasets are presented to show the superiority of CLGR over traditional document clustering methods.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval-Clustering; I.2.6 [Artificial Intelligence]: Learning-Concept Learning General Terms Algorithms 1.\nINTRODUCTION Document clustering has been receiving more and more attention as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nA good document clustering approach can assist computers in automatically organizing a document corpus into a meaningful cluster hierarchy for efficient browsing and navigation, which is very valuable for complementing the deficiencies of traditional information retrieval technologies.\nAs pointed out by [8], information retrieval needs can be expressed by a spectrum ranging from narrow keyword-matching-based search to broad information browsing, such as finding the major international events of recent months.\nTraditional
document retrieval engines tend to fit well with the search end of the spectrum, i.e., they usually provide specific search for documents matching the user's query; however, it is hard for them to meet the needs from the rest of the spectrum, in which rather broad or vague information is needed.\nIn such cases, efficient browsing through a good cluster hierarchy will definitely be helpful.\nGenerally, document clustering methods can be mainly categorized into two classes: hierarchical methods and partitioning methods.\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches.\nFor example, hierarchical agglomerative clustering (HAC) [13] is a typical bottom-up hierarchical clustering method.\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster.\nOn the other hand, partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions.\nFor instance, K-means [13] is a typical partitioning method which aims to minimize the sum of the squared distances between the data points and their corresponding cluster centers.\nIn this paper, we will focus on the partitioning methods.\nAs we know, there are two main problems with partitioning methods (like K-means and the Gaussian Mixture Model (GMM) [16]): (1) the predefined criterion is usually non-convex, which causes many locally optimal solutions; (2) the iterative procedure (e.g.
the Expectation Maximization (EM) algorithm) for optimizing the criterions usually makes the final solutions heavily dependent on the initializations.\nIn the last decades, many methods have been proposed to overcome the above problems of the partitioning methods [19][28].\nRecently, another type of partitioning method, based on clustering on data graphs, has aroused considerable interest in the machine learning and data mining community.\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph, in which the graph nodes represent the data points, and the weights on the edges correspond to the similarities between pairwise points.\nThen the cluster assignments of the dataset can be achieved by optimizing some criterions defined on the graph.\nFor example, spectral clustering is one of the most representative graph-based clustering approaches; it generally aims to optimize some cut value (e.g., Normalized Cut [22], Ratio Cut [7], Min-Max Cut [11]) defined on an undirected graph.\nAfter some relaxations, these criterions can usually be optimized via eigen-decompositions, whose solutions are guaranteed to be globally optimal.\nIn this way, spectral clustering efficiently avoids the problems of the traditional partitioning methods introduced in the last paragraph.\nIn this paper, we propose a novel document clustering algorithm that inherits the superiority of spectral clustering, i.e.
the final cluster results can also be obtained by exploiting the eigen-structure of a symmetric matrix.\nHowever, unlike spectral clustering, which just enforces a smoothness constraint on the data labels over the whole data manifold [2], our method first constructs a regularized linear label predictor for each data point from its neighborhood as in [25], and then combines the results of all these local label predictors with a global label smoothness regularizer.\nSo we call our method Clustering with Local and Global Regularization (CLGR).\nThe idea of incorporating both local and global information into label prediction is inspired by the recent works on semi-supervised learning [31], and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods.\nThe rest of this paper is organized as follows: in section 2 we will introduce our CLGR algorithm in detail.\nThe experimental results on several datasets are presented in section 3, followed by the conclusions and discussions in section 4.\n2.\nTHE PROPOSED ALGORITHM In this section, we will introduce our Clustering with Local and Global Regularization (CLGR) algorithm in detail.\nFirst, let's see how the documents are represented throughout this paper.\n2.1 Document Representation In our work, all the documents are represented by weighted term-frequency vectors.\nLet $\mathcal{W} = \{w_1, w_2, \cdots, w_m\}$ be the complete vocabulary set of the document corpus (preprocessed by stopword removal and word stemming).\nThe term-frequency vector $\mathbf{x}_i$ of document $d_i$ is defined as $\mathbf{x}_i = [x_{i1}, x_{i2}, \cdots, x_{im}]^T$, $x_{ik} = t_{ik} \log\frac{n}{\mathrm{idf}_k}$, where $t_{ik}$ is the term frequency of $w_k \in \mathcal{W}$, $n$ is the size of the document corpus, and $\mathrm{idf}_k$ is the number of documents that contain word $w_k$.\nIn this way, $\mathbf{x}_i$ is also called the TF-IDF representation of document $d_i$.\nFurthermore, we also normalize each $\mathbf{x}_i$ ($1 \le i \le n$) to have a unit
length, so that each document is represented by a normalized TF-IDF vector.\n2.2 Local Regularization As its name suggests, CLGR is composed of two parts: local regularization and global regularization.\nIn this subsection we will introduce the local regularization part in detail.\n2.2.1 Motivation As we know, clustering is one type of learning technique; it aims to organize the dataset in a reasonable way.\nGenerally speaking, learning can be posed as a problem of function estimation, from which we can get a good classification function that will assign labels to the training dataset and even the unseen testing dataset with some cost minimized [24].\nFor example, in the two-class classification scenario (in which we know exactly the label of each document), a linear classifier with a least squares fit aims to learn a column vector $\mathbf{w}$ such that the squared cost $J = \frac{1}{n}\sum_{i=1}^{n}\left(\mathbf{w}^T\mathbf{x}_i - y_i\right)^2$ (1) is minimized, where $y_i \in \{+1, -1\}$ is the label of $\mathbf{x}_i$.\nBy taking $\partial J/\partial\mathbf{w} = 0$, we get the solution $\mathbf{w}^* = \left(\sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i^T\right)^{-1}\left(\sum_{i=1}^{n}\mathbf{x}_i y_i\right)$, (2) which can further be written in matrix form as $\mathbf{w}^* = \left(XX^T\right)^{-1}X\mathbf{y}$, (3) where $X = [\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n]$ is an $m \times n$ document matrix and $\mathbf{y} = [y_1, y_2, \cdots, y_n]^T$ is the label vector.\nThen for a test document $\mathbf{u}$, we can determine its label by $l = \mathrm{sign}(\mathbf{w}^{*T}\mathbf{u})$, (4) where $\mathrm{sign}(\cdot)$ is the sign function.\nA natural problem in Eq. (3) is that the matrix $XX^T$ may be singular and thus not invertible (e.g.
when $m > n$).\nTo avoid such a problem, we can add a regularization term and minimize the following criterion: $J = \frac{1}{n}\sum_{i=1}^{n}\left(\mathbf{w}^T\mathbf{x}_i - y_i\right)^2 + \lambda\|\mathbf{w}\|^2$, (5) where $\lambda$ is a regularization parameter.\nThen the optimal solution that minimizes $J$ is given by $\mathbf{w}^* = \left(XX^T + \lambda n I\right)^{-1}X\mathbf{y}$, (6) where $I$ is an $m \times m$ identity matrix.\nIt has been reported that the regularized linear classifier can achieve very good results on text classification problems [29].\nHowever, despite its empirical success, the regularized linear classifier is, after all, a global classifier, i.e., $\mathbf{w}^*$ is estimated using the whole training set.\nAccording to [24], this may not be a smart idea, since a unique $\mathbf{w}^*$ may not be good enough for predicting the labels of the whole input space.\nIn order to get better predictions, [6] proposed to train classifiers locally and use them to classify the testing points.\nFor example, a testing point will be classified by the local classifier trained using the training points located in its vicinity.\n(In the following discussions we assume that the documents come from only two classes; the generalization of our method to multi-class cases will be discussed in section 2.5.)\nAlthough this method seems slow and stupid, it is reported that it can get better performance than using a unique global classifier on certain tasks [6].\n2.2.2 Constructing the Local Regularized Predictors Inspired by their success, we propose to apply local learning algorithms to clustering.\nThe basic idea is that, for each document vector $\mathbf{x}_i$ ($1 \le i \le n$), we train a local label predictor based on its $k$-nearest neighborhood $\mathcal{N}_i$, and then use it to predict the label of $\mathbf{x}_i$.\nFinally, we will combine all those local predictors by minimizing the sum of their prediction errors.\nIn this subsection we will introduce how to construct those local predictors.\nDue to the simplicity and effectiveness of the regularized linear classifier that we have introduced in
section 2.2.1, we choose it to be our local label predictor, such that for each document $\mathbf{x}_i$ the following criterion is minimized: $J_i = \frac{1}{n_i}\sum_{\mathbf{x}_j \in \mathcal{N}_i}\left(\mathbf{w}_i^T\mathbf{x}_j - q_j\right)^2 + \lambda_i\|\mathbf{w}_i\|^2$, (7) where $n_i = |\mathcal{N}_i|$ is the cardinality of $\mathcal{N}_i$, and $q_j$ is the cluster membership of $\mathbf{x}_j$.\nThen using Eq. (6), we can get the optimal solution $\mathbf{w}_i^* = \left(X_iX_i^T + \lambda_i n_i I\right)^{-1}X_i\mathbf{q}_i$, (8) where $X_i = [\mathbf{x}_{i1}, \mathbf{x}_{i2}, \cdots, \mathbf{x}_{in_i}]$, and we use $\mathbf{x}_{ik}$ to denote the $k$-th nearest neighbor of $\mathbf{x}_i$; $\mathbf{q}_i = [q_{i1}, q_{i2}, \cdots, q_{in_i}]^T$, with $q_{ik}$ representing the cluster assignment of $\mathbf{x}_{ik}$.\nThe problem here is that $X_iX_i^T$ is an $m \times m$ matrix with $m \gg n_i$, i.e., we would have to compute the inverse of an $m \times m$ matrix for every document vector, which is computationally prohibitive.\nFortunately, we have the following theorem: Theorem 1.\n$\mathbf{w}_i^*$ in Eq. (8) can be rewritten as $\mathbf{w}_i^* = X_i\left(X_i^TX_i + \lambda_i n_i I_i\right)^{-1}\mathbf{q}_i$, (9) where $I_i$ is an $n_i \times n_i$ identity matrix.\nProof.\nSince $\mathbf{w}_i^* = \left(X_iX_i^T + \lambda_i n_i I\right)^{-1}X_i\mathbf{q}_i$, we have $\left(X_iX_i^T + \lambda_i n_i I\right)\mathbf{w}_i^* = X_i\mathbf{q}_i$, so $X_iX_i^T\mathbf{w}_i^* + \lambda_i n_i\mathbf{w}_i^* = X_i\mathbf{q}_i$ and thus $\mathbf{w}_i^* = (\lambda_i n_i)^{-1}X_i\left(\mathbf{q}_i - X_i^T\mathbf{w}_i^*\right)$.\nLet $\boldsymbol{\beta} = (\lambda_i n_i)^{-1}\left(\mathbf{q}_i - X_i^T\mathbf{w}_i^*\right)$; then $\mathbf{w}_i^* = X_i\boldsymbol{\beta}$, so $\lambda_i n_i\boldsymbol{\beta} = \mathbf{q}_i - X_i^T\mathbf{w}_i^* = \mathbf{q}_i - X_i^TX_i\boldsymbol{\beta}$, which gives $\mathbf{q}_i = \left(X_i^TX_i + \lambda_i n_i I_i\right)\boldsymbol{\beta}$, i.e., $\boldsymbol{\beta} = \left(X_i^TX_i + \lambda_i n_i I_i\right)^{-1}\mathbf{q}_i$.\nTherefore $\mathbf{w}_i^* = X_i\boldsymbol{\beta} = X_i\left(X_i^TX_i + \lambda_i n_i I_i\right)^{-1}\mathbf{q}_i$.\nUsing Theorem 1, we only need to compute the inverse of an $n_i \times n_i$ matrix for every document to train a local label predictor.\nMoreover, for a new testing point $\mathbf{u}$ that falls into $\mathcal{N}_i$, we can classify it by the sign of $q_u = \mathbf{w}_i^{*T}\mathbf{u} = \mathbf{u}^T\mathbf{w}_i^* = \mathbf{u}^TX_i\left(X_i^TX_i + \lambda_i n_i I_i\right)^{-1}\mathbf{q}_i$.\nThis is an attractive expression, since we can determine the cluster assignment of $\mathbf{u}$ by using the inner products between the points in $\{\mathbf{u}\} \cup \mathcal{N}_i$, which suggests that
such a local regularizer can easily be kernelized [21] as long as we define a proper kernel function.\n2.2.3 Combining the Local Regularized Predictors After all the local predictors have been constructed, we combine them together by minimizing $J_l = \sum_{i=1}^{n}\left(\mathbf{w}_i^{*T}\mathbf{x}_i - q_i\right)^2$, (10) which stands for the sum of the prediction errors of all the local predictors.\nCombining Eq. (10) with Eq. (9), we can get $J_l = \sum_{i=1}^{n}\left(\mathbf{w}_i^{*T}\mathbf{x}_i - q_i\right)^2 = \sum_{i=1}^{n}\left(\mathbf{x}_i^TX_i\left(X_i^TX_i + \lambda_i n_i I_i\right)^{-1}\mathbf{q}_i - q_i\right)^2 = \|P\mathbf{q} - \mathbf{q}\|^2$, (11) where $\mathbf{q} = [q_1, q_2, \cdots, q_n]^T$, and $P$ is an $n \times n$ matrix constructed in the following way.\nLet $\boldsymbol{\alpha}^i = \mathbf{x}_i^TX_i\left(X_i^TX_i + \lambda_i n_i I_i\right)^{-1}$; then $P_{ij} = \alpha_j^i$ if $\mathbf{x}_j \in \mathcal{N}_i$, and $P_{ij} = 0$ otherwise, (12) where $P_{ij}$ is the $(i,j)$-th entry of $P$, and $\alpha_j^i$ represents the $j$-th entry of $\boldsymbol{\alpha}^i$.\nNow we can write the criterion of clustering by combining locally regularized linear label predictors, $J_l$, in an explicit mathematical form, and we can minimize it directly using some standard optimization techniques.\nHowever, the results may not be good enough, since we only exploit the local information of the dataset.\nIn the next subsection, we will introduce a global regularization criterion and combine it with $J_l$, which aims to find a good clustering result in a local-global way.\n2.3 Global Regularization In data clustering, we usually require that the cluster assignments of the data points be sufficiently smooth with respect to the underlying data manifold, which implies that (1) nearby points tend to have the same cluster assignments, and (2) points on the same structure (e.g., submanifold or cluster) tend to have the same cluster assignments [31].\nWithout loss of generality, we assume that the data points reside (roughly) on a low-dimensional manifold $\mathcal{M}$, and $q$ is the cluster assignment function defined on $\mathcal{M}$, i.e.
for $\forall\mathbf{x} \in \mathcal{M}$, $q(\mathbf{x})$ returns the cluster membership of $\mathbf{x}$.\nThe smoothness of $q$ over $\mathcal{M}$ can be calculated by the following Dirichlet integral [2]: $\mathcal{D}[q] = \frac{1}{2}\int_{\mathcal{M}} \|\nabla q(\mathbf{x})\|^2 \, d\mathcal{M}$, (13) where the gradient $\nabla q$ is a vector in the tangent space $T\mathcal{M}_{\mathbf{x}}$, and the integral is taken with respect to the standard measure on $\mathcal{M}$.\nIf we restrict the scale of $q$ by $\langle q, q\rangle_{\mathcal{M}} = 1$ (where $\langle\cdot,\cdot\rangle_{\mathcal{M}}$ is the inner product induced on $\mathcal{M}$), then it turns out that finding the smoothest function minimizing $\mathcal{D}[q]$ reduces to finding the eigenfunctions of the Laplace-Beltrami operator $\mathcal{L}$, which is defined as $\mathcal{L}q \triangleq -\mathrm{div}\,\nabla q$, (14) where div is the divergence of a vector field.\nGenerally, a graph can be viewed as the discretized form of a manifold.\nWe can model the dataset as a weighted undirected graph as in spectral clustering [22], where the graph nodes are just the data points, and the weights on the edges represent the similarities between pairwise points.\nThen it can be shown that minimizing Eq. (13) corresponds to minimizing $J_g = \mathbf{q}^T L\mathbf{q} = \frac{1}{2}\sum_{i,j}\left(q_i - q_j\right)^2 w_{ij}$, (15) where $\mathbf{q} = [q_1, q_2, \cdots, q_n]^T$ with $q_i = q(\mathbf{x}_i)$, and $L$ is the graph Laplacian whose $(i,j)$-th entry is $L_{ij} = d_i - w_{ii}$ if $i = j$; $L_{ij} = -w_{ij}$ if $\mathbf{x}_i$ and $\mathbf{x}_j$ are adjacent; and $L_{ij} = 0$ otherwise, (16) where $d_i = \sum_j w_{ij}$ is the degree of $\mathbf{x}_i$ and $w_{ij}$ is the similarity between $\mathbf{x}_i$ and $\mathbf{x}_j$.\nIf $\mathbf{x}_i$ and $\mathbf{x}_j$ are adjacent, $w_{ij}$ is usually computed in the following way: $w_{ij} = e^{-\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\sigma^2}}$, (17) where $\sigma$ is a dataset-dependent parameter.\nIt has been proved that under certain conditions, such a form of $w_{ij}$ for determining the weights on graph edges leads to the convergence of the graph Laplacian to the Laplace-Beltrami operator [3][18].\nIn summary, using Eq. (15) with exponential weights can effectively measure the smoothness of the data assignments with respect to the intrinsic data manifold.\nThus we adopt it as a
global regularizer to punish the smoothness of the predicted data assignments.\n2.4 Clustering with Local and Global Regularization Combining the contents we have introduced in section 2.2 and section 2.3 we can derive the clustering criterion is minq J = Jl + \u03bbJg = Pq \u2212 q 2 + \u03bbqT Lq s.t. qi \u2208 {\u22121, +1}, (18) where P is defined as in Eq.\n(12), and \u03bb is a regularization parameter to trade off Jl and Jg.\nHowever, the discrete fill in the whole high-dimensional sample space.\nAnd it has been shown that the manifold based methods can achieve good results on text classification tasks [31].\n3 In this paper, we define xi and xj to be adjacent if xi \u2208 N(xj) or xj \u2208 N(xi).\nconstraint of pi makes the problem an NP hard integer programming problem.\nA natural way for making the problem solvable is to remove the constraint and relax qi to be continuous, then the objective that we aims to minimize becomes J = Pq \u2212 q 2 + \u03bbqT Lq = qT (P \u2212 I)T (P \u2212 I)q + \u03bbqT Lq = qT (P \u2212 I)T (P \u2212 I) + \u03bbL q, (19) and we further add a constraint qT q = 1 to restrict the scale of q.\nThen our objective becomes minq J = qT (P \u2212 I)T (P \u2212 I) + \u03bbL q s.t. qT q = 1 (20) Using the Lagrangian method, we can derive that the optimal solution q corresponds to the smallest eigenvector of the matrix M = (P \u2212 I)T (P \u2212 I) + \u03bbL, and the cluster assignment of xi can be determined by the sign of qi, i.e. 
x_i is assigned to the first class if q_i > 0 and to the second class otherwise.

2.5 Multi-Class CLGR

Above we introduced the basic framework of Clustering with Local and Global Regularization (CLGR) for the two-class clustering problem; in this subsection we extend it to multi-class clustering. First, we assume that all the documents belong to C classes indexed by L = {1, 2, ..., C}. Let q^c be the classification function for class c (1 ≤ c ≤ C), such that q^c(x_i) returns the confidence that x_i belongs to class c. Our goal is to obtain the values q^c(x_i) (1 ≤ c ≤ C, 1 ≤ i ≤ n); the cluster assignment of x_i can then be determined from {q^c(x_i)}, c = 1, ..., C, using a proper discretization method that we introduce later.

In this multi-class case, for each document x_i (1 ≤ i ≤ n) we construct C locally regularized linear label predictors whose normal vectors are

$$\mathbf{w}_i^{c*} = \mathbf{X}_i \left( \mathbf{X}_i^T \mathbf{X}_i + \lambda_i n_i \mathbf{I}_i \right)^{-1} \mathbf{q}_i^c \quad (1 \leq c \leq C), \qquad (21)$$

where X_i = [x_{i1}, x_{i2}, ..., x_{in_i}] with x_{ik} the k-th neighbor of x_i, and q^c_i = [q^c_{i1}, q^c_{i2}, ..., q^c_{in_i}]^T with q^c_{ik} = q^c(x_{ik}). Then (w^{c*}_i)^T x_i returns the predicted confidence that x_i belongs to class c.
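To make the construction above concrete, the following NumPy sketch assembles the coefficient vectors α^i of Eq. (12) into the matrix P row by row. This is an illustrative sketch, not the authors' implementation: the function name, the brute-force nearest-neighbor search, and the single shared regularization parameter `lam` (standing in for the per-point λ_i) are our own assumptions.

```python
import numpy as np

def build_local_predictor_matrix(X, k=20, lam=0.1):
    """Construct the n x n matrix P of Eq. (12): row i holds the coefficients
    alpha^i = x_i^T X_i (X_i^T X_i + lam * n_i * I)^{-1}, placed at the columns
    corresponding to x_i's k nearest neighbors (n_i = k here)."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances (dense; fine for small n).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]       # k nearest neighbors, excluding x_i itself
        Xi = X[nbrs].T                          # d x k matrix of neighbor vectors
        G = Xi.T @ Xi + lam * k * np.eye(k)     # X_i^T X_i + lambda_i * n_i * I_i
        alpha = X[i] @ Xi @ np.linalg.inv(G)    # row vector alpha^i of length k
        P[i, nbrs] = alpha
    return P
```

In a practical implementation P would be stored as a sparse matrix, since each row has only k nonzero entries.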
Hence the local prediction error for class c can be defined as

$$J_l^c = \sum_{i=1}^{n} \left( (\mathbf{w}_i^{c*})^T \mathbf{x}_i - q_i^c \right)^2, \qquad (22)$$

and the total local prediction error becomes

$$J_l = \sum_{c=1}^{C} J_l^c = \sum_{c=1}^{C} \sum_{i=1}^{n} \left( (\mathbf{w}_i^{c*})^T \mathbf{x}_i - q_i^c \right)^2. \qquad (23)$$

As in Eq. (11), we can define an n x n matrix P (see Eq. (12)) and rewrite J_l as

$$J_l = \sum_{c=1}^{C} J_l^c = \sum_{c=1}^{C} \left\| \mathbf{P} \mathbf{q}^c - \mathbf{q}^c \right\|^2. \qquad (24)$$

Similarly, we can define the global smoothness regularizer in the multi-class case as

$$J_g = \sum_{c=1}^{C} \frac{1}{2} \sum_{i,j=1}^{n} \left( q_i^c - q_j^c \right)^2 w_{ij} = \sum_{c=1}^{C} (\mathbf{q}^c)^T \mathbf{L} \mathbf{q}^c. \qquad (25)$$

The criterion to be minimized for multi-class CLGR then becomes

$$J = J_l + \lambda J_g = \sum_{c=1}^{C} \left[ \left\| \mathbf{P} \mathbf{q}^c - \mathbf{q}^c \right\|^2 + \lambda (\mathbf{q}^c)^T \mathbf{L} \mathbf{q}^c \right] = \sum_{c=1}^{C} (\mathbf{q}^c)^T \left[ (\mathbf{P} - \mathbf{I})^T (\mathbf{P} - \mathbf{I}) + \lambda \mathbf{L} \right] \mathbf{q}^c = \mathrm{trace} \left( \mathbf{Q}^T \left[ (\mathbf{P} - \mathbf{I})^T (\mathbf{P} - \mathbf{I}) + \lambda \mathbf{L} \right] \mathbf{Q} \right), \qquad (26)$$

where Q = [q^1, q^2, ..., q^C] is an n x C matrix, and trace(·) returns the trace of a matrix. As in Eq. (20), we add the constraint Q^T Q = I to restrict the scale of Q. Our optimization problem then becomes

$$\min_{\mathbf{Q}} J = \mathrm{trace} \left( \mathbf{Q}^T \left[ (\mathbf{P} - \mathbf{I})^T (\mathbf{P} - \mathbf{I}) + \lambda \mathbf{L} \right] \mathbf{Q} \right) \quad \text{s.t.}
$$\mathbf{Q}^T \mathbf{Q} = \mathbf{I}. \qquad (27)$$

From the Ky Fan theorem [28], the optimal solution of the above problem is

$$\mathbf{Q}^* = [\mathbf{q}_1^*, \mathbf{q}_2^*, \cdots, \mathbf{q}_C^*] \mathbf{R}, \qquad (28)$$

where q*_k (1 ≤ k ≤ C) is the eigenvector corresponding to the k-th smallest eigenvalue of the matrix (P − I)^T (P − I) + λL, and R is an arbitrary C x C matrix. Since the entries of Q* are continuous, we need to further discretize Q* to obtain the cluster assignments of the data points. There are two main approaches to this goal:

1. As in [20], we can treat the i-th row of Q* as the embedding of x_i in a C-dimensional space, and apply a traditional clustering method such as k-means to cluster these embeddings into C clusters.

2. Since the optimal Q* is not unique (because of the arbitrary matrix R), we can pursue an optimal R that rotates Q* into an indication matrix (Footnote 4: an indication matrix T is an n x C matrix with entries T_ij ∈ {0, 1} such that each row contains exactly one 1; x_i is then assigned to the cluster j for which T_ij = 1.); see [26] for the detailed algorithm.

The complete CLGR procedure is summarized in Table 1.

Table 1: Clustering with Local and Global Regularization (CLGR)
Input:
  1. Dataset X = {x_i}, i = 1, ..., n;
  2. Number of clusters C;
  3. Size of the neighborhood K;
  4. Local regularization parameters {λ_i}, i = 1, ..., n;
  5. Global regularization parameter λ.
Output: the cluster membership of each data point.
Procedure:
  1. Construct the K-nearest neighborhood of each data point;
  2. Construct the matrix P using Eq. (12);
  3. Construct the Laplacian matrix L using Eq. (16);
  4. Construct the matrix M = (P − I)^T (P − I) + λL;
  5. Perform eigenvalue decomposition on M and construct the matrix Q* according to Eq. (28);
  6. Output the cluster assignment of each data point by properly discretizing Q*.

3. EXPERIMENTS

In this section, we empirically compare the clustering results of CLGR with those of eight other representative document clustering algorithms on five datasets. First, we introduce basic information about these datasets.

3.1 Datasets

We use a variety of datasets, most of which are frequently used in information retrieval research. Table 2 summarizes their characteristics.

Table 2: Descriptions of the document datasets
Dataset       Number of documents   Number of classes
CSTR                 476                    4
WebKB4              4199                    4
Reuters             2900                   10
WebACE              2340                   20
Newsgroup4          3970                    4

CSTR. This dataset consists of the abstracts of technical reports published in the Department of Computer Science at a university. It contains 476 abstracts, divided into four research areas: Natural Language Processing (NLP), Robotics/Vision, Systems, and Theory.

WebKB. The WebKB dataset contains webpages gathered from university computer science departments. There are about 8280 documents, divided into 7 categories: student, faculty, staff, course, project, department, and other. The raw text is about 27 MB. Among these 7 categories, student, faculty, course, and project are the four most populous entity-representing categories; the associated subset is typically called WebKB4.

Reuters. The Reuters-21578 Text Categorization Test collection contains documents collected from the Reuters newswire in 1987. It is a standard text categorization benchmark and contains 135 categories. In our experiments, we use a subset of the collection that includes the 10 most frequent of the 135 categories; we call it Reuters-top10.

WebACE. The WebACE dataset originates from the WebACE project and has been used for document clustering [17][5]. It contains 2340 documents, consisting of news articles obtained from the Reuters news service via the Web in October 1997. These documents are divided into 20 classes.

News4. The News4 dataset
used in our experiments is selected from the well-known 20-newsgroups dataset (Footnote 5: http://people.csail.mit.edu/jrennie/20Newsgroups/). The topic rec, containing autos, motorcycles, baseball, and hockey, was selected from the version 20news-18828. The News4 dataset contains 3970 document vectors.

To pre-process the datasets, we remove stop words using a standard stop list, skip all HTML tags, and ignore all header fields of the posted articles except subject and organization. In all our experiments, we first select the top 1000 words by mutual information with the class labels.

3.2 Evaluation Metrics

In the experiments, we set the number of clusters equal to the true number of classes C for all the clustering algorithms. To evaluate their performance, we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures.

Clustering Accuracy (Acc). The first performance measure is clustering accuracy, which discovers the one-to-one relationship between clusters and classes and measures the extent to which each cluster contains data points from the corresponding class. It sums up the matching degree over all cluster-class pairs. Clustering accuracy is computed as

$$Acc = \frac{1}{N} \max \left( \sum_{C_k, L_m} T(C_k, L_m) \right), \qquad (29)$$

where C_k denotes the k-th cluster in the final results, L_m is the true m-th class, and T(C_k, L_m) is the number of entities that belong to class m and are assigned to cluster k.
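The maximization in Eq. (29) over one-to-one cluster-to-class pairings can be solved exactly as a linear assignment problem on the contingency table T(C_k, L_m), e.g. with the Hungarian algorithm. The sketch below is illustrative (the paper does not specify an implementation); the function name and interface are our own assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_labels):
    """Eq. (29): fraction of points correctly assigned under the best
    one-to-one mapping between clusters and classes."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    C = int(max(true_labels.max(), cluster_labels.max())) + 1
    # T[k, m] = number of points in cluster k that belong to class m.
    T = np.zeros((C, C), dtype=int)
    for k, m in zip(cluster_labels, true_labels):
        T[k, m] += 1
    # Negate T so that minimizing cost maximizes the matching.
    rows, cols = linear_sum_assignment(-T)
    return T[rows, cols].sum() / len(true_labels)
```

For example, `clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0])` returns 1.0, since the measure is invariant to a permutation of the cluster labels.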
Accuracy computes the maximum sum of T(C_k, L_m) over all pairings of clusters and classes, where the pairs have no overlaps. Greater clustering accuracy indicates better clustering performance.

Normalized Mutual Information (NMI). The second evaluation metric we adopt is the normalized mutual information (NMI) [23], which is widely used for determining the quality of clusters. For two random variables X and Y, the NMI is defined as

$$NMI(X, Y) = \frac{I(X, Y)}{\sqrt{H(X) H(Y)}}, \qquad (30)$$

where I(X, Y) is the mutual information between X and Y, and H(X) and H(Y) are the entropies of X and Y, respectively. Note that NMI(X, X) = 1, the maximal possible value of NMI. Given a clustering result, the NMI in Eq. (30) is estimated as

$$NMI = \frac{\sum_{k=1}^{C} \sum_{m=1}^{C} n_{k,m} \log \frac{n \cdot n_{k,m}}{n_k \hat{n}_m}}{\sqrt{\left( \sum_{k=1}^{C} n_k \log \frac{n_k}{n} \right) \left( \sum_{m=1}^{C} \hat{n}_m \log \frac{\hat{n}_m}{n} \right)}}, \qquad (31)$$

where n_k denotes the number of data points contained in the cluster C_k (1 ≤ k ≤ C), n̂_m is the number of data points belonging to the m-th class (1 ≤ m ≤ C), and n_{k,m} denotes the number of data points in the intersection of cluster C_k and the m-th class. The value computed by Eq. (31) is used as a performance measure for the given clustering result; the larger this value, the better the clustering performance.

3.3 Comparisons

We conducted comprehensive performance evaluations by testing our method and comparing it with eight other representative data clustering methods on the same data corpora. The evaluated algorithms are listed below.

1. Traditional k-means (KM).
2. Spherical k-means (SKM). The implementation is based on [9].
3. Gaussian Mixture Model (GMM). The implementation is based on [16].
4. Spectral Clustering with Normalized Cuts (Ncut). The implementation is based on [26], and the variance of the Gaussian similarity is determined by local scaling [30]. Note that the criterion Ncut aims to minimize is exactly the global regularizer in our CLGR algorithm, except that Ncut uses the normalized
Laplacian.
5. Clustering using Pure Local Regularization (CPLR). In this method we minimize only J_l (defined in Eq. (24)); the clustering results are obtained by eigenvalue decomposition of the matrix (I − P)^T (I − P), followed by a proper discretization method.
6. Adaptive Subspace Iteration (ASI). The implementation is based on [14].
7. Nonnegative Matrix Factorization (NMF). The implementation is based on [27].
8. Tri-Factorization Nonnegative Matrix Factorization (TNMF) [12]. The implementation is based on [15].

For computational efficiency, in the implementations of CPLR and our CLGR algorithm we set all the local regularization parameters {λ_i}, i = 1, ..., n, to a single identical value, chosen by grid search from {0.1, 1, 10}. The size of the k-nearest neighborhoods is set by grid search from {20, 40, 80}. For the CLGR method, the global regularization parameter is also set by grid search from {0.1, 1, 10}. When constructing the global regularizer, we adopt the local scaling method [30] to construct the Laplacian matrix. The final discretization method adopted in these two methods is the same as in [26], since our experiments show that it achieves better results than the k-means based method of [20].

3.4 Experimental Results

The clustering accuracy comparisons are shown in Table 3, and the normalized mutual information comparisons are summarized in Table 4. From the two tables we mainly observe the following:

1. Our CLGR method outperforms all the other document clustering methods on most of the datasets.
2. For document clustering, the spherical k-means method usually outperforms the traditional k-means method, and the GMM method achieves results competitive with spherical k-means.
3. The results achieved by the k-means and GMM type algorithms are usually worse than those achieved by spectral clustering. Since spectral clustering can be viewed as a weighted
version of kernel k-means, it can obtain good results even when the data clusters are arbitrarily shaped. This corroborates that the document vectors are not regularly (spherically or elliptically) distributed.
4. The experimental comparisons empirically verify the equivalence between NMF and spectral clustering, which has been proved theoretically in [10]: NMF and spectral clustering usually lead to similar clustering results.
5. The co-clustering based methods (TNMF and ASI) usually achieve better results than traditional methods based purely on document vectors. These methods perform an implicit feature selection at each iteration and thus provide an adaptive metric for measuring the neighborhood, which tends to yield better clustering results.
6. The results achieved by CPLR are usually better than those achieved by spectral clustering, which supports Vapnik's theory [24] that local learning algorithms can sometimes obtain better results than global learning algorithms.

Table 3: Clustering accuracies of the various methods
        CSTR    WebKB4  Reuters WebACE  News4
KM      0.4256  0.3888  0.4448  0.4001  0.3527
SKM     0.4690  0.4318  0.5025  0.4458  0.3912
GMM     0.4487  0.4271  0.4897  0.4521  0.3844
NMF     0.5713  0.4418  0.4947  0.4761  0.4213
Ncut    0.5435  0.4521  0.4896  0.4513  0.4189
ASI     0.5621  0.4752  0.5235  0.4823  0.4335
TNMF    0.6040  0.4832  0.5541  0.5102  0.4613
CPLR    0.5974  0.5020  0.4832  0.5213  0.4890
CLGR    0.6235  0.5228  0.5341  0.5376  0.5102

Table 4: Normalized mutual information results of the various methods
        CSTR    WebKB4  Reuters WebACE  News4
KM      0.3675  0.3023  0.4012  0.3864  0.3318
SKM     0.4027  0.4155  0.4587  0.4003  0.4085
GMM     0.4034  0.4093  0.4356  0.4209  0.3994
NMF     0.5235  0.4517  0.4402  0.4359  0.4130
Ncut    0.4833  0.4497  0.4392  0.4289  0.4231
ASI     0.5008  0.4833  0.4769  0.4817  0.4503
TNMF    0.5724  0.5011  0.5132  0.5328  0.4749
CPLR    0.5695  0.5231  0.4402  0.5543  0.4690
CLGR    0.6012  0.5434  0.4935  0.5390  0.4908

Besides the above comparison experiments, we also test the
parameter sensitivity of our method. There are two main sets of parameters in our CLGR algorithm: the local and global regularization parameters ({λ_i}, i = 1, ..., n, and λ; as noted in section 3.3, we set all λ_i to an identical value λ* in our experiments), and the size of the neighborhoods. We therefore ran two sets of experiments:

1. Fixing the size of the neighborhoods and testing the clustering performance with varying λ* and λ. In this set of experiments, we find that our CLGR algorithm achieves good results when the two regularization parameters are neither too large nor too small; typically, it performs well when λ* and λ are around 0.1. Figure 1 shows such a test on the WebACE dataset.

[Figure 1: Parameter sensitivity results on the WebACE dataset with the neighborhood size fixed to 20; the x-axis and y-axis represent the log2 values of λ* and λ, and the vertical axis shows clustering accuracy.]

2. Fixing the local and global regularization parameters and testing the clustering performance with different sizes of neighborhoods. In this set of experiments, we find that a neighborhood that is either too large or too small deteriorates the final clustering results. This is easily understood: when the neighborhood size is very small, the data points available for training the local classifiers may be insufficient; when it is very large, the trained classifiers tend to become global and cannot capture the typical local characteristics. Figure 2 shows such a test on the WebACE dataset.

Therefore, we can see that our CLGR algorithm (1) achieves satisfactory results and (2) is not very sensitive to the choice of
parameters, which makes it practical for real-world applications.

[Figure 2: Parameter sensitivity results on the WebACE dataset with the regularization parameters fixed to 0.1 and the neighborhood size varying from 10 to 100; the x-axis is the neighborhood size and the y-axis is clustering accuracy.]

4. CONCLUSIONS AND FUTURE WORK

In this paper, we derived a new clustering algorithm called clustering with local and global regularization (CLGR). Our method preserves the merits of local learning algorithms and spectral clustering. Our experiments show that the proposed algorithm outperforms most state-of-the-art algorithms on many benchmark datasets. In the future, we will focus on the parameter selection and acceleration issues of the CLGR algorithm.

5. REFERENCES

[1] L. Baker and A. McCallum. Distributional Clustering of Words for Text Classification. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 1998.
[2] M. Belkin and P. Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation, 15(6):1373-1396, June 2003.
[3] M. Belkin and P. Niyogi. Towards a Theoretical Foundation for Laplacian-Based Manifold Methods. In Proceedings of the 18th Conference on Learning Theory (COLT), 2005.
[4] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold Regularization: A Geometric Framework for Learning from Examples. Journal of Machine Learning Research, 7:1-48, 2006.
[5] D. Boley. Principal Direction Divisive Partitioning. Data Mining and Knowledge Discovery, 2:325-344, 1998.
[6] L. Bottou and V. Vapnik. Local Learning Algorithms. Neural Computation, 4:888-900, 1992.
[7] P. K. Chan, D. F. Schlag, and J. Y. Zien. Spectral K-way Ratio-Cut Partitioning and Clustering. IEEE Trans. Computer-Aided Design, 13:1088-1096, Sep. 1994.
[8] D. R. Cutting, D. R. Karger, J. O. Pederson and J. W.
Tukey. Scatter/Gather: A Cluster-Based Approach to Browsing Large Document Collections. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 1992.
[9] I. S. Dhillon and D. S. Modha. Concept Decompositions for Large Sparse Text Data Using Clustering. Machine Learning, 42(1):143-175, January 2001.
[10] C. Ding, X. He, and H. Simon. On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering. In Proceedings of the SIAM Data Mining Conference, 2005.
[11] C. Ding, X. He, H. Zha, M. Gu, and H. D. Simon. A Min-Max Cut Algorithm for Graph Partitioning and Data Clustering. In Proceedings of the 1st International Conference on Data Mining (ICDM), pages 107-114, 2001.
[12] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal Nonnegative Matrix Tri-Factorizations for Clustering. In Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.
[13] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, Inc., 2001.
[14] T. Li, S. Ma, and M. Ogihara. Document Clustering via Adaptive Subspace Iteration. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2004.
[15] T. Li and C. Ding. The Relationships Among Various Nonnegative Matrix Factorization Methods for Clustering. In Proceedings of the 6th International Conference on Data Mining (ICDM), 2006.
[16] X. Liu and Y. Gong. Document Clustering with Cluster Refinement and Model Selection Capabilities. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2002.
[17] E. Han, D. Boley, M. Gini, R. Gross, K. Hastings, G. Karypis, V. Kumar, B. Mobasher, and J. Moore. WebACE: A Web Agent for Document Categorization and Exploration. In Proceedings of the 2nd International Conference on Autonomous Agents (Agents98). ACM Press, 1998.
[18] M.
Hein, J. Y. Audibert, and U. von Luxburg. From Graphs to Manifolds - Weak and Strong Pointwise Consistency of Graph Laplacians. In Proceedings of the 18th Conference on Learning Theory (COLT), pages 470-485, 2005.
[19] J. He, M. Lan, C.-L. Tan, S.-Y. Sung, and H.-B. Low. Initialization of Cluster Refinement Algorithms: A Review and Comparative Study. In Proceedings of the International Joint Conference on Neural Networks, 2004.
[20] A. Y. Ng, M. I. Jordan, and Y. Weiss. On Spectral Clustering: Analysis and an Algorithm. In Advances in Neural Information Processing Systems 14, 2002.
[21] B. Schölkopf and A. Smola. Learning with Kernels. The MIT Press, Cambridge, Massachusetts, 2002.
[22] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[23] A. Strehl and J. Ghosh. Cluster Ensembles - A Knowledge Reuse Framework for Combining Multiple Partitions. Journal of Machine Learning Research, 3:583-617, 2002.
[24] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, Berlin, 1995.
[25] M. Wu and B. Schölkopf. A Local Learning Approach for Clustering. In Advances in Neural Information Processing Systems 18, 2006.
[26] S. X. Yu and J. Shi. Multiclass Spectral Clustering. In Proceedings of the International Conference on Computer Vision, 2003.
[27] W. Xu, X. Liu, and Y. Gong. Document Clustering Based on Non-Negative Matrix Factorization. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003.
[28] H. Zha, X. He, C. Ding, M. Gu, and H. Simon. Spectral Relaxation for K-means Clustering. In Advances in Neural Information Processing Systems 14, 2001.
[29] T. Zhang and F. J. Oles. Text Categorization Based on Regularized Linear Classification Methods. Journal of Information Retrieval, 4:5-31, 2001.
[30] L. Zelnik-Manor and P. Perona. Self-Tuning Spectral Clustering. In Advances in Neural Information Processing Systems 17, 2005.
[31] D. Zhou, O. Bousquet, T. N. Lal, J.
Weston and B. Sch\u00a8olkopf.\nLearning with Local and Global Consistency.\nNIPS 17, 2005.","lvl-3":"Regularized Clustering for Documents *\nABSTRACT\nIn recent years, document clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nIn this paper, we propose a novel method for clustering documents using regularization.\nUnlike traditional globally regularized clustering methods, our method first construct a local regularized linear label predictor for each document vector, and then combine all those local regularizers with a global smoothness regularizer.\nSo we call our algorithm Clustering with Local and Global Regularization (CLGR).\nWe will show that the cluster memberships of the documents can be achieved by eigenvalue decomposition of a sparse symmetric matrix, which can be efficiently solved by iterative methods.\nFinally our experimental evaluations on several datasets are presented to show the superiorities of CLGR over traditional document clustering methods.\n1.\nINTRODUCTION\nDocument clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nA good document clustering approach can assist the computers to automatically organize the document corpus into a meaningful cluster hierarchy for efficient browsing and navigation, which is very valuable for complementing the deficiencies of traditional information retrieval technologies.\nAs pointed out by [8], the information retrieval needs can be expressed by a spectrum ranged from narrow keyword-matching based search to broad information browsing such as what are the major international events in recent months.\nTraditional document retrieval engines tend to fit well with the search end of the spectrum, 
i.e. they usually provide specified search for documents matching the user's query, however, it is hard for them to meet the needs from the rest of the spectrum in which a rather broad or vague information is needed.\nIn such cases, efficient browsing through a good cluster hierarchy will be definitely helpful.\nGenerally, document clustering methods can be mainly categorized into two classes: hierarchical methods and partitioning methods.\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches.\nFor example, hierarchical agglomerative clustering (HAC) [13] is a typical bottom-up hierarchical clustering method.\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster.\nOn the other hand, partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions.\nFor instance, K-means [13] is a typical partitioning method which aims to minimize the sum of the squared distance between the data points and their corresponding cluster centers.\nIn this paper, we will focus on the partitioning methods.\nAs we know that there are two main problems existing in partitioning methods (like Kmeans and Gaussian Mixture Model (GMM) [16]): (1) the predefined criterion is usually non-convex which causes many local optimal solutions; (2) the iterative procedure (e.g. 
the Expectation Maximization (EM) algorithm) for optimizing the criterions usually makes the final solutions heavily depend on the initializations.\nIn the last decades, many methods have been proposed to overcome the above problems of the partitioning methods [19] [28].\nRecently, another type of partitioning methods based on clustering on data graphs have aroused considerable interests in the machine learning and data mining community.\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph, in which the graph nodes represent the data points, and the weights on the edges correspond to the similarities between pairwise points.\nThen the cluster assignments of the dataset can be achieved by optimizing some criterions defined on the graph.\nFor example Spectral Clustering is one kind of the most representative graph-based clustering approaches, it generally aims to optimize some cut value (e.g. Normalized Cut [22], Ratio Cut [7], Min-Max Cut [11]) defined on an undirected graph.\nAfter some relaxations, these criterions can usually be optimized via eigen-decompositions, which is guaranteed to be global optimal.\nIn this way, spectral clustering efficiently avoids the problems of the traditional partitioning methods as we introduced in last paragraph.\nIn this paper, we propose a novel document clustering algorithm that inherits the superiority of spectral clustering, i.e. 
the final cluster results can also be obtained by exploit the eigen-structure of a symmetric matrix.\nHowever, unlike spectral clustering, which just enforces a smoothness constraint on the data labels over the whole data manifold [2], our method first construct a regularized linear label predictor for each data point from its neighborhood as in [25], and then combine the results of all these local label predictors with a global label smoothness regularizer.\nSo we call our method Clustering with Local and Global Regularization (CLGR).\nThe idea of incorporating both local and global information into label prediction is inspired by the recent works on semi-supervised learning [31], and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods.\nThe rest of this paper is organized as follows: in section 2 we will introduce our CLGR algorithm in detail.\nThe experimental results on several datasets are presented in section 3, followed by the conclusions and discussions in section 4.\n2.\nTHE PROPOSED ALGORITHM\n2.1 Document Representation\n2.2 Local Regularization\n2.2.1 Motivation\n2.2.2 Constructing the Local Regularized Predictors\n2.2.3 Combining the Local Regularized Predictors\n2.3 Global Regularization\n2.4 Clustering with Local and Global Regularization\n2.5 Multi-Class CLGR\n3.\nEXPERIMENTS\n3.1 Datasets\n3.2 Evaluation Metrics\nCk, Lm\n3.3 Comparisons\n3.4 Experimental Results\n4.\nCONCLUSIONS AND FUTURE WORKS\nIn this paper, we derived a new clustering algorithm called clustering with local and global regularization.\nOur method preserves the merit of local learning algorithms and spectral clustering.\nOur experiments show that the proposed algorithm outperforms most of the state of the art algorithms on many benchmark datasets.\nIn the future, we will focus on the parameter selection and acceleration issues of the CLGR algorithm.","lvl-4":"Regularized Clustering for 
Documents *\nABSTRACT\nIn recent years, document clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nIn this paper, we propose a novel method for clustering documents using regularization.\nUnlike traditional globally regularized clustering methods, our method first construct a local regularized linear label predictor for each document vector, and then combine all those local regularizers with a global smoothness regularizer.\nSo we call our algorithm Clustering with Local and Global Regularization (CLGR).\nWe will show that the cluster memberships of the documents can be achieved by eigenvalue decomposition of a sparse symmetric matrix, which can be efficiently solved by iterative methods.\nFinally our experimental evaluations on several datasets are presented to show the superiorities of CLGR over traditional document clustering methods.\n1.\nINTRODUCTION\nDocument clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nA good document clustering approach can assist the computers to automatically organize the document corpus into a meaningful cluster hierarchy for efficient browsing and navigation, which is very valuable for complementing the deficiencies of traditional information retrieval technologies.\nIn such cases, efficient browsing through a good cluster hierarchy will be definitely helpful.\nGenerally, document clustering methods can be mainly categorized into two classes: hierarchical methods and partitioning methods.\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches.\nFor example, hierarchical agglomerative clustering (HAC) [13] is a typical bottom-up hierarchical 
clustering method.\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster.\nOn the other hand, partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions.\nFor instance, K-means [13] is a typical partitioning method which aims to minimize the sum of the squared distances between the data points and their corresponding cluster centers.\nIn this paper, we will focus on the partitioning methods.\nIn recent decades, many methods have been proposed to overcome the above problems of the partitioning methods [19] [28].\nRecently, another type of partitioning method, based on clustering over data graphs, has aroused considerable interest in the machine learning and data mining community.\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph, in which the graph nodes represent the data points, and the weights on the edges correspond to the similarities between pairwise points.\nThen the cluster assignments of the dataset can be achieved by optimizing some criteria defined on the graph.\nAfter some relaxations, these criteria can usually be optimized via eigen-decompositions, which is guaranteed to be globally optimal.\nIn this way, spectral clustering efficiently avoids the problems of the traditional partitioning methods introduced in the last paragraph.\nIn this paper, we propose a novel document clustering algorithm that inherits the superiority of spectral clustering, i.e. 
the final cluster results can also be obtained by exploiting the eigen-structure of a symmetric matrix.\nSo we call our method Clustering with Local and Global Regularization (CLGR).\nThe idea of incorporating both local and global information into label prediction is inspired by recent work on semi-supervised learning [31], and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods.\nThe rest of this paper is organized as follows: in section 2 we will introduce our CLGR algorithm in detail.\nThe experimental results on several datasets are presented in section 3, followed by the conclusions and discussions in section 4.\n4.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we derived a new clustering algorithm called clustering with local and global regularization.\nOur method preserves the merits of local learning algorithms and spectral clustering.\nOur experiments show that the proposed algorithm outperforms most state-of-the-art algorithms on many benchmark datasets.\nIn the future, we will focus on the parameter selection and acceleration issues of the CLGR algorithm.","lvl-2":"Regularized Clustering for Documents *\nABSTRACT\nIn recent years, document clustering has been receiving more and more attention as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nIn this paper, we propose a novel method for clustering documents using regularization.\nUnlike traditional globally regularized clustering methods, our method first constructs a locally regularized linear label predictor for each document vector, and then combines all those local regularizers with a global smoothness regularizer.\nSo we call our algorithm Clustering with Local and Global Regularization (CLGR).\nWe will show that the cluster memberships of the documents can be achieved by eigenvalue decomposition of a 
sparse symmetric matrix, which can be efficiently solved by iterative methods.\nFinally, our experimental evaluations on several datasets are presented to show the superiority of CLGR over traditional document clustering methods.\n1.\nINTRODUCTION\nDocument clustering has been receiving more and more attention as an important and fundamental technique for unsupervised document organization, automatic topic extraction, and fast information retrieval or filtering.\nA good document clustering approach can assist computers in automatically organizing the document corpus into a meaningful cluster hierarchy for efficient browsing and navigation, which is very valuable for complementing the deficiencies of traditional information retrieval technologies.\nAs pointed out by [8], information retrieval needs can be expressed as a spectrum ranging from narrow keyword-matching-based search to broad information browsing, such as finding out what the major international events of recent months are.\nTraditional document retrieval engines tend to fit well with the search end of the spectrum, i.e. 
they usually provide targeted search for documents matching the user's query; however, it is hard for them to meet needs from the rest of the spectrum, where rather broad or vague information is sought.\nIn such cases, efficient browsing through a good cluster hierarchy will definitely be helpful.\nGenerally, document clustering methods can be mainly categorized into two classes: hierarchical methods and partitioning methods.\nThe hierarchical methods group the data points into a hierarchical tree structure using bottom-up or top-down approaches.\nFor example, hierarchical agglomerative clustering (HAC) [13] is a typical bottom-up hierarchical clustering method.\nIt takes each data point as a single cluster to start off with and then builds bigger and bigger clusters by grouping similar data points together until the entire dataset is encapsulated into one final cluster.\nOn the other hand, partitioning methods decompose the dataset into a number of disjoint clusters which are usually optimal in terms of some predefined criterion functions.\nFor instance, K-means [13] is a typical partitioning method which aims to minimize the sum of the squared distances between the data points and their corresponding cluster centers.\nIn this paper, we will focus on the partitioning methods.\nThere are two main problems with partitioning methods (like K-means and the Gaussian Mixture Model (GMM) [16]): (1) the predefined criterion is usually non-convex, which causes many locally optimal solutions; (2) the iterative procedure (e.g. 
the Expectation Maximization (EM) algorithm) for optimizing the criteria usually makes the final solutions depend heavily on the initialization.\nIn recent decades, many methods have been proposed to overcome the above problems of the partitioning methods [19] [28].\nRecently, another type of partitioning method, based on clustering over data graphs, has aroused considerable interest in the machine learning and data mining community.\nThe basic idea behind these methods is to first model the whole dataset as a weighted graph, in which the graph nodes represent the data points, and the weights on the edges correspond to the similarities between pairwise points.\nThen the cluster assignments of the dataset can be achieved by optimizing some criteria defined on the graph.\nFor example, Spectral Clustering is one of the most representative graph-based clustering approaches; it generally aims to optimize some cut value (e.g. Normalized Cut [22], Ratio Cut [7], Min-Max Cut [11]) defined on an undirected graph.\nAfter some relaxations, these criteria can usually be optimized via eigen-decompositions, which is guaranteed to be globally optimal.\nIn this way, spectral clustering efficiently avoids the problems of the traditional partitioning methods introduced in the last paragraph.\nIn this paper, we propose a novel document clustering algorithm that inherits the superiority of spectral clustering, i.e. 
the final cluster results can also be obtained by exploiting the eigen-structure of a symmetric matrix.\nHowever, unlike spectral clustering, which just enforces a smoothness constraint on the data labels over the whole data manifold [2], our method first constructs a regularized linear label predictor for each data point from its neighborhood as in [25], and then combines the results of all these local label predictors with a global label smoothness regularizer.\nSo we call our method Clustering with Local and Global Regularization (CLGR).\nThe idea of incorporating both local and global information into label prediction is inspired by recent work on semi-supervised learning [31], and our experimental evaluations on several real document datasets show that CLGR performs better than many state-of-the-art clustering methods.\nThe rest of this paper is organized as follows: in section 2 we will introduce our CLGR algorithm in detail.\nThe experimental results on several datasets are presented in section 3, followed by the conclusions and discussions in section 4.\n2.\nTHE PROPOSED ALGORITHM\nIn this section, we will introduce our Clustering with Local and Global Regularization (CLGR) algorithm in detail.\nFirst, let us see how the documents are represented throughout this paper.\n2.1 Document Representation\nIn our work, all the documents are represented by weighted term-frequency vectors.\nLet W = {w1, w2, · · ·, wm} be the complete vocabulary set of the document corpus (which is preprocessed by stopword removal and word stemming).\nThe term-frequency vector xi of document di is defined as\nxi = [ti1 log(n/idf1), ti2 log(n/idf2), · · ·, tim log(n/idfm)]^T, (1)\nwhere tik is the term frequency of wk ∈ W, n is the size of the document corpus, and idfk is the number of documents that contain word wk.\nIn this way, xi is also called the TF-IDF representation of document di.\nFurthermore, we also normalize each xi (1 ≤ i ≤ n) to have unit length, so that each document is represented by a normalized TF-IDF 
vector.\n2.2 Local Regularization\nAs its name suggests, CLGR is composed of two parts: local regularization and global regularization.\nIn this subsection we will introduce the local regularization part in detail.\n2.2.1 Motivation\nClustering is a type of learning technique that aims to organize the dataset in a reasonable way.\nGenerally speaking, learning can be posed as a problem of function estimation, from which we can get a good classification function that will assign labels to the training dataset, and even to unseen testing data, with some cost minimized [24].\nFor example, in the two-class classification scenario1 (in which we know exactly the label of each document), a linear classifier with least-squares fit aims to learn a column vector w such that the squared cost\nJ = Σ_{i=1}^n (w^T xi − yi)^2 (2)\nis minimized, where yi ∈ {+1, −1} is the label of xi.\nBy setting ∂J/∂w = 0, we get the solution\nw* = (XX^T)^{−1} Xy, (3)\nwhere X = [x1, x2, · · ·, xn] is an m × n document matrix and y = [y1, y2, · · ·, yn]^T is the label vector.\nThen for a test document u, we can determine its label by\nl = sign(w*^T u), (4)\nwhere sign(·) is the sign function.\nA natural problem in Eq. (3) is that the matrix XX^T may be singular and thus not invertible (e.g., when m ≫ n).\nTo avoid such a problem, we can add a regularization term and minimize the following criterion\nJ' = Σ_{i=1}^n (w^T xi − yi)^2 + λ||w||^2, (5)\nwhere λ > 0 is a regularization parameter.\nThen the optimal solution that minimizes J' is given by\nw* = (XX^T + λI)^{−1} Xy, (6)\nwhere I is an m × m identity matrix.\nIt has been reported that the regularized linear classifier can achieve very good results on text classification problems [29].\nHowever, despite its empirical success, the regularized linear classifier is in the end a global classifier, i.e. 
w* is estimated using the whole training set.\nAccording to [24], this may not be a smart idea, since a unique w* may not be good enough for predicting the labels of the whole input space.\nIn order to get better predictions, [6] proposed to train classifiers locally and use them to classify the testing points.\nFor example, a testing point will be classified by the local classifier trained using the training points located in its vicinity.\n(Footnote 1: In the following discussion we assume that the documents come from only two classes.\nThe generalization of our method to multi-class cases will be discussed in section 2.5.)\nAlthough this method may seem slow and naive, it is reported that it can achieve better performance than a unique global classifier on certain tasks [6].\n2.2.2 Constructing the Local Regularized Predictors\nInspired by their success, we propose to apply local learning algorithms to clustering.\nThe basic idea is that, for each document vector xi (1 ≤ i ≤ n), we train a local label predictor based on its k-nearest neighborhood Ni, and then use it to predict the label of xi.\nFinally, we combine all those local predictors by minimizing the sum of their prediction errors.\nIn this subsection we will introduce how to construct those local predictors.\nDue to the simplicity and effectiveness of the regularized linear classifier introduced in section 2.2.1, we choose it as our local label predictor, such that for each document xi the following criterion is minimized\nJi = (1/ni) Σ_{xj ∈ Ni} (wi^T xj − qj)^2 + λi ||wi||^2, (7)\nwhere ni = |Ni| is the cardinality of Ni, and qj is the cluster membership of xj.\nThen, analogously to Eq. (6), we can get that the optimal solution is\nwi* = (Xi Xi^T + λi ni I)^{−1} Xi qi, (8)\nwhere Xi = [xi1, xi2, · · ·, xini], and we use xik to denote the k-th nearest neighbor of xi; qi = [qi1, qi2, · · ·, qini]^T with qik representing the cluster assignment of xik.\nThe problem here is that Xi Xi^T is an m × m matrix with m ≫ ni, i.e. 
we would have to compute the inverse of an m × m matrix for every document vector, which is computationally prohibitive.\nFortunately, we have the following theorem:\nTheorem 1.\nwi* in Eq. (8) can be rewritten as\nwi* = Xi (Xi^T Xi + λi ni Ii)^{−1} qi, (9)\nwhere Ii is an ni × ni identity matrix.\nUsing Theorem 1, we only need to compute the inverse of an ni × ni matrix for every document to train a local label predictor.\nMoreover, for a new testing point u that falls into Ni, we can classify it by the sign of\nwi*^T u = qi^T (Xi^T Xi + λi ni Ii)^{−1} Xi^T u. (10)\nThis is an attractive expression since we can determine the cluster assignment of u by using only the inner products between the points in {u} ∪ Ni, which suggests that such a local regularizer can easily be kernelized [21] as long as we define a proper kernel function.\n2.2.3 Combining the Local Regularized Predictors\nAfter all the local predictors have been constructed, we combine them by minimizing\nJl = Σ_{i=1}^n (wi*^T xi − qi)^2, (11)\nwhich stands for the sum of the prediction errors of all the local predictors.\nSubstituting Eq. (10) (with u = xi) into Eq. (11), we can get\nJl = ||Pq − q||^2,\nwhere q = [q1, q2, · · ·, qn]^T, and P is an n × n matrix constructed in the following way.\nLet ai^T = xi^T Xi (Xi^T Xi + λi ni Ii)^{−1}; then\nPij = aij, if xj ∈ Ni; 0, otherwise, (12)\nwhere Pij is the (i, j)-th entry of P, and aij represents the entry of ai corresponding to xj.\nSo far we can write the criterion of clustering by combining locally regularized linear label predictors, Jl, in an explicit mathematical form, and we can minimize it directly using standard optimization techniques.\nHowever, the results may not be good enough, since we only exploit the local information of the dataset.\nIn the next subsection, we will introduce a global regularization criterion and combine it with Jl, which aims to find a good clustering result in a local-global way.\n2.3 Global Regularization\nIn data clustering, we usually require that the cluster assignments of the data points be sufficiently smooth with respect to the underlying data manifold, 
which implies that (1) nearby points tend to have the same cluster assignments; (2) points on the same structure (e.g., a submanifold or cluster) tend to have the same cluster assignments [31].\nWithout loss of generality, we assume that the data points reside (roughly) on a low-dimensional manifold M2, and q is the cluster assignment function defined on M, i.e., for x ∈ M, q(x) returns the cluster membership of x.\n(Footnote 2: We believe that text data are also sampled from some low-dimensional manifold, since it is impossible for them to fill the whole high-dimensional sample space; and it has been shown that manifold-based methods can achieve good results on text classification tasks [31].)\nThe smoothness of q over M can be calculated by the following Dirichlet integral [2]\nD[q] = (1/2) ∫_M ||∇q||^2 dM, (13)\nwhere the gradient ∇q is a vector in the tangent space TMx, and the integral is taken with respect to the standard measure on M.\nIf we restrict the scale of q by ⟨q, q⟩_M = 1 (where ⟨·, ·⟩_M is the inner product induced on M), then it turns out that finding the smoothest function minimizing D[q] reduces to finding the eigenfunctions of the Laplace-Beltrami operator L, which is defined as\nLq ≜ −div(∇q), (14)\nwhere div is the divergence of a vector field.\nGenerally, a graph can be viewed as the discretized form of a manifold.\nWe can model the dataset as a weighted undirected graph as in spectral clustering [22], where the graph nodes are just the data points, and the weights on the edges represent the similarities between pairwise points.\nThen it can be shown that minimizing Eq. (13) corresponds to minimizing\nJg = q^T Lq = (1/2) Σ_{i,j} wij (qi − qj)^2, (15)\nwhere q = [q1, q2, · · ·, qn]^T with qi = q(xi), and L is the graph Laplacian whose (i, j)-th entry is\nLij = di − wii, if i = j; −wij, if xi and xj are adjacent; 0, otherwise, (16)\nwhere di = Σj wij is the degree of xi, and wij is the similarity between xi and xj.\nIf xi and xj are adjacent3, wij is usually computed in the following way\nwij = e^{−||xi − xj||^2 / (2σ^2)}, (17)\nwhere σ is a dataset-dependent parameter.\nIt is proved that under certain conditions, such a form of wij for determining the weights on graph edges leads to the convergence of the graph 
Laplacian to the Laplace-Beltrami operator [3] [18].\nIn summary, using Eq. (15) with exponential weights can effectively measure the smoothness of the data assignments with respect to the intrinsic data manifold.\nThus we adopt it as a global regularizer to penalize predicted data assignments that are not smooth.\n2.4 Clustering with Local and Global Regularization\nCombining what we have introduced in sections 2.2 and 2.3, we can derive the clustering criterion\nmin_q J = Jl + λJg = ||Pq − q||^2 + λ q^T Lq, (18)\nwhere P is defined as in Eq. (12), and λ is a regularization parameter that trades off Jl and Jg.\n(Footnote 3: In this paper, we define xi and xj to be adjacent if xi ∈ N(xj) or xj ∈ N(xi).)\nHowever, the discrete constraint on qi makes the problem an NP-hard integer programming problem.\nA natural way to make the problem solvable is to remove the constraint and relax qi to be continuous; then the objective that we aim to minimize becomes\nJ = ||Pq − q||^2 + λ q^T Lq, q ∈ R^n, (19)\nand we further add the constraint q^T q = 1 to restrict the scale of q.\nThen our objective becomes\nmin_q ||Pq − q||^2 + λ q^T Lq, s.t. q^T q = 1, (20)\nUsing the Lagrangian method, we can derive that the optimal solution q corresponds to the eigenvector associated with the smallest eigenvalue of the matrix M = (P − I)^T (P − I) + λL, and the cluster assignment of xi can be determined by the sign of qi, i.e. 
xi will be classified as class one if qi > 0; otherwise it will be classified as class two.\n2.5 Multi-Class CLGR\nIn the above we have introduced the basic framework of Clustering with Local and Global Regularization (CLGR) for the two-class clustering problem, and we will extend it to multi-class clustering in this subsection.\nFirst, we assume that all the documents belong to C classes indexed by L = {1, 2, · · ·, C}.\nqc is the classification function for class c (1 ≤ c ≤ C), such that qc(xi) returns the confidence that xi belongs to class c.\nOur goal is to obtain the value of qc(xi) (1 ≤ c ≤ C, 1 ≤ i ≤ n), and the cluster assignment of xi can be determined by {qc(xi)}C c=1 using some proper discretization methods that we will introduce later.\nTherefore, in this multi-class case, for each document xi (1 ≤ i ≤ n), we will construct C locally linear regularized label predictors whose normal vectors are\nwi^{c*} = Xi (Xi^T Xi + λi ni Ii)^{−1} qi^c (1 ≤ c ≤ C), (21)\nwhere Xi = [xi1, xi2, · · ·, xini] with xik being the k-th neighbor of xi, and qi^c = [qi1^c, qi2^c, · · ·, qini^c]^T with qik^c = qc(xik).\nThen (wi^{c*})^T xi returns the predicted confidence of xi belonging to class c. 
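The closed-form local predictors above hinge on Theorem 1: solving a small ni × ni system per document instead of an m × m one. The following Python sketch (variable names are illustrative, not from the paper) computes the Theorem-1 form and checks it against the direct m × m ridge solution of Eq. (8):

```python
import numpy as np

def local_predictor(Xi, qi, lam):
    """Local regularized predictor in the Theorem-1 form:
    w = Xi (Xi^T Xi + lam * ni * I)^{-1} qi,
    where Xi is m x ni (columns are the ni neighbors of a document)
    and qi holds their (relaxed) cluster labels."""
    ni = Xi.shape[1]
    # only an ni x ni linear system has to be solved
    return Xi @ np.linalg.solve(Xi.T @ Xi + lam * ni * np.eye(ni), qi)

# Sanity check against the m x m ridge form (Xi Xi^T + lam*ni*I)^{-1} Xi qi
rng = np.random.default_rng(0)
m, ni, lam = 50, 5, 0.1
Xi = rng.standard_normal((m, ni))
qi = rng.choice([-1.0, 1.0], size=ni)
w_small = local_predictor(Xi, qi, lam)
w_big = np.linalg.solve(Xi @ Xi.T + lam * ni * np.eye(m), Xi @ qi)
assert np.allclose(w_small, w_big)
```

Since typically m ≫ ni for document data, the Theorem-1 form is the one worth implementing; the m × m form appears here only to verify the identity numerically.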
Hence the local prediction error for class c can be defined as\nJl^c = Σ_{i=1}^n ((wi^{c*})^T xi − qc(xi))^2. (22)\nAnd the total local prediction error becomes\nJl = Σ_{c=1}^C Jl^c. (23)\nAs in Eq. (11), we can define an n × n matrix P (see Eq. (12)) and rewrite Jl as\nJl = Σ_{c=1}^C ||Pq^c − q^c||^2, (24)\nwhere q^c = [qc(x1), qc(x2), · · ·, qc(xn)]^T.\nSimilarly, we can define the global smoothness regularizer in the multi-class case as\nJg = Σ_{c=1}^C (q^c)^T Lq^c. (25)\nThen the criterion to be minimized for CLGR in the multi-class case becomes\nJ = Jl + λJg = Σ_{c=1}^C (||Pq^c − q^c||^2 + λ(q^c)^T Lq^c) = trace(Q^T ((P − I)^T (P − I) + λL) Q), (26)\nwhere Q = [q^1, q^2, · · ·, q^C] is an n × C matrix, and trace(·) returns the trace of a matrix.\nAs in Eq. (20), we also add the constraint Q^T Q = I to restrict the scale of Q.\nThen our optimization problem becomes\nmin_Q trace(Q^T ((P − I)^T (P − I) + λL) Q), s.t. Q^T Q = I. (27)\nFrom the Ky Fan theorem [28], we know the optimal solution of the above problem is\nQ* = [q1*, q2*, · · ·, qC*] R, (28)\nwhere qk* (1 ≤ k ≤ C) is the eigenvector corresponding to the k-th smallest eigenvalue of the matrix (P − I)^T (P − I) + λL, and R is an arbitrary C × C orthogonal matrix.\nSince the values of the entries in Q* are continuous, we need to further discretize Q* to get the cluster assignments of all the data points.\nThere are mainly two approaches to achieve this goal:\n1.\nAs in [20], we can treat the i-th row of Q* as the embedding of xi in a C-dimensional space, and apply some traditional clustering method like k-means to cluster these embeddings into C clusters.\n2.\nSince the optimal Q* is not unique (because of the existence of the arbitrary matrix R), we can pursue an optimal R that rotates Q* to an indication matrix4.\nThe detailed algorithm can be found in [26].\nThe detailed algorithm procedure for CLGR is summarized in Table 1.\n3.\nEXPERIMENTS\nIn this section, experiments are conducted to empirically compare the clustering results of CLGR with those of 8 other representative document clustering algorithms on 5 datasets.\nFirst we will introduce the basic information of those datasets.\n3.1 Datasets\nWe use a variety of datasets, most of which are frequently used in information retrieval research.\nTable 2 summarizes the 
characteristics of the datasets.\n(Footnote 4: Here an indication matrix T is an n × C matrix with (i, j)-th entry Tij ∈ {0, 1} such that each row contains exactly one 1; then xi is assigned to the j-th cluster such that Q*ij = 1.)\nTable 1: Clustering with Local and Global Regularization (CLGR)\nInput:\n1.\nDataset X = {xi}n i=1;\n2.\nNumber of clusters C;\n3.\nSize of the neighborhood K;\n4.\nLocal regularization parameters {λi}n i=1;\n5.\nGlobal regularization parameter λ;\nOutput: The cluster membership of each data point.\nProcedure:\n1.\nConstruct the K-nearest neighborhood for each data point;\n2.\nConstruct the matrix P using Eq. (12);\n3.\nConstruct the Laplacian matrix L using Eq. (16);\n4.\nConstruct the matrix M = (P − I)^T (P − I) + λL;\n5.\nDo eigenvalue decomposition on M, and construct the matrix Q* according to Eq. (28);\n6.\nOutput the cluster assignments of each data point by properly discretizing Q*.\nTable 2: Descriptions of the document datasets\nCSTR.\nThis is the dataset of the abstracts of technical reports published in the Department of Computer Science at a university.\nThe dataset contains 476 abstracts, which are divided into four research areas: Natural Language Processing (NLP), Robotics/Vision, Systems, and Theory.\nWebKB.\nThe WebKB dataset contains webpages gathered from university computer science departments.\nThere are about 8280 documents, divided into 7 categories: student, faculty, staff, course, project, department and other.\nThe raw text is about 27MB.\nAmong these 7 categories, student, faculty, course and project are the four most populous entity-representing categories.\nThe associated subset is typically called WebKB4.\nReuters.\nThe Reuters-21578 Text Categorization Test collection contains documents collected from the Reuters newswire in 1987.\nIt is a standard text categorization benchmark and contains 135 categories.\nIn our experiments, we use a subset of the data collection 
which includes the 10 most frequent categories among the 135 topics, and we call it Reuters-top 10.\nWebACE.\nThe WebACE dataset was from the WebACE project and has been used for document clustering [17] [5].\nThe WebACE dataset contains 2340 documents consisting of news articles from the Reuters news service via the Web in October 1997.\nThese documents are divided into 20 classes.\nNews4.\nThe News4 dataset used in our experiments is selected from the famous 20-newsgroups dataset5.\nThe topic rec, containing autos, motorcycles, baseball and hockey, was selected from the version 20news-18828.\nThe News4 dataset contains 3970 document vectors.\nTo pre-process the datasets, we remove the stop words using a standard stop list; all HTML tags are skipped and all header fields except subject and organization of the posted articles are ignored.\nIn all our experiments, we first select the top 1000 words by mutual information with class labels.\n3.2 Evaluation Metrics\nIn the experiments, we set the number of clusters equal to the true number of classes C for all the clustering algorithms.\nTo evaluate their performance, we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures.\nClustering Accuracy (Acc).\nThe first performance measure is the Clustering Accuracy, which discovers the one-to-one relationship between clusters and classes and measures the extent to which each cluster contains data points from the corresponding class.\nIt sums up the matching degree over all matched class-cluster pairs.\nClustering accuracy can be computed as:\nAcc = (1/n) max Σ_k T(Ck, Lm), (29)\nwhere the maximum is taken over all one-to-one mappings between clusters and classes, Ck denotes the k-th cluster in the final results, and Lm is the true m-th class.\nT(Ck, Lm) is the number of entities that belong to class m and are assigned to cluster k. 
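The clustering-accuracy measure just described can be computed directly; the following Python sketch (illustrative, not from the paper) builds the contingency table T and brute-forces the best one-to-one cluster-to-class matching, which is fine for the small C used here:

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(labels_true, labels_pred):
    """Acc = best one-to-one matching between clusters and classes,
    found by brute force over permutations (fine for small C)."""
    classes = sorted(set(labels_true))
    clusters = sorted(set(labels_pred))
    # contingency table: T[k][m] = |cluster k intersect class m|
    T = np.array([[sum(1 for t, p in zip(labels_true, labels_pred)
                       if p == k and t == m)
                   for m in classes] for k in clusters])
    best = max(sum(T[i, perm[i]] for i in range(len(clusters)))
               for perm in permutations(range(len(classes))))
    return best / len(labels_true)

# A relabeled-but-perfect clustering scores 1.0; mistakes lower it.
acc = clustering_accuracy([0, 0, 1, 1, 2, 2], [1, 1, 2, 2, 0, 0])
# acc == 1.0
```

For larger C one would replace the permutation search with the Hungarian (assignment) algorithm, but the brute-force version keeps the definition of Eq. (29) transparent.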
Accuracy computes the maximum sum of T(Ck, Lm) over all pairs of clusters and classes, where the pairs have no overlaps.\nGreater clustering accuracy means better clustering performance.\nNormalized Mutual Information (NMI).\nThe other evaluation metric we adopt is the Normalized Mutual Information NMI [23], which is widely used for determining the quality of clusters.\nFor two random variables X and Y, the NMI is defined as:\nNMI(X, Y) = I(X, Y) / sqrt(H(X) H(Y)), (30)\nwhere I(X, Y) is the mutual information between X and Y, while H(X) and H(Y) are the entropies of X and Y respectively.\nOne can see that NMI(X, X) = 1, which is the maximal possible value of NMI.\nGiven a clustering result, the NMI in Eq. (30) is estimated as\nNMI = (Σ_{k,m} nk,m log(n · nk,m / (nk n̂m))) / sqrt((Σ_k nk log(nk/n)) (Σ_m n̂m log(n̂m/n))), (31)\nwhere nk denotes the number of data points contained in the cluster Ck (1 ≤ k ≤ C), n̂m is the number of data points belonging to the m-th class (1 ≤ m ≤ C), and nk,m denotes the number of data points in the intersection of the cluster Ck and the m-th class.\nThe value calculated in Eq. (31) is used as a performance measure for the given clustering result.\nThe larger this value, the better the clustering performance.\n3.3 Comparisons\nWe have conducted comprehensive performance evaluations by testing our method and comparing it with 8 other representative data clustering methods on the same data corpora.\nThe algorithms that we evaluated are listed below.\n1.\nTraditional k-means (KM).\n2.\nSpherical k-means (SKM).\nThe implementation is based on [9].\n3.\nGaussian Mixture Model (GMM).\nThe implementation is based on [16].\n4.\nSpectral Clustering with Normalized Cuts (Ncut).\nThe implementation is based on [26], and the variance of the Gaussian similarity is determined by Local Scaling [30].\nNote that the criterion that Ncut aims to minimize is just the global regularizer in our CLGR algorithm, except that Ncut uses the normalized Laplacian.\n5.\nClustering using Pure Local Regularization (CPLR).\nIn this method we just minimize Jl (defined in Eq. (24)), and the 
clustering results can be obtained by doing eigenvalue decomposition on the matrix (I − P)^T (I − P) with some proper discretization method.\n6.\nAdaptive Subspace Iteration (ASI).\nThe implementation is based on [14].\n7.\nNonnegative Matrix Factorization (NMF).\nThe implementation is based on [27].\n8.\nTri-Factorization Nonnegative Matrix Factorization (TNMF) [12].\nThe implementation is based on [15].\nFor computational efficiency, in the implementation of CPLR and our CLGR algorithm, we have set all the local regularization parameters {λi}n i=1 to be identical, with the common value set by grid search from {0.1, 1, 10}.\nThe size of the k-nearest neighborhoods is set by grid search from {20, 40, 80}.\nFor the CLGR method, its global regularization parameter is set by grid search from {0.1, 1, 10}.\nWhen constructing the global regularizer, we have adopted the local scaling method [30] to construct the Laplacian matrix.\nThe final discretization method adopted in these two methods is the same as in [26], since our experiments show that using such a method can achieve better results than using k-means-based methods as in [20].\n3.4 Experimental Results\nThe clustering accuracy comparison results are shown in Table 3, and the normalized mutual information comparison results are summarized in Table 4.\nFrom the two tables we mainly observe that:\n1.\nOur CLGR method outperforms all other document clustering methods on most of the datasets;\n2.\nFor document clustering, the Spherical k-means method usually outperforms the traditional k-means clustering method, and the GMM method can achieve competitive results compared to the Spherical k-means method;\n3.\nThe results achieved by the k-means and GMM type algorithms are usually worse than those achieved by Spectral Clustering.\nSince Spectral Clustering can be viewed as a weighted version of kernel k-means, it can obtain good results when the data clusters are arbitrarily shaped.\nThis corroborates that the document 
vectors are not regularly distributed (spherical or elliptical).\n4.\nThe experimental comparisons empirically verify the equivalence between NMF and Spectral Clustering, which has been proved theoretically in [10].\nTable 3: Clustering accuracies of the various methods\nTable 4: Normalized mutual information results of the various methods\nIt can be observed from the tables that NMF and Spectral Clustering usually lead to similar clustering results.\n5.\nThe co-clustering based methods (TNMF and ASI) can usually achieve better results than traditional methods based purely on document vectors, since these methods perform an implicit feature selection at each iteration and provide an adaptive metric for measuring the neighborhood.\n6.\nThe results achieved by CPLR are usually better than those achieved by Spectral Clustering, which supports Vapnik's theory [24] that local learning algorithms can sometimes obtain better results than global learning algorithms.\nBesides the above comparison experiments, we also test the parameter sensitivity of our method.\nThere are mainly two sets of parameters in our CLGR algorithm: the local and global regularization parameters ({λi}n i=1 and λ; as we said in section 3.3, we set all λi's to be identical to λ* in our experiments), and the size of the neighborhoods.\nTherefore we have done two sets of experiments:\n1.\nFixing the size of the neighborhoods, and testing the clustering performance with varying λ* and λ.\nIn this set of experiments, we find that our CLGR algorithm can achieve good results when the two regularization parameters are neither too large nor too small.\nTypically our method achieves good results when λ* and λ are around 0.1.\nFigure 1 shows such a test on the WebACE dataset.\n2.\nFixing the local and global regularization parameters, and testing the clustering performance with different neighborhood sizes.\nFigure 1: Parameter sensitivity 
testing results on the WebACE dataset with the neighborhood size fixed to 20, and the x-axis and y-axis represents the loge value of A * and A.\nsizes of neighborhoods.\nIn this set of experiments, we find that the neighborhood with a too large or too small size will all deteriorate the final clustering results.\nThis can be easily understood since when the neighborhood size is very small, then the data points used for training the local classifiers may not be sufficient; when the neighborhood size is very large, the trained classifiers will tend to be global and cannot capture the typical local characteristics.\nFigure 2 shows us a testing example on the WebACE dataset.\nTherefore, we can see that our CLGR algorithm (1) can achieve satisfactory results and (2) is not very sensitive to the choice of parameters, which makes it practical in real world applications.\n4.\nCONCLUSIONS AND FUTURE WORKS\nIn this paper, we derived a new clustering algorithm called clustering with local and global regularization.\nOur method preserves the merit of local learning algorithms and spectral clustering.\nOur experiments show that the proposed algorithm outperforms most of the state of the art algorithms on many benchmark datasets.\nIn the future, we will focus on the parameter selection and acceleration issues of the CLGR algorithm.","keyphrases":["regular","document cluster","document cluster","global regular","cluster hierarchi","spectrum","specifi search","hierarch method","partit method","label predict","function estim","manifold"],"prmu":["P","P","P","P","M","U","U","M","M","M","U","U"]} {"id":"J-8","title":"Strong Equilibrium in Cost Sharing Connection Games","abstract":"In this work we study cost sharing connection games, where each player has a source and sink he would like to connect, and the cost of the edges is either shared equally (fair connection games) or in an arbitrary way (general connection games). 
We study the graph topologies that guarantee the existence of a strong equilibrium (where no coalition can improve the cost of each of its members) regardless of the specific costs on the edges. Our main existence results are the following: (1) For a single source and sink we show that there is always a strong equilibrium (both for fair and general connection games). (2) For a single source multiple sinks we show that for a series parallel graph a strong equilibrium always exists (both for fair and general connection games). (3) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games. As for the quality of the strong equilibrium we show that in any fair connection games the cost of a strong equilibrium is \u0398(log n) from the optimal solution, where n is the number of players. (This should be contrasted with the \u2126(n) price of anarchy for the same setting.) For single source general connection games and single source single sink fair connection games, we show that a strong equilibrium is always an optimal solution.","lvl-1":"Strong Equilibrium in Cost Sharing Connection Games\u2217 Amir Epstein School of Computer Science Tel-Aviv University Tel-Aviv, 69978, Israel amirep@tau.ac.il Michal Feldman School of Computer Science The Hebrew University of Jerusalem Jerusalem, 91904, Israel mfeldman@cs.huji.ac.il Yishay Mansour School of Computer Science Tel-Aviv University Tel-Aviv, 69978, Israel mansour@tau.ac.il ABSTRACT In this work we study cost sharing connection games, where each player has a source and sink he would like to connect, and the cost of the edges is either shared equally (fair connection games) or in an arbitrary way (general connection games).\nWe study the graph topologies that guarantee the existence of a strong equilibrium (where no coalition can improve the cost of each of its members) regardless of the specific costs on the edges.\nOur main existence results are the 
following: (1) For a single source and sink we show that there is always a strong equilibrium (both for fair and general connection games).\n(2) For a single source multiple sinks we show that for a series parallel graph a strong equilibrium always exists (both for fair and general connection games).\n(3) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games.\nAs for the quality of the strong equilibrium we show that in any fair connection game the cost of a strong equilibrium is \u0398(log n) from the optimal solution, where n is the number of players.\n(This should be contrasted with the \u2126(n) price of anarchy for the same setting.)\nFor single source general connection games and single source single sink fair connection games, we show that a strong equilibrium is always an optimal solution.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; F.2.0 [Analysis of Algorithms and Problem Complexity]: General; J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Electronic Commerce]: Payment schemes\nGeneral Terms Theory, Economics, Algorithms\n1.\nINTRODUCTION\nComputational game theory has introduced the issue of incentives to many of the classical combinatorial optimization problems.\nThe view that the demand side is often not under the control of a central authority that optimizes the global performance, but rather under the control of individuals with different incentives, has already led to many important insights.\nConsider classical routing and transportation problems such as multicast or multi-commodity problems, which are often viewed as follows.\nWe are given a graph with edge costs and connectivity demands between nodes, and our goal is to find a minimal cost solution.\nThe classical centralized approach assumes that all the individual demands can both be completely coordinated and have no individual incentives.\nThe game
theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility, and the resulting outcome could be far from the optimal solution.\nWhen considering individual incentives one needs to discuss the appropriate solution concept.\nMuch of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept.\nIndeed Nash equilibrium has many benefits, and most importantly it always exists (in mixed strategies).\nHowever, the solution concept of Nash equilibrium is resilient only to unilateral deviations, while in reality, players may be able to coordinate their actions.\nA strong equilibrium [4] is a state from which no coalition (of any size) can deviate and improve the utility of every member of the coalition (while possibly lowering the utility of players outside the coalition).\nThis resilience to deviations by coalitions of the players is highly attractive, and one can hope that once a strong equilibrium is reached it is likely to be sustained.\nFrom a computational game theory point of view, an additional benefit of a strong equilibrium is that it has a potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior.\nThe strong price of anarchy (SPoA), introduced in [1], is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution.\nObviously, SPoA is meaningful only in those cases where a strong equilibrium exists.\nA major downside of strong equilibrium is that most games do not admit any strong equilibrium.\nEven simple classical games like the prisoner's dilemma do not possess any strong equilibrium (which is also an example of a congestion game that does not possess a strong equilibrium).\nThis unfortunate fact has reduced attention to strong equilibrium, despite its highly attractive properties.\nYet, [1] have identified two broad families of
games, namely job scheduling and network formation, where a strong equilibrium always exists and the SPoA is significantly lower than the price of anarchy (which is the ratio between the worst Nash equilibrium and the optimal solution [15, 18, 5, 6]).\nIn this work we concentrate on cost sharing connection games, introduced by [3, 2].\nIn such a game, there is an underlying directed graph with edge costs, and individual users have connectivity demands (between a source and a sink).\nWe consider two models.\nThe fair cost connection model [2] allows each player to select a path from the source to the sink.\nIn this game the cost of an edge is shared equally between all the players that selected the edge, and the cost of the player is the sum of its costs on the edges it selected.\nThe general connection game [3] allows each player to offer prices for edges.\nIn this game an edge is bought if the sum of the offers at least covers its cost, and the cost of the player is the sum of its offers on the bought edges (in both games we assume that the player has to guarantee the connectivity between its source and sink).\nIn this work we focus on two important issues.\nThe first one is identifying under what conditions the existence of a strong equilibrium is guaranteed, and the second one is the quality of the strong equilibria.\nFor the existence part, we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs.\nOne can view this separation between the graph topology and the edge costs as a separation between the underlying infrastructure and the costs the players observe to purchase edges.\nWhile one expects the infrastructure to be stable over long periods of time, the costs the players observe can be easily modified over short time periods.\nSuch a topological characterization of the underlying infrastructure provides a network designer with topological conditions that will ensure stability in his network.\nOur results
are as follows.\nFor the single commodity case (all the players have the same source and sink), there is a strong equilibrium in any graph (both for fair and general connection games).\nMoreover, the strong equilibrium is also the optimal solution (namely, the players share a shortest path from the common source to the common sink).\n(Any congestion game is known to admit at least one Nash equilibrium in pure strategies [16].)\n(The fair cost sharing scheme is also attractive from a mechanism design point of view, as it is a strategyproof cost-sharing mechanism [14].)\nFor the case of a single source and multiple sinks (for example, in a multicast tree), we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph, and we show an example of a non-series parallel graph that does not have a strong equilibrium.\nFor the case of multi-commodity (multiple sources and sinks), we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium, and we show an example of a series parallel graph that does not have a strong equilibrium.\nAs far as we know, we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games.\nFor any fair connection game we show that if there exists a strong equilibrium it is at most a factor of \u0398(log n) from the optimal solution, where n is the number of players.\nThis should be contrasted with the \u0398(n) bound that exists for the price of anarchy [2].\nFor single source general connection games, we show that any series parallel graph possesses a strong equilibrium, and we show an example of a graph that does not have a strong equilibrium.\nIn this case we also show that any strong equilibrium is optimal.\nRelated work\nTopological characterizations for single-commodity network games have recently been provided for various equilibrium properties,
including equilibrium existence [12, 7, 8], equilibrium uniqueness [10] and equilibrium efficiency [17, 11].\nThe existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [12].\nThe existence of strong equilibrium was studied in both utility-decreasing (e.g., routing) and utility-increasing (e.g., fair cost-sharing) congestion games.\n[7, 8] have provided a full topological characterization of SE existence in single-commodity utility-decreasing congestion games, and showed that a SE always exists if and only if the underlying graph is extension-parallel.\n[19] have shown that in single-commodity utility-increasing congestion games, the topological characterization is essentially equivalent to parallel links.\nIn addition, they have shown that these results hold for correlated strong equilibria as well (in contrast to the decreasing setting, where correlated strong equilibria might not exist at all).\nWhile the fair cost sharing games we study are utility increasing network congestion games, we derive a different characterization from [19] due to the different assumptions regarding the players' actions.\n(In [19], some players may be restricted from using certain links even though the links exist in the graph; we do not allow this, and assume that the available strategies of the players are fully represented by the underlying graph.)\n2.\nMODEL\n2.1 Game Theory definitions\nA game \u039b = < N, (\u03a3i), (ci) > has a finite set N = {1, ... , n} of players.\nPlayer i \u2208 N has a set \u03a3i of actions, the joint action set is \u03a3 = \u03a31 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u03a3n, and a joint action S \u2208 \u03a3 is also called a profile.\nThe cost function of player i is ci : \u03a3 \u2192 R+ , which maps the joint action S \u2208 \u03a3 to a non-negative real number.\nLet S = (S1, ... , Sn) denote the profile of actions taken by the players, and let S\u2212i = (S1, ...
, Si\u22121, Si+1, ... , Sn) denote the profile of actions taken by all players other than player i. Note that S = (Si, S\u2212i).\nThe social cost of a game \u039b is the sum of the costs of the players, and we denote by OPT(\u039b) the minimal social cost of a game \u039b, i.e., OPT(\u039b) = minS\u2208\u03a3 cost\u039b(S), where cost\u039b(S) = \u2211i\u2208N ci(S).\nA joint action S \u2208 \u03a3 is a pure Nash equilibrium if no player i \u2208 N can benefit from unilaterally deviating from his action to another action, i.e., \u2200i \u2208 N \u2200S'i \u2208 \u03a3i : ci(S\u2212i, S'i) \u2265 ci(S).\nWe denote by NE(\u039b) the set of pure Nash equilibria in the game \u039b.\nResilience to coalitions: A pure deviation of a set of players \u0393 \u2282 N (also called a coalition) specifies an action for each player in the coalition, i.e., \u03b3 \u2208 \u00d7i\u2208\u0393\u03a3i.\nA joint action S \u2208 \u03a3 is not resilient to a pure deviation of a coalition \u0393 if there is a pure joint action \u03b3 of \u0393 such that ci(S\u2212\u0393, \u03b3) < ci(S) for every i \u2208 \u0393 (i.e., the players in the coalition can deviate in such a way that each player in the coalition reduces its cost).\nA pure Nash equilibrium S \u2208 \u03a3 is a k-strong equilibrium, if there is no coalition \u0393 of size at most k, such that S is not resilient to a pure deviation by \u0393.\nWe denote by k-SE(\u039b) the set of k-strong equilibria in the game \u039b.\nWe denote by SE(\u039b) the set of n-strong equilibria, and call S \u2208 SE(\u039b) a strong equilibrium (SE).\nNext we define the Price of Anarchy [9], Price of Stability [2], and their extensions to the Strong Price of Anarchy and Strong Price of Stability.\nThe Price of Anarchy (PoA) is the ratio between the maximal cost of a pure Nash equilibrium (assuming one exists) and the social optimum, i.e., maxS\u2208NE(\u039b) cost\u039b(S)\/OPT(\u039b).\nSimilarly, the Price of Stability
(PoS) is the ratio between the minimal cost of a pure Nash equilibrium and the social optimum, i.e., minS\u2208NE(\u039b) cost\u039b(S)\/OPT(\u039b).\nThe k-Strong Price of Anarchy (k-SPoA) is the ratio between the maximal cost of a k-strong equilibrium (assuming one exists) and the social optimum, i.e., maxS\u2208k-SE(\u039b) cost\u039b(S)\/OPT(\u039b).\nThe SPoA is the n-SPoA.\nSimilarly, the Strong Price of Stability (SPoS) is the ratio between the minimal cost of a pure strong equilibrium and the social optimum, i.e., minS\u2208SE(\u039b) cost\u039b(S)\/OPT(\u039b).\nNote that both k-SPoA and SPoS are defined only if some strong equilibrium exists.\n2.2 Cost Sharing Connection Games\nA cost sharing connection game has an underlying directed graph G = (V, E) where each edge e \u2208 E has an associated cost ce \u2265 0.\n(In some of the existence proofs, we assume that ce > 0 for simplicity; the full version contains the complete proofs for the case ce \u2265 0.)\nIn a connection game each player i \u2208 N has an associated source si and sink ti.\nIn a fair connection game the actions \u03a3i of player i include all the paths from si to ti.\nThe cost of each edge is shared equally by the set of all players whose paths contain it.\nGiven a joint action, the cost of a player is the sum of his costs on the edges he selected.\nMore formally, the cost function of each player on an edge e, in a joint action S, is fe(ne(S)) = ce\/ne(S), where ne(S) is the number of players that selected a path containing edge e in S.\nThe cost of player i, when selecting path Qi \u2208 \u03a3i, is ci(S) = \u2211e\u2208Qi fe(ne(S)).\nIn a general connection game the action of player i is a payment vector pi, where pi(e) is how much player i is offering to contribute to the cost of edge e.\nGiven a profile p, any edge e such that \u2211i pi(e) \u2265 ce is considered bought, and Ep denotes the set of bought edges.\nLet Gp = (V, Ep) denote the graph bought by the players for profile p =
(p1, ... , pn).\nClearly, each player tries to minimize his total payment, which is ci(p) = \u2211e\u2208Ep pi(e) if si is connected to ti in Gp, and infinity otherwise.\nWe denote by c(p) = \u2211i ci(p) the total cost under the profile p. For a subgraph H of G we denote the total cost of the edges in H by c(H).\nA symmetric connection game implies that the source and sink of all the players are identical.\n(We also call a symmetric connection game a single source single sink connection game, or a single commodity connection game.)\nA single source connection game implies that the sources of all the players are identical.\nFinally, a multi commodity connection game implies that each player has its own source and sink.\n2.3 Extension Parallel and Series Parallel Directed Graphs\nOur directed graphs will be acyclic, and will have a source node (from which all nodes are reachable) and a sink node (which every node can reach).\nWe first define the following operations for the composition of directed graphs.\n\u2022 Identification: The identification operation allows us to collapse two nodes into one.\nMore formally, given a graph G = (V, E) we define the identification of nodes v1 \u2208 V and v2 \u2208 V forming a new node v as creating a new graph G' = (V', E'), where V' = V \u2212 {v1, v2} \u222a {v} and E' includes the edges of E, with the edges of v1 and v2 now connected to v.\n\u2022 Parallel composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 \u2208 V1 and s2 \u2208 V2 and sinks t1 \u2208 V1 and t2 \u2208 V2, respectively, we define a new graph G = G1||G2 as follows.\nLet G = (V1 \u222a V2, E1 \u222a E2) be the union graph.\nTo create G = G1||G2 we identify the sources s1 and s2, forming a new source node s, and identify the sinks t1 and t2, forming a new sink t.
\u2022 Series composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 \u2208 V1 and s2 \u2208 V2 and sinks t1 \u2208 V1 and t2 \u2208 V2, respectively, we define a new graph G = G1 \u2192 G2 as follows.\nLet G = (V1 \u222a V2, E1 \u222a E2) be the union graph.\nTo create G = G1 \u2192 G2 we identify the vertices t1 and s2, forming a new vertex u.\nThe graph G has a source s = s1 and a sink t = t2.\n\u2022 Extension composition: A series composition when one of the graphs, G1 or G2, is composed of a single directed edge is an extension composition, and we denote it by G = G1 \u2192e G2.\nAn extension parallel graph (EPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1||G2, or (3) a graph G = G1 \u2192e G2, where G1 and G2 are extension parallel graphs (and in the extension composition either G1 or G2 is a single edge).\n(In the general connection game, we limit each player to select a path connecting si to ti and to offer payments only on the edges of such a path; this implies that in equilibrium every player has his source and sink connected by a path in Gp.)\nA series parallel graph (SPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1||G2, or (3) a graph G = G1 \u2192 G2, where G1 and G2 are series parallel graphs.\nGiven a path Q and two vertices u, v on Q, we denote the subpath of Q from u to v by Qu,v.\nThe following lemma, whose proof appears in the full version, will be the main topological tool in the case of single source graphs.\nLemma 2.1.\nLet G be an SPG with source s and sink t.
Given a path Q from s to t and a vertex t', there exists a vertex y \u2208 Q such that for any path Q' from s to t', the path Q' contains y and the paths Qy,t and Q' are edge disjoint.\n(We call the vertex y the intersecting vertex of Q and t'.)\n3.\nFAIR CONNECTION GAMES\nThis section derives our results for fair connection games.\n3.1 Existence of Strong Equilibrium\nWhile it is known that every fair connection game possesses a Nash equilibrium in pure strategies [2], this is not necessarily the case for a strong equilibrium.\nIn this section, we study the existence of strong equilibrium in fair connection games.\nWe begin with a simple case, showing that every symmetric fair connection game possesses a strong equilibrium.\nTheorem 3.1.\nIn every symmetric fair connection game there exists a strong equilibrium.\nProof.\nLet s be the source and t be the sink of all the players.\nWe show that a profile S in which all the players choose the same shortest path Q (from the source s to the sink t) is a strong equilibrium.\nSuppose by contradiction that S is not a SE.\nThen there is a coalition \u0393 that can deviate to a new profile S' such that the cost of every player j \u2208 \u0393 decreases.\nLet Q'j be the new path used by player j \u2208 \u0393.\nSince Q is a shortest path, it holds that c(Q'j \\ (Q \u2229 Q'j)) \u2265 c(Q \\ (Q \u2229 Q'j)) for any path Q'j.\nTherefore for every player j \u2208 \u0393 we have that cj(S') \u2265 cj(S).\nHowever, this contradicts the fact that all players in \u0393 reduce their cost.\n(In fact, no player in \u0393 has reduced its cost.)\nWhile every symmetric fair connection game admits a SE, this does not hold for every fair connection game.\nIn what follows, we study the network topologies that admit a strong equilibrium for any assignment of edge costs, and give examples of topologies for which a strong equilibrium does not exist.\nThe following lemma, whose proof appears in the full version, plays a major role in our proofs of the
existence of SE.\nLemma 3.2.\nLet \u039b be a fair connection game on a series parallel graph G with source s and sink t. Assume that player i has si = s and ti = t and that \u039b has some SE.\nLet S be a SE that minimizes the cost of player i (out of all SE), i.e., ci(S) = minT\u2208SE(\u039b) ci(T), and let S\u2217 be the profile that minimizes the cost of player i (out of all possible profiles), i.e., ci(S\u2217) = minT\u2208\u03a3 ci(T).\nThen, ci(S) = ci(S\u2217).\nThe next lemma considers parallel composition.\nLemma 3.3.\nLet \u039b be a fair connection game on a graph G = G1||G2, where G1 and G2 are series parallel graphs.\nIf every fair connection game on the graphs G1 and G2 possesses a strong equilibrium, then the game \u039b possesses a strong equilibrium.\nProof.\nLet G1 = (V1, E1) and G2 = (V2, E2) have sources s1 and s2 and sinks t1 and t2, respectively.\nLet Ti be the set of players with an endpoint in Vi \\ {s, t}, for i \u2208 {1, 2}.\n(An endpoint is either a source or a sink of a player.)\nLet T3 be the set of players j such that sj = s and tj = t. Let \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3, respectively.\nLet S' and S'' be the SE in \u039b1 and \u039b2 that minimize the cost of players in T3, respectively.\nAssume w.l.o.g.
that ci(S') \u2264 ci(S''), where player i \u2208 T3.\nIn addition, let \u039b'2 be the game on the graph G2 with players T2, and let \u00afS be a SE in \u039b'2.\nWe will show that the profile S = S' \u222a \u00afS is a SE in \u039b.\nSuppose by contradiction that S is not a SE.\nThen, there is a coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases.\nBy Lemma 3.2 and the assumption that ci(S') \u2264 ci(S''), a player j \u2208 T3 cannot improve his cost.\nTherefore, \u0393 \u2286 T1 \u222a T2.\nBut this is a contradiction to S' being a SE in \u039b1 or \u00afS being a SE in \u039b'2.\nThe following theorem considers the case of single source fair connection games.\nTheorem 3.4.\nEvery single source fair connection game on a series-parallel graph possesses a strong equilibrium.\nProof.\nWe prove the theorem by induction on the network size |V|.\nThe claim obviously holds if |V| = 2.\nWe show the claim for a series composition, i.e., G = G1 \u2192 G2, and for a parallel composition, i.e., G = G1||G2, where G1 = (V1, E1) and G2 = (V2, E2) are SPGs with sources s1, s2, and sinks t1, t2, respectively.\nSeries composition.\nLet G = G1 \u2192 G2.\nLet T1 be the set of players j such that tj \u2208 V1, and T2 be the set of players j such that tj \u2208 V2 \\ {s2}.\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T2 and T2, respectively.\nFor every player i \u2208 T2 with action Si in the game \u039b, let Si \u2229 E1 be his induced action in the game \u039b1, and let Si \u2229 E2 be his induced action in the game \u039b2.\nLet S' be a SE in \u039b1 that minimizes the cost of players in T2 (such a SE exists by the induction hypothesis and Lemma 3.2).\nLet S'' be any SE in \u039b2.\nWe will show that the profile S = S' \u222a S'' is a SE in the game \u039b, i.e., for player j \u2208 T2 we use the profile Sj = S'j \u222a S''j.\nSuppose by contradiction that S is not a SE.\nThen, there is a
coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases.\nNow, there are two cases: Case 1: \u0393 \u2286 T1.\nThis is a contradiction to S' being a SE in \u039b1.\nCase 2: There exists a player j \u2208 \u0393 \u2229 T2.\nBy Lemma 3.2, player j cannot improve his cost in \u039b1, so the improvement is due to \u039b2.\nConsider the coalition \u0393 \u2229 T2; it would still improve its cost.\nHowever, this contradicts the fact that S'' is a SE in \u039b2.\nParallel composition.\nFollows from Lemma 3.3.\nWhile multi-commodity fair connection games on series parallel graphs do not necessarily possess a SE (see Theorem 3.6), fair connection games on extension parallel graphs always possess a strong equilibrium.\nTheorem 3.5.\nEvery fair connection game on an extension parallel graph possesses a strong equilibrium.\nFigure 1: Graph topologies.\nProof.\nWe prove the theorem by induction on the network size |V|.\nLet \u039b be a fair connection game on an EPG G = (V, E).\nThe claim obviously holds if |V| = 2.\nIf the graph G is a parallel composition of two EPG graphs G1 and G2, then the claim follows from Lemma 3.3.\nIt remains to prove the claim for extension composition.\nSuppose the graph G is an extension composition of the graph G1 consisting of a single edge e = (s1, t1) and an EPG G2 = (V2, E2) with terminals s2, t2, such that s = s1 and t = t2.\n(The case that G2 is a single edge is similar.)\nLet T1 be the set of players with source s1 and sink t1 (i.e., their path is in G1).\nLet T2 be the set of players with source and sink in G2.\nLet T3 be the set of players with source s1 and sink in V2 \\ {t1}.\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3, respectively.\nLet S', S'' be SE in \u039b1 and \u039b2, respectively.\nWe will show that the profile S = S' \u222a S'' is a SE in the game \u039b.\nSuppose by contradiction that
S is not a SE.\nThen, there is a coalition \u0393 of minimal size that can deviate such that the cost of any player j \u2208 \u0393 decreases.\nClearly, T1 \u2229 \u0393 = \u03c6, since players in T1 have a single strategy.\nHence, \u0393 \u2286 T2 \u222a T3.\nAny player j \u2208 T2 \u222a T3 cannot improve his cost in \u039b1.\nTherefore, any player j \u2208 T2 \u222a T3 improves his cost in \u039b2.\nHowever, this contradicts the fact that S'' is a SE in \u039b2.\nIn the following theorem we provide a few examples of topologies in which a strong equilibrium does not exist, showing that our characterization is almost tight.\nTheorem 3.6.\nThe following connection games exist: (1) There exists a multi-commodity fair connection game on a series parallel graph that does not possess a strong equilibrium.\n(2) There exists a single source fair connection game that does not possess a strong equilibrium.\nProof.\nFor claim (1) consider the graph depicted in Figure 1(a).\nThis game has a unique NE where S1 = {e, c}, S2 = {b, f}, and each player has a cost of 5.\n(In any NE of the game, player 1 will buy the edge e and player 2 will buy the edge f, since the alternate path, in the respective part, will cost the player 2.5; thus, player 1 (player 2) will buy the edge c (edge b) alone, and each player will have a cost of 5.)\nHowever, consider the following coordinated deviation S': S'1 = {a, b, c} and S'2 = {b, c, d}.\nIn this profile, each player pays a cost of 4, and thus improves its cost.\nFigure 2: Example of a single source connection game that does not admit SE.\nFor claim (2) consider a single source fair connection game on the graph G depicted in Figure 2.\nThere are two players.\nPlayer i = 1, 2 wishes to connect the source s to its sink ti, and the unique NE is S1 = {a, b}, S2 = {a, c}, and each player has a cost of 2.\nThen, both players can deviate to S'1 = {h, f, d} and S'2 =
{h, f, e}, and decrease their costs to 2 \u2212 \u03b5\/2.\n(We can show that this is the unique NE by a simple case analysis: (i) If S1 = {h, f, d} and S2 = {h, f, e}, then player 1 can deviate to S1 = {h, g} and decrease his cost.\n(ii) If S1 = {h, g} and S2 = {h, f, e}, then player 2 can deviate to S2 = {a, c} and decrease his cost.\n(iii) If S1 = {h, g} and S2 = {a, c}, then player 1 can deviate to S1 = {a, b} and decrease his cost.)\nUnfortunately, our characterization is not completely tight.\nThe graph in Figure 1(b) is an example of a non-extension parallel graph which always admits a strong equilibrium.\n3.2 Strong Price of Anarchy\nWhile the price of anarchy in fair connection games can be as bad as n, the following theorem shows that the strong price of anarchy is bounded by H(n) = \u2211i=1,...,n 1\/i = \u0398(log n).\nTheorem 3.7.\nThe strong price of anarchy of a fair connection game with n players is at most H(n).\nProof.\nLet \u039b be a fair connection game on the graph G.\nWe denote by \u039b(\u0393) the game played on the graph G by a set of players \u0393, where the action of player i \u2208 \u0393 remains \u03a3i (the same as in \u039b).\nLet S = (S1, ... , Sn) be a profile in the game \u039b.\nWe denote by S(\u0393) = S\u0393 the induced profile of players in \u0393 in the game \u039b(\u0393).\nLet ne(S(\u0393)) denote the load of edge e under the profile S(\u0393) in the game \u039b(\u0393), i.e., ne(S(\u0393)) = |{j | j \u2208 \u0393, e \u2208 Sj}|.\nSimilar to congestion games [16, 13], we denote by \u03a6(S(\u0393)) the potential function of the profile S(\u0393) in the game \u039b(\u0393), where \u03a6(S(\u0393)) = \u2211e\u2208E \u2211j=1,...,ne(S(\u0393)) fe(j), and define \u03a6(S(\u03c6)) = 0.\nIn our case, it holds that \u03a6(S) = \u2211e\u2208E ce \u00b7 H(ne(S)). (1)\nLet S be a SE, and let S\u2217 be the profile of the optimal solution.\nWe define an order on the players as follows.\nLet \u0393n = {1, ..., n} be the set of all the players.\nFor each k = n, ...
, 1, since S is a SE, there exists a player in Γk, w.l.o.g. call it player k, such that

c_k(S) ≤ c_k(S_{−Γk}, S*_{Γk}).   (2)

In this way, Γk is defined recursively, such that for every k = n, …, 2 it holds that Γ_{k−1} = Γk \ {k}. (I.e., after the renaming, Γk = {1, …, k}.) Let c_k(S(Γk)) denote the cost of player k in the game Λ(Γk) under the induced profile S(Γk). It is easy to see that c_k(S(Γk)) = Φ(S(Γk)) − Φ(S(Γ_{k−1})).⁹ Therefore,

c_k(S) ≤ c_k(S_{−Γk}, S*_{Γk}) ≤ c_k(S*(Γk)) = Φ(S*(Γk)) − Φ(S*(Γ_{k−1})).   (3)

Summing over all players, we obtain

Σ_{i∈N} c_i(S) ≤ Φ(S*(Γn)) − Φ(S*(∅)) = Φ(S*(Γn)) = Σ_{e∈S*} c_e · H(n_e(S*)) ≤ Σ_{e∈S*} c_e · H(n) = H(n) · OPT(Λ),

where the first inequality follows since the right-hand side of equation (3) telescopes, and the second equality follows from equation (1).

Next we bound the SPoA when coalitions of size at most k are allowed.

Theorem 3.8. The k-SPoA of a fair connection game with n players is at most (n/k) · H(k).

Proof. Let S be a SE of Λ, and let S* be the profile of the optimal solution of Λ. To simplify the proof, we assume that n/k is an integer. We partition the players into n/k groups T1, …, T_{n/k}, each of size k. Let Λj be the game on the graph G played by the set of players Tj. Let S(Tj) denote the profile of the k players in Tj in the game Λj induced by the profile S of the game Λ. By Theorem 3.7, it holds that for each game Λj, j = 1, …, n/k,

cost_{Λj}(S(Tj)) = Σ_{i∈Tj} c_i(S(Tj)) ≤ H(k) · OPT(Λj) ≤ H(k) · OPT(Λ).

Summing over all games Λj, j = 1, …
, n/k,

cost_Λ(S) ≤ Σ_{j=1}^{n/k} cost_{Λj}(S(Tj)) ≤ (n/k) · H(k) · OPT(Λ),

where the first inequality follows since for each group Tj and player i ∈ Tj, it holds that c_i(S) ≤ c_i(S(Tj)).

Next we show an almost matching lower bound. (The lower bound is at most H(n) = O(log n) from the upper bound, and both for k = O(1) and k = Ω(n) the difference is only a constant.)

Theorem 3.9. For fair connection games with n players, k-SPoA ≥ max{n/k, H(n)}.

[Footnote 9: This follows since for any strategy profile S, if a single player k deviates to strategy S′k, then the change in the potential value, Φ(S) − Φ(S′k, S_{−k}), is exactly the change in the cost of player k.]

[Figure 3: Example of a network topology in which SPoS > PoS. A source s is connected to the sinks t1, …, tn; each player i ≤ n − 2 has a private path of cost 1/i, players n − 1 and n share a path of cost 2/n, and all players can share a common path of cost 1 + ε; the remaining edges have cost 0.]

Proof. For the lower bound of H(n) we observe that in the example presented in [2], the unique Nash equilibrium is also a strong equilibrium, and therefore k-SPoA = H(n) for any 1 ≤ k ≤ n.
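The potential function of equation (1), and the single-deviation property stated in footnote 9, can be checked numerically. The following Python sketch (the two-link instance and all names are our own illustration, not from the paper) computes Φ(S) = Σ_e c_e · H(n_e(S)) and a player's fair-share cost, and verifies that a unilateral deviation changes the potential by exactly the deviator's cost change:

```python
from fractions import Fraction

def harmonic(n):
    # H(n) = sum_{i=1}^{n} 1/i, with H(0) = 0
    return sum(Fraction(1, i) for i in range(1, n + 1))

def loads(profile, costs):
    # n_e(S): number of players whose chosen edge set contains e
    return {e: sum(e in s for s in profile.values()) for e in costs}

def potential(profile, costs):
    # Phi(S) = sum_e c_e * H(n_e(S))   -- equation (1)
    ne = loads(profile, costs)
    return sum(c * harmonic(ne[e]) for e, c in costs.items())

def player_cost(i, profile, costs):
    # Fair sharing: player i pays c_e / n_e(S) for each edge it uses.
    ne = loads(profile, costs)
    return sum(Fraction(costs[e], ne[e]) for e in profile[i])

# Hypothetical 2-player instance: two parallel s-t links of costs 1 and 3.
costs = {"cheap": 1, "dear": 3}
S = {1: {"dear"}, 2: {"dear"}}          # both share the expensive link
S_dev = {1: {"cheap"}, 2: {"dear"}}     # player 1 deviates alone

dPhi = potential(S, costs) - potential(S_dev, costs)
dcost = player_cost(1, S, costs) - player_cost(1, S_dev, costs)
assert dPhi == dcost  # potential change equals the deviator's cost change
```

Exact rational arithmetic (`fractions.Fraction`) avoids floating-point error when comparing the two differences.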
For the lower bound of n/k, consider a graph composed of two parallel links of costs 1 and n/k. Consider the profile S in which all n players use the link of cost n/k. The cost of each player is 1/k, while if any coalition of size at most k deviates to the link of cost 1, the cost of each of its members is at least 1/k. Therefore, the profile S is a k-SE, and k-SPoA = n/k.

The results of Theorems 3.7 and 3.8 can be extended to concave cost functions. Consider the extended fair connection game, where each edge has a cost which depends on the number of players using that edge, c_e(n_e). We assume that the cost function c_e(n_e) is a nondecreasing, concave function. Note that while the cost of an edge c_e(n_e) might increase with the number of players using it, the cost per player f_e(n_e) = c_e(n_e)/n_e decreases when c_e(n_e) is concave.

Theorem 3.10. The strong price of anarchy of a fair connection game with nondecreasing concave edge cost functions and n players is at most H(n).

Proof. The proof is analogous to the proof of Theorem 3.7. We show that cost(S) ≤ Φ(S*) ≤ H(n) · cost(S*). We first show the first inequality. Since the function c_e(x) is concave, the cost per player c_e(x)/x is a nonincreasing function; therefore inequality (3) in the proof of Theorem 3.7 holds. Summing inequality (3) over all players, we obtain cost(S) = Σ_i c_i(S) ≤ Φ(S*(Γn)) − Φ(S*(∅)) = Φ(S*). The second inequality follows since c_e(x) is nondecreasing and therefore Σ_{x=1}^{n_e} c_e(x)/x ≤ H(n_e) · c_e(n_e).

Using the arguments in the proof of Theorem 3.10 and the proof of Theorem 3.8, we derive:

Theorem 3.11. The k-SPoA of a fair connection game with nondecreasing concave edge cost functions and n players is at most (n/k) · H(k).

Since the set of strong equilibria is contained in the set of Nash equilibria, it must hold that SPoA ≤ PoA, meaning that the SPoA can only be improved
compared to the PoA. However, with respect to the price of stability the opposite direction holds, that is, SPoS ≥ PoS. We next show that there exists a fair connection game in which the inequality is strict.

[Figure 4: Example of a single source general connection game that does not admit a strong equilibrium. Three edges of cost 2 − ε and one edge of cost 3 lead toward the sinks t1, t2, t3; the edges that are not labeled with costs have a cost of zero.]

Theorem 3.12. There exists a fair connection game in which SPoS > PoS.

Proof. Consider a single source fair connection game on the graph G depicted in Figure 3.¹⁰ Player i = 1, …, n wishes to connect the source s to his sink ti. Assume that each player i = 1, …, n − 2 has his own path of cost 1/i from s to ti, and players i = n − 1, n have a joint path of cost 2/n from s to their sinks. Additionally, all players can share a common path of cost 1 + ε, for some small ε > 0. The optimal solution connects all players through the common path of cost 1 + ε, and this is also a Nash equilibrium with total cost 1 + ε. It is easy to verify that the solution where each player i = 1, …
, n − 2 uses his own path and players i = n − 1, n use their joint path is the unique strong equilibrium of this game, with total cost Σ_{i=1}^{n−2} 1/i + 2/n = Θ(log n).

While the example above shows that the SPoS may be greater than the PoS, the upper bound of H(n) = Θ(log n), proven for the PoS [2], serves as an upper bound for the SPoS as well. This is a direct corollary of Theorem 3.7, as SPoS ≤ SPoA by definition.

Corollary 3.13. The strong price of stability of a fair connection game with n players is at most H(n) = O(log n).

4. GENERAL CONNECTION GAMES
In this section, we derive our results for general connection games.

4.1 Existence of Strong Equilibrium
We begin with a characterization of the existence of a strong equilibrium in symmetric general connection games. Similar to Theorem 3.1 (using a similar proof) we establish:

Theorem 4.1. In every symmetric general connection game there exists a strong equilibrium.

While every single source general connection game possesses a pure Nash equilibrium [3], it does not necessarily admit a strong equilibrium.¹¹

[Footnote 10: This is a variation on the example given in [2].]
[Footnote 11: We thank Elliot Anshelevich, whose similar topology for the fair connection game inspired this example.]

Theorem 4.2. There exists a single source general connection game that does not admit any strong equilibrium.

Proof. Consider a single source general connection game with 3 players on the graph depicted in Figure 4. Player i wishes to connect the source s with its sink ti. We need to consider only the NE profiles: (i) if all three players use the link of cost 3, then there must be two agents whose total payment exceeds 2, thus they can both reduce their cost by deviating to an edge of cost 2 − ε; (ii) if two of the players use an edge of cost 2 − ε jointly, and the third player uses a different edge of cost 2 − ε, then the players with non-zero payments can deviate to the path with the edge of cost 3 and reduce their
costs (since before the deviation the total payment of these players is 4 − 2ε). We showed that none of the NE are SE, and thus the game does not possess any SE.

Next we show that for the class of series parallel graphs, there is always a strong equilibrium in the case of a single source.

Theorem 4.3. In every single source general connection game on a series-parallel graph, there exists a strong equilibrium.

Proof. Let Λ be a single source general connection game on an SPG G = (V, E) with source s and sink t. We present an algorithm that constructs a specific SE. We first consider the following partial order between the players: for players i and j, we have that i → j if there is a directed path from ti to tj. We complete the partial order to a full order (in an arbitrary way), and w.l.o.g. we assume that 1 → 2 → · · · → n. The algorithm COMPUTE-SE considers the players in an increasing order, starting with player 1. Each player i will fully buy a subset of the edges, and any player j > i will consider the cost of those (bought) edges as zero. When COMPUTE-SE considers player j, the cost of the edges that players 1 to j − 1 have bought is set to zero, and player j fully buys a shortest path Qj from s to tj; namely, for every edge e ∈ Qj \ ∪_{i<j} Qi, player j pays pj(e) = ce. [A portion of this proof was lost in extraction. It assumes, toward a contradiction, that the resulting payment profile p is not a SE, takes a deviating coalition Γ with deviation payments p̄, lets i be the minimal player in Γ, and argues that under p̄ no player j > i pays for any edge on any path from s to ti.] Consider a player k > i and let Q′k = Qk ∪ Q′′k, where Q′′k is a path connecting tk to t.
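The construction performed by COMPUTE-SE can be sketched in Python. This is an illustrative implementation under our own assumptions (an edge-labelled adjacency list, and sinks supplied in an order consistent with the partial order →), not code from the paper:

```python
import heapq

def shortest_path(adj, costs, bought, s, t):
    # Dijkstra from s to t, where edges already bought are free.
    # Returns the edge labels of the path, listed from sink back to source.
    dist, prev = {s: 0}, {}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, e in adj.get(u, []):
            w = 0 if e in bought else costs[e]
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = (u, e)
                heapq.heappush(heap, (d + w, v))
    path = []
    while t != s:
        u, e = prev[t]
        path.append(e)
        t = u
    return path

def compute_se(adj, costs, s, sinks):
    # sinks must already be listed in an order extending the partial
    # order: a path from t_i to t_j implies t_i appears before t_j.
    bought, payments = set(), {}
    for i, t in enumerate(sinks):
        for e in shortest_path(adj, costs, bought, s, t):
            if e not in bought:
                payments.setdefault(i, {})[e] = costs[e]  # player i buys e fully
                bought.add(e)
    return payments

# Hypothetical series-parallel instance: s -> t1 -> t2, plus a direct s -> t2 edge.
adj = {"s": [("t1", "a"), ("t2", "c")], "t1": [("t2", "b")]}
costs = {"a": 1, "b": 1, "c": 3}
pay = compute_se(adj, costs, "s", ["t1", "t2"])
```

Here player 0 buys edge a; that edge then costs player 1 nothing, so player 1 buys only edge b (total 1) rather than the direct edge c of cost 3.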
Let yk be the intersecting vertex of Q′k and Q̄i (the deviation path of player i). Since there exists a path from s to yk that was fully paid for by players j < k before the deviation, in particular the path Qi_{s,yk}, player k will not pay for any edge on any path connecting s and yk. Therefore, player i fully pays for all edges on the path Q̄i_{y,ti}, i.e., p̄i(e) = ce for all edges e ∈ Q̄i_{y,ti}. Now consider the algorithm COMPUTE-SE at the step when player i selects a shortest path from the source s to its sink ti and determines his payment pi. At this point, player i could have bought the path Q̄i_{y,ti}, since a path from s to y was already paid for by players j < i. Hence ci(p̄) ≥ ci(p). This contradicts the fact that player i improved his cost, and therefore not all the players in Γ reduce their cost. This implies that p is a strong equilibrium.

4.2 Strong Price of Anarchy
While for every single source general connection game it holds that PoS = 1 [3], the price of anarchy can be as large as n, even for two parallel edges. Here, we show that any strong equilibrium in single source general connection games yields the optimal cost.

Theorem 4.4. In a single source general connection game, if there exists a strong equilibrium, then the strong price of anarchy is 1.

Proof. Let p = (p1, …, pn) be a strong equilibrium, and let T* be the minimum cost Steiner tree on all players, rooted at the (single) source s.
Let T*_e be the subtree of T* disconnected from s when edge e is removed. Let Γ(Te) be the set of players which have sinks in Te. For a set of edges E, let c(E) = Σ_{e∈E} ce, and let P(Te) = Σ_{i∈Γ(Te)} ci(p). Assume by way of contradiction that c(p) > c(T*). We will show that there exists a subtree T′ of T* that connects a subset of players Γ ⊆ N, and a new set of payments p̄, such that for each i ∈ Γ, ci(p̄) < ci(p). This will contradict the assumption that p is a strong equilibrium.

First we show how to find a subtree T′ of T* such that for any edge e, the payments of the players with sinks in T*_e are more than the cost of T*_e ∪ {e}. To build T′, define an edge e to be bad if the cost of T*_e ∪ {e} is at least the payments of the players with sinks in T*_e, i.e., c(T*_e ∪ {e}) ≥ P(T*_e). Let B be the set of bad edges. We define T′ to be T* − ∪_{e∈B}(T*_e ∪ {e}). Note that we can find a subset B′ of B such that ∪_{e∈B′}(T*_e ∪ {e}) is equal to ∪_{e∈B}(T*_e ∪ {e}) and for any e1, e2 ∈ B′ we have T*_{e1} ∩ T*_{e2} = ∅. (The set B′ will include any edge e ∈ B for which there is no other edge e′ ∈ B on the path from e to the source s.) Considering the edges e ∈ B′, we can see that deleting any subtree T*_e from T* cannot decrease the difference between the payments and the cost of the remaining tree. Therefore, in T′, for every edge e we have that c(Te ∪ {e}) < P(Te).

Now we have a tree T′, and our coalition will be Γ(T′). What remains is to find payments p̄ for the players in Γ(T′) such that they will buy the tree T′ and every player in Γ(T′) will lower his cost, i.e.,
ci(p) > ci(p̄) for i ∈ Γ(T′). (Recall that the payments have the restriction that player i can only pay for edges on the path from s to ti.) We will now define the coalition payments p̄. Let ci(p̄, Te) = Σ_{e′∈Te} p̄i(e′) be the payments of player i for the subtree Te. We will show that for every subtree Te, ci(p̄, Te ∪ {e}) < ci(p), and hence ci(p̄) < ci(p). Consider the following bottom-up process that defines p̄. We assign the payments of edge e in T′ after we assign payments to all the edges in Te. This implies that when we assign payments for e, the sum of the payments in Te is equal to c(Te) = Σ_{i∈Γ(Te)} ci(p̄, Te). Since e was not a bad edge, we know that c(Te ∪ {e}) = c(Te) + ce < P(Te). Therefore, we can update the payments p̄ of players i ∈ Γ(Te) by setting p̄i(e) = ce · Δi / (Σ_{j∈Γ(Te)} Δj), where Δj = cj(p) − cj(p̄, Te). After the update we have, for player i ∈ Γ(Te),

ci(p̄, Te ∪ {e}) = ci(p̄, Te) + p̄i(e) = ci(p̄, Te) + Δi · ce / (Σ_{j∈Γ(Te)} Δj) = ci(p) − Δi · (1 − ce / (P(Γ(Te)) − c(Te))),

where we used the fact that Σ_{j∈Γ(Te)} Δj = P(Γ(Te)) − c(Te). Since ce < P(Γ(Te)) − c(Te), it follows that ci(p̄, Te ∪ {e}) < ci(p).

5. REFERENCES
[1] N. Andelman, M. Feldman, and Y. Mansour. Strong Price of Anarchy. In SODA'07, 2007.
[2] E. Anshelevich, A. Dasgupta, J. M. Kleinberg, É. Tardos, T. Wexler, and T. Roughgarden. The price of stability for network design with fair cost allocation. In FOCS, pages 295-304, 2004.
[3] E. Anshelevich, A. Dasgupta, E. Tardos, and T. Wexler. Near-Optimal Network Design with Selfish Agents. In STOC'03, 2003.
[4] R. Aumann. Acceptable Points in General Cooperative n-Person Games. In Contributions to the Theory of Games, volume 4, 1959.
[5] A. Czumaj and B.
Vöcking. Tight bounds for worst-case equilibria. In SODA, pages 413-420, 2002.
[6] A. Fabrikant, A. Luthra, E. Maneva, C. Papadimitriou, and S. Shenker. On a network creation game. In ACM Symposium on Principles of Distributed Computing (PODC), 2003.
[7] R. Holzman and N. Law-Yone. Strong equilibrium in congestion games. Games and Economic Behavior, 21:85-101, 1997.
[8] R. Holzman and N. Law-Yone (Lev-tov). Network structure and strong equilibrium in route selection games. Mathematical Social Sciences, 46:193-205, 2003.
[9] E. Koutsoupias and C. H. Papadimitriou. Worst-case equilibria. In STACS, pages 404-413, 1999.
[10] I. Milchtaich. Topological conditions for uniqueness of equilibrium in networks. Mathematics of Operations Research, 30:225-244, 2005.
[11] I. Milchtaich. Network topology and the efficiency of equilibrium. Games and Economic Behavior, 57:321-346, 2006.
[12] I. Milchtaich. The equilibrium existence problem in finite network congestion games. Forthcoming in Lecture Notes in Computer Science, 2007.
[13] D. Monderer and L. S. Shapley. Potential Games. Games and Economic Behavior, 14:124-143, 1996.
[14] H. Moulin and S. Shenker. Strategyproof sharing of submodular costs: Budget balance versus efficiency. Economic Theory, 18(3):511-533, 2001.
[15] C. Papadimitriou. Algorithms, Games, and the Internet. In Proceedings of 33rd STOC, pages 749-753, 2001.
[16] R. W. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65-67, 1973.
[17] T. Roughgarden. The Price of Anarchy is Independent of the Network Topology. In STOC'02, pages 428-437, 2002.
[18] T. Roughgarden and É. Tardos. How bad is selfish routing? Journal of the ACM, 49(2):236-259, 2002.
[19] O. Rozenfeld and M.
Tennenholtz. Strong and correlated strong equilibria in monotone congestion games. In Workshop on Internet and Network Economics, 2006.

Strong Equilibrium in Cost Sharing Connection Games *

ABSTRACT
In this work we study cost sharing connection games, where each player has a source and sink he would like to connect, and the cost of the edges is either shared equally (fair connection games) or split in an arbitrary way (general connection games). We study the graph topologies that guarantee the existence of a strong equilibrium (where no coalition can improve the cost of each of its members) regardless of the specific costs on the edges. Our main existence results are the following: (1) For a single source and sink we show that there is always a strong equilibrium (both for fair and general connection games). (2) For a single source and multiple sinks we show that for a series parallel graph a strong equilibrium always exists (both for fair and general connection games). (3) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games. As for the quality of the strong equilibrium, we show that in any fair connection game the cost of a strong equilibrium is within a factor of Θ(log n) of the optimal solution, where n is the number of players. (This should be contrasted with the Ω(n) price of anarchy for the same setting.) For single source general connection games and single source single sink fair connection games, we show that a strong equilibrium is always an optimal solution.

* Research supported in part by a grant of the Israel Science Foundation, Binational Science Foundation (BSF), German-Israeli Foundation (GIF), Lady Davis Fellowship, an IBM faculty award, and the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

1. INTRODUCTION
Computational game theory has introduced the issue of
incentives to many of the classical combinatorial optimization problems. The view that the demand side is many times not under the control of a central authority that optimizes the global performance, but rather under the control of individuals with different incentives, has already led to many important insights. Consider classical routing and transportation problems such as multicast or multi-commodity problems, which are many times viewed as follows. We are given a graph with edge costs and connectivity demands between nodes, and our goal is to find a minimal cost solution. The classical centralized approach assumes that all the individual demands can both be completely coordinated and have no individual incentives. The game theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility, and the resulting outcome could be far from the optimal solution.

When considering individual incentives one needs to discuss the appropriate solution concept. Much of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept. Indeed Nash equilibrium has many benefits, and most importantly it always exists (in mixed strategies). However, the solution concept of Nash equilibrium is resilient only to unilateral deviations, while in reality players may be able to coordinate their actions. A strong equilibrium [4] is a state from which no coalition (of any size) can deviate and improve the utility of every member of the coalition (while possibly lowering the utility of players outside the coalition). This resilience to deviations by coalitions of the players is highly attractive, and one can hope that once a strong equilibrium is reached it is highly likely to be sustained. From a computational game theory point of view, an additional benefit of a strong equilibrium is that it has the potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior. The strong price of anarchy (SPoA), introduced in [1], is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution. Obviously, SPoA is meaningful only in those cases where a strong equilibrium exists.

A major downside of strong equilibrium is that most games do not admit any strong equilibrium. Even simple classical games like the prisoner's dilemma do not possess any strong equilibrium (the prisoner's dilemma is also an example of a congestion game that does not possess a strong equilibrium¹). This unfortunate fact has reduced the attention given to strong equilibrium, despite its highly attractive properties. Yet, [1] have identified two broad families of games, namely job scheduling and network formation, where a strong equilibrium always exists and the SPoA is significantly lower than the price of anarchy (which is the ratio between the worst Nash equilibrium and the optimal solution [15, 18, 5, 6]).

In this work we concentrate on cost sharing connection games, introduced by [3, 2]. In such a game, there is an underlying directed graph with edge costs, and individual users have connectivity demands (between a source and a sink). We consider two models. The fair cost connection model [2] allows each player to select a path from the source to the sink.² In this game the cost of an edge is shared equally between all the players that selected the edge, and the cost of a player is the sum of its costs on the edges it selected. The general connection game [3] allows each player to offer prices for edges. In this game an edge is bought if the sum of the offers at least covers its cost, and the cost of a player is the sum of its offers on the bought edges (in both games we assume that the player has to guarantee the connectivity between its source and sink).

In this work we focus on two important issues. The first one is identifying under what conditions the existence of a strong equilibrium is guaranteed, and the second one is the quality of the strong equilibria. For the existence part, we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs. One can view this separation between the graph topology and the edge costs as a separation between the underlying infrastructure and the costs the players observe to purchase edges. While one expects the infrastructure to be stable over long periods of time, the costs the players observe can be easily modified over short time periods. Such a topological characterization of the underlying infrastructure provides a network designer with topological conditions that will ensure stability in his network.

Our results are as follows. For the single commodity case (all the players have the same source and sink), there is a strong equilibrium in any graph (both for fair and general connection games). Moreover, the strong equilibrium is also the optimal solution (namely, the players share a shortest path from the common source to the common sink). For the case of a single source and multiple sinks (for example, in a multicast tree), we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph, and we show an example of a non-series-parallel graph that does not have a strong equilibrium. For the case of multi-commodity (multi sources and sinks), we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium, and we show an example of a series parallel graph that does not have a strong equilibrium. As far as we know, we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games. For any fair connection game we show that if there exists a strong equilibrium it is at most a factor of Θ(log n) from the optimal solution, where n is the number of players. This should be contrasted with the Θ(n) bound that exists for the price of anarchy [2]. For single source general connection games, we show that any series parallel graph possesses a strong equilibrium, and we show an example of a graph that does not have a strong equilibrium. In this case we also show that any strong equilibrium is optimal.

[Footnote 1: While any congestion game is known to admit at least one Nash equilibrium in pure strategies [16].]
[Footnote 2: The fair cost sharing scheme is also attractive from a mechanism design point of view, as it is a strategyproof cost-sharing mechanism [14].]

Related work
Topological characterizations for single-commodity network games have been recently provided for various equilibrium properties, including equilibrium existence [12, 7, 8], equilibrium uniqueness [10] and equilibrium efficiency [17, 11]. The existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [12]. The existence of strong equilibrium was studied in both utility-decreasing (e.g., routing) and utility-increasing (e.g., fair cost-sharing) congestion games. [7, 8] have provided a full topological characterization for SE existence in single-commodity utility-decreasing congestion games, and showed that a SE always exists if and only if the underlying graph is extension-parallel. [19] have shown that in single-commodity utility-increasing congestion games, the topological characterization is essentially equivalent to parallel links. In addition, they have shown that these results hold for correlated strong equilibria as well (in contrast to the decreasing setting, where correlated strong equilibria might not exist at all). While the fair cost sharing games we study are utility-increasing network congestion games, we derive a different characterization than [19] due to the different assumptions regarding the players' actions.³

2. MODEL
2.1 Game Theory definitions
2.2 Cost Sharing
Connection Games\n2.3 Extension Parallel and Series Parallel Directed Graphs\n3.\nFAIR CONNECTION GAMES\n3.1 Existence of Strong Equilibrium\n3.2 Strong Price of Anarchy\n4.\nGENERAL CONNECTION GAMES\nIn this section, we derive our results for general connection games.\n4.1 Existence of Strong Equilibrium\nWe begin with a characterization of the existence of a strong equilibrium in symmetric general connection games.\nSimilar to Theorem 3.1 (using a similar proof) we establish, THEOREM 4.1.\nIn every symmetric fair connection game there exists a strong equilibrium.\nWhile every single source general connection game possesses a pure Nash equilibrium [3], it does not necessarily admit some strong equilibrium .11\nthe fair-connection game inspired this example.\nTHEOREM 4.2.\nThere exists a single source general connection game that does not admit any strong equilibrium.\nPROOF.\nConsider single source general connection game with 3 players on the graph depicted in Figure 4.\nPlayer i wishes to connect the source s with its sink ti.We need to consider only the NE profiles: (i) if all three players use the link of cost 3, then there must be two agents whose total sum exceeds 2, thus they can both reduce cost by deviating to an edge of cost 2 \u2212 E. 
(ii) if two of the players use an edge of cost 2 \u2212 e jointly, and the third player uses a different edge of cost 2 \u2212 e, then, the players with non-zero payments can deviate to the path with the edge of cost 3 and reduce their costs (since before the deviation the total payments of the players is 4 \u2212 2e).\nWe showed that none of the NE are SE, and thus the game does not possess any SE.\nNext we show that for the class of series parallel graphs, there is always a strong equilibrium in the case of a single source.\nPROOF.\nLet \u039b be a single source general connection game on a SPG G = (V, E) with source s and sink t.\nWe present an algorithm that constructs a specific SE.\nWe first consider the following partial order between the players.\nFor players i and j, we have that i \u2192 j if there is a directed path from ti to tj.\nWe complete the partial order to a full order (in an arbitrary way), and w.l.o.g. we assume that 1 \u2192 2 \u2192 \u00b7 \u00b7 \u00b7 \u2192 n.\nThe algorithm COMPUTE-SE, considers the players in an increasing order, starting with player 1.\nEach player i will fully buy a subset of the edges, and any player j> i will consider the cost of those (bought) edges as zero.\nWhen COMPUTE-SE considers player j, the cost of the edges that players 1 to j \u2212 1 have bought is set to zero, and player j fully buys a shortest path Qj from s to tj.\nNamely, for every edges e G Qj \\ Ui i pays for any edge on any path from s to ti.\nConsider a player k> i and let Q0k = Qk U Q00k, where Q00k is a path connecting tk to t. 
Strong Equilibrium in Cost Sharing Connection Games*

ABSTRACT

In this work we study cost sharing connection games, where each player has a source and sink he would like to connect, and the cost of the edges is either shared equally (fair connection games) or in an arbitrary way (general connection games). We study the graph topologies that guarantee the existence of a strong equilibrium (where no coalition can improve the cost of each of its members) regardless of the specific costs on the edges. Our main existence results are the following: (1) For a single source and sink we show that there is always a strong equilibrium (both for fair and general connection games). (2) For a single source and multiple sinks we show that for a series parallel graph a strong equilibrium always exists (both for fair and general connection games). (3) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games. As for the quality of the strong equilibrium, we show that in any fair connection game the cost of a strong equilibrium is Θ(log n) from the optimal solution, where n is the number of players. (This should be contrasted with the Ω(n) price of anarchy for the same setting.) For single source general connection games and single source single sink fair connection games, we show that a strong equilibrium is always an
optimal solution.

* Research supported in part by a grant of the Israel Science Foundation, Binational Science Foundation (BSF), German-Israeli Foundation (GIF), Lady Davis Fellowship, an IBM faculty award, and the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

1. INTRODUCTION

Computational game theory has introduced the issue of incentives to many of the classical combinatorial optimization problems. The view that the demand side is many times not under the control of a central authority that optimizes the global performance, but rather under the control of individuals with different incentives, has already led to many important insights. Consider classical routing and transportation problems such as multicast or multi-commodity problems, which are many times viewed as follows. We are given a graph with edge costs and connectivity demands between nodes, and our goal is to find a minimal cost solution. The classical centralized approach assumes that all the individual demands can both be completely coordinated and have no individual incentives. The game theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility, and the resulting outcome could be far from the optimal solution.

When considering individual incentives one needs to discuss the appropriate solution concept. Much of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept. Indeed Nash equilibrium has many benefits, and most importantly it always exists (in mixed strategies). However, the solution concept of Nash equilibrium is resilient only to unilateral deviations, while in reality, players may be able to coordinate their actions.

A strong equilibrium [4] is a state from which no coalition (of any size) can deviate and improve the utility of every member of the coalition (while possibly lowering the utility of players outside the coalition). This resilience to deviations by coalitions of the players is highly attractive, and one can hope that once a strong equilibrium is reached it is highly likely to sustain. From a computational game theory point of view, an additional benefit of a strong equilibrium is that it has the potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior. The strong price of anarchy (SPoA), introduced in [1], is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution. Obviously, SPoA is meaningful only in those cases where a strong equilibrium exists.

A major downside of strong equilibrium is that most games do not admit any strong equilibrium. Even simple classical games like the prisoner's dilemma do not possess any strong equilibrium (the prisoner's dilemma is also an example of a congestion game that does not possess a strong equilibrium¹). This unfortunate fact has reduced the attention given to strong equilibrium, despite its highly attractive properties. Yet, [1] have identified two broad families of games, namely job scheduling and network formation, where a strong equilibrium always exists and the SPoA is significantly lower than the price of anarchy (which is the ratio between the worst Nash equilibrium and the optimal solution [15, 18, 5, 6]).

In this work we concentrate on cost sharing connection games, introduced by [3, 2]. In such a game, there is an underlying directed graph with edge costs, and individual users have connectivity demands (between a source and a sink). We consider two models. The fair cost connection model [2] allows each player to select a path from the source to the sink². In this game the cost of an edge is shared equally between all the players that selected the edge, and the cost of the player is the sum of its costs on the edges it selected. The general connection game
[3] allows each player to offer prices for edges. In this game an edge is bought if the sum of the offers at least covers its cost, and the cost of the player is the sum of its offers on the bought edges (in both games we assume that the player has to guarantee the connectivity between its source and sink).

In this work we focus on two important issues. The first one is identifying under what conditions the existence of a strong equilibrium is guaranteed, and the second one is the quality of the strong equilibria. For the existence part, we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs. One can view this separation between the graph topology and the edge costs as a separation between the underlying infrastructure and the costs the players observe to purchase edges. While one expects the infrastructure to be stable over long periods of time, the costs the players observe can be easily modified over short time periods. Such a topological characterization of the underlying infrastructure provides a network designer with topological conditions that will ensure stability in his network.

Our results are as follows. For the single commodity case (all the players have the same source and sink), there is a strong equilibrium in any graph (both for fair and general connection games). Moreover, the strong equilibrium is also the optimal solution (namely, the players share a shortest path from the common source to the common sink). For the case of a single source and multiple sinks (for example, in a multicast tree), we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph, and we show an example of a non-series parallel graph that does not have a strong equilibrium. For the case of multi-commodity (multi sources and sinks), we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium, and we show an example of a series parallel graph that does not have a strong equilibrium. As far as we know, we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games.

For any fair connection game we show that if there exists a strong equilibrium it is at most a factor of Θ(log n) from the optimal solution, where n is the number of players. This should be contrasted with the Θ(n) bound that exists for the price of anarchy [2]. For single source general connection games, we show that any series parallel graph possesses a strong equilibrium, and we show an example of a graph that does not have a strong equilibrium. In this case we also show that any strong equilibrium is optimal.

¹ While any congestion game is known to admit at least one Nash equilibrium in pure strategies [16].

² The fair cost sharing scheme is also attractive from a mechanism design point of view, as it is a strategyproof cost-sharing mechanism [14].

Related work

Topological characterizations for single-commodity network games have been recently provided for various equilibrium properties, including equilibrium existence [12, 7, 8], equilibrium uniqueness [10] and equilibrium efficiency [17, 11]. The existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [12]. The existence of strong equilibrium was studied in both utility-decreasing (e.g., routing) and utility-increasing (e.g., fair cost-sharing) congestion games. [7, 8] have provided a full topological characterization of SE existence in single-commodity utility-decreasing congestion games, and showed that a SE always exists if and only if the underlying graph is extension-parallel. [19] have shown that in single-commodity utility-increasing congestion games, the topological characterization is essentially equivalent to parallel links. In addition, they have
shown that these results hold for correlated strong equilibria as well (in contrast to the decreasing setting, where correlated strong equilibria might not exist at all). While the fair cost sharing games we study are utility-increasing network congestion games, we derive a different characterization than [19] due to the different assumptions regarding the players' actions.³

2. MODEL

2.1 Game Theory definitions

A game Λ = ⟨N, (Σi), (ci)⟩ has a finite set N = {1,..., n} of players. Player i ∈ N has a set Σi of actions, the joint action set is Σ = Σ1 × · · · × Σn, and a joint action S ∈ Σ is also called a profile. The cost function of player i is ci : Σ → R+, which maps the joint action S ∈ Σ to a non-negative real number. Let S = (S1,..., Sn) denote the profile of actions taken by the players, and let S−i = (S1,..., Si−1, Si+1,..., Sn) denote the profile of actions taken by all players other than player i. Note that S = (Si, S−i). The social cost of a game Λ is the sum of the costs of the players, and we denote by OPT(Λ) the minimal social cost of a game Λ, i.e., OPT(Λ) = min_{S ∈ Σ} costΛ(S), where costΛ(S) = Σ_{i ∈ N} ci(S).

A joint action S ∈ Σ is a pure Nash equilibrium if no player i ∈ N can benefit from unilaterally deviating from his action to another action, i.e., ∀i ∈ N, ∀S'i ∈ Σi : ci(S−i, S'i) ≥ ci(S). We denote by NE(Λ) the set of pure Nash equilibria in the game Λ.
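The pure Nash equilibrium condition just defined can be checked by exhaustive search in any small finite game. Below is a minimal sketch; the two-player game instance (two parallel edges "a" and "b" with fairly shared costs) is a made-up toy, not an example from the paper:

```python
from itertools import product

def is_pure_nash(action_sets, cost, profile):
    # No player may lower its cost by a unilateral deviation.
    for i, acts in enumerate(action_sets):
        for a in acts:
            if a != profile[i]:
                dev = profile[:i] + (a,) + profile[i + 1:]
                if cost(i, dev) < cost(i, profile):
                    return False
    return True

# Toy symmetric game: each player picks edge "a" (cost 2) or "b" (cost 3);
# an edge's cost is split equally among the players that picked it.
edge_cost = {"a": 2.0, "b": 3.0}

def cost(i, profile):
    share = profile.count(profile[i])
    return edge_cost[profile[i]] / share

action_sets = [("a", "b"), ("a", "b")]
nash = [p for p in product(*action_sets) if is_pure_nash(action_sets, cost, p)]
# both ("a", "a") and ("b", "b") are pure Nash equilibria here
```

Note that ("b", "b") is a Nash equilibrium even though it is suboptimal: a player deviating alone to "a" would pay its full cost 2 > 3/2. This is exactly the inefficiency that coalition deviations, and hence strong equilibria, rule out, since the coalition of both players moving together to ("a", "a") improves both.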
Resilience to coalitions: A pure deviation of a set of players Γ ⊆ N (also called a coalition) specifies an action for each player in the coalition, i.e., γ ∈ ×_{i ∈ Γ} Σi. A joint action S ∈ Σ is not resilient to a pure deviation of a coalition Γ if there is a pure joint action γ of Γ such that ci(S−Γ, γ) < ci(S) for every i ∈ Γ. A joint action S ∈ Σ is a k-strong equilibrium if it is resilient to pure deviations of every coalition of size at most k; a strong equilibrium (SE) is an n-strong equilibrium, and we denote by SE(Λ) the set of strong equilibria in the game Λ.

2.2 Cost Sharing Connection Games

A cost sharing connection game is played on a directed graph G = (V, E), where each edge e ∈ E has an associated cost ce ≥ 0⁴. In a connection game each player i ∈ N has an associated source si and sink ti. In a fair connection game the actions Σi of player i include all the paths from si to ti. The cost of each edge is shared equally by the set of all players whose paths contain it. Given a joint action, the cost of a player is the sum of his costs on the edges it selected. More formally, the cost function of each player on an edge e, in a joint action S, is fe(ne(S)) = ce/ne(S), where ne(S) is the number of players that selected a path containing edge e in S. The cost of player i, when selecting path Qi ∈ Σi, is ci(S) = Σ_{e ∈ Qi} fe(ne(S)).

⁴ In some of the existence proofs, we assume that ce > 0 for simplicity. The full version contains the complete proofs for the case ce ≥ 0.

In a general connection game the action Σi of player i is a payment vector pi, where pi(e) is how much player i is offering to contribute to the cost of edge e.⁵ Given a profile p, any edge e such that Σi pi(e) ≥ ce is considered bought, and Ep denotes the set of bought edges. Let Gp = (V, Ep) denote the graph bought by the players for profile p = (p1,..., pn). Clearly, each player tries to minimize his total payment, which is ci(p) = Σ_{e ∈ Ep} pi(e) if si is connected to ti in Gp, and infinity otherwise.⁶ We denote by c(p) = Σi ci(p) the total cost under the profile p.
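To make the fair-sharing definition concrete, the sketch below computes ci(S) = Σ_{e ∈ Qi} ce/ne(S) directly from the players' chosen paths. The three-edge graph is a hypothetical instance, not one of the paper's figures:

```python
from collections import Counter

def fair_costs(edge_cost, paths):
    # ne(S): number of players whose chosen path contains edge e.
    load = Counter(e for path in paths for e in set(path))
    # ci(S) = sum over e in Qi of ce / ne(S).
    return [sum(edge_cost[e] / load[e] for e in path) for path in paths]

# Two players leave s over a shared edge "sa", then split to their own sinks.
edge_cost = {"sa": 4.0, "at1": 1.0, "at2": 3.0}
costs = fair_costs(edge_cost, [["sa", "at1"], ["sa", "at2"]])
# each player pays 4/2 = 2 on the shared edge plus its private edge: [3.0, 5.0]
```

Since every used edge is fully paid for, the players' costs always sum to the total cost of the used edges (here 8.0), matching the social cost costΛ(S) = Σi ci(S).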
For a subgraph H of G we denote the total cost of the edges in H by c(H). A symmetric connection game implies that the source and sink of all the players are identical. (We also call a symmetric connection game a single source single sink connection game, or a single commodity connection game.) A single source connection game implies that the sources of all the players are identical. Finally, a multi-commodity connection game implies that each player has its own source and sink.

2.3 Extension Parallel and Series Parallel Directed Graphs

Our directed graphs are acyclic, and have a source node (from which all nodes are reachable) and a sink node (which every node can reach). We first define the following operations for composition of directed graphs.

• Identification: The identification operation allows us to collapse two nodes into one. More formally, given a graph G = (V, E), we define the identification of nodes v1 ∈ V and v2 ∈ V, forming a new node v, as creating a new graph G' = (V', E'), where V' = V − {v1, v2} ∪ {v} and E' includes the edges of E, where the edges of v1 and v2 are now connected to v.

• Parallel composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 ∈ V1 and s2 ∈ V2 and sinks t1 ∈ V1 and t2 ∈ V2, respectively, we define a new graph G = G1 || G2 as follows. Let G' = (V1 ∪ V2, E1 ∪ E2) be the union graph. To create G = G1 || G2 we identify the sources s1 and s2, forming a new source node s, and identify the sinks t1 and t2, forming a new sink t.

• Series composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 ∈ V1 and s2 ∈ V2 and sinks t1 ∈ V1 and t2 ∈ V2, respectively, we define a new graph G = G1 → G2 as follows. Let G' = (V1 ∪ V2, E1 ∪ E2) be the union graph. To create G = G1 → G2 we identify the vertices t1 and s2, forming a new vertex u. The graph G has a source s = s1 and a sink t = t2.

• Extension composition: A series composition when one of the graphs, G1 or G2, is composed of a single directed edge is an extension composition, and we denote it by G = G1 →e G2.

An extension parallel graph (EPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1 || G2, or (3) a graph G = G1 →e G2, where G1 and G2 are extension parallel graphs (and in the extension composition either G1 or G2 is a single edge). A series parallel graph (SPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1 || G2, or (3) a graph G = G1 → G2, where G1 and G2 are series parallel graphs.

Given a path Q and two vertices u, v on Q, we denote the subpath of Q from u to v by Qu,v. The following lemma, whose proof appears in the full version, is the main topological tool in the case of a single source graph.

LEMMA 2.1. Let G be an SPG with source s and sink t. Given a path Q from s to t and a vertex t', there exists a vertex y ∈ Q such that any path Q' from s to t' contains y, and the paths Q'y,t' and Q are edge disjoint. (We call the vertex y the intersecting vertex of Q and t'.)

3. FAIR CONNECTION GAMES

This section derives our results for fair connection games.

3.1 Existence of Strong Equilibrium

While it is known that every fair connection game possesses a Nash equilibrium in pure strategies [2], this is not necessarily the case for a strong equilibrium. In this section, we study the existence of strong equilibrium in fair connection games. We begin with a simple case, showing that every symmetric fair connection game possesses a strong equilibrium.

THEOREM 3.1. Every symmetric fair connection game possesses a strong equilibrium.

PROOF. Let s' be the source and t' be the sink of all the players. We show that a profile S in which all the players choose the same shortest path Q (from the source s' to the sink t') is a strong equilibrium. Suppose by contradiction that S is not a SE. Then there is a coalition Γ that can deviate to a new profile S' such that the cost of every player j ∈ Γ decreases. Let Q'j be a new path used by player j ∈ Γ. Since Q is a shortest path, it holds that c(Q'j \ (Q ∩ Q'j)) ≥ c(Q \ (Q ∩ Q'j)), for any path Q'j.
Therefore for every player j ∈ Γ we have that cj(S') ≥ cj(S). However, this contradicts the fact that all players in Γ reduce their cost. (In fact, no player in Γ has reduced its cost.)

While every symmetric fair connection game admits a SE, this does not hold for every fair connection game. In what follows, we study the network topologies that admit a strong equilibrium for any assignment of edge costs, and give examples of topologies for which a strong equilibrium does not exist. The following lemma, whose proof appears in the full version, plays a major role in our proofs of the existence of SE.

LEMMA 3.2. Let Λ be a fair connection game on a series parallel graph G with source s and sink t. Assume that player i has si = s and ti = t and that Λ has some SE. Let S be a SE that minimizes the cost of player i (out of all SE), i.e., ci(S) = min_{T ∈ SE(Λ)} ci(T), and let S* be the profile that minimizes the cost of player i (out of all possible profiles), i.e., ci(S*) = min_{T ∈ Σ} ci(T). Then, ci(S) = ci(S*).

The next lemma considers parallel composition.

LEMMA 3.3. Let Λ be a fair connection game on a graph G = G1 || G2, where G1 and G2 are series parallel graphs. If every fair connection game on the graphs G1 and G2 possesses a strong equilibrium, then the game Λ possesses a strong equilibrium.

PROOF. Let G1 = (V1, E1) and G2 = (V2, E2) have sources s1 and s2 and sinks t1 and t2, respectively. Let Ti be the set of players with an endpoint in Vi \ {s, t}, for i ∈ {1, 2}. (An endpoint is either a source or a sink of a player.) Let T3 be the set of players j such that sj = s and tj = t. Let Λ1 and Λ2 be the original game on the respective graphs G1 and G2 with players T1 ∪ T3 and T2 ∪ T3, respectively. Let S' and S'' be the SE in Λ1 and Λ2 that minimize the cost of players in T3, respectively. Assume w.l.o.g. that ci(S') ≤ ci(S''), where player i ∈ T3. In addition, let Λ'2 be the game on the graph G2 with players T2, and let S̄ be a SE in Λ'2. We will show that the profile S = S' ∪ S̄ is a SE in Λ. Suppose by contradiction that S is not a SE. Then, there is a coalition Γ that can deviate such that the cost of every player j ∈ Γ decreases. By Lemma 3.2 and the assumption that ci(S') ≤ ci(S''), a player j ∈ T3 cannot improve his cost. Therefore, Γ ⊆ T1 ∪ T2. But this is a contradiction to S' being a SE in Λ1 or S̄ being a SE in Λ'2.

The following theorem considers the case of single source fair connection games.

THEOREM 3.4. Every single source fair connection game on a series parallel graph possesses a strong equilibrium.

PROOF. We prove the theorem by induction on the network size |V|. The claim obviously holds if |V| = 2. We show the claim for a series composition, i.e., G = G1 → G2, and for a parallel composition, i.e., G = G1 || G2, where G1 = (V1, E1) and G2 = (V2, E2) are SPGs with sources s1, s2, and sinks t1, t2, respectively.

Series composition. Let G = G1 → G2. Let T1 be the set of players j such that tj ∈ V1, and T2 be the set of players j such that tj ∈ V2 \ {s2}. Let Λ1 and Λ2 be the original game on the respective graphs G1 and G2 with players T1 ∪ T2 and T2, respectively. For every player i ∈ T2 with action Si in the game Λ, let Si ∩ E1 be his induced action in the game Λ1, and let Si ∩ E2 be his induced action in the game Λ2. Let S' be a SE in Λ1 that minimizes the cost of players in T2 (such a SE exists by the induction hypothesis and Lemma 3.2). Let S'' be any SE in Λ2. We will show that the profile S = S' ∪ S'' is a SE in the game Λ, i.e., for player j ∈ T2 we use the profile Sj = S'j ∪ S''j. Suppose by contradiction that S is not a SE. Then, there is a coalition Γ that can deviate such that the cost of every player j ∈
Γ decreases. Now, there are two cases: Case 1: Γ ⊆ T1. This is a contradiction to S' being a SE. Case 2: There exists a player j ∈ Γ ∩ T2. By Lemma 3.2, player j cannot improve his cost in Λ1, so the improvement is due to Λ2. Consider the coalition Γ ∩ T2; it would still improve its cost. However, this contradicts the fact that S'' is a SE in Λ2.

Parallel composition. Follows from Lemma 3.3.

While multi-commodity fair connection games on series parallel graphs do not necessarily possess a SE (see Theorem 3.6), fair connection games on extension parallel graphs always possess a strong equilibrium.

THEOREM 3.5. Every fair connection game on an extension parallel graph possesses a strong equilibrium.

Figure 1: Graph topologies.

PROOF. We prove the theorem by induction on the network size |V|. Let Λ be a fair connection game on an EPG G = (V, E). The claim obviously holds if |V| = 2. If the graph G is a parallel composition of two EPG graphs G1 and G2, then the claim follows from Lemma 3.3. It remains to prove the claim for extension composition. Suppose the graph G is an extension composition of the graph G1, consisting of a single edge e = (s1, t1), and an EPG G2 = (V2, E2) with terminals s2, t2, such that s = s1 and t = t2. (The case that G2 is a single edge is similar.) Let T1 be the set of players with source s1 and sink t1 (i.e., their path is in G1). Let T2 be the set of players with source and sink in G2. Let T3 be the set of players with source s1 and sink in V2 \ {t1}. Let Λ1 and Λ2 be the original game on the respective graphs G1 and G2 with players T1 ∪ T3 and T2 ∪ T3, respectively. Let S', S'' be SE in Λ1 and Λ2, respectively. We will show that the profile S = S' ∪ S'' is a SE in the game Λ. Suppose by contradiction that S is not a SE. Then, there is a coalition Γ of minimal size that can deviate such that the cost of any player j ∈ Γ decreases. Clearly, T1 ∩ Γ = ∅, since players in T1 have a single strategy. Hence, Γ ⊆ T2 ∪ T3. Any player j ∈ Γ cannot improve his cost in Λ1. Therefore, any player j ∈ Γ improves his cost in Λ2. However, this contradicts the fact that S'' is a SE in Λ2.

In the following theorem we provide a few examples of topologies in which a strong equilibrium does not exist, showing that our characterization is almost tight.

THEOREM 3.6. The following connection games exist: (1) There exists a multi-commodity fair connection game on a series parallel graph that does not possess a strong equilibrium. (2) There exists a single source fair connection game that does not possess a strong equilibrium.

PROOF. For claim (1) consider the graph depicted in Figure 1(a). This game has a unique NE where S1 = {e, c}, S2 = {b, f}, and each player has a cost of 5.⁷ However, consider the following coordinated deviation S': S'1 = {a, b, c} and S'2 = {b, c, d}. In this profile, each player pays a cost of 4, and thus improves its cost.

⁷ In any NE of the game, player 1 will buy the edge e and player 2 will buy the edge f. This is since the alternate path, in the respective part, will cost the player 2.5. Thus, player 1 (player 2) will buy the edge c (edge b) alone, and each player will have a cost of 5.

Figure 2: Example of a single source connection game that does not admit SE.

For claim (2) consider a single source fair connection game on the graph G depicted in Figure 2. There are two players. Player i = 1, 2 wishes to connect the source s to its sink ti, and the unique NE is S1 = {a, b}, S2 = {a, c}, where each player has a cost of 2.⁸ Then, both players can deviate to S'1 = {h, f, d} and S'2 = {h, f, e}, and decrease their costs to 2 − ε/2.

Unfortunately, our characterization is not completely tight. The graph in Figure 1(b) is an example of a non-extension parallel graph which always admits a strong equilibrium.

3.2 Strong Price of Anarchy

While the price of anarchy in fair connection games can be as bad as n, the following theorem shows that the strong price of anarchy is bounded by H(n) = Σ_{i=1}^{n} 1/i.

THEOREM 3.7. The strong price of anarchy of a fair connection game with n players is at most H(n).

PROOF. Let Λ be a fair connection game on the graph G. We denote by Λ(Γ) the game played on the graph G by a set of players Γ, where the action set of player i ∈ Γ remains Σi (the same as in Λ). Let S = (S1,..., Sn) be a profile in the game Λ. We denote by S(Γ) = SΓ the induced profile of players in Γ in the game Λ(Γ). Let ne(S(Γ)) denote the load of edge e under the profile S(Γ) in the game Λ(Γ), i.e., ne(S(Γ)) = |{j | j ∈ Γ, e ∈ Sj}|. Similar to congestion games [16, 13], we denote by Φ(S(Γ)) the potential function of the profile S(Γ) in the game Λ(Γ), where Φ(S(Γ)) = Σ_e Σ_{i=1}^{ne(S(Γ))} fe(i), and define Φ(S(∅)) = 0. In our case, it holds that

Φ(S(Γ)) = Σ_e ce · H(ne(S(Γ))). (1)

Let S be a SE, and let S* be the profile of the optimal solution. We define an order on the players as follows. Let Γn = {1,..., n} be the set of all the players. For each k = n,..., 1, since S is a SE, there exists a player in Γk, w.l.o.g. call it player k, such that

ck(S) ≤ ck(S*(Γk)), (2)

and we let Γk−1 = Γk \ {k}. In a fair connection game, ck(S*(Γk)) = Φ(S*(Γk)) − Φ(S*(Γk−1)), and therefore

costΛ(S) = Σ_{k=1}^{n} ck(S) ≤ Σ_{k=1}^{n} (Φ(S*(Γk)) − Φ(S*(Γk−1))). (3)

Hence, costΛ(S) ≤ Φ(S*) = Σ_e ce · H(ne(S*)) ≤ H(n) · costΛ(S*), where the first inequality follows since the sum of the right hand side of equation (3) telescopes, and the second equality follows from equation (1).

Next we bound the SPoA when coalitions of size at most k are allowed.

THEOREM 3.8. The k-SPoA of a fair connection game with n players is at most (n/k) · H(k).

PROOF. Let S be a SE of Λ, and S* be the profile of the optimal solution of Λ. To simplify the proof, we assume that n/k is an integer. We partition the players into n/k groups T1,..., Tn/k, each of size k.
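The potential-function argument of Theorem 3.7 rests on the sandwich costΛ(S) ≤ Φ(S) ≤ H(n) · costΛ(S), where Φ(S) = Σ_e ce · H(ne(S)). The sketch below checks this numerically on an arbitrary, made-up load profile; it is a sanity check of the inequality, not part of the proof:

```python
def harmonic(k):
    # H(k) = 1 + 1/2 + ... + 1/k, with H(0) = 0
    return sum(1.0 / i for i in range(1, k + 1))

n = 3  # number of players
# edge -> (cost ce, load ne(S)) for a hypothetical profile S of the n players
edges = {"a": (4.0, 3), "b": (1.0, 1), "c": (2.0, 2)}

# In a fair connection game every used edge is fully paid for, so the
# social cost is the total cost of the used edges.
cost_S = sum(ce for ce, ne in edges.values() if ne > 0)
# Phi(S) = sum over e of ce * H(ne(S)) (the fair-sharing potential).
phi_S = sum(ce * harmonic(ne) for ce, ne in edges.values())

assert cost_S <= phi_S <= harmonic(n) * cost_S
```

The lower end holds because H(ne) ≥ 1 whenever an edge is used, and the upper end because H(ne) ≤ H(n); the SE ordering argument of the proof is what turns this sandwich into the H(n) bound on the strong price of anarchy.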
Let \u039bj be the game on the graph G played by the set of players Tj.\nLet S (Tj) denote the profile of the k players in Tj in the game \u039bj induced by the profile S of the game \u039b.\nBy Theorem 3.7, it holds that for each game \u039bj, j = 1,..., n\/k,\nwhere the first inequality follows since for each group Tj and player i E Tj, it holds that ci (S) PoS.\nPROOF.\nFor the lower bound of H (n) we observe that in the example presented in [2], the unique Nash equilibrium is also a strong equilibrium, and therefore k-SPoA = H (n) for any 1 PoS.\nWe next show that there exists a fair connection game in which the inequality is strict.\nFigure 4: Example of a single source general connection game that does not admit a strong equilibrium.\nThe edges that are not labeled with costs have a cost of zero.\nTHEOREM 3.12.\nThere exists a fair connection game in which SPoS> PoS.\nPROOF.\nConsider a single source fair connection game on the graph G depicted in Figure 3.10 Player i = 1,..., n wishes to connect the source s to his sink ti.\nAssume that each player i = 1,..., n \u2212 2 has his own path of cost 1\/i from s to ti and players i = n \u2212 1, n have a joint path of cost 2\/n from s to ti.\nAdditionally, all players can share a common path of cost 1 + e for some small e> 0.\nThe optimal solution connects all players through the common path of cost 1 + e, and this is also a Nash equilibrium with total cost 1 + E.\nIt is easy to verify that the solution where each player i = 1,..., n \u2212 2 uses his own path and users i = n \u2212 1, n use their joint path is the unique strong equilibrium of this game with total cost En \u2212 2 i + 2\nWhile the example above shows that the SPoS may be greater than the PoS, the upper bound of H (n) = \u0398 (log n), proven for the PoS [2], serves as an upper bound for the SPoS as well.\nThis is a direct corollary from theorem 3.7, as SPoS i will consider the cost of those (bought) edges as zero.\nWhen COMPUTE-SE considers player 
j, the cost of the edges that players 1 to j \u2212 1 have bought is set to zero, and player j fully buys a shortest path Qj from s to tj.\nNamely, for every edges e G Qj \\ Ui i pays for any edge on any path from s to ti.\nConsider a player k> i and let Q0k = Qk U Q00k, where Q00k is a path connecting tk to t. Let yk be the intersecting vertex of Q0k and ti.\nSince there exists a path from s to yk that was fully paid for by players j ci (p).\nThis contradicts the fact that player i improved its cost and therefore not all the players in \u0393 reduce their cost.\nThis implies that p is a strong equilibrium.\n4.2 Strong Price of Anarchy\nWhile for every single source general connection game, it holds that PoS = 1 [3], the price of anarchy can be as large as n, even for two parallel edges.\nHere, we show that any strong equilibrium in single source general connection games yields the optimal cost.\nPROOF.\nLet p = (p1,..., pn) be a strong equilibrium, and let T \u2217 be the minimum cost Steiner tree on all players, rooted at the (single) source s. Let Te \u2217 be the subtree of T \u2217 disconnected from s when edge e is removed.\nLet \u0393 (Te) be the set of players which have sinks in Te.\nFor a set of edges E, let c (E) = Ee \u2208 E ce.\nLet P (Te) = Ei \u2208 \u0393 (Te) ci (p).\nAssume by way of contradiction that c (p)> c (T \u2217).\nWe will show that there exists a sub-tree T0 of T \u2217, that connects a subset of players \u0393 C _ N, and a new set of payments \u00af p, such that for each i E \u0393, ci (\u00af p) P (Te \u2217).\nLet B be the set of bad edges.\nWe define T0 to be T \u2217 \u2212 Ue \u2208 B (Te \u2217 U {e}).\nNote that we can find a subset B0 of B such that Ue \u2208 B (Te \u2217 U {e}) is equal to Ue \u2208 B (Te \u2217 U {e}) and for any e1, e2 E B0 we have T \u2217 e1 n T \u2217 ee = 0.\n(The set B0 will include any edge e E B for which there is no other edge e0 E B on the path from e to the source s.) 
Considering the edges in e E B0 we can see that any subtree Te \u2217 we delete from T cannot decrease the difference between the payments and the cost of the remaining tree.\nTherefore, in T0 for every edge e, we have that c (Te0 U {e})

[...] ci(p̄) for i ∈ Γ(T′).\n(Recall that the payments have the restriction that player i can only pay for edges on the path from s to ti.)\nWe will now define the coalition payments p̄. Let ci(p̄, T′e) = Σe∈T′e p̄i(e) be the payments of player i for the subtree T′e.\nWe will show that for every subtree T′e, ci(p̄, T′e ∪ {e})
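The earlier claim that, while PoS = 1, the price of anarchy can be as large as n even for two parallel edges is usually shown with a construction along the following lines (a sketch; the instance is illustrative and not taken from the text):

```latex
% Two parallel edges from the single source s to the common sink t of
% all n players:
\[
  c(e_1) = 1 + \varepsilon, \qquad c(e_2) = n .
\]
% Profile p: every player i pays p_i(e_2) = n/n = 1, so c_i(p) = 1.
% A unilateral deviator must buy e_1 alone at cost 1 + \varepsilon > 1,
% hence p is a Nash equilibrium, and
\[
  \mathrm{PoA} \;\ge\; \frac{c(p)}{c(\mathrm{OPT})}
             \;=\; \frac{n}{1 + \varepsilon}
             \;\longrightarrow\; n \quad (\varepsilon \to 0).
\]
% Note that p is not a strong equilibrium: the grand coalition can
% jointly switch to e_1, each paying (1 + \varepsilon)/n, which is
% consistent with the strong-equilibrium optimality result above.
```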

M.\nA finite occurrence sequence is a finite sequence of steps and markings: M1[S1⟩M2 ... Mn[Sn⟩Mn+1 such that n ∈ N and Mi[Si⟩Mi+1 ∀i ∈ {1, ..., n}.\nThe set of all possible markings reachable for a net Net from a marking M is called its reachability set, and is denoted as R(Net, M).\n5.1 Mapping to Coloured Petri Nets\nOur normative structure is a labelled bi-partite graph.\nThe same is true for a Coloured Petri Net.\nWe present a mapping f from one to the other, in order to provide semantics for the normative structure and prove properties about it by using well-known theoretical results from work on CPNs.\nThe mapping f makes use of correspondences between normative scenes and CPN places, normative transitions and CPN transitions, and finally, between arc labels and CPN arc expressions:\nS → P\nB → T\nLin ∪ Lout → E\nThe set of types is the singleton set containing the colour NP (i.e. Σ = {NP}).\nThis complex type is structured as follows (we use CPN-ML [4] syntax):\ncolor NPT = with Obl | Per | Prh | NoMod\ncolor IP = with inform | declare | offer\ncolor UTT = record illp : IP\n            ag1, role1, ag2, role2 : string\n            content : string\n            time : int\ncolor NP = record mode : NPT\n           illoc : UTT\nModelling illocutions as norms without modality (NoMod) is a formal trick we use to ensure that sub-nets can be combined as explained below.\nArcs are mapped almost directly.\nA is a finite set of arcs and N is a node function, such that ∀a ∈ A ∃a′ ∈ Ain ∪ Aout . N(a) = a′.\nThe initialisation function I is defined as I(p) = Δs (∀s ∈ S, where p is obtained from s using the mapping; remember that s = ⟨ids, Δs⟩).\nFinally, the colour function C assigns the colour NP to every place: C(p) = NP (∀p ∈ P).\nWe are not making use of the guard function G.\nIn future work, this function can be used to model constraints when we extend the expressiveness of our norm language.\n5.2 Properties of Normative
Structures\nHaving defined the mapping from normative structures to Coloured Petri Nets, we now look at properties of CPNs that help us understand the complexity of conflict detection.\nOne question we would like to answer is whether, at a given point in time, a given normative structure is conflict-free.\nSuch a snapshot of a normative structure corresponds to a marking in the mapped CPN.\nDef.\n11.\nGiven a marking Mi, this marking is conflict-free if ¬∃p ∈ P. ∃np1, np2 ∈ Mi(p) such that np1.mode = Obl and np2.mode = Prh and np1.illoc and np2.illoc unify under a valid substitution.\nAnother interesting question would be whether a conflict will occur from such a snapshot of the system by propagating the normative positions.\nIn order to answer this question, we first translate the snapshot of the normative structure to the corresponding CPN and then execute the finite occurrence sequence of markings and steps, verifying the conflict-freedom of each marking as we go along.\nDef.\n12.\nGiven a marking Mi, a finite occurrence sequence Si, Si+1, ..., Sn is called conflict-free if and only if Mi[Si⟩Mi+1 ... Mn[Sn⟩Mn+1 and Mk is conflict-free for all k such that i ≤ k ≤ n + 1.\nHowever, the main question we would like to investigate is whether or not a given normative structure is conflict-resistant, that is, whether or not the agents enacting the MAS are able to bring about conflicts through their actions.\nAs soon as one includes the possibility of actions (or utterances) from autonomous agents, one loses determinism.\nHaving mapped the normative structure to a CPN, we now add CPN models of the agents' interactions.\nEach form of agent interaction (i.e. each activity) can be modelled using CPNs along the lines of Cost et al.
[5].\nThese non-deterministic CPNs feed tokens into the CPN that models the normative structure.\nThis leads to the introduction of non-determinism into the combined CPN.\nThe lower half of figure 3 shows part of a CPN model of an agent protocol where the arc denoted with '1' represents some utterance of an illocution by an agent.\nThe target transition of this arc not only moves a token on to the next state of this CPN, but also places a token in the place corresponding to the appropriate normative scene in the CPN model of the normative structure (via arc '2').\nTransition '3' finally could propagate that token in the form of an obligation, for example.\nThus, from a given marking, many different occurrence sequences are possible depending on the agents' actions.\nWe make use of the reachability set R to define a situation in which agents cannot cause conflicts.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nFigure 3: Constructing the combined CPN\nDef.\n13.\nGiven a net N, a marking M is conflict-resistant if and only if all markings in R(N, M) are conflict-free.\nChecking conflict-freedom of a marking can be done in polynomial time by checking all places of the CPN for conflicting tokens.\nConflict-freedom of an occurrence sequence in the CPN that represents the normative structure can also be checked in polynomial time since this sequence is deterministic given a snapshot.\nWhether or not a normative structure is designed safely corresponds to checking the conflict-resistance of the initial marking M0.\nNow, verifying conflict-resistance of a marking becomes a very difficult task.\nIt corresponds to the reachability problem in a CPN: can a state be reached, or a marking achieved, that contains a conflict?\nThis reachability problem is known to be NP-complete for ordinary Petri Nets [22] and since CPNs are functionally identical, we cannot hope to verify conflict-resistance of a normative structure off-line in a
reasonable amount of time.\nTherefore, distributed, run-time mechanisms are needed to ensure that a normative structure maintains consistency.\nWe present one such mechanism in the following section.\n6.\nMANAGING NORMATIVE STRUCTURES Once a conflict (as defined in Section 4) has been detected, we propose to employ the unifier to resolve the conflict.\nIn our example, if the variables in prh(inform(a1, r1, a2, r2, p(Y, d), T )) do not get the values specified in substitution \u03c3 then there will not be a conflict.\nHowever, rather than computing the complement set of a substitution (which can be an infinite set) we propose to annotate the prohibition with the unifier itself and use it to determine what the variables of that prohibition cannot be in future unifications in order to avoid a conflict.\nWe therefore denote annotated prohibitions as prh(\u00afI) \u03a3, where \u03a3 = {\u03c31, ... , \u03c3n}, is a set of unifiers.\nAnnotated norms3 are interpreted as deontic constructs with curtailed influences, that is, their effect (on agents, roles and illocutions) has been limited by the set \u03a3 of unifiers.\nA prohibition may be in conflict with various obligations in a given normative scene s = id, \u0394 and we need to record (and possibly avoid) all these conflicts.\nWe define below an algorithm which ensures that a normative position will be added to a normative scene in such a way that it will not cause any conflicts.\n3 Although we propose to curtail prohibitions, the same machinery can be used to define the curtailment of obligations instead.\nThese different policies are dependent on the intended deontic semantics and requirements of the systems addressed.\nFor instance, some MASs may require that their agents should not act in the presence of conflicts, that is, the obligation should be curtailed.\n6.1 Conflict Resolution We propose a fine-grained way of resolving normative conflicts via unification.\nWe detect the overlapping of the influences of 
norms, i.e. how they affect the behaviour of the concerned agents, and we curtail the influence of the normative position by appropriately using the annotations when checking if the norm applies to illocutions.\nThe algorithm shown in Figure 4 depicts how we maintain a conflict-free set of norms.\nIt adds a given norm N to an existing, conflict-free normative state Δ, obtaining a resulting new normative state Δ′ which is conflict-free, that is, its prohibitions are annotated with a set of conflict sets indicating which bindings for variables have to be avoided for conflicts not to take place.\nalgorithm addNorm(N, Δ) begin\n1 timestamp(N)\n2 case N of\n3 per(Ī): Δ′ := Δ ∪ {N}\n4 prh(I): if ∃N′ ∈ Δ s.t. conflict(N, N′, σ) then Δ′ := Δ\n5 else Δ′ := Δ ∪ {N}\n6 prh(Ī):\n7 begin\n8 Σ := ∅\n9 for each N′ ∈ Δ do\n10 if conflict(N, N′, σ) then Σ := Σ ∪ {σ}\n11 Δ′ := Δ ∪ {N^Σ}\n12 end\n13 obl(Ī):\n14 begin\n15 Δ1 := ∅; Δ2 := ∅\n16 for each (N′^Σ) ∈ Δ do\n17 if N′ = prh(I) then\n18 if conflict(N′, N, σ) then Δ1 := Δ1 ∪ {N′^Σ}\n19 else nil\n20 else\n21 if conflict(N′, N, σ) then\n22 begin\n23 Δ1 := Δ1 ∪ {N′^Σ}\n24 Δ2 := Δ2 ∪ {N′^(Σ ∪ {σ})}\n25 end\n26 Δ′ := (Δ − Δ1) ∪ Δ2 ∪ {N}\n27 end\n28 end case\n29 return Δ′\nend\nFigure 4: Algorithm to Preserve Conflict-Freedom\nThe algorithm uses a case structure to differentiate the different possibilities for a given norm N.
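The addNorm algorithm of Figure 4 can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the Norm class, the toy unifier and the conflict check are hypothetical stand-ins for the paper's first-order unification machinery, and a normative state Δ is modelled as a mapping from norms to their annotation sets Σ.

```python
from dataclasses import dataclass

def is_var(t):
    # As in the paper's norm language, variables are capitalised symbols.
    return isinstance(t, str) and t[:1].isupper()

def is_ground(t):
    if isinstance(t, tuple):
        return all(is_ground(x) for x in t)
    return not is_var(t)

def unify(t1, t2, sigma=None):
    """Tiny first-order unifier over nested tuples (no occurs-check).
    Returns a substitution dict, or None if unification fails."""
    sigma = dict(sigma or {})
    if is_var(t1) and t1 in sigma:
        return unify(sigma[t1], t2, sigma)
    if is_var(t2) and t2 in sigma:
        return unify(t1, sigma[t2], sigma)
    if t1 == t2:
        return sigma
    if is_var(t1):
        sigma[t1] = t2
        return sigma
    if is_var(t2):
        sigma[t2] = t1
        return sigma
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            sigma = unify(a, b, sigma)
            if sigma is None:
                return None
        return sigma
    return None

@dataclass(frozen=True)
class Norm:
    mode: str     # 'obl' | 'prh' | 'per'
    illoc: tuple  # illocution schema, possibly containing variables

def conflict(prh, obl):
    """A prohibition and an obligation whose illocutions unify are in
    conflict; return the unifier sigma, or None."""
    if prh.mode != 'prh' or obl.mode != 'obl':
        return None
    return unify(prh.illoc, obl.illoc)

def add_norm(n, delta):
    """addNorm(N, Delta): delta maps each norm to its annotation Sigma
    (a set of unifiers; kept empty for permissions and obligations).
    Returns the new conflict-free normative state Delta'."""
    delta = {m: set(s) for m, s in delta.items()}
    if n.mode == 'per':                           # line 3: just add it
        delta[n] = set()
    elif n.mode == 'prh' and is_ground(n.illoc):  # lines 4-5
        if all(conflict(n, m) is None for m in delta):
            delta[n] = set()    # no conflicting obligation: add; else discard
    elif n.mode == 'prh':                         # lines 6-12: annotate, add
        delta[n] = {frozenset(s.items())
                    for m in delta
                    for s in [conflict(n, m)] if s is not None}
    else:                                         # lines 13-27: new obligation
        for m in list(delta):
            s = conflict(m, n)
            if s is None:
                continue
            if is_ground(m.illoc):
                del delta[m]                      # drop conflicting ground prh
            else:
                delta[m].add(frozenset(s.items()))  # curtail non-ground prh
        delta[n] = set()
    return delta
```

For instance, adding the payment obligation of Section 6.2 and then a non-ground prohibition over the same illocution schema leaves the prohibition annotated with the single unifier binding X, Y and Z, mirroring the curtailment described in the text.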
Line 3 addresses the case when the given norm is a permission: N is simply added to Δ.\nLines 4-5 address the case when we attempt to add a ground prohibition to a normative state: if it conflicts with any obligation, then it is discarded; otherwise it is added to the normative state.\nLines 6-12 describe the situation when the normative position to be added is a non-ground prohibition.\nIn this case, the algorithm initialises Σ to an empty set and loops (lines 9-10) through the norms N′ in the old normative state Δ.\nUpon finding one that conflicts with N, the algorithm updates Σ by adding the newly found conflict set σ to it (line 10).\nBy looping through Δ, we are able to check any conflicts between the new prohibition and the existing obligations, adequately building the annotation Σ to be used when adding N to Δ in line 11.\nLines 13-27 describe how a new obligation is accommodated into an existing normative state.\nWe make use of two initially empty, temporary sets, Δ1 and Δ2.\nThe algorithm loops through Δ (lines 16-25) picking up those annotated prohibitions N′^Σ which conflict with the new obligation.\nThere are, however, two cases to deal with: the one when a ground prohibition is found (line 17), and its exception, covering non-ground prohibitions (line 20).\nIn both cases, the old prohibition is stored in Δ1 (lines 18 and 23) to be later removed from Δ (line 26).\nHowever, in the case of a non-ground prohibition, the algorithm updates its annotation of conflict sets (line 24).\nThe loop guarantees that an exhaustive (linear) search through a normative state takes place, checking if the new obligation is in conflict with any existing prohibitions, possibly updating the annotations of these conflicting prohibitions.\nIn line 26 the algorithm builds the new updated Δ′ by removing the old prohibitions
stored in Δ1 and adding the updated prohibitions stored in Δ2 (if any), as well as the new obligation N.\nOur proposed algorithm is correct in that, for a given normative position N and a normative state Δ, it provides a new normative state Δ′ in which all prohibitions have annotations recording how they unify with existing obligations.\nThe annotations can be empty, though: this is the case when we have a ground prohibition or a prohibition which does not unify/conflict with any obligation.\nPermissions do not affect our algorithm and they are appropriately dealt with (line 3).\nAny attempt to insert a ground prohibition which conflicts yields the same normative state (line 4).\nWhen a new obligation is being added, the algorithm guarantees that all prohibitions are considered (lines 14-27), leading to the removal of conflicting ground prohibitions or the update of annotations of non-ground prohibitions.\nThe algorithm always terminates: the loops are over a finite set Δ and the conflict checks and set operations always terminate.\nThe complexity of the algorithm is linear: the set Δ is only examined once for each possible case of norm to be added.\nWhen managing normative states we may also need to remove normative positions.\nThis is straightforward: permissions can be removed without any problems; annotated prohibitions can also be removed without further considerations; obligations, however, require some housekeeping.\nWhen an obligation is to be removed, we must check it against all annotated prohibitions in order to update their annotations.\nWe apply the conflict check and obtain a unifier, then remove this unifier from the prohibition's annotation.\nWe invoke the removal algorithm as removeNorm(N, Δ): it returns a new normative state Δ′ in which N has been removed, with possible alterations to other normative positions as explained.\n6.2 Enactment of a Normative Structure\nThe enactment of a normative structure
amounts to the parallel, distributed execution of normative scenes and normative transitions.\nFor illustrative purposes, hereafter we shall describe the interplay between the payment and delivery normative scenes and the normative transition nt linking them in the upper half of figure 2.\nWith this aim, consider for instance that obl(inform(jules, client, rod, acc, pay(copper, 400, 350), T)) ∈ Δpayment and that Δdelivery holds prh(inform(rod, wm, jules, client, delivered(Z, Q), T)).\nSuch states indicate that client Jules is obliged to pay #400 for 350kg of copper to accountant Rod according to the payment normative scene, whereas Rod, taking up the role of warehouse manager this time, is prohibited to deliver anything to client Jules according to the delivery normative scene.\nFor each normative scene, the enactment process goes as follows.\nFirstly, it processes its incoming message queue that contains three types of messages: utterances from the activity it is linked to; and normative commands either to add or to remove normative positions.\nFor instance, in our example, the payment normative scene collects the illocution I = utt(inform(jules, client, rod, acc, pay(copper, 400, 350), 35)) standing for client Jules' pending payment for copper (via arrow A in figure 2).\nUtterances are timestamped and subsequently added to the normative state.\nWe would have Δ′payment = Δpayment ∪ {I} in our example.\nUpon receiving normative commands to either add or remove a normative position, the normative scene invokes the corresponding addition or removal algorithm described in Section 6.1.\nSecondly, the normative scene acknowledges its state change by sending a trigger message to every outgoing normative transition it is connected to.\nIn our example, the payment normative scene would be signalling its state change to normative transition nt.\nFor normative transitions, the process works differently.\nBecause each normative transition controls
the operation of a single rule, upon receiving a trigger message, it polls every incoming normative scene for substitutions for the relevant illocution schemata on the LHS of its rule.\nIn our example, nt (being responsible for the rule described in Section 3.4) would poll the payment normative scene (via arrow B) for substitutions.\nUpon receiving replies from them (in the form of sets of substitutions together with time-stamps), it has to unify substitutions from each of these normative scenes.\nFor each unification it finds, the rule is fired, and hence the corresponding normative command is sent along to the output normative scene.\nThe normative transition then keeps track of the firing message it sent on and of the time-stamps of the normative positions that triggered the firing.\nThis is done to ensure that the very same normative positions in the LHS of a rule only trigger its firing once.\nIn our example, nt would be receiving σ = {X/jules, Y/rod, Z/copper, Q/350} from the payment normative scene.\nSince the substitutions in σ unify with nt's rule, the rule is fired, and the normative command add(delivery : obl(rod, wm, jules, client, delivered(copper, 350), T)) is sent along to the delivery normative scene to oblige Rod to deliver to client Jules 350kg of copper.\nAfter that, the delivery normative scene would invoke the addNorm algorithm from figure 4 with Δdelivery and N = obl(rod, wm, jules, client, delivered(copper, 350)) as arguments.\n7.\nRELATED WORK AND CONCLUSIONS\nOur contributions in this paper are three-fold.\nFirstly, we introduce an approach for the management of and reasoning about norms in a distributed manner.\nTo our knowledge, there is little work published in this direction.\nIn [8, 21], two languages are presented for the distributed enforcement of norms in MAS.\nHowever, in both works, each agent has a local message interface that forwards legal messages according to a set of norms.\nSince these interfaces are local
to each agent, norms can only be expressed in terms of actions of that agent.\nThis is a serious disadvantage, e.g. when one needs to activate an obligation to one agent due to a certain message of another one.\nThe second contribution is the proposal of a normative structure.\nThe notion is fruitful because it allows the separation of normative and procedural concerns.\nThe normative structure we propose makes evident the similarity between the propagation of normative positions and the propagation of tokens in Coloured Petri Nets.\nThat similarity readily suggests a mapping between the two, and gives grounds to a convenient analytical treatment of the normative structure, in general, and the complexity of conflict detection, in particular.\nThe idea of modelling interactions (in the form of conversations) via Petri Nets has been investigated in [18], where the interaction medium and individual agents are modelled as CPN sub-nets that are subsequently combined for analysis.\nIn [5], conversations are first designed and analysed at the level of CPNs and thereafter translated into protocols.\nLin et al.
[20] map conversation schemata to CPNs.\nTo our knowledge, the use of this representation in the support of conflict detection in regulated MAS has not been reported elsewhere.\nFinally, we present a distributed mechanism to resolve normative conflicts.\nSartor [25] treats normative conflicts from the point of view of legal theory and suggests a way to order the norms involved.\nHis idea is implemented in [12] but requires a central resource for norm maintenance.\nThe approach to conflict detection and resolution is an adaptation and extension of the work on instantiation graphs reported in [17] and a related algorithm in [27].\nThe algorithm presented in the current paper can be used to manage normative states distributedly: normative scenes that happen in parallel have an associated normative state \u0394 to which the algorithm is independently applied each time a new norm is to be introduced.\nThese three contributions we present in this paper open many possibilities for future work.\nWe should mention first, that as a broad strategy we are working on a generalisation of the notion of normative structure to make it operate with different coordination models, with richer deontic content and on top of different computational realisations of regulated MAS.\nAs a first step in this direction we are taking advantage of the de-coupling between interaction protocols and declarative normative guidance that the normative structure makes available, to provide a normative layer for electronic institutions (as defined in [1]).\nWe expect such coupling will endow electronic institutions with a more flexible -and more expressive- normative environment.\nFurthermore, we want to extend our model along several directions: (1) to handle negation and constraints as part of the norm language, and in particular the notion of time; (2) to accommodate multiple, hierarchical norm authorities based on roles, along the lines of Cholvy and Cuppens [3] and power relationships as suggested 
by Carabelea et al. [2]; (3) to capture in the conflict resolution algorithm different semantics relating the deontic notions by supporting different axiomatisations (e.g., relative strength of prohibition versus obligation, default deontic notions, deontic inconsistencies).\nOn the theoretical side, we intend to use analysis techniques of CPNs in order to characterise classes of CPNs (e.g., acyclic, symmetric, etc.) corresponding to families of Normative Structures that are susceptible to tractable offline conflict detection.\nThe combination of these techniques along with our online conflict resolution mechanisms is intended to endow MAS designers with the ability to incorporate norms into their systems in a principled way.\n8.\nREFERENCES\n[1] J. L. Arcos, M. Esteva, P. Noriega, J. A. Rodríguez, and C. Sierra.\nEngineering open environments with electronic institutions.\nJournal on Engineering Applications of Artificial Intelligence, 18(2):191-204, 2005.\n[2] C. Carabelea, O. Boissier, and C. Castelfranchi.\nUsing social power to enable agents to reason about being part of a group.\nIn 5th International Workshop, ESAW 2004, pages 166-177, 2004.\n[3] L. Cholvy and F. Cuppens.\nSolving normative conflicts by merging roles.\nIn Fifth International Conference on Artificial Intelligence and Law, Washington, USA, 1995.\n[4] S. Christensen and T. B. Haagh.\nDesign/CPN - overview of CPN ML syntax.\nTechnical report, University of Aarhus, 1996.\n[5] R. S. Cost, Y. Chen, T. W. Finin, Y. Labrou, and Y. Peng.\nUsing colored petri nets for conversation modeling.\nIn Issues in Agent Communication, pages 178-192, London, UK, 2000.\n[6] F. Dignum.\nAutonomous Agents with Norms.\nArtificial Intelligence and Law, 7(1):69-79, 1999.\n[7] A. Elhag, J. Breuker, and P. Brouwer.\nOn the Formal Analysis of Normative Conflicts.\nInformation & Communications Technology Law, 9(3):207-217, Oct. 2000.\n[8] M. Esteva, W. Vasconcelos, C. Sierra, and J. A.
Rodríguez-Aguilar.\nNorm consistency in electronic institutions.\nVolume 3171 (LNAI), pages 494-505.\nSpringer-Verlag, 2004.\n[9] M. Fitting.\nFirst-Order Logic and Automated Theorem Proving.\nSpringer-Verlag, New York, U.S.A., 1990.\n[10] N. Fornara, F. Viganò, and M. Colombetti.\nAn Event Driven Approach to Norms in Artificial Institutions.\nIn AAMAS05 Workshop: Agents, Norms and Institutions for Regulated Multiagent Systems (ANI@REM), Utrecht, 2005.\n[11] D. Gaertner, P. Noriega, and C. Sierra.\nExtending the BDI architecture with commitments.\nIn Proceedings of the 9th International Conference of the Catalan Association of Artificial Intelligence, 2006.\n[12] A. García-Camino, P. Noriega, and J.-A. Rodríguez-Aguilar.\nAn Algorithm for Conflict Resolution in Regulated Compound Activities.\nIn 7th Int. Workshop - ESAW '06, 2006.\n[13] A. García-Camino, J.-A. Rodríguez-Aguilar, C. Sierra, and W. Vasconcelos.\nA Distributed Architecture for Norm-Aware Agent Societies.\nIn DALT III, volume 3904 (LNAI), pages 89-105.\nSpringer, 2006.\n[14] F. Giunchiglia and L. Serafini.\nMulti-language hierarchical logics or: How we can do without modal logics.\nArtificial Intelligence, 65(1):29-70, 1994.\n[15] J. Habermas.\nThe Theory of Communicative Action, Volume One: Reason and the Rationalization of Society.\nBeacon Press, 1984.\n[16] K. Jensen.\nColoured Petri Nets: Basic Concepts, Analysis Methods and Practical Uses (Volume 1).\nSpringer, 1997.\n[17] M. Kollingbaum and T. Norman.\nStrategies for resolving norm conflict in practical reasoning.\nIn ECAI Workshop Coordination in Emergent Agent Societies 2004, 2004.\n[18] J.-L. Koning, G. Francois, and Y. Demazeau.\nFormalization and pre-validation for interaction protocols in multi-agent systems.\nIn ECAI, pages 298-307, 1998.\n[19] B. Kramer and J. Mylopoulos.\nKnowledge Representation.\nIn S. C.
Shapiro, editor, Encyclopedia of Artificial Intelligence, volume 1, pages 743-759.\nJohn Wiley & Sons, 1992.\n[20] F. Lin, D. H. Norrie, W. Shen, and R. Kremer.\nA schema-based approach to specifying conversation policies.\nIn Issues in Agent Communication, pages 193-204, 2000.\n[21] N. Minsky.\nLaw Governed Interaction (LGI): A Distributed Coordination and Control Mechanism (An Introduction, and a Reference Manual).\nTechnical report, Rutgers University, 2005.\n[22] T. Murata.\nPetri nets: Properties, analysis and applications.\nProceedings of the IEEE, 77(4):541-580, 1989.\n[23] S. Parsons, C. Sierra, and N. Jennings.\nAgents that reason and negotiate by arguing.\nJournal of Logic and Computation, 8(3):261-292, 1998.\n[24] A. Ricci and M. Viroli.\nCoordination Artifacts: A Unifying Abstraction for Engineering Environment-Mediated Coordination in MAS.\nInformatica, 29:433-443, 2005.\n[25] G. Sartor.\nNormative conflicts in legal reasoning.\nArtificial Intelligence and Law, 1(2-3):209-235, June 1992.\n[26] M. Sergot.\nA Computational Theory of Normative Positions.\nACM Trans. Comput. Logic, 2(4):581-622, 2001.\n[27] W. W. Vasconcelos, M. Kollingbaum, and T. Norman.\nResolving Conflict and Inconsistency in Norm-Regulated Virtual Organisations.\nIn Proceedings of AAMAS '07, Hawai'i, USA, 2007.\nIFAAMAS.\n[28] G. H. von Wright.\nNorm and Action: A Logical Inquiry.\nRoutledge and Kegan Paul, London, 1963.\n[29] M. Wooldridge.\nAn Introduction to Multiagent Systems.\nJohn Wiley & Sons, Chichester, UK, Feb.
2002.","lvl-3":"Distributed Norm Management in Regulated Multi-Agent Systems *\nABSTRACT\nNorms are widely recognised as a means of coordinating multi-agent systems.\nThe distributed management of norms is a challenging issue and we observe a lack of truly distributed computational realisations of normative models.\nIn order to regulate the behaviour of autonomous agents that take part in multiple, related activities, we propose a normative model, the Normative Structure (NS), an artifact that is based on the propagation of normative positions (obligations, prohibitions, permissions) as consequences of agents' actions.\nWithin a NS, conflicts may arise due to the dynamic nature of the MAS and the concurrency of agents' actions.\nHowever, ensuring conflict-freedom of a NS at design time is computationally intractable.\nWe show this by formalising the notion of conflict, providing a mapping of NSs into Coloured Petri Nets and borrowing well-known theoretical results from that field.\nSince online conflict resolution is required, we present a tractable algorithm to be employed distributedly.\nWe then demonstrate that this algorithm is paramount for the distributed enactment of a NS.\n1.\nINTRODUCTION\nA fundamental feature of open, regulated multi-agent systems in which autonomous agents interact is that participating agents are meant to comply with the conventions of the system.\nNorms can be used to model such conventions and hence as a means to regulate the observable behaviour of agents [6, 29].\nThere are many contributions on the subject of norms from sociologists, philosophers and logicians (e.g., [15, 28]).\nHowever, there are very few proposals for computational realisations of normative models--the way norms can be integrated in the design and execution of MASs.\nThe few that exist (e.g.
[10, 13, 24]), operate in a centralised manner which creates bottlenecks and single points-of-failure.\nTo our knowledge, no proposal truly supports the distributed enactment of normative environments.\nIn our paper we approach that problem and propose means to handle conflicting commitments in open, regulated, multiagent systems in a distributed manner.\nThe type of regulated MAS we envisage consists of multiple, concurrent, related activities where agents interact.\nEach agent may concurrently participate in several activities, and change from one activity to another.\nAn agent's actions within an activity may have consequences in the form of normative positions (i.e. obligations, permissions, and prohibitions) [26] that may constrain its future behaviour.\nFor instance, a buyer agent who runs out of credit may be forbidden to make further offers, or a seller agent is obliged to deliver after closing a deal.\nWe assume that agents may choose not to fulfill all their obligations and hence may be sanctioned by the MAS.\nNotice that, when activities are distributed, normative positions must flow from the activities in which they are generated to those in which they take effect.\nFor instance, the seller's obligation above must flow (or be propagated) from a negotiation activity to a delivery activity.\nSince in an open, regulated MAS one cannot embed normative aspects into the agents' design, we adopt the view that the MAS should be supplemented with a separate set of norms that further regulates the behaviour of participating agents.\nIn order to model the separation of concerns between the coordination level (agents' interactions) and the normative level (propagation of normative positions), we propose an artifact called the Normative Structure (NS).\nWithin a NS conflicts may arise due to the dynamic nature of the MAS and the concurrency of agents' actions.\nFor instance, an agent may be obliged and prohibited to do the\nvery same action in an activity.\nSince 
the regulation of a MAS entails that participating agents need to be aware of the validity of those actions that take place within it, such conflicts ought to be identified and possibly resolved if a claim of validity is needed for an agent to engage in an action or be sanctioned.\nHowever, ensuring conflict-freedom of a NS at design time is computationally intractable.\nWe show this by formalising the notion of conflict, providing a mapping of NSs into Coloured Petri Nets (CPNs) and borrowing well-known theoretical results from the field of CPNs.\nWe believe that online conflict detection and resolution is required.\nHence, we present a tractable algorithm for conflict resolution.\nThis algorithm is paramount for the distributed enactment of a NS.\nThe paper is organised as follows.\nIn Section 2 we detail a scenario to serve as an example throughout the paper.\nNext, in Section 3 we formally define the normative structure artifact.\nFurther on, in Section 4 we formalise the notion of conflict to subsequently analyse the complexity of conflict detection in terms of CPNs in Section 5.\nSection 6 describes the computational management of NSs by describing their enactment and presenting an algorithm for conflict resolution.\nFinally, we comment on related work, draw conclusions and report on future work in Section 7.\n2.\nSCENARIO\n3.\nNORMATIVE STRUCTURE\n3.1 Basic Concepts\n3.2 Formal Definition of the Notion of NS\n3.3 Intended Semantics\n3.4 Example\n4.\nCONFLICT DEFINITION\n5.\nFORMALISING CONFLICT-FREEDOM\n5.1 Mapping to Coloured Petri Nets\n5.2 Properties of Normative Structures\n6.\nMANAGING NORMATIVE STRUCTURES\n6.1 Conflict Resolution\n6.2 Enactment of a Normative Structure\n7.\nRELATED WORK AND CONCLUSIONS\nOur contributions in this paper are three-fold.\nFirstly, we introduce an approach for the management of and reasoning about norms in a distributed manner.\nTo our knowledge, there is little work published in this direction.\nIn [8, 21], two languages are presented for the distributed enforcement of norms in MAS.\nHowever, in both works, each agent has a local message interface that forwards legal messages according to a set of norms.\nSince these interfaces are local to each agent, norms can only be expressed in terms of actions of that agent.\nThis is a serious disadvantage, e.g. when one needs to activate an obligation to one agent due to a certain message of another one.\nThe second contribution is the proposal of a normative structure.\nThe notion is fruitful because it allows the separation of normative and procedural concerns.\nThe normative structure we propose makes evident the similarity between the propagation of normative positions and the propagation of tokens in Coloured Petri Nets.\nThat similarity readily suggests a mapping between the two, and gives grounds to a convenient analytical treatment of the normative structure, in general, and the complexity of conflict detection, in particular.\nThe idea of modelling interactions (in the form of conversations) via Petri Nets has been investigated in [18], where the interaction medium and individual agents are modelled as CPN sub-nets that are subsequently combined for analysis.\nIn [5], conversations are first designed and analysed at the level of CPNs and thereafter translated into protocols.\nLin et al.
[20] map conversation schemata to CPNs. To our knowledge, the use of this representation in the support of conflict detection in regulated MAS has not been reported elsewhere. Finally, we present a distributed mechanism to resolve normative conflicts. Sartor [25] treats normative conflicts from the point of view of legal theory and suggests a way to order the norms involved. His idea is implemented in [12], but requires a central resource for norm maintenance. Our approach to conflict detection and resolution is an adaptation and extension of the work on instantiation graphs reported in [17] and of a related algorithm in [27]. The algorithm presented in the current paper can be used to manage normative states in a distributed manner: normative scenes that happen in parallel each have an associated normative state Δ to which the algorithm is applied independently whenever a new norm is to be introduced. The three contributions presented in this paper open many possibilities for future work. First, as a broad strategy, we are working on a generalisation of the notion of normative structure that operates with different coordination models, with richer deontic content, and on top of different computational realisations of regulated MAS. As a first step in this direction, we are taking advantage of the decoupling between interaction protocols and declarative normative guidance that the normative structure makes available to provide a normative layer for electronic institutions (as defined in [1]). We expect such a coupling to endow electronic institutions with a more flexible and more expressive normative environment. Furthermore, we want to extend our model along several directions: (1) to handle negation and constraints, and in particular the notion of time, as part of the norm language; (2) to accommodate multiple, hierarchical norm authorities based on roles, along the lines of Cholvy and Cuppens [3], and power relationships as suggested
by Carabelea et al. [2]; (3) to capture in the conflict resolution algorithm different semantics relating the deontic notions by supporting different axiomatisations (e.g., relative strength of prohibition versus obligation, default deontic notions, deontic inconsistencies). On the theoretical side, we intend to use analysis techniques of CPNs in order to characterise classes of CPNs (e.g., acyclic, symmetric) corresponding to families of Normative Structures that are susceptible to tractable offline conflict detection. The combination of these techniques along with our online conflict resolution mechanisms is intended to endow MAS designers with the ability to incorporate norms into their systems in a principled way.

Distributed Norm Management in Regulated Multi-Agent Systems

ABSTRACT

Norms are widely recognised as a means of coordinating multi-agent systems. The distributed management of norms is a challenging issue and we observe a lack of truly distributed computational realisations of normative models. In order to regulate the behaviour of autonomous agents that take part in multiple, related activities, we propose a normative model, the Normative Structure (NS), an artifact that is based on the propagation of normative positions (obligations, prohibitions, permissions) as consequences of agents' actions. Within a NS, conflicts may arise due to the dynamic nature of
the MAS and the concurrency of agents' actions. However, ensuring conflict-freedom of a NS at design time is computationally intractable. We show this by formalising the notion of conflict, providing a mapping of NSs into Coloured Petri Nets and borrowing well-known theoretical results from that field. Since online conflict resolution is required, we present a tractable algorithm to be employed in a distributed manner. We then demonstrate that this algorithm is paramount for the distributed enactment of a NS.

1. INTRODUCTION

A fundamental feature of open, regulated multi-agent systems in which autonomous agents interact is that participating agents are meant to comply with the conventions of the system. Norms can be used to model such conventions and hence as a means to regulate the observable behaviour of agents [6, 29]. There are many contributions on the subject of norms from sociologists, philosophers and logicians (e.g., [15, 28]). However, there are very few proposals for computational realisations of normative models, that is, for the way norms can be integrated in the design and execution of MASs. The few that exist (e.g. [10, 13, 24]) operate in a centralised manner, which creates bottlenecks and single points of failure. To our knowledge, no proposal truly supports the distributed enactment of normative environments. In our paper we approach that problem and propose means to handle conflicting commitments in open, regulated, multi-agent systems in a distributed manner. The type of regulated MAS we envisage consists of multiple, concurrent, related activities where agents interact. Each agent may concurrently participate in several activities, and change from one activity to another. An agent's actions within an activity may have consequences in the form of normative positions (i.e.
obligations, permissions, and prohibitions) [26] that may constrain its future behaviour.\nFor instance, a buyer agent who runs out of credit may be forbidden to make further offers, or a seller agent is obliged to deliver after closing a deal.\nWe assume that agents may choose not to fulfill all their obligations and hence may be sanctioned by the MAS.\nNotice that, when activities are distributed, normative positions must flow from the activities in which they are generated to those in which they take effect.\nFor instance, the seller's obligation above must flow (or be propagated) from a negotiation activity to a delivery activity.\nSince in an open, regulated MAS one cannot embed normative aspects into the agents' design, we adopt the view that the MAS should be supplemented with a separate set of norms that further regulates the behaviour of participating agents.\nIn order to model the separation of concerns between the coordination level (agents' interactions) and the normative level (propagation of normative positions), we propose an artifact called the Normative Structure (NS).\nWithin a NS conflicts may arise due to the dynamic nature of the MAS and the concurrency of agents' actions.\nFor instance, an agent may be obliged and prohibited to do the\nvery same action in an activity.\nSince the regulation of a MAS entails that participating agents need to be aware of the validity of those actions that take place within it, such conflicts ought to be identified and possibly resolved if a claim of validity is needed for an agent to engage in an action or be sanctioned.\nHowever, ensuring conflict-freedom of a NS at design time is computationally intractable.\nWe show this by formalising the notion of conflict, providing a mapping of NSs into Coloured Petri Nets (CPNs) and borrowing well-known theoretical results from the field of CPNs.\nWe believe that online conflict detection and resolution is required.\nHence, we present a tractable algorithm for conflict 
resolution. This algorithm is paramount for the distributed enactment of a NS. The paper is organised as follows. In Section 2 we detail a scenario to serve as an example throughout the paper. Next, in Section 3 we formally define the normative structure artifact. Further on, in Section 4 we formalise the notion of conflict to subsequently analyse the complexity of conflict detection in terms of CPNs in Section 5. Section 6 describes the computational management of NSs by describing their enactment and presenting an algorithm for conflict resolution. Finally, we comment on related work, draw conclusions and report on future work in Section 7.

2. SCENARIO

We use a supply-chain scenario in which companies and individuals come together at an online marketplace to conduct business. The overall transaction procedure may be organised as six distributed activities, represented as nodes in the diagram in Figure 1. They involve different participants whose behaviour is coordinated through protocols.

Figure 1: Activity Structure of the Scenario

In this scenario agents can play one of four roles: marketplace accountant (acc), client, supplier (supp) and warehouse manager (wm). The arrows connecting the activities represent how agents can move from one activity to another. After registering at the marketplace, clients and suppliers get together in an activity where they negotiate the terms of their transaction, i.e. prices, amounts of goods to be delivered, deadlines and other details. In the contract activity, the order becomes established and an invoice is prepared. The client will then participate in a payment activity, verifying his credit-worthiness and instructing his bank to transfer the correct amount of money. The supplier in the meantime will arrange for the goods to be delivered (e.g.
via a warehouse manager) in the delivery activity. Finally, agents can leave the marketplace conforming to a predetermined exit protocol. The marketplace accountant participates in most of the activities as a trusted provider of auditing tools. In the rest of the paper we shall build on this scenario to exemplify the notion of normative structure and to illustrate our approach to conflict detection and resolution in a distributed setting.

3. NORMATIVE STRUCTURE

In MASs agents interact according to protocols, which are naturally distributed. We advocate that actions in one such protocol may have an effect on the enactment of other protocols; certain actions can become prohibited or obligatory, for example. We take normative positions to be obligations, prohibitions and permissions, akin to the work described in [26]. We call the intention of adding or removing a normative position a normative command. Occurrences of normative positions in one protocol may also have consequences for other protocols. In order to define our norm language and specify how normative positions are propagated, we have been inspired by multi-context systems [14]. These systems allow the structuring of knowledge into distinct formal theories and the definition of relationships between them. The relationships are expressed as bridge rules: deducibility of formulae in some contexts leads to the deduction of other formulae in other contexts. Recently, these systems have been successfully used to define agent architectures [11, 23]. The metaphor translates to our current work as follows: the utterance of illocutions and/or the existence of normative positions in some normative scenes leads to the deduction of normative positions in other normative scenes. We are concerned with the propagation and distribution of normative positions within a network of distributed normative scenes as a consequence of agents' actions. We take normative scenes to be sets of normative positions and
utterances that are associated with an underlying interaction protocol corresponding to an activity. (Here we abstract from protocols and refer to them generically as activities.) In this section, we first present a simple language capturing these aspects and formally introduce the notions of normative scene, normative transition rule and normative structure. We give the intended semantics of these rules and show how to control a MAS via norms in an example.

3.1 Basic Concepts

The building blocks of our language are terms and atomic formulae. Some examples of terms are Credit, price and offer(bible, 30), which are respectively a variable, a constant and a function. We will make use of identifiers throughout the paper, which are constant terms, and we also need the following definition:

DEF. 2. An atomic formula is any construct p(t1, ..., tn), where p is an n-ary predicate symbol and t1, ..., tn are terms. The set of all atomic formulae is denoted as Δ.

We focus on an expressive class of MASs in which interaction is carried out by means of illocutionary speech acts exchanged among participating agents:

DEF. 3. Illocutions I are ground atomic formulae which have the form p(ag, r, ag', r', δ, t), where p is an element of a set of illocutionary particles (e.g.
inform, request, offer); ag, ag' are agent identifiers; r, r' are role identifiers; δ, an arbitrary ground term, is the content of the message, built from a shared content language; and t ∈ ℕ is a time stamp. The intuitive meaning of p(ag, r, ag', r', m, t) is that agent ag, playing role r, sent message m to agent ag', playing role r', at time t. An example of an illocution is inform(ag4, supp, ag3, client, offer(wire, 12), 10). Sometimes it is useful to refer to illocutions that are not fully grounded, that is, those that may contain uninstantiated (free) variables. In the description of a protocol, for instance, the precise values of the messages exchanged can be left unspecified. During the enactment of the protocol, agents will produce the actual values which give rise to a ground illocution. We can thus define illocution schemata.

3.2 Formal Definition of the Notion of NS

We first define normative scenes as follows:

DEF. 5. A normative scene is a tuple s = (ids, Δs) where ids is a scene identifier and Δs is the set of atomic formulae (i.e. utterances and normative positions) that hold in s.

We will also refer to Δs as the state of normative scene s.
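To make these definitions concrete, the following is a minimal sketch (ours, not from the paper; all names and the uppercase-variable convention are invented for illustration) that represents illocutions and normative-scene states as nested tuples and matches an illocution schema against a ground utterance by structural unification:

```python
from dataclasses import dataclass, field

def is_var(t):
    """Variables are uppercase identifiers, e.g. 'X' or 'Credit' (our convention)."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Chase a variable through the substitution until it is unbound or ground."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Structurally unify two terms/atomic formulae; return a substitution or None."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

@dataclass
class NormativeScene:
    """DEF. 5: a scene identifier plus the set of formulae holding in the scene."""
    id: str
    state: set = field(default_factory=set)

# A ground illocution (DEF. 3) and a schema with free variables X, Y, P, T:
ground = ('inform', 'ag4', 'supp', 'ag3', 'client', ('offer', 'wire', 12), 10)
schema = ('inform', 'X', 'supp', 'Y', 'client', ('offer', 'wire', 'P'), 'T')

print(unify(schema, ground))  # binds X to ag4, Y to ag3, P to 12, T to 10
```

Unification of this kind is what the paper later uses to decide whether a formula holds in a scene's state and, in Section 4, whether an obligation and a prohibition are in conflict.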
For instance, a snapshot of the state of the delivery normative scene of our scenario could be represented as:

utt(request(sean, client, kev, wm, receive(wire, 200), 20)),
utt(accept(kev, wm, sean, client, receive(wire, 200), 30)),
obl(inform(kev, wm, sean, client, delivered(wire, 200), 30))

That is, agent Sean, taking up the client role, has requested agent Kev (taking up the warehouse manager role wm) to receive 200kg of wire, and agent Kev is obliged to deliver 200kg of wire to Sean since he accepted the request. Note that the state of a normative scene Δs evolves over time. These normative scenes are connected to one another via normative transitions that specify how utterances and normative positions in one scene affect other normative scenes. As mentioned above, activities are not independent, since illocutions uttered in some of them may have an effect on other ones. Normative transition rules define the conditions under which a normative command is generated. These conditions are either utterances or normative positions associated with a given protocol (denoted e.g. activity: utterance) and yield a normative command, i.e. the addition or removal of another normative position, possibly related to a different activity. Our transition rules are thus defined, where Ī is an illocution schema, N is a normative position (i.e.
permission, prohibition or obligation), ids is an identifier for activity s, and C is a normative command. We endow our language with the usual semantics of rule-based languages [19]. Rules map an existing normative structure to a new normative structure in which only the states of the normative scenes change. In the definitions below we rely on the standard concept of substitution [9].

DEF. 7. A normative transition is a tuple b = (idb, rb) where idb is an identifier and rb is a normative transition rule.

We propose to extend the notion of a MAS, regulated by protocols, with an extra layer consisting of normative scenes and normative transitions. This layer is represented as a bi-partite graph that we term the normative structure. A normative structure relates normative scenes and normative transitions, specifying which normative positions are to be generated or removed in which normative scenes:

1. Each atomic formula appearing in the LHS of a rule rb must be of the form (ids: D), where s ∈ S, D ∈ Δ, and there exists ain ∈ Ain such that ain = (s, b) and Lin(ain) = D.

2. The atomic formula appearing in the RHS of a rule rb must be of the form add(ids: N) or remove(ids: N), where s ∈ S and there exists aout ∈ Aout such that aout = (b, s) and Lout(aout) = N.
3. For each a ∈ Ain such that a = (s, b), b = (idb, rb) and Lin(a) = D, the formula (ids: D) must occur in the LHS of rb.

4. For each a ∈ Aout such that a = (b, s), b = (idb, rb) and Lout(a) = N, either add(ids: N) or remove(ids: N) must occur in the RHS of rb.

The first two points ensure that every atomic formula on the LHS of a normative transition rule labels an arc entering the appropriate normative transition in the normative structure, and that the atomic formula on the RHS labels the corresponding outgoing arc. Points three and four ensure that the labels from all incoming arcs are used in the LHS of the normative transition rule that these arcs enter, and that the labels from all outgoing arcs are used in the RHS of the normative transition rule that these arcs leave.

3.3 Intended Semantics

The formal semantics will be defined via a mapping to Coloured Petri Nets in Section 5.1. Here we start defining the intended semantics of normative transition rules by describing how a rule changes a normative scene of an existing normative structure, yielding a new normative structure. Each rule is triggered once for each substitution that unifies the left-hand side of the rule with the states of the corresponding normative scenes. An atomic formula (i.e. an utterance or a normative position) holds iff it is unifiable with an utterance or normative position that belongs to the state of the corresponding normative scene. Every time a rule is triggered, the normative command specified on the right-hand side of that rule is carried out, intending to add or remove a normative position from the state of the corresponding normative scene. However, addition is not unconditional, as conflicts may arise. This topic is treated in Sections 4 and 6.1.

3.4 Example

In our running example we have the following exemplary normative transition rule:

payment: obl(inform(X, client, Y, acc, pay(Z, P, Q), T)),
payment: utt(inform(X, client, Y, acc, pay(Z, P, Q), T'))
⇒ delivery: add(obl(inform(Y, wm, X, client, delivered(Z, Q), T'')))

That is, during the payment activity, an obligation on client X to inform accountant Y about the payment P for item Z at time T, together with the corresponding utterance which fulfils this obligation, causes a norm to flow to the delivery activity. The norm is an obligation on agent Y (this time taking up the role of warehouse manager wm) to send a message to client X that item Z has been delivered. We show in Figure 2 a diagrammatic representation of how activities and a normative structure relate.

Figure 2: Activities and Normative Structure

As illocutions are uttered during activities, normative positions arise. Utterances and normative positions are combined in transition rules, causing the flow of normative positions between normative scenes. The connection between the two levels is described in Section 6.2.

4. CONFLICT DEFINITION

The terms deontic conflict and deontic inconsistency have been used interchangeably in the literature. However, in this paper we adopt the view of [7], in which the authors suggest that a deontic inconsistency arises when an action is simultaneously permitted and prohibited: since a permission may not be acted upon, no real conflict occurs. The situations in which an action is simultaneously obliged and prohibited are, however, deontic conflicts, as both obligations and prohibitions influence behaviour in a conflicting fashion. The contents of normative positions in this paper are illocutions. Therefore, a normative conflict arises when an illocution is simultaneously obliged and prohibited. We propose to use the standard notion of unification [9] to detect
when a prohibition and an obligation overlap. For instance, the obligation obl(inform(A1, R1, A2, R2, p(c, X), T)) and the prohibition prh(inform(a1, r1, a2, r2, p(Y, d), T')) are in conflict, as they unify under σ = {A1/a1, R1/r1, A2/a2, R2/r2, Y/c, X/d, T/T'}. We formally capture this notion: a prohibition and an obligation are in conflict if, and only if, their illocutions unify under some substitution σ. The substitution σ, called here the conflict set, unifies the agents, roles and atomic formulae. We assume that unify is a suitable implementation of a unification algorithm which i) always terminates (possibly failing, if a unifier cannot be found); ii) is correct; and iii) has linear computational complexity. Inconsistencies caused by the same illocution being simultaneously permitted and prohibited can be formalised similarly. In this paper we focus on prohibition/obligation conflicts, but the computational machinery introduced in Section 6.1 can equally be used to detect prohibition/permission inconsistencies if we replace modality "obl" with "per".

5. FORMALISING CONFLICT-FREEDOM

In this section we introduce some background knowledge on CPNs, assuming a basic understanding of ordinary Petri Nets; for technical details we refer the reader to [16]. We then map NSs to CPNs and analyse their properties. CPNs combine the strengths of Petri nets with the strengths of functional programming languages. On the one hand, Petri nets provide the primitives for describing the synchronisation of concurrent processes. As noticed in [16], CPNs have a semantics which builds upon true concurrency, instead of interleaving. In our opinion, a true-concurrency semantics is easier to work with because it matches the way we envisage the connection between the coordination level and the normative level of a multi-agent system. On the other hand, the functional programming languages used by CPNs provide the primitives for the
definition of data types and the manipulation of their data values. Thus, we can readily translate expressions of a normative structure. Last but not least, CPNs have a well-defined semantics which unambiguously defines the behaviour of each CPN. Furthermore, CPNs have a large number of formal analysis methods and tools by which properties of CPNs can be proved. Summing up, CPNs provide us with all the necessary features to formally reason about normative structures, given that an adequate mapping is provided. In accordance with Petri nets, the states of a CPN are represented by means of places. But unlike in Petri Nets, each place has an associated data type determining the kind of data which the place may contain. A state of a CPN is called a marking. It consists of a number of tokens positioned on the individual places. Each token carries a data value which has the type of the corresponding place. In general, a place may contain two or more tokens with the same data value. Thus, a marking of a CPN is a function which maps each place into a multi-set of tokens of the correct type. One often refers to the token values as token colours and to the data types as colour sets. The types of a CPN can be arbitrarily complex. Actions in a CPN are represented by means of transitions. An incoming arc into a transition from a place indicates that the transition may remove tokens from the corresponding place, while an outgoing arc indicates that the transition may add tokens. The exact number of tokens and their data values are determined by the arc expressions, which are encoded using the programming language chosen for the CPN. A transition is enabled in a CPN if and only if all the variables in the expressions of its incoming arcs are bound to some value(s); each of these bindings is referred to as a binding element. If so, the transition may occur by removing tokens from its input places and adding tokens to its output places. In addition to the arc expressions, it is possible to attach a boolean guard expression (with variables) to each transition. Putting all the elements above together, we obtain a formal definition of a CPN that shall be employed further ahead for mapping purposes.

DEF. 10. A CPN is a tuple (Σ, P, T, A, N, C, G, E, I) where: (i) Σ is a finite set of non-empty types, also called colour sets; (ii) P is a finite set of places; (iii) T is a finite set of transitions; (iv) A is a finite set of arcs; (v) N is a node function from A into P × T ∪ T × P; (vi) C is a colour function from P into Σ; (vii) G is a guard function from T into expressions; (viii) E is an arc expression function from A into expressions; (ix) I is an initialisation function from P into closed expressions.

Notice that the informal explanation of the enabling and occurrence rules given above provides the foundations to understand the behaviour of a CPN. In accordance with ordinary Petri nets, the concurrent behaviour of a CPN is based on the notion of step. Formally, a step is a non-empty and finite multi-set over the set of all binding elements. Let step S be enabled in a marking M.
Then, S may occur, changing the marking M to M'.\nMoreover, we say that marking M' is directly reachable from marking M by the occurrence of step S, and we denote it by M [S> M'.\nA finite occurrence sequence is a finite sequence of steps and markings: M1 [S1> M2 ... Mn [Sn> Mn+1 such that n \u2208 \u2115 and Mi [Si> Mi+1 \u2200i \u2208 {1,..., n}.\nThe set of all possible markings reachable for a net Net from a marking M is called its reachability set, and is denoted as R (Net, M).\n5.1 Mapping to Coloured Petri Nets\nOur normative structure is a labelled bi-partite graph.\nThe same is true for a Coloured Petri Net.\nWe present a mapping f from one to the other, in order to provide semantics for the normative structure and prove properties about it by using well-known theoretical results from work on CPNs.\nThe mapping f makes use of correspondences between normative scenes and CPN places, between normative transitions and CPN transitions and, finally, between arc labels and CPN arc expressions.\nThe set of types is the singleton set containing the colour NP (i.e. 
\u03a3 = {NP}).\nThis complex type is structured as follows (we use CPN-ML [4] syntax):\ncolor NPT = with Obl | Per | Prh | NoMod\ncolor IP = with inform | declare | offer\ncolor UTT = record\nModelling illocutions as norms without modality (NoMod) is a formal trick we use to ensure that sub-nets can be combined as explained below.\nArcs are mapped almost directly: A is a finite set of arcs and N is a node function, such that \u2200a \u2208 A \u2203a' \u2208 Ain \u222a Aout such that N (a) = a'.\nThe initialisation function I is defined as I (p) = \u0394s (\u2200s \u2208 S, where p is obtained from s using the mapping; remember that s = (ids, \u0394s)).\nFinally, the colour function C assigns the colour NP to every place: C (p) = NP (\u2200p \u2208 P).\nWe are not making use of the guard function G.\nIn future work, this function can be used to model constraints when we extend the expressiveness of our norm language.\n5.2 Properties of Normative Structures\nHaving defined the mapping from normative structures to Coloured Petri Nets, we now look at properties of CPNs that help us understand the complexity of conflict detection.\nOne question we would like to answer is whether, at a given point in time, a given normative structure is conflict-free.\nSuch a snapshot of a normative structure corresponds to a marking in the mapped CPN.\nDEF.\n11.\nGiven a marking Mi, this marking is conflict-free if \u00ac\u2203p \u2208 P, np1, np2 \u2208 Mi (p) such that np1.mode = Obl and np2.mode = Prh and np1.illoc and np2.illoc unify under a valid substitution.\nAnother interesting question would be whether a conflict will occur from such a snapshot of the system by propagating the normative positions.\nIn order to answer this question, we first translate the snapshot of the normative structure to the corresponding CPN and then execute the finite occurrence sequence of markings and steps, verifying the conflict-freedom of each marking as we go along.\nHowever, the main question we would like to investigate is whether or not a given normative structure is conflict-resistant, that is, whether or not the agents enacting the MAS are able to bring about conflicts through their actions.\nAs soon as one includes the possibility of actions (or utterances) from autonomous agents, one loses determinism.\nHaving mapped the normative structure to a CPN, we now add CPN models of the agents' interactions.\nEach form of agent interaction (i.e. each activity) can be modelled using CPNs along the lines of Cost et al. [5].\nThese nondeterministic CPNs "feed" tokens into the CPN that models the normative structure.\nThis leads to the introduction of non-determinism into the combined CPN.\nThe lower half of figure 3 shows part of a CPN model of an agent protocol where the arc denoted with `1' represents some utterance of an illocution by an agent.\nThe target transition of this arc not only moves a token on to the next state of this CPN, but also places a token in the place corresponding to the appropriate normative scene in the CPN model of the normative structure (via arc `2').\nTransition `3' could finally propagate that token in the form of an obligation, for example.\nThus, from a given marking, many different occurrence sequences are possible depending on the agents' actions.\nWe make use of the reachability set R to define a situation in which agents cannot cause conflicts.\nFigure 3: Constructing the combined CPN\nChecking conflict-freedom of a marking can be done in polynomial time by checking all places of the CPN for conflicting tokens.\nConflict-freedom of an occurrence sequence in the CPN that represents the normative structure can also be checked in polynomial time since this sequence is deterministic given a snapshot.\nWhether or not a normative structure is designed safely corresponds to checking the conflict-resistance of the initial marking M0.\nNow, verifying conflict-resistance of a marking becomes a very difficult task.\nIt corresponds to the reachability problem in a CPN: "can a state be reached, or a marking achieved, that contains a conflict?"\nThis reachability problem is known to be NP-complete for ordinary Petri Nets [22], and since CPNs are functionally identical, we cannot hope to verify conflict-resistance of a normative structure off-line in a reasonable amount of time.\nTherefore, distributed, run-time mechanisms are needed to ensure that a normative structure maintains consistency.\nWe present one such mechanism in the following section.\n6.\nMANAGING NORMATIVE STRUCTURES\nOnce a conflict (as defined in Section 4) has been detected, we propose to employ the unifier to resolve the conflict.\nIn our example, if the variables in prh (inform (a1, r1, a2, r2, p (Y, d), T')) do not get the values specified in substitution \u03c3 then there will not be a conflict.\nHowever, rather than computing the complement set of a substitution (which can be an infinite set), we propose to annotate the prohibition with the unifier itself and use it to determine what the variables of that prohibition cannot be in future unifications in order to avoid a conflict.\nWe therefore denote annotated prohibitions as prh (\u00afI) O \u03a3, where \u03a3 = {\u03c31,..., \u03c3n} is a set of unifiers.\nAnnotated norms are interpreted as deontic constructs with 
curtailed influences, that is, their effect (on agents, roles and illocutions) has been limited by the set \u03a3 of unifiers.\nA prohibition may be in conflict with various obligations in a given normative scene s = (id, \u0394), and we need to record (and possibly avoid) all these conflicts.\nWe define below an algorithm which ensures that a normative position will be added to a normative scene in such a way that it will not cause any conflicts.\n6.1 Conflict Resolution\nWe propose a fine-grained way of resolving normative conflicts via unification.\nWe detect the overlapping of the influences of norms, i.e. how they affect the behaviour of the concerned agents, and we curtail the influence of the normative position by appropriately using the annotations when checking if the norm applies to illocutions.\nThe algorithm shown in Figure 4 depicts how we maintain a conflict-free set of norms.\nIt adds a given norm N to an existing, conflict-free normative state \u0394, obtaining a resulting new normative state \u0394' which is conflict-free, that is, its prohibitions are annotated with a set of conflict sets indicating which bindings for variables have to be avoided for conflicts not to take place.\nFigure 4: Algorithm to Preserve Conflict-Freedom\nThe algorithm uses a case structure to distinguish the different possibilities for a given norm N. 
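The unification-based conflict check of Section 4 and the case analysis of the algorithm in Figure 4 can be sketched in Python. This is an illustrative sketch under our own assumptions, not the paper's implementation: illocutions are nested tuples, variables are capitalised strings (as in the paper's examples), the simplified unify omits the occurs check, and a prohibition in the state carries its annotation \u03a3 as a list of unifiers.

```python
# Sketch of conflict detection via unification and conflict-free norm addition.
# Assumptions (ours): norms are ('obl'|'per', illoc) or ('prh', illoc, sigma);
# variables are capitalised strings; unify omits the occurs check.

def is_var(t):
    """A term is a variable if it is a capitalised string (e.g. 'A1', 'T')."""
    return isinstance(t, str) and t[:1].isupper()

def is_ground(t):
    """True if the term contains no variables."""
    if is_var(t):
        return False
    if isinstance(t, tuple):
        return all(is_ground(a) for a in t)
    return True

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(x, y, s=None):
    """Return a substitution (conflict set) unifying x and y, or None."""
    s = {} if s is None else s
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

def add_norm(delta, n):
    """Add norm n to the conflict-free state delta, mirroring the cases of the
    algorithm in Figure 4: permissions, ground / non-ground prohibitions,
    and obligations."""
    mode, illoc = n[0], n[1]
    if mode == 'per':                       # permissions are simply added
        return delta + [n]
    if mode == 'prh':
        if is_ground(illoc):                # ground prohibition: discard on conflict
            if any(m == 'obl' and unify(i, illoc) is not None
                   for (m, i, *_) in delta):
                return delta
            return delta + [('prh', illoc, [])]
        sigma = [s for (m, i, *_) in delta if m == 'obl'
                 for s in [unify(i, illoc)] if s is not None]
        return delta + [('prh', illoc, sigma)]   # annotated non-ground prohibition
    new = []                                # mode == 'obl'
    for old in delta:
        m, i = old[0], old[1]
        s = unify(illoc, i) if m == 'prh' else None
        if s is not None:
            if is_ground(i):
                continue                    # conflicting ground prohibition removed
            new.append(('prh', i, old[2] + [s]))  # annotation updated
        else:
            new.append(old)
    return new + [('obl', illoc)]
```

For the paper's example, unifying the obligation's illocution ('inform','A1','R1','A2','R2',('p','c','X'),'T') with the prohibition's ('inform','a1','r1','a2','r2',('p','Y','d'),'T2') yields a conflict set binding A1\/a1, R1\/r1, A2\/a2, R2\/r2, Y\/c, X\/d, and adding the prohibition after the obligation leaves it annotated with that single unifier.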
Line 3 addresses the case when the given norm is a permission: N is simply added to \u0394.\nLines 4-5 address the case when we attempt to add a ground prohibition to a normative state: if it conflicts with any obligation, then it is discarded; otherwise it is added to the normative state.\nLines 6-12 describe the situation when the normative position to be added is a non-ground prohibition.\nIn this case, the algorithm initialises \u03a3 to an empty set and loops (lines 9-10) through the norms N' in the old normative state \u0394.\nUpon finding one that conflicts with N, the algorithm updates \u03a3 by adding the newly found conflict set \u03c3 to it (line 10).\nBy looping through \u0394, we are able to check any conflicts between the new prohibition and the existing obligations, adequately building the annotation \u03a3 to be used when adding N to \u0394 in line 11.\nLines 13-27 describe how a new obligation is accommodated into an existing normative state.\nWe make use of two initially empty, temporary sets, \u0394'1 and \u0394'2.\nThe algorithm loops through \u0394 (lines 16-25), picking up those annotated prohibitions N' O \u03a3 which conflict with the new obligation.\nThere are, however, two cases to deal with: the one when a ground prohibition is found (line 17), and its exception, covering non-ground prohibitions (line 20).\nIn both cases, the old prohibition is stored in \u0394'1 (lines 18 and 23) to be later removed from \u0394 (line 26).\nHowever, in the case of a non-ground prohibition, the algorithm updates its annotation of conflict sets (line 24).\nThe loop guarantees that an exhaustive (linear) search through a normative state takes place, checking if the new obligation is in conflict with any existing prohibitions, possibly updating the annotations of these conflicting prohibitions.\nIn line 26 the algorithm builds the new updated \u0394' by removing the old prohibitions stored in \u0394'1 and adding the updated prohibitions stored in \u0394'2 (if any), as well as the new obligation N.\nOur proposed algorithm is correct in that, for a given normative position N and a normative state \u0394, it provides a new normative state \u0394' in which all prohibitions have annotations recording how they unify with existing obligations.\nThe annotations can be empty, though: this is the case when we have a ground prohibition or a prohibition which does not unify\/conflict with any obligation.\nPermissions do not affect our algorithm and they are appropriately dealt with (line 3).\nAny attempt to insert a ground prohibition which conflicts yields the same normative state (line 4).\nWhen a new obligation is being added, the algorithm guarantees that all prohibitions are considered (lines 14-27), leading to the removal of conflicting ground prohibitions or the update of annotations of non-ground prohibitions.\nThe algorithm always terminates: the loops are over a finite set \u0394, and the conflict checks and set operations always terminate.\nThe complexity of the algorithm is linear: the set \u0394 is only examined once for each possible case of norm to be added.\nWhen managing normative states we may also need to 
remove normative positions.\nThis is straightforward: permissions can be removed without any problems; annotated prohibitions can also be removed without further considerations; obligations, however, require some housekeeping.\nWhen an obligation is to be removed, we must check it against all annotated prohibitions in order to update their annotations.\nWe apply the conflict check and obtain a unifier, then remove this unifier from the prohibition's annotation.\nWe invoke the removal algorithm as removeNorm (N, \u0394): it returns a new normative state \u0394' in which N has been removed, with possible alterations to other normative positions as explained.\n6.2 Enactment of a Normative Structure\nThe enactment of a normative structure amounts to the parallel, distributed execution of normative scenes and normative transitions.\nFor illustrative purposes, hereafter we shall describe the interplay between the payment and delivery normative scenes and the normative transition nt linking them in the upper half of figure 2.\nWith this aim, consider for instance that obl (inform (jules, client, rod, acc, pay (copper, 400, 350), T)) \u2208 \u0394payment and that \u0394delivery holds prh (inform (rod, wm, jules, client, delivered (Z, Q), T)).\nSuch states indicate that client Jules is obliged to pay 400 for 350kg of copper to accountant Rod according to the payment normative scene, whereas Rod, taking up the role of warehouse manager this time, is prohibited from delivering anything to client Jules according to the delivery normative scene.\nFor each normative scene, the enactment process goes as follows.\nFirstly, it processes its incoming message queue, which contains three types of messages: utterances from the activity it is linked to, and normative commands either to add or to remove normative positions.\nFor instance, in our example, the payment normative scene collects the illocution I = utt (inform (jules, client, rod, acc, pay (copper, 400, 350), 35)) standing for client Jules' pending payment for copper (via arrow A in figure 2).\nUtterances are timestamped and subsequently added to the normative state.\nIn our example, we would have \u0394payment = \u0394payment \u222a {I}.\nUpon receiving normative commands to either add or remove a normative position, the normative scene invokes the corresponding addition or removal algorithm described in Section 6.1.\nSecondly, the normative scene acknowledges its state change by sending a trigger message to every outgoing normative transition it is connected to.\nIn our example, the payment normative scene would be signalling its state change to normative transition nt.\nFor normative transitions, the process works differently.\nBecause each normative transition controls the operation of a single rule, upon receiving a trigger message, it polls every incoming normative scene for substitutions for the relevant illocution schemata on the LHS of its rule.\nIn our example, nt (being responsible for the rule described in Section 3.4) would poll the payment normative scene (via arrow B) for substitutions.\nUpon receiving replies from them (in the form of sets of substitutions together with time-stamps), it has to unify substitutions from each of these normative scenes.\nFor each unification it finds, the rule is fired, and hence the corresponding normative command is sent along to the output normative scene.\nThe normative transition then keeps track of the firing message it sent on and of the time-stamps of the normative positions that triggered the firing.\nThis is done to ensure that the very same normative positions in the LHS of a rule only trigger its firing once.\nIn our example, nt would be receiving \u03c3 = {X\/jules, Y\/rod, Z\/copper, Q\/350} from the payment normative scene.\nSince the substitutions in \u03c3 unify with nt's rule, the rule is fired, and the normative command add (delivery: obl (rod, wm, jules, client, delivered (copper, 350), T)) is sent along to the delivery normative 
scene to oblige Rod to deliver to client Jules 350kg of copper.\nAfter that, the delivery normative scene would invoke the addNorm algorithm from figure 4 with \u0394delivery and N = obl (rod, wm, jules, client, delivered (copper, 350)) as arguments.\n7.\nRELATED WORK AND CONCLUSIONS\nOur contributions in this paper are three-fold.\nFirstly, we introduce an approach for the management of and reasoning about norms in a distributed manner.\nTo our knowledge, there is little work published in this direction.\nIn [8, 21], two languages are presented for the distributed enforcement of norms in MAS.\nHowever, in both works, each agent has a local message interface that forwards legal messages according to a set of norms.\nSince these interfaces are local to each agent, norms can only be expressed in terms of actions of that agent.\nThis is a serious disadvantage, e.g. when one needs to activate an obligation to one agent due to a certain message of another one.\nThe second contribution is the proposal of a normative structure.\nThe notion is fruitful because it allows the separation of normative and procedural concerns.\nThe normative structure we propose makes evident the similarity between the propagation of normative positions and the propagation of tokens in Coloured Petri Nets.\nThat similarity readily suggests a mapping between the two, and gives grounds to a convenient analytical treatment of the normative structure, in general, and the complexity of conflict detection, in particular.\nThe idea of modelling interactions (in the form of conversations) via Petri Nets has been investigated in [18], where the interaction medium and individual agents are modelled as CPN sub-nets that are subsequently combined for analysis.\nIn [5], conversations are first designed and analysed at the level of CPNs and thereafter translated into protocols.\nLin et al. 
[20] map conversation schemata to CPNs.\nTo our knowledge, the use of this representation in the support of conflict detection in regulated MAS has not been reported elsewhere.\nFinally, we present a distributed mechanism to resolve normative conflicts.\nSartor [25] treats normative conflicts from the point of view of legal theory and suggests a way to order the norms involved.\nHis idea is implemented in [12] but requires a central resource for norm maintenance.\nThe approach to conflict detection and resolution is an adaptation and extension of the work on instantiation graphs reported in [17] and a related algorithm in [27].\nThe algorithm presented in the current paper can be used to manage normative states in a distributed fashion: normative scenes that happen in parallel have an associated normative state \u0394 to which the algorithm is independently applied each time a new norm is to be introduced.\nThe three contributions we present in this paper open many possibilities for future work.\nWe should first mention that, as a broad strategy, we are working on a generalisation of the notion of normative structure to make it operate with different coordination models, with richer deontic content and on top of different computational realisations of regulated MAS.\nAs a first step in this direction we are taking advantage of the de-coupling between interaction protocols and declarative normative guidance that the normative structure makes available, to provide a normative layer for electronic institutions (as defined in [1]).\nWe expect such a coupling to endow electronic institutions with a more flexible--and more expressive--normative environment.\nFurthermore, we want to extend our model along several directions: (1) to handle negation and constraints as part of the norm language, and in particular the notion of time; (2) to accommodate multiple, hierarchical norm authorities based on roles, along the lines of Cholvy and Cuppens [3] and power relationships as suggested 
by Carabelea et al. [2]; (3) to capture in the conflict resolution algorithm different semantics relating the deontic notions by supporting different axiomatisations (e.g., relative strength of prohibition versus obligation, default deontic notions, deontic inconsistencies).\nOn the theoretical side, we intend to use analysis techniques of CPNs in order to characterise classes of CPNs (e.g., acyclic, symmetric, etc.) corresponding to families of Normative Structures that are amenable to tractable offline conflict detection.\nThe combination of these techniques along with our online conflict resolution mechanisms is intended to endow MAS designers with the ability to incorporate norms into their systems in a principled way.","keyphrases":["coordin","activ","norm structur","norm posit","prohibit","conflict","algorithm","scenario","protocol","norm scene","norm transit rule","bi-partit graph","permiss overlap","token","regul multi-agent system","norm conflict","electron institut","organis"],"prmu":["P","P","P","P","P","P","P","U","U","M","M","U","M","U","M","R","U","U"]} {"id":"C-40","title":"Edge Indexing in a Grid for Highly Dynamic Virtual Environments","abstract":"Newly emerging game-based application systems such as Second Life1 provide 3D virtual environments where multiple users interact with each other in real-time. They are filled with autonomous, mutable virtual content which is continuously augmented by the users. To make the systems highly scalable and dynamically extensible, they are usually built on a client-server based grid subspace division where the virtual worlds are partitioned into manageable sub-worlds. In each sub-world, the user continuously receives relevant geometry updates of moving objects from remotely connected servers and renders them according to her viewpoint, rather than retrieving them from a local storage medium. 
In such systems, the determination of the set of objects that are visible from a user's viewpoint is one of the primary factors that affect server throughput and scalability. Specifically, performing real-time visibility tests in extremely dynamic virtual environments is a very challenging task as millions of objects and sub-millions of active users are moving and interacting. We recognize that the described challenges are closely related to a spatial database problem, and hence we map the moving geometry objects in the virtual space to a set of multi-dimensional objects in a spatial database while modeling each avatar both as a spatial object and a moving query. Unfortunately, existing spatial indexing methods are unsuitable for this kind of new environment. The main goal of this paper is to present an efficient spatial index structure that minimizes unexpected object popping and supports highly scalable real-time visibility determination. We then uncover many useful properties of this structure and compare the index structure with various spatial indexing methods in terms of query quality, system throughput, and resource utilization. 
We expect our approach to lay the groundwork for next-generation virtual frameworks that may merge into existing web-based services in the near future.","lvl-1":"Edge Indexing in a Grid for Highly Dynamic Virtual Environments\u2217 Beomjoo Seo bseo@usc.edu Roger Zimmermann rzimmerm@imsc.usc.edu Computer Science Department University of Southern California Los Angeles, CA 90089 ABSTRACT Newly emerging game-based application systems such as Second Life1 provide 3D virtual environments where multiple users interact with each other in real-time.\nThey are filled with autonomous, mutable virtual content which is continuously augmented by the users.\nTo make the systems highly scalable and dynamically extensible, they are usually built on a client-server based grid subspace division where the virtual worlds are partitioned into manageable sub-worlds.\nIn each sub-world, the user continuously receives relevant geometry updates of moving objects from remotely connected servers and renders them according to her viewpoint, rather than retrieving them from a local storage medium.\nIn such systems, the determination of the set of objects that are visible from a user's viewpoint is one of the primary factors that affect server throughput and scalability.\nSpecifically, performing real-time visibility tests in extremely dynamic virtual environments is a very challenging task as millions of objects and sub-millions of active users are moving and interacting.\nWe recognize that the described challenges are closely related to a spatial database problem, and hence we map the moving geometry objects in the virtual space to a set of multi-dimensional objects in a spatial database while modeling each avatar both as a spatial object and a moving query.\nUnfortunately, existing spatial indexing methods are unsuitable for this kind of new environment.\nThe main goal of this paper is to present an efficient spatial index structure that minimizes unexpected object popping and supports 
highly scalable real-time visibility determination.\nWe then uncover many useful properties of this structure and compare the index structure with various spatial indexing methods in terms of query quality, system throughput, and resource utilization.\nWe expect our approach to lay the groundwork for next-generation virtual frameworks that may merge into existing web-based services in the near future.\nCategories and Subject Descriptors: C.2.4 [Computer - Communication Networks]: Distributed Systems - Client\/server, Distributed applications, Distributed databases; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality General Terms: Algorithms, Design, Performance 1.\nINTRODUCTION Recently, Massively Multiplayer Online Games (MMOGs) have been studied as a framework for next-generation virtual environments.\nMany MMOG applications, however, still limit themselves to a traditional design approach where their 3D scene complexity is carefully controlled in advance to meet real-time rendering constraints at the client console side.\nTo enable a virtual landscape in next-generation environments that is seamless, endless, and limitless, Marshall et al. 
[1] identified four new requirements2: dynamic extensibility (a system allows the addition or the change of components at run time); scalability (although the number of concurrent users increases, the system continues to function effectively); interactibility; and interoperability.\n2 Originally, these requirements were specified for their dedicated platform.\nBut we acknowledge that these requirements are also valid for new virtual environments.\nIn this paper, we mainly focus on the first two requirements.\nDynamic extensibility allows regular game-users to deploy their own created content.\nThis is a powerful concept, but unfortunately, user-created content tends to create imbalances among the existing scene complexity, causing system-wide performance problems.\nFull support for dynamic extensibility will, thus, continue to be one of the biggest challenges for game developers.\nAnother important requirement is scalability.\nAlthough MMOG developers proclaim that their systems can support hundreds of thousands of concurrent users, it usually does not mean that all the users can interact with each other in the same world.\nBy carefully partitioning the world into multiple sub-worlds or replicating worlds at geographically dispersed locations, massive numbers of concurrent users can be supported.\nTypically, the maximum number of users in the same world managed by a single server or a server-cluster is limited to several thousands, assuming a rather stationary world [2, 3].\nFigure 1: Object popping occurred as a user moves forward (screenshots from Second Life), where \u0394 = 2 seconds.\nSecond Life [4] is the first successfully deployed MMOG system that meets both requirements.\nTo mitigate the dynamics of the game world, where a large number of autonomous objects are continuously moving, it partitions the space in a grid-like manner and employs a client\/server based 3D object 
streaming model [5].\nIn this model, a server continuously transmits both update events and geometry data to every connected user.\nAs a result, this extensible gaming environment has accelerated the deployment of user-created content and provides users with unlimited freedom to pursue a navigational experience in its space.\nOne of the main operations in MMOG applications that stream 3D objects is to accurately calculate all objects that are visible to a user.\nThe traditional visibility determination approach, however, has an object popping problem.\nFor example, a house outside a user's visible range is not drawn at time t, as illustrated in Figure 1(a).\nAs the user moves forward, the house will suddenly appear at time (t + \u0394) as shown in Figure 1(b).\nIf \u0394 is small, or the house is large enough to collide with the user, it will disrupt the user's navigational experience.\nThe visibility calculation for each user not only needs to be accurate, but also fast.\nThis challenge is illustrated by the fact that the maximum number of concurrent users per server of Second Life is still an order of magnitude smaller than for stationary worlds.\nTo address these challenges, we propose a method that identifies the most relevant visible objects from a given geometry database (view model) and then put forth a fast indexing method that computes the visible objects for each user (spatial indexing).\nOur two novel methods represent the main contributions of this work.\nThe organization of this paper is as follows.\nSection 2 presents related work.\nSection 3 describes our new view method.\nIn Section 4, we present assumptions on our target application and introduce a new spatial indexing method designed to support real-time visibility computations.\nWe also discuss its optimization issues.\nSection 5 reports on the quantitative analysis and Section 6 presents preliminary results of our simulation-based experiments.\nFinally, we conclude and address future research 
directions in Section 7.\n2.\nRELATED WORK Visibility determination has been widely explored in the field of 3D graphics.\nVarious local rendering algorithms have been proposed to eliminate unnecessary objects before rendering or at any stage in the rendering pipeline.\nView-frustum culling, back-face culling, and occlusion culling are some of the well-known visibility culling techniques [6].\nHowever, these algorithms assume that all the candidate visible objects have been stored locally.\nIf the target objects are stored on remote servers, the clients receive the geometry items that are necessary for rendering from the server databases.\nTeller et al. described a geometry data scheduling algorithm that maximizes the quality of the frame rate over time in remote walkthroughs of complex 3D scenes from a user``s navigational path [5].\nFunkhouser et al. showed that multi-resolutional representation, such as Levels Of Detail (LOD), can be used to improve rendering frame rates and memory utilization during interactive visualization [7].\nHowever, these online optimization algorithms fail to address performance issue at the server in highly crowded environments.\nOn the other hand, our visibility computation model, a representative of this category, is based on different assumptions on the data representation of virtual entities.\nIn the graphics area, there has been little work on supporting real-time visibility computations for a massive number of moving objects and users.\nHere we recognize that such graphics related issues have a very close similarity to spatial database problems.\nRecently, a number of publications have addressed the scalability issue on how to support massive numbers of objects and queries in highly dynamic environments.\nTo support frequent updates, two partitioning policies have been studied in depth: (1) R-tree based spatial indexing, and (2) grid-based spatial indexing.\nThe R-tree is a well-known spatial index structure that allows 
overlapping between the regions in different branches, which are represented by Minimum Bounding Rectangles (MBRs). The grid-based partitioning model is a special case of fixed partitioning. Recently, it has been re-discovered since it can be efficient in highly dynamic environments. Many studies have reported that the R-tree and its variants (R+-tree, R*-tree) suffer from unacceptable performance degradation in highly dynamic environments, primarily due to the computational complexity of the split algorithm [8, 9, 10, 11, 12]. A bottom-up update strategy proposed for R-trees [9] optimizes update operations of the index while maintaining a top-down query processing mechanism. Instead of traversing the tree from the root node for frequent update requests (the top-down approach), it directly accesses the leaf node of the object to be updated via an object hash table. Q-Index [13, 11] is one of the earlier works that re-discovered the usefulness of grid-based space partitioning for emerging moving-object environments. In contrast to traditional spatial indexing methods that construct an index on the moving objects, it builds an index on the continuous range queries, assuming that the queries move infrequently while the objects move freely. The basic idea of the Q+R-tree [14] is to separate the indexing structures for quasi-stationary objects and moving objects: fast-moving objects are indexed in a Quadtree and quasi-stationary objects are stored in an R*-tree. SINA [10] was proposed to provide efficient query evaluation for any combination of stationary/moving objects and stationary/moving queries. Specifically, this approach efficiently detects only newly discovered (positive) or no longer relevant (negative) object updates. Unlike other spatial indexing methods that focus on reducing the query evaluation cost, Hu et al.
[12] proposed a general framework that minimizes the communication cost of location updates by maintaining a rectangular area called a safe region around moving objects. As long as an object resides in this region, all the query results in the system are guaranteed to be valid. If objects move out of their region, location update requests must be delivered to the database server and the affected queries are re-evaluated on the fly. Our indexing method is similar to the above approaches. The major difference is that we concentrate on real-time visibility determination while the others assume loose timing constraints.

3. OBJECT-INITIATED VIEW MODEL
In this section we illustrate how the object popping problem arises from the typical view decision model. We then propose our own model, and finally we discuss its strengths and limitations. To begin with, we define the terminology used throughout this paper. Entities in a virtual space can be categorized into three types based on their role: autonomous entities, spectator entities, and avatars. The term autonomous entity refers to an ordinary moving or stationary geometric object that can be visible to other entities. The spectator entity corresponds to a player's viewpoint, but is invisible to other entities. It has no shape and is represented only by a point location. It is designed to allow a game participant to see from a third-person viewpoint, functioning similarly to a camera control in the 3D graphics field. It also has a higher degree of mobility than other entities. The avatar represents a normal game user who can freely navigate in the space and interact with other entities. It possesses both features: its own viewpoint and visibility. For the remainder of the paper, we use the term object entity to refer to an autonomous entity or an avatar, while we use user entity to denote an avatar or a spectator entity. The visible range of an entity refers to the spatial
extent within which any other entity can recognize its existence. It is based on the assumptions that there always exists an optimal visible distance between a user and an object at any given time, and that every user possesses equal visibility. Thus, the user and the object can see each other only when their current distance is smaller than or equal to this optimum. To specify the visible range, much of the literature in the graphics area [5, 6] uses a circular Area Of Interest (AOI) whose center is the location of an entity. Its omnidirectional nature allows rapid directional changes without any display disruptions at the periphery of the viewable area. However, we employ a square-shaped AOI at the expense of some accuracy, because a square-shaped spatial extension is very simple and efficient to index in a grid-partitioned world. The traditional view model, which we call the user-initiated view model, assumes that a user entity has an AOI while an object entity does not. As the user navigates, she continuously searches for all the entities within her AOI. Due to its simple design and its low indexing overhead, many Location Based Services (LBSs) and game applications use this model. However, the user-initiated model has a serious object popping problem during navigation. Recall, as shown in Figure 1, that the house that appears at time t + Δ does not appear at time t because the user cannot recognize objects that are outside of her AOI at time t. In fact, the side length of her AOI was smaller than the optimal distance between the user and the house at time t.
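The popping condition under the user-initiated model can be illustrated with a small sketch. This is a toy example with hypothetical `Entity` and `in_square_aoi` names (not code from the paper): an object just outside the user's square AOI contributes nothing to the frame at time t, then appears abruptly once the user has moved forward.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    x: float
    y: float
    half_side: float  # half the side length of the square AOI

def in_square_aoi(center: Entity, px: float, py: float) -> bool:
    """Point-in-square test for a square-shaped AOI (hypothetical helper)."""
    return (abs(px - center.x) <= center.half_side and
            abs(py - center.y) <= center.half_side)

# User-initiated view: the user searches for objects inside *her* AOI,
# ignoring how large the object itself is.
def user_initiated_visible(user: Entity, obj: Entity) -> bool:
    return in_square_aoi(user, obj.x, obj.y)

user = Entity(x=0.0, y=0.0, half_side=5.0)
house = Entity(x=8.0, y=0.0, half_side=10.0)  # large object, slightly too far

# At time t the house lies outside the user's AOI, so it is not drawn...
assert not user_initiated_visible(user, house)

# ...but after a small forward move it suddenly "pops" into view.
user.x = 3.5
assert user_initiated_visible(user, house)
```

The sketch makes the cause visible: the decision depends only on the user's AOI, so an object whose optimal visible distance exceeds that AOI side length is clipped until the user crosses the threshold.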
Therefore, in this model there is no way to make such an experience unlikely other than to increase the visible range of the user. A large AOI, however, may lead to significant system degradation. To overcome the object popping problem, we propose a new view model which we call the object-initiated view model. All object entities have their own AOI centered at their current location, while spectator entities have no AOI. Every user entity recognizes the objects whose AOIs cover its point location. The main strengths of the new model are that (1) it has no object popping problem as long as the underlying system manages the optimal visible range of all object entities correctly, and (2) content creators can express various behavioral and temporal changes more richly. A huge object may have a farther visible range than a small one; an object has a broader visible range during daytime than at night; even at night, the visible range of an object that owns a light source will be much wider than that of a non-illuminated object; if an object is located inside a building, its visible range is constrained by the surrounding structure. One potential argument against the object-initiated view is that indexing the spatial extension of an object is too complex to be practical, compared with the user-initiated view.

Figure 2: Target system in a 4 × 4 grid partition.

We agree that existing spatial indexing methods are inefficient in supporting our view model. To refute this argument, we propose a novel spatial indexing solution detailed in Section 4.4. Our spatial indexing solution offers very promising performance even with a large number of mobile entities and real-time visibility calculations. For the rest of the paper our design scope is limited to a 2D space, although our application is targeted at 3D environments. Note that our view model is
not intended to rival a sophisticated visibility decision algorithm such as visibility culling [6], but to efficiently filter out unnecessary entities that do not contribute to the final image. In Section 6.1 we evaluate both models through quantitative measures, such as the degree of expressiveness and the quality of the two view models, and we discuss simulation results.

4. DESIGN OF EDGE INDEXING
In Section 4.1 we introduce our target application model. Next, Section 4.2 presents an abstraction of our node and edge structures, whose detailed indexing and cell evaluation methods are explained in Sections 4.3 and 4.4. Several optimization issues for edge indexing follow in Section 4.5.

4.1 Target Application
Our target application assumes both 3D object streaming and sub-world hosting. Sub-world hosting is a collaborative virtual environment where every server hosts one sub-world, together constructing a single world. Second Life is the classic example of such an approach. A virtual space is partitioned into equal-sized sub-worlds. The sample sub-world separated by bold dashed lines in Figure 2 contains four virtual entities: two autonomous entities (E1, E2), one spectator entity S, and one avatar A. As mentioned in Section 3, all object entities (E1, E2, A) have their own square-shaped AOI. The two user entities (S, A) are associated with individual client machines (client S and client A in the figure). The spatial condition that the point location of S resides inside the AOI of E2 can be symbolized as S.P ∈ E2.R. Every sub-world is managed by a dedicated server machine. Each server indexes all the entities, delivers new events (i.e., a new user enters the sub-world or an object moves from one place to another) to clients, and resolves inconsistencies among the entities. For efficient management of moving entities, a server further divides its sub-world into smaller partitions, called grid cells. Figure 2 shows the 4
× 4 grid enclosed by the dashed lines. Instead of indexing the object entities with a user entity structure, our system indexes their visible regions on the grid cells. Retrieval of the indexed objects for a given user includes the search and delivery of the indices stored in the cell it is located in. This retrieval process is interchangeably called a user (or query) evaluation. Our application only considers the efficient indexing of virtual entities and the search for the most relevant entities, that is, how many entities per sub-world are indexed and how quickly index updates are recognized and retrieved. Efficient delivery of the retrieved geometry data itself is out of the scope of this paper. (A better indexing method for a 3D space is work in progress.)

Figure 3: Illustration of the different data structures for (a) node indexing, (b) edge indexing, and (c) edge indexing with row-wise cell evaluation, for the sample space in Figure 2. There are three object entities, {E1, E2, A}, and two user entities, {S, A}, in the world.

4.2 Abstraction
We define a token as an abstraction of a virtual entity that satisfies a specific spatial relationship with a given cell. In our application, we use three types of tokens: an Inclusion Token (IT) indicates that its entity overlaps with or is covered by the given cell; an Appearance Token (AT) denotes that its entity is an IT for the given cell, but not for the previously adjacent cell; a Disappearance Token (DT) is the opposite of an AT, meaning that while its entity does not satisfy the IT relationship with the given cell, it does so with the previously adjacent cell. We also define two data structures for storing and retrieving the tokens: a node and an edge. A node is a data structure that stores the ITs of a cell. Thus, the node for cell i is defined as a set of IT entities and formally expressed as Ni =
{o | o.R ∩ i.R ≠ ∅}, where R is either an AOI or a cell region. An edge is another data structure, defined for two adjacent cells, that stores their ATs or DTs. If the edge stores only the AT entities, it is termed an Appearance Edge (AE); if it stores DTs, it is termed a Disappearance Edge (DE). The AE for two adjacent cells i and j is defined as a set of ATs and expressed as

E+(i, j) = Nj − (Ni ∩ Nj)   (1)

where Ni and Nj are the node structures for the cells i and j. The DE for two adjacent cells i, j is defined as a set of DTs, satisfying:

E−(i, j) = Ni − (Ni ∩ Nj)   (2)

In a 2D map, depending on the adjacency relationship between two neighboring cells, edges are further classified as either row-wise, if the two neighbors are adjacent horizontally (Er), or column-wise, if they are adjacent vertically (Ec). Consequently, edges are of four different types, according to their token type and adjacency: Er+(i, j), Er−(i, j), Ec+(i, j), and Ec−(i, j).

4.3 Node Indexing
Grid partitioning is a popular space subdivision method that has recently gained popularity for indexing moving entities in highly dynamic virtual environments [12, 8, 13, 10]. To highlight the difference to our newly proposed method, we term all existing grid partitioning-based indexing methods node indexing. Node indexing partitions the space into equi-sized subspaces (grid cells), indexes entities on each cell, and searches for entities that satisfy a spatial condition with a given query. In many LBS applications, node indexing maintains a node structure per cell and stores an index of entities whose spatial extent is a point location. For a given range query, a search is performed over the node structures of the cells whose region intersects the spatial extent of the range query. Due to the use of a simple point geometry for entities, this allows for lightweight index updates. Much of the existing work falls into this category. However, if
the spatial extent of an entity is a complex geometry such as a rectangle, node indexing will suffer from significant system degradation due to expensive update overhead. For example, a single movement of an entity whose spatial extent overlaps with 100 grid cells requires, in the worst case, 100 token deletions and 100 token insertions. One of the popular node indexing methods, Query Indexing, has been reported to exhibit such performance degradation during the update of rectangle-shaped range queries [13]. For the sample space shown in Figure 2, the concept of node indexing is illustrated in Figure 3(a). Every cell stores the IT entities that intersect with its region. Query processing for the spectator S means searching the node structure whose cell region intersects with S. In Figure 3(a), E2 is indexed on the same cell and is thus delivered to the client S after the query evaluation.

4.4 Edge Indexing
Our new indexing method, edge indexing, is designed to provide an efficient indexing method for the specific spatial extension (a square) of the entities in a grid. Its features are (1) an edge structure and (2) periodic entity update and cell evaluation.

4.4.1 Idea
Edge Structure. The main characteristic of our approach is that it maintains edge structures instead of node structures. With this approach, redundant ITs between two adjacent cells (Ni ∩ Nj) are eliminated. In a 2D M × M grid map, each cell i is surrounded by four neighboring cells (i − 1), (i + 1), (i − M), (i + M) (except for the outermost cells) and eight different edge structures. If the first two neighbor cells are horizontally adjacent to i and the last two cells (i − M), (i + M) are vertically adjacent, the eight edge structures are Ec+(i − M, i), Ec−(i − M, i), Er+(i − 1, i), Er−(i − 1, i), Er+(i, i + 1), Er−(i, i + 1), Ec+(i, i + M), and Ec−(i, i + M). Figure 3(b) illustrates how edge structures are constructed from node
structures, using Equations 1 and 2. Inversely, the cell evaluation process with edge indexing derives node structures from the edge structures. If any one node structure and all the edge structures are known a priori, we can derive all the node structures, as stated by Lemma 1. The proof of Lemma 1 is trivial, as it is easily induced from Equations 1 and 2.

Lemma 1. Nj, the set of ITs of a given cell j, can be derived from the set of ITs of its neighbor cell i, Ni, and the edges E+(i, j) and E−(i, j):

Nj = Ni + E+(i, j) − E−(i, j)

Row-wise and column-wise edge structures, however, capture some redundant information. Thus, naïve edge indexing stores more tokens than node indexing: the total number of edge tokens shown in Figure 3(b) is 35 (17 ATs + 17 DTs + 1 IT), whereas for node indexing in Figure 3(a) the number is 25. To reduce this redundancy, a subsequent two-step algorithm can be applied to the original edge indexing.

Periodic Entity Update and Cell Evaluation. Many objects are continuously moving, and hence index structures must be regularly updated. Generally, this is done through a two-step algorithm [13] that works as follows. The algorithm begins by updating all the corresponding indices of newly moved entities (the entity update step) and then computes the node structures of every cell (the cell evaluation step). After each cell evaluation, the indexed user entities are retrieved and the computed node structure is delivered to every client that is associated with a user. After all the cells are evaluated, the algorithm starts over. The two-step algorithm can also be used for our edge indexing, by updating the edge structures of the entities that moved during the previous time period and by applying Lemma 1 during the cell evaluations. In addition to this adaptability, the Lemma also reveals another important property of cell evaluations: either the row edges or the column edges alone are enough to obtain all the node structures. Let us assume
that the system maintains the row-wise edges. The leftmost node structures are assumed to be obtained in advance. Once we know the node structure of the leftmost cell of a row, we can compute that of its right-hand cell from the leftmost node structure and the row-wise edges. We repeat this computation until we reach the rightmost cell. Hence, we can obtain all the node structures successfully without any column-wise edges. As a result, we reduce the complexity of the index construction and update by a factor of two. Figure 3(c) illustrates the concept of our row-wise edge indexing method. The total number of tokens is reduced to 17 (8 ATs + 8 DTs + 1 IT). A detailed analysis of its indexing complexity is presented in Section 5.

4.4.2 Another Example
Figure 4 illustrates how to construct edge structures from two nearby cells. In the figure, two row-wise adjacent cells 3 and 4 have two row-wise edge transitions between them, E+(3, 4) and E−(3, 4); two point entities P1, P2; and two polygonal entities R1, R2. As shown in the figure, N3 indexes {P2, R1, R2} and N4 maintains the indices of {P1, R2}. E+(3, 4) is obtained from Equation 1: N4 − (N3 ∩ N4) = {P1}. Similarly, E−(3, 4) = N3 − (N3 ∩ N4) = {P2, R1}.

Figure 4: Example of edge indexing of two point entities {P1, P2} and two polygonal entities {R1, R2} between two row-wise adjacent cells, with E+(3, 4) = {P1} and E−(3, 4) = {P2, R1}.

If we know N3, E+(3, 4), and E−(3, 4), we can compute N4 according to Lemma 1: N4 = N3 + E+(3, 4) − E−(3, 4) = {P1, R2}. This calculation also corresponds to our intuition: P2, R1, R2 overlap with cell 3 while P1, R2 overlap with cell 4. When transiting from cell 3 to cell 4, the algorithm recognizes that P2 and R1 disappear and that P1 newly appears, while the spatial condition of R2 is unchanged. Thus, we insert P2, R1 into the disappearance edge set and P1 into the appearance edge set. Obviously, edge
indexing is inefficient for indexing a point geometry. Node indexing has one IT per point entity and requires one token removal and one insertion upon any location movement. Edge indexing, however, requires one AT and one DT per point entity and, in the worst case, two token removals and two insertions during the update. In such a situation, we take advantage of using both, according to the spatial property of the entity extension. In summary, as shown in Figure 3(c), our edge indexing method uses edge structures for the AOI-enabled entities (A, E1, E2) while it uses node structures for the point entity (S).

4.5 Optimization Issues
In this section, we describe several optimization techniques for edge indexing, which reduce the algorithm complexity significantly.

4.5.1 Single-table Approach: Update
Typically, there exist two practical policies for a region update: Full Update simply removes every token of the previous entity region and re-inserts the newly updated tokens into the newly positioned areas. Incremental Update only removes the tokens whose spatial relationship with the cells changed upon an update and inserts them into the new edge structures that satisfy the new spatial conditions.

4.5.2 Two-table Approach: Separating Moving Entities from Stationary Entities
So far, we have not addressed a side-effect of token removals during the update operation. Let us assume that an edge index is realized with a hash table. Inserting a token is implemented by inserting it at the head of the corresponding hash bucket, hence the processing time is constant. However, the token removal time depends on the expected number of tokens per hash bucket. Therefore, the hash implementation may suffer a significant system penalty when used with a huge number of populated entities. Two-table edge indexing is designed to make the token removal overhead constant. First, we split the single edge structure that indexes both stationary and moving entities into two separate edge
structures. If an entity is not moving, its tokens are placed in a stationary edge structure; otherwise, they are placed in a moving edge. Second, all moving edge structures are periodically reconstructed. After the reconstruction, all grid cells are evaluated to compute their visible sets. Once all the cells are evaluated, the moving edges are destroyed and the reconstruction step follows again. As a result, search operations on the moving edge structures are no longer necessary and the system becomes insensitive to the underlying distribution pattern and moving speed of the entities. A singly linked list implementation is used for the moving edge structure.

Table 1: Summary of notations for virtual entities and their properties.
Symbol | Meaning
U | set of populated object entities
O | set of moving object entities, O ⊆ U
Uq | set of populated user entities
Q | set of moving user entities, Q ⊆ Uq
A | set of avatars, A = {a | a ∈ U ∩ Uq}
i.P | location of entity i, where i ∈ (U ∪ Uq)
i.R | AOI of entity i, where i ∈ (U ∪ Uq)
mi | side length of entity i, where i ∈ (U ∪ Uq), represented as a number of cell units
m | average side length of the AOI of the entities
Var(mi) | variance of the random variable mi
v | maximum reachable distance, represented as a number of cell units

5. ANALYSIS
We analyze three indexing schemes quantitatively (node indexing, edge indexing, and two-table edge indexing) in terms of memory utilization and processing time. In this analysis, we assume that node and edge structures are implemented with hash tables. For hash table manipulations we assume three memory-access functions: token insertion, token removal, and token scan. Their processing costs are denoted by Ta, Td, and Ts, respectively. A token scan operation reads the tokens in a hash bucket sequentially; it is used extensively during cell evaluations. Ts and Td are functions of the number of tokens in the bucket, while Ta is
constant. For the purpose of the analysis, we define two random variables. The first, denoted by mo, represents the side length of the AOI of an entity o. The side lengths are uniformly distributed in the range [mmin, mmax]; the average value of mo is denoted by m. The second random variable, v, denotes the x-directional or y-directional maximum distance traveled by a moving entity during a time interval. The simulated movement of an entity during the given time is also uniformly distributed, in the range [0, v]. For simplicity of calculation, both random variables are expressed as numbers of cell units. Table 1 summarizes the symbolic notations and their meaning.

5.1 Memory Requirements
Let the token size be denoted by s. Node indexing uses s·|Uq| memory units for the user entities and s·Σ_{o∈U}(mo + 1)² ≈ s(m² + 2m + 1 + Var(mo))|U| units for the object entities. Single-table edge indexing consumes s·|Uq| storage units for the user entities and s·Σ_{o∈U} 2(mo + 1) ≈ 2s(m + 1)|U| for the object entities. Two-table edge indexing occupies s·|Uq| units for the users and s·{Σ_{i∈O} 2(mi + 1) + Σ_{j∈(U−O)} 2(mj + 1)} ≈ 2s(m + 1)|U| units for the objects. Table 2 summarizes these results.

Table 2: Memory requirements of the different indexing methods.
indexing method | user entities | object entities
node indexing | s·|Uq| | s((m + 1)² + Var(mo))|U|
single-table edge | s·|Uq| | 2s(m + 1)|U|
two-table edge | s·|Uq| | 2s(m + 1)|U|

In our target application, our edge indexing methods consume approximately (m + 1)/2 times less memory space than node indexing. Different grid cell partitionings with the edge methods lead to different memory requirements. For example, consider two grids: an M × M grid and a 2M × 2M grid. The memory requirement for the user entities is unchanged because it depends only on the total number of user entities. The memory requirements for the object entities are
approximately 2s(m + 1)|U| in the M × M grid case and 2s(2m + 1)|U| for the 2M × 2M grid. Thus, a four times larger cell size leads to an approximately two times smaller number of tokens.

5.2 Processing Cost
In this section, we focus on the cost analysis of update operations and cell evaluations. For a fair comparison of the different methods, we only analyze the run-time complexity for moving objects and moving users.

5.2.1 Update Cost
We assume that the set of moving objects O and the set of moving users Q are known in advance. Similar to edge indexing, node indexing has two update policies: full update and incremental update. Full update, implemented in Q-Index [13] and SINA [10], removes all the old tokens from the old cell node structures and inserts all the new tokens into the new cell nodes. The incremental update policy, not implemented by any existing work, removes and inserts only the tokens whose spatial condition changed during a period. In this analysis, we consider only incremental node indexing. To analyze the update cost of node indexing, we introduce the maximum reachable distance v, where the next location of a moving entity whose previous location was at cell (0,0) is uniformly distributed over the (±v, ±v) grid cell space, as illustrated in Figure 5. We also assume that the given maximum reachable distance is less than any side length of the AOIs of the objects in the system; that is, v < mo for all o ∈ O. As seen in Figure 5, the next location may fall into three categories: area A, area B, and the center cell (0,0). If an object stays in the same cell, there is no update. If the object moves into area A, there are (i + j)(mo + 1) − ij token insertions and removals, where 1 ≤ i, j ≤ v. Otherwise, there are k(mo + 1) token insertions and removals, where 1 ≤ k ≤ v.
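The per-move token counts above can be checked by brute force. The sketch below (with hypothetical helper names, assuming an AOI of side mo spans (mo + 1) × (mo + 1) grid cells) enumerates the cells covered before and after a move by (i, j) cells and compares the difference against the closed form (i + j)(mo + 1) − ij:

```python
def token_churn_bruteforce(m_o: int, i: int, j: int) -> int:
    """Cells covered before but not after a move by (i, j) cells,
    for an AOI spanning (m_o + 1) x (m_o + 1) grid cells."""
    before = {(x, y) for x in range(m_o + 1) for y in range(m_o + 1)}
    after = {(x + i, y + j) for x in range(m_o + 1) for y in range(m_o + 1)}
    return len(before - after)  # token removals; insertions are symmetric

def token_churn_formula(m_o: int, i: int, j: int) -> int:
    # (m_o+1)^2 - (m_o+1-i)(m_o+1-j) expanded
    return (i + j) * (m_o + 1) - i * j

# The closed form matches the brute-force count for all moves up to m_o.
for m_o in range(1, 6):
    for i in range(0, m_o + 1):
        for j in range(0, m_o + 1):
            assert token_churn_bruteforce(m_o, i, j) == token_churn_formula(m_o, i, j)
```

Setting j = 0 recovers the axis-aligned case k(mo + 1) used for area B, and i = j = 0 gives zero churn for an object that stays in its cell.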
Thus, the expected processing time of an object update for node indexing is the summation over the three movement types, where (A) and (B) denote the token insertions and removals summed over areas A and B, respectively:

T_update^node(o) = [4·(A) + 4·(B)] / (2v + 1)² · (Ta + Td) = v(v + 1){v(4mo + 3 − v) + 2(mo + 1)} / (2v + 1)² · (Ta + Td)   (3)

and the expected processing time over all objects for node indexing is obtained by T_update^node = Σ_{o∈O,v} ...

... 0.8, in bold typeface), whereas when Z(s − 1) = 1 the state transition probabilities to that event cover the range of 0.15 to 0.61. This means that past no-loss events do not affect the loss process as much as past loss events. Intuitively this makes sense, because a successfully arriving packet can be seen as an indicator of congestion relief. Andren et al. [1] as well as Yajnik et al. [20] both confirmed this by measuring the cross-correlation of the loss and no-loss run-lengths. They came to the result that such correlation is very weak. This implies that patterns of short loss bursts interspersed with short periods of successful packet arrivals occur rarely (note in this context that in Table 1 the pattern 101 has by far the lowest state probability). Thus, in the following we employ a model [12] which considers only the past loss events for the state transition probability. The number of states of the model can then be reduced from 2^m to m + 1. This means that we only consider the state transition probability P(Z(s) | Z(s − k), ...
, Z(s − 1)), with Z(s − k + i) = 1 ∀ i ∈ [0, k − 1], where k (0 < k ≤ m) is a variable parameter. We define a loss run length k for a sequence of k consecutively lost packets detected at sj (sj > k > 0), with Z(sj − k − 1) = 0, Z(sj) = 0 and Z(sj − k + i) = 1 ∀ i ∈ [0, k − 1], j being the j-th such event. The conditional loss probabilities satisfy

P(X > k | X > k − 1) = P(X > k ∩ X > k − 1) / P(X > k − 1) = P(X > k) / P(X > k − 1)

These conditional loss probabilities can again be approximated by P_{L,cond}(k) = P_{L,cum}(k) / P_{L,cum}(k − 1) ≈ Σ_{n=k}^{m} o_n / Σ_{n=k−1}^{m} o_n.   (1)

For the self-transition probability P(X = m | X = m) we use P_{L,cond}(m). Fig. 2 shows the Markov chain for the model.

Figure 2: Loss run-length model with m + 1 states.

Additionally, we also define a random variable Y which describes the distribution of burst loss lengths with respect to the burst loss events j (and not to packet events, as in the definition of X). We have the burst loss length rate g_k as an estimate for P(Y = k). Thus, the mean burst loss length is g = Σ_{k=1}^{m} k·g_k, corresponding to E[Y] (the average loss gap [5]). The run-length model implies a geometric distribution for residing in state X = m. For the probability of a burst loss length of k packets we can thus compute estimates for performance parameters in a higher-order model representation (note that here Y represents the random variable used in the higher-order models). For a three-state model we have, e.g., for P(Y = k):

P(Y = k) = 1 − p_1 for k = 1;  P(Y = k) = p_1 · p_m^{k−2} (1 − p_m) for k ≥ 2   (2)

For the special case of a system with a memory of only the previous packet (m = 1), we can use the run-length distribution for a simple computation of the parameters of the commonly-used Gilbert model (Fig.
3) to characterize the loss process (X being the associated random variable, with X = 0: no packet lost, X = 1: a packet lost). Then the probability of being in state 1 can be seen as the unconditional loss probability ulp and approximated by the mean loss. Only one conditional loss probability clp, for the transition 1 → 1, exists:

clp = P_{L,cond} = Σ_{k=1}^{m} (k − 1)·o_k / Σ_{k=1}^{m} k·o_k   (3)

If losses of one flow are correlated (i.e., the loss probability of an arriving packet is influenced by the contribution to the state of the queue by a previous packet of the same flow, and/or both the previous and the current packet see bursty arrivals of other traffic [15]), we have ulp < clp and thus ulp ≤ clp. For ulp = clp the Gilbert model is equivalent to a 1-state (Bernoulli) model with ulp = clp (no loss correlation). As in Equation 2, we can compute an estimate for the probability of a burst loss length of k packets for a higher-order model representation:

P(Y = k) = clp^{k−1}(1 − clp)   (4)

Figure 3: Loss run-length model (Gilbert model) with two states.

Figure 4: Components of the end-to-end loss recovery/control measurement setup.

2.2 Objective Speech Quality Metrics
Unlike conventional methods such as the Signal-to-Noise Ratio (SNR), novel objective quality measures attempt to estimate the subjective quality as closely as possible by modeling the human auditory system. In our evaluation we use two objective quality measures: the Enhanced Modified Bark Spectral Distortion (EMBSD, [21]) and the Measuring Normalizing Blocks (MNB) scheme described in Appendix II of ITU-T Recommendation P.861 [18]. These two objective quality measures are reported to have a very high correlation with subjective tests, their relation to the range of subjective test result values (MOS) is close to linear, and they are recommended as suitable for the evaluation of speech degraded by transmission errors in real network environments, such as bit errors and frame erasures [18, 21]. Both metrics
are distance measures, i.e., a IB suit value of 0 implies perfect quality, the larger the value, the worse the speech quality is (in Fig. 5 we show an axis with approximate corresponding MOS values).\nFor all simulations in this paper we employed both schemes.\nAs they yielded very similar results (though MNB results generally exhibited less variability) we only present EMBSD results.\n3.\nVOIP LOSS SENSITIVITY Figure 4 shows the components of the measurement setup which we will use to evaluate our approach to combined end-t-end and hopby-hop loss recovery and control.\nThe 444 shaded boxes show the components in the data path where mechanisms of loss recovery are located.\nTogether with the parameters of the network model (section 2.1) and the perceptual model we obtain a measurement setup which allows us to map a specific PCM signal input to a speech quality measure.\nWhile using a simple end-to-end loss characterization, we generate a large number of loss patterns by using different seeds for the pseudcerandom number generator (for the results presented here we used 300 patterns for each simulated condition for a single speech sample).\nThis procedure takes thus into account that the impact of loss on an input signal may not be homogenous (i.e., a loss burst within one segment of that signal can have a different perceptual impact than a loss burst of the same size within another segment).\nBy averaging the result of the objective quality measure for several loss patterns, we have a reliable indication for the performance of the codec operating under a certain network loss condition.\nWe employed a Gilbert model (Fig. 
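The two-state model above can be sketched concretely. The following is an illustrative Python fragment (not the authors' measurement code; function names and parameter values are chosen for the example): it simulates a Gilbert loss process, estimates ulp and the run-length-based clp (Eq. 3), and compares the empirical burst-length distribution with the geometric estimate of Eq. 4.

```python
import random

def gilbert_losses(n, p01, p11, seed=1):
    """Simulate a two-state Gilbert model.
    p01: P(loss | previous packet received); p11 = clp: P(loss | previous packet lost)."""
    rng = random.Random(seed)
    state, trace = 0, []
    for _ in range(n):
        p = p11 if state == 1 else p01
        state = 1 if rng.random() < p else 0
        trace.append(state)
    return trace

def run_lengths(trace):
    """Histogram o_k: number of loss runs of length k."""
    o, k = {}, 0
    for x in trace + [0]:            # sentinel terminates a trailing burst
        if x == 1:
            k += 1
        elif k:
            o[k] = o.get(k, 0) + 1
            k = 0
    return o

def clp_estimate(o):
    """Run-length estimate of the conditional loss probability (Eq. 3)."""
    lost = sum(k * n for k, n in o.items())            # Σ k·o_k
    followups = sum((k - 1) * n for k, n in o.items()) # Σ (k-1)·o_k
    return followups / lost

trace = gilbert_losses(200_000, p01=0.05, p11=0.30)
ulp = sum(trace) / len(trace)
o = run_lengths(trace)
clp = clp_estimate(o)
bursts = sum(o.values())
# Empirical burst-length rates versus the geometric estimate clp^(k-1)*(1-clp) (Eq. 4)
for k in (1, 2, 3):
    print(k, o.get(k, 0) / bursts, clp ** (k - 1) * (1 - clp))
```

With correlated losses (p11 > p01) the estimated ulp stays well below clp, as stated in the text.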
3.1 Temporal sensitivity
3.1.1 PCM
We first analyze the behaviour of µ-law PCM flows (64 kbit/s) with and without loss concealment, where the loss concealment repairs isolated losses only (speech stationarity can only be assumed for small time intervals). Results are shown for the AP/C concealment algorithm ([11]); similar results were obtained with other concealment algorithms. Figure 5 shows the case without loss concealment enabled, where the Gilbert model parameters are varied. The resulting speech quality is insensitive to the loss distribution parameter (clp); the results even decrease slightly for an increasing clp, pointing to a significant variability of the results. In Figure 6 the results with loss concealment are depicted. When the loss correlation (clp) is low, loss concealment provides a significant performance improvement. The relative improvement increases with increasing loss (ulp). For higher clp values the cases with and without concealment become increasingly similar and show the same performance at clp ≈ 0.3. Figures 5 and 6 also contain a curve showing the performance under the assumption of random losses (Bernoulli model, ulp = clp). Thus, considering a given ulp, a worst-case loss pattern of alternating losses (l(s mod 2) = 1, l([s + 1] mod 2) = 0) would enable a considerable performance improvement (with o_k = 0 ∀ k > 1: p_{1,cond} = 0, Eq. 3).

As we found by visual inspection that the distributions of the perceptual distortion values for one loss condition approximately follow a normal distribution, we employ mean and standard deviation to describe the statistical variability of the measured values. Figure 7 presents the perceptual distortion as in the previous figure, but also gives the standard deviation as error bars for the respective loss condition. It shows the increasing variability of the results with increasing loss correlation (clp), while the variability does not seem to change much with an increasing amount of loss (ulp). On one hand this points to some, though weak, sensitivity with regard to heterogeneity (i.e., it matters which area of the speech signal (voiced/unvoiced) experiences a burst loss). On the other hand it shows that a large number of different loss patterns is necessary for a certain speech sample when using objective speech quality measurement to assess the impact of loss correlation on user perception.

Figure 5: Utility curve for PCM without loss concealment (EMBSD). Figure 6: Utility curve for PCM with loss concealment (EMBSD). Figure 7: Variability of the perceptual distortion with loss concealment (EMBSD). Figure 8: Utility curve for the G.729 codec (EMBSD).

3.1.2 G.729
G.729 ([17]) uses the Conjugate Structure Algebraic Code Excited Linear Prediction (CS-ACELP) coding scheme and operates at 8 kbit/s. In G.729, a speech frame is 10 ms in duration. For each frame, the G.729 encoder analyzes the input data and extracts the parameters of the Code Excited Linear Prediction (CELP) model, such as linear prediction filter coefficients and excitation vectors. When a frame is lost or corrupted, the G.729 decoder uses the parameters of the previous frame to interpolate those of the lost frame. The line spectral pair coefficients (LSPs, another representation of the linear prediction coefficients) of the last good frame are repeated, and the gain coefficients are taken from the previous frame but damped to gradually reduce their impact. When a frame loss occurs, the decoder cannot update its state, resulting in a divergence of encoder and decoder state. Thus, errors are not only introduced during the time period represented by the current frame but also in the following ones. In addition to the impact of the missing codewords, distortion is increased by the missing update of the predictor filter memories for the line spectral pairs and the linear prediction synthesis filter memories.

Figure 8 shows that for similar network conditions the output quality of G.729 (two G.729 frames are contained in a packet) is worse than PCM with loss concealment, demonstrating the compression versus quality tradeoff under packet loss. Interestingly, the loss correlation (clp parameter) has some impact on the speech quality; however, the effect is weak, pointing to a certain robustness of the G.729 codec with regard to consecutive packet losses, owing to the internal loss concealment. Rosenberg has done a similar experiment ([9]), showing however that the difference between the original and the concealed signal, in terms of a mean-squared error, grows significantly with increasing loss burstiness. This demonstrates the importance of perceptual metrics which are able to include concealment (and not only reconstruction) operations in the quality assessment.

3.2 Sensitivity due to ADU heterogeneity
PCM is a memoryless encoding; therefore the ADU content is only weakly heterogeneous (Figure 7). Thus, in this section we concentrate on the G.729 codec. The experiment we carry out is to measure the resynchronization time of the decoder after k consecutive frames are lost. The G.729 decoder is said to have resynchronized with the encoder when the energy of the error signal falls below one percent of the energy of the decoded signal without frame loss (this is equivalent to a signal-to-noise ratio (SNR) threshold of 20 dB). The error signal energy (and thus the SNR) is computed on a per-frame basis. Figure 9 shows the resynchronization time (expressed in the number of frames needed until the threshold is exceeded) plotted against the position of the loss (i.e., the index of the first lost frame) for different values of k.

Figure 9: Resynchronization time (in frames) of the G.729 decoder after the loss of k consecutive frames (k ∈ [1,4]) as a function of frame position.
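The resynchronization criterion above (per-frame error energy below one percent of the reference energy, i.e., a 20 dB SNR threshold) can be sketched as follows. This is an illustrative Python fragment with synthetic signals standing in for actual decoder output; the function names and the injected SNR values are assumptions of the example, not part of the paper's toolchain.

```python
import math

def frame_snr_db(ref, deg, frame_len):
    """Per-frame SNR between the loss-free decode (ref) and the decode with loss (deg)."""
    snrs = []
    for i in range(0, len(ref) - frame_len + 1, frame_len):
        r = ref[i:i + frame_len]
        d = deg[i:i + frame_len]
        sig = sum(x * x for x in r)
        err = sum((x - y) ** 2 for x, y in zip(r, d))
        snrs.append(float('inf') if err == 0 else 10 * math.log10(sig / err))
    return snrs

def resync_time(snrs, loss_frame, threshold_db=20.0):
    """Frames needed, starting at the loss, until the per-frame SNR exceeds the threshold."""
    for n, snr in enumerate(snrs[loss_frame:]):
        if snr > threshold_db:
            return n
    return None

frame_len = 4
ref = [1.0] * 40                       # 10 frames of a constant reference signal
deg = ref[:]
# Inject decaying error energy into frames 3..5 (per-frame SNR of 5, 15, 25 dB)
for f, snr in [(3, 5.0), (4, 15.0), (5, 25.0)]:
    err_amp = math.sqrt(4.0 / 10 ** (snr / 10))
    deg[f * frame_len] = 1.0 - err_amp

snrs = frame_snr_db(ref, deg, frame_len)
print(resync_time(snrs, loss_frame=3))   # 2 frames until the 20 dB threshold is exceeded
```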
The speech sample is produced by a male speaker, where an unvoiced/voiced (uv) transition occurs in the eighth frame. We can see from Figure 9 that the position of a frame loss has a significant influence on the resulting signal degradation, while the degradation is not that sensitive to the length of the frame loss burst k. (While SNR measures often do not correlate well with subjective speech quality, the large differences in the SNR-threshold-based resynchronization time clearly point to a significant impact on subjective speech quality.) The loss of unvoiced frames seems to have a rather small impact on the signal degradation, and the decoder recovers the state information fast thereafter. The loss of voiced frames causes a larger degradation of the speech signal, and the decoder needs more time to resynchronize with the sender. However, the loss of voiced frames at an unvoiced/voiced transition leads to a significant degradation of the signal. We have repeated the experiment for different male and female speakers and obtained similar results.

Taking into account the used coding scheme, the above phenomenon can be explained as follows. Because voiced sounds have a higher energy than unvoiced sounds, the loss of voiced frames causes a larger signal degradation than the loss of unvoiced frames. However, due to the periodic property of voiced sounds, the decoder can conceal the loss of voiced frames well once it has obtained sufficient information on them. The decoder fails to conceal the loss of voiced frames at an unvoiced/voiced transition because it attempts to conceal the loss of voiced frames using the filter coefficients and the excitation for an unvoiced sound. Moreover, because the G.729 encoder uses a moving average filter to predict the values of the line spectral pairs and only transmits the difference between the real and predicted values, it takes a long time for the decoder to resynchronize with the encoder once it has failed to build the appropriate linear prediction filter.

4. QUEUE MANAGEMENT FOR INTRA-FLOW LOSS CONTROL
While we have highlighted the sensitivity of VoIP traffic to the distribution of loss in the last sections, we now briefly introduce a queue management mechanism ([13]) which is able to enforce the relative preferences of the application with regard to loss. We consider flows with packets marked with +1 and -1 (as described in the introduction) as foreground traffic (FT), and other (best effort) flows as background traffic (BT). Packet marking, in addition to keeping the desirable property of state aggregation within the network core as proposed by the IETF Differentiated Services architecture, is exploited here to convey the intra-flow requirements of a flow. As reordering of the packets of a flow within the network should be avoided, we consider mechanisms for the management of a single queue with different priority levels. One approach to realize inter-flow service differentiation using a single queue is RIO ("RED with IN and OUT", [3]). With RIO, two average queue sizes are computed as congestion indicators: one just for the IN (high priority) packets and another for both IN and OUT (low priority) packets. Packets marked as OUT are dropped earlier (in terms of the average queue size) than IN packets. RIO has been designed to decrease the clp seen by particular flows at the expense of other flows. In this work, however, we want to keep the ulp as given by other parameters while modifying the loss distribution for the foreground traffic (FT). This amounts to trading the loss of a +1 packet against a -1 packet of the same flow (in a statistical sense).

Figure 10: Differential RED drop probabilities as a function of average queue sizes.

Fig. 10 shows the conventional RED drop probability curve (p_0 as a function of the average queue size over all arrivals, avg), which is applied to all unmarked (0) traffic (background traffic: BT). The necessary relationship between the drop probabilities for packets marked as -1 and +1 can be derived as follows (note that this relationship is valid both at the end-to-end level and at every individual hop). Let a = a_0 + a_{+1} + a_{-1} be the overall number of packets emitted by an FT flow, and a_n, n ∈ {-1, 0, +1}, the number of packets belonging to a certain class (where the 0 class corresponds to (unmarked) best effort traffic). Then, with a_{+1} = a_{-1} = a_{±1}, and considering that the resulting service has to be best effort in the long term, we have:

    a_0·p_0 + a_{+1}·p_{+1} + a_{-1}·p_{-1} = a·p_0
    a_{±1}·(p_{+1} + p_{-1}) = (a − a_0)·p_0
    p_{-1} = 2·p_0 − p_{+1}    (5)

Due to this relationship between the drop probabilities for +1 and -1 packets, we designate this queue management algorithm as Differential RED (DiffRED). Figure 10 shows the corresponding drop probability curves. Due to the condition a_{+1} = a_{-1} = a_{±1}, in addition to the conventional RED behaviour, the DiffRED implementation should also monitor the +1 and -1 arrival processes. If the ratio of +1 to -1 packets at a gateway is not 1 (either due to misbehaving flows or a significant number of flows which have already experienced loss at earlier hops), the -1 loss probability is decreased and the +1 probability is increased at the same time, thus degrading the service for all users. The shaded areas above and below the p_0(avg) curve (Fig. 10) show the operating area when this correction is added.
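The best-effort constraint behind Eq. 5 is easy to check numerically. The sketch below is illustrative only (the function name and the "protection" parameter are assumptions of the example): it derives per-class drop probabilities from a chosen +1 protection level and verifies that, with balanced +1/-1 counts, the mean loss of the marked flow equals the best-effort probability p_0.

```python
def diffred_probs(p0, protection):
    """Per-class drop probabilities under the DiffRED constraint (Eq. 5).
    protection in [0, 1]: fraction by which +1 packets are protected
    relative to best effort, i.e. p_plus = (1 - protection) * p0."""
    p_plus = (1.0 - protection) * p0
    p_minus = 2.0 * p0 - p_plus        # Eq. 5: p_{-1} = 2*p0 - p_{+1}
    return p_plus, p_minus

p0 = 0.05
p_plus, p_minus = diffred_probs(p0, protection=0.9)

# Long-term best effort: with a_{+1} == a_{-1}, the flow's mean loss equals p0.
a0, a1 = 600, 200                      # a = a0 + 2*a1 packets in total
mean_loss = (a0 * p0 + a1 * p_plus + a1 * p_minus) / (a0 + 2 * a1)
print(p_plus, p_minus, mean_loss)
```

The trade is purely within the flow: lowering p_{+1} raises p_{-1} by the same amount, so the ulp seen by the network is unchanged.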
In [13] it has been shown that using only the conventional RED average queue size avg for DiffRED operation is not sufficient. This is due to the potentially missing correlation of the computed avg value between consecutive +1 and -1 arrivals, especially when the share of the FT traffic is low. As this might result in an unfair distribution of losses between the FT and BT fractions, a specific avg_1 value is computed by sampling the queue size only at FT arrival instants. Thus, a service differentiation for foreground traffic is possible which does not differ from conventional RED behaviour in the long-term average (i.e., in the ulp).

5. INTRA-FLOW LOSS RECOVERY AND CONTROL
5.1 Temporal sensitivity
Considering a flow with temporal loss sensitivity, paragraph 3.1.1 has shown that a simple, periodic loss pattern enhances the performance of the end-to-end loss recovery. The pattern is not tied to particular packets; therefore a per-flow characterization with the introduced metrics is applicable. In this paragraph we assume that a flow expresses its temporal sensitivity by marking its packets with an alternating pattern of "+1", "-1".

Figure 11: Comparison of actual and estimated burst loss length rates as a function of burst length k: three-state run-length-based model. Figure 12: Comparison of actual and estimated burst loss length rates as a function of burst length k: two-state run-length-based model (Gilbert).

Figures 11 and 12 show the rates for the actual and the estimated burst loss lengths for a three-state (m = 2) and a two-state (m = 1, Gilbert) model, respectively. (We only discuss the results qualitatively here, to give an example of how an intra-flow loss control algorithm performs and to show how loss models can capture this performance; details on the simulation scenario and parameters can be found in [12].) We can observe that DiffRED shapes the burst probability curve in the desired way. Most of the probability mass is concentrated at isolated losses (the ideal behaviour would be the occurrence of only isolated losses (k = 1), which can be expressed with clp = 0 in terms of Gilbert model parameters). With Drop Tail, an approximately geometrically decreasing burst loss probability with increasing burst length (Eq. 4) is obtained, where the clp parameter is relatively large though. Thus, considering voice with temporal loss sensitivity as the foreground traffic of interest, with DiffRED a large number of short annoying bursts can be traded against a larger number of isolated losses and few long loss bursts (which occur when the queue is under temporary overload, i.e., avg > max_th, Fig. 10). We can see that the three-state model estimation (Eq. 2) reflects the two areas of the DiffRED operation (the sharp drop of the burst loss length rate for k = 2 and the decrease along a geometrically decreasing asymptote for k > 2). This effect cannot be captured by the two-state model (Eq. 4), which thus overestimates the burst loss length rate for k = 2 and then hugely underestimates it for k > 2. Interestingly, for Drop Tail, while both models capture the shape of the actual curve, the lower-order model is more accurate in the estimation. This can be explained as follows: if the burst loss length probabilities are in fact close to a geometric distribution, the estimate is more robust if all data is included (note that the run-length-based approximation of the conditional loss probability P(X = m|X = m) only includes loss run-length occurrences larger than or equal to m: Eq. 1).

5.2 Sensitivity due to ADU heterogeneity
In paragraph 3.1.2 we have seen that sensitivity to ADU heterogeneity results in a certain non-periodic loss pattern. Thus, a mechanism at (or near) the sender is necessary which derives that pattern from the voice data. Furthermore, an explicit cooperation between end-to-end and hop-by-hop mechanisms is necessary (Fig. 4). We use the result of paragraph 3.2 to develop a new packet marking scheme called Speech Property-Based Selective Differential Packet Marking (SPB-DIFFMARK). The DIFFMARK scheme concentrates the higher priority packets on the frames essential to the speech signal and relies on the decoder's concealment for other frames.

Figure 13 shows the simple algorithm, written in pseudocode, that is used to detect an unvoiced/voiced (uv) transition and protect the voiced frames at the beginning of a voiced signal. The procedure analysis() is used to classify a block of k frames as voiced, unvoiced, or uv transition. send() is used to send a block of k frames as a single packet with the appropriate priority (either +1, 0 or -1). As the core algorithm gives only a binary marking decision (protect the packet or not), we employ a simple algorithm to send the necessary -1 packets for compensation (Eq. 5): after a burst of +1 packets has been sent, a corresponding number of -1 packets is sent immediately. State about the necessary number of to-be-sent -1 packets is kept in the event that the SPB algorithm triggers the next +1 burst before all -1 packets necessary for compensation are sent. Thus, seen over time intervals which are long compared to the +1/-1 burst times, the mean loss for the flow will be equal to the best effort case (Eq. 5). N is a pre-defined value and defines how many frames at the beginning of a voiced signal are to be protected. Our simulations (Fig. 9) have shown that values in the range from 10 to 20 are appropriate for N (depending on the network loss condition). In the simulations presented below, we choose k = 2, a typical value for interactive speech transmissions over the Internet (20 ms of audio data per packet). A larger value of k would help to reduce the relative overhead of the protocol header, but it also increases the packetization delay and makes sender classification and receiver concealment in case of packet loss (due to a larger loss gap) more difficult.

    protect = 0
    foreach (k frames)
        classify = analysis(k frames)
        if (protect > 0)
            if (classify == unvoiced)
                protect = 0
                if (compensation > 0)
                    compensation = compensation - k
                    send(k frames, -1)
                else
                    send(k frames, 0)
                endif
            else
                send(k frames, +1)
                protect = protect - k
                compensation = compensation + k
            endif
        else
            if (classify == uv_transition)
                send(k frames, +1)
                protect = N - k
                compensation = compensation + k
            else
                if (compensation > 0)
                    compensation = compensation - k
                    send(k frames, -1)
                else
                    send(k frames, 0)
                endif
            endif
        endif
    endfor

Figure 13: SPB-DIFFMARK pseudocode.

5.2.1 End-to-end simulation description
Due to the non-periodic loss pattern, we need to explicitly associate a drop probability with a single packet within an end-to-end model. Therefore we use a separate one-state Markov model (Bernoulli model) to describe the network behaviour as seen by each class of packets. Best effort packets (designated by 0 in Fig. 14) are dropped with probability p_0, whereas packets marked with +1 and -1 are dropped with probabilities p_{+1} and p_{-1}, respectively. This is a reasonable assumption with regard to the interdependence of the different classes, as it has been shown that DiffRED (Figs. 11 and 12) achieves a fair amount of decorrelation of +1 and -1 packet losses.
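The marking logic of Figure 13 can be transcribed into a compact, runnable form. The sketch below is an illustrative Python rendering (classification labels are supplied as input rather than computed from speech, and the function name is an assumption of the example):

```python
def spb_diffmark(classes, k=2, N=10):
    """Mark blocks of k frames following the Figure 13 pseudocode.
    classes: per-block labels 'v' (voiced), 'u' (unvoiced), 't' (uv transition).
    Returns one mark per block: +1, -1 or 0."""
    marks, protect, compensation = [], 0, 0
    for c in classes:
        if protect > 0:
            if c == 'u':                   # voiced burst ended early
                protect = 0
                if compensation > 0:
                    compensation -= k
                    marks.append(-1)
                else:
                    marks.append(0)
            else:                          # keep protecting voiced frames
                marks.append(+1)
                protect -= k
                compensation += k
        else:
            if c == 't':                   # unvoiced/voiced transition found
                marks.append(+1)
                protect = N - k
                compensation += k
            elif compensation > 0:         # pay back with -1 packets
                compensation -= k
                marks.append(-1)
            else:
                marks.append(0)
    return marks

marks = spb_diffmark(['u', 'u', 't', 'v', 'v', 'v', 'v', 'u', 'u', 'u', 'u', 'u'])
print(marks)   # [0, 0, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1]
```

Over a long run the +1 and -1 counts balance, which is exactly the condition a_{+1} = a_{-1} assumed by Eq. 5.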
(The appropriateness of the simple end-to-end modeling used has been investigated in [12] with discrete event simulation, using a multi-hop topology and detailed modeling of foreground and background traffic sources.) Nevertheless, to include some correlation between the classes, we have set p_{+1} = 10^{-3}·p_0 for the subsequent simulations. This should also allow a reasonable evaluation of how losses in the ±1 class affect the performance of the SPB algorithms.

Figure 14: Marking schemes and corresponding network models.

    NO MARK:        0   0   0   0   0   0
    FULL MARK:     +1  +1  +1  +1  +1  +1
    SPB MARK:       0  +1  +1   0   0   0
    ALT MARK:       0  +1   0  +1   0  +1
    SPB DIFFMARK:   0  +1  +1  -1  -1   0
    ALT DIFFMARK:  -1  +1  -1  +1  -1  +1

For a direct comparison with SPB-DIFFMARK, we evaluate a scheme where packets are alternatingly marked as either -1 or +1 (ALT-DIFFMARK, Figure 14). We furthermore include related inter-flow loss protection schemes. The first scheme uses full protection (FULL MARK, all packets are marked as +1). The SPB-MARK scheme operates similarly to SPB-DIFFMARK, but no -1 packets are sent for compensation (those packets are also marked as 0). For comparison we again use a scheme where packets are alternatingly marked as either 0 or +1 (ALT-MARK). Finally, packets of pure best effort flows are dropped with the probability p_0 (NO MARK case in Fig. 14).
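The per-class Bernoulli model just described can be sketched as follows. This is an illustrative fragment, not the paper's simulator: only the periodic schemes of Figure 14 are listed (the SPB patterns depend on the speech content), and the per-class probabilities use the assumptions stated in the text (p_{+1} = 10^{-3}·p_0, Eq. 5 for p_{-1}).

```python
PATTERNS = {
    # One period of each periodic marking scheme from Figure 14
    'NO MARK':      [0, 0, 0, 0, 0, 0],
    'FULL MARK':    [+1, +1, +1, +1, +1, +1],
    'ALT MARK':     [0, +1, 0, +1, 0, +1],
    'ALT DIFFMARK': [-1, +1, -1, +1, -1, +1],
}

def mean_loss(pattern, p0):
    """Expected per-packet loss under the per-class Bernoulli model."""
    p_plus = 1e-3 * p0              # near-decorrelated +1 class (assumed in the text)
    p_minus = 2 * p0 - p_plus       # Eq. 5
    drop = {0: p0, +1: p_plus, -1: p_minus}
    return sum(drop[m] for m in pattern) / len(pattern)

p0 = 0.1
for name, pattern in PATTERNS.items():
    print(name, round(mean_loss(pattern, p0), 6))
```

Note that ALT DIFFMARK lands exactly at the best-effort loss p_0, while the inter-flow schemes (FULL MARK, ALT MARK) reduce the mean loss at the expense of other flows.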
For the SPB marking schemes, the percentage of +1- and -1-marked packets is 40.4% each for the speech material used. We obtained similar marking percentages for other speech samples. The ALT marking schemes mark exactly 50% of their packets as +1.

5.2.2 Results
Figure 15 shows the perceptual distortion for the marking schemes dependent on the drop probability p_0. The unprotected case (NO MARK) has the highest perceptual distortion and thus the worst speech quality. (We have also performed informal listening tests, which confirmed the results obtained using the objective metrics.) The differential marking scheme (SPB-DIFFMARK) offers a significantly better speech quality, even though it only uses a network service which amounts to best effort in the long term. Note that the ALT-DIFFMARK marking strategy does not differ from the best effort case (which confirms the result of paragraph 3.1.2). SPB-DIFFMARK is even better than the inter-flow QoS ALT-MARK scheme, especially for higher values of p_0. These results validate the strategy of our SPB marking schemes, which do not equally mark all packets with a higher priority but rather protect a subset of frames that are essential to the speech quality.

Figure 15: Perceptual distortion (EMBSD) for the marking schemes.

The SPB-FEC scheme ([12]), which uses redundancy piggybacked on the main payload packets (RFC 2198, [7]) to protect a subset of the packets, enables a very good output speech quality for low loss rates. However, it should be noted that the amount of data sent over the network is increased by 40.4%. Note that the simulation presumes that this additionally consumed bandwidth does not itself contribute significantly to congestion; this assumption is only valid if a small fraction of the traffic is voice ([S]). The SPB-FEC curve is convex with increasing p_0: due to the increasing loss correlation, an increasing number of consecutive packets carrying redundancy are lost, leading to unrecoverable losses.
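Why correlated loss defeats piggybacked redundancy can be sketched in a few lines. The fragment below is an illustration under one assumption (an RFC 2198-style scheme with a one-packet offset, so packet i is recoverable as long as packet i+1 arrives); it compares two loss traces with the same ulp but different burstiness.

```python
def residual_loss(trace):
    """Fraction of packets unrecoverable when each packet also carries a
    redundant copy of its predecessor (one-packet offset): packet i is
    truly lost only if packets i and i+1 are both dropped."""
    lost = sum(1 for i, x in enumerate(trace[:-1]) if x and trace[i + 1])
    return lost / (len(trace) - 1)

isolated = [1, 0] * 500        # alternating losses: ulp = 0.5, clp = 0
bursty = [1, 1, 0, 0] * 250    # same ulp = 0.5, but losses arrive in pairs
print(residual_loss(isolated), residual_loss(bursty))
```

With purely isolated losses the redundancy recovers everything; with pairwise bursts roughly half of each burst remains unrecoverable, which is the behaviour behind the convex SPB-FEC curve.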
The curve for SPB-DIFFMARK is concave, however, yielding better performance for p_0 ≥ 0.22. The inter-flow QoS ALT-MARK scheme (50% of the packets are marked) enhances the perceptual quality. However, the auditory distance and the perceptual distortion of the SPB-MARK scheme (with 40.4% of all packets marked) are significantly lower and very close to the quality of the decoded signal when all packets are marked (FULL MARK). This also shows that by protecting the entire flow, only a minor improvement in the perceptual quality is obtained. The results for the FULL MARK scheme also show that, while the loss of some of the +1 packets has some measurable impact, the impact on perceptual quality can still be considered very low.

6. CONCLUSIONS
In this paper we have characterized the behaviour of a sample-based codec (PCM) and a frame-based codec (G.729) in the presence of packet loss. We have then developed intra-flow loss recovery and control mechanisms to increase the perceptual quality. While we have tested other codecs only informally, we think that our results reflect the fundamental difference between codecs which either encode the speech waveform directly or which are based on linear prediction. (For the redundant source coding in SPB-FEC, we also used the G.729 encoder.) For PCM without loss concealment we have found that it exhibits neither significant temporal sensitivity nor sensitivity to payload heterogeneity. With loss concealment, the speech quality is increased, but the amount of the increase exhibits strong temporal sensitivity. Frame-based codecs on one hand amplify the impact of loss through error propagation, though on the other hand such coding schemes help to perform loss concealment by extrapolation of decoder state. Contrary to sample-based codecs, we have shown that the concealment performance of the G.729 decoder may break at transitions within the speech signal, thus showing strong sensitivity to payload heterogeneity.

We have briefly introduced a queue management algorithm which is able to control loss patterns without changing the amount of loss, and characterized its performance for the loss control of a flow exhibiting temporal sensitivity. We then developed a new packet marking scheme called Speech Property-Based Selective Differential Packet Marking for an efficient protection of frame-based codecs. The SPB-DIFFMARK scheme concentrates the higher priority packets on the frames essential to the speech signal and relies on the decoder's concealment for other frames. We have also evaluated the mapping of the end-to-end algorithm to inter-flow protection. We have found that the selective marking scheme performs almost as well as the protection of the entire flow, at a significantly lower number of necessary high-priority packets. Thus, combined intra-flow end-to-end / hop-by-hop schemes seem to be well suited for heavily loaded networks with a relatively large fraction of voice traffic. This is the case because they need neither the addition of redundancy nor feedback (which would incur additional data and delay overhead) and thus yield stable voice quality also for higher loss rates, due to the absence of FEC and feedback loss. Such schemes can better accommodate codecs with fixed output bitrates, which are difficult to integrate into FEC schemes requiring adaptivity of both the codec and the redundancy generator. They are also useful for adaptive codecs running at the lowest possible bitrate. Avoiding redundancy and feedback is also interesting in multicast conferencing scenarios, where the end-to-end loss characteristics of the different paths leading to members of the session are largely different. Our work has clearly focused on linking simple end-to-end models, which can be easily parametrized with the known characteristics of hop-by-hop loss control, to user-level metrics. An analysis of a large-scale deployment of non-adaptive or adaptive FEC as compared to a deployment of our combined scheme clearly requires further study.
deployment of our combined scheme requires clearly further study.\n7.\nACKNOWLEDGMENTS We would like to thank Wonho Yang and Robert Yantorno, Temple University, for providing the EMBSD software for the objective speech quality measurements.\nMichael Zander, GMD Fokus, helped with the simulations of the queue management schemes.\n8.\nADDITIONAL AUTHORS Additional author: Georg Carle (GMD Fokus, email: carle@fokus.gmd.de).\n450 9.\nPI PI [31 PI [51 PI [71 PI PI WI WI w REFERENCES J. Andren, M. Hilding, and D. Veitch.\nUnderstanding end-to-end internet traffic dynamics.\nIn Proceedings IEEE GLOBECOM, Sydney, Australia, November 1998.\nJ.-C.\nBolot, S. Fosse-Parisis, and D. Towsley.\nAdaptive FEC-based error control for interactive audio in the Internet.\nIn Proceedings IEEE INFOCOM, New York, NY, March 1999.\nD. Clark and W. Fang.\nExplicit allocation of best effort packet delivery service.\nTechnical Report, MIT LCS, 1997.\nhttp:\/\/diffserv.lcs.mit.edu\/Papers\/expallot-ddc-wf.pdf.\nR. Cox and P. Kroon.\nLow bit-rate speech coders for multimedia communication.\nIEEE Communications Magazine, pages 34-41, December 1996.\nJ. Ferrandiz and A. Lazar.\nConsecutive packet loss in real-time packet traffic.\nIn Proceedings of the Fourth International Conference on Data Communications Systems, IFIP TC6, pages 306-324, Barcelona, June 1990.\nW. Jiang and H. Schulzrinne.\nQoS measurement of Internet real-time multimedia services.\nIn Proceedings NOSSDAV, Chapel Hill, NC, June 2000.\nC. Perkins, I. Kouvelas, 0.\nHodson, M. Handley, and J. Bolot.\nRTP payload for redundant audio data.\nRFC 2198, IETF, September 1997.\nftp:\/\/ftp.ietf.org\/rfc\/rfc2198.txt.\nM. Podolsky, C. Romer, and S. McCanne.\nSimulation of FEC-based error control for packet audio on the Internet.\nIn Proceedings IEEE INFOCOM, pages 48-52, San Francisco, CA, March 1998.\nJ. Rosenberg.\nG. 729 error recovery for Internet Telephony.\nProject report, Columbia University, 1997.\nJ. Rosenberg, L. Qiu, and H. 
Schulzrinne.\nIntegrating packet FEC into adaptive voice playout buffer algorithms on the Internet.\nIn Proceedings IEEE INFOCOM, Tel Aviv, Israel, March 2000.\nH. Sanneck.\nConcealment of lost speech packets using adaptive packetization.\nIn Proceedings IEEE Multimedia Systems, pages 140-149, Austin, TX, June 1998.\nftp:\/\/ftp.fokus.gmd.de\/pub\/glone\/papers\/Sann9806: Adaptive.ps.gz.\nH. Sanneck.\nPacket Loss Recovery and Control for Voice tinsmission over the Internet.\nPhD thesis, GMD Fokus \/ Telecommunication Networks Group, Technical University of Berlin, October 2000.\nP31 PI P51 WI P71 P31 w PI PI http:\/\/sanneck.net\/research\/publications\/thesis\/ SannOOlOLoss.pdf.\nH. Sanneck and G. Carle.\nA queue management algorithm for intracflow service differentiation in the best effort Internet.\nIn Proceedings of the Eighth Conference on Computer Communications and Networks (ICCCN), pages 419426, Natick, MA, October 1999.\nftp:\/\/ftp.fokus.gmd.de\/pub\/glone\/papers\/Sann9910: Intra-Flow.ps.gz.\nH. Sanneck, N. Le, and A. Wolisz.\nEfficient QoS support for Voice-over-IP applications using selective packet marking.\nIn Special Session on Error Control Techniques for Real-time Delivery of Multimedia data, First Intenzational Workshop on Intelligent Multimedia Computing (IMMCN), pages 553-556, Atlantic City, NJ, February 2000.\nftp:\/\/ftp.fokus.gmd.de\/pub\/glone\/papers\/Sann0002: VoIP-marking.ps.gz.\nH. Schulzrinne, J. Kurose, and D. Towsley.\nLoss correlation for queues with bursty input streams.\nIn Proceedings ICC, pages 219-224, Chicago, IL, 1992.\nD. Sisalem and A. 
Wolisz.\nLDAf TCP-friendly adaptation: A measurement and comparison study.\nIn Proceedings NOSSDAV, Chapel Hill, NC, June 2000.\nInternational Telecommunication Union.\nCoding of speech at 8 kbit\/s using conjugate-structure algebraic-code-excited linear-prediction (CS-ACELP).\nRecommendation G.729, ITU-T, March 1996.\nInternational Telecommunication Union.\nObjective oualitv measurement of telephone-band (300-3400 Hz) speech codecs.\nRecommendation P.861, ITU-T, February 1998.\nM. Yajnik, J. Kurose, and D. Towsley.\nPacket loss correlation in the MBone multicast network: Experimental measurements and markov chain models.\nTechnical Report 95-115, Department of Computer Science, University of Massachusetts, Amherst, 1995.\nM. Yajnik, S. Moon, J. Kurose, and D. Towsley.\nMeasurement and modelling of the temporal dependence in packet loss.\nTechnical Report 98-78, Department of Computer Science, University of Massachusetts, Amherst, 1998.\nW. Yang and R. Yantorno.\nImprovement of MBSD scaling noise masking threshold and correlation analysis with MOS difference instead of MOS.\nIn Proceedings ICASSP, pages 673-676, Phoenix, AZ, March 1999.\nby 451","lvl-3":"Intra-flow Loss Recovery and Control for\nABSTRACT\n\"Best effort\" packet-switched networks, like the Internet, do not offer a reliable transmission of packets to applications with real-time constraints such voice.\nThus, the loss of packets impairs the application-level utility.\nFor voice this utility impairment is twofold: on one hand, even short bursts of lost packets may decrease significantly the ability of the receiver to conceal the packet loss and the speech signal out is interrupted.\nOn the other hand, some packets may be particular sensitive to loss as they carry more important information in terms of user perception than other packets.\nWe first develop an end-to-end model based on loss lengths with which we can describe the loss distribution within a These packet-level metrics are then linked to 
Intra-flow Loss Recovery and Control for VoIP

ABSTRACT
"Best effort" packet-switched networks, like the Internet, do not offer a reliable transmission of packets to applications with real-time constraints such as voice. Thus, the loss of packets impairs the application-level utility. For voice this utility impairment is twofold: on one hand, even short bursts of lost packets may significantly decrease the ability of the receiver to conceal the packet loss, and the speech signal output is interrupted. On the other hand, some packets may be particularly sensitive to loss, as they carry more important information in terms of user perception than other packets. We first develop an end-to-end model based on loss run lengths with which we can describe the loss
distribution within a flow. These packet-level metrics are then linked to user-level objective speech quality metrics. Using this framework, we find that for low-compressing sample-based codecs (PCM) with loss concealment, isolated packet losses can be concealed well, whereas burst losses have a higher perceptual impact. For high-compressing frame-based codecs (G.729), on one hand the impact of loss is amplified through error propagation caused by the decoder filter memories, though on the other hand such coding schemes help to perform loss concealment by extrapolation of decoder state. Contrary to sample-based codecs, however, we show that the concealment performance may "break" at transitions within the speech signal.
We then propose mechanisms which differentiate between packets within a voice data flow to minimize the impact of packet loss. We designate these methods as intra-flow loss recovery and control. At the end-to-end level, identification of packets sensitive to loss (sender) as well as loss concealment (receiver) takes place. Hop-by-hop support schemes then allow to (statistically) trade the loss of one packet, which is considered more important, against another one of the same flow which is of lower importance. As both packets require the same cost in terms of network transmission, a gain in user perception is obtainable. We show that significant speech quality improvements can be achieved and additional data and delay overhead can be avoided, while still maintaining a network service which is virtually identical to best effort in the long term.

1. INTRODUCTION
Considering that a real-time flow may experience some packet loss, the impact of loss may vary significantly depending on which packets are lost within a flow. In the following we distinguish two reasons for such a variable loss sensitivity:
Temporal sensitivity: Loss of ADUs* which is correlated in time may lead to disruptions in the service. Note that this effect is further aggravated by interdependence between ADUs (i.e., one ADU can only be decoded when a previous ADU has successfully been received and decoded). For voice, as a single packet typically contains several ADUs (voice frames), this effect is more significant than, e.g., for video. It translates basically to isolated packet losses versus losses that occur in bursts.
Sensitivity due to ADU heterogeneity: Certain ADUs might contain parts of the encoded signal which are more important with regard to user perception than others of the same flow. Let us consider a flow with two frame types of largely different perceptual importance (we assume the same size and frequency of both types and no interdependence between the frames). Under the loss of 50% of the packets, the perceptual quality varies hugely between the case where the 50% of the frames with high perceptual importance are received and the case where the 50% less important frames are received.
*Application Data Unit: the unit of data emitted by a source coder such as a video or voice frame.

Figure 1: Schematic utility functions dependent on the loss of more (+1) and less (-1) important packets

Network support for real-time multimedia flows can on one hand aim at offering a loss-free service which, however, to be implemented within a packet-switched network, will be costly for the network provider and thus for the user. On the other hand, within a lossy service, the above sensitivity constraints must be taken into account. It is our strong belief that this needs to be done in a generic way, i.e., no application-specific knowledge (about particular coding schemes, e.g.) should be necessary within the network and, vice versa, no knowledge about network specifics should be necessary within an application.
Let us now consider the case that 50% of the packets of a flow are identified as more important (designated by +1) or less important (designated by -1) due to any of the above sensitivity constraints. Figure 1 a) shows a generic utility function describing the application-level Quality of Service dependent on the percentage of packets lost. For real-time multimedia traffic, such a utility should correspond to perceived video/voice quality. If the relative importance of the packets is not known by the transmission system, the loss rates for the +1 and -1 packets are equal. Due to the over-proportional sensitivity of the +1 packets to loss, as well as the dependence of the end-to-end loss recovery performance on the +1 packets, the utility function decreases significantly in a non-linear way (approximated in the figure by piece-wise linear functions) with an increasing loss rate. Figure 1 b) presents the case where all +1 packets are protected at the expense of the -1 packets. The decay of the utility function (for loss rates below 50%) is reduced, because the +1 packets are protected and the end-to-end loss recovery can thus operate properly in a wider range of loss rates, indicated by the shaded area. This results in a graceful degradation of the application's utility. Note that the higher the non-linearity of the utility contribution of the +1 packets (deviation from the dotted curve in Fig. 1 a), the higher is the potential gain in utility when the protection for +1 packets is enabled. Results for actual perceived quality utility for multimedia applications exhibit such non-linear behavior.*
*We have obtained results which confirm the shape of the "overall utility" curve shown in Fig. 1; clearly, the utility functions of the "sub-flows" and their relationship are more complex and only approximately additive.
To describe this effect and provide a taxonomy for different enhancement approaches, we introduce a novel terminology: we designate mechanisms which influence parameters between flows (and thus decrease the loss rate of one flow at the expense of other flows) as inter-flow enhancement. Schemes which, in the presence of loss, differentiate between packets within a flow, as demonstrated in Figure 1 above, provide intra-flow enhancement. As mechanisms have to be implemented within the network (hop-by-hop) and/or in the end systems (end-to-end), we have another axis of classification.
The adaptation of the sender's rate to the current network congestion state, an inter-flow scheme (loss avoidance), is difficult to apply to voice. Considering that voice flows have a very low bitrate, the relative cost of transmitting the feedback information is high (when compared, e.g., to a video flow). To reduce this cost, the feedback interval would need to be increased, then leading to a higher probability of wrong adaptation decisions. The major issue, however, is the lack of a codec which is truly scalable in terms of its output bitrate and corresponding perceptual quality. Currently standardized voice codecs usually only have a fixed output bitrate. While it has been proposed to switch between voice codecs, the MOS (subjective quality) values for the employed codecs do not differ much: e.g., the bitrates of the codecs G.723.1, G.729, G.728, and G.711 range from 5.3 to 64 kbit/s, while the subjective quality differs by less than 0.25 on a 1-to-5 MOS scale (1: bad, 5: excellent quality). Thus, when the availability of computing power is assumed, the codec with the lowest bitrate can be chosen permanently without actually decreasing the perceptual quality.
For loss recovery on an end-to-end basis, due to the real-time delay constraints, open-loop schemes like Forward Error Correction (FEC) have been proposed. While attractive because they can be used on the Internet today, they also have several drawbacks. The amount of redundant information needs to be adaptive to avoid taking bandwidth away from other flows. This adaptation is crucial especially when the fraction of flows using redundancy schemes is large. If the redundancy is a coding itself, as has often been proposed, the comments from above on adaptation also apply. Using redundancy also has implications for the delay adaptation [10] employed to de-jitter the packets at the receiver.
Note that the presented types of loss sensitivity also apply to flows which are enhanced by end-to-end loss recovery mechanisms.

Table 1: State and transition probabilities computed for an end-to-end Internet trace using a general Markov model (third order) by Yajnik et al.
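The two-class argument around Figure 1 can be made concrete with a small expected-value sketch. This is our own minimal illustration, not code from the paper: the function name and the closed-form "spill-over" rule for the protected case are assumptions, chosen so that the overall flow loss rate is identical with and without protection.

```python
def class_loss_rates(loss_rate, frac_important=0.5, protect=False):
    """Expected per-class loss rates for a flow with +1 and -1 packets.

    protect=False: the network cannot distinguish the classes, so both
    the important (+1) and less important (-1) packets see the flow's
    overall loss rate.
    protect=True:  losses are (statistically) traded onto -1 packets
    first; +1 packets only start to get lost once the -1 share of the
    flow is exhausted.  The flow's overall loss rate is unchanged.
    Returns (loss rate of +1 packets, loss rate of -1 packets).
    """
    if not protect:
        return loss_rate, loss_rate
    frac_unimportant = 1.0 - frac_important
    if loss_rate <= frac_unimportant:
        # all losses are absorbed by the -1 class
        return 0.0, loss_rate / frac_unimportant
    # the -1 class is lost completely; the excess spills onto +1
    return (loss_rate - frac_unimportant) / frac_important, 1.0
```

At a 20% overall loss rate and a 50/50 split, the unprotected case yields per-class rates (0.2, 0.2), while protection yields (0.0, 0.4): the +1 packets survive entirely, corresponding to the widened operating range shaded in Figure 1 b).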
End-to-end mechanisms can reduce and shift such sensitivities, but cannot come close to eliminating them. Therefore, in this work we assume that the lowest possible bitrate which provides the desired quality is chosen. Neither feedback/adaptation nor redundancy is used; however, at the end-to-end level, identification/marking of packets sensitive to loss (sender) as well as loss concealment (receiver) takes place. Hop-by-hop support schemes then allow trading the loss of one packet, which is considered more important, against another one of the same flow which is of lower importance. We employ actual codecs and measure their utility in the presence of packet loss using objective speech quality measurement.
The paper is structured as follows: Section 2 introduces packet- and user-level metrics. We employ these metrics to describe the sensitivity of VoIP traffic to packet loss in Section 3. Section 4 briefly introduces a queue management algorithm which can be used for intra-flow loss control. In Section 5, we present results documenting the performance of the proposed mechanisms at both the end-to-end and hop-by-hop level. Section 6 concludes the paper.

2. METRICS
2.1 Packet-level metrics
A general Markov model which describes the loss process is defined as follows: let x_n = 1 if packet n is lost and x_n = 0 if it arrives, so that the 2^m possible loss patterns of the m most recent packets appear in the state space. As an example, p(x_n = 1 | x_{n-1} = 1, x_{n-2} = 0) gives the state transition probability when the current packet n is lost, the previous packet n-1 has also been lost, and packet n-2 has not been lost. The number of states of the model is 2^m. Two state transitions can take place from any of the states. Thus, the number of parameters which have to be computed is 2^m. Even for relatively small m this number of parameters is difficult to evaluate and compare. Table 1 shows some values for the state and transition probabilities for a general Markov model of third order measured end-to-end in the Internet by Yajnik et al.
state transition probabilities to that event cover the range of 0.15 to 0.61.\nThat means that past \"no loss\" events do not affect the loss process as much as past loss events.\nIntuitively this seems to make sense, because a successfully arriving packet can be seen as a indicator for congestion relief.\nAndren et.\nal. as well as Yajnik et.\nal. both confirmed this by measuring the cross correlation of the loss - and lengths.\nThey came to the result that such correlation is very weak.\nThis implies that patterns of short loss bursts interspersed with short periods of successful packet arrivals occur rarely (note in this context that in 1 the pattern 101 has by far the lowest state probability).\nThus, in the following we employ a model which only considers the past loss events for the state transition probability.\nThe number of states of the model can be reduced from to m + 1.\nThis means that we only consider the state transition probability k),...1)) with = 1 [0, where (0 > 0) with 1) = = 0 and + = 1 [0, being the j-th loss event\".\nNote that the parameters of the model become independent of the sequence number and can now rather be described by the occurence of a loss run length We define the random variable X as follows: X = 0: packet lost\", X = (0 < 1: = 3).\nAs we found by visual inspection that the distributions of the perceptual distortion values for one loss condition seem to approximately follow a normal distribution we employ mean and standard deviation to describe the statistical variability of the measured values.\nFigure 7 presents the perceptual distortion in the previous figure but also give the standard deviation error bars for the respective loss condition.\nIt shows the increasing variability of the results with increasing loss correlation while the variability does not seem to change much with an increasing amount of loss On one hand this points to some, though weak, sensitivity with regard to heterogeneity (i.e., it matters which area of 
the speech signal (voiced\/unvoiced) experiences burst loss).\nOn the other hand it shows, that a large number of different\nFigure 6: Utility curve for PCM with loss concealment (EMBSD) Figure Variability of the perceptual distortion\nwith concealment\nFigure 9: time (in frames) of the G. 729 decoder the loss of consecutive frames (k a function of frame position.\nFigure 8: Utility for the G. 729 codec (and not only reconstruction) operations in the quality asBSD) sessment.\nd & ion.\nFor each frame, encoder analyzes the input data and extracts the parameters of the Code Excited Linear Prediction (CELP) model as linear prediction filter coefficients and excitation vectors.\nWhen frame is lost or corrupted, the G. 729 decoder uses the parameters of the previous frame to interpolate those of the lost frame.\nThe line spectral pair coefficients of the last good frame are repeated and the gain coefficients are taken from the previous frame but they are damped to gradually reduce their impact.\na frame loss occurs, the decoder cannot update its state, resulting in divergence of encoder and decoder state.\nThus, errors are not only introduced ing the time period represented by the current frame but also in the following ones.\nIn addition to the impact of the missing codewords, distortion is increased by the missing update of the predictor filter memories for the line tral pairs and the linear prediction synthesis filter\nFigure 8 shows that for similar network conditions the\noutput quality of the is worse than PCM with loss concealment, demonstrating the compression versus quality tradeoff under packet loss.\nInterestingly the loss correlation (dp parameter) has some impact on the speech quality, however, the effect is weak pointing to a certain robustness of the G. 
729 codec with regard to the resiliency to consecutive packet losses due to the internal loss concealment. Rosenberg has done a similar experiment showing that the difference between the original and the concealed signal in terms of mean-squared error grows significantly with increasing loss, however. This demonstrates the importance of perceptual metrics able to include concealment (and not only reconstruction) operations in the quality assessment.
(Line spectral pairs are another representation of the linear prediction coefficients. Two G.729 frames are contained in a packet.)
3.2 Sensitivity due to ADU heterogeneity
PCM is a sample-based encoding. Therefore the ADU content is only weakly heterogeneous (Figure 7). Thus, in this section we concentrate on the G.729 codec. The experiment we carry out is to measure the resynchronization time of the decoder after k consecutive frames are lost. The G.729 decoder is said to have resynchronized with the encoder when the energy of the error signal falls below one percent of the energy of the decoded signal without frame loss (this is equivalent to a signal-to-noise ratio (SNR) threshold of 20 dB). The error signal energy (and thus the SNR) is computed on a per-frame basis. Figure 9 shows the resynchronization time (expressed in the number of frames needed until the threshold is exceeded) plotted against the position of the loss (i.e., the index of the first lost frame) for several values of k. The speech sample is produced by a male speaker, where an unvoiced/voiced transition occurs in the eighth frame. We can see from Figure 9 that the position of the frame loss has a significant impact on the resulting signal degradation, while the degradation is not that sensitive to the length of the frame loss burst k. The loss of unvoiced frames seems to have a rather small impact on the signal degradation, and the decoder recovers the state information fast thereafter. The loss of voiced frames causes a larger degradation of the speech signal, and the decoder needs more time to resynchronize with the sender. However, the loss of voiced frames at an unvoiced/voiced transition leads to a significant degradation of the signal. We have repeated the
experiment for different male and female speakers and obtained similar results. Taking into account the employed coding scheme, the above phenomenon could be explained as follows: Because voiced sounds have a higher energy than unvoiced sounds, the loss of voiced frames causes a larger signal degradation than the loss of unvoiced frames. However, due to the quasi-periodic property of voiced sounds, the decoder can conceal the loss of voiced frames well once it has obtained information on them.
(While SNR values often do not correlate well with subjective speech quality, the large differences in the SNR-threshold-based resynchronization time clearly point to a significant impact on subjective speech quality.)
Figure 10: "Differential" RED drop probabilities as a function of average queue sizes
The decoder fails to conceal the loss of voiced frames at an unvoiced/voiced transition because it attempts to conceal the loss of voiced frames using the filter coefficients and the excitation for an unvoiced sound. Moreover, because the G.
729 encoder uses a moving average filter to predict the values of the line spectral pairs and only transmits the difference between the real and the predicted values, it takes a long time for the decoder to resynchronize with the encoder once it has failed to build the appropriate linear prediction filter.
4. QUEUE MANAGEMENT FOR FLOW LOSS CONTROL
While we have highlighted the sensitivity of voice traffic to the distribution of loss in the previous sections, we now want to briefly introduce a queue management mechanism which is able to enforce the relative preferences of a flow with regard to loss. We consider flows with packets marked with "+1" and "-1" (as described in the introduction) as "foreground traffic" (FT) and other ("best effort") flows as "background traffic" (BT). Packet marking, in addition to keeping the desirable property of avoiding per-flow state within the network core proposed by the IETF Differentiated Services architecture, is exploited here to convey the loss requirements of a flow. As it should be avoided to introduce reordering of the packets of a flow in the network, we consider mechanisms for the management of a single queue with different priority levels. One approach to realize inter-flow service differentiation using a single queue is RIO ("RED with IN and OUT"). With RIO, two average queue sizes are computed as congestion indicators: one just for the IN (high priority) packets and another for both IN and OUT (low priority) packets. Packets marked OUT are dropped earlier (in terms of the average queue size) than IN packets. RIO has been designed to decrease the loss seen by particular flows at the expense of other flows. In this work, however, we want to keep the amount of loss as given by the other parameters while modifying the loss distribution for the foreground traffic. This amounts to trading the loss of a "+1" packet against a "-1" packet of the same flow (in a statistical sense). Fig.
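The conventional RED drop curve mentioned here can be sketched as a simple piecewise-linear function of the average queue size. The parameter names min_th, max_th and max_p below are the usual RED configuration knobs, not values taken from this paper:

```python
def red_drop_prob(avg_q, min_th, max_th, max_p):
    """Classic RED drop probability as a function of the average queue size
    (the unmarked/background-traffic curve): zero below min_th, rising
    linearly to max_p at max_th, and 1 beyond."""
    if avg_q < min_th:
        return 0.0
    if avg_q < max_th:
        return max_p * (avg_q - min_th) / (max_th - min_th)
    return 1.0
```

For example, with min_th = 10, max_th = 30 and max_p = 0.1, an average queue size of 20 packets yields a drop probability of 0.05.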
10 shows the conventional RED drop probability as a function of the average queue size for all arrivals, which is applied to all unmarked traffic (background traffic: BT). The necessary relationship between the drop probabilities for packets marked as "-1" and "+1" can be derived as follows (note that this relationship is valid both at the end-to-end level and at every individual hop): Let n be the overall number of packets emitted by an FT flow and n_{+1}, n_{-1} be the numbers of packets belonging to the respective classes (where the unmarked class corresponds to "best effort" traffic with drop probability p_0). Then, considering that the resulting service has to be "best effort" in the long term, we have: p_{+1} n_{+1} + p_{-1} n_{-1} = p_0 (n_{+1} + n_{-1}). Due to this relationship between the drop probabilities for "+1" and "-1" we designate this queue management algorithm "Differential RED". Figure 10 shows the corresponding drop probability curves. In addition to the conventional RED parameters, the implementation should also monitor the "+1" and "-1" arrival processes. If the ratio of "+1" to "-1" packets at a gateway is not 1 (either due to misbehaving flows or a significant number of flows which have already experienced loss at earlier hops), the "-1" loss probability is decreased and the "+1" loss probability is increased at the same time, thus degrading the service for all users. The shaded areas above and below the curve (Fig.
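A minimal sketch of this balance, assuming the long-term condition p_{+1} n_{+1} + p_{-1} n_{-1} = p_0 (n_{+1} + n_{-1}) between the per-class drop probabilities (the symbol names are illustrative, not taken from the paper's notation):

```python
def compensating_drop_prob(p0, p_plus, n_plus, n_minus):
    """Given the best-effort drop probability p0, the reduced drop
    probability p_plus for "+1" packets and the per-class packet counts,
    return the increased drop probability p_minus for "-1" packets such
    that the flow still sees best-effort loss in the long-term average."""
    return (p0 * (n_plus + n_minus) - p_plus * n_plus) / n_minus
```

For example, with p0 = 0.1, p_plus = 0.02 and equal class shares, the "-1" packets must be dropped with probability 0.18 to keep the flow at "best effort" loss overall.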
10) show the operating area when this correction is added. It has been shown that using only the conventional RED average queue size for operation is not sufficient. This is due to the potentially missing correlation of the computed value between consecutive "+1" and "-1" arrivals, especially when the share of the FT is low. As this might result in an unfair distribution of losses between the FT and BT fractions, a separate average queue size value is computed by sampling the queue size only at FT arrival instants. Thus, a service differentiation for foreground traffic is possible which does not differ from conventional RED in the long-term average (i.e., in the amount of loss).
5. INTRA-FLOW LOSS RECOVERY AND CONTROL
5.1 Temporal sensitivity
Considering a flow with temporal loss sensitivity, paragraph 3.1.1 has shown that a simple, periodic loss pattern enhances the performance of the end-to-end loss recovery. The pattern is not tied to particular packets, therefore a description with the introduced loss metrics is applicable. In this paragraph we assume that a flow expresses its temporal sensitivity by marking its packets with an alternating pattern of "+1" and "-1".
Figure 11: Comparison of actual and estimated burst loss length rates as a function of burst length (three-state run-length-based model)
Figure 12: Comparison of actual and estimated burst loss length rates as a function of burst length (two-state run-length-based model (Gilbert))
Figures 11 and 12 show the rates for the actual and the estimated burst loss lengths for a three-state (m = 2) and a two-state (m = 1) model. We observe that Differential RED shapes the burst probability curve in the desired way. Most of the probability mass is concentrated at isolated losses (the ideal behaviour would be the occurrence of only isolated losses, i.e., bursts of length 1, which can be expressed with a conditional loss probability of 0 in terms of Gilbert model parameters). With Drop Tail, an approximately geometrically decreasing burst loss probability with increasing burst length (Eq. 4) is obtainable, where the conditional loss probability parameter is relatively large, though. Thus, considering voice with temporal loss
sensitivity as the foreground traffic of interest, a large number of short annoying bursts can be traded against a larger number of isolated losses and few long loss bursts (which occur when the queue is in temporary overload, Fig. 10). We can see that the three-state model estimation (Eq. 2) reflects the two areas of the operation (the sharp drop of the burst loss length rate for bursts of length 2 and the decrease along a geometrically decreasing asymptote for longer bursts). This effect cannot be captured by the two-state model (Eq. 4), which thus overestimates the burst loss length rate for bursts of length 2 and then hugely underestimates it for longer ones. Interestingly, for Drop Tail, while both models capture the shape of the actual curve, the lower-order model is more accurate in the estimation. This can be explained as follows: if the burst loss length probabilities are in fact close to a geometric distribution, the estimate is more robust if all data is included (note that the run-length-based approximation of the conditional loss probability only includes loss run lengths larger than or equal to m: Eq. 1). We only discuss the results qualitatively here to give an example of how an intra-flow loss control algorithm performs and to show how loss models can capture this performance. Details on the simulation scenario and parameters can be found in the references.
5.2 Sensitivity due to ADU heterogeneity
In paragraph 3.1.2 we have seen that sensitivity to ADU heterogeneity results in a certain non-periodic loss pattern. Thus, a mechanism at (or near) the sender is necessary which derives that pattern from the voice data. Furthermore, an explicit cooperation between end-to-end and hop-by-hop mechanisms is necessary (Fig.
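The burst-loss-length behaviour discussed here can be reproduced with a small two-state (Gilbert) loss simulation; the parameter values used in the example are illustrative assumptions, not measurements from this paper:

```python
import random

def gilbert_burst_lengths(p_cond, p_uncond, n, seed=0):
    """Simulate a two-state Gilbert loss model over n packets and return
    the observed loss burst lengths.  p_uncond is P(loss | previous packet
    arrived); p_cond is P(loss | previous packet lost)."""
    rng = random.Random(seed)
    bursts, run, lost = [], 0, False
    for _ in range(n):
        lost = rng.random() < (p_cond if lost else p_uncond)
        if lost:
            run += 1                 # extend the current loss burst
        elif run:
            bursts.append(run)       # burst ended with an arrival
            run = 0
    if run:
        bursts.append(run)           # trailing burst at end of trace
    return bursts
```

For the Gilbert model the burst lengths are geometrically distributed, so with p_cond = 0.3 the mean observed burst length approaches 1 / (1 - 0.3), about 1.43.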
4). We use the result of paragraph 3.2 to develop a new packet marking scheme called Speech Property-Based Selective Differential Packet Marking (SPB-DIFFMARK). The SPB-DIFFMARK scheme concentrates the higher-priority packets on the frames essential to the speech signal and relies on the decoder's concealment for the other frames. Figure 13 shows the simple algorithm, written in pseudocode, that is used to detect an unvoiced/voiced transition and protect the voiced frames at the beginning of a voiced signal. One procedure is used to classify a block of frames as voiced, unvoiced, or transition; another is used to send a block of frames as a single packet with the appropriate priority. As the core algorithm gives only a binary marking decision (protect the packet or not), we employ a simple algorithm to send the necessary "-1" packets for compensation (Eq. 5): after a burst of "+1" packets has been sent, a corresponding number of "-1" packets is sent immediately. State about the number of "-1" packets still to be sent is kept in the event that the SPB algorithm triggers the next burst before all "-1" packets necessary for compensation are sent. Thus, seen over time intervals which are long compared to the burst times, the mean loss for the flow will be equal to the "best effort" case (Eq. 5). A pre-defined value defines how many frames at the beginning of a voiced signal are to be protected. Our simulations (Fig.
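The marking-and-compensation logic described here (protect the first frames of each voiced onset with "+1", then pay back the same number of "-1" packets) can be sketched as follows. The per-frame granularity and the simple 'v'/'u' classifier input are simplifying assumptions of this sketch; the original Figure 13 pseudocode is not reproduced in the text:

```python
def spb_diffmark(frame_types, protect=10):
    """Sketch of the SPB-DIFFMARK idea: mark the first `protect` frames of
    each unvoiced/voiced transition as "+1", compensate every "+1" with a
    later "-1", and mark everything else "0".  frame_types is a sequence
    of 'v' (voiced) / 'u' (unvoiced) entries."""
    marks, debt, protected = [], 0, 0
    prev = 'u'
    for t in frame_types:
        if t == 'v' and prev == 'u':  # unvoiced/voiced transition detected
            protected = protect       # start protecting a new voiced onset
        if protected > 0:
            marks.append('+1')
            protected -= 1
            debt += 1                 # one "-1" packet still owed
        elif debt > 0:
            marks.append('-1')        # pay back the compensation debt
            debt -= 1
        else:
            marks.append('0')
        prev = t
    return marks
```

Over intervals long compared to the burst times, the number of "+1" and "-1" marks balances, which is exactly the "best effort in the long term" property.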
9) have shown that values in the range from 10 to 20 are appropriate (depending on the network loss condition). In the simulations presented below, we choose two frames per packet, a typical
Figure 13: SPB-DIFFMARK Pseudo Code
value for interactive speech transmission over the Internet (20 ms of audio data per packet). A larger number of frames per packet would help to reduce the relative overhead of the protocol headers, but it also increases the packetization delay and makes sender classification and receiver concealment in the case of packet loss (due to a larger loss gap) more difficult.
5.2.1 Simulation description
Due to the non-periodic loss pattern, we need to explicitly associate a drop probability with a single packet within an end-to-end model. Therefore we use a separate one-state Markov model (Bernoulli model) to describe the network behaviour as seen by each class of packets. "Best effort" packets (designated by "0" in Fig. 14) are dropped with the "best effort" probability, whereas packets marked with "+1" and "-1" are dropped with lower and higher probabilities, respectively. This is a reasonable assumption with regard to the interdependence of the different classes, as it has been shown (Figs.
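The per-class one-state (Bernoulli) network model described here can be sketched as an independent coin flip per packet; the drop probabilities in the example are illustrative, not the values used in the paper's simulations:

```python
import random

def apply_class_drops(marks, drop_prob, seed=0):
    """One-state (Bernoulli) network model per packet class: each packet is
    dropped independently with a probability that depends only on its mark.
    drop_prob maps a mark ('0', '+1', '-1') to its drop probability; the
    returned list holds True for every dropped packet."""
    rng = random.Random(seed)
    return [rng.random() < drop_prob[m] for m in marks]
```

Running a long stream of "+1" packets through the model reproduces the configured per-class loss rate.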
11 and 12) that Differential RED achieves a fair amount of decorrelation of "+1" and "-1" packet losses. Nevertheless, to include some correlation between the classes, we have coupled the class drop probabilities for the subsequent simulations. This should also allow a reasonable evaluation of how losses in the higher-priority class affect the performance of the SPB algorithms.
(The appropriateness of the simple end-to-end modeling used here has been investigated with discrete event simulation using a multi-hop topology and detailed modeling of foreground and background sources.)
Figure 14: Marking schemes and corresponding network models.
For a direct comparison with SPB-DIFFMARK, we evaluate a scheme where packets are alternatingly marked as being either "-1" or "+1" (ALT-DIFFMARK, Figure 14). We furthermore include related inter-flow loss protection schemes. The first scheme uses full protection (FULL MARK: all packets are marked as "+1"). The SPB-MARK scheme operates similarly to SPB-DIFFMARK, but no "-1" packets are sent for compensation (those packets are marked as "0"). For comparison we again use a scheme where packets are alternatingly marked as being either "0" or "+1" (ALT-MARK). Finally, packets of pure "best effort" flows are dropped with the "best effort" probability (NO MARK case in Fig.
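Whether a marking scheme stays "best effort" in the long term can be checked directly: under a per-class Bernoulli model, the expected loss rate of a marked flow is simply the average of the per-packet drop probabilities. The class probabilities below are illustrative values chosen to satisfy the balance condition, not the paper's simulation parameters:

```python
def alt_diffmark(n):
    """ALT-DIFFMARK baseline: packets are alternately marked "-1" and "+1",
    so exactly half of the packets get each mark."""
    return ['-1' if i % 2 == 0 else '+1' for i in range(n)]

def mean_loss_rate(marks, drop_prob):
    """Expected long-term loss rate of a marked flow under a per-class
    Bernoulli network model: the mean of the per-packet drop probabilities."""
    return sum(drop_prob[m] for m in marks) / len(marks)
```

With compensating class probabilities (e.g., 0.02 for "+1" and 0.18 for "-1" around a best-effort probability of 0.1), the alternating pattern yields exactly the best-effort mean loss.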
14). For the SPB marking schemes the percentages of "+1"- and "-1"-marked packets, respectively, are 40.4% for the speech material used. We obtained similar marking percentages for other speech samples. The ALT marking schemes mark exactly 50% of their packets as "+1".
5.2.2 Results
Figure 15 shows the perceptual distortion for the marking schemes dependent on the drop probability pc. The unprotected case ("NO MARK") has the highest perceptual distortion and thus the worst speech quality. The differential marking scheme (SPB-DIFFMARK) offers a significantly better speech quality even when only using a network service which amounts to "best effort" in the long term. Note that the ALT-DIFFMARK marking strategy does not differ from the "best effort" case (which confirms the result of paragraph 3.1.2). SPB-DIFFMARK is also even better than the inter-flow ALT-MARK scheme, especially for higher drop probabilities. These results validate the strategy of our SPB marking schemes, which do not equally mark all packets with a higher priority but rather protect a subset of frames that are essential to the speech quality.
Figure 15: Perceptual Distortion (EMBSD) for the marking schemes and SPB-FEC
The SPB-FEC scheme, which uses redundancy piggybacked on the main payload packets (RFC 2198) to protect a subset of the packets, enables a very good output speech quality for low loss rates. (We have also performed informal listening tests, which confirmed the results obtained using the objective metrics.) However, it should be noted that the amount of data sent over the network is increased by 40.4%. Note that the simulation presumes that this additionally consumed bandwidth does not itself contribute significantly to congestion. This assumption is only valid if only a small fraction of the overall traffic is voice traffic. The SPB-FEC curve is convex with increasing loss, as due to the increasing loss correlation an increasing number of consecutive packets carrying redundancy are lost, leading to unrecoverable losses. The curve for
SPB-DIFFMARK is concave, however, yielding better performance for pc above 0.22. The inter-flow ALT-MARK scheme (50% of the packets are marked) enhances the perceptual quality. However, the auditory distance and the perceptual distortion of the SPB-MARK scheme (with 40.4% of all packets marked) are significantly lower and very close to the quality of the decoded signal when all packets are marked (FULL MARK). This also shows that by protecting the entire flow only a minor improvement in the perceptual quality is obtained. The results for the FULL MARK scheme also show that, while the loss of some of the packets has some measurable impact, the impact on perceptual quality can still be considered to be very low.
6. CONCLUSIONS
In this paper we have characterized the behaviour of a sample-based codec (PCM) and a frame-based codec (G.729) in the presence of packet loss. We have then developed loss recovery and control mechanisms to increase the perceptual quality. While we have tested other codecs only informally, we think that our results reflect the fundamental difference between codecs which either encode the speech waveform directly or which are based on linear prediction. (We also used the G.729 encoder for the redundant source coding.) For PCM without loss concealment we have found that it exhibits neither significant temporal sensitivity nor sensitivity to payload heterogeneity. With loss concealment, however, the speech quality is increased, but the amount of the increase exhibits strong temporal sensitivity. Frame-based codecs on the one hand amplify the impact of loss by error propagation, though on the other hand such coding schemes help to perform loss concealment by extrapolation of decoder state. Contrary to sample-based codecs, we have shown that the concealment performance of the G.
729 decoder may "break" at transitions within the speech signal, thus showing strong sensitivity to payload heterogeneity. We have briefly introduced a queue management algorithm which is able to control loss patterns without changing the amount of loss, and characterized its performance for the loss control of a flow exhibiting temporal sensitivity. Then we developed a new packet marking scheme called Speech Property-Based Selective Differential Packet Marking for an efficient protection of frame-based codecs. The SPB-DIFFMARK scheme concentrates the higher-priority packets on the frames essential to the speech signal and relies on the decoder's concealment for the other frames. We have also evaluated the mapping of an end-to-end algorithm to inter-flow protection. We have found that the selective marking scheme performs almost as well as the protection of the entire flow with a significantly lower number of necessary priority packets. Thus, combined intra-flow end-to-end schemes seem to be well suited for heavily-loaded networks with a relatively large fraction of voice traffic. This is the case because they need neither the addition of redundancy nor feedback (which would incur additional data and delay overhead) and thus yield stable voice quality also for higher loss rates, due to the absence of FEC and feedback loss. Such schemes can also better support codecs with low output bitrates, which are difficult to integrate into FEC schemes requiring adaptivity of both the codec and the redundancy generator. Also, this approach is useful for adaptive codecs running at the lowest possible bitrate. Avoiding redundancy and feedback is also interesting in multicast conferencing scenarios where the end-to-end loss characteristics of the different paths leading to members of the session are largely different. Our work has clearly focused on linking simple end-to-end models, which can be easily parametrized with the known characteristics of hop-by-hop loss control, to user-level metrics. An analysis of a large-scale
deployment of non-adaptive or adaptive FEC as compared to a deployment of our combined scheme clearly requires further study.
7. ACKNOWLEDGMENTS
We would like to thank W. Yang and R. Yantorno, Temple University, for providing the EMBSD software for the objective speech quality measurements. Michael der, GMD Fokus, helped with the simulations of the queue management schemes.
8. ADDITIONAL AUTHORS
9. REFERENCES
H. Sanneck and G. Carle. A queue management algorithm for service differentiation in the "best effort" Internet. In Proceedings of the Eighth Conference on Computer Communications and Networks (ICCCN), pages 419-426, Natick, MA, October 1999.
Objective quality measurement of telephone-band (300-3400 Hz) speech codecs. Recommendation P.861, ITU-T, February 1998.
M. Yajnik, J. Kurose, and D. Towsley. Packet loss correlation in the multicast network: Experimental measurements and Markov chain models. Technical Report 95-115, Department of Computer Science, University of Massachusetts, Amherst, 1995.
M. Yajnik, S. Moon, J. Kurose, and D. Towsley. Measurement and modelling of the temporal dependence in packet loss. Technical Report 98-78, Department of Computer Science, University of Massachusetts, Amherst, 1998.
W. Yang and R.
Yantorno.\nImprovement of MBSD scaling noise masking threshold and correlation analysis with MOS difference instead of MOS.\nIn Proceedings ICASSP, pages 673-676, Phoenix, AZ, March 1999.","keyphrases":["loss recoveri and control","end-to-end model","packet-level metric","loss conceal","sampl-base codec","loss sensit","network support for real-time multimedia","servic qualiti","end-to-end loss recoveri","voip traffic","intra-flow loss control","gener markov model","voip traffic sensit","queue manag algorithm","frame-base codec","voic over ip","loss metric","object speech qualiti measur","queue manag","differenti servic"],"prmu":["P","P","P","P","M","R","M","R","R","U","R","M","M","U","M","M","R","M","U","R"]} {"id":"I-19","title":"Bidding Optimally in Concurrent Second-Price Auctions of Perfectly Substitutable Goods","abstract":"We derive optimal bidding strategies for a global bidding agent that participates in multiple, simultaneous second-price auctions with perfect substitutes. We first consider a model where all other bidders are local and participate in a single auction. For this case, we prove that, assuming free disposal, the global bidder should always place non-zero bids in all available auctions, irrespective of the local bidders' valuation distribution. Furthermore, for non-decreasing valuation distributions, we prove that the problem of finding the optimal bids reduces to two dimensions. These results hold both in the case where the number of local bidders is known and when this number is determined by a Poisson distribution. This analysis extends to online markets where, typically, auctions occur both concurrently and sequentially. In addition, by combining analytical and simulation results, we demonstrate that similar results hold in the case of several global bidders, provided that the market consists of both global and local bidders. 
Finally, we address the efficiency of the overall market, and show that information about the number of local bidders is an important determinant for the way in which a global bidder affects efficiency.","lvl-1":"Bidding Optimally in Concurrent Second-Price Auctions of Perfectly Substitutable Goods Enrico H. Gerding, Rajdeep K. Dash, David C. K. Yuen and Nicholas R. Jennings University of Southampton, Southampton, SO17 1BJ, UK.\n{eg,rkd,dy,nrj}@ecs.\nsoton.ac.uk ABSTRACT We derive optimal bidding strategies for a global bidding agent that participates in multiple, simultaneous second-price auctions with perfect substitutes.\nWe first consider a model where all other bidders are local and participate in a single auction.\nFor this case, we prove that, assuming free disposal, the global bidder should always place non-zero bids in all available auctions, irrespective of the local bidders'' valuation distribution.\nFurthermore, for non-decreasing valuation distributions, we prove that the problem of finding the optimal bids reduces to two dimensions.\nThese results hold both in the case where the number of local bidders is known and when this number is determined by a Poisson distribution.\nThis analysis extends to online markets where, typically, auctions occur both concurrently and sequentially.\nIn addition, by combining analytical and simulation results, we demonstrate that similar results hold in the case of several global bidders, provided that the market consists of both global and local bidders.\nFinally, we address the efficiency of the overall market, and show that information about the number of local bidders is an important determinant for the way in which a global bidder affects efficiency.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics General Terms Economics 1.\nINTRODUCTION The recent surge of interest in online auctions has resulted in an increasing 
number of auctions offering very similar or even identical goods and services [9, 10].\nIn eBay alone, for example, there are often hundreds or sometimes even thousands of concurrent auctions running worldwide selling such substitutable items1 .\nAgainst this background, it is essential to develop bidding strategies that autonomous agents can use to operate effectively across a wide number of auctions.\nTo this end, in this paper we devise and analyse optimal bidding strategies for an important yet barely studied setting - namely, an agent that participates in multiple, concurrent (i.e., simultaneous) second-price auctions for goods that are perfect substitutes.\nAs we will show, however, this analysis is also relevant to a wider context where auctions are conducted sequentially, as well as concurrently.\nTo date, much of the existing literature on multiple auctions focuses either on sequential auctions [6] or on simultaneous auctions for complementary goods, where the value of items together is greater than the sum of the individual items (see Section 2 for related research on simultaneous auctions).\nIn contrast, here we consider bidding strategies for markets with multiple concurrent auctions and perfect substitutes.\nIn particular, our focus is on Vickrey or second-price sealed bid auctions.\nWe choose these because they require little communication and are well known for their capacity to induce truthful bidding, which makes them suitable for many multi-agent system settings.\nHowever, our results generalise to settings with English auctions since these are strategically equivalent to second-price auctions.\nWithin this setting, we are able to characterise, for the first time, a bidder``s utilitymaximising strategy for bidding simultaneously in any number of such auctions and for any type of bidder valuation distribution.\nIn more detail, we first consider a market where a single bidder, called the global bidder, can bid in any number of auctions, whereas the 
other bidders, called the local bidders, are assumed to bid only in a single auction.\nFor this case, we find the following results: \u2022 Whereas in the case of a single second-price auction a bidder``s best strategy is to bid its true value, the best strategy for a global bidder is to bid below it.\n\u2022 We are able to prove that, even if a global bidder requires only one item, the expected utility is maximised by participating in all the auctions that are selling the desired item.\n\u2022 Finding the optimal bid for each auction can be an arduous task when considering all possible combinations.\nHowever, for most common bidder valuation distributions, we are able to significantly reduce this search space and thus the computation required.\n\u2022 Empirically, we find that a bidder``s expected utility is maximised by bidding relatively high in one of the auctions, and equal or lower in all other auctions.\nWe then go on to consider markets with more than one global bidder.\nDue to the complexity of the problem, we combine analytical results with a discrete simulation in order to numerically derive the optimal bidding strategy.\nBy so doing, we find that, in a market with only global bidders, the dynamics of the best response do not converge to a pure strategy.\nIn fact it fluctuates between two states.\nIf the market consists of both local and global bidders, however, the global bidders'' strategy quickly reaches a stable solution and we approximate a symmetric Nash equilibrium.\nThe remainder of the paper is structured as follows.\nSection 2 discusses related work.\nIn Section 3 we describe the bidders and the auctions in more detail.\nIn Section 4 we investigate the case with a single global bidder and characterise the optimal bidding behaviour for it.\nSection 5 considers the case with multiple global bidders and in Section 6 we address the market efficiency.\nFinally, Section 7 concludes.\n2.\nRELATED WORK Research in the area of simultaneous auctions can 
be segmented along two broad lines.\nOn the one hand, there is the game-theoretic and decision-theoretic analysis of simultaneous auctions which concentrates on studying the equilibrium strategy of rational agents [3, 7, 8, 9, 12, 11].\nSuch analyses are typically used when the auction format employed in the concurrent auctions is the same (e.g. there are M Vickrey auctions or M first-price auctions).\nOn the other hand, heuristic strategies have been developed for more complex settings when the sellers offer different types of auctions or the buyers need to buy bundles of goods over distributed auctions [1, 13, 5].\nThis paper adopts the former approach in studying a market of M simultaneous Vickrey auctions since this approach yields provably optimal bidding strategies.\nIn this case, the seminal paper by Engelbrecht-Wiggans and Weber provides one of the starting points for the gametheoretic analysis of distributed markets where buyers have substitutable goods.\nTheir work analyses a market consisting of couples having equal valuations that want to bid for a dresser.\nThus, the couple``s bid space can at most contain two bids since the husband and wife can be at most at two geographically distributed auctions simultaneously.\nThey derive a mixed strategy Nash equilibrium for the special case where the number of buyers is large.\nOur analysis differs from theirs in that we study concurrent auctions in which bidders have different valuations and the global bidder can bid in all the auctions concurrently (which is entirely possible given autonomous agents).\nFollowing this, [7] then studied the case of simultaneous auctions with complementary goods.\nThey analyse the case of both local and global bidders and characterise the bidding of the buyers and resultant market efficiency.\nThe setting provided in [7] is further extended to the case of common values in [9].\nHowever, neither of these works extend easily to the case of substitutable goods which we 
consider.\nThis case is studied in [12], but the scenario considered is restricted to three sellers and two global bidders and with each bidder having the same value (and thereby knowing the value of other bidders).\nThe space of symmetric mixed equilibrium strategies is derived for this special case, but again our result is more general.\nFinally, [11] considers the case of concurrent English auctions, in which he develops bidding algorithms for buyers with different risk attitudes.\nHowever, he forces the bids to be the same across auctions, which we show in this paper not always to be optimal.\n3.\nBIDDING IN MULTIPLE AUCTIONS The model consists of M sellers, each of whom acts as an auctioneer.\nEach seller auctions one item; these items are complete substitutes (i.e., they are equal in terms of value and a bidder obtains no additional benefit from winning more than one item).\nThe M auctions are executed concurrently; that is, they end simultaneously and no information about the outcome of any of the auctions becomes available until the bids are placed2 .\nHowever, we briefly address markets with both sequential and concurrent auctions in Section 4.4.\nWe also assume that all the auctions are equivalent (i.e., a bidder does not prefer one auction over another).\nFinally, we assume free disposal (i.e., a winner of multiple items incurs no additional costs by discarding unwanted ones) and risk neutral bidders.\n3.1 The Auctions The seller``s auction is implemented as a Vickrey auction, where the highest bidder wins but pays the second-highest price.\nThis format has several advantages for an agent-based setting.\nFirstly, it is communication efficient.\nSecondly, for the single-auction case (i.e., where a bidder places a bid in at most one auction), the optimal strategy is to bid the true value and thus requires no computation (once the valuation of the item is known).\nThis strategy is also weakly dominant (i.e., it is independent of the other bidders'' 
decisions), and therefore it requires no information about the preferences of other agents (such as the distribution of their valuations).

3.2 Global and Local Bidders
We distinguish between global and local bidders. The former can bid in any number of auctions, whereas the latter bid in only a single one. Local bidders are assumed to bid according to the weakly dominant strategy and bid their true valuation.3

2 Although this paper focuses on sealed-bid auctions, where this is the case, the conditions are similar for last-minute bidding in English auctions such as eBay [10].
3 Note that, since bidding the true value is optimal for local bidders irrespective of what others are bidding, their strategy is not affected by the presence of global bidders.
280 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

We consider two ways of modelling local bidders: static and dynamic. In the static model, the number of local bidders is assumed to be known and equal to N for each auction. In the dynamic model, on the other hand, the average number of bidders is equal to N, but the exact number is unknown and may vary for each auction. This uncertainty is modelled using a Poisson distribution (more details are provided in Section 4.1). As we will later show, a global bidder who bids optimally has a higher expected utility than a local bidder, even though the items are complete substitutes and a bidder only requires one of them. However, we can identify a number of compelling reasons why not all bidders would choose to bid globally. Firstly, participation costs, such as entry fees and the time required to set up an account, may encourage occasional users to participate only in auctions that they are already familiar with. Secondly, bidders may simply not be aware of other auctions selling the same type of item. Even if this is known, however, additional information such as the distribution of the valuations of other bidders and the number of
participating bidders is required for bidding optimally across multiple auctions. This lack of expert information often drives a novice to bid locally. Thirdly, an optimal global strategy is harder to compute than a local one. An agent with bounded rationality may therefore not have the resources to compute such a strategy. Lastly, even though a global bidder profits on average, such a bidder may incur a loss when inadvertently winning multiple auctions. This deters bidders who are either risk averse or have budget constraints from participating in multiple auctions. As a result, in most marketplaces we expect a combination of global and local bidders. In view of the above considerations, human buyers are more likely to bid locally. The global strategy, however, can be effectively executed by autonomous agents, since they can gather data from many auctions and perform the required calculations within the desired time frame.

4. A SINGLE GLOBAL BIDDER
In this section, we provide a theoretical analysis of the optimal bidding strategy for a global bidder, given that all other bidders are local and simply bid their true valuations. After we describe the global bidder's expected utility in Section 4.1, we show in Section 4.2 that it is always optimal for a global bidder to participate in the maximum number of auctions available. In Section 4.3 we discuss how to significantly reduce the complexity of finding the optimal bids for the multi-auction problem, and we then apply these methods to find optimal strategies for specific examples. Finally, in Section 4.4 we extend our analysis to sequential auctions.

4.1 The Global Bidder's Expected Utility
In what follows, the number of sellers (auctions) is M ≥ 2 and the number of local bidders is N ≥ 1. A bidder's valuation v ∈ [0, vmax] is randomly drawn from a cumulative distribution F with probability density f, where f is continuous, strictly positive and has support [0, vmax]. F is assumed to
be equal and common knowledge for all bidders. A global bid B is a set containing a bid bi ∈ [0, vmax] for each auction 1 ≤ i ≤ M (the bids may differ across auctions). For ease of exposition, we introduce the cumulative distribution function of the first-order statistic, G(b) = F(b)^N ∈ [0, 1], denoting the probability of winning a specific auction conditional on placing bid b in that auction, and its probability density g(b) = dG(b)/db = N F(b)^{N-1} f(b). Now, the expected utility U for a global bidder with global bid B and valuation v is given by:

U(B, v) = v \left[ 1 - \prod_{b_i \in B} (1 - G(b_i)) \right] - \sum_{b_i \in B} \int_0^{b_i} y\,g(y)\,dy    (1)

Here, the left part of the equation is the valuation multiplied by the probability that the global bidder wins at least one of the M auctions, and thus corresponds to the expected benefit. In more detail, note that 1 - G(b_i) is the probability of not winning auction i when bidding b_i, \prod_{b_i \in B}(1 - G(b_i)) is the probability of not winning any auction, and thus 1 - \prod_{b_i \in B}(1 - G(b_i)) is the probability of winning at least one auction. The right part of equation 1 corresponds to the total expected costs or payments. To see the latter, note that the expected payment of a single second-price auction when bidding b equals \int_0^b y\,g(y)\,dy (see [6]) and is independent of the expected payments for other auctions. Clearly, equation 1 applies to the model with static local bidders, i.e., where the number of bidders is known and equal for each auction (see Section 3.2). However, we can use the same equation to model dynamic local bidders in the following way:

Lemma 1 By replacing the first-order statistic G(y) with

\hat{G}(y) = e^{N(F(y) - 1)},    (2)

and the corresponding density function g(y) with \hat{g}(y) = d\hat{G}(y)/dy = N f(y) e^{N(F(y)-1)}, equation 1 becomes the expected utility where the number of local bidders in each auction is described by a
Poisson distribution with average N (i.e., where the probability that n local bidders participate is given by P(n) = N^n e^{-N}/n!).

Proof To prove this, we first show that G(·) and F(·) can be modified such that the number of bidders per auction is given by a binomial distribution (where a bidder's decision to participate is given by a Bernoulli trial) as follows:

G'(y) = F'(y)^{N'} = (1 - p + p\,F(y))^{N'},    (3)

where p is the probability that a bidder participates in the auction, and N' is the total number of bidders. To see this, note that not participating is equivalent to bidding zero. As a result, F'(0) = 1 - p, since there is a 1 - p probability that a bidder bids zero at a specific auction, and F'(y) = F'(0) + p\,F(y), since there is a probability p that a bidder bids according to the original distribution F(y). Now, the average number of participating bidders is given by N = p N'. By replacing p with N/N', equation 3 becomes G'(y) = (1 - N/N' + (N/N')F(y))^{N'}. Note that a Poisson distribution is given by the limit of a binomial distribution. Keeping N constant and taking the limit N' → ∞, we then obtain G'(y) = e^{N(F(y)-1)} = \hat{G}(y). This concludes our proof.

The results that follow apply to both the static and the dynamic model unless stated otherwise.

4.2 Participation in Multiple Auctions
We now show that, for any valuation 0 < v < vmax, a utility-maximising global bidder should always place non-zero bids in all available auctions. To prove this, we show that the expected utility increases when placing an arbitrarily small bid compared to not participating in an auction. More formally,

Theorem 1 Consider a global bidder with valuation 0 < v < vmax and global bid B, where bi ≤ v for all bi ∈ B. Suppose B contains no bid for auction j ∈ {1, 2, ...
, M}, then there exists a bj > 0 such that U(B ∪ {bj}, v) > U(B, v).

Proof Using equation 1, the marginal expected utility for participating in an additional auction can be written as:

U(B ∪ {b_j}, v) - U(B, v) = v\,G(b_j) \prod_{b_i \in B} (1 - G(b_i)) - \int_0^{b_j} y\,g(y)\,dy

Now, using integration by parts, we have \int_0^{b_j} y\,g(y)\,dy = b_j G(b_j) - \int_0^{b_j} G(y)\,dy, and the above equation can be rewritten as:

U(B ∪ {b_j}, v) - U(B, v) = G(b_j) \left[ v \prod_{b_i \in B} (1 - G(b_i)) - b_j \right] + \int_0^{b_j} G(y)\,dy    (4)

Let b_j = ε, where ε is an arbitrarily small, strictly positive value. Clearly, G(b_j) and \int_0^{b_j} G(y)\,dy are then both strictly positive (since f(y) > 0). Moreover, given that b_i ≤ v < vmax for b_i ∈ B and that v > 0, it follows that v \prod_{b_i \in B}(1 - G(b_i)) > 0. Now, suppose b_j = \frac{1}{2} v \prod_{b_i \in B}(1 - G(b_i)); then

U(B ∪ {b_j}, v) - U(B, v) = G(b_j)\,\frac{1}{2} v \prod_{b_i \in B}(1 - G(b_i)) + \int_0^{b_j} G(y)\,dy > 0,

and thus U(B ∪ {b_j}, v) > U(B, v). This completes our proof.

4.3 The Optimal Global Bid
A general solution for the optimal global bid requires the maximisation of equation 1 in M dimensions, an arduous task, even when applying numerical methods. In this section, however, we show how to reduce the entire bid space to two dimensions in most cases (one continuous, and one discrete), thereby significantly simplifying the problem at hand. First, however, in order to find the optimal solutions to equation 1, we set the partial derivatives to zero:

\frac{\partial U}{\partial b_i} = g(b_i) \left[ v \prod_{b_j \in B \setminus \{b_i\}} (1 - G(b_j)) - b_i \right] = 0    (5)

Now, equality 5 holds either when g(bi) = 0 or when v \prod_{b_j \in B \setminus \{b_i\}}(1 - G(b_j)) - b_i = 0. In the dynamic model, g(bi) is always greater than zero and can therefore be ignored (since g(0) = N f(0) e^{-N} and we assume f(y) > 0). In the static model, g(bi) = 0 only when bi
= 0. However, Theorem 1 shows that the optimal bid is non-zero for 0 < v < vmax. Therefore, we can ignore the first part, and the second part yields:

b_i = v \prod_{b_j \in B \setminus \{b_i\}} (1 - G(b_j))    (6)

In other words, the optimal bid in auction i is equal to the bidder's valuation multiplied by the probability of not winning any of the other auctions. It is straightforward to show that the second partial derivative is negative, confirming that the solution is indeed a maximum when keeping all other bids constant. Thus, equation 6 provides a means to derive the optimal bid for auction i, given the bids in all other auctions.

4.3.1 Reducing the Search Space
In what follows, we show that, for non-decreasing probability density functions (such as the uniform and logarithmic distributions), the optimal global bid consists of at most two different values for any M ≥ 2. That is, the search space for finding the optimal bid can then be reduced to two continuous values. Let these values be bhigh and blow, where bhigh ≥ blow. More formally:

Theorem 2 Suppose the probability density function f is non-decreasing within the range [0, vmax]. Then the following proposition holds: given v > 0, for any bi ∈ B, either bi = bhigh, bi = blow, or bi = bhigh = blow.

Proof Using equation 6, we can produce M equations, one for each auction, with M unknowns. Now, by combining these equations, we obtain the following relationship: b_1(1 - G(b_1)) = b_2(1 - G(b_2)) = ... = b_M(1 - G(b_M)). By defining H(b) = b(1 - G(b)), we can rewrite this as:

H(b_1) = H(b_2) = \cdots = H(b_M) = v \prod_{b_j \in B} (1 - G(b_j))    (7)

In order to prove that there exist at most two different bids, it is sufficient to show that b = H^{-1}(y) has at most two solutions that satisfy 0 ≤ b ≤ vmax for any y.
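As a quick numerical illustration of this claim (our own sketch, not part of the paper): in the static model with valuations uniform on [0, 1], G(b) = b^N, so H(b) = b(1 − b^N). The check below confirms that H rises to a single interior maximum and then falls, so any horizontal line y = const crosses it at most twice, i.e., H^{-1}(y) has at most two solutions.

```python
# Numeric check (illustrative sketch, not from the paper): for uniform
# valuations on [0, 1] with N static local bidders, G(b) = b**N and
# H(b) = b * (1 - G(b)). We verify that H has exactly one interior peak.
N = 5                      # number of local bidders per auction (assumed)
STEPS = 10_000

def H(b: float) -> float:
    return b * (1.0 - b**N)

values = [H(i / STEPS) for i in range(STEPS + 1)]

# Count sign changes of the discrete slope; one change = one interior peak.
signs = []
for a, b in zip(values, values[1:]):
    d = b - a
    if d != 0:
        s = 1 if d > 0 else -1
        if not signs or signs[-1] != s:
            signs.append(s)
sign_changes = len(signs) - 1
print(sign_changes)  # 1 sign change: H is unimodal on [0, 1]
```

The same check can be rerun with other non-decreasing densities by swapping in the corresponding G.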
To see this, suppose H^{-1}(y) has two solutions but there exists a third bid b_j ≠ b_{low} ≠ b_{high}. From equation 7 it then follows that there exists a y such that H(b_j) = H(b_{low}) = H(b_{high}) = y. Therefore, H^{-1}(y) must have at least three solutions, which is a contradiction. Now, note that, in order to prove that H^{-1}(y) has at most two solutions, it is necessary and sufficient to show that H(b) has at most one local maximum for 0 ≤ b ≤ vmax. A sufficient condition, however, is for H(b) to be strictly concave.4 The function H is strictly concave if and only if the following condition holds:

H''(b) = \frac{d}{db}\left(1 - b\,g(b) - G(b)\right) = -\left( b \frac{dg}{db} + 2 g(b) \right) < 0    (8)

where H''(b) = d^2 H/db^2. By performing standard calculations, we obtain the following condition for the static model:

b \left[ (N-1) \frac{f(b)}{F(b)} + \frac{f'(b)}{f(b)} \right] > -2 \quad \text{for } 0 ≤ b ≤ vmax,    (9)

and similarly for the dynamic model we have:

b \left[ N f(b) + \frac{f'(b)}{f(b)} \right] > -2 \quad \text{for } 0 ≤ b ≤ vmax,    (10)

where f'(b) = df/db. Since both f and F are positive, conditions 9 and 10 clearly hold for f'(b) ≥ 0. In other words, conditions 9 and 10 show that H(b) is strictly concave when the probability density function is non-decreasing for 0 ≤ b ≤ vmax, completing our proof.

4 More precisely, H(b) can be either strictly convex or strictly concave. However, it is easy to see that H is not convex, since H(0) = H(vmax) = 0 and H(b) ≥ 0 for 0 < b < vmax.

Figure 1: The optimal bid fractions x = b/v and corresponding expected utility for a single global bidder with N = 5 static local bidders and a varying number of auctions (M). In addition, for comparison, the dark solid line in the right panel depicts the expected utility when bidding locally in a randomly selected auction, given there are no global bidders (note that, in the case of local bidders only, the expected utility is not affected by M).

Note from conditions 9 and 10 that the requirement of non-decreasing density functions is sufficient, but far from necessary. Moreover, condition 8, requiring H(b) to be strictly concave, is also stronger than necessary to guarantee only two solutions. As a result, in practice we find that the reduction of the search space applies to most cases. Given that there are at most two possible bids, blow and bhigh, we can further reduce the search space by expressing one bid in terms of the other. Suppose the buyer places a bid of blow in Mlow auctions and bhigh in the remaining Mhigh = M - Mlow auctions. Equation 6 then becomes b_{low} = v\,(1 - G(b_{low}))^{M_{low}-1}(1 - G(b_{high}))^{M_{high}}, which can be rearranged to give:

b_{high} = G^{-1}\left( 1 - \left[ \frac{b_{low}}{v\,(1 - G(b_{low}))^{M_{low}-1}} \right]^{1/M_{high}} \right)    (11)

Here, the inverse function G^{-1}(·) can usually be obtained quite easily. Furthermore, note that, if Mlow = 1 or Mhigh = 1, equation 6 can be used directly to find the desired value. Using the above, we are able to reduce the bid search space to a single continuous dimension, given Mlow or Mhigh. However, we do not know the number of auctions in which to bid blow and bhigh, and thus we need to search M different combinations to find the optimal global bid. Moreover, for each combination, the optimal blow and bhigh can vary. Therefore, in order to find the optimal bid for a bidder with valuation v, it is sufficient to search along one continuous variable blow ∈ [0, v] and a discrete variable Mlow = M - Mhigh ∈ {1, 2, ...
, M}.

4.3.2 Empirical Evaluation
In this section, we present results from an empirical study and characterise the optimal global bid for specific cases. Furthermore, we measure the actual utility improvement that can be obtained by using the global strategy. The results presented here are based on a uniform distribution of the valuations with vmax = 1 and the static local-bidder model, but they generalise to the dynamic model and other distributions (not shown due to space limitations). Figure 1 illustrates the optimal global bids and the corresponding expected utility for various M and N = 5; the bid curves for other values of M and N follow a very similar pattern. Here, the bid is normalised by the valuation v to give the bid fraction x = b/v. Note that, when x = 1, a bidder bids its true value. As shown in Figure 1, for bidders with a relatively low valuation, the optimal strategy is to submit M equal bids at, or very close to, the true value. The optimal bid fraction then gradually decreases for higher valuations. Interestingly, in most cases, placing equal bids is no longer the optimal strategy after the valuation reaches a certain point. A so-called pitchfork bifurcation is then observed, and the optimal bids split into two values: a single high bid and M - 1 low ones. This transition is smooth for M = 2, but exhibits an abrupt jump for M ≥ 3. In all experiments, however, we consistently observe that the optimal strategy is always to place a high bid in one auction and an equal or lower bid in all others. In the case of a bifurcation, as the valuation approaches vmax, the optimal high bid goes to the true value and the low bids go to zero. As illustrated in Figure 1, the utility of a global bidder becomes progressively higher with more auctions. In absolute terms, the improvement is especially high for bidders that have an above-average valuation, but not one too close to vmax. The bidders in this range thus benefit
most from bidding globally. This is because bidders with very low valuations have a very small chance of winning any auction, whereas bidders with a very high valuation have a high probability of winning a single auction and benefit less from participating in more auctions. In contrast, if we consider the utility relative to bidding in a single auction, this is much higher for bidders with relatively low valuations (this effect cannot be seen clearly in Figure 1 due to the scale). In particular, we notice that a global bidder with a low valuation can improve its utility by up to M times the expected utility of bidding locally. Intuitively, this is because the chance of winning one of the auctions increases by up to a factor of M, whereas the increase in the expected cost is negligible. For high-valuation buyers, however, the benefit is not as pronounced, because the chances of winning are relatively high even in the case of a single auction.

4.4 Sequential and Concurrent Auctions
In this section we extend our analysis of the optimal bidding strategy to sequential auctions. Specifically, the auction process consists of R rounds, and in each round any number of auctions run simultaneously. Such a combination of sequential and concurrent auctions is very common in practice, especially online.5 It turns out that the analysis for the case of simultaneous auctions is quite general and can easily be extended to include sequential auctions. In the following, the number of simultaneous auctions in round r is denoted by Mr, and the set of bids in that round by Br. As before, the analysis assumes that all other bidders are local and bid in a single auction. Furthermore, we assume that the global bidders have complete knowledge about the number of rounds and the number of auctions in each round. The expected utility in round r, denoted by Ur, is similar to before (equation 1 in Section 4.1), except that now additional benefit can be obtained from future auctions if
the desired item is not won in one of the current set of simultaneous auctions. For convenience, Ur(Br, Mr) is abbreviated to Ur in the following. The expected utility thus becomes:

U_r = v \cdot P_r(B_r) - \sum_{b_{ri} \in B_r} \int_0^{b_{ri}} y\,g(y)\,dy + U_{r+1}\,(1 - P_r(B_r)) = U_{r+1} + (v - U_{r+1})\,P_r(B_r) - \sum_{b_{ri} \in B_r} \int_0^{b_{ri}} y\,g(y)\,dy,    (12)

where P_r(B_r) = 1 - \prod_{b_{ri} \in B_r} (1 - G(b_{ri})) is the probability of winning at least one auction in round r. Now, we take the partial derivative of equation 12 in order to find the optimal bid b_{rj} for auction j in round r:

\frac{\partial U_r}{\partial b_{rj}} = g(b_{rj}) \left[ (v - U_{r+1}) \prod_{b_{ri} \in B_r \setminus \{b_{rj}\}} (1 - G(b_{ri})) - b_{rj} \right]    (13)

5 Rather than being purely sequential in nature, online auctions also often overlap (i.e., new auctions can start while others are still ongoing). In that case, however, it is optimal to wait and bid in the new auctions only after the outcome of the earlier auctions is known, thereby reducing the chance of unwittingly winning multiple items. Using this strategy, overlapping auctions effectively become sequential and can thus be analysed using the results in this section.

Note that equation 13 is almost identical to equation 5 in Section 4.3, except that the valuation v is now replaced by v - Ur+1. The optimal bidding strategy can thus be found by backward induction (where UR+1 = 0) using the procedure outlined in Section 4.3.

5. MULTIPLE GLOBAL BIDDERS
As argued in Section 3.2, we expect a real-world market to exhibit a mix of global and local bidders. Whereas so far we have assumed a single global bidder, in this section we consider a setting where multiple global bidders interact with one another, as well as with local bidders. The analysis of this problem is complex, however, as the optimal bidding strategy of a global bidder depends on the strategies of the other global bidders. A typical
analytical approach is to find the symmetric Nash equilibrium solution [9, 12], which occurs when all global bidders use the same strategy to produce their bids and no (global) bidder has any incentive to unilaterally deviate from the chosen strategy. Due to the complexity of the problem, however, here we combine a computational simulation approach with analytical results. The simulation works by iteratively finding the best response to the optimal bidding strategies of the previous iteration. If this results in a stable outcome (i.e., when the optimal bidding strategies remain unchanged for two subsequent iterations), the solution is by definition a (symmetric) Nash equilibrium.

5.1 The Global Bidder's Expected Utility
In order to find a global bidder's best response, we first need to calculate the expected utility given the global bid B and the strategies of both the other global bidders and the local bidders. In the following, let Ng denote the number of other global bidders. Furthermore, let the strategies of the other global bidders be represented by the set of functions βk(v), 1 ≤ k ≤ M, producing a bid for each auction given a bidder's valuation v. Note that all other global bidders use the same set of functions, since we consider symmetric equilibria. However, we assume that the assignment of functions to auctions by each global bidder occurs in a random fashion without replacement (i.e., each function is assigned exactly once by each global bidder). Let Ω denote the set of all possible assignments. Each such assignment ω ∈ Ω is an (M, Ng) matrix, where each entry ωi,j identifies the function used by global bidder j in auction i.
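To make the assignment space concrete, here is a small enumeration sketch (our own illustration; the variable names are not from the paper): each of the Ng other global bidders picks a permutation of the M bid-function indices, and an assignment ω stacks those permutations as columns of an M × Ng matrix.

```python
from itertools import permutations, product
from math import factorial

M = 3    # number of concurrent auctions (illustrative choice)
Ng = 2   # number of other global bidders (illustrative choice)

# Each global bidder assigns the bid functions beta_1..beta_M to the M
# auctions via a permutation of the indices 1..M (without replacement).
per_bidder = list(permutations(range(1, M + 1)))

# An assignment omega chooses one permutation per bidder; transposing puts
# auctions in rows and bidders in columns, so omega[i][j] is the index of
# the function used by global bidder j in auction i.
omega_set = [tuple(zip(*choice)) for choice in product(per_bidder, repeat=Ng)]

print(len(omega_set))                          # |Omega| = (M!)**Ng
print(len(omega_set) == factorial(M) ** Ng)    # True
```

This matches the cardinality |Ω| = (M!)^Ng used when averaging the expected utility over assignments.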
Note that the cardinality of Ω, denoted by |Ω|, is equal to (M!)^{Ng}. Now, the expected utility is the average expected utility over all possible assignments and is given by:

U(B, v) = \frac{1}{|\Omega|} \sum_{\omega \in \Omega} v \left( 1 - \prod_{b_i \in B} (1 - \tilde{G}_{\omega_i}(b_i)) \right) - \frac{1}{|\Omega|} \sum_{\omega \in \Omega} \sum_{b_i \in B} \int_0^{b_i} y\,\tilde{g}_{\omega_i}(y)\,dy,    (14)

where \tilde{G}_{\omega_i}(b) = G(b) \cdot \prod_{j=1}^{N_g} \int_0^{\beta_{\omega_{i,j}}^{-1}(b)} f(y)\,dy denotes the probability of winning auction i, given that each global bidder 1 ≤ j ≤ Ng bids according to the function β_{ω_{i,j}}, and \tilde{g}_{\omega_i}(y) = d\tilde{G}_{\omega_i}(y)/dy. Here, G(b) is the probability of winning an auction with only local bidders, as described in Section 4.1, and f(y) is the probability density of the bidder valuations as before.

5.2 The Simulation
The simulation works by discretising the space of possible valuations and bids and then finding a best response to an initial set of bidding functions. The best response is found by maximising equation 14 for each discrete valuation, which, in turn, results in a new set of bidding functions. These functions then affect the probabilities of winning in the next iteration, for which the new best-response strategy is calculated. This process is repeated for a fixed number of iterations or until a stable solution has been found.6 Clearly, due to the large search space, finding the utility-maximising global bid quickly becomes infeasible as the number of auctions and global bidders increases. Therefore, we reduce the search space by limiting the global bid to two dimensions, where a global bidder bids high in one of the auctions and low in all the others.7 This simplification is justified by the results in Section 4.3.1, which show that, for a large number of commonly used distributions, the optimal global bid consists of at most two different values. The results reported here are based on the following settings.8 In order to
emphasize that the valuations are discrete, we use integer values ranging from 1 to 1000. Each valuation occurs with equal probability, equivalent to a uniform valuation distribution in the continuous case. A bidder can select between 300 different equally-spaced bid levels. Thus, a bidder with valuation v can place bids b ∈ {0, v/300, 2v/300, ..., v}. The local bidders are static and bid their valuations as before. The initial set of functions can play an important role in the experiments. Therefore, to ensure our results are robust, experiments are repeated with different random initial functions.

5.3 The Results
First, we describe the results with no local bidders. For this case, we find that the simulation does not converge to a stable state. That is, when there is at least one other global bidder, the best-response strategy keeps fluctuating, irrespective of the number of iterations and of the initial state. The fluctuations, however, show a distinct pattern and alternate between two states. Figure 2 depicts these two states for Ng = 10 and M = 5. The two states vary most when there are at least as many auctions as there are global bidders. In that case, one of the best-response states is to bid truthfully in one auction and zero in all others. The best response to that, however, is to bid an equal positive amount close to zero in all auctions; this strategy guarantees at least one object at a very low payment. The best response is then again to bid truthfully in a single auction, since this appropriates the object in that particular auction. As a result, there exists no stable solution. The same result is observed when the number of global bidders is less than the number of auctions. This occurs since global bidders randomise over auctions, and thus they cannot coordinate and choose to bid high in different auctions.

6 This approach is similar to an alternating-move best-response process with pure strategies [4], although here we consider symmetric strategies within a setting where an opponent's best response depends on its valuation.
7 Note that the number of possible allocations still increases with the number of auctions and global bids. However, by merging all utility-equivalent permutations, we significantly increase computation speed, allowing experiments with relatively large numbers of auctions and bidders to be performed (e.g., a single iteration with 50 auctions and 10 global bidders takes roughly 30 seconds on a 3.00 GHz PC).
8 We also performed experiments with different precision, other valuation distributions, and dynamic local bidders. We find that the principal conclusions generalise to these different settings, and we therefore omit the results to avoid repetitiveness.

Figure 2: The two states of the best-response strategy for M = 5 and Ng = 10 without local bidders.

Figure 3: The variance of the best-response strategy over 10 iterations and 10 experiments with different initial settings and M = 5. The error bars show the (small) standard deviations.

As shown in Figure 2, a similar fluctuation is observed when the number of global bidders increases relative to the number of auctions. However, the bids in the equal-bid state (state 2 in Figure 2), as well as the low bids of the other state, increase. Moreover, if the number of global bidders is increased even further, a bifurcation occurs in the equal-bid state similar to the case without local bidders. We now consider the best-response strategies when both local and global bidders participate and each auction contains the same number of local bidders. To this end, Figure 3 shows the average variance of the best-response strategies. This is
measured as the variance of an actual best-response bid over different iterations, averaged over the discrete bidder valuations. Here, the variance is a gauge of the amount of fluctuation and thus of the instability of the strategy. As can be seen from this figure, local bidders have a large stabilising effect on the global bidders' strategies. As a result, the best-response strategy approximates a pure symmetric Nash equilibrium. We note that the results converge after only a few iterations. The results show that the principal conclusions for the case of a single global bidder carry over to the case of multiple global bidders. That is, the optimal strategy is to bid a positive amount in all auctions (as long as there are at least as many bidders as auctions). Furthermore, a similar bifurcation point is observed. These results are very robust to changes in the auction settings and the parameters of the simulation. To conclude, even though a theoretical analysis proves difficult in the case of several global bidders, we can approximate a (symmetric) Nash equilibrium for specific settings using a discrete simulation when the system consists of both local and global bidders. Thus, our simulation can be used as a tool to predict the market equilibrium and to find the optimal bidding strategy for practical settings where we expect a combination of local and global bidders.

6. MARKET EFFICIENCY
Efficiency is an important system-wide property, since it characterises the extent to which the market maximises social welfare (i.e.,
the sum of the utilities of all agents in the market). To this end, in this section we study the efficiency of markets with either static or dynamic local bidders, and the impact that a global bidder has on efficiency in these markets. Specifically, efficiency in this context is maximised when the bidders with the M highest valuations in the entire market each obtain a single item. More formally, we define the efficiency of an allocation as:

Definition 1 Efficiency of Allocation. The efficiency ηK of an allocation K is the obtained social welfare as a proportion of the maximum social welfare that can be achieved in the market, and is given by:

\eta_K = \frac{\sum_{i=1}^{N_T} v_i(K)}{\sum_{i=1}^{N_T} v_i(K^*)},    (15)

where K^* = \arg\max_{K \in \mathcal{K}} \sum_{i=1}^{N_T} v_i(K) is an efficient allocation, \mathcal{K} is the set of all possible allocations, v_i(K) is bidder i's utility for the allocation K \in \mathcal{K}, and N_T is the total number of bidders participating across all auctions (including any global bidders).

Now, in order to measure the efficiency of the market and the impact of a global bidder, we run simulations for the markets with the different types of local bidders. The experiments are carried out as follows. Each bidder's valuation is drawn from a uniform distribution with support [0, 1]. The local bidders bid their true valuations, whereas the global bidder bids optimally in each auction, as described in Section 4.3. The experiments are repeated 5000 times for each run to obtain an accurate mean value, and the final average results and standard deviations are taken over 10 runs in order to obtain statistically significant results. The results of these experiments are shown in Figure 4. Note that a degree of inefficiency is inherent to a multi-auction market with only local bidders [2].9 For example, if there are two auctions selling one item each, and the two bidders with the highest valuations both bid locally in the same auction, then the bidder with the second-highest value does not obtain
the good. Thus, the allocation of items to bidders is inefficient. As can be observed from Figure 4, however, the efficiency increases as N becomes larger. This is because the differences between the bidders with the highest valuations become smaller, thereby decreasing the loss of efficiency. Furthermore, Figure 4 shows that the presence of a global bidder has a slightly positive effect on the efficiency in case the local bidders are static. In the case of dynamic bidders, however, the effect of a global bidder depends on the number of sellers. If M is low (i.e., for M = 2), a global bidder significantly increases the efficiency, especially for low values of N.

9 Trivial exceptions are when either M = 1 or N = 1 and bidders are static, since the market is then completely efficient without a global bidder.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

[Figure 4: Average efficiency for different market settings as shown in the legend (M ∈ {2, 6}; local bidders static or dynamic; with and without a global bidder), plotted against the (average) number of local bidders N on the x-axis and the efficiency η_K on the y-axis. The error-bars indicate the standard deviation over the 10 runs.]
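The experimental setup just described can be illustrated with a short Monte-Carlo sketch. All names below are our own, and the global bidder here places a uniform bid of half its valuation in every auction — a crude placeholder for the optimal strategy of Section 4.3 — so this sketch will not reproduce the exact curves of Figure 4; it only shows how the mean efficiency η_K of Equation (15) can be estimated.

```python
import math
import random

def poisson(mean):
    # Knuth's method for drawing a Poisson-distributed bidder count
    # (used for the dynamic local-bidder model).
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def market_efficiency(M, N, with_global=False, dynamic=False, trials=5000):
    """Monte-Carlo estimate of the mean efficiency eta_K (Eq. 15).

    M auctions each attract N truthful local bidders (exactly N if
    static, Poisson(N) if dynamic) with valuations drawn from U[0, 1].
    The optional global bidder bids half its valuation in every
    auction -- an illustrative placeholder, NOT the optimal strategy
    derived in Section 4.3 of the paper.
    """
    total = 0.0
    for _ in range(trials):
        local_vals = [[random.random()
                       for _ in range(poisson(N) if dynamic else N)]
                      for _ in range(M)]
        v_g = random.random() if with_global else None

        # Realised welfare: each item goes to the highest bid; the
        # global bidder derives value from at most one item (perfect
        # substitutes with free disposal), so extra wins add nothing.
        welfare, global_won = 0.0, False
        for bids in local_vals:
            top_local = max(bids, default=0.0)
            if with_global and v_g / 2 > top_local:
                if not global_won:
                    welfare += v_g
                    global_won = True
            else:
                welfare += top_local

        # Maximum welfare: allocate the M items to the M highest
        # valuations, counting the global bidder's valuation once.
        vals = [v for auction in local_vals for v in auction]
        if with_global:
            vals.append(v_g)
        best = sum(sorted(vals, reverse=True)[:M])
        total += welfare / best if best > 0 else 1.0
    return total / trials
```

Averaging such estimates over several seeded runs, as in the experiments above, yields the mean and standard deviation reported in Figure 4.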
For M = 6, on the other hand, the presence of a global bidder has a negative effect on the efficiency (this effect becomes even more pronounced for higher values of M). This result is explained as follows. The introduction of a global bidder potentially leads to a decrease of efficiency, since this bidder can unwittingly win more than one item. However, as the number of local bidders increases, this becomes less likely to happen. Rather, since the global bidder increases the number of bidders, its presence makes an overall positive (albeit small) contribution in the case of static bidders. In a market with dynamic bidders, however, the market efficiency depends on two other factors. On the one hand, the efficiency increases since items no longer remain unsold (this situation can occur in the dynamic model when no bidder turns up at an auction). On the other hand, as a result of the uncertainty concerning the actual number of bidders, a global bidder is more likely to win multiple items (we confirmed this analytically). As M increases, the first effect becomes negligible whereas the second one becomes more prominent, reducing the efficiency on average.

To conclude, the impact of a global bidder on the efficiency clearly depends on the information that is available. In the case of static local bidders, the number of bidders is known and the global bidder can bid more accurately. In the case of uncertainty, however, the global bidder is more likely to win more than one item, decreasing the overall efficiency.

7. CONCLUSIONS

In this paper, we derive utility-maximising strategies for bidding in multiple, simultaneous second-price auctions. We first analyse the case where a single global bidder bids in all auctions, whereas all other bidders are local and bid in a single auction. For this setting, we find the counter-intuitive result that it is optimal to place non-zero bids in all auctions that sell the desired item, even when a bidder only requires a single item and
derives no additional benefit from having more. Thus, a potential buyer can achieve considerable benefit by participating in multiple auctions and employing an optimal bidding strategy. For a number of common valuation distributions, we show analytically that the problem of finding optimal bids reduces to two dimensions. This considerably simplifies the original optimisation problem and can thus be used in practice to compute the optimal bids for any number of auctions.

Furthermore, we investigate a setting with multiple global bidders by combining analytical solutions with a simulation approach. We find that a global bidder's strategy does not stabilise when only global bidders are present in the market, but only converges when there are local bidders as well. We argue, however, that real-world markets are likely to contain both local and global bidders. The converged results are then very similar to the setting with a single global bidder, and we find that a bidder benefits by bidding optimally in multiple auctions. For the more complex setting with multiple global bidders, the simulation can thus be used to find these bids for specific cases.

Finally, we compare the efficiency of a market with multiple concurrent auctions with and without a global bidder. We show that, if the bidder can accurately predict the number of local bidders in each auction, the efficiency slightly increases. In contrast, if there is much uncertainty, the efficiency significantly diminishes as the number of auctions increases due to the increased probability that a global bidder wins more than two items. These results show that the way in which the efficiency, and thus social welfare, is affected by a global bidder depends on the information that is available to that global bidder.

In future work, we intend to extend the results to imperfect substitutes (i.e., when a global bidder gains from winning additional items), and to settings where the auctions are no longer
identical. The latter arises, for example, when the (average) number of local bidders differs per auction or the auctions have different settings for parameters such as the reserve price.

8. REFERENCES

[1] S. Airiau and S. Sen. Strategic bidding for multiple units in simultaneous and sequential auctions. Group Decision and Negotiation, 12(5):397-413, 2003.
[2] P. Cramton, Y. Shoham, and R. Steinberg. Combinatorial Auctions. MIT Press, 2006.
[3] R. Engelbrecht-Wiggans and R. Weber. An example of a multiobject auction game. Management Science, 25:1272-1277, 1979.
[4] D. Fudenberg and D. Levine. The Theory of Learning in Games. MIT Press, 1999.
[5] A. Greenwald, R. Kirby, J. Reiter, and J. Boyan. Bid determination in simultaneous auctions: A case study. In Proc. of the Third ACM Conference on Electronic Commerce, pages 115-124, 2001.
[6] V. Krishna. Auction Theory. Academic Press, 2002.
[7] V. Krishna and R. Rosenthal. Simultaneous auctions with synergies. Games and Economic Behaviour, 17:1-31, 1996.
[8] K. Lang and R. Rosenthal. The contractor's game. RAND J. Econ., 22:329-338, 1991.
[9] R. Rosenthal and R. Wang. Simultaneous auctions with synergies and common values. Games and Economic Behaviour, 17:32-55, 1996.
[10] A. Roth and A. Ockenfels. Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet. The American Economic Review, 92(4):1093-1103, 2002.
[11] O. Shehory. Optimal bidding in multiple concurrent auctions. Int. Journal of Cooperative Information Systems, 11:315-327, 2002.
[12] B. Szentes and R. Rosenthal. Three-object two-bidder simultaneous auctions: chopsticks and tetrahedra. Games and Economic Behaviour, 44:114-133, 2003.
[13] D. Yuen, A. Byde, and N. R.
Jennings. Heuristic bidding strategies for multiple heterogeneous auctions. In Proc. 17th European Conference on AI (ECAI), pages 300-304, 2006.

Bidding Optimally in Concurrent Second-Price Auctions of Perfectly Substitutable Goods

ABSTRACT

We derive optimal bidding strategies for a global bidding agent that participates in multiple, simultaneous second-price auctions with perfect substitutes. We first consider a model where all other bidders are local and participate in a single auction. For this case, we prove that, assuming free disposal, the global bidder should always place non-zero bids in all available auctions, irrespective of the local bidders' valuation distribution. Furthermore, for non-decreasing valuation distributions, we prove that the problem of finding the optimal bids reduces to two dimensions. These results hold both in the case where the number of local bidders is known and when this number is determined by a Poisson distribution. This analysis extends to online markets where, typically, auctions occur both concurrently and sequentially. In addition, by combining analytical and simulation results, we demonstrate that similar results hold in the case of several global bidders, provided that the market consists of both global and local bidders. Finally, we address the efficiency of the overall market, and show that information about the number of local bidders is an important determinant for the way in which a global
bidder affects efficiency.

1. INTRODUCTION

The recent surge of interest in online auctions has resulted in an increasing number of auctions offering very similar or even identical goods and services [9, 10]. On eBay alone, for example, there are often hundreds or sometimes even thousands of concurrent auctions running worldwide selling such substitutable items1. Against this background, it is essential to develop bidding strategies that autonomous agents can use to operate effectively across a wide number of auctions. To this end, in this paper we devise and analyse optimal bidding strategies for an important yet barely studied setting, namely an agent that participates in multiple, concurrent (i.e., simultaneous) second-price auctions for goods that are perfect substitutes. As we will show, however, this analysis is also relevant to a wider context where auctions are conducted sequentially, as well as concurrently.

To date, much of the existing literature on multiple auctions focuses either on sequential auctions [6] or on simultaneous auctions for complementary goods, where the value of the items together is greater than the sum of the individual items (see Section 2 for related research on simultaneous auctions). In contrast, here we consider bidding strategies for markets with multiple concurrent auctions and perfect substitutes. In particular, our focus is on Vickrey, or second-price sealed-bid, auctions. We choose these because they require little communication and are well known for their capacity to induce truthful bidding, which makes them suitable for many multi-agent system settings. However, our results generalise to settings with English auctions, since these are strategically equivalent to second-price auctions. Within this setting, we are able to characterise, for the first time, a bidder's utility-maximising strategy for bidding simultaneously in any number of such auctions and for any type of bidder valuation distribution.

In more detail,
we first consider a market where a single bidder, called the global bidder, can bid in any number of auctions, whereas the other bidders, called the local bidders, are assumed to bid only in a single auction. For this case, we find the following results:

• Whereas in the case of a single second-price auction a bidder's best strategy is to bid its true value, the best strategy for a global bidder is to bid below it.

• We are able to prove that, even if a global bidder requires only one item, the expected utility is maximised by participating in all the auctions that are selling the desired item.

• Finding the optimal bid for each auction can be an arduous task when considering all possible combinations. However, for most common bidder valuation distributions, we are able to significantly reduce this search space and thus the computation required.

• Empirically, we find that a bidder's expected utility is maximised by bidding relatively high in one of the auctions, and equal or lower in all other auctions.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

We then go on to consider markets with more than one global bidder. Due to the complexity of the problem, we combine analytical results with a discrete simulation in order to numerically derive the optimal bidding strategy. By so doing, we find that, in a market with only global bidders, the dynamics of the best response do not converge to a pure strategy. In fact, they fluctuate between two states. If the market consists of both local and global bidders, however, the global bidders' strategy quickly reaches a stable solution and we approximate a symmetric Nash equilibrium.

The remainder of the paper is structured as follows. Section 2 discusses related work. In Section 3 we describe the bidders and the auctions in more detail. In Section 4 we investigate the case with a single global bidder and characterise the optimal bidding behaviour for it. Section 5 considers the case with multiple
global bidders, and in Section 6 we address the market efficiency. Finally, Section 7 concludes.

2. RELATED WORK

Research in the area of simultaneous auctions can be segmented along two broad lines. On the one hand, there is the game-theoretic and decision-theoretic analysis of simultaneous auctions, which concentrates on studying the equilibrium strategy of rational agents [3, 7, 8, 9, 12, 11]. Such analyses are typically used when the auction format employed in the concurrent auctions is the same (e.g., there are M Vickrey auctions or M first-price auctions). On the other hand, heuristic strategies have been developed for more complex settings where the sellers offer different types of auctions or the buyers need to buy bundles of goods over distributed auctions [1, 13, 5]. This paper adopts the former approach in studying a market of M simultaneous Vickrey auctions, since this approach yields provably optimal bidding strategies.

In this case, the seminal paper by Engelbrecht-Wiggans and Weber [3] provides one of the starting points for the game-theoretic analysis of distributed markets where buyers have substitutable goods. Their work analyses a market consisting of couples having equal valuations that want to bid for a dresser. Thus, the couple's bid space can contain at most two bids, since the husband and wife can be at most at two geographically distributed auctions simultaneously. They derive a mixed-strategy Nash equilibrium for the special case where the number of buyers is large. Our analysis differs from theirs in that we study concurrent auctions in which bidders have different valuations and the global bidder can bid in all the auctions concurrently (which is entirely possible given autonomous agents). Following this, [7] then studied the case of simultaneous auctions with complementary goods. They analyse the case of both local and global bidders and characterise the bidding of the buyers and the resultant market efficiency. The setting provided in
[7] is further extended to the case of common values in [9].\nHowever, neither of these works extend easily to the case of substitutable goods which we consider.\nThis case is studied in [12], but the scenario considered is restricted to three sellers and two global bidders and with each bidder having the same value (and thereby knowing the value of other bidders).\nThe space of symmetric mixed equilibrium strategies is derived for this special case, but again our result is more general.\nFinally, [11] considers the case of concurrent English auctions, in which he develops bidding algorithms for buyers with different risk attitudes.\nHowever, he forces the bids to be the same across auctions, which we show in this paper not always to be optimal.\n3.\nBIDDING IN MULTIPLE AUCTIONS\nThe model consists of M sellers, each of whom acts as an auctioneer.\nEach seller auctions one item; these items are complete substitutes (i.e., they are equal in terms of value and a bidder obtains no additional benefit from winning more than one item).\nThe M auctions are executed concurrently; that is, they end simultaneously and no information about the outcome of any of the auctions becomes available until the bids are placed2.\nHowever, we briefly address markets with both sequential and concurrent auctions in Section 4.4.\nWe also assume that all the auctions are equivalent (i.e., a bidder does not prefer one auction over another).\nFinally, we assume free disposal (i.e., a winner of multiple items incurs no additional costs by discarding unwanted ones) and risk neutral bidders.\n3.1 The Auctions\nThe seller's auction is implemented as a Vickrey auction, where the highest bidder wins but pays the second-highest price.\nThis format has several advantages for an agent-based setting.\nFirstly, it is communication efficient.\nSecondly, for the single-auction case (i.e., where a bidder places a bid in at most one auction), the optimal strategy is to bid the true value and thus requires 
no computation (once the valuation of the item is known). This strategy is also weakly dominant (i.e., it is independent of the other bidders' decisions), and therefore it requires no information about the preferences of other agents (such as the distribution of their valuations).

3.2 Global and Local Bidders
We distinguish between global and local bidders. The former can bid in any number of auctions, whereas the latter bid in only a single one. Local bidders are assumed to bid according to the weakly dominant strategy and bid their true valuation³. We consider two ways of modelling local bidders: static and dynamic. In the first model, the number of local bidders is assumed to be known and equal to N for each auction. In the latter model, on the other hand, the average number of bidders is equal to N, but the exact number is unknown and may vary for each auction. This uncertainty is modelled using a Poisson distribution (more details are provided in Section 4.1). As we will later show, a global bidder who bids optimally has a higher expected utility than a local bidder, even though the items are complete substitutes and a bidder only requires one of them. However, we can identify a number of compelling reasons why not all bidders would choose to bid globally. Firstly, participation costs such as entry fees and the time to set up an account may encourage occasional users to participate in auctions that they are already familiar with. Secondly, bidders may simply not be aware of other auctions selling the same type of item. Even if this is known, however, additional information such as the distribution of the valuations of other bidders and the number of participating bidders is required for bidding optimally across multiple auctions. This lack of expert information often drives a novice to bid locally. Thirdly, an optimal global strategy is harder to compute than a local one. An agent with bounded rationality may therefore not have the resources to compute such a strategy. Lastly, even though a global bidder profits on average, such a bidder may incur a loss when inadvertently winning multiple auctions. This deters bidders who are either risk averse or have budget constraints from participating in multiple auctions. As a result, in most market places we expect a combination of global and local bidders. In view of the above considerations, human buyers are more likely to bid locally. The global strategy, however, can be effectively executed by autonomous agents, since they can gather data from many auctions and perform the required calculations within the desired time frame.

²Although this paper focuses on sealed-bid auctions, where this is the case, the conditions are similar for last-minute bidding in English auctions such as eBay [10].
³Note that, since bidding the true value is optimal for local bidders irrespective of what others are bidding, their strategy is not affected by the presence of global bidders.

280 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

4. A SINGLE GLOBAL BIDDER
In this section, we provide a theoretical analysis of the optimal bidding strategy for a global bidder, given that all other bidders are local and simply bid their true valuation. After we describe the global bidder's expected utility in Section 4.1, we show in Section 4.2 that it is always optimal for a global bidder to participate in the maximum number of auctions available. In Section 4.3 we discuss how to significantly reduce the complexity of finding the optimal bids for the multi-auction problem, and we then apply these methods to find optimal strategies for specific examples. Finally, in Section 4.4 we extend our analysis to sequential
auctions.

4.1 The Global Bidder's Expected Utility
In what follows, the number of sellers (auctions) is M ≥ 2 and the number of local bidders is N ≥ 1. A bidder's valuation v ∈ [0, vmax] is randomly drawn from a cumulative distribution F with probability density f, where f is continuous, strictly positive and has support [0, vmax]. F is assumed to be equal and common knowledge for all bidders. A global bid B is a set containing a bid bi ∈ [0, vmax] for each auction 1 ≤ i ≤ M (the bids may be different for different auctions). For ease of exposition, we introduce the cumulative distribution function for the first-order statistic G(b) = F(b)^N ∈ [0, 1], denoting the probability of winning a specific auction conditional on placing bid b in this auction, and its probability density g(b) = dG(b)/db = N F(b)^(N−1) f(b). Now, the expected utility U for a global bidder with global bid B and valuation v is given by:

U(B, v) = v [1 − ∏_{bi∈B} (1 − G(bi))] − Σ_{bi∈B} ∫₀^{bi} y g(y) dy   (1)

Here, the left part of the equation is the valuation multiplied by the probability that the global bidder wins at least one of the M auctions, and thus corresponds to the expected benefit. In more detail, note that 1 − G(bi) is the probability of not winning auction i when bidding bi, ∏_{bi∈B} (1 − G(bi)) is the probability of not winning any auction, and thus 1 − ∏_{bi∈B} (1 − G(bi)) is the probability of winning at least one auction. The right part of equation 1 corresponds to the total expected costs or payments. To see the latter, note that the expected payment of a single second-price auction when bidding b equals ∫₀^b y g(y) dy (see [6]) and is independent of the expected payments for the other auctions. Clearly, equation 1 applies to the model with static local bidders, i.e., where the number of bidders is known and equal for each auction (see Section 3.2). However, we can use the same equation to model dynamic local bidders in the following way:

Lemma 1 By replacing the first-order statistic G(y) with

Ĝ(y) = e^{N(F(y)−1)}   (2)

and the corresponding density function g(y) with ĝ(y) = dĜ(y)/dy = N f(y) e^{N(F(y)−1)}, equation 1 becomes the expected utility where the number of local bidders in each auction is described by a Poisson distribution with average N (i.e., where the probability that n local bidders participate is given by P(n) = N^n e^{−N}/n!).

PROOF To prove this, we first show that G(·) and F(·) can be modified such that the number of bidders per auction is given by a binomial distribution (where a bidder's decision to participate is given by a Bernoulli trial) as follows:

F̃(y) = 1 − p + p F(y),   G̃(y) = F̃(y)^Ñ   (3)

where p is the probability that a bidder participates in the auction, and Ñ is the total number of bidders. To see this, note that not participating is equivalent to bidding zero. As a result, F̃(0) = 1 − p, since there is a 1 − p probability that a bidder bids zero at a specific auction, and F̃(y) = F̃(0) + p F(y), since there is a probability p that a bidder bids according to the original distribution F(y). Now, the average number of participating bidders is given by N = p Ñ. By replacing p with N/Ñ, equation 3 becomes G̃(y) = (1 − N/Ñ + (N/Ñ) F(y))^Ñ.
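Equation 1 is straightforward to evaluate numerically. The sketch below is our own illustration, not code from the paper: it assumes the static model with a uniform valuation distribution F(y) = y on [0, 1], for which the expected Vickrey payment ∫₀^b y g(y) dy has the closed form (N/(N+1))·b^(N+1), and runs a grid search over two bid levels for M = 2 auctions. All function and variable names are ours.

```python
# Numerical sketch of equation 1 (static model, uniform F on [0, 1]).
# Assumptions: N local bidders per auction bidding truthfully, M = 2 auctions.

def win_cdf(b, n_local):
    """G(b) = F(b)^N: probability of winning a single auction with bid b."""
    return b ** n_local

def expected_payment(b, n_local):
    """Closed form of the expected second price, integral_0^b y g(y) dy,
    for uniform F: N/(N+1) * b^(N+1)."""
    return n_local / (n_local + 1) * b ** (n_local + 1)

def expected_utility(bids, v, n_local):
    """Equation 1: v * P(win at least one auction) - total expected payments."""
    p_lose_all = 1.0
    cost = 0.0
    for b in bids:
        p_lose_all *= 1.0 - win_cdf(b, n_local)
        cost += expected_payment(b, n_local)
    return v * (1.0 - p_lose_all) - cost

def best_two_level_bid(v, n_local, steps=200):
    """Grid search over (b_high, b_low) with b_high >= b_low, restricted to
    [0, v] (bidding above the true value is weakly dominated in a Vickrey
    auction)."""
    best_u, best_bids = float("-inf"), (0.0, 0.0)
    for i in range(steps + 1):
        b_hi = v * i / steps
        for j in range(i + 1):
            b_lo = v * j / steps
            u = expected_utility([b_hi, b_lo], v, n_local)
            if u > best_u:
                best_u, best_bids = u, (b_hi, b_lo)
    return best_u, best_bids

v, n_local = 0.5, 2
u_single = expected_utility([v], v, n_local)   # truthful bid in one auction
u_global, bids = best_two_level_bid(v, n_local)
```

Consistently with Theorem 1, `u_global` exceeds the single-auction benchmark `u_single`; note that the two optimal bid levels are allowed to coincide, as Theorem 2 permits.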
Note that a Poisson distribution is given by the limit of a binomial distribution. By keeping the average N constant and letting the total number of bidders go to infinity, we then obtain G̃(y) = e^{N(F(y)−1)} = Ĝ(y). This concludes our proof.

The results that follow apply to both the static and dynamic model unless stated otherwise.

4.2 Participation in Multiple Auctions
We now show that, for any valuation 0 < v ≤ vmax, it is always profitable for a global bidder to participate in an additional auction. More formally:

Theorem 1 Given a valuation 0 < v ≤ vmax and a global bid B with |B| < M and bi < v for all bi ∈ B, there exists a bid bj > 0 such that U(B ∪ {bj}, v) > U(B, v).

PROOF Using equation 1, the marginal expected utility for participating in an additional auction j can be written as:

U(B ∪ {bj}, v) − U(B, v) = v G(bj) ∏_{b∈B} (1 − G(b)) − ∫₀^{bj} y g(y) dy ≥ G(bj) [v ∏_{b∈B} (1 − G(b)) − bj],

since ∫₀^{bj} y g(y) dy ≤ bj G(bj). This difference is strictly positive for any 0 < bj < v ∏_{b∈B} (1 − G(b)).

4.3 The Optimal Global Bid
A general solution to the optimal global bid requires the maximisation of equation 1 in M dimensions, an arduous task, even when applying numerical methods. In this section, however, we show how to reduce the entire bid space to two dimensions in most cases (one continuous, and one discrete), thereby significantly simplifying the problem at hand. First, however, in order to find the optimal solutions to equation 1, we set the partial derivatives to zero:

∂U/∂bi = g(bi) [v ∏_{bj∈B\{bi}} (1 − G(bj)) − bi] = 0   (5)

Now, equality 5 holds either when g(bi) = 0 or when v ∏_{bj∈B\{bi}} (1 − G(bj)) − bi = 0. In the dynamic model, g(bi) is always greater than zero, and can therefore be ignored (since g(0) = N f(0) e^{−N} and we assume f(y) > 0). In the static model, g(bi) = 0 only when bi = 0. However, theorem 1 shows that the optimal bid is non-zero for any valuation 0 < v ≤ vmax, so the first-order condition becomes:

bi = v ∏_{bj∈B\{bi}} (1 − G(bj))   (6)

We now show that, when the density f is non-decreasing, an optimal global bid contains at most two distinct values, even when M > 2. That is, the search space for finding the optimal bid can then be reduced to two continuous values. Let these values be bhigh and blow, where bhigh ≥ blow. More formally:

Theorem 2 Suppose the probability density function f is non-decreasing within the range [0, vmax]; then the following proposition holds: given v > 0, for any bi ∈ B, either bi = bhigh, bi = blow, or bi = bhigh = blow.

PROOF Using equation 6, we can produce M equations, one for each auction, with M unknowns. Now, by combining these equations, we obtain the following relationship: b1(1 − G(b1)) = b2(1 − G(b2)) = · · · = bM(1 − G(bM)). In order to
prove that there exist at most two different bids, it is sufficient to show that b = H⁻¹(y), where H(b) = b(1 − G(b)), has at most two solutions that satisfy 0 ≤ b ≤ vmax. In other words, conditions 9 and 10 show that H(b) is strictly concave when the probability density function is non-decreasing for 0 ≤ b ≤ vmax. The number of high bids in the optimal solution cannot, in general, be determined analytically. In all experiments, however, we consistently observe that the optimal strategy is always to place a high bid in one auction, and an equal or lower bid in all others. In case of a bifurcation, and when the valuation approaches vmax, the optimal high bid goes to the true value and the low bids go to zero. As illustrated in Figure 1, the utility of a global bidder becomes progressively higher with more auctions. In absolute terms, the improvement is especially high for bidders that have an above-average valuation, but one not too close to vmax. The bidders in this range thus benefit most from bidding globally. This is because bidders with very low valuations have a very small chance of winning any auction, whereas bidders with a very high valuation have a high probability of winning a single auction and benefit less from participating in more auctions. In contrast, if we consider the utility relative to bidding in a single auction, this is much higher for bidders with relatively low valuations (this effect cannot be seen clearly in Figure 1 due to the scale). In particular, we notice that a global bidder with a low valuation can improve its utility by up to M times the expected utility of bidding locally. Intuitively, this is because the chance of winning one of the auctions increases by up to a factor M, whereas the increase in the expected cost is negligible. For buyers with high valuations, however, the benefit is less clear-cut because the chances of winning are relatively high even in the case of a single auction.

4.4 Sequential and Concurrent Auctions
In this section we extend our analysis of the optimal bidding strategy to sequential auctions. Specifically, the auction process
consists of R rounds, and in each round any number of auctions may run simultaneously. Such a combination of sequential and concurrent auctions is very common in practice, especially online⁵. It turns out that the analysis for the case of simultaneous auctions is quite general and can easily be extended to include sequential auctions. In the following, the number of simultaneous auctions in round r is denoted by Mr, and the set of bids in that round by Br. As before, the analysis assumes that all other bidders are local and bid in a single auction. Furthermore, we assume that the global bidders have complete knowledge about the number of rounds and the number of auctions in each round. The expected utility in round r, denoted by Ur, is similar to before (equation 1 in Section 4.1), except that now additional benefit can be obtained from future auctions if the desired item is not won in one of the current set of simultaneous auctions. For convenience, Ur(Br, Mr) is abbreviated to Ur in the following. The expected utility thus becomes:

Ur = v Pr(Br) + (1 − Pr(Br)) Ur+1 − Σ_{bri∈Br} ∫₀^{bri} y g(y) dy   (12)

where Pr(Br) = 1 − ∏_{bri∈Br} (1 − G(bri)) is the probability of winning at least one auction in round r. Now, we take the partial derivative of equation 12 in order to find the optimal bid brj for auction j in round r:

∂Ur/∂brj = g(brj) [(v − Ur+1) ∏_{bri∈Br\{brj}} (1 − G(bri)) − brj] = 0   (13)

⁵Rather than being purely sequential in nature, online auctions also often overlap (i.e., new auctions can start while others are still ongoing). In that case, however, it is optimal to wait and bid in the new auctions only after the outcome of the earlier auctions is known, thereby reducing the chance of unwittingly winning multiple items. Using this strategy, overlapping auctions effectively become sequential and can thus be analysed using the results in this section.
Note that equation 13 is almost identical to equation 5 in Section 4.3, except that the valuation v is now replaced by v − Ur+1. The optimal bidding strategy can thus be found by backward induction (where UR+1 = 0) using the procedure outlined in Section 4.3.

5. MULTIPLE GLOBAL BIDDERS
As argued in Section 3.2, we expect a real-world market to exhibit a mix of global and local bidders. Whereas so far we assumed a single global bidder, in this section we consider a setting where multiple global bidders interact with one another and with local bidders as well. The analysis of this problem is complex, however, as the optimal bidding strategy of a global bidder depends on the strategy of the other global bidders. A typical analytical approach is to find the symmetric Nash equilibrium solution [9, 12], which occurs when all global bidders use the same strategy to produce their bids, and no (global) bidder has any incentive to unilaterally deviate from the chosen strategy. Due to the complexity of the problem, however, here we combine a computational simulation approach with analytical results. The simulation works by iteratively finding the best response to the optimal bidding strategies in the previous iteration. If this results in a stable outcome (i.e., when the optimal bidding strategies remain unchanged for two subsequent iterations), the solution is by definition a (symmetric) Nash equilibrium.

5.1 The Global Bidder's Expected Utility
In order to find a global bidder's best response, we first need to calculate the expected utility given the global bid B and the strategies of both the other global bidders and the local bidders. In the following, let Ng denote the number of other global bidders. Furthermore, let the strategies of the other global bidders be represented by the set of functions βk(v), 1 ≤ k ≤ Ng.

ξi = z∗1 ↔ (φ∗1) if i = 1
z\u2217 i \u2194 (\u03c6\u2217 i \u2227i\u22121 j=1 \u03bej) otherwise.\nAnd we define the formula to be model checked as: \u03c6\u2217 k \u2227k\u22121 j=1 \u03bej It is now straightforward from construction that this formula is true under the interpretation iff zk is true in the snsat instance.\nThe proof of the latter half of the theorem is immediate from the special case where k = 1.\n3.2 Some Properties We have thus defined a language which can be used to express properties of judgment aggregation rules.\nAn interesting question is then: what are the universal properties of aggregation rules expressible in the language; which formulae are valid?\nHere, in order to illustrate the logic, we discuss some of these logical properties.\nIn Section 5 we give a complete axiomatisation of all of them.\nRecall that we defined the set O of outcomes as the set of all conjunctions with exactly one, possibly negated, atom from \u03a3.\nLet P = {o \u2227 \u03c3, o \u2227 \u00ac\u03c3 : o \u2208 O}; p \u2208 P completely describes the decisions of the agents and the aggregation function.\nLet denote exclusive or.\nWe have that: |=L p\u2208Pp - any agent and the JAR always have to make a decision |=L (i \u2227 \u00acj) \u2192 \u00aci - if some agent can think differently about an item than i does, then also i can change his mind about it.\nIn fact this principle can be strengthened to |=L ( i \u2227 \u00acj) \u2192 (\u00aci \u2227 j) |=L x - for any x \u2208 {i, \u00aci, \u03c3, \u00ac\u03c3 : i \u2208 \u03a3} - both the individual agents and the JAR will always judge some agenda item to be true, and conversely, some agenda item to be false |=L (i \u2227 j) - there exist admissible judgment sets such that agents i and j agree on some judgment.\n|=L (i \u2194 j) - there exist admissible judgment sets such that agents i and j always agree.\nThe Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 569 The interpretation of formulae depends on the 
agenda A and the underlying logic L, in the quantification over the set J(A, L)^n of admissible, e.g., complete and L-consistent, judgment profiles. Note that this means that some jal formula might be valid under one underlying logic, while not under another. For example, if the agenda contains some formula which is inconsistent in the underlying logic (and, by implication, some tautology), then the following holds:

|=L (i ∧ σ) - for every judgment profile, there is some agenda item (take a tautology) which both agent i and the JAR judge to be true

But this property does not hold when every agenda item is consistent with respect to the underlying logic. One such agenda and underlying logic will be discussed in Section 6.

4. EXPRESSIVITY EXAMPLES
Non-dictatorship can be expressed as follows:

ND = ⋀_{i∈Σ} ¬(σ ↔ i)   (1)

Lemma 1. f |=L ND iff f has the property ND1.

Independence can be expressed as follows:

IND = ⋀_{o∈O} ((o ∧ σ) → (o → σ))   (2)

Lemma 2. f |=L IND iff f has the property IND.

Unanimity can be expressed as follows:

UNA = ((1 ∧ · · · ∧ n) → σ)   (3)

Lemma 3. f |=L UNA iff f has the property UNA.

4.1 The Discursive Paradox
As illustrated in Example 1, the following formula expresses proposition-wise majority voting over some proposition p:

MV = σ ↔ ⋁_{G⊆Σ, |G|>n/2} ⋀_{i∈G} i   (4)

i.e., the following property of a JAR f and admissible profile A1, ..., An: p ∈ f(A1, ..., An) ⇔ |{i : p ∈ Ai}| > |{i : p ∉ Ai}|. f |=L MV iff f has the above property for all judgment profiles and propositions. However, we have the following in our logic. Assume that the agenda contains at least two distinct formulae and their material implication (i.e., A contains p, q, p → q for some p, q ∈ L(L)).

Proposition 1 (Discursive Paradox). |=L ((MV) → ⊥) when there are at least three agents and the agenda contains at least two distinct formulae and their material implication.

Proof. Assume the opposite, e.g., that A = {p, p → q, q, ¬p, ¬(p → q), ¬q, ...} and there exists an aggregation rule f over A such that f |=L (σ ↔ ⋁_{G⊆Σ,|G|>n/2} ⋀_{i∈G} i). Let γ be the judgment profile γ = A1, A2, A3, where A1 = {p, p → q, q, ...}, A2 = {p, ¬(p → q), ¬q, ...} and A3 = {¬p, p → q, ¬q, ...}. We have that f, γ, p′ |=L (σ ↔ ⋁_{G⊆Σ,|G|>n/2} ⋀_{i∈G} i) for any p′, so f, γ, p |=L σ ↔ ⋁_{G⊆Σ,|G|>n/2} ⋀_{i∈G} i.
Because f, \u03b3, p |=L 1 \u2227 2, it follows that f, \u03b3, p |=L \u03c3.\nIn a similar manner it follows that f, \u03b3, p \u2192 q |=L \u03c3 and f, \u03b3, q |=L \u00ac\u03c3.\nIn other words, p \u2208 f(\u03b3), p \u2192 q \u2208 f(\u03b3) and q f(\u03b3).\nSince f(\u03b3) is complete, \u00acq \u2208 f(\u03b3).\nBut that contradicts the fact that f(\u03b3) is required to be consistent.\nProposition 1 is a logical statement of a variant of the well-known discursive dilemma: if three agents are voting on propositions p, q and p \u2192 q, proposition-wise majority voting might not yield a consistent result.\n5.\nAXIOMATISATION Given an underlying logic L, a finite agenda A over L, and a set of agents \u03a3, Judgment Aggregation Logic (jal(L), or just jal when L is understood) for the language L(\u03a3, A), is defined in Table 2.\n\u00ac(hp \u2227 hq) if p q Atmost p\u2208A hp Atleast hp p \u2208 A Agenda (hp \u2227 \u03d5) \u2192 (hp \u2192 \u03d5) Once (hp \u2227 x) \u2228 (hp \u2227 x) CpJS all instantiations of propositional tautologies taut (\u03c81 \u2192 \u03c82) \u2192 ( \u03c81 \u2192 \u03c82) K \u03c8 \u2192 \u03c8 T \u03c8 \u2192 \u03c8 4 \u00ac \u03c8 \u2192 \u00ac \u03c8 5 ( i \u2227 \u00acj) \u2192 o\u2208O o C \u03c8 \u2194 \u03c8 (COMM) From p1, ... pn L q infer (hp1 \u2227 x) \u2227 \u00b7 \u00b7 \u00b7 \u2227 (hpn \u2227 x) \u2192 (hq \u2192 x) \u2227 (hq \u2192 \u00acx) Closure From \u03d5 \u2192 \u03c8 and \u03d5 infer \u03c8 MP From \u03c8 infer \u03c8 Nec Table 2: The logic jal(L) for the language L(\u03a3, A).\np, pi, q range over the agenda A; \u03c6,\u03c8,\u03c8i over L(\u03a3, A); x over {\u03c3, i : i \u2208 \u03a3}; over { , }; i, j over \u03a3; o over the set of outcomes O. hp means hq when p = \u00acq for some q, otherwise it means h\u00acp. 
L is the underlying logic.

The first 5 axioms represent properties of a table and of judgment sets. Axiom Atmost says that there is at most one item on the table at a time, and Atleast says that we always have an item on the table. Axiom Agenda says that every agenda item will appear on the table, whereas Once says that every item of the agenda appears on the table only once. Note that a conjunction hp ∧ x reads: item p is on the agenda, and x is in favour of it, or x judges it true. Axiom CpJS corresponds to the requirement that judgment sets are complete. Note that from Agenda, CsJS and CpJS we derive the scheme x ∧ ¬x, which says that everybody should at least express one opinion in favour of something, and against something. The axioms taut-5 are well familiar from modal logic: they directly reflect the unrestricted quantification in the truth definition. Axiom C says that, for any agenda item for which it is possible to have opposing opinions, every possible outcome for that item should be achievable. COMM says that everything that is true for an arbitrary profile and item is also true for an arbitrary item and profile. Closure guarantees that agents behave consistently with respect to consequence in the logic L. MP and Nec are standard. We use ⊢JAL(L) to denote derivability in jal(L).

Theorem 3. If the agenda is finite, we have that for any formula ψ ∈ L(Σ, A), ⊢JAL(L) ψ iff |=L ψ.

Proof. Soundness is straightforward. For completeness (we focus on the main idea here and leave out trivial details), we build a jal table for a consistent formula ψ as follows. In fact, our axiomatisation completely determines a table, except for the behaviour of f.
To be more precise, let a table description be a conjunction of the form hp ∧ o ∧ (¬)σ. It is easy to see that table descriptions are mutually exclusive and, moreover, that we can derive ⋁_{τ∈T} τ, where T is the set of all table descriptions. Let D be the set of all maximal consistent sets Δ. We don't want all of those: it might well be that ψ requires σ to be in a certain way, which is incompatible with some Δ's. We define two accessibility relations in the standard way: R Δ1Δ2 iff for all ψ: ψ ∈ Δ1 ⇒ ψ ∈ Δ2, and similarly for the second relation with respect to the other modality. Both relations are equivalences (due to taut-5), and moreover, when Δ2 is reachable from Δ1 by the first relation and Δ3 from Δ2 by the second, then there is some Δ2′ such that Δ2′ is reachable from Δ1 by the second relation and Δ3 from Δ2′ by the first (because of axiom COMM). Let Δ0 be a MCS containing ψ. We now define the set Tables = {Δ0} ∪ {Δ1, Δ2 | (R Δ0Δ1 and R Δ1Δ2) or (R Δ0Δ1 and R Δ1Δ2)}. Every Δ ∈ Tables can be conceived as a pair γ, p, since every Δ contains a unique (hq ∧ o ∧ (¬)σ) for every hq, and a unique hp. It is then easy to verify that, for every Δ ∈ Tables and every formula φ, Δ |= φ iff φ ∈ Δ, where |= here means truth in the ordinary modal-logic sense when the set of states is taken to be Tables. Now, we extract an aggregation function f and pairs γ, p as follows. For every Δ ∈ Tables, find a conjunction hp ∧ o ∧ (¬)σ. There will be exactly one such p. This defines the p we are looking for. Furthermore, the γ is obtained, for every agent i, by finding all q for which (hq ∧ i) is currently true. Finally, the function f is a table of all tuples hp, o(p), σ for which (hp ∧ o(p) ∧ σ) is contained in some set in Tables.

We point out that jal has all the axioms taut, K, T, 4, 5 and the rules MP and Nec of the modal logic S5. However, uniform substitution, a principle of all normal modal logics (cf., e.g., [3]), does not hold. A counterexample is the fact that the following is valid:

σ   (5)

(no matter what preferences the agents have, the JAR will always make some judgment), while this is not valid:

(σ ∧ i)   (6)

(the JAR will not necessarily make the same judgments as agent i). So, for example, we have that the discursive paradox is provable in jal(L): ⊢JAL(L) ((MV) → ⊥). An example of a derivation of the less complicated (valid) property (i ∧ j) is shown in Table 3.

6. PREFERENCE AGGREGATION
Recently, Dietrich and List [5] showed that preference aggregation can be embedded in judgment aggregation. In this section we show that our judgment aggregation logic can also be used to reason about preference aggregation. Given a set K of alternatives, [5] defines a simple predicate logic LK with language L(LK) as follows:

• L(LK) has one constant a for each alternative a ∈ K, variables v1, v2, ..., a binary identity predicate =, a binary predicate P for strict preference, and the usual propositional and first-order connectives

• Z is the collection of the following axioms:
- ∀v1∀v2 (v1Pv2 → ¬v2Pv1)
- ∀v1∀v2∀v3 ((v1Pv2 ∧ v2Pv3) → v1Pv3)
- ∀v1∀v2 (¬v1 = v2 → (v1Pv2 ∨ v2Pv1))

• When Γ ⊆ L(LK) and φ is a formula, Γ |= φ is defined to hold iff Γ ∪ Z entails φ in the standard sense of predicate logic

1 (hp ∧ i) ∨ (h~p ∧ i)   CpJS(i)
2 (hp ∧ j) ∨ (h~p ∧ j)   CpJS(j)
3 Call 1 A ∨ B and 2 C ∨ D   abbreviation, 1, 2
4 (A ∧ C) ∨ (A ∧ D) ∨ (B ∧ C) ∨ (B ∧ D)   taut, 3
5 derive (i ∧ j) from every disjunct of 4   strategy is ∨-elim
6 (hp ∧ i) ∧ (hp ∧ j)   assume A ∧ C
7 (hp → (i ∧
j))   Once, 6, K( )
8 (i ∧ j)   7, Agenda
9 (i ∧ j)   8, T( )
10 (hp ∧ i) ∧ (h~p ∧ j)   assume A ∧ D
11 (h~p ∧ x) ↔ (hp ∧ ¬x)   Agenda, Closure
12 (hp ∧ i) ∧ (hp ∧ ¬j)   10, 11
13 (hp ∧ i ∧ ¬j)   12, Once, K( )
14 (i ∧ ¬j)   13, taut
15 (i ∧ ¬j)   14, K( )
16 (i ∧ ¬j)   15, COMM
17 (i ∧ ¬j)   16, K( )
18 (i ∧ j)   17, C
19 (h~p ∧ i) ∧ (h~p ∧ j)   assume B ∧ D
20 goes as 6-9
21 (h~p ∧ i) ∧ (hp ∧ j)   assume B ∧ C
22 goes as 10-18
23 (i ∧ j)   ∨-elim, 1, 2, 9, 18, 20, 22

Table 3: jal derivation of (i ∧ j)

It is easy to see that there is a one-to-one correspondence between the set of preference relations (total linear orders) over K and the set of LK-consistent and complete judgment sets over the preference agenda AK = {aPb, ¬aPb : a, b ∈ K, a ≠ b}. Given a SWF F over K, the corresponding JAR fF over the preference agenda AK is defined as follows: fF(A1, ..., An) = A, where A is the consistent and complete judgment set corresponding to F(L1, ...
, Ln), where Li is the preference relation corresponding to the consistent and complete judgment set Ai. Thus we can use jal to reason about preference aggregation as follows. Take the logical language L(Σ, AK), for some set of agents Σ, and take the underlying logic to be LK. We can then interpret our formulae in an SWF F over K, a preference profile L ∈ L(K) and a pair (a, b) ∈ K × K, a ≠ b, as follows:

F, L, (a, b) |=swf φ ⇔ fF, γL, aPb |=LK φ

where γL is the judgment profile corresponding to the preference profile L. While in the general judgment aggregation case a formula is interpreted in the context of an agenda item, in the preference aggregation case a formula is thus interpreted in the context of a pair of alternatives.

Example 2. Three agents must decide between going to dinner (d), a movie (m) or a concert (c). Their individual preferences are illustrated on the right in Table 1 in Section 3, along with the result of a SWF Fmaj implementing pair-wise majority voting. Let L = mdc, mcd, cmd be the preference profile corresponding to the preferences in the example. We have the following:

• Fmaj, L, (m, d) |=swf 1 ∧ 2 ∧ 3 (all agents agree, under the individual rankings L, on the relative ranking of m and d: they agree that d is better than m)

• Fmaj, L, (m, d) |=swf ¬(1 ↔ 2) (under the individual rankings L, there is some pair of alternatives on which agents 1 and 2 disagree)

• Fmaj, L, (m, d) |=swf (1 ∧ 2) (agents 1 and 2 can choose their preferences such that they will agree on some pair of alternatives)

• Fmaj, L, (m, d) |=swf σ ↔ ⋁_{G⊆{1,2,3},|G|≥2} ⋀_{i∈G} i (the SWF Fmaj implements pair-wise majority voting)

As usual, we write F |=swf φ when F, L, (a, b) |=swf φ for any L and (a, b), and so on. Thus, our formulae can be seen as
expressing properties of social welfare functions.\nExample 3.\nTake the formula (i \u2194 \u03c3).\nWhen this formula is interpreted as a statement about a social welfare function, it says that there exists a preference profile such that for all pairs (a, b) of alternatives, b is preferred over a in the aggregation (by the SWF) of the preference profile if and only if agent i prefers b over a. 6.1 Expressivity Examples We make precise the claim in Section 2.2 that the three mentioned SWF properties correspond to the three mentioned JAR properties, respectively.\nRecall the formulae defined in Section 4.\nProposition 2.\nF |=swf ND iff F has the property ND2 F |=swf IND iff F has the property IIA F |=swf UNA iff F has the property PO The properties expressed above are properties of SWFs.\nLet us now look at properties of the set of alternatives K we can express.\nProperties involving cardinality is often of interest, for example in Arrow``s theorem.\nLet: MT2 = ( (1 \u2227 2) \u2227 (1 \u2227 \u00ac2)) Proposition 3.\nLet F \u2208 F (K).\n|K| > 2 iff F |=swf MT2.\nProof.\nFor the direction to the left, let F |=swf MT2.\nThus, there is a \u03b3 such that there exists (a1 , b1 ), (a2 , b2 ) \u2208 K \u00d7 K, where a1 b1 , and a2 b2 , such that (i) a1 Pb1 \u2208 \u03b31, (ii) a1 Pb1 \u2208 \u03b32, (iii) a2 Pb2 \u2208 \u03b31 and (iv) a2 Pb2 \u03b32.\nFrom (ii) and (iv) we get that (a1 , b1 ) (a2 , b2 ), and from that and (i) and (iii) it follows that \u03b31 contains two different pairs a1 Pb1 and a2 Pb2 each having two different elements.\nBut that is not possible if |K| = 2, because if K = {a, b} then AK = {aPb, \u00acaPb, bPa, \u00acbPa} and thus it is impossible that \u03b31 \u2286 AK since we cannot have aPb, bPa \u2208 \u03b31.\nFor the direction to the right, let |K| > 2; let a, b, c be three distinct elements of K. 
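The one-to-one correspondence between total linear orders over K and LK-consistent, complete judgment sets over the preference agenda AK can be sketched concretely. The following Python illustration is ours, not the paper's: it assumes rankings encoded as tuples written most-preferred first, and represents the agenda formulas aPb and ¬aPb as triples (a, b, True/False).

```python
from itertools import permutations

def agenda(K):
    """Preference agenda A_K: a formula aPb and its negation for each
    ordered pair of distinct alternatives a, b (encoded (a, b, polarity))."""
    return {(a, b, pol) for a in K for b in K if a != b for pol in (True, False)}

def order_to_judgment_set(ranking):
    """Total linear order (tuple, most-preferred first) -> the corresponding
    consistent and complete judgment set: accept aPb iff a is ranked above b."""
    pos = {x: i for i, x in enumerate(ranking)}
    return {(a, b, pos[a] < pos[b]) for a in ranking for b in ranking if a != b}

def judgment_set_to_order(J, K):
    """Inverse map: rank each alternative by how many others it beats."""
    beats = {a: sum(1 for b in K if b != a and (a, b, True) in J) for a in K}
    return tuple(sorted(K, key=lambda a: -beats[a]))

K = ('d', 'm', 'c')
for ranking in permutations(K):
    J = order_to_judgment_set(ranking)
    assert len(J) == len(agenda(K)) // 2           # complete: decides every aPb
    assert judgment_set_to_order(J, K) == ranking  # round trip: a bijection
```

A SWF F is then lifted to the JAR fF exactly as in the definition above: translate each judgment set Ai to a ranking Li, apply F, and translate the resulting order back to a judgment set.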
Let γ1 be the judgment set corresponding to the ranking abc and γ2 the judgment set corresponding to acb. Now, for any aggregation rule f, f, γ, aPb |= 1 ∧ 2 and f, γ, bPc |= 1 ∧ ¬2. Thus, F |=swf MT2 for any SWF F.

We now have everything we need to express Arrow's statement as a formula. It follows from his theorem that the formula is valid on the class of all social welfare functions.

Theorem 4. |=swf MT2 → ¬(PO ∧ ND ∧ IIA)

Proof. Note that MT2, PO, ND and IIA are true SWF properties: their truth value with respect to a triple F, L, (a, b) is determined solely by the SWF. For example, F, L, (a, b) |=swf MT2 iff F |=swf MT2, for any F, L, a, b. Let F ∈ F(K), and F, L, (a, b) |=swf MT2 for some L and a, b. By Proposition 3, K has more than two alternatives. By Arrow's theorem, F cannot have all the properties PO, ND2 and IIA. W.l.o.g. assume that F does not have the PO property. By Proposition 2, F ⊭swf PO. Since PO is a SWF property, this means that F, L, (a, b) ⊭swf PO (satisfaction of PO is independent of L, a and b), and thus that F, L, (a, b) |=swf ¬PO ∨ ¬ND ∨ ¬IIA.

Note that the formula in Theorem 4 does not mention any agenda items (i.e., pairs of alternatives) such as aPb directly in an expression. This means that the formula is a member of L(Σ, AK) for any set of alternatives K, and is valid no matter which set of alternatives we assume.

The formula MV, which in the general judgment aggregation case expresses proposition-wise majority voting, expresses in the preference aggregation case pair-wise majority voting, as illustrated in Example 2. The preference aggregation correspondent of the discursive paradox of judgment aggregation is the well-known Condorcet's voting paradox, stating that pair-wise majority voting can lead to aggregated preferences which are cyclic (even if the individual preferences are not). We can express Condorcet's paradox as follows, again as a universally valid logical property of SWFs.

Proposition 4. |=swf MT2 → ¬MV, when there are at least three agents.

Proof. The proof is similar to the proof of the discursive paradox. Let fF, γ, aPb |=LK MT2; there are thus three distinct elements a, b, c ∈ K. Assume that fF, γ, aPb |=LK MV. Let γ′ be the judgment profile corresponding to the preference profile X = (abc, cab, bca). We have that fF, γ′, aPb |=LK 1 ∧ 2 and, since fF, γ′, aPb |=LK MV, we have that fF, γ′, aPb |=LK σ, and thus that aPb ∈ fF(γ′) and (a, b) ∈ F(X). In a similar manner we get that (c, a) ∈ F(X) and (b, c) ∈ F(X). But that is impossible, since by transitivity we would also have that (a, c) ∈ F(X), which contradicts the fact that F(X) is antisymmetric. Thus, it follows that fF, γ, aPb ⊭LK MV.

6.2 Axiomatisation and Logical Properties

We immediately get, from Theorem 3, a sound and complete axiomatisation of preference aggregation over a finite set of alternatives.

Corollary 1. If the set of alternatives K is finite, we have that for any formula ψ ∈ L(Σ, AK), ⊢JAL(LK) ψ iff |=swf ψ.

Proof. Follows immediately from Theorem 3 and the fact that for any JAR f, there is a SWF F such that f = fF.

So, for example, Arrow's theorem is provable in jal(LK): ⊢JAL(LK) MT2 → ¬(PO ∧ ND ∧ IIA).

Every formula which is valid with respect to judgment aggregation rules is also valid with respect to social welfare functions, so all general logical properties of JARs are also properties of SWFs. Depending on the agenda, SWFs may have additional properties, induced by the logic LK, which are not always shared by JARs with other underlying logics. One such property is i. While we have |=swf i, for other agendas there are underlying logics L such that ⊭L i. To see the latter, take an agenda with a formula p which is inconsistent in the underlying logic L: p can never be included in a judgment set. To see the former, take an arbitrary pair of alternatives (a, b); there exists some preference profile in which agent i prefers b over a. Technically speaking, the formula i holds in SWFs because the agenda AK does not contain a formula which (alone) is inconsistent wrt. the underlying logic LK. For the same reason, the following properties also hold in SWFs but not in JARs in general.

|=swf o∈O o
- for any pair of alternatives (a, b), any possible combination of the relative ranking of a and b among the agents is possible.

|=swf i → ¬i
- given an alternative b which is preferred over some other alternative a by agent i, there is some other pair of alternatives c and d such that d is not preferred over c, namely (c, d) = (b, a).

|=swf ((i ∨ j) → (i ∧ ¬j))
- if, given preferences of agents and a SWF, for any two alternatives it is always the case that either agent i or agent j prefers the second alternative over the first, then there must exist a pair of alternatives for which the two agents disagree. A justification is that no single agent can prefer the second alternative over the first for every pair of alternatives, so in this case if i prefers b over a then j must prefer a over b.
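The cyclic majority outcome behind Condorcet's paradox (Proposition 4) can also be checked by direct computation. This Python sketch is our own illustration; it reads the profile X = (abc, cab, bca) from the proof as tuples written most-preferred first (under the opposite reading the cycle simply runs the other way).

```python
def prefers(ranking, a, b):
    """True iff a is ranked above b (rankings are tuples, most-preferred first)."""
    return ranking.index(a) < ranking.index(b)

def pairwise_majority(profile, K):
    """Collective strict preference: a over b iff a strict majority of the
    individual rankings place a above b."""
    return {(a, b) for a in K for b in K if a != b
            if 2 * sum(prefers(r, a, b) for r in profile) > len(profile)}

# The profile X = (abc, cab, bca) from the proof of Proposition 4.
X = [('a', 'b', 'c'), ('c', 'a', 'b'), ('b', 'c', 'a')]
P = pairwise_majority(X, ('a', 'b', 'c'))

# Majority yields the cycle a > b > c > a: transitivity fails, so no SWF
# (which must output a linear order) can implement pair-wise majority here.
assert P == {('a', 'b'), ('b', 'c'), ('c', 'a')}
```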
Again, this property does not necessarily hold for other agendas, because the agenda might contain an inconsistency the agents could not possibly disagree upon. Proof-theoretically, these additional properties of SWFs are derived using the Closure rule.

7. RELATED WORK

Formal logics related to social choice have focused mostly on the logical representation of preferences when the set of alternatives is large, and on the computational properties of computing aggregated preferences for a given representation [6, 7, 8]. A notable and recent exception is a logical framework for judgment aggregation developed by Marc Pauly in [10], in order to be able to characterise the logical relationships between different judgment aggregation rules. While the motivation is similar to the work in this paper, the approaches are fundamentally different: in [10], the possible results from applying a rule to some judgment profile are taken as primary and described axiomatically; in our approach the aggregation rule and its possible inputs, i.e., judgment profiles, are taken as primary and described axiomatically. The two approaches do not seem to be directly related to each other in the sense that one can be embedded in the other.

The modal logic arrow logic [11] is designed to reason about any object that can be graphically represented as an arrow, and has various modal operators for expressing properties of and relationships between these arrows. In the preference aggregation logic jal(LK) we interpreted formulae in pairs of alternatives, which can be seen as arrows. Thus, (at least) the preference aggregation variant of our logic is related to arrow logic. However, while the modal operators of arrow logic can express properties of preference relations such as transitivity, they cannot directly express most of the properties we have discussed in this paper. Nevertheless, the relationship to arrow logic could be investigated further in future work. In particular, arrow logics are usually proven complete wrt. an algebra. This could mean that it might be possible to use such algebras as the underlying structure to represent individual and collective preferences. Then, changing the preference profile takes us from one algebra to another, and a SWF determines the collective preference in each of the algebras.

8. DISCUSSION

We have presented a sound and complete logic jal for representing and reasoning about judgment aggregation. jal is expressive: it can express judgment aggregation rules such as majority voting; complicated properties such as independence; and important results such as the discursive paradox, Arrow's theorem and Condorcet's paradox. We argue that these results show exactly which logical capabilities an agent needs in order to be able to reason about judgment aggregation. It is perhaps surprising that a relatively simple language provides these capabilities. jal provides a proof theory, in which results such as those mentioned above can be derived.³ The axiomatisation describes the logical principles of judgment aggregation, and can also be instantiated to reason about specific instances of judgment aggregation, such as classical Arrovian preference aggregation. Thus our framework sheds light on the differences between the logical principles behind general judgment aggregation on the one hand and classical preference aggregation on the other. In future work it would be interesting to relax the completeness and consistency requirements of judgment sets, and try to characterise these in the logical language, as properties of general judgment sets, instead.

9. ACKNOWLEDGMENTS

We thank the anonymous reviewers for their helpful remarks. Thomas Ågotnes' work on this paper was supported by grants 166525/V30 and 176853/S10 from the Research Council of Norway.

10. REFERENCES

[1] K. J. Arrow. Social Choice and Individual Values. Wiley, 1951.
[2] K. J. Arrow, A. K. Sen, and K. Suzumura, eds. Handbook of Social Choice and Welfare, volume 1. North-Holland, 2002.
[3] P. Blackburn, M. de Rijke, and Y. Venema. Modal Logic. Cambridge University Press, 2001.
[4] E. M. Clarke, O. Grumberg, and D. A. Peled. Model Checking. The MIT Press, Cambridge, MA, 2000.
[5] F. Dietrich and C. List. Arrow's theorem in judgment aggregation. Social Choice and Welfare, 2006. Forthcoming.
[6] C. Lafage and J. Lang. Logical representation of preferences for group decision making. In Proceedings of the Conference on Principles of Knowledge Representation and Reasoning (KR-00), pages 457-470. Morgan Kaufmann, 2000.
[7] J. Lang. From preference representation to combinatorial vote. In Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR-02), pages 277-290. Morgan Kaufmann, 2002.
[8] J. Lang. Logical preference representation and combinatorial vote. Ann. Math. Artif. Intell., 42(1-3):37-71, 2004.
[9] C. H. Papadimitriou. Computational Complexity. Addison-Wesley, Reading, MA, 1994.
[10] M. Pauly. Axiomatizing collective judgment sets in a minimal logical language, 2006. Manuscript.
[11] Y. Venema. A crash course in arrow logic. In M. Marx, M. Masuch, and L.
Polos, editors, Arrow Logic and Multi-Modal Logic, pages 3-34.\nCSLI Publications, Stanford, 1996.\n3 Dietrich and List [5] prove a general version of Arrow``s theorem for JARs: for a strongly connected agenda, a JAR has the IND and UNA properties iff it does not have the ND1 property, where strong connectedness is an algebraic and logical condition on agendas.\nThus, if we assume that the agenda is strongly connected then (ND \u2227 UNA) \u2194 \u00acND1 is valid, and derivable in jar.\nAn interesting possibility for future work is to try to characterise conditions such as strong connectedness directly as a logical formula.\nThe Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 573","lvl-3":"Reasoning about Judgment and Preference Aggregation \u25e6\nABSTRACT\nAgents that must reach agreements with other agents need to reason about how their preferences, judgments, and beliefs might be aggregated with those of others by the social choice mechanisms that govern their interactions.\nThe recently emerging field of judgment aggregation studies aggregation from a logical perspective, and considers how multiple sets of logical formulae can be aggregated to a single consistent set.\nAs a special case, judgment aggregation can be seen to subsume classical preference aggregation.\nWe present a modal logic that is intended to support reasoning about judgment aggregation scenarios (and hence, as a special case, about preference aggregation): the logical language is interpreted directly in judgment aggregation rules.\nWe present a sound and complete axiomatisation of such rules.\nWe show that the logic can express aggregation rules such as majority voting; rule properties such as independence; and results such as the discursive paradox, Arrow's theorem and Condorcet's paradox--which are derivable as formal theorems of the logic.\nThe logic is parameterised in such a way that it can be used as a general framework for comparing the logical 
properties of different types of aggregation--including classical preference aggregation.\n1.\nINTRODUCTION\nIn this paper, we are interested in knowledge representation formalisms for systems in which agents need to aggregate their pref\nerences, judgments, beliefs, etc. .\nFor example, an agent may need to reason about majority voting in a group he is a member of.\nPreference aggregation--combining individuals' preference relations over some set of alternatives into a preference relation which represents the joint preferences of the group by so-called social welfare functions--has been extensively studied in social choice theory [2].\nThe recently emerging field of judgment aggregation studies aggregation from a logical perspective, and discusses how, given a consistent set of logical formulae for each agent, representing the agent's beliefs or judgments, we can aggregate these to a single consistent set of formulae.\nA variety of judgment aggregation rules have been developed to this end.\nAs a special case, judgment aggregation can be seen to subsume preference aggregation [5].\nIn this paper we present a logic, called Judgment Aggregation Logic (jal), for reasoning about judgment aggregation.\nThe formulae of the logic are interpreted as statements about judgment aggregation rules, and we give a sound and complete axiomatisation of all such rules.\nThe axiomatisation is parameterised in such a way that we can instantiate it to get a range of different judgment aggregation logics.\nFor example, one instance is an axiomatisation, in our language, of all social welfare functions--thus we get a logic of classical preference aggregation as well.\nAnd this is one of the main contributions of this paper: we identify the logical properties of judgment aggregation, and we can compare the logical properties of different classes of judgment aggregation--and of general judgment aggregation and preference aggregation in particular.\nOf course, a logic is only interesting 
as long as it is expressive.\nOne of the goals of this paper is to investigate the representational and logical capabilities an agent needs for judgment and preference aggregation; that is, what kind of logical language might be used to represent and reason about judgment aggregation?\nAn agent's knowledge representation language should be able to express: common aggregation rules such as majority voting; commonly discussed properties of judgment aggregation rules and social welfare functions such as independence; paradoxes commonly used to illustrate judgment aggregation and preference aggregation, viz.\nthe discursive paradox and Condorcet's paradox respectively; and other important properties such as Arrow's theorem.\nIn order to illustrate in more detail what such a language would need to be able to express, take the example of a potential property of social welfare functions (SWFs) called independence of irrelevant alternatives (IIA): given two preference profiles (each consisting of one preference relation for each agent) and two alternatives, if for each agent the two alternatives have the same order in the two preference profiles, then the two alternatives must have the same order in the two preference relations resulting from applying the SWF to the two preference profiles, respectively.\nFrom this example it seems that a formal language for SWFs should be able to express:\n978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS \u2022 Quantification on several levels: over alternatives; over preference profiles, i.e., over relations over alternatives (secondorder quantification); and over agents.\n\u2022 Properties of preference relations for different agents, and properties of several different preference relations for the same agent in the same formula.\n\u2022 Comparison of different preference relations.\n\u2022 The preference relation resulting from applying a SWF to other preference relations.\nFrom these points it might seem that such a language would be rather 
complex (in particular, these requirements seem to rule out a standard propositional modal logic).\nPerhaps surprisingly, the language of JAL is syntactically and semantically rather simple; and yet the language is, nevertheless, expressive enough to give elegant and succinct expressions of, e.g., IIA, majority voting, the discursive dilemma, Condorcet's paradox and Arrow's theorem.\nThis means, for example, that Arrow's theorem is a formal theorem of JAL, i.e., a derivable formula; we thus have a formal proof theory for social choice.\nThe structure of the rest of the paper is as follows.\nIn the next section we review the basics of judgment aggregation as well as preference aggregation, and mention some commonly discussed properties of judgment aggregation rules and social welfare functions.\nIn Section 3 we introduce the syntax and semantics of JAL, and study the complexity of the model checking problem.\nFormulae of JAL are interpreted directly by, and thus represent properties of, judgment aggregation rules.\nIn Section 4 we demonstrate that the logic can express commonly discussed properties of judgment aggregation rules, such as the discursive paradox.\nWe give a sound and complete axiomatisation of the logic in Section 5, under the assumption that the agenda the agents make judgments over is finite.\nAs mentioned above, preference aggregation can be seen as a special case of judgment aggregation, and in Section 6 we introduce an alternative interpretation of JAL formulae directly in social welfare functions.\nWe obtain a sound and complete axiomatisation of the logic for preference aggregation as well.\nSections 7 and 8 discusses related work and concludes.\n2.\nJUDGMENT AND PREFERENCE AGGREGATION\n2.1 Judgment Aggregation Rules\n2.2 Social Welfare Functions\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 567\n3.\nJUDGMENT AGGREGATION LOGIC: SYNTAX AND SEMANTICS\n3.1 Model Checking\n568 The Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.2 Some Properties\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 569\n4.\nEXPRESSIVITY EXAMPLES\n4.1 The Discursive Paradox\n5.\nAXIOMATISATION\n570 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n6.\nPREFERENCE AGGREGATION\n6.1 Expressivity Examples\n6.2 Axiomatisation and Logical Properties\n| = swf A\n572 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n7.\nRELATED WORK\nFormal logics related to social choice have focused mostly on the logical representation of preferences when the set of alternatives is large and on the computation properties of computing aggregated preferences for a given representation [6, 7, 8].\nA notable and recent exception is a logical framework for judgment aggregation developed by Marc Pauly in [10], in order to be able to characterise the logical relationships between different judgment aggregation rules.\nWhile the motivation is similar to the work in this paper, the approaches are fundamentally different: in [10], the possible results from applying a rule to some judgment profile are taken as primary and described axiomatically; in our approach the aggregation rule and its possible inputs, i.e., judgment profiles, are taken as primary and described axiomatically.\nThe two approaches do not seem to be directly related to each other in the sense that one can be embedded in the other.\nThe modal logic arrow logic [11] is designed to reason about any object that can be graphically represented as an arrow, and has various modal operators for expressing properties of and relationships between these arrows.\nIn the preference aggregation logic jal (LK) we interpreted formulae in pairs of alternatives--which can be seen as arrows.\nThus, (at least) the preference aggregation variant of our logic is related to arrow logic.\nHowever, while the 
modal operators of arrow logic can express properties of preference relations such as transitivity, they cannot directly express most of the properties we have discussed in this paper.\nNevertheless, the relationship to arrow logic could be investigated further in future work.\nIn particular, arrow logics are usually proven complete wrt.\nan algebra.\nThis could mean that it might be possible to use such algebras as the underlying structure to represent individual and collective preferences.\nThen, changing the preference profile takes us from one algebra to another, and a SWF determines the collective preference, in each of the algebras.\n8.\nDISCUSSION\nWe have presented a sound and complete logic jal for representing and reasoning about judgment aggregation.\njal is expressive: it can express judgment aggregation rules such as majority voting; complicated properties such as independence; and important results such as the discursive paradox, Arrow's theorem and Condorcet's paradox.\nWe argue that these results show exactly which logical capabilities an agent needs in order to be able to reason about judgment aggregation.\nIt is perhaps surprising that a relatively simple language provides these capabilities.\njal provides a proof theory, in which results such as those mentioned above can be derived3.\nThe axiomatisation describes the logical principles of judgment aggregation, and can also be instantiated to reason about specific instances of judgment aggregation, such as classical Arrovian preference aggregation.\nThus our framework sheds light on the differences between the logical principles behind general judgment aggregation on the one hand and classical preference aggregation on the other.\nIn future work it would be interesting to relax the completeness and consistency requirements of judgment sets, and try to characterise these in the logical language, as properties of general judgment sets, instead.","lvl-4":"Reasoning about Judgment and Preference 
Aggregation \u25e6\nABSTRACT\nAgents that must reach agreements with other agents need to reason about how their preferences, judgments, and beliefs might be aggregated with those of others by the social choice mechanisms that govern their interactions.\nThe recently emerging field of judgment aggregation studies aggregation from a logical perspective, and considers how multiple sets of logical formulae can be aggregated to a single consistent set.\nAs a special case, judgment aggregation can be seen to subsume classical preference aggregation.\nWe present a modal logic that is intended to support reasoning about judgment aggregation scenarios (and hence, as a special case, about preference aggregation): the logical language is interpreted directly in judgment aggregation rules.\nWe present a sound and complete axiomatisation of such rules.\nWe show that the logic can express aggregation rules such as majority voting; rule properties such as independence; and results such as the discursive paradox, Arrow's theorem and Condorcet's paradox--which are derivable as formal theorems of the logic.\nThe logic is parameterised in such a way that it can be used as a general framework for comparing the logical properties of different types of aggregation--including classical preference aggregation.\n1.\nINTRODUCTION\nIn this paper, we are interested in knowledge representation formalisms for systems in which agents need to aggregate their pref\nerences, judgments, beliefs, etc. 
.\nFor example, an agent may need to reason about majority voting in a group he is a member of.\nPreference aggregation--combining individuals' preference relations over some set of alternatives into a preference relation which represents the joint preferences of the group by so-called social welfare functions--has been extensively studied in social choice theory [2].\nThe recently emerging field of judgment aggregation studies aggregation from a logical perspective, and discusses how, given a consistent set of logical formulae for each agent, representing the agent's beliefs or judgments, we can aggregate these to a single consistent set of formulae.\nA variety of judgment aggregation rules have been developed to this end.\nAs a special case, judgment aggregation can be seen to subsume preference aggregation [5].\nIn this paper we present a logic, called Judgment Aggregation Logic (jal), for reasoning about judgment aggregation.\nThe formulae of the logic are interpreted as statements about judgment aggregation rules, and we give a sound and complete axiomatisation of all such rules.\nThe axiomatisation is parameterised in such a way that we can instantiate it to get a range of different judgment aggregation logics.\nFor example, one instance is an axiomatisation, in our language, of all social welfare functions--thus we get a logic of classical preference aggregation as well.\nAnd this is one of the main contributions of this paper: we identify the logical properties of judgment aggregation, and we can compare the logical properties of different classes of judgment aggregation--and of general judgment aggregation and preference aggregation in particular.\nOf course, a logic is only interesting as long as it is expressive.\nOne of the goals of this paper is to investigate the representational and logical capabilities an agent needs for judgment and preference aggregation; that is, what kind of logical language might be used to represent and reason about judgment 
aggregation?\nAn agent's knowledge representation language should be able to express: common aggregation rules such as majority voting; commonly discussed properties of judgment aggregation rules and social welfare functions such as independence; paradoxes commonly used to illustrate judgment aggregation and preference aggregation, viz.\nthe discursive paradox and Condorcet's paradox respectively; and other important properties such as Arrow's theorem.\nFrom this example it seems that a formal language for SWFs should be able to express:\n\u2022 Properties of preference relations for different agents, and properties of several different preference relations for the same agent in the same formula.\n\u2022 Comparison of different preference relations.\n\u2022 The preference relation resulting from applying a SWF to other preference relations.\nFrom these points it might seem that such a language would be rather complex (in particular, these requirements seem to rule out a standard propositional modal logic).\nIn the next section we review the basics of judgment aggregation as well as preference aggregation, and mention some commonly discussed properties of judgment aggregation rules and social welfare functions.\nFormulae of JAL are interpreted directly by, and thus represent properties of, judgment aggregation rules.\nIn Section 4 we demonstrate that the logic can express commonly discussed properties of judgment aggregation rules, such as the discursive paradox.\nWe give a sound and complete axiomatisation of the logic in Section 5, under the assumption that the agenda the agents make judgments over is finite.\nAs mentioned above, preference aggregation can be seen as a special case of judgment aggregation, and in Section 6 we introduce an alternative interpretation of JAL formulae directly in social welfare functions.\nWe obtain a sound and complete axiomatisation of the logic for preference aggregation as well.\nSections 7 and 8 discusses related work and 
concludes.\n7.\nRELATED WORK\nFormal logics related to social choice have focused mostly on the logical representation of preferences when the set of alternatives is large and on the computation properties of computing aggregated preferences for a given representation [6, 7, 8].\nA notable and recent exception is a logical framework for judgment aggregation developed by Marc Pauly in [10], in order to be able to characterise the logical relationships between different judgment aggregation rules.\nThe modal logic arrow logic [11] is designed to reason about any object that can be graphically represented as an arrow, and has various modal operators for expressing properties of and relationships between these arrows.\nIn the preference aggregation logic jal (LK) we interpreted formulae in pairs of alternatives--which can be seen as arrows.\nThus, (at least) the preference aggregation variant of our logic is related to arrow logic.\nHowever, while the modal operators of arrow logic can express properties of preference relations such as transitivity, they cannot directly express most of the properties we have discussed in this paper.\nNevertheless, the relationship to arrow logic could be investigated further in future work.\nIn particular, arrow logics are usually proven complete wrt.\nan algebra.\nThis could mean that it might be possible to use such algebras as the underlying structure to represent individual and collective preferences.\nThen, changing the preference profile takes us from one algebra to another, and a SWF determines the collective preference, in each of the algebras.\n8.\nDISCUSSION\nWe have presented a sound and complete logic jal for representing and reasoning about judgment aggregation.\njal is expressive: it can express judgment aggregation rules such as majority voting; complicated properties such as independence; and important results such as the discursive paradox, Arrow's theorem and Condorcet's paradox.\nWe argue that these results show 
exactly which logical capabilities an agent needs in order to be able to reason about judgment aggregation.\nIt is perhaps surprising that a relatively simple language provides these capabilities.\nThe axiomatisation describes the logical principles of judgment aggregation, and can also be instantiated to reason about specific instances of judgment aggregation, such as classical Arrovian preference aggregation.\nThus our framework sheds light on the differences between the logical principles behind general judgment aggregation on the one hand and classical preference aggregation on the other.\nIn future work it would be interesting to relax the completeness and consistency requirements of judgment sets, and try to characterise these in the logical language, as properties of general judgment sets, instead.","lvl-2":"Reasoning about Judgment and Preference Aggregation \u25e6\nABSTRACT\nAgents that must reach agreements with other agents need to reason about how their preferences, judgments, and beliefs might be aggregated with those of others by the social choice mechanisms that govern their interactions.\nThe recently emerging field of judgment aggregation studies aggregation from a logical perspective, and considers how multiple sets of logical formulae can be aggregated to a single consistent set.\nAs a special case, judgment aggregation can be seen to subsume classical preference aggregation.\nWe present a modal logic that is intended to support reasoning about judgment aggregation scenarios (and hence, as a special case, about preference aggregation): the logical language is interpreted directly in judgment aggregation rules.\nWe present a sound and complete axiomatisation of such rules.\nWe show that the logic can express aggregation rules such as majority voting; rule properties such as independence; and results such as the discursive paradox, Arrow's theorem and Condorcet's paradox--which are derivable as formal theorems of the logic.\nThe logic is 
parameterised in such a way that it can be used as a general framework for comparing the logical properties of different types of aggregation, including classical preference aggregation.

1. INTRODUCTION

In this paper, we are interested in knowledge representation formalisms for systems in which agents need to aggregate their preferences, judgments, beliefs, etc. For example, an agent may need to reason about majority voting in a group of which he is a member. Preference aggregation, combining individuals' preference relations over some set of alternatives into a preference relation which represents the joint preferences of the group by so-called social welfare functions, has been extensively studied in social choice theory [2]. The recently emerging field of judgment aggregation studies aggregation from a logical perspective, and discusses how, given a consistent set of logical formulae for each agent, representing the agent's beliefs or judgments, we can aggregate these into a single consistent set of formulae. A variety of judgment aggregation rules have been developed to this end. As a special case, judgment aggregation can be seen to subsume preference aggregation [5]. In this paper we present a logic, called Judgment Aggregation Logic (JAL), for reasoning about judgment aggregation. The formulae of the logic are interpreted as statements about judgment aggregation rules, and we give a sound and complete axiomatisation of all such rules. The axiomatisation is parameterised in such a way that we can instantiate it to obtain a range of different judgment aggregation logics. For example, one instance is an axiomatisation, in our language, of all social welfare functions; thus we get a logic of classical preference aggregation as well. And this is one of the main contributions of this paper: we identify the logical properties of judgment aggregation, and we can compare the logical properties of different classes of judgment aggregation, and of general
judgment aggregation and preference aggregation in particular. Of course, a logic is only interesting as long as it is expressive. One of the goals of this paper is to investigate the representational and logical capabilities an agent needs for judgment and preference aggregation; that is, what kind of logical language might be used to represent and reason about judgment aggregation? An agent's knowledge representation language should be able to express: common aggregation rules such as majority voting; commonly discussed properties of judgment aggregation rules and social welfare functions, such as independence; the paradoxes commonly used to illustrate judgment aggregation and preference aggregation, viz. the discursive paradox and Condorcet's paradox respectively; and other important results such as Arrow's theorem. In order to illustrate in more detail what such a language would need to be able to express, take the example of a potential property of social welfare functions (SWFs) called independence of irrelevant alternatives (IIA): given two preference profiles (each consisting of one preference relation for each agent) and two alternatives, if for each agent the two alternatives have the same order in the two preference profiles, then the two alternatives must have the same order in the two preference relations resulting from applying the SWF to the two preference profiles, respectively. From this example it seems that a formal language for SWFs should be able to express:

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

• Quantification on several levels: over alternatives; over preference profiles, i.e., over relations over alternatives (second-order quantification); and over agents.
• Properties of preference relations for different agents, and properties of several different preference relations for the same agent, in the same formula.
• Comparison of different preference relations.
• The preference relation resulting from applying a SWF
to other preference relations.

From these points it might seem that such a language would be rather complex (in particular, these requirements seem to rule out a standard propositional modal logic). Perhaps surprisingly, the language of JAL is syntactically and semantically rather simple; and yet the language is, nevertheless, expressive enough to give elegant and succinct expressions of, e.g., IIA, majority voting, the discursive dilemma, Condorcet's paradox and Arrow's theorem. This means, for example, that Arrow's theorem is a formal theorem of JAL, i.e., a derivable formula; we thus have a formal proof theory for social choice.

The structure of the rest of the paper is as follows. In the next section we review the basics of judgment aggregation as well as preference aggregation, and mention some commonly discussed properties of judgment aggregation rules and social welfare functions. In Section 3 we introduce the syntax and semantics of JAL, and study the complexity of the model checking problem. Formulae of JAL are interpreted directly by, and thus represent properties of, judgment aggregation rules. In Section 4 we demonstrate that the logic can express commonly discussed properties of judgment aggregation rules, such as the discursive paradox. We give a sound and complete axiomatisation of the logic in Section 5, under the assumption that the agenda the agents make judgments over is finite. As mentioned above, preference aggregation can be seen as a special case of judgment aggregation, and in Section 6 we introduce an alternative interpretation of JAL formulae directly in social welfare functions. We obtain a sound and complete axiomatisation of the logic for preference aggregation as well. Sections 7 and 8 discuss related work and conclude.

2. JUDGMENT AND PREFERENCE AGGREGATION

Judgment aggregation is concerned with judgment aggregation rules aggregating sets of logical formulae; preference aggregation is concerned with social welfare
functions aggregating preferences over some set of alternatives. Let n be the number of agents; we write Σ for the set {1, ..., n}.

2.1 Judgment Aggregation Rules

Let L be a logic with language L(L). We require that the language has negation and material implication, with the usual semantics. We will sometimes refer to L as "the underlying logic". An agenda over L is a non-empty set A ⊆ L(L), where for every formula φ that does not start with a negation, φ ∈ A iff ¬φ ∈ A. We sometimes call a member of A an agenda item. A subset A′ ⊆ A is consistent unless A′ entails both ¬φ and φ in L for some φ ∈ L(L); A′ is complete if either φ ∈ A′ or ¬φ ∈ A′ for every φ ∈ A which does not start with negation. An (admissible) individual judgment set is a complete and consistent subset Ai ⊆ A of the agenda. The idea here is that a judgment set Ai represents the choices from A made by agent i. The two rationality criteria demand that an agent's choices at least be internally consistent, and that each agent make a decision between every item and its negation. An (admissible) judgment profile is an n-tuple ⟨A1, ..., An⟩, where Ai is the individual judgment set of agent i.
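To make the combinatorics of admissible judgment sets concrete, they can be enumerated by brute force for a small agenda. The sketch below is our own illustration, not part of the paper: the encoding of agenda items as (negated, body) pairs and all helper names are assumptions of ours. It uses the agenda {p, p → q, q, ¬p, ¬(p → q), ¬q} with propositional logic as the underlying logic L:

```python
from itertools import product

# Agenda items are pairs (negated?, body); the agenda is closed under
# single negation, so listing the positive items suffices.
POSITIVE = ["p", "p->q", "q"]
ATOMS = ["p", "q"]

def holds(item, assignment):
    """Truth of an agenda item under a truth assignment for the atoms."""
    negated, body = item
    value = ((not assignment["p"]) or assignment["q"]) if body == "p->q" \
            else assignment[body]
    return value != negated

def consistent(judgment_set):
    # A set is L-consistent iff some truth assignment satisfies every member.
    return any(all(holds(item, dict(zip(ATOMS, bits))) for item in judgment_set)
               for bits in product([False, True], repeat=len(ATOMS)))

# A complete set picks exactly one of {phi, ~phi} per positive item;
# admissible = complete and consistent.
admissible = [frozenset(zip(bits, POSITIVE))
              for bits in product([False, True], repeat=len(POSITIVE))
              if consistent(frozenset(zip(bits, POSITIVE)))]
print(len(admissible))  # 4
```

Only 4 of the 8 complete sets survive the consistency check; for example {p, p → q, ¬q} is eliminated. Judgment profiles are then simply n-tuples drawn from this list.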
J(A, L) denotes the set of all individual (complete and L-consistent) judgment sets over A, and J(A, L)^n the set of all judgment profiles over A. When γ ∈ J(A, L)^n, we use γi to denote the ith element of γ, i.e., agent i's individual judgment set in the judgment profile γ. A judgment aggregation rule (JAR) is a function f that maps each judgment profile ⟨A1, ..., An⟩ to a complete and consistent collective judgment set f(A1, ..., An) ∈ J(A, L). Such a rule is hence a recipe for enforcing a rational group decision, given a tuple of rational choices by the individual agents. Of course, such a rule should to a certain extent be 'fair'. Some possible properties of a judgment aggregation rule f over an agenda A are:

Non-dictatorship (ND1): There is no agent i such that for every judgment profile ⟨A1, ..., An⟩, f(A1, ..., An) = Ai.
Independence (IND): For any p ∈ A and judgment profiles ⟨A1, ..., An⟩ and ⟨B1, ..., Bn⟩, if for all agents i (p ∈ Ai iff p ∈ Bi), then p ∈ f(A1, ..., An) iff p ∈ f(B1, ..., Bn).
Unanimity (UNA): For any judgment profile ⟨A1, ..., An⟩ and any p ∈ A, if p ∈ Ai for all agents i, then p ∈ f(A1, ..., An).

2.2 Social Welfare Functions

Social welfare functions (SWFs) are usually defined in terms of ordinal preference structures, rather than cardinal structures such as utility functions. An SWF takes a preference relation, a binary relation over some set of alternatives, for each agent, and outputs another preference relation representing the aggregated preferences. The most well known result about SWFs is Arrow's theorem [1]. Many variants of the theorem appear in the literature, differing in their assumptions about the preference relations. In this paper, we take the assumption that all preference relations are linear orders, i.e., that neither the agents nor the aggregated preference can be indifferent between distinct alternatives. This gives one of the simplest formulations of Arrow's theorem (Theorem 1 below). Cf., e.g., [2] for a
discussion and more general formulations. Formally, let K be a set of alternatives. We henceforth implicitly assume that there are always at least two alternatives. A preference relation (over K) is, here, a total (linear) order on K, i.e., a relation R over K which is antisymmetric (i.e., (a, b) ∈ R and (b, a) ∈ R implies that a = b), transitive (i.e., (a, b) ∈ R and (b, c) ∈ R implies that (a, c) ∈ R), and total (i.e., either (a, b) ∈ R or (b, a) ∈ R). We sometimes use the infix notation aRb for (a, b) ∈ R. The set of preference relations over alternatives K is denoted L(K). Alternatively, we can view L(K) as the set of all permutations of K; thus, we shall sometimes use a permutation of K to denote a member of L(K). For example, when K = {a, b, c}, we will sometimes use the expression acb to denote the relation {(a, c), (a, b), (c, b), (a, a), (b, b), (c, c)}. aRb means that b is preferred over a if a and b are different. R^s denotes the irreflexive version of R, i.e., R^s = R \ {(a, a) : a ∈ K}; aR^s b means that b is preferred over a and that a ≠ b. A preference profile for Σ over alternatives K is a tuple (R1, ..., Rn) ∈ L(K)^n, consisting of one preference relation Ri for each agent i. A social welfare function (SWF) is a function

F : L(K)^n → L(K)

mapping each preference profile to an aggregated preference relation. The class of all SWFs over alternatives K is denoted F(K). Properties of SWFs F corresponding to the judgment aggregation rule properties discussed in Section 2.1 are:

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Non-dictatorship (ND2): There is no agent i such that for every preference profile (R1, ..., Rn), F(R1, ..., Rn) = Ri (corresponds to ND1).
Independence of Irrelevant Alternatives (IIA): ∀(R1, ..., Rn) ∈ L(K)^n ∀(S1, ..., Sn) ∈ L(K)^n ∀a ∈ K ∀b ∈ K (((∀i ∈ Σ)(aRib ⇔ aSib)) ⇒ (aF(R1, ..., Rn)b ⇔ aF(S1, ..., Sn)b)) (corresponds to IND).
Pareto Optimality (PO): ∀(R1, ..., Rn) ∈ L(K)^n ∀a ∈ K ∀b ∈ K ((∀i ∈ Σ aR^s_i b) ⇒ aF(R1, ..., Rn)^s b) (corresponds to UNA).

Arrow's theorem says that the three properties above are inconsistent if there are more than two alternatives.

Theorem 1 (Arrow). If there are more than two alternatives, no SWF has all the properties PO, ND2 and IIA.

3. JUDGMENT AGGREGATION LOGIC: SYNTAX AND SEMANTICS

The language of Judgment Aggregation Logic (JAL) is parameterised by a set of agents Σ = {1, 2, ..., n} (we will assume that there are at least two agents) and an agenda A. The set of atomic propositions is Π = {hp : p ∈ A} ∪ {i : i ∈ Σ} ∪ {σ}, and the language L(Σ, A) is defined by the grammar

φ ::= α | ¬φ | φ ∧ φ | □φ | ⊡φ

where α ∈ Π. This language will be formally interpreted in structures consisting of an agenda item, a judgment profile and a judgment aggregation function; informally, i means that the agenda item is in agent i's judgment set in the current judgment profile; σ means that the agenda item is in the aggregated judgment set of the current judgment profile; hp means that the agenda item is p; □φ means that φ is true in every judgment profile; ⊡φ means that φ is true for every agenda item. We define ◇ψ = ¬□¬ψ, intuitively meaning "ψ is true for some judgment profile", and ⟡ψ = ¬⊡¬ψ, intuitively meaning "ψ is true for some agenda item", in addition to the usual derived propositional connectives.

We now define the formal semantics of L(Σ, A). A model with respect to L(Σ, A) and underlying logic L is a judgment aggregation rule f over A.
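Theorem 1's requirement of more than two alternatives is essential. The brute-force sketch below is our own check, not from the paper, and all names in it are illustrative: for K = {a, b} and two agents, SWFs satisfying PO, ND2 and IIA do exist. With a single pair of alternatives IIA is vacuous, PO pins down the two unanimous profiles, and exactly two of the four remaining assignments are non-dictatorial:

```python
from itertools import product

# Two alternatives: a linear order is just "which alternative is on top".
ORDERS = ["ab", "ba"]                      # "ab" means a is preferred to b
PROFILES = list(product(ORDERS, ORDERS))   # one order per agent, n = 2

def pareto(F):
    # PO: on unanimous profiles the output must agree with the agents.
    return all(F[pr] == pr[0] for pr in PROFILES if pr[0] == pr[1])

def non_dictatorial(F):
    # ND2: for each agent there is some profile where the output differs.
    return all(any(F[pr] != pr[i] for pr in PROFILES) for i in range(2))

# IIA is vacuous with a single pair of alternatives, so every F passes it.
count = sum(pareto(F) and non_dictatorial(F)
            for F in ({pr: out for pr, out in zip(PROFILES, outs)}
                      for outs in product(ORDERS, repeat=len(PROFILES))))
print(count)  # 2
```

Of the 16 SWFs on two alternatives, two satisfy all three properties, so the Arrow-style impossibility genuinely begins at three alternatives; this is the cardinality fact that the formula MT2 of Section 6.1 captures on the preference agenda.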
Recall that J(A, L)^n denotes the set of complete and L-consistent judgment profiles over A. A table is a tuple T = ⟨f, γ, p⟩ such that f is a model, γ ∈ J(A, L)^n and p ∈ A. A formula is interpreted on a table as follows:

f, γ, p |=L hq ⇔ p = q
f, γ, p |=L i ⇔ p ∈ γi
f, γ, p |=L σ ⇔ p ∈ f(γ)
f, γ, p |=L □ψ ⇔ f, γ′, p |=L ψ for every γ′ ∈ J(A, L)^n
f, γ, p |=L ⊡ψ ⇔ f, γ, p′ |=L ψ for every p′ ∈ A
f, γ, p |=L φ ∧ ψ ⇔ f, γ, p |=L φ and f, γ, p |=L ψ
f, γ, p |=L ¬φ ⇔ f, γ, p ̸|=L φ

So, e.g., we have that f, γ, p |=L ⋀_{i∈Σ} i if everybody chooses p in γ.

Example 1 (see Table 1). Three agents make judgments on the propositions p, p → q and q. This example can be modelled by taking the agenda to be A = {p, p → q, q, ¬p, ¬(p → q), ¬q} (recall that agendas are closed under single negation) and L to be propositional logic. The agents' votes can be modelled by the judgment profile γ = ⟨A1, A2, A3⟩, where A1 = {p, p → q, q}, A2 = {¬p, p → q, ¬q} and A3 = {p, ¬(p → q), ¬q}. With fmaj denoting proposition-wise majority voting:

• fmaj, γ, p |=L 1 ∧ ¬2 ∧ 3 (agents 1 and 3 judge p to be true in the profile γ, while agent 2 does not)
• fmaj, γ, p |=L σ (majority voting on p, given the judgment profile γ, leads to acceptance of p)
• fmaj, γ, p |=L ⟡(1 ∧ 2) (agents 1 and 2 agree on some agenda item, under the judgment profile γ. Note that this formula does not depend on which agenda item is on the table.)
• fmaj, γ, p |=L ◇((1 ↔ 2) ∧ (2 ↔ 3) ∧ (1 ↔ 3)) (there is some judgment profile on which all agents agree on p.
Note that this formula does not depend on which judgment profile is on the table.)
• fmaj, γ, p |=L ◇⊡((1 ↔ 2) ∧ (2 ↔ 3) ∧ (1 ↔ 3)) (there is some judgment profile on which all agents agree on all agenda items. Note that this formula does not depend on any of the elements on the table.)
• fmaj, γ, p |=L σ ↔ ⋁_{G ⊆ {1,2,3}, |G| ≥ 2} ⋀_{i∈G} i (the JAR fmaj implements majority voting)

We write f |=L φ iff f, γ, p |=L φ for every γ over A and p ∈ A; |=L φ iff f |=L φ for all models f. Given a possible property of a JAR, such as, e.g., independence, we say that a formula expresses the property if the formula is true in an aggregation rule f iff f has the property. Note that when we are given a formula φ ∈ L(Σ, A), validity, i.e., |=L φ, is defined with respect to models of the particular language L(Σ, A) defined over the particular agenda A (and similarly for validity with respect to a JAR, i.e., f |=L φ). The agenda, like the set of agents Σ, is given when we define the language, and is thus implicit in the interpretation of the language¹.

Let an outcome o be a maximal conjunction of literals (¬)1, ..., (¬)n. The set O is the set of all possible outcomes. Note that the decision of the society is not incorporated here: an outcome only collects the votes of the agents in Σ.

3.1 Model Checking

Model checking is currently one of the most active areas of research with respect to reasoning in modal logics [4], and it is natural to investigate the complexity of this problem for judgment aggregation logic. Intuitively, the model checking problem for judgment aggregation logic is as follows: given f, γ, p and a formula φ of JAL, is it the case that f, γ, p |= φ or not?

¹Likewise, in classical modal logic the language is parameterised with a set of primitive propositions, and
validity is defined with respect to all models with valuations over that particular set.

While this problem is easy to understand mathematically, it presents some difficulties if we want to analyse it from a computational point of view. Specifically, the problem lies in the representation of the judgment aggregation rule f. Recall that this function maps judgment profiles to complete and consistent judgment sets. A JAR must be defined for all judgment profiles over some agenda, i.e., it must produce an output for all these possible inputs. But how are we to represent such a rule? The simplest representation of a function f : X → Y is as the set of ordered pairs {(x, y) | x ∈ X and y = f(x)}. However, this is not a feasible representation for JARs, as there will be exponentially many judgment profiles in the size of the agenda, and so the representation would be infeasibly large in practice. If we did assume this representation for JARs, then it is not hard to see that model checking for our logic would be decidable in polynomial time: the naive algorithm, derivable from the semantics, serves this purpose. However, we emphasise that this result is of no practical significance, since it assumes an unreasonable representation for models, a representation that simply could not be used in practice for examples of anything other than trivial size. So, what is a more realistic representation for JARs? Let us say a representation Rf of a JAR f is reasonable if: (i) the size of Rf is polynomial in the size of the agenda; and (ii) there is a polynomial time algorithm A which takes as input a representation Rf and a judgment profile γ, and produces as output f(γ). There are, of course, many such representations Rf for JARs f.
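For intuition, here is a naive model checker in Python. This is our own sketch, not the paper's implementation: the tuple encoding of formulas, f_maj and every other name are assumptions of ours. The JAR is supplied as an ordinary polynomial-time function (proposition-wise majority), which is "reasonable" in the above sense, and the evaluator follows the truth definition of Section 3 directly, with □ enumerating all admissible profiles and ⊡ all agenda items. Evaluating σ at the items p, p → q and q for the discursive-dilemma profile A1 = {p, p → q, q}, A2 = {p, ¬(p → q), ¬q}, A3 = {¬p, p → q, ¬q} shows that the majority accepts p and p → q but rejects q:

```python
from itertools import product

# Agenda items are pairs (negated?, body) over atoms p, q; agents are 0-indexed.
ATOMS = ["p", "q"]
POSITIVE = ["p", "p->q", "q"]
AGENDA = [(neg, body) for body in POSITIVE for neg in (False, True)]

def holds(item, a):
    neg, body = item
    v = ((not a["p"]) or a["q"]) if body == "p->q" else a[body]
    return v != neg

def consistent(S):
    return any(all(holds(it, dict(zip(ATOMS, bits))) for it in S)
               for bits in product([False, True], repeat=len(ATOMS)))

# J(A, L): all complete and consistent judgment sets; profiles for n = 3 agents.
J = [frozenset(zip(bits, POSITIVE))
     for bits in product([False, True], repeat=len(POSITIVE))
     if consistent(frozenset(zip(bits, POSITIVE)))]
N = 3
PROFILES = list(product(J, repeat=N))

def f_maj(gamma):
    """Proposition-wise majority: polynomial-time in the agenda size."""
    return frozenset(it for it in AGENDA
                     if sum(it in g for g in gamma) > N / 2)

def sat(f, gamma, p, phi):
    """Truth at a table <f, gamma, p>, following the semantics of Section 3."""
    op = phi[0]
    if op == "h":     return p == phi[1]
    if op == "agent": return p in gamma[phi[1]]
    if op == "sigma": return p in f(gamma)
    if op == "not":   return not sat(f, gamma, p, phi[1])
    if op == "and":   return sat(f, gamma, p, phi[1]) and sat(f, gamma, p, phi[2])
    if op == "boxP":  return all(sat(f, g, p, phi[1]) for g in PROFILES)  # box
    if op == "boxA":  return all(sat(f, gamma, q, phi[1]) for q in AGENDA)
    raise ValueError(op)

g = (frozenset({(False, "p"), (False, "p->q"), (False, "q")}),   # A1
     frozenset({(False, "p"), (True, "p->q"), (True, "q")}),     # A2
     frozenset({(True, "p"), (False, "p->q"), (True, "q")}))     # A3
for body in POSITIVE:
    print(body, sat(f_maj, g, (False, body), ("sigma",)))
# p True, p->q True, q False
```

The majority output {p, p → q, ¬q} is therefore not consistent, so f_maj is not a JAR in the official sense on this agenda; this is exactly the discursive dilemma of Section 4.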
Here, we will look at one very general one, where the JAR is represented as a polynomially bounded two-tape Turing machine Tf which takes on its first tape a judgment profile, and writes on its second tape the resulting judgment set. The requirement that the Turing machine be polynomially bounded roughly corresponds to the requirement that a JAR is "reasonable" to compute; if there is some JAR that cannot be represented by such a machine, then it is arguably of little value, since it could not be used in practice². With such a representation, we can investigate the complexity of our model checking problem. In modal logics, the usual source of complexity, over and above the classical logic connectives, is the modal operators. With respect to judgment aggregation logic, the operator ◇ quantifies over all judgment profiles, and hence over all consistent subsets of the agenda. It follows that this is a rather powerful operator: as we will see, it can be used as an NP oracle [9, p. 339]. In contrast, the operator ⊡ quantifies over members of the agenda, and is hence much weaker from a computational perspective (we can think of it as a conjunction over the elements of the agenda). The power of the ◇ quantifier suggests that the complexity of model checking judgment aggregation logic over relatively succinct representations of JARs is going to be relatively high; we now prove that the complexity of model checking judgment aggregation logic is as hard as solving a polynomial number of NP-hard problems [9, pp.
424–429].

Theorem 2. The model checking problem for judgment aggregation logic, assuming the representation of JARs described above, is Δ^p_2-hard; it is NP-hard even if the formula to be checked is of the form ◇ψ, where ψ contains no further □ or ◇ operators.

Proof. For Δ^p_2-hardness, we reduce SNSAT ("sequentially nested satisfiability"). An instance is given by a series of equations of the form

z_i = ∃X_i . φ_i(X_i ∪ {z_1, ..., z_{i−1}})

where X1, ..., Xk are disjoint sets of variables, and each φi(Y) is a propositional logic formula over the variables Y. The idea is that we first check whether φ1(X1) is satisfiable, and if it is, we assign z1 the value true, otherwise we assign it false; we then check whether φ2 is satisfiable under the assumption that z1 takes the value just derived, and so on. Thus the result of each equation depends on the value of the previous one. The goal is to determine whether zk is true. To reduce this problem to judgment aggregation logic model checking, we first fix the JAR: this rule simply copies whatever agent 1's judgment set is. (Clearly this can be implemented by a polynomially bounded Turing machine.) The agenda is assumed to contain the variables X1 ∪ ··· ∪ Xk ∪ {z1, ..., zk} and their negations. We fix the initial judgment profile γ to be X1 ∪ ··· ∪ Xk ∪ {z1, ..., zk}, and fix p = x1. Given a variable xi, define x*_i to be ⟡(h_{x_i} ∧ 1). If φi is one of the formulae φ1, ..., φk, define φ*_i to be the formula obtained

²Of course, we have no general way of checking whether any given Turing machine is guaranteed to terminate in polynomial time; the problem is undecidable. As a consequence, we cannot always check whether a particular Turing machine representation of a JAR meets our requirements. However, this does not prevent specific JARs being so represented, with corresponding proofs that they terminate in polynomial time.
from φi by systematically substituting x*_i for each variable xi, and z*_i similarly. Now, we define the function ξi for natural numbers i > 0 as:

[...]

And we define the formula to be model checked as:

[...]

It is now straightforward from the construction that this formula is true under the interpretation iff zk is true in the SNSAT instance. The proof of the latter half of the theorem is immediate from the special case where k = 1.

3.2 Some Properties

We have thus defined a language which can be used to express properties of judgment aggregation rules. An interesting question is then: what are the universal properties of aggregation rules expressible in the language; which formulae are valid? Here, in order to illustrate the logic, we discuss some of these logical properties. In Section 5 we give a complete axiomatisation of all of them. Recall that we defined the set O of outcomes as the set of all conjunctions with exactly one, possibly negated, atom i for each agent i ∈ Σ. Let P = {o ∧ σ, o ∧ ¬σ : o ∈ O}; a p ∈ P completely describes the decisions of the agents and of the aggregation function. Let ⊕ denote "exclusive or". We have that:

• |=L ⊕_{p∈P} p (every agent and the JAR always have to make a decision)
• |=L (i ∧ ¬j) → ◇¬i (if some agent can think differently about an item than i does, then i too can change his mind about it; in fact this principle can be strengthened)
• |=L □⟡x, for any x ∈ {i, ¬i, σ, ¬σ : i ∈ Σ} (both the individual agents and the JAR will always judge some agenda item to be true and, conversely, some agenda item to be false)
• |=L ◇⟡(i ∧ j) (there exist admissible judgment sets such that agents i and j agree on some judgment)
• |=L ◇⊡(i ↔ j) (there exist admissible judgment sets such that agents i and j always agree)

The interpretation of formulae depends on the agenda A and the underlying logic L, through the quantification over the set J(A, L)^n of admissible, i.e., complete and L-consistent, judgment profiles. Note that this means that a JAL formula might be valid under one underlying logic while not under another. For example, if the agenda contains some formula which is inconsistent in the underlying logic (and, by implication, some tautology), then the following holds:

• |=L □⟡(i ∧ σ) (for every judgment profile, there is some agenda item, for instance a tautology, which both agent i and the JAR judge to be true)

But this property does not hold when every agenda item is consistent with respect to the underlying logic. One such agenda and underlying logic will be discussed in Section 6.

4. EXPRESSIVITY EXAMPLES

4.1 The Discursive Paradox

As illustrated in Example 1, the following formula expresses proposition-wise majority voting over some proposition p:

MV = σ ↔ ⋁_{G ⊆ Σ, |G| > n/2} ⋀_{i∈G} i

i.e., the following property of a JAR f and admissible profile ⟨A1, ..., An⟩: p ∈ f(A1, ..., An) iff |{i ∈ Σ : p ∈ Ai}| > n/2. f |= MV holds exactly iff f has the above property for all judgment profiles and propositions. However, we have the following in our logic. Assume that the agenda contains at least two distinct formulae and their material implication (i.e., A contains p, q and p → q for some p, q ∈ L(L)).

Proposition 1. There is no JAR f over A such that f |=L □⊡(σ ↔ ⋁_{G ⊆ Σ, |G| > n/2} ⋀_{i∈G} i), when there are at least three agents and the agenda contains at least two distinct formulae and their material implication.

Proof. Assume the opposite, i.e., that A = {p, p → q, q, ¬p, ¬(p → q), ¬q, ...} and that there exists an aggregation rule f over A such that f |=L □⊡(σ ↔ ⋁_{G ⊆ Σ, |G| > n/2} ⋀_{i∈G} i). Let γ be the judgment profile γ = ⟨A1, A2, A3⟩, where A1 = {p, p → q, q, ...}, A2 = {p, ¬(p → q), ¬q, ...} and A3 = {¬p, p → q, ¬q, ...}. We have that f,
γ, p′ |=L σ ↔ ⋁_{G ⊆ Σ, |G| > n/2} ⋀_{i∈G} i for any p′; in particular, f, γ, p |=L σ ↔ ⋁_{G ⊆ Σ, |G| > n/2} ⋀_{i∈G} i. Because f, γ, p |=L 1 ∧ 2, it follows that f, γ, p |=L σ. In a similar manner it follows that f, γ, p → q |=L σ and f, γ, q |=L ¬σ. In other words, p ∈ f(γ), p → q ∈ f(γ) and q ∉ f(γ). Since f(γ) is complete, ¬q ∈ f(γ). But that contradicts the fact that f(γ) is required to be consistent.

Proposition 1 is a logical statement of a variant of the well-known discursive dilemma: if three agents are voting on propositions p, q and p → q, proposition-wise majority voting might not yield a consistent result.

5. AXIOMATISATION

Given an underlying logic L, a finite agenda A over L, and a set of agents Σ, Judgment Aggregation Logic (JAL(L), or just JAL when L is understood) for the language L(Σ, A) is defined in Table 2.

Table 2: The logic JAL(L) for the language L(Σ, A). p, pi, q range over the agenda A; φ, ψ, ψi over L(Σ, A); x over {σ, i : i ∈ Σ}; □̇ over {□, ⊡}; i, j over Σ; o over the set of outcomes O. h∼p means hq when p = ¬q for some q, and otherwise it means h¬p.
L is the underlying logic. The first five axioms represent properties of a table and of judgment sets. Axiom Atmost says that there is at most one item on the table at a time, and Atleast says that we always have an item on the table. Axiom Agenda says that every agenda item will appear on the table, whereas Once says that every item of the agenda appears on the table only once. Note that a conjunction hp ∧ x reads: item p is on the table, and x is in favour of it, i.e., x judges it true. Axiom CpJS corresponds to the requirement that judgment sets are complete. Note that from Agenda, CsJS and CpJS we can derive the scheme ⟡x ∧ ⟡¬x, which says that everybody should at least express one opinion in favour of something, and one against something. The axioms taut through 5 are familiar from modal logic: they directly reflect the unrestricted quantification in the truth definitions of □ and ⊡. Axiom C says that, for any agenda item on which it is possible to have opposing opinions, every possible outcome for that item should be achievable. COMM says that everything that is true for an arbitrary profile and item is also true for an arbitrary item and profile. Closure guarantees that agents behave consistently with respect to consequence in the underlying logic L. MP and Nec are standard. We write ⊢_JAL(L) for derivability in JAL(L).

Theorem 3. If the agenda is finite, then for any formula ψ ∈ L(Σ, A), ⊢_JAL(L) ψ iff |=L ψ.

Proof. Soundness is straightforward. For completeness (we focus on the main idea here and leave out trivial details), we build a JAL table for a consistent formula ψ as follows. In fact, our axiomatisation completely determines a table, except for the behaviour of f.
To be more precise, let a table description be a conjunction of the form hp ∧ o ∧ (¬)σ. It is easy to see that table descriptions are mutually exclusive and that, moreover, we can derive ⋁_{τ∈T} τ, where T is the set of all table descriptions. Let D be the set of all maximal consistent sets Δ. We do not want all of those: it might well be that ψ requires σ to behave in a certain way which is incompatible with some Δ's. We define two accessibility relations in the standard way: R_□ Δ1 Δ2 iff for all □ψ ∈ Δ1 we have ψ ∈ Δ2, and similarly R_⊡ with respect to ⊡. Both relations are equivalence relations (due to taut through 5); moreover, when R_□ Δ1 Δ2 and R_⊡ Δ2 Δ3, then for some Δ′2 also R_⊡ Δ1 Δ′2 and R_□ Δ′2 Δ3 (because of axiom COMM). Every Δ ∈ Tables can be conceived of as a pair γ, p, since every Δ contains a unique ⟡(hq ∧ o ∧ (¬)σ) for every hq, and a unique hp. It is then easy to verify that, for every Δ ∈ Tables and every formula φ, Δ |= φ iff φ ∈ Δ, where |= here means truth in the ordinary modal logic sense when the set of states is taken to be Tables. Now, we extract an aggregation function f and pairs γ, p as follows. For every Δ ∈ Tables, find a conjunction hp ∧ o ∧ (¬)σ; there will be exactly one such p, and this is the p we are looking for. Furthermore, γ is obtained, for every agent i, by finding all q for which ⟡(hq ∧ i) is currently true. Finally, the function f is a table of all tuples ⟨p, o(p), σ⟩ for which ⟡(hp ∧ o(p) ∧ σ) is contained in some set in Tables.

We point out that JAL has all the axioms taut, K, T, 4, 5 and the rules MP and Nec of the modal logic S5. However, uniform substitution, a principle of all normal modal logics (cf., e.g., [3]), does not hold. A counterexample is the fact that the following is valid:

[...]

that is, the JAR will not necessarily make the
same judgments as agent i. So, for example, we have that the discursive paradox is provable in JAL (L): ~ JAL (L) O ((mMV) \u2192 \u22a5).\nAn example of a derivation of the less complicated (valid) property * O (i \u2227 J) is shown in Table 3.\n6.\nPREFERENCE AGGREGATION\nRecently, Dietrich and List [5] showed that preference aggregation can be embedded in judgment aggregation.\nIn this section we show that our judgment aggregation logic also can be used to reason about preference aggregation.\nGiven a set K of alternatives, [5] defines a simple predicate logic LK with language L (LK) as follows:\n\u2022 L (LK) has one constant a for each alternative a \u2208 K, variables v1, v2,..., a binary identity predicate =, a binary predicate P for strict preference, and the usual propositional and first order connectives \u2022 Z is the collection of the following axioms:\n\u2022 When F \u2286 L (LK) and \u03c6 is a formula, F | = \u03c6 is defined to hold iff F \u222a Z entails \u03c6 in the standard sense of predicate logic\nTable 3: JAR derivation of * O (i \u2227 J)\nIt is easy to see that there is an one-to-one correspondence between the set of preference relations (total linear orders) over K and the set of LK-consistent and complete judgment sets over the preference agenda AK = {aPb, \u00ac aPb: a, b \u2208 K, a #b} Given a SWF F over K, the corresponding JAR fF over the preference agenda AK is defined as follows fF (A1,..., An) = A, where A is the consistent and complete judgment set corresponding to F (L1,..., Ln) where Li is the preference relation corresponding to the consistent and complete judgment set Ai.\nThus we can use JAL to reason about preference aggregation as follows.\nTake the logical language L (E, AK), for some set of agents E, and take the underlying logic to be LK.\nWe can then interpret our formulae in an SWF F over K, a preference profile L \u2208 L (K) and a pair (a, b) \u2286 K \u00d7 K, a #b, as follows: F, L, (a, b) | = swf \u03c6 \u21d4 f 
F, \u03b3L, aPb |= LK \u03c6 where \u03b3L is the judgment profile corresponding to the preference profile L.\nWhile in the general judgment aggregation case a formula is interpreted in the context of an agenda item, in the preference aggregation case a formula is thus interpreted in the context of a pair of alternatives.\n\u2022 Fmaj, L, (m, d) |= swf 1 \u2227 2 \u2227 3 (all agents agree, under the individual rankings L, on the relative ranking of m and d--they agree that d is better than m) \u2022 Fmaj, L, (m, d) |= swf * \u00ac (1 \u2194 2) (under the individual rankings L, there is some pair of alternatives on which agents 1 and 2 disagree) The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 571 \u2022 Fmaj, L, (m, d) |= swf ~ ~ (1 \u2227 2) (agents 1 and 2 can choose their preferences such that they will agree on some pair of alternatives) \u2022 Fmaj, L, (m, d) |= swf \u03c3 \u2194 V ~ i \u2208 G i (the SWF Fmaj\nimplements pair-wise majority voting) As usual, we write F |= swf \u03c6 when F, L, (a, b) |= swf \u03c6 for any L and (a, b), and so on.\nThus, our formulae can be seen as expressing properties of social welfare functions.\nEXAMPLE 3.\nTake the formula ~ \u25a1 (i \u2194 \u03c3).\nWhen this formula is interpreted as a statement about a social welfare function, it says that there exists a preference profile such that for all pairs (a, b) of alternatives, b is preferred over a in the aggregation (by the SWF) of the preference profile if and only if agent i prefers b over a.\n6.1 Expressivity Examples\nWe make precise the claim in Section 2.2 that the three mentioned SWF properties correspond to the three mentioned JAR properties, respectively.\nRecall the formulae defined in Section 4.\nThe properties expressed above are properties of SWFs.\nLet us now look at properties of the set of alternatives K we can express.\nProperties involving cardinality are often of interest, for example in Arrow's theorem.\nLet:\nPROOF.\nFor the 
direction to the left, let F |= swf MT2.\nThus, there is a \u03b3 such that there exists (a1, b1), (a2, b2) \u2208 K \u00d7 K, where a1 \u2260 b1 and a2 \u2260 b2, such that (i) a1Pb1 \u2208 \u03b31, (ii) a1Pb1 \u2208 \u03b32, (iii) a2Pb2 \u2208 \u03b31 and (iv) a2Pb2 \u2209 \u03b32.\nFrom (ii) and (iv) we get that (a1, b1) \u2260 (a2, b2), and from that and (i) and (iii) it follows that \u03b31 contains two different pairs a1Pb1 and a2Pb2, each having two different elements.\nBut that is not possible if | K | = 2, because if K = {a, b} then AK = {aPb, \u00ac aPb, bPa, \u00ac bPa}, and thus it is impossible that \u03b31 \u2286 AK since we cannot have aPb, bPa \u2208 \u03b31.\nFor the direction to the right, let | K | > 2; let a, b, c be three distinct elements of K. Let \u03b31 be the judgment set corresponding to the ranking abc and \u03b32 the judgment set corresponding to acb.\nNow, for any aggregation rule f, f, \u03b3, aPb |= 1 \u2227 2 and f, \u03b3, bPc |= 1 \u2227 \u00ac 2.\nThus, F |= swf MT2, for any SWF F.\nWe now have everything we need to express Arrow's statement as a formula.\nIt follows from his theorem that the formula is valid on the class of all social welfare functions.\nTHEOREM 4.\n|= swf MT2 \u2192 \u00ac (PO \u2227 ND \u2227 IIA) PROOF.\nNote that MT2, PO, ND and IIA are true SWF properties: their truth value is determined solely by the SWF.\nFor example, F, L, (a, b) |= swf MT2 iff F |= MT2, for any F, L, a, b. Let F \u2208 F (K), and F, L, (a, b) |= swf MT2 for some L and a, b. 
By Proposition 3, K has more than two alternatives.\nBy Arrow's theorem, F cannot have all the properties PO, ND and IIA.\nW.l.o.g. assume that F does not have the PO property.\nBy Proposition 2, F \u22ad swf PO.\nSince PO is a SWF property, this means that F, L, (a, b) \u22ad swf PO (satisfaction of PO is independent of L, a, b), and thus that F, L, (a, b) |= swf \u00ac PO \u2228 \u00ac ND \u2228 \u00ac IIA.\nNote that the formula in Theorem 4 does not mention any agenda items (i.e., pairs of alternatives) such as aPb directly in an expression.\nThis means that the formula is a member of L (\u03a3, AK) for any set of alternatives K, and is valid no matter which set of alternatives we assume.\nThe formula MV, which in the general judgment aggregation case expresses proposition-wise majority voting, expresses in the preference aggregation case pair-wise majority voting, as illustrated in Example 2.\nThe preference aggregation counterpart of the discursive paradox of judgment aggregation is the well-known Condorcet's voting paradox, stating that pair-wise majority voting can lead to aggregated preferences which are cyclic (even if the individual preferences are not).\nWe can express Condorcet's paradox as follows, again as a universally valid logical property of SWFs.\nPROPOSITION 4.\n|= swf MT2 \u2192 ~ ~ \u00ac MV, when there are at least three agents.\nPROOF.\nThe proof is similar to the proof of the discursive paradox.\nLet fF, \u03b3, aPb |= LK MT2; there are thus three distinct elements a, b, c \u2208 K. 
Assume that fF, \u03b3, aPb |= LK \u25a1 MV.\nLet \u03b3 be the judgment profile corresponding to the preference profile X = (abc, cab, bca).\nWe have that fF, \u03b3, aPb |= LK 1 \u2227 2 and, since fF, \u03b3, aPb |= LK MV, we have that fF, \u03b3, aPb |= LK \u03c3 and thus that aPb \u2208 fF (\u03b3) and (a, b) \u2208 F (X).\nIn a similar manner we get that (c, a) \u2208 F (X) and (b, c) \u2208 F (X).\nBut that is impossible, since by transitivity we would also have that (a, c) \u2208 F (X), which contradicts the fact that F (X) is antisymmetric.\nThus, it follows that fF, \u03b3, aPb \u22ad LK \u25a1 MV.\n6.2 Axiomatisation and Logical Properties\nWe immediately get, from Theorem 3, a sound and complete axiomatisation of preference aggregation over a finite set of alternatives.\nCOROLLARY 1.\nIf the set of alternatives K is finite, we have that for any formula \u03c6 \u2208 L (\u03a3, AK), \u22a2 JAL (LK) \u03c6 iff |= swf \u03c6.\nPROOF.\nFollows immediately from Theorem 3 and the fact that for any JAR f, there is a SWF F such that f = fF. So, for example, Arrow's theorem is provable in JAL (LK): \u22a2 JAL (LK) MT2 \u2192 \u00ac (PO \u2227 ND \u2227 IIA).\nEvery formula which is valid with respect to judgment aggregation rules is also valid with respect to social welfare functions, so all general logical properties of JARs are also properties of SWFs.\nDepending on the agenda, SWFs may have additional properties, induced by the logic LK, which are not always shared by JARs with other underlying logics.\nOne such property is \u25c7 i.\nWhile we have |= swf \u25c7 i, for other agendas there are underlying logics L such that \u22ad L \u25c7 i.\nTo see the latter, take an agenda with a formula p which is inconsistent in the underlying logic L--p can never be included in a judgment set.\nTo see the former, take an arbitrary pair of alternatives (a, b).\nThere exists some preference profile in which agent i prefers b over a. 
Technically speaking, the formula \u25c7 i holds in SWFs because the agenda AK does not contain a formula which (alone) is inconsistent wrt.\nthe underlying logic LK.\nFor the same reason, the following properties also hold in SWFs but not in JARs in general.\n|= swf A\n-- for any pair of alternatives (a, b), any possible combination of the relative ranking of a and b among the agents is possible.\n|= swf i \u2192 \u25c7 \u00ac i--given an alternative b which is preferred over some other alternative a by agent i, there is some other pair of alternatives c and d such that d is not preferred over c--namely (c, d) = (b, a).\n|= swf \u25a1 (i \u2228 j) \u2192 \u25c7 (i \u2227 \u00ac j)--if, given preferences of agents and a SWF, for any two alternatives it is always the case that either agent i or agent j prefers the second alternative over the first, then there must exist a pair of alternatives for which the two agents disagree.\nA justification is that no single agent can prefer the second alternative over the first for every pair of alternatives, so in this case if i prefers b over a then j must prefer a over b. 
Again, this property does not necessarily hold for other agendas, because the agenda might contain an inconsistency the agents could not possibly disagree upon.\nProof theoretically, these additional properties of SWFs are derived using the Closure rule.\n7.\nRELATED WORK\nFormal logics related to social choice have focused mostly on the logical representation of preferences when the set of alternatives is large and on the computation properties of computing aggregated preferences for a given representation [6, 7, 8].\nA notable and recent exception is a logical framework for judgment aggregation developed by Marc Pauly in [10], in order to be able to characterise the logical relationships between different judgment aggregation rules.\nWhile the motivation is similar to the work in this paper, the approaches are fundamentally different: in [10], the possible results from applying a rule to some judgment profile are taken as primary and described axiomatically; in our approach the aggregation rule and its possible inputs, i.e., judgment profiles, are taken as primary and described axiomatically.\nThe two approaches do not seem to be directly related to each other in the sense that one can be embedded in the other.\nThe modal logic arrow logic [11] is designed to reason about any object that can be graphically represented as an arrow, and has various modal operators for expressing properties of and relationships between these arrows.\nIn the preference aggregation logic jal (LK) we interpreted formulae in pairs of alternatives--which can be seen as arrows.\nThus, (at least) the preference aggregation variant of our logic is related to arrow logic.\nHowever, while the modal operators of arrow logic can express properties of preference relations such as transitivity, they cannot directly express most of the properties we have discussed in this paper.\nNevertheless, the relationship to arrow logic could be investigated further in future work.\nIn particular, arrow 
logics are usually proven complete wrt.\nan algebra.\nThis could mean that it might be possible to use such algebras as the underlying structure to represent individual and collective preferences.\nThen, changing the preference profile takes us from one algebra to another, and a SWF determines the collective preference, in each of the algebras.\n8.\nDISCUSSION\nWe have presented a sound and complete logic jal for representing and reasoning about judgment aggregation.\njal is expressive: it can express judgment aggregation rules such as majority voting; complicated properties such as independence; and important results such as the discursive paradox, Arrow's theorem and Condorcet's paradox.\nWe argue that these results show exactly which logical capabilities an agent needs in order to be able to reason about judgment aggregation.\nIt is perhaps surprising that a relatively simple language provides these capabilities.\njal provides a proof theory, in which results such as those mentioned above can be derived3.\nThe axiomatisation describes the logical principles of judgment aggregation, and can also be instantiated to reason about specific instances of judgment aggregation, such as classical Arrovian preference aggregation.\nThus our framework sheds light on the differences between the logical principles behind general judgment aggregation on the one hand and classical preference aggregation on the other.\nIn future work it would be interesting to relax the completeness and consistency requirements of judgment sets, and try to characterise these in the logical language, as properties of general judgment sets, instead.","keyphrases":["prefer aggreg","judgment aggreg","modal logic","judgment aggreg rule","complet axiomatis","express","discurs paradox","knowledg represent formal","social welfar function","jal syntax and semant","arrow's theorem","non-dictatorship","unanim","arrow logic","jal"],"prmu":["P","P","P","P","P","P","P","M","M","M","M","U","U","R","U"]} 
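The Condorcet cycle invoked in the proof of Proposition 4 above (pair-wise majority voting on the profile (abc, cab, bca)) can also be checked numerically. A minimal Python sketch; all names are invented for this illustration and it is independent of the logic JAL itself:

```python
# Condorcet's voting paradox, checked for the profile X = (abc, cab, bca)
# used in the proof of Proposition 4. Illustrative sketch only.
from itertools import permutations

profile = ["abc", "cab", "bca"]  # each string is one agent's ranking, best first

def majority_prefers(x, y):
    """True iff a strict majority of agents rank x above y."""
    votes = sum(1 for ranking in profile if ranking.index(x) < ranking.index(y))
    return votes > len(profile) / 2

# Pair-wise majority yields a cycle: a over b, b over c, c over a.
cycle = (majority_prefers("a", "b"),
         majority_prefers("b", "c"),
         majority_prefers("c", "a"))
print(cycle)  # (True, True, True)

# Hence no linear order over {a, b, c} agrees with every pair-wise
# majority outcome: the aggregate cannot be transitive.
consistent = [order for order in permutations("abc")
              if all(majority_prefers(order[i], order[j])
                     for i in range(3) for j in range(i + 1, 3))]
print(consistent)  # []
```

Any SWF implementing pair-wise majority voting (the preference-aggregation reading of MV) would have to output this cyclic, hence intransitive, relation on X, which is exactly the contradiction derived in the proof.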
{"id":"I-30","title":"Distributed Task Allocation in Social Networks","abstract":"This paper proposes a new variant of the task allocation problem, where the agents are connected in a social network and tasks arrive at the agents distributed over the network. We show that the complexity of this problem remains NP-hard. Moreover, it is not approximable within some factor. We develop an algorithm based on the contract-net protocol. Our algorithm is completely distributed, and it assumes that agents have only local knowledge about tasks and resources. We conduct a set of experiments to evaluate the performance and scalability of the proposed algorithm in terms of solution quality and computation time. Three different types of networks, namely small-world, random and scale-free networks, are used to represent various social relationships among agents in realistic applications. The results demonstrate that our algorithm works well and that it scales well to large-scale applications.","lvl-1":"Distributed Task Allocation in Social Networks Mathijs de Weerdt Delft Technical University Delft, The Netherlands M.M.deWeerdt@tudelft.nl Yingqian Zhang Delft Technical University Delft, The Netherlands Yingqian.Zhang@tudelft.nl Tomas Klos Center for Mathematics and Computer Science (CWI) Amsterdam, The Netherlands tomas.klos@cwi.nl ABSTRACT This paper proposes a new variant of the task allocation problem, where the agents are connected in a social network and tasks arrive at the agents distributed over the network.\nWe show that the complexity of this problem remains NPhard.\nMoreover, it is not approximable within some factor.\nWe develop an algorithm based on the contract-net protocol.\nOur algorithm is completely distributed, and it assumes that agents have only local knowledge about tasks and resources.\nWe conduct a set of experiments to evaluate the performance and scalability of the proposed algorithm in terms of solution quality and computation time.\nThree different 
types of networks, namely small-world, random and scale-free networks, are used to represent various social relationships among agents in realistic applications.\nThe results demonstrate that our algorithm works well and that it scales well to large-scale applications.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems General Terms Algorithms, Experimentation 1.\nINTRODUCTION Recent years have seen a lot of work on task and resource allocation methods, which can potentially be applied to many real-world applications.\nHowever, some interesting applications where relations between agents play a role require a slightly more general model.\nSuch situations appear very frequently in real-world scenarios, and recent technological developments are bringing more of them within the range of task allocation methods.\nEspecially in business applications, preferential partner selection and interaction is very common, and this aspect becomes more important for task allocation research, to the extent that technological developments need to be able to support it.\nFor example, the development of semantic web and grid technologies leads to increased and renewed attention for the potential of the web to support business processes [7, 15].\nAs an example, virtual organizations (VOs) are being re-invented in the context of the grid, where they are composed of a number of autonomous entities (representing different individuals, departments and organizations), each of which has a range of problem-solving capabilities and resources at its disposal [15, p. 
237].\nThe question is how VOs are to be dynamically composed and re-composed from individual agents, when different tasks and subtasks need to be performed.\nThis would be done by allocating them to different agents who may each be capable of performing different subsets of those tasks.\nSimilarly, supply chain formation (SCF) is concerned with the, possibly ad-hoc, allocation of services to providers in the supply chain, in such a way that overall profit is optimized [6, 21].\nTraditionally, such allocation decisions have been analyzed using transaction cost economics (TCE) [4], which takes the transaction between consecutive stages of development as its basic unit of analysis, and considers the firm and the market as alternative structural forms for organizing transactions.\n(Transaction cost) economics has traditionally built on analysis of comparative statics: the central problem of economic organization is considered to be the adaptation of organizational forms to the characteristics of transactions.\nMore recently, TCE's founding father, Ronald Coase, acknowledged that this is too simplistic an approach [5, p. 245]: The analysis cannot be confined to what happens within a single firm.\n(... 
) What we are dealing with is a complex interrelated structure.\nIn this paper, we study the problem of task allocation from the perspective of such a complex interrelated structure.\nIn particular, 'the market' cannot be considered as an organizational form without considering specific partners to interact with on the market [11].\nSpecifically, therefore, we consider agents to be connected to each other in a social network.\nFurthermore, this network is not fully connected: as informed by the business literature, firms typically have established working relations with limited numbers of preferred partners [10]; these are the ones they consider when new tasks arrive and they have to form supply chains to allocate those tasks [19].\nOther than modeling the interrelated structure between business partners, the social network introduced in this paper can also be used to represent other types of connections or constraints among autonomous entities that arise from other application domains.\nThe next section gives a formal description of the task allocation problem on social networks.\nIn Section 3, we prove that the complexity of this problem remains NP-hard.\nWe then proceed to develop a distributed algorithm in Section 4, and perform a series of experiments with this algorithm, as described in Section 5.\nSection 6 discusses related work, and Section 7 concludes.\n2.\nPROBLEM DESCRIPTION We formulate the social task allocation problem in this section.\nThere is a set A of agents: A = {a1, ... , am}.\nAgents need resources to complete tasks.\nLet R = {r1, ... , rk} denote the collection of the resource types available to the agents A. Each agent a \u2208 A controls a fixed amount of resources for each resource type in R, which is defined by a resource function: rsc : A \u00d7 R \u2192 N. 
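The formal model that Section 2 goes on to define (the social network of Definition 1 and the validity constraints of Definition 2 below) can be made concrete. A minimal Python sketch; all agent, resource, and task data are invented for this illustration:

```python
# Illustrative encoding of the social task allocation setup (STAP).
# All names and numbers are invented; the paper gives only the math.

agents = ["a1", "a2", "a3"]
resource_types = ["r1", "r2"]

# rsc : A x R -> N (resources each agent controls)
rsc_agent = {
    "a1": {"r1": 2, "r2": 0},
    "a2": {"r1": 1, "r2": 1},
    "a3": {"r1": 0, "r2": 2},
}

# Social network as undirected adjacency (Definition 1)
neighbors = {"a1": {"a2"}, "a2": {"a1", "a3"}, "a3": {"a2"}}

# Each task t: utility u(t), requirements rsc(t), manager loc(t)
tasks = {"t1": {"u": 5, "req": {"r1": 2, "r2": 1}, "loc": "a2"}}

def is_valid(phi):
    """phi maps (task, agent, resource_type) -> amount contributed.
    Checks the three constraints of Definition 2: correct, complete, social."""
    # correct: no agent exceeds its available resources
    for a in agents:
        for r in resource_types:
            if sum(phi.get((t, a, r), 0) for t in tasks) > rsc_agent[a][r]:
                return False
    for t, spec in tasks.items():
        supplied = {r: sum(phi.get((t, a, r), 0) for a in agents)
                    for r in resource_types}
        allocated = any(v > 0 for v in supplied.values())
        # complete: a task is fully supplied or not allocated at all
        if allocated and any(supplied[r] < spec["req"][r] for r in spec["req"]):
            return False
        # social: only the manager and its direct neighbors may contribute
        for a in agents:
            contributes = any(phi.get((t, a, r), 0) > 0 for r in resource_types)
            if contributes and a != spec["loc"] and a not in neighbors[spec["loc"]]:
                return False
    return True

# a1 and a2 jointly cover t1 (a2 is the manager, a1 a direct neighbor)
phi = {("t1", "a1", "r1"): 2, ("t1", "a2", "r2"): 1}
print(is_valid(phi))  # True
```

The sketch treats the manager itself as allowed to contribute, which matches the flow construction of Algorithm 3 later in the paper (`a = loc(t)` is included there).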
Moreover, we assume agents are connected by a social network.\nDefinition 1 (Social network).\nAn agent social network SN = (A, AE) is an undirected graph, where vertices A are agents, and each edge (ai, aj) \u2208 AE indicates the existence of a social connection between agents ai and aj.\nSuppose a set of tasks T = {t1, t2, ... , tn} arrives at such an agent social network.\nEach task t \u2208 T is then defined by a tuple \u27e8u(t), rsc(t), loc(t)\u27e9, where u(t) is the utility gained if task t is accomplished, and the resource function rsc : T \u00d7 R \u2192 N specifies the amount of resources required for the accomplishment of task t. Furthermore, a location function loc : T \u2192 A defines the locations (i.e., agents) at which the tasks arrive in the social network.\nAn agent a that is the location of a task t, i.e. loc(t) = a, is called the manager of this task.\nEach task t \u2208 T needs some specific resources from the agents in order to complete the task.\nThe exact assignment of tasks to agents is defined by a task allocation.\nDefinition 2 (Task allocation).\nGiven a set of tasks T = {t1, ... , tn} and a set of agents A = {a1, ... , am} in a social network SN, a task allocation is a mapping \u03c6 : T \u00d7 A \u00d7 R \u2192 N.\nA valid task allocation in SN must satisfy the following constraints: \u2022 A task allocation must be correct.\nEach agent a \u2208 A cannot use more than its available resources, i.e. for each r \u2208 R, \u2211t\u2208T \u03c6(t, a, r) \u2264 rsc(a, r).\n\u2022 A task allocation must be complete.\nFor each task t \u2208 T , either all allocated agents' resources are sufficient, i.e. for each r \u2208 R, \u2211a\u2208A \u03c6(t, a, r) \u2265 rsc(t, r), or t is not allocated, i.e. 
\u03c6(t, \u00b7, \u00b7) = 0.\n\u2022 A task allocation must obey the social relationships.\nEach task t \u2208 T can only be allocated to agents that are (direct) neighbors of agent loc(t) in the social network SN.\nEach such agent that can contribute to a task is called a contractor.\nWe write T\u03c6 to represent the tasks that are fully allocated in \u03c6.\nThe utility of \u03c6 is then the summation of the utilities of each task in T\u03c6, i.e., U\u03c6 = \u2211t\u2208T\u03c6 u(t).\nUsing this notation, we define the efficient task allocation below.\nDefinition 3 (Efficient task allocation).\nWe say a task allocation \u03c6 is efficient if it is valid and U\u03c6 is maximized, i.e., U\u03c6 = max(\u2211t\u2208T\u03c6 u(t)).\nWe are now ready to define the task allocation problem in a social network that we study in this paper.\nDefinition 4 (Social task allocation problem).\nGiven a set of agents A connected by a social network SN = (A, AE), and a finite set of tasks T , the social task allocation problem (or STAP for short) is the problem of finding the efficient task allocation \u03c6, such that \u03c6 is valid and the social welfare U\u03c6 is maximized.\n3.\nCOMPLEXITY RESULTS The traditional task allocation problem, TAP (without the condition of the social network SN), is NP-complete [18], and the complexity comes from the fact that we need to evaluate the exponential number of subsets of the task set.\nAlthough we may consider the TAP as a special case of the STAP by assuming agents are fully connected, we cannot directly use the complexity results from the TAP, since we study the STAP in an arbitrary social network, which, as we argued in the introduction, should be partially connected.\nWe now show that the TAP with an arbitrary social network is also NP-complete, even when the utility of each task is 1, and the quantity of all required and available resources is 1.\nTheorem 1.\nGiven the social task allocation problem with an arbitrary social network, as 
defined in Definition 4, the problem of deciding whether a task allocation \u03c6 with utility more than k exists is NP-complete.\nProof.\nWe first show that the problem is in NP.\nGiven an instance of the problem and an integer k, we can verify in polynomial time whether an allocation \u03c6 is a valid allocation and whether the utility of \u03c6 is greater than k.\nWe now prove that the STAP is NP-hard by showing that MAXIMUM INDEPENDENT SET \u2264P STAP.\nGiven an undirected graph G = (V, E) and an integer k, we construct a network G' = (V', E') which has an efficient task allocation with k tasks of utility 1 allocated if and only if G has an independent set (IS) of size k. Figure 1: The MIS problem can be reduced to the STAP.\nThe left figure is an undirected graph G, which has the optimal solution {v1, v4} or {v2, v3}; the right figure is the constructed instance of the STAP, where the optimal allocation is {t1, t4} or {t2, t3}.\nAn instance of the following construction is shown in Figure 1.\nFor each node v \u2208 V and each edge e \u2208 E in the graph G, we create a vertex agent av and an edge agent ae in G'.\nWhen v is incident to e in G we correspondingly add an edge in G' between av and ae.\nWe assign each agent in G' one resource, which is related to the node or the edge in the graph G, i.e., for each v \u2208 V , rsc(av) = {v} (here we write rsc(a) and rsc(t) to represent the set of resources available to/required by a and t), and for each e \u2208 E, rsc(ae) = {e}.\nEach vertex agent avi in G' has a task ti that requires a set of neighboring resources ti = {vi} \u222a {e|e = (u, vi) \u2208 E}.\nThere is 
no task on the edge agents in G'.\nWe define utility 1 for each task, and the quantity of all required and available resources to be 1.\nTake an instance of the IS problem, and suppose there is a solution of size k, i.e., a subset N \u2286 V such that no two vertices in N are joined by an edge in E and |N| = k. N specifies a set of vertex agents AN in the corresponding graph G'.\nGiven two agents a1, a2 \u2208 AN we now know that there is no edge agent ae connected to both a1 and a2.\nThus, for each agent a \u2208 AN , a assigns its task to the edge agents which are connected to a. All other vertex agents a \u2209 AN are not able to assign their tasks, since the required resources of the edge agents are already used by the agents a \u2208 AN .\nThe set of tasks of the agents AN (|AN | = k) is thus the maximum set of tasks that can be allocated.\nThe utility of this allocation is k. Similarly, if there is a solution for the STAP with the utility value k, and the allocated task set is N, then for the IS problem, there exists a maximum independent set N of size k in G.\nAn example can be found in Figure 1.\nWe just proved that the STAP is NP-hard for an arbitrary graph.\nIn our proof, the complexity comes from the introduction of a social network.\nOne may expect that the complexity of this problem can be reduced for some networks where the number of neighbors of the agents is bounded by a fixed constant.\nWe now give a complexity result on this class of networks as follows.\nTheorem 2.\nLet the number of neighbors of each agent in the social network SN be bounded by \u0394 for \u0394 \u2265 3.\nComputing the efficient task allocation given such a network is NP-complete.\nIn addition, it is not approximable within \u0394^\u03b5 for some \u03b5 > 0.\nProof.\nIt has been shown in [2] that the maximum independent set problem in the case of the degree bounded by \u0394 for \u0394 \u2265 3 is NP-complete and is not approximable within \u0394^\u03b5 for some \u03b5 > 0.\nUsing 
a similar reduction to the one in the proof of Theorem 1, this result also holds for the STAP.\nSince our problem is as hard as MIS as shown in Theorem 1, it is not possible to give a worst-case bound better than \u0394^\u03b5 for any polynomial-time algorithm, unless P = NP.\n4.\nALGORITHMS To deal with the problem of allocating tasks in a social network, we present a distributed algorithm.\nWe introduce this algorithm by describing the protocol for the agents.\nAfter that we give the optimal, centralized algorithm and an upper bound algorithm, which we use in Section 5 to benchmark the quality of our distributed algorithm.\n4.1 Protocol for distributed task allocation We can summarize the description of the task allocation problem in social networks from Section 2 as follows.\nWe Algorithm 1 Greedy distributed allocation protocol (GDAP).\nEach manager a calculates the efficiency e(t) for each of its tasks t \u2208 Ta, and then while Ta \u2260 \u2205: 1.\nEach manager a selects the most efficient task t \u2208 Ta such that for each task t' \u2208 Ta: e(t') \u2264 e(t).\n2.\nEach manager a requests help for t from all its neighbors by informing these neighbors of the efficiency e(t) and the required resources for t. 
3.\nContractors receive and store all requests, and then offer all relevant resources to the manager for the task with the highest efficiency.\n4.\nThe managers that have received sufficient offers allocate their tasks, and inform each contractor which part of the offer is accepted.\nWhen a task is allocated, or when a manager has received offers from all neighbors but still cannot satisfy its task, the task is removed from the task list Ta.\n5.\nContractors update their used resources.\nhave a (social) network of agents.\nEach agent has a set of resources of different types at its disposal.\nWe also have a set of tasks.\nEach task requires some resources, has a fixed benefit, and is located at a certain agent.\nThis agent is called a manager.\nWe only allow neighboring agents to help with a task.\nThese agents are called contractors.\nAgents can fulfill the role of manager as well as contractor.\nThe problem is to find out which tasks to execute, and which resources of which contractors to use for these tasks.\nThe idea of the protocol is as follows.\nAll manager agents a \u2208 A try to find neighboring contractors to help them with their task(s) Ta = {ti \u2208 T | loc(ti) = a}.\nThey start by offering the task that is most efficient in terms of the ratio between benefit and required resources.\nOut of all tasks offered, contractors select the task with the highest efficiency, and send a bid to the related manager.\nA bid consists of all the resources the agent is able to supply for this task.\nIf sufficient resources have been offered, the manager selects the required resources and informs all contractors of its choice.\nThe efficiency of a task is defined as follows: Definition 5.\nThe efficiency e of a task t \u2208 T is defined by the utility of this task divided by the sum of all required resources: e(t) = u(t) / \u2211r\u2208R rsc(t, r).\nA more detailed description of this protocol can be found in Algorithm 1.\nHere it is also defined how to determine when a 
task should not be offered anymore, because it is impossible to fulfill locally.\nObviously, a task is also not offered anymore when it has been allocated.\nThis protocol is such that, when no two tasks have exactly the same efficiency, in every iteration at least one task is removed from a task list.1 From this the computation and communication properties of the algorithm follow.\nProposition 1.\nFor a STAP with n tasks and m agents, the run time of the distributed algorithm is O(nm), and the number of communication messages is O(n^2m).\n1 Even when some tasks have the same efficiency, it is straightforward to make this result work.\nFor example, the implementation can ensure that the contractors choose the task with the lowest task-id.\nAlgorithm 2 Optimal social task allocation (OPT).\nRepeat the following for each combination of tasks: 1.\nIf the total reward for this combination is higher than that of any previous combination, test if this combination is feasible as follows: 2.\nCreate a network flow problem for each resource type r \u2208 R (separately) as follows: (a) Create a source s and a sink s'.\n(b) For each agent a \u2208 A create an agent node and an edge from s to this node with capacity equal to the amount of resources of type r agent a has.\n(c) For each task t \u2208 T create a task node and an edge from this node to s' with capacity equal to the amount of resources of type r task t requires.\n(d) For each agent a connect the agent node to all task nodes of neighboring tasks, i.e., t \u2208 {t' \u2208 T | (a, loc(t')) \u2208 AE}.\nGive this connection unlimited capacity.\n3.\nSolve the maximum flow problem for the created flow networks.\nIf the maximum flow in each network is equal to the total required resources of that type, the current combination of tasks is feasible.\nIn that case, this is the current best combination of tasks.\nProof.\nIn the worst case, in each 
iteration exactly one task is removed from a task list, so there are n iterations.\nIn each iteration in the worst case (i.e., a fully connected network), for each of the O(n) managers, O(m) messages are sent.\nNext the task with the highest efficiency can be selected by each contractor in O(n).\nAssigning an allocation can be done in O(m).\nThis leads to a total of O(n + m) operations for each iteration, and thus O(n^2 + nm) operations in total.\nThe number of messages sent is O(n(nm + nm + nm)) = O(n^2m).\nWe establish the quality of this protocol experimentally (in Section 5).\nPreferably, we compare the results to the optimal solution.\n4.2 Optimal social task allocation The optimal task allocation algorithm should deal with the restrictions posed by the social network.\nFor this NP-complete problem we used an exponential brute-force algorithm to consider relevant combinations of tasks to execute.\nFor each combination we use a maximum-flow algorithm to check whether the resources are sufficient for the selected subset of tasks.\nThe flow network describes which resources can be used for which tasks, depending on the social network.\nIf the maximum flow is equal to the sum of all resources required by the subset of tasks, we know that a feasible solution exists (see Algorithm 2).\nClearly, we cannot expect this optimal algorithm to be able to find solutions for larger problem sizes.\nTo establish the quality of our protocol for large instances, we use the following method to determine an upper bound.\n4.3 Upper bound for social task allocation Given a social task allocation problem, if the number of resource types for every task t \u2208 T is bounded by 1, the Algorithm 3 An upper bound for social task allocation (UB).\nCreate a network flow problem with costs as follows: 1.\nCreate a source s and a sink s'.\n2.\nFor each agent a \u2208 A and each resource type ri \u2208 R, create an agent-resource node ai, and an edge from s to this node with capacity equal to 
the amount of resources of type r agent a has available and with costs 0.\n3.\nFor each task t \u2208 T and each resource type ri \u2208 R, create a task-resource node ti, and an edge from this node to s with capacity equal to the amount of resources of type r task t requires and costs \u2212e(t).\n4.\nFor each resource type ri \u2208 R and for each agent a connect the agent-resource node ai to all task-resource nodes ti for neighboring tasks t \u2208 {t \u2208 T | (a, loc(t)) \u2208 AE or a = loc(t)}.\nGive this connection unlimited capacity and zero costs.\n5.\nCreate an edge directly from s to s with unlimited capacity and zero costs.\nSolve the minimum cost flow network problem for this network.\nThe costs of the resulting network is an upper bound for the social task allocation problem.\nproblem is polynomially solvable by transforming it to a flow network problem.\nOur method for efficiently calculating an upper bound for STAP makes use of this special case by converting any given STAP instance P into a new problem P where each task only has one resource type.\nMore specifically, for every task t \u2208 T with utility u(t), we do the following.\nLet m be the number of resource types {r1, ... , rm} required by t.\nWe then split t into a set of m tasks T = {t1, ... , tm} where each task ti only has one unique resource type (of {r1, ... , rm}) and each task has a fair share of the utility, i.e., the efficiency of t from Definition 5 times the amount of this resource type rsc(ti, ri).\nAfter polynomially performing this conversion for every task in T , the original problem P becomes the special case P .\nNote that the set of valid allocations in P is only a subset of the set of valid allocations in P , because it is now possible to partially allocate a task.\nFrom this it is easy to see that the solution of P gives an upper bound of the solution of the original problem P. 
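The task-splitting step that turns P into the single-resource-type problem P' can be sketched as follows. This is a minimal illustration, not the paper's Java implementation, and the efficiency used here (utility divided by the total amount of required resources) is an assumed reading of Definition 5, which is not shown in this excerpt.

```python
def split_tasks(tasks):
    """Split every task into one subtask per resource type (the
    transformation from P to P'). Each subtask keeps a single resource
    requirement and a 'fair share' of the utility: the (assumed)
    efficiency of the task times the amount of that resource type.
    tasks: {name: (utility, {resource_type: amount})}"""
    split = {}
    for name, (utility, req) in tasks.items():
        e = utility / sum(req.values())  # assumed efficiency e(t)
        for rtype, amount in req.items():
            # one single-resource subtask per required resource type
            split[(name, rtype)] = (e * amount, {rtype: amount})
    return split

# A task with utility 10 needing 2 units of r1 and 3 units of r2 splits
# into two single-resource subtasks whose utilities sum to 10.
sub = split_tasks({"t": (10.0, {"r1": 2, "r2": 3})})
```

Because each subtask in P' can be allocated independently, every allocation that is valid in P remains valid in P', which is why solving P' (via the minimum cost flow construction above) yields an upper bound for the original problem.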
To compute the optimal solution for P', we transform it to a minimum cost flow problem.\nWe model the cost in the flow network by the negation of the new task's utility.\nA polynomial-time implementation of a scaling minimum-cost flow algorithm [9] is used for the computation.\nThe resulting minimum cost flow represents a maximum allocation of the tasks for P'.\nThe detailed modeling is described in Algorithm 3.\nIn the next section, we use this upper bound to estimate the quality of the GDAP for large-scale instances.\n5.\nEXPERIMENTS\nWe implemented the greedy distributed allocation protocol (GDAP), the optimal allocation algorithm (OPT), and the upper bound algorithm (UB) in Java, and tested them on a Linux PC.\nThe purpose of these experiments is to study the performance of the distributed algorithm in different problem settings using different social networks.\nFigure 2: The solution qualities of the GDAP and the upper bound depend on the resource ratio.\nFigure 3: The histogram of the degrees.\nThe performance measurements are the solution quality and the computation time, where the solution quality (SQ) is computed as follows.\nWhen the number of tasks is small, we compare the output of the distributed algorithm with the optimal solution, i.e., SQ = GDAP/OPT, but if it is not feasible to compute the optimal solution, we use the value returned by the upper bound algorithm for evaluation, i.e., SQ = GDAP/UB.\nTo see whether the latter is a good measure, we also compare the quality of the upper bound to the optimal solution for smaller problems.\nIn the following, 
we describe the setup of all experiments, and present the results.\n5.1 Experimental settings\nWe consider several experimental environments.\nIn all environments the agents are connected by a social network.\nIn the experiments, three different networks are used to simulate the social relationships among agents in potential real-world problems.\nSmall-world networks are networks in which most neighbors of an agent are also connected to each other.\nFor the experiments we use the method for generating random small-world networks proposed by Watts and Strogatz [22], with a fixed rewiring probability p = 0.05.\nScale-free networks have the property that a few nodes have many connections, while many nodes have only a small number of connections.\nTo generate these we use the implementation in the JUNG library of the generator proposed by Barabási and Albert [3].\nWe also generate random networks as follows.\nFirst we connect each agent to another agent such that all agents are connected.\nNext, we randomly add connections until the desired average degree has been reached.\nWe now describe the different settings used in our experiments with both small and large-scale problems.\nSetting 1.\nThe number of agents is 40, and the number of tasks is 20.\nThe number of different resource types is bounded by 5, and the average number of resources required by a task is 30.\nConsequently, the total number of resources required by the tasks is fixed.\nHowever, the resources available to the agents are varied.\nWe define the resource ratio as the ratio between the total number of available resources and the total number of required resources.\nResources are allocated uniformly to the agents.\nThe average degrees of the networks may also change.\nIn this setting the task benefits are distributed normally around the number of resources required.\nSetting 2.\nThis setting is similar to Setting 1, but here we let the benefits of the tasks vary dramatically: 40% of the tasks have 
around 10 times higher benefit than the other 60% of the tasks.\nSetting 3.\nThis setting is for large-scale problems.\nThe ratio between the number of agents and the number of tasks is set to 5/3, and the number of agents varies from 100 to 2000.\nWe also fix the resource ratio to 1.2 and the average degree to 6.\nThe number of different resource types is 20, and the average resource requirement of a task is 100.\nThe task benefits are again normally distributed.\n5.2 Experimental results\nThe experiments are done with the three different settings in the three different networks mentioned before, where each recorded data point is the average over 20 random instances.\n5.2.1 Experiment 1\nExperimental Setting 1 is used for this set of experiments.\nWe would like to see how the GDAP behaves in the different networks as the number of resources available to the agents changes.\nWe also study the behavior of our upper bound algorithm.\nFor this experiment we fix the average number of neighbors (degree) in each network type to six.\nIn Figure 2 we see how the quality of both the upper bound and the GDAP algorithm depends on the resource ratio.\nRemarkably, for lower resource ratios our GDAP is much closer to the optimal allocation than the upper bound.\nWhen the resource ratio grows above 1.5, the graphs of the upper bound and the GDAP converge, meaning that both are very close to the optimal solution.\nThis can be explained by the fact that when plenty of resources are available, all tasks can be allocated without any conflicts.\nHowever, when resources are very scarce, the upper bound is much too optimistic, because it is based on the allocation of sub-tasks per resource type, and does not reason about how many of the tasks can actually be allocated completely.\nWe also notice from the graph that the solution quality of the GDAP on all three networks is quite high (over 0.8) when the available resources are very limited (resource ratio 0.3).\nIt drops below 0.8 with an increasing 
ratio and goes up again once there are plenty of resources available (resource ratio 0.9).\nFigure 4: The quality of the GDAP and the upper bound depend on the network degree.\nClearly, if resources are really scarce, only a few tasks can be successfully allocated even by the optimal algorithm.\nTherefore, the GDAP is able to give quite a good allocation.\nAlthough the differences are minor, it can also be seen that the results for the small-world network are consistently slightly better than those for random networks, which in turn outperform scale-free networks.\nThis can be understood by looking at the distribution of the agents' degrees, as shown in Figure 3.\nIn this experiment, in the small-world network almost every manager has a degree of six.\nIn random networks, the degree varies between one and about ten.\nHowever, in the scale-free network, most nodes have only three or four connections, and only a very few have up to twenty connections.\nAs we will see in the next experiment, having more connections means getting better results.\nFor the next experiment we fix the resource ratio to 1.0 and study the quality of both the upper bound and the GDAP algorithm in relation to the degree of the social network.\nThe result can be found in Figure 4.\nIn this figure we can see that a high average degree also leads to convergence of the upper bound and the GDAP.\nObviously, when managers have many connections, it becomes easier to allocate tasks.\nAn exception, similar to what we have seen in Figure 2, is that the solution of the GDAP is also very good if the connections are extremely limited (degree 2), due to the fact that the number of possibly allocated tasks is very 
small.\nAgain we see that the upper bound is not that good for problems where resources are hard to reach, i.e., in social networks with a low average degree.2 Since the solution quality clearly depends on the resource ratio as well as on the degree of the social network, we study the effect of changing both, to see whether they influence each other.\nFigure 5 shows how the solution quality depends on both the resource ratio and the network degree.\nThis graph confirms that the GDAP performs better for problems with a higher degree and a higher resource ratio.\nHowever, it is now also clearer that it performs well for very low degree and resource availability.\nFor this experiment with 40 agents and 20 tasks, the worst performance occurs for instances ranging from degree six with resource ratio 0.6 to degree twelve with resource ratio 0.3.\nBut even for those instances, the performance lies above 0.7.\n2 The consistent standard deviation of about 15% over the 20 problem instances is not displayed as error bars in these first graphs, because it would obfuscate the interesting correlations that can now be seen.\nFigure 5: The quality of the GDAP depends on both the resource ratio and the network degree.\n5.2.2 Experiment 2\nTo study the robustness of the GDAP against different problem settings, we generate instances with a different task benefit distribution: 40% of the tasks get a 10 times higher benefit (as described in Setting 2).\nThe effect of this different distribution can be seen in Figure 6.\nThese two graphs show that the results for the skewed task benefit distribution are slightly better on average, both when varying the resource ratio and when varying the average degree of the network.\nWe argue that this can be explained by the greedy nature of the GDAP, which causes the tasks with high 
efficiency to be allocated first, and makes the algorithm perform better in this heterogeneous setting.\n5.2.3 Experiment 3\nThe purpose of this final experiment is to test whether the algorithm can be scaled to large problems, such as applications running on the Internet.\nWe therefore generate instances where the number of agents varies from 100 to 2000, and simultaneously increase the number of tasks from 166 to 3333 (Setting 3).\nFigure 7 shows the run time for these instances on a Linux machine with an AMD Opteron 2.4 GHz processor.\nThese graphs confirm the theoretical analysis from the previous section, namely that both the upper bound and the GDAP are polynomial.\nIn fact, the graphs show that the GDAP behaves almost linearly.\nHere we see that the locality of the GDAP really helps in reducing the computation time.\nAlso note that the GDAP requires even less computation time than the upper bound.\nThe quality of the GDAP for these large instances cannot be compared to the optimal solution.\nTherefore, in Figure 8 the upper bound is used instead.\nThis result shows that the GDAP behaves stably and consistently well with increasing problem size.\nIt also shows once more that the GDAP performs better in a small-world network.\n6.\nRELATED WORK\nTask allocation in multiagent systems has been investigated by many researchers in recent years with different assumptions and emphases.\nHowever, most of the research to date on task allocation does not consider social connections among agents, and studies the problem in a centralized setting.\nFigure 6: The quality of the GDAP algorithm for a uniform and a skewed task benefit distribution related to the resource ratio (the first graph), and the network degree (the second graph).\nFor example, Kraus et al. [12] develop an auction protocol that enables agents to form coalitions with time constraints.\nIt assumes each agent knows the capabilities of all others.\nThe proposed protocol is centralized, where one manager is responsible for allocating the tasks to all coalitions.\nManisterski et al. [14] discuss the possibilities of achieving efficient allocations in both cooperative and non-cooperative settings.\nThey propose a centralized algorithm to find the optimal solution.\nIn contrast to this work, we also introduce an efficient, completely distributed protocol that takes the social network into account.\nTask allocation has also been studied in distributed settings, for example by Shehory and Kraus [18] and by Lerman and Shehory [13].\nThey propose distributed algorithms with low communication complexity for forming coalitions in large-scale multiagent systems.\nHowever, they do not assume the existence of any agent network.\nThe work of Sander et al. 
[16] introduces computational geometry-based algorithms for distributed task allocation in geographical domains.\nAgents are then allowed to move and actively search for tasks, and the capability of agents to perform tasks is homogeneous.\nIn order to apply their approach, agents need to have some knowledge about the geographical positions of tasks and of some other agents.\nOther work [17] proposes a location mechanism for open multiagent systems to allocate tasks to unknown agents.\nIn this approach each agent caches a list of agents it knows.\nThe analysis of the communication complexity of this method is based on lattice-like graphs, while we investigate how to efficiently solve task allocation in a social network, whose topology can be arbitrary.\nNetworks have been employed in the context of task allocation in some other works as well, for example to limit the interactions between agents and mediators [1].\nFigure 7: The run time of the GDAP algorithm.\nFigure 8: The quality of the GDAP algorithm compared to the upper bound.\nMediators in this context are agents who receive the task and have connections to other agents.\nThey break up the task into subtasks, and negotiate with other agents to obtain commitments to execute these subtasks.\nTheir focus is on modeling the decision process of just a single mediator.\nAnother approach is to partition the network into cliques of nodes, representing coalitions which the agents involved may use as a coordination mechanism [20].\nThe focus of that work is distributed coalition formation among agents, but in 
our approach, we do not need agents to form groups before allocating tasks.\nEaswaran and Pitt [6] study 'complex tasks' that require 'services' for their accomplishment.\nThe problem concerns the allocation of subtasks to service providers in a supply chain.\nAnother study of task allocation in supply chains is [21], where it is argued that the defining characteristic of supply chain formation is hierarchical subtask decomposition (HSD).\nHSD is implemented using task dependency networks (TDN), with agents and goods as nodes, and I/O relations between them as edges.\nHere, the network is given, and the problem is to select a subgraph, for which the authors propose a market-based algorithm, in particular a series of auctions.\nCompared to these works, our approach is more general in the sense that we are able to model different types of connections or constraints among agents for different problem domains in addition to supply chain formation.\nFinally, social networks have been used in the context of team formation.\nPrevious work has shown how to learn which relations are more beneficial in the long run [8], and to adapt the social network accordingly.\nWe believe these results can be transferred to the domain of task allocation as well, leaving this as a topic for further study.\n7.\nCONCLUSIONS\nIn this paper we studied the task allocation problem in a social network (STAP), which can be seen as a new, more general variant of the TAP.\nWe believe it has great potential for realistic problems.\nWe provided complexity results on computing the efficient solution for the STAP, as well as a bound on possible approximation algorithms.\nNext, we presented a distributed protocol, related to the contract-net protocol.\nWe also introduced an exponential algorithm to compute the optimal solution, as well as a fast upper-bound algorithm.\nFinally, we used the optimal 
solution and the upper bound (for larger instances) to conduct an extensive set of experiments to assess the solution quality and the computational efficiency of the proposed distributed algorithm in different types of networks, namely small-world networks, random networks, and scale-free networks.\nThe results presented in this paper show that the distributed algorithm performs well in small-world, scale-free, and random networks, and for many different settings.\nOther experiments were also performed (e.g., on grid networks), and the results held up over this wider range of scenarios.\nFurthermore, we showed that the algorithm scales well to large networks, both in terms of quality and of required computation time.\nThe results also suggest that small-world networks are slightly better suited for local task allocation, because there are no nodes with very few neighbors.\nThere are many interesting extensions to our current work.\nIn this paper, we focused on the computational aspects of the design of the distributed algorithm.\nIn future work, we would also like to address some related issues in game theory, such as strategic agents, and show desirable properties of a distributed protocol in such a context.\nIn the current algorithm we assume that agents can only contact their neighbors to request resources, which may explain why our algorithm does not perform as well in the scale-free networks as in the small-world networks.\nFuture work may allow agents to reallocate (sub)tasks.\nWe are interested in seeing how such interactions will affect the performance of task allocation in different social networks.\nA third interesting topic for further work is the addition of reputation information among the agents.\nThis may help to model changing business relations and to incentivize agents to follow the protocol.\nFinally, it would be interesting to study real-life instances of the social task allocation problem, and see how they relate to the randomly generated networks of 
different types studied in this paper.\nAcknowledgments.\nThis work is supported by the Technology Foundation STW, applied science division of NWO, and the Ministry of Economic Affairs.\n8.\nREFERENCES\n[1] S. Abdallah and V. Lesser. Modeling Task Allocation Using a Decision Theoretic Model. In Proc. AAMAS, pages 719-726. ACM, 2005.\n[2] N. Alon, U. Feige, A. Wigderson, and D. Zuckerman. Derandomized Graph Products. Computational Complexity, 5(1):60-75, 1995.\n[3] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999.\n[4] R. H. Coase. The Nature of the Firm. Economica NS, 4(16):386-405, 1937.\n[5] R. H. Coase. My Evolution as an Economist. In W. Breit and R. W. Spencer, editors, Lives of the Laureates, pages 227-249. MIT Press, 1995.\n[6] A. M. Easwaran and J. Pitt. Supply Chain Formation in Open, Market-Based Multi-Agent Systems. International J. of Computational Intelligence and Applications, 2(3):349-363, 2002.\n[7] I. Foster, N. R. Jennings, and C. Kesselman. Brain Meets Brawn: Why Grid and Agents Need Each Other. In Proc. AAMAS, pages 8-15, Washington, DC, USA, 2004. IEEE Computer Society.\n[8] M. E. Gaston and M. desJardins. Agent-organized networks for dynamic team formation. In Proc. AAMAS, pages 230-237, New York, NY, USA, 2005. ACM Press.\n[9] A. Goldberg. An Efficient Implementation of a Scaling Minimum-Cost Flow Algorithm. J. of Algorithms, 22:1-29, 1997.\n[10] R. Gulati. Does Familiarity Breed Trust? The Implications of Repeated Ties for Contractual Choice in Alliances. Academy of Management Journal, 38(1):85-112, 1995.\n[11] T. Klos and B. Nooteboom. Agent-based Computational Transaction Cost Economics. Journal of Economic Dynamics and Control, 25(3-4):503-526, 2001.\n[12] S. Kraus, O. Shehory, and G. Taase. Coalition formation with uncertain heterogeneous information. In Proc. AAMAS, pages 1-8. ACM, 2003.\n[13] K. Lerman and O. Shehory. Coalition formation for large-scale electronic markets. In Proc. ICMAS, pages 167-174. IEEE Computer Society, 2000.\n[14] E. Manisterski, E. David, S. Kraus, and N. Jennings. Forming Efficient Agent Groups for Completing Complex Tasks. In Proc. AAMAS, pages 257-264. ACM, 2006.\n[15] J. Patel et al. Agent-Based Virtual Organizations for the Grid. Multi-Agent and Grid Systems, 1(4):237-249, 2005.\n[16] P. V. Sander, D. Peleshchuk, and B. J. Grosz. A scalable, distributed algorithm for efficient task allocation. In Proc. AAMAS, pages 1191-1198, New York, NY, USA, 2002. ACM Press.\n[17] O. Shehory. A scalable agent location mechanism. In Proc. ATAL, volume 1757 of LNCS, pages 162-172. Springer, 2000.\n[18] O. Shehory and S. Kraus. Methods for Task Allocation via Agent Coalition Formation. Artificial Intelligence, 101(1-2):165-200, 1998.\n[19] R. M. Sreenath and M. P. Singh. Agent-based service selection. Web Semantics, 1(3):261-279, 2004.\n[20] P. T. Tošić and G. A. Agha. Maximal Clique Based Distributed Coalition Formation for Task Allocation in Large-Scale Multi-Agent Systems. In Proc. MMAS, volume 3446 of LNAI, pages 104-120. Springer, 2005.\n[21] W. E. Walsh and M. P. Wellman. Modeling Supply Chain Formation in Multiagent Systems. In Proc. AMEC II, volume 1788 of LNAI, pages 94-101. Springer, 2000.\n[22] D. J. Watts and S. H. 
Strogatz. Collective dynamics of 'small-world' networks. Nature, 393:440-442, 1998.","lvl-3":"Distributed Task Allocation in Social Networks\nABSTRACT\nThis paper proposes a new variant of the task allocation problem, where the agents are connected in a social network and tasks arrive at the agents distributed over the network.\nWe show that the complexity of this problem remains NP-hard.\nMoreover, it is not approximable within some factor.\nWe develop an algorithm based on the contract-net protocol.\nOur algorithm is completely distributed, and it assumes that agents have only local knowledge about tasks and resources.\nWe conduct a set of experiments to evaluate the performance and scalability of the proposed algorithm in terms of solution quality and computation time.\nThree different types of networks, namely small-world, random and scale-free networks, are used to represent various social relationships among agents in realistic applications.\nThe results demonstrate that our algorithm works well and that it scales well to large-scale applications.\n1.\nINTRODUCTION\nRecent years have seen a lot of work on task and resource allocation methods, which can potentially be applied to many real-world applications.\nHowever, some interesting applications in which relations between agents play a role require a slightly more general model.\nSuch situations appear very frequently in real-world scenarios, and recent technological developments are bringing more of them within the range of task allocation methods.\nEspecially in business applications, preferential partner selection and interaction is very common, and this aspect becomes more important for task allocation research, to the extent that technological developments need to be able to support it.\nFor example, the development of semantic web and grid technologies leads to increased and renewed attention for the potential 
of the web to support business processes [7, 15].\nAs an example, virtual organizations (VOs) are being re-invented in the context of the grid, where \"they are composed of a number of autonomous entities (representing different individuals, departments and organizations), each of which has a range of problem-solving capabilities and resources at its disposal\" [15, p. 237].\nThe question is how VOs are to be dynamically composed and re-composed from individual agents, when different tasks and subtasks need to be performed.\nThis would be done by allocating them to different agents who may each be capable of performing different subsets of those tasks.\nSimilarly, supply chain formation (SCF) is concerned with the, possibly ad-hoc, allocation of services to providers in the supply chain, in such a way that overall profit is optimized [6, 21].\nTraditionally, such allocation decisions have been analyzed using transaction cost economics (TCE) [4], which takes the transaction between consecutive stages of development as its basic unit of analysis, and considers the firm and the market as alternative structural forms for organizing transactions.\n(Transaction cost) economics has traditionally built on analysis of comparative statics: the central problem of economic organization is considered to be the adaptation of organizational forms to the characteristics of transactions.\nMore recently, TCE's founding father, Ronald Coase, acknowledged that this is too simplistic an approach [5, p. 245]: \"The analysis cannot be confined to what happens within a single firm.\n(...) 
What we are dealing with is a complex interrelated structure.\"\nIn this paper, we study the problem of task allocation from the perspective of such a complex interrelated structure.\nIn particular, 'the market' cannot be considered as an organizational form without considering the specific partners to interact with on the market [11].\nSpecifically, therefore, we consider agents to be connected to each other in a social network.\nFurthermore, this network is not fully connected: as informed by the business literature, firms typically have established working relations with limited numbers of preferred partners [10]; these are the ones they consider when new tasks arrive and they have to form supply chains to allocate those tasks [19].\nBesides modeling the interrelated structure between business partners, the social network introduced in this paper can also be used to represent other types of connections or constraints among autonomous entities that arise in other application domains.\nThe next section gives a formal description of the task allocation problem on social networks.\nIn Section 3, we prove that the complexity of this problem remains NP-hard.\nWe then proceed to develop a distributed algorithm in Section 4, and perform a series of experiments with this algorithm, as described in Section 5.\nSection 6 discusses related work, and Section 7 concludes.\n2.\nPROBLEM DESCRIPTION\nDEFINITION 4 (SOCIAL TASK ALLOCATION PROBLEM).\n3.\nCOMPLEXITY RESULTS\n4.\nALGORITHMS\n4.1 Protocol for distributed task allocation\n4.2 Optimal social task allocation\n4.3 Upper bound for social task allocation\n5.\nEXPERIMENTS\n5.1 Experimental settings\n5.2 Experimental results\n5.2.1 Experiment 1\n5.2.2 Experiment 2\n5.2.3 Experiment 3\n6.\nRELATED WORK\nTask allocation in multiagent systems has been investigated by many researchers in recent years with different assumptions and emphases.\nHowever, most of the research to date on task allocation does not consider social connections among agents, and studies the problem in a centralized setting.\nFigure 6: The quality of the GDAP algorithm for a uniform and a skewed task benefit distribution related to the resource ratio (the first graph), and the network degree (the second graph).\nFor example, Kraus et al. [12] develop an auction protocol that enables agents to form coalitions with time constraints.\nIt assumes each agent knows the capabilities of all others.\nThe proposed protocol is centralized, where one manager is responsible for allocating the tasks to all coalitions.\nManisterski et al. [14] discuss the possibilities of achieving efficient allocations in both cooperative and non-cooperative settings.\nThey propose a centralized algorithm to find the optimal solution.\nIn contrast to this work, we also introduce an efficient, completely distributed protocol that takes the social network into account.\nTask allocation has also been studied in distributed settings, for example by Shehory and Kraus [18] and by Lerman and Shehory [13].\nThey propose distributed algorithms with low communication complexity for forming coalitions in large-scale multiagent systems.\nHowever, they do not assume the existence of any agent network.\nThe work of Sander et al. 
[16] introduces computational geometry-based algorithms for distributed task allocation in geographical domains.\nAgents are then allowed to move and actively search for tasks, and the capability of agents to perform tasks is homogeneous.\nIn order to apply their approach, agents need to have some knowledge about the geographical positions of tasks and of some other agents.\nOther work [17] proposes a location mechanism for open multiagent systems to allocate tasks to unknown agents.\nIn this approach each agent caches a list of agents it knows.\nThe analysis of the communication complexity of this method is based on lattice-like graphs, while we investigate how to efficiently solve task allocation in a social network, whose topology can be arbitrary.\nNetworks have been employed in the context of task allocation in some other works as well, for example to limit the interactions between agents and mediators [1].\nFigure 8: The quality of the GDAP algorithm compared to the upper bound.\nMediators in this context are agents who receive the task and have connections to other agents.\nThey break up the task into subtasks, and negotiate with other agents to obtain commitments to execute these subtasks.\nTheir focus is on modeling the decision process of just a single mediator.\nAnother approach is to partition the network into cliques of nodes, representing coalitions which the agents involved may use as a coordination mechanism [20].\nThe focus of that work is distributed coalition formation among agents, but in our approach, we do not need agents to form groups before allocating tasks.\nEaswaran and Pitt [6] study 'complex tasks' that require 'services' for their accomplishment.\nThe problem concerns the allocation of subtasks to service providers in a supply chain.\nAnother study of task allocation in supply chains is [21], where it is argued that the defining characteristic of supply chain formation is hierarchical subtask decomposition (HSD).\nHSD is implemented 
using task dependency networks (TDN), with agents and goods as nodes, and I\/O relations between them as edges.\nHere, the network is given, and the problem is to select a subgraph, for which the authors propose a market-based algorithm, in particular, a series of auctions.\nCompared to these works, our approach is more general in the sense that we are able to model different types of connections or constraints among agents for different problem domains in addition to supply chain formation.\nFinally, social networks have been used in the context of team formation.\nPrevious work has shown how to learn which relations are more beneficial in the long run [8], and adapt the social network accordingly.\nWe believe these results can be transferred to the domain of task allocation as well, leaving this as a topic for further study.\nFigure 7: The run time of the GDAP algorithm.\n7.\nCONCLUSIONS\nIn this paper we studied the task allocation problem in a social network (STAP), which can be seen as a new, more general, variant of the TAP.\nWe believe it has great potential for realistic problems.\nWe provided complexity results on computing the efficient solution for the STAP, as well as a bound on possible approximation algorithms.\nNext, we presented a distributed protocol, related to the contract-net protocol.\nWe also introduced an exponential algorithm to compute the optimal solution, as well as a fast upper-bound algorithm.\nFinally, we used the optimal solution and the upper bound (for larger instances) to conduct an extensive set of experiments to assess the solution quality and the computational efficiency of the proposed distributed algorithm in different types of networks, namely, small-world networks, random networks, and scale-free networks.\nThe results presented in this paper show that the distributed algorithm performs well in small-world, scale-free, and
random networks, and for many different settings.\nOther experiments were also conducted (e.g., on grid networks), and the results held up over a wider range of scenarios.\nFurthermore, we showed that it scales well to large networks, both in terms of quality and of required computation time.\nThe results also suggest that small-world networks are slightly better suited for local task allocation, because there are no nodes with very few neighbors.\nThere are many interesting extensions to our current work.\nIn this paper, we focus on the computational aspect in the design of the distributed algorithm.\nIn our future work, we would also like to address some of the related issues in game theory, such as strategic agents, and show desirable properties of a distributed protocol in such a context.\nIn the current algorithm we assume that agents can only contact their neighbors to request resources, which may explain why our algorithm does not perform as well in scale-free networks as in small-world networks.\nOur future work may allow agents to reallocate (sub)tasks.\nWe are interested in seeing how such interactions will affect the performance of task allocation in different social networks.\nA third interesting topic for further work is the addition of reputation information among the agents.\nThis may help to model changing business relations and incentivize agents to follow the protocol.\nFinally, it would be interesting to study real-life instances of the social task allocation problem, and see how they relate to the randomly generated networks of different types studied in this paper.\nAcknowledgments.\nThis work is supported by the Technology Foundation STW, applied science division of NWO, and the Ministry of Economic Affairs.","lvl-4":"Distributed Task Allocation in Social Networks\nABSTRACT\nThis paper proposes a new variant of the task allocation problem, where the agents are connected in a social network and tasks arrive at the agents distributed over the
network.\nWe show that the complexity of this problem remains NP-hard.\nMoreover, it is not approximable within some factor.\nWe develop an algorithm based on the contract-net protocol.\nOur algorithm is completely distributed, and it assumes that agents have only local knowledge about tasks and resources.\nWe conduct a set of experiments to evaluate the performance and scalability of the proposed algorithm in terms of solution quality and computation time.\nThree different types of networks, namely small-world, random and scale-free networks, are used to represent various social relationships among agents in realistic applications.\nThe results demonstrate that our algorithm works well and that it scales well to large-scale applications.\n1.\nINTRODUCTION\nRecent years have seen a lot of work on task and resource allocation methods, which can potentially be applied to many real-world applications.\nHowever, some interesting applications where relations between agents play a role require a slightly more general model.\nThe question is how VOs are to be dynamically composed and re-composed from individual agents, when different tasks and subtasks need to be performed.\nThis would be done by allocating them to different agents who may each be capable of performing different subsets of those tasks.\nIn this paper, we study the problem of task allocation from the perspective of such a complex interrelated structure.\nSpecifically, therefore, we consider agents to be connected to each other in a social network.\nOther than modeling the interrelated structure between business partners, the social network introduced in this paper can also be used to represent other types of connections or constraints among autonomous entities that arise from other application domains.\nThe next section gives a formal description of the task allocation problem on social networks.\nIn Section 3, we prove that the complexity of this problem remains
NP-hard.\nWe then proceed to develop a distributed algorithm in Section 4, and perform a series of experiments with this algorithm, as described in Section 5.\nSection 6 discusses related work, and Section 7 concludes.\n6.\nRELATED WORK\nTask allocation in multiagent systems has been investigated by many researchers in recent years with different assumptions and emphases.\nHowever, most of the research to date on task allocation does not consider social connections among agents, and studies the problem in a centralized setting.\nFigure 6: The quality of the GDAP algorithm for a uniform and a skewed task benefit distribution related to the resource ratio (the first graph), and the network degree (the second graph).\nFor example, Kraus et al. [12] develop an auction protocol that enables agents to form coalitions with time constraints.\nIt assumes each agent knows the capabilities of all others.\nThe proposed protocol is centralized, where one manager is responsible for allocating the tasks to all coalitions.\nManisterski et al. [14] discuss the possibilities of achieving efficient allocations in both cooperative and noncooperative settings.\nThey propose a centralized algorithm to find the optimal solution.\nIn contrast to this work, we also introduce an efficient, completely distributed protocol that takes the social network into account.\nTask allocation has also been studied in distributed settings, for example by Shehory and Kraus [18] and by Lerman and Shehory [13].\nThey propose distributed algorithms with low communication complexity for forming coalitions in large-scale multiagent systems.\nHowever, they do not assume the existence of any agent network.\nThe work of Sander et al.
[16] introduces computational geometry-based algorithms for distributed task allocation in geographical domains.\nAgents are then allowed to move and actively search for tasks, and the capability of agents to perform tasks is homogeneous.\nIn order to apply their approach, agents need to have some knowledge about the geographical positions of tasks and some other agents.\nOther work [17] proposes a location mechanism for open multiagent systems to allocate tasks to unknown agents.\nIn this approach, each agent caches a list of agents they know.\nThe analysis of the communication complexity of this method is based on lattice-like graphs, while we investigate how to efficiently solve task allocation in a social network, whose topology can be arbitrary.\nNetworks have been employed in the context of task allocation in some other works as well, for example to limit the interactions between agents and mediators [1].\nFigure 8: The quality of the GDAP algorithm compared to the upper bound.\nMediators in this context are agents who receive the task and have connections to other agents.\nThey break up the task into subtasks, and negotiate with other agents to obtain commitments to execute these subtasks.\nTheir focus is on modeling the decision process of just a single mediator.\nAnother approach is to partition the network into cliques of nodes, representing coalitions which the agents involved may use as a coordination mechanism [20].\nThe focus of that work is distributed coalition formation among agents, but in our approach, we do not need agents to form groups before allocating tasks.\nEaswaran and Pitt [6] study 'complex tasks' that require 'services' for their accomplishment.\nThe problem concerns the allocation of subtasks to service providers in a supply chain.\nAnother study of task allocation in supply chains is [21], where it is argued that the defining characteristic of Supply Chain Formation is hierarchical subtask decomposition (HSD).\nHSD is implemented
using task dependency networks (TDN), with agents and goods as nodes, and I\/O relations between them as edges.\nHere, the network is given, and the problem is to select a subgraph, for which the authors propose a market-based algorithm, in particular, a series of auctions.\nCompared to these works, our approach is more general in the sense that we are able to model different types of connections or constraints among agents for different problem domains in addition to supply chain formation.\nFinally, social networks have been used in the context of team formation.\nPrevious work has shown how to learn which relations are more beneficial in the long run [8], and adapt the social network accordingly.\nWe believe these results can be transferred to the domain of task allocation as well, leaving this as a topic for further study.\nFigure 7: The run time of the GDAP algorithm.\n7.\nCONCLUSIONS\nIn this paper we studied the task allocation problem in a social network (STAP), which can be seen as a new, more general, variant of the TAP.\nWe believe it has great potential for realistic problems.\nWe provided complexity results on computing the efficient solution for the STAP, as well as a bound on possible approximation algorithms.\nNext, we presented a distributed protocol, related to the contract-net protocol.\nWe also introduced an exponential algorithm to compute the optimal solution, as well as a fast upper-bound algorithm.\nThe results presented in this paper show that the distributed algorithm performs well in small-world, scale-free, and random networks, and for many different settings.\nOther experiments were also conducted (e.g.
on grid networks), and the results held up over a wider range of scenarios.\nFurthermore, we showed that it scales well to large networks, both in terms of quality and of required computation time.\nThe results also suggest that small-world networks are slightly better suited for local task allocation, because there are no nodes with very few neighbors.\nThere are many interesting extensions to our current work.\nIn this paper, we focus on the computational aspect in the design of the distributed algorithm.\nIn our future work, we would also like to address some of the related issues in game theory, such as strategic agents, and show desirable properties of a distributed protocol in such a context.\nIn the current algorithm we assume that agents can only contact their neighbors to request resources, which may explain why our algorithm does not perform as well in scale-free networks as in small-world networks.\nOur future work may allow agents to reallocate (sub)tasks.\nWe are interested in seeing how such interactions will affect the performance of task allocation in different social networks.\nA third interesting topic for further work is the addition of reputation information among the agents.\nThis may help to model changing business relations and incentivize agents to follow the protocol.\nFinally, it would be interesting to study real-life instances of the social task allocation problem, and see how they relate to the randomly generated networks of different types studied in this paper.\nAcknowledgments.","lvl-2":"Distributed Task Allocation in Social Networks\nABSTRACT\nThis paper proposes a new variant of the task allocation problem, where the agents are connected in a social network and tasks arrive at the agents distributed over the network.\nWe show that the complexity of this problem remains NP-hard.\nMoreover, it is not approximable within some factor.\nWe develop an algorithm based on the contract-net protocol.\nOur algorithm is completely
distributed, and it assumes that agents have only local knowledge about tasks and resources.\nWe conduct a set of experiments to evaluate the performance and scalability of the proposed algorithm in terms of solution quality and computation time.\nThree different types of networks, namely small-world, random and scale-free networks, are used to represent various social relationships among agents in realistic applications.\nThe results demonstrate that our algorithm works well and that it scales well to large-scale applications.\n1.\nINTRODUCTION\nRecent years have seen a lot of work on task and resource allocation methods, which can potentially be applied to many real-world applications.\nHowever, some interesting applications where relations between agents play a role require a slightly more general model.\nSuch situations appear very frequently in real-world scenarios, and recent technological developments are bringing more of them within the range of task allocation methods.\nEspecially in business applications, preferential partner selection and interaction is very common, and this aspect becomes more important for task allocation research, to the extent that technological developments need to be able to support it.\nFor example, the development of semantic web and grid technologies leads to increased and renewed attention to the potential of the web to support business processes [7, 15].\nAs an example, virtual organizations (VOs) are being re-invented in the context of the grid, where \"they are composed of a number of autonomous entities (representing different individuals, departments and organizations), each of which has a range of problem-solving capabilities and resources at its disposal\" [15, p.
237].\nThe question is how VOs are to be dynamically composed and re-composed from individual agents, when different tasks and subtasks need to be performed.\nThis would be done by allocating them to different agents who may each be capable of performing different subsets of those tasks.\nSimilarly, supply chain formation (SCF) is concerned with the, possibly ad-hoc, allocation of services to providers in the supply chain, in such a way that overall profit is optimized [6, 21].\nTraditionally, such allocation decisions have been analyzed using transaction cost economics (TCE) [4], which takes the transaction between consecutive stages of development as its basic unit of analysis, and considers the firm and the market as alternative structural forms for organizing transactions.\n(Transaction cost) economics has traditionally built on analysis of comparative statics: the central problem of economic organization is considered to be the adaptation of organizational forms to the characteristics of transactions.\nMore recently, TCE's founding father, Ronald Coase, acknowledged that this is too simplistic an approach [5, p. 245]: \"The analysis cannot be confined to what happens within a single firm.\n(...) 
What we are dealing with is a complex interrelated structure.\"\nIn this paper, we study the problem of task allocation from the perspective of such a complex interrelated structure.\nIn particular, 'the market' cannot be considered as an organizational form without considering specific partners to interact with on the market [11].\nSpecifically, therefore, we consider agents to be connected to each other in a social network.\nFurthermore, this network is not fully connected: as informed by the business literature, firms typically have established working relations with limited numbers of preferred partners [10]; these are the ones they consider when new tasks arrive and they have to form supply chains to allocate those tasks [19].\nOther than modeling the interrelated structure between business partners, the social network introduced in this paper can also be used to represent other types of connections or constraints among autonomous entities that arise from other application domains.\nThe next section gives a formal description of the task allocation problem on social networks.\nIn Section 3, we prove that the complexity of this problem remains NP-hard.\nWe then proceed to develop a distributed algorithm in Section 4, and perform a series of experiments with this algorithm, as described in Section 5.\nSection 6 discusses related work, and Section 7 concludes.\n2.\nPROBLEM DESCRIPTION\nWe formulate the social task allocation problem in this section.\nThere is a set A of agents: A = {a1,..., am}.\nAgents need resources to complete tasks.\nLet R = {r1,..., rk} denote the collection of the resource types available to the agents A. Each agent a \u2208 A controls a fixed amount of resources for each resource type in R, which is defined by a resource function: rsc: A \u00d7 R \u2192 N.
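The resource model just introduced can be made concrete with a minimal sketch, assuming small integer resource amounts (the agent names, amounts, and helper function below are illustrative, not from the paper); the resource function rsc: A \u00d7 R \u2192 N becomes a nested map:

```python
# Illustrative sketch (names and amounts are ours): agents A, resource
# types R, and the resource function rsc: A x R -> N as a nested dict.
A = ["a1", "a2", "a3"]
R = ["r1", "r2"]
rsc = {
    "a1": {"r1": 2, "r2": 0},
    "a2": {"r1": 0, "r2": 3},
    "a3": {"r1": 1, "r2": 1},
}

def total_available(resource, agents):
    """Total amount of one resource type controlled by a set of agents."""
    return sum(rsc[a][resource] for a in agents)

print(total_available("r1", A))  # 3
```

In the distributed setting described later, each agent would of course hold only its own row of this map; the global view is shown here only to fix notation.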
Moreover, we assume agents are connected by a social network.\nSuppose a set of tasks T = {t1, t2,..., tn} arrives at such an agent social network.\nEach task t \u2208 T is then defined by a tuple \u27e8u (t), rsc (t), loc (t)\u27e9, where u (t) is the utility gained if task t is accomplished, and the resource function rsc: T \u00d7 R \u2192 N specifies the amount of resources required for the accomplishment of task t. Furthermore, a location function loc: T \u2192 A defines the locations (i.e., agents) at which the tasks arrive in the social network.\nAn agent a that is the location of a task t, i.e. loc (t) = a, is called the manager of this task.\nEach task t \u2208 T needs some specific resources from the agents in order to complete the task.\nThe exact assignment of tasks to agents is defined by a task allocation.\nGiven the tasks T and the agents A in a social network SN, a task allocation is a mapping \u03c6: T \u00d7 A \u00d7 R \u2192 N.\nA valid task allocation in SN must satisfy the following constraints: \u2022 A task allocation must be correct.\nEach agent a \u2208 A cannot use more than its available resources, i.e. for each r \u2208 R, \u03a3t\u2208T \u03c6 (t, a, r) \u2264 rsc (a, r).\n\u2022 A task allocation must be complete.\nFor each task t \u2208 T, either all allocated agents' resources are sufficient, i.e. for each r \u2208 R, \u03a3a\u2208A \u03c6 (t, a, r) \u2265 rsc (t, r), or t is not allocated, i.e.
\u03c6 (t, \u00b7, \u00b7) = 0.\n\u2022 A task allocation must obey the social relationships.\nEach task t \u2208 T can only be allocated to agents that are (direct) neighbors of agent loc (t) in the social network SN.\nEach such agent that can contribute to a task is called a contractor.\nWe write T\u03c6 to represent the tasks that are fully allocated in \u03c6.\nThe utility of \u03c6 is then the summation of the utilities of each task in T\u03c6, i.e., U\u03c6 = \u03a3t\u2208T\u03c6 u (t).\nUsing this notation, we define the efficient task allocation below.\nWe are now ready to define the task allocation problem in a social network that we study in this paper.\nDEFINITION 4 (SOCIAL TASK ALLOCATION PROBLEM).\nGiven a set of agents A connected by a social network SN = (A, AE), and a finite set of tasks T, the social task allocation problem (or STAP for short) is the problem of finding the efficient task allocation \u03c6, such that \u03c6 is valid and the social welfare U\u03c6 is maximized.\n3.\nCOMPLEXITY RESULTS\nThe traditional task allocation problem, TAP (without the condition of the social network SN), is NP-complete [18], and the complexity comes from the fact that we need to evaluate the exponential number of subsets of the task set.\nAlthough we may consider the TAP as a special case of the STAP by assuming agents are fully connected, we cannot directly use the complexity results from the TAP, since we study the STAP in an arbitrary social network, which, as we argued in the introduction, should be partially connected.\nWe now show that the TAP with an arbitrary social network is also NP-complete, even when the utility of each task is 1, and the quantity of all required and available resources is 1.\nTHEOREM 1.\nGiven the social task allocation problem with an arbitrary social network, as defined in Definition 4, the problem of deciding whether a task allocation \u03c6 with utility more than k exists is NP-complete.\nPROOF.\nWe first show that the problem is in NP.\nGiven an instance of the
problem and an integer k, we can verify in polynomial time whether an allocation \u03c6 is a valid allocation and whether the utility of \u03c6 is greater than k.\nWe now prove that the STAP is NP-hard by showing that MAXIMUM INDEPENDENT SET \u2264P STAP.\nGiven an undirected graph G = (V, E) and an integer k, we construct a network G' = (V', E') which has an efficient task allocation with k tasks of utility 1 allocated if and only if G has an independent set (IS) of size k.\nFigure 1: The MIS problem can be reduced to the STAP.\nThe left figure is an undirected graph G, which has the optimal solution {v1, v4} or {v2, v3}; the right figure is the constructed instance of the STAP, where the optimal allocation is {t1, t4} or {t2, t3}.\nAn instance of the following construction is shown in Figure 1.\nFor each node v \u2208 V and each edge e \u2208 E in the graph G, we create a vertex agent av and an edge agent ae in G'.\nWhen v is incident to e in G, we correspondingly add an edge e' in G' between av and ae.\nWe assign each agent in G' one resource, which is related to the node or the edge in the graph G, i.e., for each v \u2208 V, rsc (av) = {v} (here we write rsc (a) and rsc (t) to represent the set of resources available to\/required by a and t), and for each e \u2208 E, rsc (ae) = {e}.\nEach vertex agent avi in G' has a task ti that requires a set of neighboring resources rsc (ti) = {vi} \u222a {e | e = (u, vi) \u2208 E}.\nThere is no task on the edge agents in G'.\nWe define utility 1 for each task, and the quantity of all required and available resources to be 1.\nTake an instance of the IS problem and suppose there is a solution of size k, i.e., a subset N \u2286 V such that no two vertices in N are joined by an edge in E and |N| = k.
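The construction above can be sketched concretely, using the four-node graph of Figure 1 (a hedged illustration; the function name and dictionary-key scheme are ours, not from the paper):

```python
# Sketch of the MIS -> STAP construction described above (naming is ours).
def build_stap_instance(V, E):
    # One vertex agent per node and one edge agent per edge; each agent
    # holds the single resource named after its node or edge.
    agents = {("v", v): {v} for v in V}
    agents.update({("e", e): {e} for e in E})
    # Each vertex agent a_v has one task of utility 1 requiring resource v
    # plus the resources of all incident edge agents; edge agents have no tasks.
    tasks = {("v", v): {v} | {e for e in E if v in e} for v in V}
    return agents, tasks

# The four-node graph of Figure 1: edges v1-v2, v1-v3, v2-v4, v3-v4.
V = {"v1", "v2", "v3", "v4"}
E = {frozenset(p) for p in [("v1", "v2"), ("v1", "v3"), ("v2", "v4"), ("v3", "v4")]}
agents, tasks = build_stap_instance(V, E)
print(len(agents))  # |V| + |E| = 8
```

Allocating t1 consumes the single resources of the edge agents for v1-v2 and v1-v3, which is exactly why the tasks of two adjacent vertex agents can never both be allocated.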
N specifies a set of vertex agents AN in the corresponding graph G'.\nGiven two agents a1, a2 \u2208 AN, we now know that there is no edge agent ae connected to both a1 and a2.\nThus, for each agent a \u2208 AN, a assigns its task to the edge agents which are connected to a. All other vertex agents a' \u2209 AN are not able to assign their tasks, since the required resources of the edge agents are already used by the agents a \u2208 AN.\nThe set of tasks of the agents AN (|AN| = k) is thus the maximum set of tasks that can be allocated.\nThe utility of this allocation is k. Similarly, if there is a solution for the STAP with the utility value k, and the allocated task set is N, then for the IS problem, there exists a maximum independent set N of size k in G.\nAn example can be found in Figure 1.\nWe just proved that the STAP is NP-hard for an arbitrary graph.\nIn our proof, the complexity comes from the introduction of a social network.\nOne may expect that the complexity of this problem can be reduced for some networks where the number of neighbors of the agents is bounded by a fixed constant.\nWe now give a complexity result on this class of networks as follows.\nTHEOREM 2.\nLet the number of neighbors of each agent in the social network SN be bounded by \u0394 for \u0394 \u2265 3.\nComputing the efficient task allocation given such a network is NP-complete.\nIn addition, it is not approximable within \u0394^\u03b5 for some \u03b5 > 0.\nPROOF.\nIt has been shown in [2] that the maximum independent set problem in the case of the degree bounded by \u0394 for \u0394 \u2265 3 is NP-complete and is not approximable within \u0394^\u03b5 for some \u03b5 > 0.\nUsing a similar reduction from the proof of Theorem 1, this result also holds for the STAP.\nSince our problem is as hard as MIS as shown in Theorem 1, it is not possible to give a worst-case bound better than \u0394^\u03b5 for any polynomial-time algorithm, unless P = NP.\n4.\nALGORITHMS\nTo deal with the problem of allocating tasks in a social
network, we present a distributed algorithm.\nWe introduce this algorithm by describing the protocol for the agents.\nAfter that we give the optimal, centralized algorithm and an upper-bound algorithm, which we use in Section 5 to benchmark the quality of our distributed algorithm.\n4.1 Protocol for distributed task allocation\nWe can summarize the description of the task allocation problem in social networks from Section 2 as follows.\nAlgorithm 1 Greedy distributed allocation protocol (GDAP).\nEach manager a calculates the efficiency e (t) for each of their tasks t \u2208 Ta, and then while Ta \u2260 \u2205:\n1.\nEach manager a selects the most efficient task t \u2208 Ta such that for each task t' \u2208 Ta: e (t') \u2264 e (t).\n- S is a set of n agents {s1 ... sn};\n- T \u2286 R+ or N+ is a set of dates with a total order <;\n- Vicinity : S \u00d7 T \u2192 2^S.\nIn the sequel, we will assume that the agents share a common clock.\nFor a given agent and a given time, the vicinity relation returns the set of agents with whom it can communicate at that time.\nAs we have seen previously, this relation exists when the agents meet.\n1 This term will designate a satellite constellation with InterSatellite Links.\n3.2 Requests\nRequests are the observation tasks that the satellite swarm must achieve.\nAs we have seen previously, the requests are generated both on the ground and on board.\nEach agent is allocated a set of initial requests.\nDuring the mission, new requests are sent to the agents by the ground or agents can generate new requests by themselves.\nFormally, a request is defined as follows: Definition 2 (Request).\nA request R is defined as a tuple < idR, pos(R), prio(R), tbeg(R), bR >:\n- idR is an identifier;\n- pos(R) is the geographic position of R;\n- prio(R) \u2208 R is the request priority;\n- tbeg(R) \u2208 T is the desired date of observation;\n- bR \u2208 {true, false} specifies if R has been realized.\nThe priority prio(R) of a request represents how important it is for the user,
namely the request sender, that the request should be carried out.\nThus a request with a high priority must be realized at all costs.\nIn our application, priorities range between 1 and 5 (the highest).\nIn the sequel, we will note Rt si the set of the requests that are known by agent si at time t \u2208 T. For each request R in Rt si, there is a cost value, noted costsi (R) \u2208 R, representing how far from the desired date of observation tbeg(R) an agent si can realize R. So, the more an agent can carry out a request in the vicinity of the desired date of observation, the lower the cost value.\n3.3 Candidacy\nAn agent may have several intentions about a request, i.e. for a request R, an agent si may:\n- propose to carry out R: si may realize R;\n- commit to carry out R: si will realize R;\n- not propose to carry out R: si may not realize R;\n- refuse to carry out R: si will not realize R.\nWe can notice that these four propositions are modalities of proposition C: si realizes R:\n- \u25c7C means that si proposes to carry out R;\n- \u25a1C means that si commits to carry out R;\n- \u00ac\u25c7C means that si does not propose to carry out R;\n- \u00ac\u25a1C means that si refuses to carry out R.\nMore formally: Definition 3 (Candidacy).\nA candidacy C is a tuple < idC, modC, sC, RC, obsC, dnlC >:\n- idC is an identifier;\n- modC \u2208 {\u25c7, \u25a1, \u00ac\u25c7, \u00ac\u25a1} is a modality;\n- sC \u2208 S is the candidate agent;\n- RC \u2208 Rt sC is the request for which sC is a candidate;\n- obsC \u2208 T is the realization date proposed by sC;\n- dnlC \u2208 T is the download date.\n3.4 Problem formalization\nThen, our problem is the following: we would like each agent to build request allocations (i.e. a plan) dynamically such that, if these requests are carried out, their number is the highest possible or the global cost is minimal.\nMore formally, Definition 4 (Problem).\nLet E be a swarm.\nAgents si in E must build a set {At s1 ...
At sn } where At si \u2286 Rt si such that:\n- |\u222asi\u2208S At si| is maximal;\n- \u03a3si\u2208S \u03a3R\u2208At si prio(R) is maximal;\n- \u03a3si\u2208S \u03a3R\u2208At si costsi (R) is minimal.\nLet us notice that these criteria are not necessarily compatible.\nAs the choices of an agent will be influenced by the choices of the others, it is necessary that the agents should reason on common knowledge about the requests.\nIt is thus necessary to set up an effective communication protocol.\n4.\nCOMMUNICATION PROTOCOL\nCommunication is commonly associated with cooperation.\nDeliberative agents need communication to cooperate, whereas it is not necessarily the case for reactive agents [2, 41].\nGossip protocols [22, 24], or epidemic protocols, are used to share knowledge with multicast.\nEach agent selects a set of agents at a given time in order to share information.\nThe speed of information transmission is contingent upon the length of the discussion round.\n4.1 The corridor metaphor\nThe suggested protocol is inspired by what we name the corridor metaphor, which represents the satellite swarm problem well.\nVarious agents go to and fro in a corridor where objects to collect appear from time to time.\nTwo objects that are too close to each other cannot be collected by the same agent because the action takes some time and an agent cannot stop its movement.\nIn order to optimize the collection, the agents can communicate when they meet.\nFigure 1: Time t.\nFigure 2: Time t'.\nExample 1.\nLet us suppose three agents, s1, s2, s3 and an object A to be collected.\nAt time t, s1 did not collect A and s2 does not know that A exists.\nWhen s1 meets s2, it communicates the list of the objects it knows, that is to say A.
s2 now believes that A exists and prepares to collect it.\nIt is not certain that A is still there because another agent may have passed before s2, but it can take it into account in its plan.\nAt time t', s3 collects A.\nIn the vicinity of s2, s3 communicates its list of objects and A is not in the list.\nAs both agents meet in a place where it is possible for s3 to have collected A, the object would have been in the list if it had not been collected.\ns2 can thus believe that A does not exist anymore and can withdraw it from its plan.\n4.2 Knowledge to communicate\nIn order to build up their plans, agents need to know the current requests and the other agents' intentions.\nFor each agent two kinds of knowledge to maintain are defined:\n- requests (Definition 2);\n- candidacies (Definition 3).\nDefinition 5 (Knowledge).\nKnowledge K is a tuple < data(K), SK, tK >:\n- data(K) is a request R or a candidacy C;\n- SK \u2286 S is the set of agents knowing K;\n- tK \u2208 T is a temporal timestamp.\nIn the sequel, we will note Kt si the knowledge of agent si at time t \u2208 T.\n4.3 An epidemic protocol\nFrom the corridor metaphor, we can define a communication protocol that benefits from all the communication opportunities.\nAn agent notifies any change within its knowledge and each agent must propagate these changes to its vicinity, which update their knowledge bases and reiterate the process.\nThis protocol is a variant of epidemic protocols [22] inspired by the work on overhearing [27].\nProtocol 1 (Communication).\nLet si be an agent in S. \u2200t \u2208 T:\n- \u2200 sj \u2208 Vicinity(si, t), si executes:\n1.\n\u2200 K \u2208 Kt si such that sj \u2209 SK: a. si communicates K to sj; b.
if sj acknowledges receipt of K, S_K ← S_K ∪ {sj}.
- ∀ K ∈ K^t_si received by sj at time t:
1. sj updates K^t_sj with K;
2. sj acknowledges receipt of K to si.
Two kinds of updates exist for an agent:
- an internal update, from a knowledge modification by the agent itself;
- an external update, from received knowledge.
For an internal update, updating K depends on data(K): a candidacy C is modified when its modality changes, and a request R is modified when an agent realizes it. When K is updated, the timestamp is updated too.
Protocol 2 (Internal update). Let si ∈ S be an agent. An internal update by si at time t ∈ T is performed:
- when knowledge K is created;
- when data(K) is modified.
In both cases:
1. t_K ← t;
2. S_K ← {si}.
For an external update, only the most recent knowledge K is taken into account, because timestamps change only when data(K) is modified. If K is already known, it is updated if its content or the set of agents knowing it has been modified. If K is unknown, it is simply added to the agent's knowledge.
Protocol 3 (External update). Let si be an agent and 𝒦 the set of knowledge transmitted by agent sj. ∀ K ∈ 𝒦, the external update at time t ∈ T is defined as follows:
1. if ∃ K' ∈ K^t_si such that id_data(K) = id_data(K') then
a. if t_K ≥ t_K' then
i. if t_K > t_K' then S_K ← S_K ∪ {si};
ii. if t_K = t_K' then S_K ← S_K ∪ S_K';
iii. K^t_si ← (K^t_si \ {K'}) ∪ {K};
2. else
a. K^t_si ← K^t_si ∪ {K};
b.
S_K ← S_K ∪ {si}.
If the incoming information has a more recent timestamp, it means that the receiving agent has obsolete information. Consequently, it replaces the old information with the new one and adds itself to the set of agents knowing K (1.a.i). If both timestamps are the same, both pieces of information are the same; only the set of agents knowing K may have changed, because agents si and sj may have already transmitted the information to other agents. Consequently, the sets of agents knowing K are unified (1.a.ii).

4.4 Properties
Communication between two agents when they meet is made up of the conjunction of Protocol 1 and Protocol 3. In the sequel, we call this conjunction a communication occurrence.

4.4.1 Convergence
The structure of the transmitted information and the internal update mechanism (Protocol 2) allow the process to converge. Indeed, a request R can only be in two states (realized or not), given by the boolean b_R. Once an internal update is made - i.e.
R is realized - R cannot go back to its former state. Consequently, an internal update can only be performed once. As far as candidacies are concerned, updates only modify the modalities, which may change many times and go back to previous states. It then seems that livelocks² would be likely to appear. However, a candidacy C is associated with a request and a realization date (the deadline given by obs_C). After the deadline, the candidacy becomes meaningless. Thus for each candidacy there exists a date t ∈ T after which changes propagate no more.

4.4.2 Complexity
It has been shown that in a set of N agents where a single one has a new piece of information, an epidemic protocol takes O(log N) steps to broadcast the information [33]. During one step, each agent has a communication occurrence. As agents do not have much time to communicate, such a communication occurrence must not have too high a temporal complexity, which we can prove formally:
Proposition 1. The temporal complexity of a communication occurrence at time t ∈ T between two agents si and sj is, for agent si, O(|R^t_si| · |R^t_sj| · |S|²).
Proof 1. In the worst case, each agent sk sends |R^t_sk| pieces of information on requests and |R^t_sk| · |S| pieces of information on candidacies (one candidacy for each request and for each agent of the swarm). Let si and sj be two agents meeting at time t ∈ T.
For agent si, the complexity of Protocol 1 is
O(|R^t_si| + |R^t_si|·|S| (emission) + |R^t_sj| + |R^t_sj|·|S| (reception)).
For each received piece of information, agent si uses Protocol 3 and searches through its knowledge bases: |R^t_si| pieces of information for each received request and |R^t_si|·|S| pieces of information for each received candidacy. Consequently, the complexity of Protocol 3 is
O(|R^t_sj|·|R^t_si| + |R^t_sj|·|R^t_si|·|S|²).
Thus, the temporal complexity of a communication occurrence is
O(|R^t_si| + |R^t_si|·|S| + |R^t_sj|·|R^t_si| + |R^t_sj|·|R^t_si|·|S|²),
that is, O(|R^t_si|·|R^t_sj|·|S|²).
² Communicating endlessly without converging.

5. ON-BOARD PLANNING
In space contexts, [5, 21, 6] present multi-agent architectures for on-board planning. However, they assume high communication and computation capabilities [10]. [13] relaxes these constraints by splitting the planning modules: on the one hand, satellites have a planner that builds plans over a large horizon; on the other hand, they have a decision module that enables them to choose whether or not to realize a planned observation. In an uncertain environment such as that of satellite swarms, it may be advantageous to delay the decision until the last moment (i.e.
the realization date), especially if there are several possibilities for a given request. The main idea in contingency planning [15, 29] is to determine the nodes in the initial plan where the risks of failure are most important and to incrementally build contingency branches for these situations.

5.1 A deliberative approach
Inspired by both approaches, we propose to build allocations made up of a set of unquestionable requests and a set of uncertain disjunctive requests on which a decision will be made at the end of the decision horizon. This horizon corresponds to the request realization date. Proposing such partial allocations allows conflicts to be solved locally without propagating them through the whole plan. In order to build the agents' initial plans, let us assume that each agent is equipped with an on-board planner. A plan is defined as follows:
Definition 6 (Plan). Let si be an agent, R^t_si a set of requests and C^t_si a set of candidacies. Let us define three sets:
- the set of potential requests: Rp = {R ∈ R^t_si | b_R = false};
- the set of mandatory requests: Rm = {R ∈ Rp | ∃ C ∈ C^t_si : mod_C = □, s_C = si, R_C = R};
- the set of given-up requests: Rg = {R ∈ Rp | ∃ C ∈ C^t_si : mod_C = ¬□, s_C = si, R_C = R}.
A plan A^t_si generated at time t ∈ T is a set of requests such that Rm ⊆ A^t_si ⊆ Rp and ∄ R ∈ Rg such that R ∈ A^t_si.
Building a plan generates candidacies.
Definition 7 (Generating candidacies). Let si be an agent and A^t1_si a (possibly empty) plan at time t1. Let A^t2_si be the plan generated at time t2, with t2 > t1.
- ∀ R ∈ A^t1_si such that R ∉ A^t2_si, a candidacy C with mod_C = ¬◊, s_C = si and R_C = R is generated;
- ∀ R ∈ A^t2_si such that R ∉ A^t1_si, a candidacy C with mod_C = ◊, s_C = si and R_C = R is generated;
- Protocol 2 is used to update K^t1_si into K^t2_si.
5.2 Conflicts
When two agents compare their respective plans, some conflicts may appear. These are redundancies between allocations on a given request, i.e. several agents stand as candidates to carry out the same request. Whereas such redundancies may sometimes be useful to ensure the realization of a request (the realization may fail, e.g. because of clouds), they may also lead to a loss of opportunity. Consequently, conflict has to be defined:
Definition 8 (Conflict). Let si and sj be two agents with, at time t, candidacies C_si and C_sj respectively (s_Csi = si and s_Csj = sj). si and sj are in conflict if and only if:
- R_Csi = R_Csj;
- mod_Csi and mod_Csj ∈ {□, ◊}.
Let us notice that the agents have the means to know whether they are in conflict with another one during the communication process. Indeed, they exchange information not only concerning their own plans but also concerning what they know about the other agents' plans. Not all conflicts have the same strength, meaning that they can be solved with more or less difficulty according to the agents' communication capacities. A conflict is soft when the concerned agents can communicate before one or the other carries out the request in question. A conflict is hard when the agents cannot communicate before the realization of the request.
Definition 9 (Soft/hard conflict). Let si and sj (i < j) be two agents in conflict with, at time t, candidacies C_si and C_sj respectively (s_Csi = si and s_Csj = sj). If ∃ V ⊆ S such that V = {si ...
tj−1} (ti−1 = t) where ∀ i ≤ k dnl_Csj and |cost_si(R) − cost_sj(R)| < ε, then mod_Csi = ¬□ and mod_Csj = □.
Strategy 3 (Insurance). Let si and sj be two agents in conflict on their respective candidacies C_si and C_sj, such that si is the expert agent. Let α ∈ R be a priority threshold. The insurance strategy is: if prio(R)/card_c(R) > α then mod_Csi = ◊ and mod_Csj = ◊.
³ i.e. the agent using memory resources during a shorter time.
In the insurance strategy, redundancy triggering is adjusted by the conflict cardinality card_c(R). The reason is the following: the more redundancies on a given request, the less a new redundancy on this request is needed. The three strategies are implemented in a negotiation protocol dedicated to soft conflicts. The protocol is based on a subsumption architecture [7] over the strategies: the insurance strategy (1) is the major strategy, because it ensures the redundancy for which the swarm is implemented. Then comes the altruist strategy (2), in order to allocate the resources so as to enhance the mission return. Finally, the expert strategy, which has no preconditions (3), improves the cost of the plan.
Protocol 4 (Soft conflict solving). Let R be a request in a soft conflict between two agents si and sj, with respective candidacies C_si and C_sj. Let si be the expert agent. The agents apply strategies as follows:
1. insurance strategy (α);
2. altruist strategy (ε);
3. expert strategy.
The choice of the parameters α and ε allows the protocol results to be adjusted. For example, if ε = 0, the altruist strategy is never used.

6.3 Hard conflict solving strategies
In case of a hard conflict, the agent that is not aware of it will necessarily realize the request (successfully or not). Consequently, a redundancy is useful only if the other agent is more expert or if the priority of the request is
high enough to need redundancy. Therefore, we reuse the insurance strategy (see Section 6.2) and define a competitive strategy. The latter is defined for two agents si and sj in a hard conflict on a request R. Let si be the agent that is aware of the conflict⁴.
Strategy 4 (Competitive). Let λ ∈ R+ be a cost threshold. The competitive strategy is: if cost_si(R) < cost_sj(R) − λ then mod_Csi = ◊.
Protocol 5 (Hard conflict solving). Let si be an agent in a hard conflict with an agent sj on a request R. si applies strategies as follows:
1. insurance strategy (α);
2. competitive strategy (λ);
3. withdrawal: mod_Csi = ¬□.

6.4 Generalization
Although agents use pairwise communication, they may have information about several agents, and conflict cardinality may be more than 2. Therefore, we define a k-conflict as a conflict with a cardinality of k on a set of agents proposing or committing to realize the same request. Formally:
Definition 13 (k-conflict). Let S = {s1 ... sk} be a set of agents with respective candidacies C_s1 ... C_sk at time t. The set S is in a k-conflict if and only if:
- ∀ 1 ≤ i ≤ k, s_Csi = si;
- ∃! R such that ∀ 1 ≤ i ≤ k, R_Csi = R;
⁴ i.e. the agent that must make a decision on R.
- ∀ 1 ≤ i ≤ k, mod_Csi ∈ {□, ◊};
- S is maximal (for ⊆) among the sets that satisfy these properties.
As previously, a k-conflict can be soft or hard. A k-conflict is soft if each pair conflict in the k-conflict is a soft conflict with respect to Definition 9. As conflicts bear on sets of agents, expertise is a total order on agents. We define rank-i-expertise, where the concerned agent is the i-th expert. In case of a soft k-conflict, the rank-i-expert agent makes its decision with respect to the rank-(i+1)-expert agent according to Protocol 4. The protocol is applied recursively, and the α and ε parameters are updated at each step in order to avoid a cost explosion⁵. In case of a hard conflict, the set S of agents in conflict can be split into S_S (the subset of agents in a soft conflict) and S_H (the subset of unaware agents). Only agents in S_S can take a decision and must adapt themselves to the agents in S_H. The rank-i-expert agent in S_S uses Protocol 5 on the whole set S_H and on the rank-(i−1)-expert agent in S_S. If an agent in S_S applies the competitive strategy, all the others withdraw.

7. EXPERIMENTS
Satellite swarm simulations have been implemented in Java with the JADE platform [3]. The on-board planner is implemented with linear programming using ILOG CPLEX [1]. The simulation scenario implements 3 satellites on 6-hour orbits. Two scenarios have been considered: the first one with a set of 40 requests with a low mutual exclusion and conflict rate, and the second one with a set of 74 requests with a high mutual exclusion and conflict rate. For each scenario, six simulations have been performed: one with centralized planning (all requests are planned by the ground station before the simulation), one where agents are isolated (they can neither communicate nor coordinate with one another), one informed simulation (agents only communicate requests) and three simulations implementing the instantiated collaboration strategies
(politics):
- neutral politics: α, ε and λ are set to average values;
- drastic politics: α and λ are set to higher values, i.e. agents will ensure redundancy only if the priorities are high and, in case of a hard conflict, if the cost payoff is much higher;
- lax politics: α is set to a lower value, i.e. redundancies are more frequent.
In the case of a low mutual exclusion and conflict rate (Table 1), the centralized and isolated simulations lead to the same number of observations, with the same average priorities. Isolation leading to a lower cost is due to the high number of redundancies: many agents carry out the same request at different costs. The informed simulation reduces the number of redundancies but slightly increases the average cost for the same reason.

Table 1: Scenario 1 - the 40-request simulation results
Simulation        Observations  Redundancies  Messages  Average priority  Average cost
Centralized       34            0             0         2.76              176.06
Isolated          34            21            0         2.76              160.88
Informed          34            6             457       2.65              165.21
Neutral politics  31            4             1056      2.71              191.16
Drastic politics  24            1             1025      2.71              177.42
Lax politics      33            5             1092      2.7               172.88

Table 2: Scenario 2 - the 74-request simulation results
Simulation        Observations  Redundancies  Messages  Average priority  Average cost
Centralized       59            0             0         2.95              162.88
Isolated          37            37            0         3.05              141.62
Informed          55            27            836       2.93              160.56
Neutral politics  48            25            1926      3.13              149.75
Drastic politics  43            21            1908      3.19              139.7
Lax politics      53            28            1960      3                 154.02

⁵ For instance, the rank-1-expert agent withdraws due to the altruist strategy and the cost increases by ε in the worst case; then the rank-2-expert agent withdraws due to the altruist strategy and the cost increases by ε in the worst case. So the cost has increased by 2ε in the worst case.
We can notice that the use of collaboration strategies allows the number of redundancies to be reduced much further, but the number of observations decreases
owing to the constraint created by commitments. Furthermore, the average cost increases too. Nevertheless, each avoided redundancy corresponds to resources saved for realizing on-board generated requests during the simulation. In the case of a high mutual exclusion and conflict rate (Table 2), noteworthy differences exist between the centralized and isolated simulations. We can notice that all informed simulations (with or without strategies) perform more observations than isolated agents do, with fewer redundancies. Likewise, we can notice that all politics reduce the average cost, contrary to the first scenario. The drastic politics is interesting because not only does it perform more observations than isolated agents do, but it also greatly reduces the average cost, with the lowest number of redundancies. As far as the number of exchanged messages is concerned, there are 12 meetings between 2 agents during the simulations. In the worst case, at each meeting each agent sends N pieces of information on the requests, plus 3N pieces of information on the agents' intentions, plus 1 message for the end of communication, where N is the total number of requests. Consequently, 3864 messages are exchanged in the worst case for the 40-request simulations and 7128 messages for the 74-request simulations. These numbers are much higher than the number of messages actually exchanged. We can notice that the informed simulations, which communicate only requests, allow a higher reduction. In the general case, using communication and strategies reduces redundancies and saves resources but increases the average cost: if a request is realized, agents that know it do not plan it, even if its cost could be reduced afterwards. This is not the case with isolated agents. Using strategies on lightly constrained problems such as scenario 1 constrains the agents too much and causes an additional cost increase. Strategies are more useful on highly
constrained problems such as scenario 2. Although the agents constrain themselves on the number of observations, the average cost is greatly reduced.

8. CONCLUSION AND FUTURE WORK
An observation satellite swarm is a cooperative multi-agent system with strong constraints in terms of communication and computation capabilities. In order to increase the global mission outcome, we propose a hybrid approach: deliberative for individual planning and reactive for collaboration. Agents reason both on the requests to carry out and on the other agents' intentions (candidacies). An epidemic communication protocol uses all communication opportunities to update this information. Reactive decision rules (strategies) are proposed to solve the conflicts that may arise between agents. Through the tuning of the strategies (α, ε and λ) and their plastic interlacing within the protocol, it is possible to coordinate agents without additional communication: the number of exchanged messages remains nearly the same between the informed simulations and the simulations implementing strategies. Simulations have been made to experimentally validate these protocols, and the first results are promising but raise many questions. What is the trade-off between the constraint rate of the problem and the need for strategies? To what extent are the number of redundancies and the average cost affected by the tuning of the strategies? Future work will focus on new strategies to solve new conflicts, especially those arising when relaxing the independence assumption between the requests. A second point is to take into account the complexity of the initial planning problem. Indeed, the chosen planning approach results in a combinatorial explosion with large sets of requests: an anytime or a fully reactive approach has to be considered for more complex problems.

Acknowledgements
We would like to thank Marie-Claire Charmeau (CNES⁶), Serge Rainjonneau and Pierre Dago (Alcatel Alenia Space) for their
relevant comments on this work.
⁶ The French Space Agency.

9. REFERENCES
[1] ILOG Inc. CPLEX. http://www.ilog.com/products/cplex.
[2] T. Balch and R. Arkin. Communication in reactive multiagent robotic systems. Autonomous Robots, pages 27-52, 1994.
[3] F. Bellifemine, A. Poggi, and G. Rimassa. JADE - a FIPA-compliant agent framework. In Proceedings of PAAM'99, pages 97-108, 1999.
[4] A. Blum and M. Furst. Fast planning through planning graph analysis. Artificial Intelligence, Vol. 90:281-300, 1997.
[5] E. Bornschlegl, C. Guettier, G. Le Lann, and J.-C. Poncet. Constraint-based layered planning and distributed control for an autonomous spacecraft formation flying. In Proceedings of the 1st ESA Workshop on Space Autonomy, 2001.
[6] E. Bornschlegl, C. Guettier, and J.-C. Poncet. Automatic planning for autonomous spacecraft constellation. In Proceedings of the 2nd NASA Intl. Workshop on Planning and Scheduling for Space, 2000.
[7] R. Brooks. A robust layered control system for a mobile robot. MIT AI Lab Memo, Vol. 864, 1985.
[8] A. Chopra and M. Singh. Nonmonotonic commitment machines. Lecture Notes in Computer Science: Advances in Agent Communication, Vol. 2922:183-200, 2004.
[9] A. Chopra and M. Singh. Contextualizing commitment protocols. In Proceedings of the 5th AAMAS, 2006.
[10] B. Clement and A. Barrett. Continual coordination through shared activities. In Proceedings of the 2nd AAMAS, pages 57-64, 2003.
[11] J. Cox and E. Durfee. Efficient mechanisms for multiagent plan merging. In Proceedings of the 3rd AAMAS, 2004.
[12] S. Curtis, M. Rilee, P. Clark, and G. Marr. Use of swarm intelligence in spacecraft constellations for the resource exploration of the asteroid belt. In Proceedings of the Third International Workshop on Satellite Constellations and Formation Flying, pages 24-26, 2003.
[13] S. Damiani, G.
Verfaillie, and M.-C. Charmeau. An Earth watching satellite constellation: How to manage a team of watching agents with limited communications. In Proceedings of the 4th AAMAS, pages 455-462, 2005.
[14] S. Das, P. Gonzales, R. Krikorian, and W. Truszkowski. Multi-agent planning and scheduling environment for enhanced spacecraft autonomy. In Proceedings of the 5th ISAIRAS, 1999.
[15] R. Dearden, N. Meuleau, S. Ramakrishnan, D. Smith, and R. Washington. Incremental contingency planning. In Proceedings of the ICAPS'03 Workshop on Planning under Uncertainty and Incomplete Information, pages 1-10, 2003.
[16] F. Dignum. Autonomous agents with norms. Artificial Intelligence and Law, Vol. 7:69-79, 1999.
[17] E. Durfee. Scaling up agent coordination strategies. IEEE Computer, Vol. 34(7):39-46, 2001.
[18] K. Erol, J. Hendler, and D. Nau. HTN planning: Complexity and expressivity. In Proceedings of the 12th AAAI, pages 1123-1128, 1994.
[19] D. Escorial, I. F. Tourne, and F. J. Reina. Fuego: a dedicated constellation of small satellites to detect and monitor forest fires. Acta Astronautica, Vol. 52(9-12):765-775, 2003.
[20] B. Gerkey and M. Matarić. A formal analysis and taxonomy of task allocation in multi-robot systems. Journal of Robotics Research, Vol. 23(9):939-954, 2004.
[21] C. Guettier and J.-C. Poncet. Multi-level planning for spacecraft autonomy. In Proceedings of the 6th ISAIRAS, pages 18-21, 2001.
[22] I. Gupta, A.-M. Kermarrec, and A. Ganesh. Efficient epidemic-style protocols for reliable and scalable multicast. In Proceedings of the 21st IEEE Symposium on Reliable Distributed Systems, pages 180-189, 2002.
[23] G. Gutnik and G. Kaminka. Representing conversations for scalable overhearing. Journal of Artificial Intelligence Research, Vol. 25:349-387, 2006.
[24] K. Jenkins, K. Hopkinson, and K.
Birman. A gossip protocol for subgroup multicast. In Proceedings of the 21st International Conference on Distributed Computing Systems Workshops, pages 25-30, 2001.
[25] N. Jennings, S. Parsons, P. Norriega, and C. Sierra. On argumentation-based negotiation. In Proceedings of the International Workshop on Multi-Agent Systems, pages 1-7, 1998.
[26] J.-L. Koning and M.-P. Huget. A semi-formal specification language dedicated to interaction protocols. Information Modeling and Knowledge Bases XII: Frontiers in Artificial Intelligence and Applications, pages 375-392, 2001.
[27] F. Legras and C. Tessier. LOTTO: group formation by overhearing in large teams. In Proceedings of the 2nd AAMAS, 2003.
[28] D. McAllester and D. Rosenblitt. Systematic nonlinear planning. In Proceedings of the 9th AAAI, pages 634-639, 1991.
[29] N. Meuleau and D. Smith. Optimal limited contingency planning. In Proceedings of the 19th AAAI, pages 417-426, 2003.
[30] P. Modi and M. Veloso. Bumping strategies for the multiagent agreement problem. In Proceedings of the 4th AAMAS, pages 390-396, 2005.
[31] J. B. Mueller, D. M. Surka, and B. Udrea. Agent-based control of multiple satellite formation flying. In Proceedings of the 6th ISAIRAS, 2001.
[32] J. Odell, H. Parunak, and B. Bauer. Extending UML for agents. In Proceedings of the Agent-Oriented Information Systems Workshop at the 17th AAAI, 2000.
[33] B. Pittel. On spreading a rumor. SIAM Journal of Applied Mathematics, Vol. 47:213-223, 1987.
[34] B. Polle. Autonomy requirement and technologies for future constellation. Astrium Summary Report, 2002.
[35] T. Sandholm. Contract types for satisficing task allocation. In Proceedings of the AAAI Spring Symposium: Satisficing Models, pages 23-25, 1998.
[36] T. Schetter, M. Campbell, and D. M. Surka. Multiple agent-based autonomy for satellite constellation. Artificial Intelligence, Vol. 145:147-180, 2003.
[37] O. Shehory and S.
Kraus.\nMethods for task allocation via agent coalition formation.\nArtificial Intelligence, Vol.\n101(1-2):165-200, 1998.\n[38] D. M. Surka.\nObjectAgent for robust autonomous control.\nIn Proceedings of the AAAI Spring Symposium, 2001.\n[39] W. Truszkowski, D. Zoch, and D. Smith.\nAutonomy for constellations.\nIn Proceedings of the SpaceOps Conference, 2000.\n[40] R. VanDerKrogt and M. deWeerdt.\nPlan repair as an extension of planning.\nIn Proceedings of the 15th ICAPS, pages 161-170, 2005.\n[41] B. Werger.\nCooperation without deliberation : A minimal behavior-based approach to multi-robot teams.\nArtificial Intelligence, Vol.\n110:293-320, 1999.\n[42] P. Zetocha.\nSatellite cluster command and control.\nIEEE Aerospace Conference, Vol.\n7:49-54, 2000.\n294 The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)","lvl-3":"Collaboration Among a Satellite Swarm\nABSTRACT\nThe paper deals with on-board planning for a satellite swarm via communication and negotiation.\nWe aim at defining individual behaviours that result in a global behaviour that meets the mission requirements.\nWe will present the formalization of the problem, a communication protocol, a solving method based on reactive decision rules, and first results.\n1.\nINTRODUCTION\nMuch research has been undertaken to increase satellite autonomy such as enabling them to solve by themselves problems that may occur during a mission, adapting their behaviour to new events and transferring planning on-board; even if the development cost of such a satellite is increased, there is an increase in performance and mission possibilities [34].\nMoreover, the use of satellite swarms - sets of satellites flying in formation or in constellation around the Earth makes it possible to consider joint activities, to distribute skills and to ensure robustness.\nMulti-agent architectures have been developed for satellite swarms [36, 38, 42] but strong assumptions on deliberation and communication 
capabilities are made in order to build a collective plan.\nMono-agent planning [4, 18, 28] and task allocation [20] are widely studied.\nIn a multi-agent context, agents that build a collective plan must be able to change their goals, reallocate resources and react to environment changes and to the others' choices.\nA coordination step must be added to the planning step [40, 30, 11].\nHowever, this step needs high communication and computation capabilities.\nFor instance, coalition-based [37], contract-based [35] and all negotiationbased [25] mechanisms need these capabilities, especially in dynamic environments.\nIn order to relax communication constraints, coordination based on norms and conventions [16] or strategies [17] are considered.\nNorms constraint agents in their decisions in such a way that the possibilities of conflicts are reduced.\nStrategies are private decision rules that allow an agent to draw benefit from the knowledgeable world without communication.\nHowever, communication is still needed in order to share information and build collective conjectures and plans.\nCommunication can be achieved through a stigmergic approach (via the environment) or through message exchange and a protocol.\nA protocol defines interactions between agents and cannot be uncoupled from its goal, e.g. exchanging information, finding a trade-off, allocating tasks and so on.\nProtocols can be viewed as an abstraction of an interaction [9].\nThey may be represented in a variety of ways, e.g. 
AUML [32] or Petri-nets [23].\nAs protocols are originally designed for a single goal, some works aim at endowing them with flexibility [8, 26].\nHowever, an agent cannot always communicate with another agent or the communication possibilites are restricted to short time intervals.\nThe objective of this work is to use intersatellite connections, called InterSatellite Links or ISL, in an Earth observation constellation inspired from the Fuego mission [13, 19], in order to increase the system reactivity and to improve the mission global return through a hybrid agent approach.\nAt the individual level, agents are deliberative in order to create a local plan but at the collective level, they use normative decision rules in order to coordinate with one another.\nWe will present the features of our problem, a communication protocol, a method for request allocation and finally, collaboration strategies.\n2.\nPROBLEM FEATURES\n3.\nA MULTI-AGENT APPROACH\n3.1 Satellite swarm\n3.2 Requests\n3.3 Candidacy\n3.4 Problem formalization\n288 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.\nCOMMUNICATION PROTOCOL\n4.1 The corridor metaphor\n4.2 Knowledge to communicate\n4.3 An epidemic protocol\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 289\n4.4 Properties\n4.4.1 Convergence\n4.4.2 Complexity\n5.\nON-BOARD PLANNING\n5.1 A deliberative approach\nDEFINITION 7 (GENERATING CANDIDACIES).\nLet si be\nAt2\n290 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.2 Conflicts\n6.\nCOLLABORATION STRATEGIES\n6.1 Cost and expertise\n6.2 Soft conflict solving strategies\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 291\n6.3 Hard conflict solving strategies\n6.4 Generalization\n7.\nEXPERIMENTS\nSatellite swarm simulations have been implemented in JAVA with the JADE platform [3].\nThe on-board planner is implemented with linear programming using ILOG CPLEX [1].\nThe simulation scenario implements 3 satellites on 6hour orbits.\nTwo scenarios have been considered: the first one with a set of 40 requests with low mutual exclusion and conflict rate and the second one with a set of 74 requests with high mutual exclusion and conflict rate.\nFor each scenario, six simulations have been performed: one with centralized planning (all requests are planned by the ground station before the simulation), one where agents are isolated (they cannot communicate nor coordinate with one another), one informed simulation (agents only communicate requests) and three other simulations implementing the instanciated collaboration strategies (politics):--neutral politics: \u03b1, a and \u03bb are set to average values;--drastic politics: \u03b1 and \u03bb are set to higher values, i.e. agents will ensure redundancy only if the priorities are high and, in case of a hard conflict, if the cost payoff is much higher;--lax politics: \u03b1 is set to a lower value, i.e. 
redundancies are more frequent.
In the case of a low mutual exclusion and conflict rate (Table 1), the centralized and isolated simulations lead to the same number of observations, with the same average priorities. Isolation leading to a lower cost is due to the high number of redundancies: many agents carry out the same request at different costs. The informed simulation reduces the number of redundancies but slightly increases the average cost for the same reason. We can notice that the use of collaboration strategies reduces the number of redundancies much further, but the number of observations decreases owing to the constraint created by commitments.5
5 For instance, the rank-1 expert agent withdraws due to the altruist strategy and the cost increases by e in the worst case; then the rank-2 expert agent withdraws due to the altruist strategy and the cost increases by e in the worst case. So the cost has increased by 2e in the worst case.
Table 1: Scenario 1 - the 40-request simulation results
Table 2: Scenario 2 - the 74-request simulation results
Furthermore, the average cost is increased too. Nevertheless, each avoided redundancy corresponds to resources saved for realizing on-board generated requests during the simulation.
In the case of a high mutual exclusion and conflict rate (Table 2), noteworthy differences exist between the centralized and isolated simulations. All informed simulations (with or without strategies) perform more observations than isolated agents do, with fewer redundancies. Likewise, all policies reduce the average cost, contrary to the first scenario. The drastic policy is interesting because not only does it perform more observations than isolated agents do, but it also greatly reduces the average cost with the lowest number of redundancies. As far as the number of exchanged
messages is concerned, there are 12 meetings between 2 agents during the simulations. In the worst case, at each meeting each agent sends N pieces of information on the requests plus 3N pieces of information on the agents' intentions plus 1 message for the end of communication, where N is the total number of requests. Consequently, 3864 messages are exchanged in the worst case for the 40-request simulations and 7128 messages for the 74-request simulations. These numbers are much higher than the number of messages actually exchanged. We can notice that the informed simulations, which communicate only requests, allow an even greater reduction.
In the general case, using communication and strategies reduces redundancies and saves resources but increases the average cost: if a request is realized, agents that know it do not plan it, even if its cost could be reduced afterwards. This is not the case with isolated agents. Using strategies on lightly constrained problems such as scenario 1 constrains the agents too much and causes an additional cost increase. Strategies are more useful on highly constrained problems such as scenario 2: although agents constrain themselves on the number of observations, the average cost is greatly reduced.
8. CONCLUSION AND FUTURE WORK
An observation satellite swarm is a cooperative multi-agent system with strong constraints in terms of communication and computation capabilities. In order to increase the global mission outcome, we propose a hybrid approach: deliberative for individual planning and reactive for collaboration. Agents reason both on requests to carry out and on the other agents' intentions (candidacies). An epidemic communication protocol uses all communication opportunities to update this information. Reactive decision rules (strategies) are proposed to solve conflicts that may arise between agents. Through the tuning of the strategies (α, e and λ) and their flexible interlacing within the protocol,
it is possible to coordinate agents without additional communication: the number of exchanged messages remains nearly the same between the informed simulations and the simulations implementing strategies. Simulations have been run to experimentally validate these protocols, and the first results are promising but raise many questions. What is the trade-off between the constraint rate of the problem and the need for strategies? To what extent are the number of redundancies and the average cost affected by the tuning of the strategies? Future work will focus on new strategies to solve new conflicts, especially those arising when the independence assumption between the requests is relaxed. A second point is to take into account the complexity of the initial planning problem: indeed, the chosen planning approach results in a combinatorial explosion with large sets of requests, so an anytime or a fully reactive approach has to be considered for more complex problems.

Collaboration Among a Satellite Swarm
ABSTRACT
The paper deals with on-board planning for a satellite swarm via communication and negotiation. We aim at defining individual behaviours that result in a global behaviour that meets the mission requirements. We will present the formalization of the problem, a communication protocol, a solving method based on reactive decision rules, and first results.
1. INTRODUCTION
Much research has been undertaken to increase satellite autonomy, such as enabling satellites to solve by themselves problems that may occur during a mission, to adapt their behaviour to new events, and to transfer planning on-board; even if the development cost of such a satellite is increased, there is an increase in performance and mission possibilities [34]. Moreover, the use of satellite swarms - sets of satellites flying in formation or in constellation around the Earth - makes it possible to consider joint activities, to distribute skills and to ensure robustness. Multi-agent architectures have been developed for satellite swarms [36, 38, 42], but strong assumptions on deliberation and communication capabilities are made in order to build a collective plan. Mono-agent planning [4, 18, 28] and task allocation [20] are widely studied. In a multi-agent context, agents that build a collective plan must be able to change their goals, reallocate resources and react to environment changes and to the other agents' choices. A coordination
step must be added to the planning step [40, 30, 11]. However, this step needs high communication and computation capabilities. For instance, coalition-based [37], contract-based [35] and negotiation-based [25] mechanisms need these capabilities, especially in dynamic environments. In order to relax communication constraints, coordination based on norms and conventions [16] or on strategies [17] is considered. Norms constrain agents in their decisions in such a way that the possibilities of conflict are reduced. Strategies are private decision rules that allow an agent to draw benefit from its knowledge of the world without communication. However, communication is still needed in order to share information and build collective conjectures and plans. Communication can be achieved through a stigmergic approach (via the environment) or through message exchange and a protocol. A protocol defines interactions between agents and cannot be uncoupled from its goal, e.g. exchanging information, finding a trade-off, allocating tasks and so on. Protocols can be viewed as an abstraction of an interaction [9]. They may be represented in a variety of ways, e.g.
AUML [32] or Petri-nets [23]. As protocols are originally designed for a single goal, some works aim at endowing them with flexibility [8, 26]. However, an agent cannot always communicate with another agent, or the communication possibilities are restricted to short time intervals.
The objective of this work is to use intersatellite connections, called InterSatellite Links or ISL, in an Earth observation constellation inspired by the Fuego mission [13, 19], in order to increase the system's reactivity and to improve the global mission return through a hybrid agent approach. At the individual level, agents are deliberative in order to create a local plan, but at the collective level, they use normative decision rules in order to coordinate with one another. We will present the features of our problem, a communication protocol, a method for request allocation and, finally, collaboration strategies.
2. PROBLEM FEATURES
An observation satellite constellation is a set of satellites in various orbits whose mission is to take pictures of various areas on the Earth's surface, for example hot points corresponding to volcanoes or forest fires. The ground sends the constellation observation requests characterized by their geographical positions, priorities specifying whether the requests are urgent or not, the desired dates of observation and the desired dates for data downloading. The satellites are equipped with a single observation instrument whose mirror can roll to shift the line of sight. A minimum duration is necessary to move the mirror, so requests that are too close together cannot be realized by the same satellite. The satellites are also equipped with a detection instrument pointed forward that detects hot points and generates observation requests on-board. The constellations that we consider are such that the orbits of the various satellites meet around the poles. A judicious positioning of the satellites in their orbits makes it possible to consider that two (or
more) satellites meet in the polar areas, and thus can communicate without ground intervention. Intuitively, intersatellite communication increases the reactivity of the constellation, since each satellite is within direct view of a ground station (and thus can communicate with it) only 10% of the time. The features of the problem are the following:
-- 3 to 20 satellites in the constellation;
-- pair communication around the poles;
-- no ground intervention during the planning process;
-- asynchronous requests with various priorities.
3. A MULTI-AGENT APPROACH
As each satellite is a single entity that is a piece of the global swarm, a multi-agent system is well suited to model satellite constellations [39]. This approach has been developed through the ObjectAgent architecture [38], TeamAgent [31], DIPS [14] or Prospecting ANTS [12].
3.1 Satellite swarm
An observation satellite swarm is a multi-agent system where the requests do not have to be carried out in a fixed order and the agents (the satellites) do not have any physical interaction. Carrying out a request cannot prevent another agent from carrying out another one, or even the same one. At most, there will be a waste of resources. Formally, a swarm is defined as follows:
In the sequel, we will assume that the agents share a common clock. For a given agent and a given time, the vicinity relation returns the set of agents with whom it can communicate at that time. As we have seen previously, this relation exists when the agents meet.
3.2 Requests
Requests are the observation tasks that the satellite swarm must achieve. As we have seen previously, the requests are generated both on the ground and on board. Each agent is allocated a set of initial requests. During the mission, new requests are sent to the agents by the ground, or the agents can generate new requests by themselves. Formally, a request is defined as follows:
The priority prio(R) of a request represents how important it is for the user, namely
the request sender, that the request should be carried out. Thus a request with a high priority must be realized at all costs. In our application, priorities range between 1 and 5 (the highest). In the sequel, we will note R^t_si the set of requests known by agent si at time t ∈ T. For each request R in R^t_si, there is a cost value, noted cost_si(R) ∈ R, representing how far from the desired date of observation tbeg(R) agent si can realize R. So, the closer an agent can carry out a request to the desired date of observation, the lower the cost value.
3.3 Candidacy
An agent may have several intentions about a request, i.e. for a request R, an agent si may:
-- propose to carry out R: si may realize R;
-- commit to carry out R: si will realize R;
-- not propose to carry out R: si may not realize R;
-- refuse to carry out R: si will not realize R.
We can notice that these four propositions are modalities of the proposition C: si realizes R.
3.4 Problem formalization
Our problem is then the following: we would like each agent to build request allocations (i.e. a plan) dynamically such that, if these requests are carried out, their number is the highest possible or the global cost is minimal. More formally:
DEFINITION 4 (PROBLEM). Let E be a swarm. Agents si in E must build a set {A^t_s1 ... A^t_sn}, where A^t_si ⊆ R^t_si, such that, if the requests in these sets are carried out, their number is maximal or the global cost is minimal.
As the choices of an agent will be influenced by the choices of the others, it is necessary that the agents reason on common knowledge about the requests. It is thus necessary to set up an effective communication protocol.
4. COMMUNICATION PROTOCOL
Communication is commonly associated with cooperation. Deliberative agents need communication to cooperate, whereas this is not necessarily the case for reactive agents [2, 41]. Gossip protocols [22, 24], or epidemic protocols, are used to share knowledge with multicast. Each agent selects a set of agents at a given time in order to share information. The speed of information transmission is contingent upon the length of the discussion round.
4.1 The corridor metaphor
The suggested protocol is inspired by what we call the corridor metaphor, which represents the satellite swarm problem well. Various agents go to and fro in a corridor where objects to collect appear from time to time. Two objects that are too close to each other cannot be collected by the same agent, because the action takes some time and an agent cannot stop its movement. In order to optimize the collection, the agents can communicate when they meet.
Figure 2: Time t'
EXAMPLE 1. Let us suppose three agents, s1, s2, s3 and an object A to be collected. At time t, s1 did not collect A and s2 does not know that A exists. When s1 meets s2, it communicates the list of the objects it knows, that is to say A.
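This exchange, together with the inference at time t' described next, can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: the `Agent` class, the `meet` method and the `passed_positions` argument are assumptions made for the illustration.

```python
class Agent:
    """An agent in the corridor, holding beliefs about collectable objects."""

    def __init__(self, name):
        self.name = name
        self.believed_objects = set()  # objects believed still collectable

    def meet(self, other, passed_positions=()):
        """Epidemic step: merge the other agent's object list into ours.

        If we believed in an object whose position the other agent has
        passed, and that object is absent from its list, we infer that the
        object has already been collected and drop it from our plan.
        """
        incoming = set(other.believed_objects)
        for obj in list(self.believed_objects):
            if obj in passed_positions and obj not in incoming:
                self.believed_objects.discard(obj)
        self.believed_objects |= incoming


s1, s2, s3 = Agent("s1"), Agent("s2"), Agent("s3")
s1.believed_objects = {"A"}

s2.meet(s1)                            # time t: s1 tells s2 about A
assert "A" in s2.believed_objects

# time t': s3 has collected A, so A is absent from its list; since s3
# passed A's position, s2 infers that A is gone and withdraws it
s2.meet(s3, passed_positions={"A"})
assert "A" not in s2.believed_objects
```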
s2 now believes that A exists and prepares to collect it. It is not certain that A is still there, because another agent may have passed before s2, but s2 can take it into account in its plan. At time t', s3 collects A. In the vicinity of s2, s3 communicates its list of objects, and A is not in the list. As both agents meet in a place where it is possible for s3 to have collected A, the object would have been in the list if it had not been collected. s2 can thus believe that A does not exist anymore and can withdraw it from its plan.
4.2 Knowledge to communicate
In order to build their plans, agents need to know the current requests and the other agents' intentions. For each agent, two kinds of knowledge to maintain are defined:
-- requests (Definition 2);
-- candidacies (Definition 3).
In the sequel, we will note K^t_si the knowledge of agent si at time t ∈ T.
4.3 An epidemic protocol
From the corridor metaphor, we can define a communication protocol that benefits from all the communication opportunities. An agent notifies any change within its knowledge, and each agent must propagate these changes to its vicinity, who update their knowledge bases and reiterate the process. This protocol is a variant of epidemic protocols [22] inspired by work on overhearing [27].
PROTOCOL 1 (COMMUNICATION). Let si be an agent
b.
if sj acknowledges receipt of K, SK ← SK ∪ {sj}.
-- ∀ K ∈ K^t_si received by sj at time t:
1. sj updates K^t_sj with K;
2. sj acknowledges receipt of K to si.
Two kinds of updates exist for an agent:
-- an internal update, from a knowledge modification by the agent itself;
-- an external update, from received knowledge.
For an internal update, updating K depends on data(K): a candidacy C is modified when its modality changes, and a request R is modified when an agent realizes it. When K is updated, the timestamp is updated too.
PROTOCOL 2 (INTERNAL UPDATE). Let si ∈ S be an agent. An internal update from si at time t ∈ T is performed:
-- when knowledge K is created;
-- when data(K) is modified.
In both cases:
1. tK ← t;
2. SK ← {si}.
For an external update, only the most recent knowledge K is taken into account, because timestamps change only when data(K) is modified. If K is already known, it is updated if its content or the set of agents knowing it has been modified. If K is unknown, it is simply added to the agent's knowledge.
PROTOCOL 3 (EXTERNAL UPDATE). Let si be an agent and K the knowledge transmitted by agent sj. ∀ K ∈ K, the external update at time t ∈ T is defined as follows:
Figure 1: Time t
If the incoming information has a more recent timestamp, it means that the receiver agent has obsolete information. Consequently, it replaces the old information with the new one and adds itself to the set of agents knowing K (1.a.i). If both timestamps are the same, both pieces of information are the same. Only the set of agents knowing K may have changed, because agents si and sj may have already transmitted the information to other agents. Consequently, the sets of agents knowing K are unified (1.a.ii).
4.4 Properties
Communication between two agents when they meet is made up of the conjunction of Protocol 1 and Protocol 3. In the sequel, we call this conjunction a communication occurrence.
4.4.1 Convergence
The structure of the transmitted information and the internal update mechanism (Protocol 2) allow the process to converge. Indeed, a request R can only be in two states (realized or not), given by the boolean bR. Once an internal update is made - i.e.
R is realized - R cannot go back to its former state. Consequently, an internal update can only be performed once. As far as candidacies are concerned, updates only modify the modalities, which may change many times and go back to previous states. It then seems that livelocks2 would be likely to appear. However, a candidacy C is associated with a request and a realization date (the deadline given by obsC). After the deadline, the candidacy becomes meaningless. Thus, for each candidacy, there exists a date t ∈ T after which changes propagate no more.
4.4.2 Complexity
It has been shown that in a set of N agents where a single one has a new piece of information, an epidemic protocol takes O(log N) steps to broadcast the information [33]. During one step, each agent has a communication occurrence. As agents do not have much time to communicate, such a communication occurrence must not have too big a temporal complexity, which we can prove formally:
PROPOSITION 1. The temporal complexity of a communication occurrence at time t ∈ T between two agents si and sj is, for agent si, O(|R^t_si| · |R^t_sj| · |S|^2).
PROOF 1. In the worst case, each agent sk sends |R^t_sk| pieces of information on requests and |R^t_sk| · |S| pieces of information on candidacies (one candidacy for each request and for each agent of the swarm). Let si and sj be two agents meeting at time t ∈ T.
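This per-meeting counting argument can be checked numerically against the worst-case message totals reported in the experiments (3864 and 7128 messages). A minimal check in Python, illustrative only: the function name and the end-of-communication message are taken from the experiments section, and |S| = 3 satellites and 12 meetings are the experimental settings, not part of the proposition.

```python
# Worst-case messages per communication occurrence: each agent sends N
# request records, N * n_agents candidacy records (one per request per
# agent of the swarm) and 1 end-of-communication message.
def worst_case_messages(n_requests, n_agents=3, n_meetings=12):
    per_agent = n_requests + n_requests * n_agents + 1
    return 2 * n_meetings * per_agent  # two agents per meeting

print(worst_case_messages(40))  # 3864, as in the 40-request scenario
print(worst_case_messages(74))  # 7128, as in the 74-request scenario
```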
For agent si, the complexity of Protocol 1 is O(|R^t_si| · |S|). For each received piece of information, agent si uses Protocol 3 and searches through its knowledge bases: |R^t_si| pieces of information for each received request and |R^t_si| · |S| pieces of information for each received candidacy. Consequently, the complexity of Protocol 3 is O(|R^t_si| · |R^t_sj| · |S|^2).
2 Communicating endlessly without converging.
5. ON-BOARD PLANNING
In space contexts, [5, 21, 6] present multi-agent architectures for on-board planning. However, they assume high communication and computation capabilities [10]. [13] relaxes these constraints by splitting the planning modules: on the one hand, satellites have a planner that builds plans over a large horizon; on the other hand, they have a decision module that enables them to choose whether or not to realize a planned observation. In an uncertain environment such as that of satellite swarms, it may be advantageous to delay the decision until the last moment (i.e. the realization date), especially if there are several possibilities for a given request. The main idea in contingency planning [15, 29] is to determine the nodes in the initial plan where the risks of failure are highest and to incrementally build contingency branches for these situations.
5.1 A deliberative approach
Inspired by both approaches, we propose to build allocations made up of a set of unquestionable requests and a set of uncertain disjunctive requests on which a decision will be made at the end of the decision horizon. This horizon corresponds to the request realization date. Proposing such partial allocations allows conflicts to be solved locally without propagating them through the whole plan. In order to build the agents' initial plans, let us assume that each agent is equipped with an on-board planner. A plan is defined as follows:
DEFINITION 6 (PLAN). Let si be an agent, R^t_si a set of requests and C^t_si a set of candidacies. Let us define three sets:
-- the set of potential
requests:
A plan A^t_si generated at time t ∈ T is a set of requests such that Rm ⊆ A^t_si ⊆ Rp and ¬∃ R ∈ Rg such that R ∈ A^t_si. Building a plan generates candidacies.
DEFINITION 7 (GENERATING CANDIDACIES). Let si be an agent and A^t1_si a (possibly empty) plan at time t1. Let A^t2_si be the plan generated at time t2 with t2 > t1.
5.2 Conflicts
When two agents compare their respective plans, some conflicts may appear. It is a matter of redundancies between allocations on a given request, i.e. several agents stand as candidates to carry out this request. Whereas such redundancies may sometimes be useful to ensure the realization of a request (the realization may fail, e.g. because of clouds), they may also lead to a loss of opportunity. Consequently, conflict has to be defined:
DEFINITION 8 (CONFLICT). Let si and sj be two agents with, at time t, candidacies Csi and Csj respectively (sCsi = si and sCsj = sj). si and sj are in conflict if and only if:
-- RCsi = RCsj;
-- modCsi and modCsj ∈ {❑, O}.
Let us notice that the agents have the means to know whether they are in conflict with another agent during the communication process. Indeed, they exchange information not only concerning their own plans but also concerning what they know about the other agents' plans. Not all conflicts have the same strength, meaning that they can be solved with more or less difficulty according to the agents' communication capacities. A conflict is soft when the agents concerned can communicate before one or the other carries out the request in question. A conflict is hard when the agents cannot communicate before the realization of the request. A conflict is soft if there exists a chain of agents between the two agents in conflict such that information can propagate before both agents realize the request. If this chain does not exist, it means that
the agents in conflict cannot communicate, directly or indirectly. Consequently, the conflict is hard. In satellite swarms, the geographical positions of the requests are known, as well as the satellite orbits. So each agent is able to determine whether a conflict is soft or hard. We can define the conflict cardinality: the conflict cardinality corresponds to the number of agents that are candidates or committed to the same request. Thus, a conflict has a cardinality of at least 2.
6. COLLABORATION STRATEGIES
In space contexts, communication time and agents' computing capacities are limited. When they are in conflict, the agents must find a local agreement (instead of an expensive global agreement) by using the conflict in order to increase the number of realized requests, to decrease the mission return time, to increase the quality of the pictures taken or to make sure that a request is carried out.
EXAMPLE 2. Let us suppose a conflict on request R between agents si and sj. We would like the most expert agent, i.e.
the agent that can carry out the request under the best conditions, does it. Let us suppose si is the expert: si must allocate R to itself. It remains to determine what sj must do: sj can either select a substitute for R in order to increase the number of requests potentially realized, or do nothing in order to preserve resources, or allocate R to itself to ensure redundancy.
Consequently, we can define collaboration strategies dedicated to conflict solving. A strategy is a private (namely intrinsic to an agent) decision process that allows an agent to make a decision on a given object. In our application, strategies specify what to do with redundancies.
6.1 Cost and expertise
In our application, cost is linked to the realization dates. Carrying out a request consumes the agents' resources (e.g. on-board energy, memory). Consequently, an observation has a cost for each agent which depends on when it is realized: the closer the realization date to the desired date of observation, the lower the cost. From this cost notion, we can formally define a notion of expertise between two agents: the expert agent is the one that can realize the request at the lower cost.
DEFINITION 12 (EXPERTISE). Let si and sj ∈ S be two agents and R a request. Agent si is an expert for R if and only if costsi(R) ≤ costsj(R).
6.2 Soft conflict solving strategies
Three strategies are proposed to solve a soft conflict. The expert strategy means that the expert agent maintains its candidacy whereas the other one gives up. The altruist strategy means that the agent that can download first3, provided the cost increase is negligible, maintains its candidacy whereas the other one gives up. The insurance strategy means that both agents maintain their candidacies in order to ensure redundancy.
STRATEGY 1 (EXPERT). Let si and sj be two agents in conflict on their respective candidacies Csi and Csj such that si is the expert agent. The expert strategy is: modCsi = ❑ and
modCsj = ¬❑.
STRATEGY 2 (ALTRUIST). Let si and sj be two agents in conflict on their respective candidacies Csi and Csj such that si is the expert agent. Let e ∈ R+ be a threshold on the cost increase. The altruist strategy is: if dnlCsi > dnlCsj and |costsi(R) − costsj(R)| ≤ e then modCsi = ¬❑ and modCsj = ❑.
3 i.e. the agent using memory resources during a shorter time.
STRATEGY 3 (INSURANCE). Let α be a threshold on the conflict cardinality. The insurance strategy is: if cardc(R) ≤ α then modCsi = O and modCsj = O.
In the insurance strategy, redundancy triggering is adjusted by the conflict cardinality cardc(R). The reason is the following: the more redundancies on a given request, the less a new redundancy on this request is needed.
The three strategies are implemented in a negotiation protocol dedicated to soft conflicts. The protocol is based on a subsumption architecture [7] over the strategies: the insurance strategy (1) is the major strategy because it ensures the redundancy for which the swarm is implemented; the altruist strategy comes next (2) in order to allocate resources so as to enhance the mission return; finally, the expert strategy, which has no preconditions (3), improves the cost of the plan.
PROTOCOL 4 (SOFT CONFLICT SOLVING). Let R be a request in a soft conflict between two agents si and sj, with respective candidacies Csi and Csj. Let si be the expert agent. The agents apply strategies as follows:
1. insurance strategy (α)
2. altruist strategy (e)
3. expert strategy
The choice of parameters α and e allows the protocol results to be adjusted. For example, if e = 0, the altruist strategy is never used.
6.3 Hard conflict solving strategies
In case of a hard conflict, the agent that is not aware of it will necessarily realize the request (with success or not). Consequently, a redundancy is useful only if the other agent is more expert or if the priority of the request is high enough to need redundancy. Therefore, we will use the
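The subsumption ordering of Protocol 4 can be sketched as a simple decision function. This is a hedged sketch, not the paper's protocol: the threshold values, the interpretation of the download dates (smaller = earlier), and the "keep"/"withdraw" labels standing in for the candidacy modes are all assumptions.

```python
def solve_soft_conflict(cost_i, cost_j, dnl_i, dnl_j, cardinality,
                        alpha=3, e=0.5):
    """Return (decision for si, decision for sj) for a soft conflict,
    trying insurance, then altruist, then expert, in that order."""
    # 1. Insurance: keep the redundancy when few agents already cover
    #    the request (the more redundancies, the less a new one is needed).
    if cardinality <= alpha:
        return ("keep", "keep")
    # 2. Altruist: the agent that can download first keeps the request,
    #    provided the cost increase is negligible (at most e).
    if abs(cost_i - cost_j) <= e:
        return ("keep", "withdraw") if dnl_i < dnl_j else ("withdraw", "keep")
    # 3. Expert (no precondition): the lower-cost agent keeps the request.
    return ("keep", "withdraw") if cost_i <= cost_j else ("withdraw", "keep")
```

With e = 0 the altruist branch fires only on exact cost ties, matching the remark that e = 0 effectively disables it.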
insurance strategy (refer to Section 6.2) and define a competitive strategy. The latter is defined for two agents, si and sj, in a hard conflict on a request R. Let si be the agent that is aware of the conflict4.
STRATEGY 4 (COMPETITIVE). Let λ ∈ R+ be a cost threshold. The competitive strategy is: if costsi(R)

There can be multiple proxy UDDI registries in this architecture. The advantage of this is to introduce distributed interactions between the UDDI clients and registries. Organizations can also decide what information is available from the local registries by implementing policies at the proxy registry.
3.1 Sequence of Operations
In this section, we demonstrate the sequence of operations for three crucial scenarios: adding a new local registry, inserting a new service, and querying for a service. Other operations, such as deleting a registry or deleting a service, are similar and for the sake of brevity are omitted here.
Figure 2: Sequence Diagram - Add New Local Registry
Add a New Local UDDI Registry
Figure 2 contains a sequence diagram illustrating how a new UDDI registry is added to the network of UDDI registries. The new registry registers itself with its proxy registry. The proxy registry in turn queries the new registry for all services that it has stored in its databases and registers each of those entries with the DHT.
Figure 3: Sequence Diagram - Add New Service
Add a New Service
The sequence diagram depicted in Figure 3 highlights how a client publishes a new service to the UDDI registry. In order to interact with the registry, a client has to know how to contact its local proxy registry. It then publishes a service with the proxy registry, which in turn
publishes the service with the local UDDI registry and receives the UDDI key of the registry entry. Then new key-value pairs are published in the DHT, where each key is obtained by hashing a searchable keyword of the service and the value consists of the query URL of the registry and the UDDI key.
Figure 4: Sequence Diagram - Query for a Service
Query a Service
Figure 4 shows how a client queries the UDDI registry for a service. Once again, the client needs to know how to contact its local proxy registry, where it invokes the query service request. The proxy registry in turn contacts one of the DHT nodes to issue DHT queries using the search terms. As explained earlier in the context of Figure 1, multiple values might be retrieved from the DHT. Each value includes the query URL of a registry and the unique UDDI key of a matching service in that registry. The proxy then contacts the matching registries and waits for the responses to lookup operations using the corresponding UDDI keys. Upon receiving the responses, the proxy registry collates them and returns the aggregated set of services to the client.
We now illustrate these operations with an example. Consider a client contacting its local proxy to publish a service called Computer Accessories. The proxy follows the steps in Figure 3 to add the service to the UDDI 1 registry, and also publishes two entries in the DHT. The keys of these entries are obtained by hashing the words computer and accessories respectively. Both entries have the same value, consisting of the query URL of this registry and the unique UDDI key returned by the registry for this service. Next, consider another client publishing a service called Computer Repair through its proxy to the UDDI 2 registry. A similar process results in two more entries being added to the DHT. Recall that our DHT deployment can have multiple entries with the same key. If we follow the steps in Figure 4 for a client sending a query to its proxy
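The publish and query flows of Figures 3 and 4 can be sketched with the DHT modeled as a single in-process table. This is a toy illustration, not the DUDE implementation: the registry URLs and UDDI keys are made up, and a real deployment would hash onto distributed Bamboo nodes rather than a local dict.

```python
import hashlib
from collections import defaultdict

class ToyDHT:
    """Single-process stand-in for the DHT: key -> list of values."""
    def __init__(self):
        self.table = defaultdict(list)

    @staticmethod
    def key(keyword: str) -> str:
        # DHT key = hash of a searchable keyword of the service.
        return hashlib.sha1(keyword.encode()).hexdigest()

    def put(self, keyword, value):
        self.table[self.key(keyword)].append(value)

    def get(self, keyword):
        return self.table[self.key(keyword)]

def publish(dht, registry_url, uddi_key, service_name):
    # One DHT entry per keyword; each value pairs the registry's query
    # URL with the UDDI key that registry returned for the service.
    for word in service_name.lower().split():
        dht.put(word, (registry_url, uddi_key))

def query(dht, term):
    # Returns (registry URL, UDDI key) pairs; the proxy would then do a
    # fast lookup at each matching registry and collate the responses.
    return dht.get(term.lower())

dht = ToyDHT()
publish(dht, "http://uddi1.example.com/inquiry", "key-CA", "Computer Accessories")
publish(dht, "http://uddi2.example.com/inquiry", "key-CR", "Computer Repair")
```

Querying for computer now retrieves both services, each tagged with its own registry, so the proxy contacts only the two relevant registries instead of searching all of them.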
using the word computer, we see that the DHT is queried with the hash of the word computer as the key. This retrieves the query URLs and respective UDDI keys of both services mentioned in this example. The proxy can then do a simple lookup operation at both the UDDI 1 and UDDI 2 registries. It is clear that as the number of UDDI registries and clients increases, this process of lookup at only the relevant UDDI registries is more scalable than doing a full search using the word computer at all UDDI registries.
4. IMPLEMENTATION
In this section, we describe our implementation, which is currently deployed on PlanetLab [9]. PlanetLab is an open, globally distributed platform for developing, deploying, and accessing network services. It currently has 527 machines, hosted by 249 sites, spanning over 25 countries. PlanetLab machines are hosted by research/academic institutions as well as industrial companies; France Telecom and HP are two of the major industry supporters of PlanetLab. Every PlanetLab host machine is connected to the Internet and runs a common software package, including a Linux-based operating system that supports server virtualization, so users can develop and experiment with new services under real-world conditions. The advantage of using PlanetLab is that we can test the DUDE architecture under real-world conditions with a large-scale, geographically dispersed node base.
Due to the availability of jUDDI, an open source UDDI V2 registry (http://www.juddi.org), and the lack of an existing, readily available UDDI V3 registry, a decision to use UDDI V2 was made. The standardization of UDDI V3 is recent, and we intend to extend this work to support UDDI V3 and subsequent versions in the future. The proxy registry is implemented by modifying the jUDDI source to enable publishing, querying and deleting service information from a DHT. Furthermore, it also allows querying multiple registries and collating the responses using UDDI4j [13]. For the DHT
implementation, we use the Bamboo DHT code [11]. The Bamboo DHT allows multiple proxy registries to publish and delete service information from their respective UDDI registries, as well as to query for services from all the registries. The proxy uses the service name as input to the DHT's hash function to get the DHT key. The value that is stored in the DHT under this key is the URI of the registry along with the UDDI key of the service. This ensures that when the proxy registry queries for services with a certain name, it gets back the URIs and UDDI keys of the matching entries. Using these returned results, the proxy can do fast lookup operations at the respective UDDI registries. The UDDI keys make it unnecessary to repeat the search at the UDDI registries with the service name.
We have so far described the process of exact match on the service name. However, there are additional types of search that must be supported. Firstly, the search requested could be case-insensitive. To support that, the proxy registry has to publish the same service twice: once using the name exactly as entered in the UDDI registry, and once with the name converted to all lower-case letters. To do a case-insensitive search, the proxy registry simply has to convert the query string into lower-case letters. Secondly, the user could query based on a prefix of a service name. Indeed, this is the default behavior of search in UDDI; in other words, a wildcard is implicit at the end of the service name being searched. To support this efficiently in the DHT, our proxy registries take prefixes of varying lengths from the service name and publish the URI and UDDI key multiple times, once under each prefix. For example, the prefix sizes chosen in one deployment might be 5, 10, 15 and 20 characters. If a search for the first 12 characters of a service name is submitted, the proxy registry will query the DHT with the first 10 characters of the search string, and then refine the
search result to ensure that the match extends to the 12th character. If the search string has fewer than 5 characters, and the search is for a prefix rather than an exact match, the DHT cannot help unless every service is published in the DHT with a prefix of length 0. Using this null prefix would send a copy of every advertised service to the DHT node to which the hash of the null prefix maps. Since this can lead to load imbalance, a better solution might be to use the DHT only to get a list of all UDDI registries, and send the search to all of them. Thirdly, the service name being searched can be a regular expression, such as one with embedded wildcard characters. For example, a search for Garden%s should match both Garden Supplies and Gardening Tools. This is treated similarly to the previous case: the DHT is queried with the longest available literal prefix, and the returned results are refined to ensure that the regular expression matches.
Figure 5 shows the network diagram for our implementation. There are two proxy UDDI and jUDDI registry pairs. Consider a client which contacts the UDDI proxy on grouse.hpl.hp.com. The proxy does a lookup of the DHT using the query string or a prefix. This involves contacting one of the DHT nodes, such as pli1-br3.hpl.hp.com, which serves as the gateway to the DHT for grouse.hpl.hp.com, based on the latter's configuration file. The DHT node may then route the query to one of the other DHT nodes, which is responsible for the DHT key that the query string maps to. The results of the DHT lookup return to pli1-br3.hpl.hp.com, which forwards them to grouse.hpl.hp.com. The results may include a few services from each of the jUDDI registries. So the proxy registry performs lookup operations at both planetlab1 and planetlab2.rdfrancetelecom.com for their respective entries listed in the search results. The responses to these lookups are collated by the proxy
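The prefix-publishing and refinement scheme above can be sketched as follows. This is a minimal sketch under stated assumptions: the index is a local dict standing in for the DHT, names are lowercased (covering the case-insensitive variant), and storing the full name in the value stands in for the refinement a real proxy would perform against the retrieved entries; the URLs and keys are hypothetical.

```python
from collections import defaultdict

PREFIX_SIZES = (5, 10, 15, 20)   # example sizes from the text

# prefix -> list of (registry URL, UDDI key, full lowercased name)
index = defaultdict(list)

def publish_prefixes(registry_url, uddi_key, name):
    """Publish the (URL, UDDI key) pair once under each prefix length."""
    lowered = name.lower()
    for n in PREFIX_SIZES:
        if len(lowered) >= n:
            index[lowered[:n]].append((registry_url, uddi_key, lowered))

def prefix_search(query):
    """Look up the longest published prefix not exceeding the query,
    then refine candidates to the full query length."""
    q = query.lower()
    usable = [n for n in PREFIX_SIZES if n <= len(q)]
    if not usable:
        return []   # shorter than the smallest prefix: the DHT cannot help
    candidates = index[q[:max(usable)]]
    return [(url, key) for url, key, name in candidates if name.startswith(q)]
```

A 12-character query is thus served from the 10-character prefix bucket and refined to the 12th character, exactly as described for the deployment with prefix sizes 5, 10, 15 and 20.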
registry and returned to the client.
Figure 5: Network Diagram
5. RELATED WORK
A framework for QoS-based service discovery in grids has been proposed in [18]. UDDIe, an extended UDDI registry for publishing and discovering services based on QoS parameters, is proposed in [19]. Our work is complementary, since we focus on how to federate the UDDI registries and address the scalability issue with UDDI. The DUDE proxy can publish the service properties supported by UDDIe in the DHT and support range queries using techniques proposed for such queries on DHTs. We can then deliver the scalability benefits of our current solution to both UDDI and UDDIe registries. Discovering services meeting QoS and price requirements has been studied in the context of a grid economy, so that grid schedulers can use various market models such as commodity markets and auctions; the Grid Market Directory [20] was proposed for this purpose. In [12], the authors present an ontology-based matchmaker: resource and request descriptions are expressed in RDF Schema, a semantic markup language, and matchmaking rules are expressed in TRIPLE, a language based on Horn Logic. Although our current implementation focuses on UDDI version 2, in the future we will consider semantic extensions to UDDI, WS-Discovery [16] and other Grid computing standards such as the Monitoring and Discovery Service (MDS) [10]. The simplest extension of our work could involve using the DHT to do an initial syntax-based search to identify the local registries that need to be contacted. The Proxy Registry can then contact these registries, which do semantic matchmaking to identify their matches; these matches are merged at the Proxy Registry and returned to the client.
The convergence of grid and P2P computing has been explored in [5]. GridVine [2] builds a logical semantic overlay on top of a physical layer consisting of P-Grid [1], a structured overlay based on a distributed search tree that uses prefix-based routing and
changes the overlay paths as part of the network maintenance protocol to adapt to load in different parts of the keyspace. A federated UDDI service [4] has been built on top of the PlanetP [3] publish-subscribe system for unstructured P2P communities. The focus of that work has been on the manageability of the federated service: the UDDI service is treated as an application service to be managed in their framework. So they do not address the issue of scalability in UDDI, and instead use simple replication. In [21], the authors describe a UDDI extension (UX) system that launches a federated query only if locally found results are not adequate. While the UX Server is positioned as an intermediary, similarly to the UDDI Proxy described in our DUDE framework, it focuses more on the QoS framework and does not attempt to implement a seamless federation mechanism such as our DHT-based approach. In [22], D2HT describes a discovery framework built on top of a DHT; whereas we have chosen to run UDDI on top of the DHT, D2HT runs the Agent Management System (AMS) and Directory Facilitator (DF) on top of the DHT.
6. CONCLUSIONS AND FUTURE WORK
In this paper, we have described a distributed architecture to support large-scale discovery of web services. Our architecture enables organizations to maintain autonomous control over their UDDI registries while at the same time allowing clients to query multiple registries simultaneously. The clients are oblivious to the transparent proxy approach we have adopted and get richer and more complete responses to their queries. Based on initial prototype testing, we believe that the DUDE architecture can support effective distribution of UDDI registries, thereby making UDDI more robust and also addressing its scaling issues. The paper has addressed the scalability issues with UDDI, but does not preclude the application of this approach to other service discovery mechanisms. An example of another service discovery mechanism that could
benefit from such an approach is the Globus Toolkit's MDS. Furthermore, we plan to investigate other aspects of grid service discovery that extend this work, including the ability to subscribe to resource/service information, the ability to maintain soft state, and the ability to provide a variety of views for different purposes. In addition, we plan to revisit the service APIs for a Grid Service Discovery solution, leveraging the available solutions and specifications as well as the work presented in this paper.
7. REFERENCES
[1] K. Aberer, P. Cudré-Mauroux, A. Datta, Z. Despotovic, M. Hauswirth, M. Punceva, and R. Schmidt. P-Grid: A self-organizing structured P2P system. ACM SIGMOD Record, 32(3), 2003.
[2] K. Aberer, P. Cudré-Mauroux, M. Hauswirth, and T. van Pelt. GridVine: Building Internet-Scale Semantic Overlay Networks. In Proceedings of the 3rd ISWC, Hiroshima, Japan, 2004.
[3] F. M. Cuenca-Acuna, C. Peery, R. P. Martin, and T. D. Nguyen. PlanetP: Using Gossiping to Build Content Addressable Peer-to-Peer Information Sharing Communities. In Proceedings of the 12th International Symposium on HPDC, June 2003.
[4] F. M. Cuenca-Acuna and T. D. Nguyen. Self-Managing Federated Services. In Proceedings of the 23rd IEEE International SRDS, Florianópolis, Brazil, 2004.
[5] I. Foster and A. Iamnitchi. On Death, Taxes, and the Convergence of P2P and Grid Computing. In Proceedings of the 2nd IPTPS, 2003.
[6] I. Foster, C. Kesselman, J. M. Nick, and S. Tuecke. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. Presented to the OGSI WG, Global Grid Forum, June 22, 2002. Available at http://www.globus.org/alliance/publications/papers.php
[7] F. Hartman and H. Reynolds. Was the Universal Service Registry a Dream? Web Services Journal, Dec 2, 2004.
[8] A. Rowstron and P. Druschel. Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems. In Proceedings of IFIP/ACM Middleware, Nov. 2001.
[9] PlanetLab: http://www.planet-lab.org
[10] K. Czajkowski, S. Fitzgerald, I. Foster, and C. Kesselman. Grid information services for distributed resource sharing. In Proceedings of IEEE HPDC-10, 2001.
[11] S. Rhea, D. Geels, T. Roscoe, and J. Kubiatowicz. Handling churn in a DHT. In Proceedings of the USENIX Annual Technical Conference, June 2004.
[12] H. Tangmunarunkit, S. Decker, and C. Kesselman. Ontology-based Resource Matching in the Grid - The Grid Meets the Semantic Web. In Proceedings of the 2nd ISWC, Florida, 2003.
[13] UDDI4j Java Class Library: http://www124.ibm.com/developerworks/oss/uddi4j/
[14] UDDI V2 specification. Available at http://uddi.org/
[15] UDDI V3.0.2 specification. Available at http://uddi.org/
[16] Web Services Dynamic Discovery (WS-Discovery) Specification, February 2004. http://msdn.microsoft.com/ws/2004/02/discovery
[17] Information Services (MDS): Key Concepts. http://www.globus.org/toolkit/docs/4.0/info/key/
[18] R. J. Al-Ali, O. F. Rana, D. W. Walker, S. Jha, and S. Sohail. G-QoSM: Grid Service Discovery using QoS Properties. Journal of Computing and Informatics (Special Issue on Grid Computing), Vol. 21, No. 4, pp. 363-382, 2002.
[19] A. ShaikhAli, O. F. Rana, R. Al-Ali, and D. W. Walker. UDDIe: An Extended Registry for Web Services. In Workshop on Service Oriented Computing: Models, Architectures and Applications at the SAINT Conference, Florida, US, January 2003. IEEE Computer Society Press.
[20] J. Yu, S. Venugopal, and R. Buyya. A Market-Oriented Grid Directory Service for Publication and Discovery of Grid Service Providers and their Services. Journal of Supercomputing, Kluwer Academic Publishers, USA, 2005.
[21] C. Zhou, L.-T. Chia, B. Silverajan, and B.-S. Lee. UX - An Architecture Providing QoS-Aware and Federated Support for UDDI. In ICWS 2003, pp. 171-176.
[22] K.-H. Choi, H.-J. Shin, and D.-R. Shin. Service Discovery Supporting Open Scalability Using FIPA-Compliant Agent Platform for Ubiquitous Networks. Lecture Notes in Computer Science, Vol. 3482, Jan 2005.

Scalable Grid Service Discovery Based on UDDI
ABSTRACT
Efficient discovery of grid services is essential for the success of grid computing. The standardization of grids based on web services has resulted in the need for scalable web service discovery mechanisms to be deployed in grids. Even though UDDI has been the de facto industry standard for web-services discovery, its imposed requirements of tight replication among registries and lack of autonomous control have severely hindered its widespread deployment and usage. With the advent of grid computing, the scalability issue of UDDI will become a roadblock that will prevent its deployment in grids. In this paper we present our distributed web-service discovery architecture, called DUDE (Distributed UDDI Deployment Engine). DUDE leverages DHTs (Distributed Hash Tables) as a rendezvous mechanism between multiple UDDI registries. DUDE enables consumers to query multiple registries, while still allowing organizations to have autonomous control over their registries. Based on a preliminary prototype on PlanetLab, we believe that the DUDE architecture can support effective distribution of UDDI registries, thereby making UDDI more robust and also addressing its scaling issues. Furthermore, the DUDE architecture for scalable distribution can be applied beyond UDDI to any grid service discovery mechanism.
1. INTRODUCTION
Efficient discovery of grid services is essential for the success of grid computing. The standardization of grids based on web
services has resulted in the need for scalable web service discovery mechanisms to be deployed in grids. Grid discovery services provide the ability to monitor and discover resources and services on grids. They provide the ability to query and subscribe to resource/service information. In addition, threshold traps might be required to indicate a specific change in existing conditions. The state of the data needs to be maintained as soft state so that the most recent information is always available. The information gathered needs to be provided to a variety of systems for the purpose of either utilizing the grid or providing summary information. However, the fundamental problem is the need to be scalable in order to handle huge amounts of data from multiple sources.
The web services community addressed the need for service discovery, before grids were anticipated, via an industry standard called UDDI. However, even though UDDI has been the de facto industry standard for web-services discovery, its imposed requirements of tight replication among registries and lack of autonomous control, among other things, have severely hindered its widespread deployment and usage [7]. With the advent of grid computing, the scalability issue with UDDI will become a roadblock that will prevent its deployment in grids. This paper tackles the scalability issue and a way to find services across multiple registries in UDDI by developing a distributed web services discovery architecture. Distributing UDDI functionality can be achieved in multiple ways and perhaps using different
distributed computing infrastructures/platforms (e.g., CORBA, DCE, etc.). In this paper we explore how Distributed Hash Table (DHT) technology can be leveraged to develop a scalable distributed web services discovery architecture. A DHT is a peer-to-peer (P2P) distributed system that forms a structured overlay allowing more efficient routing than the underlying network. This crucial design choice is motivated by two factors. The first motivating factor is the inherent simplicity of the put/get abstraction that DHTs provide, which makes it easy to rapidly build applications on top of DHTs. We recognize that this abstraction alone may not suffice for all distributed applications, but for the objective at hand it works very well, as will become clear later. Other distributed computing platforms/middleware, while providing more functionality, have much higher overhead and complexity. The second motivating factor stems from the fact that DHTs are a relatively new tool for building distributed applications, and we would like to test their potential by applying them to the problem of distributing UDDI.
In the next section, we provide a brief overview of grid information services, UDDI and its limitations, which is followed by an overview of DHTs in Section 3. Section 4 describes our proposed architecture with details on use cases. In Section 5, we describe our current implementation, followed by our findings in Section 6. Section 7 discusses the related work in this area, and Section 8 contains our concluding remarks.
2. BACKGROUND
2.1 Grid Service Discovery
Grid computing is based on standards which use web services technology. In the architecture presented in [6], the service discovery function is assigned to a specialized Grid service called a Registry. The implementation of the web service version of the Monitoring and Discovery Service (WS MDS), also known as the MDS4 component of the Globus Toolkit version 4 (GT4), includes such a registry in the
form of the Index service. Resource and service properties are collected and indexed by this service; its basic function makes it similar to a UDDI registry. To attain scalability, Index services from different Globus containers can register with each other in a hierarchical fashion to aggregate data. This approach to attaining scalability works best in hierarchical Virtual Organizations (VOs), and expanding a search to find a sufficient number of matches involves traversing the hierarchy. Specifically, this approach is not a good match for systems that try to exploit the convergence of grid and peer-to-peer computing [5].
2.2 UDDI
Beyond grid computing, the problem of service discovery needs to be addressed more generally in the web services community. Again, scalability is a major concern, since millions of buyers looking for specific services need to find all the potential sellers of a service who can meet their needs. Although there are different ways of doing this, the web services standards committees address this requirement through a specification called UDDI (Universal Description, Discovery, and Integration). A UDDI registry enables a business to enter three types of information: white pages, yellow pages and green pages. UDDI's intent is to function as a registry for services just as the yellow pages is a registry for businesses. Just like in the Yellow Pages, companies register themselves and their services under different categories. In UDDI, White Pages are a listing of the business entities, while Green Pages represent the technical information that is necessary to invoke a given service. Thus, by browsing a UDDI registry, a developer should be able to locate a service and a company and find out how to invoke the service. When UDDI was initially offered, it showed a lot of potential. However, today we find that UDDI has not been widely deployed on the Internet. In fact, the only known uses of UDDI are what are known as private
UDDI registries within an enterprise's boundaries.\nThe readers can refer to [7] for a recent article that discusses the shortcomings of UDDI and the properties of an ideal service registry.\nImprovement of the UDDI standard is continuing in full force and UDDI version 3 (V3) was recently approved as an OASIS Standard.\nHowever, UDDI today has issues that have not been addressed, such as scalability and autonomy of individual registries.\nUDDI V3 provides larger support for multi-registry environments based on portability of keys By allowing keys to be re-registered in multiple registries, the ability to link registries in various topologies is effectively enabled.\nHowever, no normative description of these topologies is provided in the UDDI specification at this point.\nThe improvements within UDDI V3 that allow support for multi-registry environments are significant and open the possibility for additional research around how multiregistry environments may be deployed.\nA recommended deployment scenario proposed by the UDDI V3 .0.2 Specification is to use the UDDI Business Registries as root registries, and it is possible to enable this using our solution.\n2.3 Distributed Hash Tables\nA Distributed Hash Table (DHT) is a peer-to-peer (P2P) distributed system that forms a structured overlay allowing more efficient routing than the underlying network.\nIt maintains a collection of key-value pairs on the nodes participating in this graph structure.\nFor our deployment, a key is the hash of a keyword from a service name or description.\nThere will be multiple values for this key, one for each service containing the keyword.\nJust like any other hash table data structure, it provides a simple interface consisting of put () and get () operations.\nThis has to be done with robustness because of the transient nature of nodes in P2P systems.\nThe value stored in the DHT can be any object or a copy or reference to it.\nThe DHT keys are obtained from a large identifier 
space.\nA hash function, such as MD5 or SHA-1, is applied to an object name to obtain its DHT key.\nNodes in a DHT are also mapped into the same identifier space by applying the hash function to their identifier, such as IP address and port number, or public key.\nThe identifier space is assigned to the nodes in a distributed and deterministic fashion, so that routing and lookup can be performed efficiently.\nThe nodes of a DHT maintain links to some of the other nodes in the DHT.\nThe pattern of these links is known as the DHT's geometry.\nFor example, in the Bamboo DHT [11], and in the Pastry DHT [8] on which Bamboo is based, nodes maintain links to neighboring nodes and to other distant nodes found in a routing table.\nThe routing table entry at row i and column j, denoted Ri [j], is another node whose identifier matches its own in first i digits, and whose (i + 1) st digit is j.\nThe routing table allows efficient overlay routing.\nBamboo, like all DHTs, specifies algorithms to be followed when a node joins the overlay network, or when a node fails or leaves the network The geometry must be maintained even when this rate is high.\nTo attain consistent routing or lookup, a DHT key must be routed to the node with the numerically closest identifier.\nFor details of how the routing tables are constructed and maintained, the reader is referred to [8, 11].\n3.\nPROPOSED ARCHITECTURE OF DHT BASED UDDI REGISTRY HIERARCHIES\nUDDI Local Registry\n3.1 Sequence of Operations\nAdd a New Service\nQuery a Service\n4.\nIMPLEMENTATION\n5.\nRELATED WORK\nA framework for QoS-based service discovery in grids has been proposed in [18].\nUDDIe, an extended UDDI registry for publishing and discovering services based on QoS parameters, is proposed in [19].\nOur work is complementary since we focus on how to federate the UDDI registries and address the scalability issue with UDDI.\nThe DUDE proxy can publish the service properties supported by UDDIe in the DHT and support range queries 
using techniques proposed for such queries on DHTs.\nThen we can deliver the scalability benefits of our current solution to both UDDI and UDDIe registries.\nDiscovering services meeting QoS and price requirements has been studied in the context of a grid economy, so that grid schedulers can use various market models such as commodity markets and auctions.\nThe Grid Market Directory [20] was proposed for this purpose.\nIn [12], the authors present an ontology-based matchmaker.\nResource and request descriptions are expressed in RDF Schema, a semantic markup language.\nMatchmaking rules are expressed in TRIPLE, a language based on Horn Logic.\nAlthough our current implementation focuses on UDDI version 2, in the future we will consider semantic extensions to UDDI, WS-Discovery [16] and other Grid computing standards such as Monitoring and Discovery Service (MDS) [10].\nSo the simplest extension of our work could involve using the DHT to do an initial syntax-based search to identify the local registries that need to be contacted.\nThen the Proxy Registry can contact these registries, which do semantic matchmaking to identify their matches, which are then merged at the Proxy Registry and returned to the client.\nThe convergence of grid and P2P computing has been explored in [5].\nGridVine [2] builds a logical semantic overlay on top of a physical layer consisting of P-Grid [1], a structured overlay based on a distributed search tree that uses prefix-based routing and changes the overlay paths as part of the network maintenance protocol to adapt to load in different parts of the keyspace.\nA federated UDDI service [4] has been built on top of the PlanetP [3] publish-subscribe system for unstructured P2P communities.\nThe focus of this work has been on the manageability of the federated service.\nThe UDDI service is treated as an application service to be managed in their framework.\nSo they do not address the issue of scalability in UDDI, and instead use simple 
replication.\nIn [21], the authors describe a UDDI extension (UX) system that launches a federated query only if locally found results are not adequate.\nWhile the UX Server is positioned as an intermediary similarly to the UDDI Proxy described in our DUDE framework, it focuses more on the QoS framework and does not attempt to implement a seamless federation mechanism such as our DHT based approach.\nIn [22], the authors describe D2HT, a discovery framework built on top of a DHT that uses an Agent Management System (AMS) and a Directory Facilitator (DF); in contrast, we have chosen to use UDDI on top of the DHT.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have described a distributed architecture to support large scale discovery of web-services.\nOur architecture will enable organizations to maintain autonomous control over their UDDI registries while at the same time allowing clients to query multiple registries simultaneously.\nThe clients are oblivious to the transparent proxy approach we have adopted and get richer and more complete responses to their queries.\nBased on initial prototype testing, we believe that the DUDE architecture can support effective distribution of UDDI registries, thereby making UDDI more robust and also addressing its scaling issues.\nThis paper has addressed the scalability issues of UDDI, but the approach is not limited to UDDI and can be applied to other service discovery mechanisms.\nAn example of another service discovery mechanism that could benefit from such an approach is Globus Toolkit's MDS.\nFurthermore, we plan to investigate other aspects of grid service discovery that extend this work.\nSome of these aspects include the ability to subscribe to resource\/service information, the ability to maintain soft states and the ability to provide a variety of views for various purposes.\nIn addition, we plan to revisit the service APIs for a Grid Service Discovery solution leveraging the available solutions and specifications as well as 
the work presented in this paper.","lvl-4":"Scalable Grid Service Discovery Based on UDDI *\nABSTRACT\nEfficient discovery of grid services is essential for the success of grid computing.\nThe standardization of grids based on web services has resulted in the need for scalable web service discovery mechanisms to be deployed in grids.\nEven though UDDI has been the de facto industry standard for web-services discovery, imposed requirements of tight replication among registries and lack of autonomous control have severely hindered its widespread deployment and usage.\nWith the advent of grid computing, the scalability issue of UDDI will become a roadblock that will prevent its deployment in grids.\nIn this paper we present our distributed web-service discovery architecture, called DUDE (Distributed UDDI Deployment Engine).\nDUDE leverages DHT (Distributed Hash Tables) as a rendezvous mechanism between multiple UDDI registries.\nDUDE enables consumers to query multiple registries, while at the same time allowing organizations to have autonomous control over their registries.\nBased on a preliminary prototype on PlanetLab, we believe that the DUDE architecture can support effective distribution of UDDI registries, thereby making UDDI more robust and also addressing its scaling issues.\nFurthermore, the DUDE architecture for scalable distribution can be applied beyond UDDI to any Grid Service Discovery mechanism.\n1.\nINTRODUCTION\nEfficient discovery of grid services is essential for the success of grid computing.\nThe standardization of grids based on web services has resulted in the need for scalable web service discovery mechanisms to be deployed in grids.\nGrid discovery services provide the ability to monitor and discover resources and services on grids.\nThey provide the ability to query and subscribe to resource\/service information.\nThe state of the data needs to be maintained in a soft state so that the most recent information is always available.\nThe information gathered needs to be provided to a variety of systems for the purpose of either utilizing the grid or 
providing summary information.\nHowever, the fundamental problem is the need to be scalable to handle huge amounts of data from multiple sources.\nThe web services community has addressed the need for service discovery, before grids were anticipated, via an industry standard called UDDI.\nHowever, even though UDDI has been the de facto industry standard for web-services discovery, imposed requirements of tight replication among registries and lack of autonomous control, among other things, have severely hindered its widespread deployment and usage [7].\nWith the advent of grid computing, the scalability issue with UDDI will become a roadblock that will prevent its deployment in grids.\nThis paper tackles the scalability issue and provides a way to find services across multiple registries in UDDI by developing a distributed web services discovery architecture.\nDistributing UDDI functionality can be achieved in multiple ways and perhaps using different distributed computing infrastructure\/platforms (e.g., CORBA, DCE, etc.).\nIn this paper we explore how Distributed Hash Table (DHT) technology can be leveraged to develop a scalable distributed web services discovery architecture.\nA DHT is a peer-to-peer (P2P) distributed system that forms a structured overlay allowing more efficient routing than the underlying network.\nThe first motivating factor is the inherent simplicity of the put\/get abstraction that DHTs provide, which makes it easy to rapidly build applications on top of DHTs.\nOther distributed computing platforms\/middleware, while providing more functionality, have much higher overhead and complexity.\nThe second motivating factor stems from the fact that DHTs are a relatively new tool for building distributed applications and we would like to test their potential by applying them to the problem of distributing UDDI.\nIn the next section, we provide a brief overview of grid information services, UDDI and its limitations, which is followed by an overview of DHTs in Section 
3.\nSection 4 describes our proposed architecture with details on use cases.\nIn Section 5, we describe our current implementation, followed by our findings in Section 6.\nSection 7 discusses the related work in this area and Section 8 contains our concluding remarks.\n2.\nBACKGROUND\n2.1 Grid Service Discovery\nGrid computing is based on standards which use web services technology.\nIn the architecture presented in [6], the service discovery function is assigned to a specialized Grid service called Registry.\nIts basic function makes it similar to a UDDI registry.\nTo attain scalability, Index services from different Globus containers can register with each other in a hierarchical fashion to aggregate data.\nSpecifically, this approach is not a good match for systems that try to exploit the convergence of grid and peer-to-peer computing [5].\n2.2 UDDI\nBeyond grid computing, the problem of service discovery needs to be addressed more generally in the web services community.\nAgain, scalability is a major concern since millions of buyers looking for specific services need to find all the potential sellers of the service who can meet their needs.\nAlthough there are different ways of doing this, the web services standards committees address this requirement through a specification called UDDI (Universal Description, Discovery, and Integration).\nA UDDI registry enables a business to enter three types of information--white pages, yellow pages and green pages.\nUDDI's intent is to function as a registry for services just as the Yellow Pages is a registry for businesses.\nJust like in the Yellow Pages, companies register themselves and their services under different categories.\nIn UDDI, White Pages are a listing of the business entities.\nGreen Pages represent the technical information that is necessary to invoke a given service.\nThus, by browsing a UDDI registry, a developer should be able to locate a service and a company and find out how to 
invoke the service.\nWhen UDDI was initially offered, it showed a lot of potential.\nHowever, today we find that UDDI has not been widely deployed on the Internet.\nIn fact, the only known uses of UDDI are what are known as private UDDI registries within an enterprise's boundaries.\nThe readers can refer to [7] for a recent article that discusses the shortcomings of UDDI and the properties of an ideal service registry.\nImprovement of the UDDI standard is continuing in full force and UDDI version 3 (V3) was recently approved as an OASIS Standard.\nHowever, UDDI today has issues that have not been addressed, such as scalability and autonomy of individual registries.\nUDDI V3 provides larger support for multi-registry environments based on portability of keys.\nBy allowing keys to be re-registered in multiple registries, the ability to link registries in various topologies is effectively enabled.\nHowever, no normative description of these topologies is provided in the UDDI specification at this point.\nThe improvements within UDDI V3 that allow support for multi-registry environments are significant and open the possibility for additional research around how multi-registry environments may be deployed.\nA recommended deployment scenario proposed by the UDDI V3.0.2 Specification is to use the UDDI Business Registries as root registries, and it is possible to enable this using our solution.\n2.3 Distributed Hash Tables\nA Distributed Hash Table (DHT) is a peer-to-peer (P2P) distributed system that forms a structured overlay allowing more efficient routing than the underlying network.\nIt maintains a collection of key-value pairs on the nodes participating in this graph structure.\nFor our deployment, a key is the hash of a keyword from a service name or description.\nThere will be multiple values for this key, one for each service containing the keyword.\nJust like any other hash table data structure, it provides a simple interface consisting of put() and get() 
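The put/get model just described can be illustrated with a toy sketch. Everything below is our own illustration, not code from the paper: SHA-1 supplies the identifier space, and a naive "numerically closest node" rule (ignoring ring wraparound) stands in for real DHT routing.

```python
import hashlib

def dht_key(name: str) -> int:
    # Hash an object name (e.g., a keyword from a service name) into the identifier space.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class ToyDHT:
    """Single-process stand-in for a DHT: each key is owned by the node with the
    numerically closest identifier, and a key can hold several values."""

    def __init__(self, node_ids):
        self.nodes = {nid: {} for nid in node_ids}

    def _owner(self, key: int) -> int:
        # Real DHTs measure distance on a ring; absolute distance is enough for a sketch.
        return min(self.nodes, key=lambda nid: abs(nid - key))

    def put(self, name: str, value) -> None:
        k = dht_key(name)
        self.nodes[self._owner(k)].setdefault(k, []).append(value)

    def get(self, name: str):
        k = dht_key(name)
        return self.nodes[self._owner(k)].get(k, [])

# Node identifiers are themselves hashes of, e.g., an IP address and port.
dht = ToyDHT([dht_key("10.0.0.1:3630"), dht_key("10.0.0.2:3630")])
dht.put("weather", {"registry": "http://r1.example.org/uddi", "uddiKey": "uuid-1"})
dht.put("weather", {"registry": "http://r2.example.org/uddi", "uddiKey": "uuid-2"})
```

Two services containing the keyword "weather" yield two values under the same DHT key, matching the one-value-per-service behavior described above.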
operations.\nThis has to be done with robustness because of the transient nature of nodes in P2P systems.\nThe DHT keys are obtained from a large identifier space.\nA hash function, such as MD5 or SHA-1, is applied to an object name to obtain its DHT key.\nNodes in a DHT are also mapped into the same identifier space by applying the hash function to their identifier, such as IP address and port number, or public key.\nThe identifier space is assigned to the nodes in a distributed and deterministic fashion, so that routing and lookup can be performed efficiently.\nThe nodes of a DHT maintain links to some of the other nodes in the DHT.\nThe pattern of these links is known as the DHT's geometry.\nFor example, in the Bamboo DHT [11], and in the Pastry DHT [8] on which Bamboo is based, nodes maintain links to neighboring nodes and to other distant nodes found in a routing table.\nThe routing table allows efficient overlay routing.\nTo attain consistent routing or lookup, a DHT key must be routed to the node with the numerically closest identifier.\nFor details of how the routing tables are constructed and maintained, the reader is referred to [8, 11].\n5.\nRELATED WORK\nA framework for QoS-based service discovery in grids has been proposed in [18].\nUDDIe, an extended UDDI registry for publishing and discovering services based on QoS parameters, is proposed in [19].\nOur work is complementary since we focus on how to federate the UDDI registries and address the scalability issue with UDDI.\nThe DUDE proxy can publish the service properties supported by UDDIe in the DHT and support range queries using techniques proposed for such queries on DHTs.\nThen we can deliver the scalability benefits of our current solution to both UDDI and UDDIe registries.\nDiscovering services meeting QoS and price requirements has been studied in the context of a grid economy, so that grid schedulers can use various market models such as commodity markets and auctions.\nThe Grid Market 
Directory [20] was proposed for this purpose.\nIn [12], the authors present an ontology-based matchmaker.\nResource and request descriptions are expressed in RDF Schema, a semantic markup language.\nMatchmaking rules are expressed in TRIPLE, a language based on Horn Logic.\nAlthough our current implementation focuses on UDDI version 2, in the future we will consider semantic extensions to UDDI, WS-Discovery [16] and other Grid computing standards such as Monitoring and Discovery Service (MDS) [10].\nSo the simplest extension of our work could involve using the DHT to do an initial syntax-based search to identify the local registries that need to be contacted.\nThe convergence of grid and P2P computing has been explored in [5].\nA federated UDDI service [4] has been built on top of the PlanetP [3] publish-subscribe system for unstructured P2P communities.\nThe focus of this work has been on the manageability of the federated service.\nThe UDDI service is treated as an application service to be managed in their framework.\nSo they do not address the issue of scalability in UDDI, and instead use simple replication.\nIn [21], the authors describe a UDDI extension (UX) system that launches a federated query only if locally found results are not adequate.\nWhile the UX Server is positioned as an intermediary similarly to the UDDI Proxy described in our DUDE framework, it focuses more on the QoS framework and does not attempt to implement a seamless federation mechanism such as our DHT based approach.\nIn [22], the authors describe D2HT, a discovery framework built on top of a DHT; in contrast, we have chosen to use UDDI on top of the DHT.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have described a distributed architecture to support large scale discovery of web-services.\nOur architecture will enable organizations to maintain autonomous control over their UDDI registries while at the same time allowing clients to query multiple registries simultaneously.\nBased on initial prototype testing, we believe that the DUDE architecture can support 
effective distribution of UDDI registries, thereby making UDDI more robust and also addressing its scaling issues.\nThis paper has addressed the scalability issues of UDDI, but the approach is not limited to UDDI and can be applied to other service discovery mechanisms.\nAn example of another service discovery mechanism that could benefit from such an approach is Globus Toolkit's MDS.\nFurthermore, we plan to investigate other aspects of grid service discovery that extend this work.\nIn addition, we plan to revisit the service APIs for a Grid Service Discovery solution leveraging the available solutions and specifications as well as the work presented in this paper.","lvl-2":"Scalable Grid Service Discovery Based on UDDI *\nABSTRACT\nEfficient discovery of grid services is essential for the success of grid computing.\nThe standardization of grids based on web services has resulted in the need for scalable web service discovery mechanisms to be deployed in grids.\nEven though UDDI has been the de facto industry standard for web-services discovery, imposed requirements of tight replication among registries and lack of autonomous control have severely hindered its widespread deployment and usage.\nWith the advent of grid computing, the scalability issue of UDDI will become a roadblock that will prevent its deployment in grids.\nIn this paper we present our distributed web-service discovery architecture, called DUDE (Distributed UDDI Deployment Engine).\nDUDE leverages DHT (Distributed Hash Tables) as a rendezvous mechanism between multiple UDDI registries.\nDUDE enables consumers to query multiple registries, while at the same time allowing organizations to have autonomous control over their registries.\nBased on a preliminary prototype on PlanetLab, we believe that the DUDE architecture can support effective distribution of UDDI registries, thereby making UDDI more robust and also addressing its scaling issues.\nFurthermore, the DUDE architecture for scalable distribution can be 
applied beyond UDDI to any Grid Service Discovery mechanism.\n1.\nINTRODUCTION\nEfficient discovery of grid services is essential for the success of grid computing.\nThe standardization of grids based on web services has resulted in the need for scalable web service discovery mechanisms to be deployed in grids.\nGrid discovery services provide the ability to monitor and discover resources and services on grids.\nThey provide the ability to query and subscribe to resource\/service information.\nIn addition, threshold traps might be required to indicate specific changes in existing conditions.\nThe state of the data needs to be maintained in a soft state so that the most recent information is always available.\nThe information gathered needs to be provided to a variety of systems for the purpose of either utilizing the grid or providing summary information.\nHowever, the fundamental problem is the need to be scalable to handle huge amounts of data from multiple sources.\nThe web services community has addressed the need for service discovery, before grids were anticipated, via an industry standard called UDDI.\nHowever, even though UDDI has been the de facto industry standard for web-services discovery, imposed requirements of tight replication among registries and lack of autonomous control, among other things, have severely hindered its widespread deployment and usage [7].\nWith the advent of grid computing, the scalability issue with UDDI will become a roadblock that will prevent its deployment in grids.\nThis paper tackles the scalability issue and provides a way to find 
services across multiple registries in UDDI by developing a distributed web services discovery architecture.\nDistributing UDDI functionality can be achieved in multiple ways and perhaps using different distributed computing infrastructure\/platforms (e.g., CORBA, DCE, etc.).\nIn this paper we explore how Distributed Hash Table (DHT) technology can be leveraged to develop a scalable distributed web services discovery architecture.\nA DHT is a peer-to-peer (P2P) distributed system that forms a structured overlay allowing more efficient routing than the underlying network.\nThis crucial design choice is motivated by two factors.\nThe first motivating factor is the inherent simplicity of the put\/get abstraction that DHTs provide, which makes it easy to rapidly build applications on top of DHTs.\nWe recognize that having just this abstraction may not suffice for all distributed applications, but for the objective at hand, it works very well, as will become clear later.\nOther distributed computing platforms\/middleware, while providing more functionality, have much higher overhead and complexity.\nThe second motivating factor stems from the fact that DHTs are a relatively new tool for building distributed applications and we would like to test their potential by applying them to the problem of distributing UDDI.\nIn the next section, we provide a brief overview of grid information services, UDDI and its limitations, which is followed by an overview of DHTs in Section 3.\nSection 4 describes our proposed architecture with details on use cases.\nIn Section 5, we describe our current implementation, followed by our findings in Section 6.\nSection 7 discusses the related work in this area and Section 8 contains our concluding remarks.\n2.\nBACKGROUND\n2.1 Grid Service Discovery\nGrid computing is based on standards which use web services technology.\nIn the architecture presented in [6], the service discovery function is assigned to a specialized Grid service called 
Registry.\nThe implementation of the web service version of the Monitoring and Discovery Service (WS MDS), also known as the MDS4 component of the Globus Toolkit version 4 (GT4), includes such a registry in the form of the Index service.\nResource and service properties are collected and indexed by this service.\nIts basic function makes it similar to a UDDI registry.\nTo attain scalability, Index services from different Globus containers can register with each other in a hierarchical fashion to aggregate data.\nThis approach for attaining scalability works best in hierarchical Virtual Organizations (VO), and expanding a search to find a sufficient number of matches involves traversing the hierarchy.\nSpecifically, this approach is not a good match for systems that try to exploit the convergence of grid and peer-to-peer computing [5].\n2.2 UDDI\nBeyond grid computing, the problem of service discovery needs to be addressed more generally in the web services community.\nAgain, scalability is a major concern since millions of buyers looking for specific services need to find all the potential sellers of the service who can meet their needs.\nAlthough there are different ways of doing this, the web services standards committees address this requirement through a specification called UDDI (Universal Description, Discovery, and Integration).\nA UDDI registry enables a business to enter three types of information--white pages, yellow pages and green pages.\nUDDI's intent is to function as a registry for services just as the Yellow Pages is a registry for businesses.\nJust like in the Yellow Pages, companies register themselves and their services under different categories.\nIn UDDI, White Pages are a listing of the business entities.\nGreen Pages represent the technical information that is necessary to invoke a given service.\nThus, by browsing a UDDI registry, a developer should be able to locate a service and a company and find out how to invoke the 
service.\nWhen UDDI was initially offered, it showed a lot of potential.\nHowever, today we find that UDDI has not been widely deployed on the Internet.\nIn fact, the only known uses of UDDI are what are known as private UDDI registries within an enterprise's boundaries.\nThe readers can refer to [7] for a recent article that discusses the shortcomings of UDDI and the properties of an ideal service registry.\nImprovement of the UDDI standard is continuing in full force and UDDI version 3 (V3) was recently approved as an OASIS Standard.\nHowever, UDDI today has issues that have not been addressed, such as scalability and autonomy of individual registries.\nUDDI V3 provides larger support for multi-registry environments based on portability of keys.\nBy allowing keys to be re-registered in multiple registries, the ability to link registries in various topologies is effectively enabled.\nHowever, no normative description of these topologies is provided in the UDDI specification at this point.\nThe improvements within UDDI V3 that allow support for multi-registry environments are significant and open the possibility for additional research around how multi-registry environments may be deployed.\nA recommended deployment scenario proposed by the UDDI V3.0.2 Specification is to use the UDDI Business Registries as root registries, and it is possible to enable this using our solution.\n2.3 Distributed Hash Tables\nA Distributed Hash Table (DHT) is a peer-to-peer (P2P) distributed system that forms a structured overlay allowing more efficient routing than the underlying network.\nIt maintains a collection of key-value pairs on the nodes participating in this graph structure.\nFor our deployment, a key is the hash of a keyword from a service name or description.\nThere will be multiple values for this key, one for each service containing the keyword.\nJust like any other hash table data structure, it provides a simple interface consisting of put() and get() 
operations.\nThis has to be done with robustness because of the transient nature of nodes in P2P systems.\nThe value stored in the DHT can be any object or a copy or reference to it.\nThe DHT keys are obtained from a large identifier space.\nA hash function, such as MD5 or SHA-1, is applied to an object name to obtain its DHT key.\nNodes in a DHT are also mapped into the same identifier space by applying the hash function to their identifier, such as IP address and port number, or public key.\nThe identifier space is assigned to the nodes in a distributed and deterministic fashion, so that routing and lookup can be performed efficiently.\nThe nodes of a DHT maintain links to some of the other nodes in the DHT.\nThe pattern of these links is known as the DHT's geometry.\nFor example, in the Bamboo DHT [11], and in the Pastry DHT [8] on which Bamboo is based, nodes maintain links to neighboring nodes and to other distant nodes found in a routing table.\nThe routing table entry at row i and column j, denoted Ri[j], is another node whose identifier matches its own in the first i digits, and whose (i + 1)-st digit is j.\nThe routing table allows efficient overlay routing.\nBamboo, like all DHTs, specifies algorithms to be followed when a node joins the overlay network, or when a node fails or leaves the network.\nThe geometry must be maintained even when the rate of such membership changes is high.\nTo attain consistent routing or lookup, a DHT key must be routed to the node with the numerically closest identifier.\nFor details of how the routing tables are constructed and maintained, the reader is referred to [8, 11].\n3.\nPROPOSED ARCHITECTURE OF DHT BASED UDDI REGISTRY HIERARCHIES\nAs mentioned earlier, we propose to build a distributed UDDI system on top of a DHT infrastructure.\nThis choice is primarily motivated by the simplicity of the put\/get abstraction that DHTs provide, which is powerful enough for the task at hand, especially since we plan to validate our approach with an implementation 
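The prefix rule behind the routing table entry Ri[j] can be made concrete with a small sketch. This is our illustration, not the paper's code; it assumes base-16 digits over SHA-1 identifiers:

```python
import hashlib

BASE = 16    # Pastry-style digit base (hex digits of the identifier)
DIGITS = 40  # SHA-1 identifiers have 40 hex digits

def node_id(name: str) -> str:
    # Map a node's identifier string (e.g., "ip:port") into the identifier space.
    return hashlib.sha1(name.encode()).hexdigest()

def shared_prefix_len(a: str, b: str) -> int:
    # Number of leading digits on which two identifiers agree.
    n = 0
    while n < DIGITS and a[n] == b[n]:
        n += 1
    return n

def routing_slot(own: str, dest: str):
    """(row, column) of the routing-table entry Ri[j] used to route toward dest:
    i is the shared prefix length, j is the (i + 1)-st digit of dest."""
    i = shared_prefix_len(own, dest)
    if i == DIGITS:
        return None  # dest is this node's own identifier
    return i, int(dest[i], BASE)

own = "a3f0" + "0" * 36
dest = "a37b" + "0" * 36
# own and dest share the prefix "a3", and dest's next digit is 7, so a
# message for dest is forwarded to the node stored at row 2, column 7.
```

Each hop lengthens the matched prefix by at least one digit, which is what makes overlay routing take a logarithmic number of hops.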
running on PlanetLab [9].\nA secondary motivation is to understand deployment issues with DHT based systems.\nSeveral applications have been built as overlays using DHTs, such as distributed file storage, databases, publish-subscribe systems and content distribution networks.\nIn our case, we are building a DHT based overlay network of UDDI registries, where the DHT acts as a rendezvous network that connects multiple registries.\nIn the grid computing scenario, an overlay network of multiple UDDI registries seems to be an interesting alternative to the public UDDI registries currently maintained by Microsoft, IBM, SAP and NTT.\nIn addition, our aim is to not change any of the UDDI interfaces for clients as well as publishers.\nFigure 1 highlights the proposed architecture for the DHT based UDDI Registry framework.\nUDDI nodes are replicated in a UDDI registry as per the current UDDI standard.\nHowever, each local registry has a local proxy registry that mediates between the local UDDI registry and the DHT Service.\nThe DHT service is the glue that connects the Proxy Registries together and facilitates searching across registries.\nFigure 1: DUDE Architecture\nService information can be dispersed to several UDDI registries to promote scalability.\nThe proxy registry publishes, performs queries and deletes information from the dispersed UDDI registries.\nHowever, the scope of the queries is limited to relevant registries.\nThe DHT provides information about the relevant registries.\nThe core idea in the architecture is to populate DHT nodes with the necessary information from the proxies, which enables easy and ubiquitous searching when queries are made.\nWhen a new service is added to a registry, all potential search terms are hashed by the proxy and used as DHT keys to publish the service in the DHT.\nThe value stored for this service uniquely identifies the service, and includes the URL of a registry and the unique UDDI key of the 
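The publish and query paths being described can be sketched as follows. This is purely our illustration: a dict stands in for the DHT service, and the tokenizer, function names, and URLs are made up rather than taken from the DUDE implementation.

```python
import hashlib
import re

dht = {}  # stand-in for the DHT service: term hash -> list of service records

def dht_put(key: str, value: dict) -> None:
    dht.setdefault(key, []).append(value)

def dht_get(key: str) -> list:
    return dht.get(key, [])

def term_hash(term: str) -> str:
    return hashlib.sha1(term.lower().encode()).hexdigest()

def publish(registry_url: str, uddi_key: str, name: str, description: str) -> None:
    # Every potential search term from the service name and description
    # becomes a DHT key pointing back at (registry URL, unique UDDI key).
    for term in set(re.findall(r"\w+", f"{name} {description}")):
        dht_put(term_hash(term), {"registry": registry_url, "uddiKey": uddi_key})

def query(search_terms) -> list:
    # Resolve each term through the DHT; the proxy would then perform a direct
    # lookup by unique key in each matching registry instead of a full search.
    matches = []
    for term in search_terms:
        matches.extend(dht_get(term_hash(term)))
    return matches

publish("http://r1.example.org/uddi", "uuid-42", "WeatherService", "hourly weather forecasts")
```

The returned (registry URL, UDDI key) pairs are what turn a multi-registry search into cheap per-registry key lookups.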
service in that registry.\nSimilarly when queries arrive, they are parsed and a set of search terms are identified.\nThese search terms are hashed and the values stored with those hash values are retrieved from the DHT.\nNote that a proxy does not need to know all DHT nodes; it needs to know just one DHT node (this is done as part of the bootstrapping process) and as described in Section 2.3, this DHT node can route the query as necessary to the other nodes on the DHT overlay.\nWe describe three usage scenarios later that deal with adding a new local registry, inserting a new service, and querying for a service.\nFurthermore, the DHT optimizes the UDDI query mechanism.\nThis process becomes a lookup using a UDDI unique key rather than a query using a set of search parameters.\nThis key and the URL of the registry are obtained by searching initially in the DHT.\nThe DHT query can return multiple values for matching services, and in each of the matching registries, the proxy performs lookup operations.\nThe service name is used as a hash for inserting the service information.\nThe service information contains the query URL and unique UDDI key for the registry containing the service.\nThere could be multiple registries associated with a given service.\nThe service information conforms to the following schema.\n 0.2 indicates bandwidth problems and processors with speed < 0.05 do not contribute to the computation.\nAdditionally, when one of the clusters has an exceptionally high inter-cluster overhead (larger than 0.25), we conclude that the bandwidth on the link between this cluster and the Internet backbone is insufficient for the application.\nIn that case, we simply remove the whole cluster instead of computing node badness and removing the worst nodes.\nAfter deciding which nodes are removed, the coordinator sends a message to these nodes and the nodes leave the computation.\nFigure 1 shows a schematic view of the adaptation strategy.\nDashed lines indicate a part 
that is not supported yet, as will be explained below.\nThis simple adaptation strategy allows us to improve application performance in several situations typical for the Grid: \u2022 If an application is started on fewer processors than its degree of parallelism allows, it will automatically expand to more processors (as soon as there are extra resources available).\nConversely, if an application is started on more processors than it can efficiently use, a part of the processors will be released.\n\u2022 If an application is running on an appropriate set of resources but after a while some of the resources (processors and\/or network links) become overloaded and slow down the computation, the overloaded resources will be removed.\nAfter removing the overloaded resources, the weighted average efficiency will increase to above the Emax threshold and the adaptation coordinator will try to add new resources.\nTherefore, the application will be migrated from overloaded resources.\n\u2022 If some of the original resources chosen by the user are inappropriate for the application, for example the bandwidth to one of the clusters is too small, the inappropriate resources will be removed.\nIf necessary, the adaptation component will try to add other resources.\n\u2022 If a substantial part of the processors crashes during the computation, the adaptation component will try to add new resources to replace the crashed processors.\nFigure 2.\nThe runtimes of the Barnes-Hut application, scenarios 0-5: without monitoring and adaptation (runtime 1), with monitoring and adaptation (runtime 2), and with monitoring but no adaptation (runtime 3).\nFigure 1.\nAdaptation strategy.\n\u2022 If the application degree 
Further improvements are possible, but they require extra functionality from the grid scheduler and/or integration with monitoring services such as NWS (22). For example, adding nodes to a computation could be improved. Currently, we add whatever nodes the scheduler gives us. It would be more efficient to ask for the fastest processors among those available. This could be done, for example, by passing a benchmark to the grid scheduler, so that it can measure processor speeds in an application-specific way. Typically, it would be enough to measure the speed of one processor per site, since clusters and supercomputers are usually homogeneous. An alternative approach would be to rank the processors based on parameters such as clock speed and cache size. This approach is sometimes used for resource selection for sequential applications (14), but it is less accurate than using an application-specific benchmark.

Also, during application execution, we can learn some application requirements and pass them to the scheduler. One example is the minimal bandwidth required by the application. The lower bound on the minimal required bandwidth is tightened each time a cluster with high inter-cluster overhead is removed. The bandwidth between each pair of clusters is estimated during the computation by measuring data transfer times, and the bandwidth to the removed cluster is set as the new minimum. Alternatively, information from a grid monitoring system can be used. Such bounds can be passed to the scheduler to avoid adding inappropriate resources. This is especially important when migrating away from resources that cause performance problems: we have to be careful not to add the resources we have just removed. Currently we use blacklisting: we simply do not allow adding resources we removed before. This means, however, that we cannot use these resources even if the cause of the performance problem disappears, e.g., the bandwidth of a link might improve if the background traffic diminishes.
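The blacklist and the learned bandwidth bound described above amount to a small piece of bookkeeping. The sketch below shows one way to keep it; the class and method names and the exact form of the check are our own assumptions, not the actual Satin/Ibis interface.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative bookkeeping for the two mechanisms described in the text:
// a blacklist of previously removed clusters, and a learned lower bound on
// the bandwidth an added cluster must offer (all names are assumptions).
public class ResourceConstraints {
    private final Set<String> blacklist = new HashSet<>();
    private double minBandwidthKBs = 0.0;  // learned lower bound, in KB/s

    // Called when a badly connected cluster is removed: blacklist it and
    // tighten the bound to the bandwidth that proved insufficient.
    void clusterRemoved(String cluster, double measuredBandwidthKBs) {
        blacklist.add(cluster);
        minBandwidthKBs = Math.max(minBandwidthKBs, measuredBandwidthKBs);
    }

    // Consulted before adding a candidate cluster offered by the scheduler.
    boolean mayAdd(String cluster, double bandwidthKBs) {
        return !blacklist.contains(cluster) && bandwidthKBs > minBandwidthKBs;
    }

    public static void main(String[] args) {
        ResourceConstraints rc = new ResourceConstraints();
        rc.clusterRemoved("clusterC", 100.0);  // e.g. an overloaded 100 KB/s uplink
        System.out.println(rc.mayAdd("clusterC", 5000.0));  // false: blacklisted
        System.out.println(rc.mayAdd("clusterD", 80.0));    // false: below learned bound
        System.out.println(rc.mayAdd("clusterD", 5000.0));  // true
    }
}
```

Expiring blacklist entries after some time would be one way to address the limitation noted in the text that a removed resource can never be retried, even if its performance problem disappears.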
We are currently not able to perform opportunistic migration, that is, migrating to better resources when they are discovered. If an application runs with an efficiency between Emin and Emax, the adaptation component will not undertake any action, even if better resources become available. Enabling opportunistic migration again requires the ability to specify to the scheduler what better resources are (faster, or with a certain minimal bandwidth) and to receive notifications when such resources become available. Existing grid schedulers such as GRAM from the Globus Toolkit (11) do not support such functionality. The developers of the KOALA metascheduler (15) have recently started a project whose goal is to provide support for adaptive applications. We are currently discussing with them the possibility of providing the functionality we require, aiming to extend our adaptivity strategy to support opportunistic migration and to improve the initial resource selection.

4. Implementation

We incorporated our adaptation mechanism into Satin, a Java framework for creating grid-enabled divide-and-conquer applications. With Satin, the programmer annotates sequential code with divide-and-conquer primitives and compiles the annotated code with a special Satin compiler that generates the necessary communication and load-balancing code. Satin uses a very efficient, grid-aware load-balancing algorithm, Cluster-aware Random Work Stealing (CRS) (19), which hides wide-area latencies by overlapping local and remote stealing. Satin also provides transparent fault tolerance and malleability (23). With Satin, removing and adding processors from/to an ongoing computation incurs little overhead.

We instrumented the Satin runtime system to collect runtime statistics and send them to the adaptation coordinator. The coordinator
is implemented as a separate process. Both the coordinator and Satin are implemented entirely in Java on top of the Ibis communication library (21). The core of Ibis is also implemented in Java. The resulting system is therefore highly portable (thanks to Java's write once, run anywhere property), allowing the software to run unmodified on a heterogeneous grid.

Ibis also provides the Ibis Registry. The Registry provides, among other things, a membership service to the processors taking part in the computation. The adaptation coordinator uses the Registry to discover the application processes, and the application processes use this service to discover each other. The Registry also offers fault detection (in addition to the fault detection provided by the communication channels). Finally, the Registry provides the possibility to send signals to application processes. The coordinator uses this functionality to notify processors that they need to leave the computation. Currently the Registry is implemented as a centralized server.

For requesting new nodes, the Zorilla (9) system is used: a peer-to-peer supercomputing middleware which allows straightforward allocation of processors in multiple clusters and/or supercomputers. Zorilla provides locality-aware scheduling, which tries to allocate processors that are located close to each other in terms of communication latency. In the future, Zorilla will also support bandwidth-aware scheduling, which tries to maximize the total bandwidth in the system. Zorilla can easily be replaced with another grid scheduler. In the future, we are planning to integrate our adaptation component with GAT (3), which is becoming a standard in the grid community, and with KOALA (15), a scheduler that provides co-allocation on top of standard grid middleware such as the Globus Toolkit (11).

5. Performance evaluation

In this section, we evaluate our approach. We demonstrate the performance of our mechanism in a few scenarios. The
first scenario is an ideal situation: the application runs on a reasonable set of nodes (i.e., such that the efficiency is around 50%) and no problems such as overloaded networks, overloaded processors, or crashing processors occur. This scenario allows us to measure the overhead of the adaptation support. The remaining scenarios are typical for grid environments and demonstrate that, with our adaptation support, the application can avoid serious performance bottlenecks such as overloaded processors or network links.

For each scenario, we compare the performance of an application with adaptation support to a non-adaptive version. In the non-adaptive version, the coordinator does not collect statistics and no benchmarking (for measuring processor speeds) is performed. In the ideal scenario, we additionally measure the performance of an application with the collection of statistics and benchmarking turned on but without doing adaptation, that is, without allowing it to change the number of nodes. This allows us to measure the overhead of benchmarking and collecting statistics. In all experiments we used a monitoring period of 3 minutes for the adaptive versions of the applications.

[Figure 3. Barnes-Hut iteration durations with/without adaptation, too few processors (starting on 8, 16, and 24 nodes)]

[Figure 4. Barnes-Hut iteration durations with/without adaptation, overloaded CPUs]

All the experiments were carried out on the DAS-2 wide-area system (8), which consists of five clusters located at five Dutch universities. One of the clusters consists of 72 nodes, the others of 32 nodes. Each node
contains two 1 GHz Pentium processors. Within a cluster, the nodes are connected by Fast Ethernet. The clusters are connected by the Dutch university Internet backbone. In our experiments, we used the Barnes-Hut N-body simulation. Barnes-Hut simulates the evolution of a large set of bodies under the influence of (gravitational or electrostatic) forces. The evolution of N bodies is simulated in iterations of discrete time steps.

5.1 Scenario 0: adaptivity overhead

In this scenario, the application is started on 36 nodes, equally divided over 3 clusters (12 nodes in each cluster). On this number of nodes, the application runs with 50% efficiency, so we consider it a reasonable number of nodes. As mentioned above, in this scenario we measured three runtimes: the runtime of the application without adaptation support (runtime 1), the runtime with adaptation support (runtime 2), and the runtime with monitoring (i.e., the collection of statistics and benchmarking) turned on but without allowing it to change the number of nodes (runtime 3). These runtimes are shown in Figure 2, first group of bars.

The comparison between runtime 3 and runtime 1 shows the overhead of adaptation support. In this experiment it is around 15%. Almost all of the overhead comes from benchmarking; the benchmark is run 1-2 times per monitoring period. This overhead can be reduced by increasing the length of the monitoring period and decreasing the benchmarking frequency. The monitoring period we used (3 minutes) is relatively short, because the runtime of the application was also relatively short (30-60 minutes). Using longer-running applications would not have allowed us to finish the experiments in a reasonable time. However, real-world grid applications typically need hours, days or even weeks to complete. For such applications, a much longer monitoring period can be used and the adaptation overhead can be kept much lower. For example, with the Barnes-Hut application, if the monitoring period is extended to 10 minutes, the overhead drops to 6%.
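These overhead figures can be sanity-checked under a simple assumption of ours: the benchmarking cost paid per monitoring period is roughly constant, so the relative overhead scales inversely with the period length. The class and method names below are illustrative.

```java
// Back-of-the-envelope model of the benchmarking overhead reported above.
// Assumption (ours): each monitoring period pays a roughly fixed benchmarking
// cost, so the relative overhead is cost/period. The 15% overhead measured
// with a 3-minute period implies a per-period cost of about 27 seconds,
// which predicts about 4.5% for a 10-minute period, in line with the
// reported 6%.
public class OverheadEstimate {
    static double overheadFraction(double perPeriodCostSecs, double periodSecs) {
        return perPeriodCostSecs / periodSecs;
    }

    public static void main(String[] args) {
        double perPeriodCost = 0.15 * 180.0;  // ~27 s, implied by the 3-min measurement
        System.out.printf("3-min period:  %.1f%%%n", 100 * overheadFraction(perPeriodCost, 180.0));
        System.out.printf("10-min period: %.1f%%%n", 100 * overheadFraction(perPeriodCost, 600.0));
    }
}
```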
Note that combining benchmarking with monitoring the processor load (as described in Section 3.2) would reduce the benchmarking overhead to almost zero: as long as the processor load does not change, the benchmarks would only need to be run at the beginning of the computation.

5.2 Scenario 1: expanding to more nodes

In this scenario, the application is started on fewer nodes than it can efficiently use. This may happen because the user does not know the right number of nodes or because insufficient nodes were available at the moment the application was started. We tried 3 initial numbers of nodes: 8 (Scenario 1a), 16 (Scenario 1b) and 24 (Scenario 1c). The nodes were located in 1 or 2 clusters. In each of the three sub-scenarios, the application gradually expanded to 36-40 nodes located in 4 clusters. This allowed us to reduce the application runtimes by 50% (Scenario 1a), 35% (Scenario 1b) and 12% (Scenario 1c) with respect to the non-adaptive version. These runtimes are shown in Figure 2. Since Barnes-Hut is an iterative application, we also measured the time of each iteration, as shown in Figure 3. Adaptation reduces the iteration time by a factor of 3 (Scenario 1a), 1.7 (Scenario 1b) and 1.2 (Scenario 1c), which allows us to conclude that the gains in total runtime would be even bigger if the application were run for longer than 15 iterations.

5.3 Scenario 2: overloaded processors

In this scenario, we started the application on 36 nodes in 3 clusters. After 200 seconds, we introduced a heavy artificial load on the processors in one of the clusters. Such a situation may happen when an application with a higher priority is started on some of the resources. Figure 4 shows the iteration durations of both the adaptive and non-adaptive versions. After introducing the load, the iteration duration increased by a factor of 2 to 3, and the iteration times became highly variable. The adaptive version reacted by removing the overloaded nodes. After removing these nodes, the weighted average efficiency rose to around 35%, which triggered adding new nodes, and the application expanded back to 38 nodes. Thus, the overloaded nodes were replaced by better nodes, which brought the iteration duration back to its initial values. This reduced the total runtime by 14%. The runtimes are shown in Figure 2.

[Figure 5. Barnes-Hut iteration durations with/without adaptation, overloaded network link]

[Figure 6. Barnes-Hut iteration durations with/without adaptation, overloaded CPUs and an overloaded network link]

5.4 Scenario 3: overloaded network link

In this scenario, we ran the application on 36 nodes in 3 clusters. We simulated that the uplink to one of the clusters was overloaded, and the bandwidth on this uplink was reduced to approximately 100 KB/s. To simulate low bandwidth we used the traffic-shaping techniques described in (6). The iteration durations in this experiment are shown in Figure 5. The iteration durations of the non-adaptive version exhibit enormous variation: from 170 to 890 seconds. The adaptive version removed the badly connected cluster after the first monitoring period. As a result, the weighted average efficiency rose to around 35% and new nodes were gradually added until their number reached 38. This brought the iteration times down to around 100 seconds. The total runtime was reduced by 60% (Figure 2).

5.5 Scenario 4: overloaded processors and an overloaded network link

In this
scenario, we ran the application on 36 nodes in 3 clusters. Again, we simulated an overloaded uplink to one of the clusters. Additionally, we simulated processors with heterogeneous speeds by inserting a relatively light artificial load on the processors in one of the remaining clusters. The iteration durations are shown in Figure 6. Again, the non-adaptive version exhibits a great variation in iteration durations: from 200 to 1150 seconds. The adaptive version removes the badly connected cluster after the first monitoring period, which brings the iteration duration down to 210 seconds on average. After removing one of the clusters, since some of the processors are slower (approximately 5 times), the weighted average efficiency rises only to around 40%. Since this value lies between Emin and Emax, no nodes are added or removed. This example illustrates the advantages that opportunistic migration would bring: there were faster nodes available in the system, and if these nodes had been added to the application (which could trigger removing the slower nodes), the iteration duration could have been reduced even further. Still, the adaptation reduced the total runtime by 30% (Figure 2).

5.6 Scenario 5: crashing nodes

In the last scenario, we also ran the application on 36 nodes in 3 clusters. After 500 seconds, 2 out of 3 clusters crash. The iteration durations are shown in Figure 7. After the crash, the iteration duration rose from 100 to 200 seconds. The weighted average efficiency rose to around 30%, which triggered adding new nodes in the adaptive version. The number of nodes gradually went back to 35, which brought the iteration duration back to around 100 seconds. The total runtime was reduced by 13% (Figure 2).

6. Related work

A number of Grid projects address the question of resource selection and adaptation. In GrADS (18) and ASSIST (1), resource selection and adaptation require a performance model that allows predicting application runtimes. In the resource
selection phase, a number of possible resource sets are examined and the set with the shortest predicted runtime is selected. If performance degradation is detected during the computation, the resource selection phase is repeated. GrADS uses the ratio of the predicted execution times (of certain application phases) to the real execution times as an indicator of application performance. ASSIST uses the number of iterations per time unit (for iterative applications) or the number of tasks per time unit (for regular master-worker applications) as a performance indicator.

[Figure 7. Barnes-Hut iteration durations with/without adaptation, crashing CPUs]

The main difference between these approaches and ours is the use of performance models. The main advantage is that once the performance model is known, the system is able to make more accurate migration decisions than with our approach. However, even if the performance model is known, the problem of finding an optimal resource set (i.e.
the resource set with the minimal execution time) is NP-complete. Currently, both GrADS and ASSIST examine only a subset of all possible resource sets, and therefore there is no guarantee that the resulting resource set will be optimal. As the number of available grid resources increases, the accuracy of this approach diminishes, since the subset of resource sets that can be examined in a reasonable time becomes relatively smaller. Another disadvantage of these systems is that their performance degradation detection is suitable only for iterative or regular applications.

Cactus (2) and GridWay (14) do not use performance models. However, these frameworks are only suitable for sequential (GridWay) or single-site (Cactus) applications. In that case, the resource selection problem boils down to selecting the fastest machine or cluster. Processor clock speed, average load and the number of processors in a cluster (Cactus) are used to rank resources, and the resource with the highest rank is selected. The application is migrated if performance degradation is detected or better resources are discovered. Both Cactus and GridWay use the number of iterations per time unit as the performance indicator. The main limitation of this methodology is that it is suitable only for sequential or single-site applications. Moreover, resource selection based on clock speed is not always accurate. Finally, the performance degradation detection is suitable only for iterative applications and cannot be used for irregular computations such as search and optimization problems.

The resource selection problem was also studied by the AppLeS project (5). In the context of this project, a number of applications were studied and performance models for these applications were created. Based on such a model, a scheduling agent is built that uses the performance model to select the best resource set and the best application schedule on this set. AppLeS scheduling agents are written on a case-by-case
basis and cannot be reused for other applications. Two reusable templates were also developed for specific classes of applications, namely master-worker (the AMWAT template) and parameter sweep (the APST template) applications. Migration is not supported by the AppLeS software.

In (13), the problem of scheduling master-worker applications is studied. The authors assume homogeneous processors (i.e., processors with the same speed) and do not take communication costs into account. The problem is therefore reduced to finding the right number of workers. This approach is similar to ours in that no performance model is used. Instead, the system tries to deduce the application requirements at runtime and adjusts the number of workers to approach the ideal number.

7. Conclusions and future work

In this paper, we investigated the problem of resource selection and adaptation in grid environments. Existing approaches to these problems typically assume the existence of a performance model that allows predicting application runtimes on various sets of resources. However, creating performance models is inherently difficult and requires knowledge about the application. We propose an approach that does not require in-depth knowledge about the application. We start the application on an arbitrary set of resources and monitor its performance. The performance monitoring allows us to learn certain application requirements, such as the number of processors needed by the application or the application's bandwidth requirements. We use this knowledge to gradually refine the resource set by removing inadequate nodes or adding new nodes if necessary. This approach does not result in the optimal resource set, but in a reasonable resource set, i.e.
a set free from various performance bottlenecks such as slow network connections or overloaded processors. Our approach also allows the application to adapt to changing grid conditions.

The adaptation decisions are based on the weighted average efficiency, an extension of the concept of parallel efficiency defined for traditional, homogeneous parallel machines. If the weighted average efficiency drops below a certain level, the adaptation coordinator starts removing the worst nodes. The badness of the nodes is defined by a heuristic formula. If the weighted average efficiency rises above a certain level, new nodes are added. This simple adaptation strategy allows us to handle multiple scenarios typical for grid environments: expanding to more nodes or shrinking to fewer nodes if the application was started on an inappropriate number of processors, removing inadequate nodes and replacing them with better ones, replacing crashed processors, etc. The application adapts fully automatically to changing conditions. We implemented our approach in the Satin divide-and-conquer framework, evaluated it on the DAS-2 distributed supercomputer, and demonstrated that it can yield significant performance improvements (up to 60% in our experiments).

Future work will involve extending our adaptation strategy to support opportunistic migration. This, however, requires grid schedulers with more sophisticated functionality than currently exists. Further research is also needed to decrease the benchmarking overhead. For example, information about the CPU load could be used to decrease the benchmarking frequency. Another line of research that we wish to investigate is using feedback control to refine the adaptation strategy during the application run. For example, the node badness formula could be refined at runtime based on the effectiveness of previous adaptation decisions. Finally, the centralized implementation of the adaptation coordinator might become a
bottleneck for applications which run on very large numbers of nodes (hundreds or thousands). This problem can be solved by implementing a hierarchy of coordinators: one sub-coordinator per cluster, which collects and processes statistics from its cluster, and one main coordinator, which collects the information from the sub-coordinators.

Acknowledgments

This work was carried out in the context of the Virtual Laboratory for e-Science project (www.vl-e.nl). This project is supported by a BSIK grant from the Dutch Ministry of Education, Culture and Science (OC&W) and is part of the ICT innovation program of the Ministry of Economic Affairs (EZ).

References

[1] M. Aldinucci, F. Andre, J. Buisson, S. Campa, M. Coppola, M. Danelutto, and C. Zoccolo. Parallel program/component adaptivity management. In ParCo 2005, Sept. 2005.

[2] G. Allen, D. Angulo, I. Foster, G. Lanfermann, C. Liu, T. Radke, E. Seidel, and J. Shalf. The Cactus Worm: Experiments with resource discovery and allocation in a grid environment. Int'l Journal of High Performance Computing Applications, 15(4):345-358, 2001.

[3] G. Allen, K. Davis, K. N. Dolkas, N. D. Doulamis, T. Goodale, T. Kielmann, A. Merzky, J. Nabrzyski, J. Pukacki, T. Radke, M. Russell, E. Seidel, J. Shalf, and I. Taylor. Enabling applications on the grid - a GridLab overview. Int'l Journal of High Performance Computing Applications, 17(4):449-466, Aug. 2003.

[4] J. E. Baldeschwieler, R. D. Blumofe, and E. A. Brewer. ATLAS: An Infrastructure for Global Computing. In 7th ACM SIGOPS European Workshop on System Support for Worldwide Applications, pages 165-172, Sept. 1996.

[5] F. Berman, R. Wolski, H. Casanova, W. Cirne, H. Dail, M. Faerman, S. Figueira, J. Hayes, G. Obertelli, J. Schopf, G. Shao, S. Smallen, N. Spring, A. Su, and D. Zagorodnov. Adaptive Computing on the Grid Using AppLeS. IEEE Trans. on Parallel and Distributed Systems, 14(4):369-382, Apr. 2003.

[6] D.-M. Chiu, M. Kadansky, J. Provino, and J.
Wesley. Experiences in programming a traffic shaper. In 5th IEEE Symp. on Computers and Communications, pages 470-476, 2000.

[7] W. Chrabakh and R. Wolski. GridSAT: A Chaff-based Distributed SAT Solver for the Grid. In 2003 ACM/IEEE Conference on Supercomputing, page 37, 2003.

[8] The Distributed ASCI Supercomputer (DAS). http://www.cs.vu.nl/das2/.

[9] N. Drost, R. V. van Nieuwpoort, and H. E. Bal. Simple locality-aware co-allocation in peer-to-peer supercomputing. In 6th Int'l Workshop on Global Peer-2-Peer Computing, May 2005.

[10] D. L. Eager, J. Zahorjan, and E. D. Lazowska. Speedup versus efficiency in parallel systems. IEEE Transactions on Computers, 38(3):408-423, Mar. 1989.

[11] I. Foster. Globus Toolkit version 4: Software for service-oriented systems. In IFIP International Conference on Network and Parallel Computing, pages 2-13. Springer-Verlag LNCS 3779, 2005.

[12] J.-P. Goux, S. Kulkarni, M. Yoder, and J. Linderoth. An Enabling Framework for Master-Worker Applications on the Computational Grid. In 9th IEEE Int'l Symp. on High Performance Distributed Computing, pages 43-50, Aug. 2000.

[13] E. Heymann, M. A. Senar, E. Luque, and M. Livny. Adaptive scheduling for master-worker applications on the computational grid. In 1st IEEE/ACM International Workshop on Grid Computing, pages 214-227. Springer-Verlag LNCS 1971, 2000.

[14] E. Huedo, R. S. Montero, and I. M. Llorente. A framework for adaptive execution in grids. Software - Practice & Experience, 34(7):631-651, 2004.

[15] H. H. Mohamed and D. H. Epema. Experiences with the KOALA Co-Allocating Scheduler in Multiclusters. In 5th IEEE/ACM Int'l Symp. on Cluster Computing and the Grid, pages 640-650, May 2005.

[16] A. Plaat, H. E. Bal, and R. F. H. Hofman. Sensitivity of parallel applications to large differences in bandwidth and latency in two-layer interconnects. In 5th Int'l Symp. on High Performance Computer Architecture, pages 244-253, Jan. 1999.

[17] J.
W. Romein, H. E. Bal, J. Schaeffer, and A. Plaat. A performance analysis of transposition-table-driven work scheduling in distributed search. IEEE Trans. on Parallel and Distributed Systems, 13(5):447-459, May 2002.

[18] S. S. Vadhiyar and J. J. Dongarra. Self adaptivity in Grid computing. Concurrency and Computation: Practice and Experience, 17(2-4):235-257, 2005.

[19] R. V. van Nieuwpoort, T. Kielmann, and H. E. Bal. Efficient load balancing for wide-area divide-and-conquer applications. In 8th ACM SIGPLAN Symp. on Principles and Practices of Parallel Programming, pages 34-43, 2001.

[20] R. V. van Nieuwpoort, J. Maassen, T. Kielmann, and H. E. Bal. Satin: Simple and Efficient Java-based Grid Programming. Scalable Computing: Practice and Experience, 6(3):19-32, Sept. 2004.

[21] R. V. van Nieuwpoort, J. Maassen, G. Wrzesinska, R. Hofman, C. Jacobs, T. Kielmann, and H. E. Bal. Ibis: a Flexible and Efficient Java-based Grid Programming Environment. Concurrency & Computation: Practice & Experience, 17(7-8):1079-1107, 2005.

[22] R. Wolski, N. Spring, and J. Hayes. The network weather service: A distributed resource performance forecasting service for metacomputing. Journal of Future Generation Computing Systems, 15(5-6):757-768, Oct. 1999.

[23] G. Wrzesinska, R. V. van Nieuwpoort, J. Maassen, and H. E. Bal. Fault-tolerance, Malleability and Migration for Divide-and-Conquer Applications on the Grid. In Int'l Parallel and Distributed Processing Symposium, Apr.
2005.

Self-Adaptive Applications on the Grid

Abstract

Grids are inherently heterogeneous and dynamic. One important problem in grid computing is resource selection, that is, finding an appropriate resource set for the application. Another problem is adaptation to the changing characteristics of the grid environment. Existing solutions to these two problems require that a performance model for the application is known. However, constructing such models is a complex task. In this paper, we investigate an approach that does not require performance models. We start an application on any set of resources. During the application run, we periodically collect statistics about the run and deduce the application's requirements from them. We then adjust the resource set to better fit the application's needs. This approach allows us to avoid performance bottlenecks, such as overloaded WAN links or very slow processors, and can therefore yield significant performance improvements. We evaluate our approach in a number of scenarios typical for the Grid.

1. Introduction

In recent years, grid computing has become a real alternative to traditional parallel computing. A grid provides much computational power, and thus offers the possibility to solve very large problems, especially if applications can run on multiple sites at the same time (7; 15; 20). However, the complexity of grid environments is also many times larger than that of traditional parallel machines such as clusters and supercomputers.

One important problem is resource selection: selecting a set of compute nodes such that the application achieves good performance. Even in traditional, homogeneous parallel environments, finding the optimal number of nodes is a hard problem, often solved in a trial-and-error fashion. In a grid environment this problem is even more difficult because of the heterogeneity of the resources: the compute nodes have various speeds and the
quality of the network connections between them varies from low-latency, high-bandwidth local-area networks (LANs) to high-latency and possibly low-bandwidth wide-area networks (WANs).

Another important problem is that the performance and availability of grid resources vary over time: network links or compute nodes may become overloaded, and compute nodes may become unavailable because of crashes or because they have been claimed by a higher-priority application. Also, new, better resources may become available. To maintain a reasonable performance level, the application therefore needs to adapt to the changing conditions.

The adaptation problem can be reduced to the resource selection problem: the resource selection phase can be repeated during application execution, either at regular intervals, when a performance problem is detected, or when new resources become available. This approach has been adopted by a number of systems (5; 14; 18). For resource selection, the application runtime is estimated for some resource sets and the set that yields the shortest runtime is selected for execution. Predicting the application runtime on a given set of resources, however, requires knowledge about the application. Typically, an analytical performance model is used, but constructing such a model is inherently difficult and requires expertise which application programmers may not have.

In this paper, we introduce and evaluate an alternative approach to application adaptation and resource selection which does not need a performance model. We start an application on any set of resources. During the application run, we periodically collect information about the communication times and idle times of the processors. We use these statistics to automatically estimate the resource requirements of the application. Next, we adjust the resource set the application is running on by adding or removing compute nodes or even entire clusters. Our adaptation
strategy uses the work by Eager et al. (10) to determine the efficiency and tries to keep the efficiency of the application between a lower and upper threshold derived from their theory.\nProcessors are added or deleted to stay between the thresholds, thus adapting automatically to the changing environment.\nA major advantage of our approach is that it improves application performance in many different situations that are typical for grid computing.\nIt handles all of the following cases:\n\u2022 automatically adapting the number of processors to the degree of parallelism in the application, even when this degree changes dynamically during the computation \u2022 migrating (part of) a computation away from overloaded resources \u2022 removing resources with poor communication links that slow down the computation \u2022 adding new resources to replace resources that have crashed\nOur work assumes the application is malleable and can run (efficiently) on multiple sites of a grid (i.e., using co-allocation (15)).\nIt should not use static load balancing or be very sensitive to wide-area latencies.\nWe have applied our ideas to divide-and-conquer applications, which satisfy these requirements.\nDivide-and-conquer has been shown to be an attractive paradigm for programming grid applications (4; 20).\nWe believe that our approach can be extended to other classes of applications with the given assumptions.\nWe implemented our strategy in Satin, which is a Java-centric framework for writing grid-enabled divide-and-conquer applications (20).\nWe evaluate the performance of our approach on the DAS-2 wide-area system and we will show that our approach yields major performance improvements (roughly 10-60%) in the above scenarios.\nThe rest of this paper is structured as follows.\nIn Section 2, we explain what assumptions we are making about the applications and grid resources.\nIn Section 3, we present our resource selection and adaptation strategy.\nIn Section 4, we describe
its implementation in the Satin framework.\nIn Section 5, we evaluate our approach in a number of grid scenarios.\nIn Section 6, we compare our approach with the related work.\nFinally, in Section 7, we conclude and describe future work.\n2.\nBackground and assumptions\nIn this section, we describe our assumptions about the applications and their resources.\nWe assume the following resource model.\nThe applications are running on multiple sites at the same time, where sites are clusters or supercomputers.\nWe also assume that the processors of the sites are accessible using a grid scheduling system, such as Koala (15), Zorilla (9) or GRMS (3).\nProcessors belonging to one site are connected by a fast LAN with a low latency and high bandwidth.\nThe different sites are connected by a WAN.\nCommunication between sites suffers from high latencies.\nWe assume that the links connecting the sites with the Internet backbone might become bottlenecks causing the inter-site communication to suffer from low bandwidths.\nWe studied the adaptation problem in the context of divide-and-conquer applications.\nHowever, we believe that our methodology can be used for other types of applications as well.\nIn this section we summarize the assumptions about applications that are important to our approach.\nThe first assumption we make is that the application is malleable, i.e., it is able to handle processors joining and leaving the on-going computation.\nIn (23), we showed how divide-and-conquer applications can be made fault tolerant and malleable.\nProcessors can be added or removed at any point in the computation with little overhead.\nThe second assumption is that the application can efficiently run on processors with different speeds.\nThis can be achieved by using a dynamic load balancing strategy, such as work stealing used by divide-and-conquer applications (19).\nAlso, master-worker applications typically use dynamic load-balancing strategies (e.g., MW--a framework for writing
grid-enabled master-worker applications (12)).\nWe find it a reasonable assumption for a grid application, since applications for which the slowest processor becomes a bottleneck will not be able to efficiently utilize grid resources.\nFinally, the application should be insensitive to wide-area latencies, so it can run efficiently on a wide-area grid (16; 17).\n3.\nSelf-adaptation\n3.1 Weighted average efficiency\n3.2 Application monitoring\n3.3 Adaptation strategy\n4.\nImplementation\n5.\nPerformance evaluation\n5.1 Scenario 0: adaptivity overhead\n5.2 Scenario 1: expanding to more nodes\n5.3 Scenario 2: overloaded processors\n5.4 Scenario 3: overloaded network link\n5.5 Scenario 4: overloaded processors and an overloaded network link\n5.6 Scenario 5: crashing nodes\n6.\nRelated work\nA number of Grid projects address the question of resource selection and adaptation.\nIn GrADS (18) and ASSIST (1), resource selection and adaptation require a performance model that allows predicting application runtimes.\nIn the resource selection phase, a number of possible resource sets are examined and the set of resources with the shortest predicted runtime is selected.\nIf performance degradation is detected during the computation, the resource selection phase is repeated.\nGrADS uses the ratio of the predicted execution times (of certain application phases) to the real execution times as an indicator of application performance.\nASSIST uses the number of iterations per time unit (for iterative applications) or the number of tasks per time unit (for regular master-worker applications) as a performance indicator.\nThe main difference between these approaches and our approach is the use of performance models.\nThe main advantage is that once the performance model is known, the system is able to take more accurate migration decisions than with our approach.\nHowever, even if the performance model is known, the problem of finding an optimal resource set (i.e., the resource set with the minimal execution time) is NP-complete.\nFigure 7.\nBarnes-Hut iteration durations with\/without adaptation, crashing CPUs\nCurrently, both GrADS and ASSIST examine only a subset of all possible resource sets and therefore there is no guarantee that the resulting resource set will be optimal.\nAs the number of available grid resources increases, the accuracy of this approach diminishes, as the subset of possible resource sets that can be examined in a reasonable time becomes smaller.\nAnother disadvantage of these systems is that the performance degradation detection is suitable only for iterative or regular applications.\nCactus (2) and GridWay (14) do not use performance models.\nHowever, these frameworks are only suitable for sequential (GridWay) or single-site applications (Cactus).\nIn that case, the resource selection problem boils down to selecting the fastest machine or cluster.\nProcessor clock speed, average load and the number of processors in a cluster (Cactus) are used to rank resources and the resource with the highest rank is selected.\nThe application is migrated if performance degradation is detected or better resources are discovered.\nBoth Cactus and GridWay use the number of iterations per time unit as the performance indicator.\nThe main limitation of this methodology is that it is suitable only for sequential or single-site applications.\nMoreover, resource selection based on clock speed is not always accurate.\nFinally, performance degradation detection is suitable only for iterative applications and cannot be used for irregular computations such as search and optimization problems.\nThe resource selection problem was also studied by the AppLeS project (5).\nIn the context of this project, a number of applications were studied and performance models for these applications were created.\nBased on such a model a scheduling agent is built that uses the performance model to select the best resource set and the
best application schedule on this set.\nAppLeS scheduling agents are written on a case-by-case basis and cannot be reused for another application.\nTwo reusable templates were also developed for specific classes of applications, namely master-worker (AMWAT template) and parameter sweep (APST template) applications.\nMigration is not supported by the AppLeS software.\nIn (13), the problem of scheduling master-worker applications is studied.\nThe authors assume homogeneous processors (i.e., with the same speed) and do not take communication costs into account.\nTherefore, the problem is reduced to finding the right number of workers.\nThe approach here is similar to ours in that no performance model is used.\nInstead, the system tries to deduce the application requirements at runtime and adjusts the number of workers to approach the ideal number.\n7.\nConclusions and future work\nIn this paper, we investigated the problem of resource selection and adaptation in grid environments.\nExisting approaches to these problems typically assume the existence of a performance model that allows predicting application runtimes on various sets of resources.\nHowever, creating performance models is inherently difficult and requires knowledge about the application.\nWe propose an approach that does not require in-depth knowledge about the application.\nWe start the application on an arbitrary set of resources and monitor its performance.\nThe performance monitoring allows us to learn certain application requirements such as the number of processors needed by the application or the application's bandwidth requirements.\nWe use this knowledge to gradually refine the resource set by removing inadequate nodes or adding new nodes if necessary.\nThis approach does not result in the optimal resource set, but in a reasonable resource set, i.e.
a set free from various performance bottlenecks such as slow network connections or overloaded processors.\nOur approach also allows the application to adapt to the changing grid conditions.\nThe adaptation decisions are based on the weighted average efficiency--an extension of the concept of parallel efficiency defined for traditional, homogeneous parallel machines.\nIf the weighted average efficiency drops below a certain level, the adaptation coordinator starts removing \"worst\" nodes.\nThe \"badness\" of the nodes is defined by a heuristic formula.\nIf the weighted average efficiency rises above a certain level, new nodes are added.\nOur simple adaptation strategy allows us to handle multiple scenarios typical for grid environments: expand to more nodes or shrink to fewer nodes if the application was started on an inappropriate number of processors, remove inadequate nodes and replace them with better ones, replace crashed processors, etc.\nThe application adapts fully automatically to changing conditions.\nWe implemented our approach in the Satin divide-and-conquer framework, evaluated it on the DAS-2 distributed supercomputer, and demonstrated that our approach can yield significant performance improvements (up to 60% in our experiments).\nFuture work will involve extending our adaptation strategy to support opportunistic migration.\nThis, however, requires grid schedulers with more sophisticated functionality than currently exists.\nFurther research is also needed to decrease the benchmarking overhead.\nFor example, the information about CPU load could be used to decrease the benchmarking frequency.\nAnother line of research that we wish to investigate is using feedback control to refine the adaptation strategy during the application run.\nFor example, the node \"badness\" formula could be refined at runtime based on the effectiveness of the previous adaptation decisions.\nFinally, the centralized implementation of the adaptation coordinator might
become a bottleneck for applications which are running on very large numbers of nodes (hundreds or thousands).\nThis problem can be solved by implementing a hierarchy of coordinators: one sub-coordinator per cluster which collects and processes statistics from its cluster and one main coordinator which collects the information from the sub-coordinators.","lvl-4":"Self-Adaptive Applications on the Grid\nAbstract\nGrids are inherently heterogeneous and dynamic.\nOne important problem in grid computing is resource selection, that is, finding an appropriate resource set for the application.\nAnother problem is adaptation to the changing characteristics of the grid environment.\nExisting solutions to these two problems require that a performance model for an application is known.\nHowever, constructing such models is a complex task.\nIn this paper, we investigate an approach that does not require performance models.\nWe start an application on any set of resources.\nDuring the application run, we periodically collect the statistics about the application run and deduce application requirements from these statistics.\nThen, we adjust the resource set to better fit the application needs.\nThis approach allows us to avoid performance bottlenecks, such as overloaded WAN links or very slow processors, and therefore can yield significant performance improvements.\nWe evaluate our approach in a number of scenarios typical for the Grid.\n1.\nIntroduction\nIn recent years, grid computing has become a real alternative to traditional parallel computing.\nA grid provides much computational power, and thus offers the possibility to solve very large problems, especially if applications can run on multiple sites at the same time (7; 15; 20).\nHowever, the complexity of Grid environments is also many times larger than that of traditional parallel machines like clusters and supercomputers.\nOne important problem is resource selection - selecting a set of compute nodes such that the
application achieves good performance.\nIn a grid environment this problem is even more difficult, because of the heterogeneity of resources: the compute nodes have various speeds and the quality of the network connections between them varies.\nAnother important problem is that the performance and availability of grid resources vary over time: the network links or compute nodes may become overloaded, or the compute nodes may become unavailable because of crashes or because they have been claimed by a higher priority application.\nAlso, new, better resources may become available.\nTo maintain a reasonable performance level, the application therefore needs to adapt to the changing conditions.\nThe adaptation problem can be reduced to the resource selection problem: the resource selection phase can be repeated during application execution, either at regular intervals, or when a performance problem is detected, or when new resources become available.\nThis approach has been adopted by a number of systems (5; 14; 18).\nFor resource selection, the application runtime is estimated for some resource sets and the set that yields the shortest runtime is selected for execution.\nPredicting the application runtime on a given set of resources, however, requires knowledge about the application.\nTypically, an analytical performance model is used, but constructing such a model is inherently difficult and requires expertise that application programmers may not have.\nIn this paper, we introduce and evaluate an alternative approach to application adaptation and resource selection which does not need a performance model.\nWe start an application on any set of resources.\nDuring the application run, we periodically collect information about the communication times and idle times of the processors.\nWe use these statistics to automatically estimate the resource requirements of the application.\nNext, we adjust the resource set the application is running on by adding or removing compute nodes or even entire clusters.\nProcessors are added or deleted
to stay between the thresholds, thus adapting automatically to the changing environment.\nA major advantage of our approach is that it improves application performance in many different situations that are typical for grid computing.\nOur work assumes the application is malleable and can run (efficiently) on multiple sites of a grid (i.e., using co-allocation (15)).\nIt should not use static load balancing or be very sensitive to wide-area latencies.\nWe have applied our ideas to divide-and-conquer applications, which satisfy these requirements.\nDivide-and-conquer has been shown to be an attractive paradigm for programming grid applications (4; 20).\nWe believe that our approach can be extended to other classes of applications with the given assumptions.\nWe implemented our strategy in Satin, which is a Java-centric framework for writing grid-enabled divide-and-conquer applications (20).\nThe rest of this paper is structured as follows.\nIn Section 2, we explain what assumptions we are making about the applications and grid resources.\nIn Section 3, we present our resource selection and adaptation strategy.\nIn Section 4, we describe its implementation in the Satin framework.\nIn Section 5, we evaluate our approach in a number of grid scenarios.\nIn Section 6, we compare our approach with the related work.\nFinally, in Section 7, we conclude and describe future work.\n2.\nBackground and assumptions\nIn this section, we describe our assumptions about the applications and their resources.\nWe assume the following resource model.\nThe applications are running on multiple sites at the same time, where sites are clusters or supercomputers.\nProcessors belonging to one site are connected by a fast LAN with a low latency and high bandwidth.\nThe different sites are connected by a WAN.\nCommunication between sites suffers from high latencies.\nWe studied the adaptation problem in the context of divide-and-conquer applications.\nHowever, we believe that our methodology can be used for other types of
applications as well.\nIn this section we summarize the assumptions about applications that are important to our approach.\nThe first assumption we make is that the application is malleable, i.e., it is able to handle processors joining and leaving the on-going computation.\nIn (23), we showed how divide-and-conquer applications can be made fault tolerant and malleable.\nProcessors can be added or removed at any point in the computation with little overhead.\nThe second assumption is that the application can efficiently run on processors with different speeds.\nThis can be achieved by using a dynamic load balancing strategy, such as work stealing used by divide-and-conquer applications (19).\nAlso, master-worker applications typically use dynamic load-balancing strategies (e.g., MW--a framework for writing grid-enabled master-worker applications (12)).\nWe find it a reasonable assumption for a grid application, since applications for which the slowest processor becomes a bottleneck will not be able to efficiently utilize grid resources.\nFinally, the application should be insensitive to wide-area latencies, so it can run efficiently on a wide-area grid (16; 17).\n6.\nRelated work\nA number of Grid projects address the question of resource selection and adaptation.\nIn GrADS (18) and ASSIST (1), resource selection and adaptation require a performance model that allows predicting application runtimes.\nIn the resource selection phase, a number of possible resource sets are examined and the set of resources with the shortest predicted runtime is selected.\nIf performance degradation is detected during the computation, the resource selection phase is repeated.\nGrADS uses the ratio of the predicted execution times (of certain application phases) to the real execution times as an indicator of application performance.\nASSIST uses the number of iterations per time unit (for iterative applications) or the number of tasks per time unit (for regular master-worker applications)
as a performance indicator.\nThe main difference between these approaches and our approach is the use of performance models.\nThe main advantage is that once the performance model is known, the system is able to take more accurate migration decisions than with our approach.\nHowever, even if the performance model is known, the problem of finding an optimal resource set (i.e., the resource set with the minimal execution time) is NP-complete.\nFigure 7.\nBarnes-Hut iteration durations with\/without adaptation, crashing CPUs\nAs the number of available grid resources increases, the accuracy of this approach diminishes, as the subset of possible resource sets that can be examined in a reasonable time becomes smaller.\nAnother disadvantage of these systems is that the performance degradation detection is suitable only for iterative or regular applications.\nCactus (2) and GridWay (14) do not use performance models.\nHowever, these frameworks are only suitable for sequential (GridWay) or single-site applications (Cactus).\nIn that case, the resource selection problem boils down to selecting the fastest machine or cluster.\nProcessor clock speed, average load and the number of processors in a cluster (Cactus) are used to rank resources and the resource with the highest rank is selected.\nThe application is migrated if performance degradation is detected or better resources are discovered.\nBoth Cactus and GridWay use the number of iterations per time unit as the performance indicator.\nThe main limitation of this methodology is that it is suitable only for sequential or single-site applications.\nMoreover, resource selection based on clock speed is not always accurate.\nFinally, performance degradation detection is suitable only for iterative applications and cannot be used for irregular computations such as search and optimization problems.\nThe resource selection problem was also studied by the AppLeS project (5).\nIn the context of this
project, a number of applications were studied and performance models for these applications were created.\nBased on such a model a scheduling agent is built that uses the performance model to select the best resource set and the best application schedule on this set.\nAppLeS scheduling agents are written on a case-by-case basis and cannot be reused for another application.\nTwo reusable templates were also developed for specific classes of applications, namely master-worker (AMWAT template) and parameter sweep (APST template) applications.\nIn (13), the problem of scheduling master-worker applications is studied.\nTherefore, the problem is reduced to finding the right number of workers.\nThe approach here is similar to ours in that no performance model is used.\nInstead, the system tries to deduce the application requirements at runtime and adjusts the number of workers to approach the ideal number.\n7.\nConclusions and future work\nIn this paper, we investigated the problem of resource selection and adaptation in grid environments.\nExisting approaches to these problems typically assume the existence of a performance model that allows predicting application runtimes on various sets of resources.\nHowever, creating performance models is inherently difficult and requires knowledge about the application.\nWe propose an approach that does not require in-depth knowledge about the application.\nWe start the application on an arbitrary set of resources and monitor its performance.\nThe performance monitoring allows us to learn certain application requirements such as the number of processors needed by the application or the application's bandwidth requirements.\nWe use this knowledge to gradually refine the resource set by removing inadequate nodes or adding new nodes if necessary.\nThis approach does not result in the optimal resource set, but in a reasonable resource set, i.e.
a set free from various performance bottlenecks such as slow network connections or overloaded processors.\nOur approach also allows the application to adapt to the changing grid conditions.\nIf the weighted average efficiency drops below a certain level, the adaptation coordinator starts removing \"worst\" nodes.\nIf the weighted average efficiency rises above a certain level, new nodes are added.\nThe application adapts fully automatically to changing conditions.\nFuture work will involve extending our adaptation strategy to support opportunistic migration.\nThis, however, requires grid schedulers with more sophisticated functionality than currently exists.\nFurther research is also needed to decrease the benchmarking overhead.\nAnother line of research that we wish to investigate is using feedback control to refine the adaptation strategy during the application run.\nFinally, the centralized implementation of the adaptation coordinator might become a bottleneck for applications which are running on very large numbers of nodes (hundreds or thousands).","lvl-2":"Self-Adaptive Applications on the Grid\nAbstract\nGrids are inherently heterogeneous and dynamic.\nOne important problem in grid computing is resource selection, that is, finding an appropriate resource set for the application.\nAnother problem is adaptation to the changing characteristics of the grid environment.\nExisting solutions to these two problems require that a performance model for an application is known.\nHowever, constructing such models is a complex task.\nIn this paper, we investigate an approach that does not require performance models.\nWe start an application on any set of resources.\nDuring the application run, we periodically collect the statistics about the application run and deduce application requirements from these statistics.\nThen, we adjust the resource set to better fit the application needs.\nThis approach allows us to avoid performance bottlenecks, such as overloaded WAN
links or very slow processors, and therefore can yield significant performance improvements.\nWe evaluate our approach in a number of scenarios typical for the Grid.\n1.\nIntroduction\nIn recent years, grid computing has become a real alternative to traditional parallel computing.\nA grid provides much computational power, and thus offers the possibility to solve very large problems, especially if applications can run on multiple sites at the same time (7; 15; 20).\nHowever, the complexity of Grid environments is also many times larger than that of traditional parallel machines like clusters and supercomputers.\nOne important problem is resource selection - selecting a set of compute nodes such that the application achieves good performance.\nEven in traditional, homogeneous parallel environments, finding the optimal number of nodes is a hard problem and is often solved in a trial-and-error fashion.\nIn a grid environment this problem is even more difficult, because of the heterogeneity of resources: the compute nodes have various speeds and the quality of network connections between them varies from low-latency and high-bandwidth local-area networks (LANs) to high-latency and possibly low-bandwidth wide-area networks (WANs).\nAnother important problem is that the performance and availability of grid resources vary over time: the network links or compute nodes may become overloaded, or the compute nodes may become unavailable because of crashes or because they have been claimed by a higher priority application.\nAlso, new, better resources may become available.\nTo maintain a reasonable performance level, the application therefore needs to adapt to the changing conditions.\nThe adaptation problem can be reduced to the resource selection problem: the resource selection phase can be repeated during application execution, either at regular intervals, or when a performance problem is detected, or when new resources become available.\nThis approach has been adopted
by a number of systems (5; 14; 18).\nFor resource selection, the application runtime is estimated for some resource sets and the set that yields the shortest runtime is selected for execution.\nPredicting the application runtime on a given set of resources, however, requires knowledge about the application.\nTypically, an analytical performance model is used, but constructing such a model is inherently difficult and requires expertise that application programmers may not have.\nIn this paper, we introduce and evaluate an alternative approach to application adaptation and resource selection which does not need a performance model.\nWe start an application on any set of resources.\nDuring the application run, we periodically collect information about the communication times and idle times of the processors.\nWe use these statistics to automatically estimate the resource requirements of the application.\nNext, we adjust the resource set the application is running on by adding or removing compute nodes or even entire clusters.\nOur adaptation strategy uses the work by Eager et al.
(10) to determine the efficiency and tries to keep the efficiency of the application between a lower and upper threshold derived from their theory.\nProcessors are added or deleted to stay between the thresholds, thus adapting automatically to the changing environment.\nA major advantage of our approach is that it improves application performance in many different situations that are typical for grid computing.\nIt handles all of the following cases:\n\u2022 automatically adapting the number of processors to the degree of parallelism in the application, even when this degree changes dynamically during the computation \u2022 migrating (part of) a computation away from overloaded resources \u2022 removing resources with poor communication links that slow down the computation \u2022 adding new resources to replace resources that have crashed\nOur work assumes the application is malleable and can run (efficiently) on multiple sites of a grid (i.e., using co-allocation (15)).\nIt should not use static load balancing or be very sensitive to wide-area latencies.\nWe have applied our ideas to divide-and-conquer applications, which satisfy these requirements.\nDivide-and-conquer has been shown to be an attractive paradigm for programming grid applications (4; 20).\nWe believe that our approach can be extended to other classes of applications with the given assumptions.\nWe implemented our strategy in Satin, which is a Java-centric framework for writing grid-enabled divide-and-conquer applications (20).\nWe evaluate the performance of our approach on the DAS-2 wide-area system and we will show that our approach yields major performance improvements (roughly 10-60%) in the above scenarios.\nThe rest of this paper is structured as follows.\nIn Section 2, we explain what assumptions we are making about the applications and grid resources.\nIn Section 3, we present our resource selection and adaptation strategy.\nIn Section 4, we describe its implementation in the Satin
framework.\nIn Section 5, we evaluate our approach in a number of grid scenarios.\nIn Section 6, we compare our approach with the related work.\nFinally, in Section 7, we conclude and describe future work.\n2.\nBackground and assumptions\nIn this section, we describe our assumptions about the applications and their resources.\nWe assume the following resource model.\nThe applications are running on multiple sites at the same time, where sites are clusters or supercomputers.\nWe also assume that the processors of the sites are accessible using a grid scheduling system, such as Koala (15), Zorilla (9) or GRMS (3).\nProcessors belonging to one site are connected by a fast LAN with a low latency and high bandwidth.\nThe different sites are connected by a WAN.\nCommunication between sites suffers from high latencies.\nWe assume that the links connecting the sites with the Internet backbone might become bottlenecks causing the inter-site communication to suffer from low bandwidths.\nWe studied the adaptation problem in the context of divide-and-conquer applications.\nHowever, we believe that our methodology can be used for other types of applications as well.\nIn this section we summarize the assumptions about applications that are important to our approach.\nThe first assumption we make is that the application is malleable, i.e., it is able to handle processors joining and leaving the on-going computation.\nIn (23), we showed how divide-and-conquer applications can be made fault tolerant and malleable.\nProcessors can be added or removed at any point in the computation with little overhead.\nThe second assumption is that the application can efficiently run on processors with different speeds.\nThis can be achieved by using a dynamic load balancing strategy, such as work stealing used by divide-and-conquer applications (19).\nAlso, master-worker applications typically use dynamic load-balancing strategies (e.g., MW--a framework for writing grid-enabled master-worker
applications (12)).\nWe find it a reasonable assumption for a grid application, since applications for which the slowest processor becomes a bottleneck will not be able to efficiently utilize grid resources.\nFinally, the application should be insensitive to wide-area latencies, so it can run efficiently on a wide-area grid (16; 17).\n3.\nSelf-adaptation\nIn this section we will explain how we use application malleability to find a suitable set of resources for a given application and to adapt to changing conditions in the grid environment.\nIn order to monitor the application performance and guide the adaptation, we added an extra process to the computation, which we call the adaptation coordinator.\nThe adaptation coordinator periodically collects performance statistics from the application processors.\nWe introduce a new application performance metric: weighted average efficiency, which describes the application performance on a heterogeneous set of resources.\nThe coordinator uses statistics from application processors to compute the weighted average efficiency.\nIf the efficiency rises above or falls below certain thresholds, the coordinator decides on adding or removing processors.\nA heuristic formula is used to decide which processors have to be removed.\nDuring this process the coordinator learns the application requirements by remembering the characteristics of the removed processors.\nThese requirements are then used to guide the adding of new processors.\n3.1 Weighted average efficiency\nIn traditional parallel computing, a standard metric describing the performance of a parallel application is efficiency.\nEfficiency is defined as the average utilization of the processors, that is, the fraction of time the processors spend doing useful work rather than being idle or communicating with other processors (10):\nefficiency = (1\/n) * sum_{i=1..n} (1 - overhead_i)\nwhere n is the number of processors and overhead_i is the fraction of time the ith processor spends being idle or communicating.\nEfficiency indicates the 
benefit of using multiple processors.\nTypically, the efficiency drops as new processors are added to the computation.\nTherefore, achieving a high speedup (and thus a low execution time) and achieving a high system utilization are conflicting goals (10).\nThe optimal number of processors is the number for which the ratio of efficiency to execution time is maximized.\nAdding processors beyond this number yields little benefit.\nThis number is typically hard to find, but in (10) it was theoretically proven that if the optimal number of processors is used, the efficiency is at least 50%.\nTherefore, adding processors when efficiency is smaller or equal to 50% will only decrease the system utilization without significant performance gains.\nFor heterogeneous environments with different processor speeds, we extended the notion of efficiency and introduced weighted average efficiency.\nThe useful work done by a processor (1 - overhead_i) is weighted by multiplying it by the speed of this processor relative to the fastest processor.\nThe fastest processor has speed = 1; for the others, 0 < speed_i <= 1:\nwa_efficiency = (1\/n) * sum_{i=1..n} speed_i * (1 - overhead_i)\nThe collected statistics also feed the heuristic formula that decides which nodes to remove: an inter-cluster overhead larger than 0.2 indicates bandwidth problems and processors with speed < 0.05 do not contribute to the computation.\nAdditionally, when one of the clusters has an exceptionally high inter-cluster overhead (larger than 0.25), we conclude that the bandwidth on the link between this cluster and the Internet backbone is insufficient for the application.\nIn that case, we simply remove the whole cluster instead of computing node badness and removing the worst nodes.\nAfter deciding which nodes are removed, the coordinator sends a message to these nodes and the nodes leave the computation.\nFigure 1 shows a schematic view of the adaptation strategy.\nDashed lines indicate a part that is not supported yet, as will be explained below.\nThis simple adaptation strategy allows us to improve application performance in several situations typical for the Grid:\n\u2022 If an application is started on fewer 
processors than its degree of parallelism allows, it will automatically expand to more processors (as soon as there are extra resources available).\nConversely, if an application is started on more processors than it can efficiently use, a part of the processors will be released.\n\u2022 If an application is running on an appropriate set of resources but after a while some of the resources (processors and\/or network links) become overloaded and slow down the computation, the overloaded resources will be removed.\nAfter removing the overloaded resources, the weighted average efficiency will increase to above the Emax threshold and the adaptation coordinator will try to add new resources.\nTherefore, the application will be migrated from overloaded resources.\n\u2022 If some of the original resources chosen by the user are inappropriate for the application, for example the bandwidth to one of the clusters is too small, the inappropriate resources will be removed.\nIf necessary, the adaptation component will try to add other resources.\n\u2022 If during the computation a substantial part of the processors crashes, the adaptation component will try to add new resources to replace the crashed processors.\n\u2022 If the application degree of parallelism is changing during the computation, the number of nodes the application is running on will be automatically adjusted.\nFigure 1.\nAdaptation strategy\nFigure 2.\nThe runtimes of the Barnes-Hut application, scenarios 0-5 (runtime 1: without monitoring and adaptation; runtime 2: with monitoring and adaptation; runtime 3: with monitoring but no adaptation)\nFurther improvements are possible, but require extra functionality from the grid scheduler and\/or integration with monitoring services such as NWS (22).\nFor example, adding nodes to a computation can be improved.\nCurrently, we add any nodes the scheduler gives us.\nHowever, it would be more efficient to ask for the fastest processors among the available ones.\nThis 
could be done, for example, by passing a benchmark to the grid scheduler, so that it can measure processor speeds in an application-specific way.\nTypically, it would be enough to measure the speed of one processor per site, since clusters and supercomputers are usually homogeneous.\nAn alternative approach would be ranking the processors based on parameters such as clock speed and cache size.\nThis approach is sometimes used for resource selection for sequential applications (14).\nHowever, it is less accurate than using an application-specific benchmark.\nAlso, during application execution, we can learn some application requirements and pass them to the scheduler.\nOne example is the minimal bandwidth required by the application.\nThe lower bound on minimal required bandwidth is tightened each time a cluster with high inter-cluster overhead is removed.\nThe bandwidth between each pair of clusters is estimated during the computation by measuring data transfer times, and the bandwidth to the removed cluster is set as a minimum.\nAlternatively, information from a grid monitoring system can be used.\nSuch bounds can be passed to the scheduler to avoid adding inappropriate resources.\nIt is especially important when migrating from resources that cause performance problems: we have to be careful not to add the resources we have just removed.\nCurrently we use blacklisting: we simply do not allow adding resources we removed before.\nThis means, however, that we cannot use these resources even if the cause of the performance problem disappears, e.g. the bandwidth of a link might improve if the background traffic diminishes.\nWe are currently not able to perform opportunistic migration: migrating to better resources when they are discovered.
If an application runs with efficiency between Emin and Emax, the adaptation component will not undertake any action, even if better resources become available.\nEnabling opportunistic migration requires, again, the ability to specify to the scheduler what \"better\" resources are (faster, with a certain minimal bandwidth) and receiving notifications when such resources become available.\nExisting grid schedulers such as GRAM from the Globus Toolkit (11) do not support such functionality.\nThe developers of the KOALA metascheduler (15) have recently started a project whose goal is to provide support for adaptive applications.\nWe are currently discussing with them the possibility of providing the functionalities required by us, aiming to extend our adaptivity strategy to support opportunistic migration and to improve the initial resource selection.\n4.\nImplementation\nWe incorporated our adaptation mechanism into Satin--a Java framework for creating grid-enabled divide-and-conquer applications.\nWith Satin, the programmer annotates the sequential code with divide-and-conquer primitives and compiles the annotated code with a special Satin compiler that generates the necessary communication and load balancing code.\nSatin uses a very efficient, grid-aware load balancing algorithm--Cluster-aware Random Work Stealing (CRS) (19), which hides wide-area latencies by overlapping local and remote stealing.\nSatin also provides transparent fault tolerance and malleability (23).\nWith Satin, removing and adding processors from\/to an ongoing computation incurs little overhead.\nWe instrumented the Satin runtime system to collect runtime statistics and send them to the adaptation coordinator.\nThe coordinator is implemented as a separate process.\nBoth coordinator and Satin are implemented entirely in Java on top of the Ibis communication library (21).\nThe core of Ibis is also implemented in Java.\nThe resulting system therefore is highly portable (due to Java's \"write once, run anywhere\" 
property) allowing the software to run unmodified on a heterogeneous grid.\nIbis also provides the Ibis Registry.\nThe Registry provides, among others, a membership service to the processors taking part in the computation.\nThe adaptation coordinator uses the Registry to discover the application processes, and the application processes use this service to discover each other.\nThe Registry also offers fault detection (additional to the fault detection provided by the communication channels).\nFinally, the Registry provides the possibility to send signals to application processes.\nThe coordinator uses this functionality to notify the processors that they need to leave the computation.\nCurrently the Registry is implemented as a centralized server.\nFor requesting new nodes, the Zorilla (9) system is used--a peer-to-peer supercomputing middleware which allows straightforward allocation of processors in multiple clusters and\/or supercomputers.\nZorilla provides locality-aware scheduling, which tries to allocate processors that are located close to each other in terms of communication latency.\nIn the future, Zorilla will also support bandwidth-aware scheduling, which tries to maximize the total bandwidth in the system.\nZorilla can be easily replaced with another grid scheduler.\nIn the future, we are planning to integrate our adaptation component with GAT (3) which is becoming a standard in the grid community and KOALA (15) a scheduler that provides co-allocation on top of standard grid middleware, such as the Globus Toolkit (11).\n5.\nPerformance evaluation\nIn this section, we will evaluate our approach.\nWe will demonstrate the performance of our mechanism in a few scenarios.\nThe first scenario is an \"ideal\" situation: the application runs on a reasonable set of nodes (i.e., such that the efficiency is around 50%) and no problems such as overloaded networks and processors, crashing processors etc. 
occur.\nThis scenario allows us to measure the overhead of the adaptation support.\nThe remaining scenarios are typical for grid environments and demonstrate that with our adaptation support the application can avoid serious performance bottlenecks such as overloaded processors or network links.\nFor each scenario, we compare the performance of an application with adaptation support to a non-adaptive version.\nIn the non-adaptive version, the coordinator does not collect statistics and no benchmarking (for measuring processor speeds) is performed.\nIn the \"ideal\" scenario, we additionally measure the performance of an application with collecting statistics and benchmarking turned on but without doing adaptation, that is, without allowing it to change the number of nodes.\nThis allows us to measure the overhead of benchmarking and collecting statistics.\nFigure 4.\nBarnes-Hut iteration durations with\/without adaptation, overloaded CPUs\nIn all experiments we used a monitoring period of 3 minutes for the adaptive versions of the applications.\nAll the experiments were carried out on the DAS-2 wide-area system (8), which consists of five clusters located at five Dutch universities.\nFigure 3.\nBarnes-Hut iteration durations with\/without adaptation, too few processors\nOne of the clusters consists of 72 nodes, the others of 32 nodes.\nEach node contains two 1 GHz Pentium processors.\nWithin a cluster, the nodes are connected by Fast Ethernet.\nThe clusters are connected by the Dutch university Internet backbone.\nIn our experiments, we used the Barnes-Hut N-body simulation.\nBarnes-Hut simulates the evolution of a large set of bodies under the influence of (gravitational or electrostatic) forces.\nThe evolution of N bodies is simulated in iterations of discrete time steps.\n5.1 Scenario 0: adaptivity overhead\nIn this scenario, the application is started on 36 nodes.\nThe nodes are equally divided over 3 clusters (12 nodes in each cluster).\nOn this number of nodes, 
the application runs with 50% efficiency, so we consider it a reasonable number of nodes.\nAs mentioned above, in this scenario we measured three runtimes: the runtime of the application without adaptation support (runtime 1), the runtime with adaptation support (runtime 2) and the runtime with monitoring (i.e., collection of statistics and benchmarking) turned on but without allowing it to change the number of nodes (runtime 3).\nThose runtimes are shown in Figure 2, first group of bars.\nThe comparison between runtime 3 and 1 shows the overhead of adaptation support.\nIn this experiment it is around 15%.\nAlmost all overhead comes from benchmarking.\nThe benchmark is run 1-2 times per monitoring period.\nThis overhead can be made smaller by increasing the length of the monitoring period and decreasing the benchmarking frequency.\nThe monitoring period we used (3 minutes) is relatively short, because the runtime of the application was also relatively short (30--60 minutes).\nUsing longer running applications would not allow us to finish the experimentation in a reasonable time.\nHowever, real-world grid applications typically need hours, days or even weeks to complete.\nFor such applications, a much longer monitoring period can be used and the adaptation overhead can be kept much lower.\nFor example, with the Barnes-Hut application, if the monitoring period is extended to 10 minutes, the overhead drops to 6%.\nNote that combining benchmarking with monitoring processor load (as described in Section 3.2) would reduce the benchmarking overhead to almost zero: since the processor load is not changing, the benchmarks would only need to be run at the beginning of the computation.\n5.2 Scenario 1: expanding to more nodes\nIn this scenario, the application is started on fewer nodes than the application can efficiently use.\nThis may happen because the user does not know the right number of nodes or because insufficient nodes were available at the moment the application 
was started.\nWe tried 3 initial numbers of nodes: 8 (Scenario 1a), 16 (Scenario 1b) and 24 (Scenario 1c).\nThe nodes were located in 1 or 2 clusters.\nIn each of the three sub-scenarios, the application gradually expanded to 36-40 nodes located in 4 clusters.\nThis allowed us to reduce the application runtimes by 50% (Scenario 1a), 35% (Scenario 1b) and 12% (Scenario 1c) with respect to the non-adaptive version.\nThose runtimes are shown in Figure 2.\nSince Barnes-Hut is an iterative application, we also measured the time of each iteration, as shown in Figure 3.\nAdaptation reduces the iteration time by a factor of 3 (Scenario 1a), 1.7 (Scenario 1b) and 1.2 (Scenario 1c), which allows us to conclude that the gains in the total runtime would be even bigger if the application were run longer than for 15 iterations.\n5.3 Scenario 2: overloaded processors\nIn this scenario, we started the application on 36 nodes in 3 clusters.\nAfter 200 seconds, we introduced a heavy, artificial load on the processors in one of the clusters.\nSuch a situation may happen when an application with a higher priority is started on some of the resources.\nFigure 4 shows the iteration durations of both the adaptive and non-adaptive versions.\nAfter introducing the load, the iteration duration increased by a factor of 2 to 3.\nAlso, the iteration times became very variable.\nThe adaptive version reacted by removing the overloaded nodes.\nAfter removing these nodes, the weighted average efficiency rose to around 35%, which triggered adding new nodes and the application expanded back to 38 nodes.\nSo, the overloaded nodes were replaced by better nodes, which brought the iteration duration back to the initial values.\nThis reduced the total runtime by 14%.\nThe runtimes are shown in Figure 2.\nFigure 5.\nBarnes-Hut iteration durations with\/without adaptation, overloaded network
link\n5.4 Scenario 3: overloaded network link\nIn this scenario, we ran the application on 36 nodes in 3 clusters.\nWe simulated that the uplink to one of the clusters was overloaded and the bandwidth on this uplink was reduced to approximately 100 KB\/s.\nTo simulate low bandwidth we used the traffic-shaping techniques described in (6).\nThe iteration durations in this experiment are shown in Figure 5.\nThe iteration durations of the non-adaptive version exhibit enormous variation: from 170 to 890 seconds.\nThe adaptive version removed the badly connected cluster after the first monitoring period.\nAs a result, the weighted average efficiency rose to around 35% and new nodes were gradually added until their number reached 38.\nThis brought the iteration times down to around 100 seconds.\nThe total runtime was reduced by 60% (Figure 2).\n5.5 Scenario 4: overloaded processors and an overloaded network link\nIn this scenario, we ran the application on 36 nodes in 3 clusters.\nAgain, we simulated an overloaded uplink to one of the clusters.\nAdditionally, we simulated processors with heterogeneous speeds by inserting a relatively light artificial load on the processors in one of the remaining clusters.\nThe iteration durations are shown in Figure 6.\nFigure 6.\nBarnes-Hut iteration durations with\/without adaptation, overloaded CPUs and an overloaded network link\nAgain, the non-adaptive version exhibits a great variation in iteration durations: from 200 to 1150 seconds.\nThe adaptive version removes the badly connected cluster after the first monitoring period, which brings the iteration duration down to 210 seconds on average.\nAfter removing one of the clusters, since some of the processors are slower (approximately 5 times), the weighted average efficiency rises only to around 40%.\nSince this value lies between Emin and Emax, no nodes are added or removed.
This example illustrates what the advantages of opportunistic migration would be.\nThere were faster nodes available in the system.\nIf these nodes were added to the application (which could trigger removing the slower nodes), the iteration duration could be reduced even further.\nStill, the adaptation reduced the total runtime by 30% (Figure 2).\n5.6 Scenario 5: crashing nodes\nIn the last scenario, we also ran the application on 36 nodes in 3 clusters.\nAfter 500 seconds, 2 out of 3 clusters crash.\nThe iteration durations are shown in Figure 7.\nAfter the crash, the iteration duration rose from 100 to 200 seconds.\nThe weighted efficiency rose to around 30%, which triggered adding new nodes in the adaptive version.\nThe number of nodes gradually went back to 35, which brought the iteration duration back to around 100 seconds.\nThe total runtime was reduced by 13% (Figure 2).\n6.\nRelated work\nA number of Grid projects address the question of resource selection and adaptation.\nIn GrADS (18) and ASSIST (1), resource selection and adaptation require a performance model that allows predicting application runtimes.\nIn the resource selection phase, a number of possible resource sets is examined and the set of resources with the shortest predicted runtime is selected.\nIf performance degradation is detected during the computation, the resource selection phase is repeated.\nGrADS uses the ratio of the predicted execution times (of certain application phases) to the real execution times as an indicator of application performance.\nASSIST uses the number of iterations per time unit (for iterative applications) or the number of tasks per time unit (for regular master-worker applications) as a performance indicator.\nThe main difference between these approaches and our approach is the use of performance models.\nThe main advantage is that once the performance model is known, the system is able to take more accurate migration 
decisions than with our approach.\nHowever, even if the performance model is known, the problem of finding an optimal resource set (i.e. the resource set with the minimal execution time) is NP-complete.\nFigure 7.\nBarnes-Hut iteration durations with\/without adaptation, crashing CPUs\nCurrently, both GrADS and ASSIST examine only a subset of all possible resource sets and therefore there is no guarantee that the resulting resource set will be optimal.\nAs the number of available grid resources increases, the accuracy of this approach diminishes, as the subset of possible resource sets that can be examined in a reasonable time becomes smaller.\nAnother disadvantage of these systems is that the performance degradation detection is suitable only for iterative or regular applications.\nCactus (2) and GridWay (14) do not use performance models.\nHowever, these frameworks are only suitable for sequential (GridWay) or single-site applications (Cactus).\nIn that case, the resource selection problem boils down to selecting the fastest machine or cluster.\nProcessor clock speed, average load and the number of processors in a cluster (Cactus) are used to rank resources and the resource with the highest rank is selected.\nThe application is migrated if performance degradation is detected or better resources are discovered.\nBoth Cactus and GridWay use the number of iterations per time unit as the performance indicator.\nThe main limitation of this methodology is that it is suitable only for sequential or single-site applications.\nMoreover, resource selection based on clock speed is not always accurate.\nFinally, performance degradation detection is suitable only for iterative applications and cannot be used for irregular computations such as search and optimization problems.\nThe resource selection problem was also studied by the AppLeS project (5).\nIn the context of this project, a number of applications were studied and performance models for 
these applications were created.\nBased on such a model a scheduling agent is built that uses the performance model to select the best resource set and the best application schedule on this set.\nAppLeS scheduling agents are written on a case-by-case basis and cannot be reused for another application.\nTwo reusable templates were also developed for specific classes of applications, namely master-worker (AMWAT template) and parameter sweep (APST template) applications.\nMigration is not supported by the AppLeS software.\nIn (13), the problem of scheduling master-worker applications is studied.\nThe authors assume homogeneous processors (i.e., with the same speed) and do not take communication costs into account.\nTherefore, the problem is reduced to finding the right number of workers.\nThe approach here is similar to ours in that no performance model is used.\nInstead, the system tries to deduce the application requirements at runtime and adjusts the number of workers to approach the ideal number.\n7.\nConclusions and future work\nIn this paper, we investigated the problem of resource selection and adaptation in grid environments.\nExisting approaches to these problems typically assume the existence of a performance model that allows predicting application runtimes on various sets of resources.\nHowever, creating performance models is inherently difficult and requires knowledge about the application.\nWe propose an approach that does not require in-depth knowledge about the application.\nWe start the application on an arbitrary set of resources and monitor its performance.\nThe performance monitoring allows us to learn certain application requirements such as the number of processors needed by the application or the application's bandwidth requirements.\nWe use this knowledge to gradually refine the resource set by removing inadequate nodes or adding new nodes if necessary.\nThis approach does not 
result in the optimal resource set, but in a reasonable resource set, i.e., a set free from various performance bottlenecks such as slow network connections or overloaded processors.\nOur approach also allows the application to adapt to the changing grid conditions.\nThe adaptation decisions are based on the weighted average efficiency--an extension of the concept of parallel efficiency defined for traditional, homogeneous parallel machines.\nIf the weighted average efficiency drops below a certain level, the adaptation coordinator starts removing the \"worst\" nodes.\nThe \"badness\" of the nodes is defined by a heuristic formula.\nIf the weighted average efficiency rises above a certain level, new nodes are added.\nOur simple adaptation strategy allows us to handle multiple scenarios typical for grid environments: expand to more nodes or shrink to fewer nodes if the application was started on an inappropriate number of processors, remove inadequate nodes and replace them with better ones, replace crashed processors, etc. 
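The weighted average efficiency metric and the threshold rule summarized above can be sketched in a few lines. This is a minimal illustration, not the Satin implementation: the function names are ours, and the concrete threshold values 0.3 and 0.5 are stand-ins for the paper's Emin and Emax.

```python
def weighted_avg_efficiency(overheads, speeds):
    """Average useful work per processor, weighted by relative speed.

    overheads[i] is the fraction of time processor i spends idle or
    communicating; speeds[i] is its speed relative to the fastest
    processor (the fastest processor has speed 1.0).
    """
    n = len(overheads)
    return sum(s * (1.0 - o) for o, s in zip(overheads, speeds)) / n

def adaptation_decision(wa_efficiency, e_min=0.3, e_max=0.5):
    # One monitoring-period decision of the adaptation coordinator:
    # below Emin, remove the "worst" nodes (ranked by the heuristic
    # badness formula); above Emax, ask the scheduler for more nodes.
    if wa_efficiency < e_min:
        return "remove worst nodes"
    if wa_efficiency > e_max:
        return "add nodes"
    return "no action"
```

For example, four equally fast processors that each spend 40% of their time idle or communicating give a weighted average efficiency of 0.6, which under these illustrative thresholds would trigger adding nodes.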
.\nThe application adapts fully automatically to changing conditions.\nWe implemented our approach in the Satin divide-and-conquer framework and evaluated it on the DAS-2 distributed supercomputer and demonstrate that our approach can yield significant performance improvements (up to 60% in our experiments).\nFuture work will involve extending our adaptation strategy to support opportunistic migration.\nThis, however, requires grid schedulers with more sophisticated functionality than currently exists.\nFurther research is also needed to decrease the benchmarking overhead.\nFor example, the information about CPU load could be used to decrease the benchmarking frequency.\nAnother line of research that we wish to investigate is using feedback control to refine the adaptation strategy during the application run.\nFor example, the node \"badness\" formula could be refined at runtime based on the effectiveness of the previous adaptation decisions.\nFinally, the centralized implementation of the adaptation coordinator might become a bottleneck for applications which are running on very large numbers of nodes (hundreds or thousands).\nThis problem can be solved by implementing a hierarchy of coordinators: one subcoordinator per cluster which collects and processes statistics from its cluster and one main coordinator which collects the information from the sub-coordinators.","keyphrases":["self-adapt","grid comput","resourc select","grid environ","parallel comput","homogen parallel environ","resourc heterogen","high-bandwidth local-area network","lower-bandwidth wide-area network","network link","commun time","the processor idl time","parallel degre","overload resourc","divid-and-conquer"],"prmu":["P","P","P","P","M","M","R","U","U","M","U","M","U","R","U"]} {"id":"I-26","title":"Sequential Decision Making in Parallel Two-Sided Economic Search","abstract":"This paper presents a two-sided economic search model in which agents are searching for beneficial pairwise 
partnerships. In each search stage, each of the agents is randomly matched with several other agents in parallel, and makes a decision whether to accept a potential partnership with one of them. The distinguishing feature of the proposed model is that the agents are not restricted to maintaining a synchronized (instantaneous) decision protocol and can sequentially accept and reject partnerships within the same search stage. We analyze the dynamics which drive the agents' strategies towards a stable equilibrium in the new model and show that the proposed search strategy weakly dominates the one currently in use for the two-sided parallel economic search model. By identifying several unique characteristics of the equilibrium we manage to efficiently bound the strategy space that needs to be explored by the agents and propose an efficient means for extracting the distributed equilibrium strategies in common environments.","lvl-1":"Sequential Decision Making in Parallel Two-Sided Economic Search David Sarne School of Engineering and Applied Sciences Harvard University Cambridge MA 02138 USA Teijo Arponen Institute of Mathematics Helsinki University of Technology SF-02015 TKK, Finland ABSTRACT This paper presents a two-sided economic search model in which agents are searching for beneficial pairwise partnerships.\nIn each search stage, each of the agents is randomly matched with several other agents in parallel, and makes a decision whether to accept a potential partnership with one of them.\nThe distinguishing feature of the proposed model is that the agents are not restricted to maintaining a synchronized (instantaneous) decision protocol and can sequentially accept and reject partnerships within the same search stage.\nWe analyze the dynamics which drive the agents' strategies towards a stable equilibrium in the new model and show that the proposed search strategy weakly dominates the one currently in use for the two-sided parallel economic search model.\nBy 
identifying several unique characteristics of the equilibrium we manage to efficiently bound the strategy space that needs to be explored by the agents and propose an efficient means for extracting the distributed equilibrium strategies in common environments.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Intelligent agents General Terms Algorithms, Economics 1.\nINTRODUCTION A two-sided economic search is a distributed mechanism for forming agents' pairwise partnerships [5].1 On every stage of the process, each of the agents is randomly matched with another agent and the two interact bilaterally in order to learn the benefit encapsulated in a partnership between them.\n1 Notice that the concept of search here is very different from the classical definition of search in AI.\nWhile AI search is an active process in which an agent finds a sequence of actions that will bring it from the initial state to a goal state, economic search refers to the identification of the best agent to commit to a partnership with.\nThe interaction does not involve bargaining, thus each agent merely needs to choose between accepting or rejecting the partnership with the other agent.\nA typical market where this kind of two-sided search takes place is the marriage market [22].\nRecent literature suggests various software agent-based applications where a two-sided distributed (i.e., with no centralized matching mechanisms) search takes place.\nAn important class of such applications includes secondary markets for exchanging unexploited resources.\nAn exchange mechanism is used in those cases where selling these resources is not the core business of the organization or when the overhead for selling them makes it non-beneficial.\nFor example, through a two-sided search, agents, representing different service providers, can exchange unused bandwidth [21] and communication satellites can transfer communication with a greater geographical 
coverage. Two-sided agent-based search can also be found in applications of buyers and sellers in eMarkets and in peer-to-peer applications. The two-sided nature of the search implies that a partnership between a pair of agents is formed only if it is mutually accepted. By forming a partnership the agents gain an immediate utility and terminate their search. By resuming the search, on the other hand, a more suitable partner might be found, but some resources will need to be consumed to maintain the search process. In this paper we focus on a specific class of two-sided search matching problems, in which the performance of the partnership applies to both parties, i.e., both gain an equal utility [13]. The equal utility scenario is usually applicable in domains where the partners gain from the synergy between them. For example, consider tennis players seeking partners for playing doubles (or a canoe paddler looking for a partner to practice with). Here the players are rewarded based entirely on the team's (rather than the individual's) performance. Other examples are the scenario where students need to form pairs for working together on an assignment, for which both partners share the same grade, and the scenario where two buyer agents interested in similar or interchangeable products join forces to buy a product together, taking advantage of a quantity discount (i.e.,
each of them enjoys the same reduced price). In all these applications, any two agents can form a partnership, and the performance of any given partnership depends on the skills or the characteristics of its members. Furthermore, the equal utility scenario can also hold whenever there is an option for side-payments and the partnership's overall utility is split equally between the two agents forming it [22]. While the two-sided search literature offers comprehensive equilibrium analysis for various models, it assumes that the agents' search is conducted in a purely sequential manner: each agent locates and interacts with one other agent in its environment at a time [5, 22].

450 978-81-904262-7-5 (RPS) © 2007 IFAAMAS

Nevertheless, when the search is assigned to autonomous software agents, a better search strategy can be used. Here an agent can take advantage of its unique inherent filtering and information processing capabilities, and of its ability to efficiently (in comparison to people) maintain concurrent interactions with several other agents at each stage of its search. Such use of parallel interactions in search is favorable whenever the average cost2 per interaction with another agent, when interacting in parallel with a batch of other agents, is smaller than the cost of maintaining one interaction at a time (i.e., advantage to size). For example, the analysis of the costs associated with evaluating potential partnerships between service providers reveals both fixed and variable components when using the parallel search; thus the average cost per interaction decreases as the number of parallel interactions increases [21]. Despite the advantages identified for parallel interactions in adjacent domains (e.g., in one-sided economic search [7, 16]), a first attempt at modeling a repeated pairwise matching process in which agents are capable of maintaining interactions with several other agents at a time was introduced only recently [21]. However, the agents
in that seminal model are required to synchronize their decision making processes. Thus each agent, upon reviewing the opportunities available in a specific search stage, has to notify all other agents of its decision to either commit to a partnership (with at most one of them) or reject the partnership (with the rest of them). This inherent restriction imposes a significant limitation on the agents' strategic behavior. In our model, the agents are free to notify the other agents of their decisions in an asynchronous manner. The asynchronous approach allows the agents to re-evaluate their strategy based on each new response they receive from the agents they interact with. This leads to a sequential decision making process by which each agent, upon sending a commit message to one of the other agents, delays its decision concerning the commitment to or rejection of all other potential partnerships until receiving a response from that agent (i.e., the agent still maintains parallel interactions in each search stage, except that its decision making process at the end of the stage is sequential rather than instantaneous). The new model is a much more realistic pairwise model and, as we show in the analysis section, is always preferred by any single agent participating in the process. In the absence of other economic two-sided parallel search models, we use the model that relies on an instantaneous (synchronous) decision making process [21] (denoted I-DM throughout the rest of the paper) as a benchmark for evaluating the usefulness of our proposed sequential (asynchronous) decision making strategy (denoted S-DM). The main contributions of this paper are threefold. First, we formally model and analyze a two-sided search process in which the agents have no temporal decision making constraints concerning the rejection of or commitment to the potential partnerships they encounter in parallel (the S-DM model). This model is a general search model which can be applied in
various (not necessarily software agent-based) domains. Second, we prove that the agents' S-DM strategy weakly dominates the I-DM strategy; thus every agent has an incentive to deviate to the S-DM strategy when all other agents are using the I-DM strategy. Finally, by using an innovative recursive representation of the acceptance probabilities of different potential partnerships, we identify unique characteristics of the equilibrium strategies in the new model. These are used to supply an appropriate computational means that facilitates the calculation of the agents' equilibrium strategy. This latter contribution is of special importance, since the transition to the asynchronous mode adds inherent complexity to the model (mainly because each agent now needs to evaluate, in a multi-stage sequential process, the probability of each other agent being rejected or accepted by each of the agents it interacts with). We manage to extract the agents' new equilibrium strategies without increasing the computational complexity in comparison to the I-DM model. Throughout the paper we demonstrate the different properties of the new model and compare it with the I-DM model using an artificial synthetic environment.

[Footnote 2: The term costs refers to the resources the agent needs to consume in order to maintain its search, such as self-advertisement, locating other agents, communicating with them, and processing their offers.]

In the following section we formally present the S-DM model. An equilibrium analysis and computational means for finding the equilibrium strategy are provided in Section 3. In Section 4 we review related MAS and economic search theory literature. We conclude with a discussion and suggest directions for future research in Section 5.

2. MODEL AND ANALYSIS
We consider an environment populated by an infinite number of self-interested, fully rational agents of different types.3 Any agent Ai can form a partnership with any other agent Aj in the
environment, associated with an immediate perceived utility U(Ai, Aj) for both agents. As in many other partnership formation models (see [5, 21]), we assume that the value of U(x, y) (where x and y are any two agents in the environment) is randomly drawn from a continuous population characterized by a probability distribution function (p.d.f.) f(U) and a cumulative distribution function (c.d.f.) F(U), 0 ≤ U < ∞. The agents are assumed to be acquainted with the utility distribution function f; however, they cannot tell a priori what utility can be gained from a partnership with any specific agent in their environment. Therefore, the only way an agent Ai can learn the value of a partnership with another agent Aj, U(Ai, Aj), is by interacting with agent Aj. Since each agent in two-sided search models has no prior information concerning any of the other agents in its environment, it initiates interactions (i.e., search) with other agents randomly. The nature of the two-sided search application suggests that the agents are satisfied with having a single partner; thus, once a partnership is formed, the two agents forming it terminate their search process and leave the environment. The agents are not limited to interacting with a single potential partner agent at a time, but rather can select to interact with several other agents in parallel. We define a search round/stage as the interval in which the agent interacts with several agents in parallel and learns the utility of forming a partnership with each of them. Based on the learned values, the agent needs to decide whether to commit to or reject each of the potential partnerships available to it. Commitment is achieved by sending a commit message to the appropriate agent, and an agent cannot commit to more than one potential partnership simultaneously. Declining a partnership is achieved by sending a reject message. The communication between the agents is assumed to be asynchronous, and each
agent can delay its decision concerning any given potential partnership as necessary.4

[Footnote 3: The assumption of an infinite number of agents is common in two-sided search models (see [5, 22, 21]). In many domains (e.g., eCommerce) it derives from the high entrance and leave rates, which make the probability of running into the same agent in a random match negligible.]

[Footnote 4: Notice that the asynchronous procedure does not eliminate the inherent structure of the search. The search is still based on stages/rounds in which the agent interacts with several other agents, except that now the agent can delay its decision making process (within each search round) as necessary.]

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 451

If two agents Ai and Aj mutually commit to a partnership between them, then the partnership is formed and both agents gain the immediate utility U(Ai, Aj) associated with it. If an agent does not form a partnership in a given search stage, it continues to its next search stage and interacts with more agents in a similar manner. Given the option for asynchronous decision making, each individual agent Ai follows this procedure:

1:  loop
2:    Set N (the number of parallel interactions for the next search round)
3:    Randomly locate a set A = {A1, ..., AN} of agents to interact with
4:    Evaluate the set of utilities {U(Ai, A1), ..., U(Ai, AN)}
5:    Set A* = {Aj | Aj ∈ A and U(Ai, Aj) > U(resume)}
6:    Send a reject message to each agent in the set A \ A*
7:    while (A* ≠ ∅) do
8:      Send a commit message to Aj = argmax_{Al ∈ A*} U(Ai, Al)
9:      Remove Aj from A*
10:     Wait for Aj's decision
11:     if (Aj responded commit) then
12:       Send reject messages to the remaining agents in A*
13:       Terminate search
14:     end if
15:   end while
16: end loop

where U(resume) denotes the expected utility of continuing the search (in the following paragraphs we show that U(resume) is fixed throughout the search and derives from the agent's strategy). In the above procedure, agent Ai first identifies the set A* of agents it is willing to accept out of those reviewed in the current search stage and sends a reject message to the rest. It then sends a commit message to the agent Aj ∈ A* associated with the partnership yielding the highest utility. If a reject message is received from agent Aj, then this agent is removed from A* and a new commit message is sent according to the same criterion. The process continues until either: (a) the set A* becomes empty, in which case the agent initiates another search stage; or (b) a dual commitment is obtained, in which case the agent sends reject messages to the remaining agents in A*. The method differs from the one used in the I-DM model in the way commitment messages are handled: in the I-DM model, after evaluating the set of utilities (step 4), the agent instantaneously sends a commit message to the agent associated with the greatest utility and a reject message to all the other agents it interacted with (replacing steps 5-15 of the above procedure). Our proposed S-DM model is much more intuitive, as it allows an agent to hold and possibly exploit relatively beneficial opportunities even if its first-priority partnership is rejected by the other agent. In the I-DM model, on the
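The decision loop above can be condensed into a brief executable sketch. This is an illustrative Python simulation, not the paper's notation: `responds_commit` is a hypothetical callback standing in for the other agent's asynchronous commit/reject reply, and the utilities are assumed to have been learned up front in step 4.

```python
def sdm_round(utilities, u_resume, responds_commit):
    """One S-DM search round for agent Ai.

    utilities: dict mapping candidate agent id -> U(Ai, Aj) learned this round.
    u_resume: U(resume), the expected utility of continuing the search.
    responds_commit: callable(candidate_id) -> bool, a stand-in for the other
        agent's commit (True) or reject (False) reply.
    Returns the mutually committed partner's id, or None if the round fails.
    """
    # Steps 5-6: keep A* = {Aj : U(Ai, Aj) > U(resume)}, reject the rest.
    a_star = {j: u for j, u in utilities.items() if u > u_resume}
    # Steps 7-15: approach candidates best-first, waiting for each reply
    # before committing to the next-best one (the sequential part of S-DM).
    for j in sorted(a_star, key=a_star.get, reverse=True):
        if responds_commit(j):
            return j  # dual commitment; remaining members of A* get rejects
    return None  # A* exhausted: initiate another search round

# Toy run: 'b' is the best candidate but rejects Ai; S-DM still secures 'c',
# whereas the I-DM rule would have rejected 'c' up front and lost the round.
partner = sdm_round({'a': 0.1, 'b': 0.9, 'c': 0.6}, 0.3, lambda j: j != 'b')
```

Here `partner` comes out as 'c': exactly the kind of relatively beneficial second-priority opportunity that the sequential notification order preserves.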
other hand, since reject messages are sent simultaneously alongside the commit message, a reject message from the agent associated with the best partnership forces a new search round. Notice that the above two-sided search mechanism aligns with most other two-sided search mechanisms in the sense that it is based on random matching (i.e., in each search round the agent encounters a random sample of agents). While the maintenance of the random matching infrastructure is an interesting research question, it is beyond the scope of this paper. Notwithstanding, we do wish to emphasize that, given the large number of agents in the environment and the fact that in MAS the turnover rate is quite substantial due to the open nature of the environment (and the interoperability between environments), the probability of ending up interacting with the same agent more than once when initiating a random interaction is practically negligible.

THEOREM 1. The S-DM agent's decision making process: (a) is optimal (maximizes the utility) for any individual agent in the environment; and (b) guarantees a zero deadlock probability for any given agent in the environment.

Proof: (a) The method is optimal since it cannot be changed in a way that produces a better utility for the agent. Since bargaining is not applicable here (benefits are non-divisible), the agent's strategy is limited to accepting or rejecting offers. The decision to reject a partnership in step 6 is based only on the immediate utility that can be gained from the partnership in comparison to the expected utility of resuming the search (i.e., moving on to the next search stage), and is not affected by the willingness of the other agents to commit to or reject a partnership with Ai. As for partnerships that yield a utility greater than the expected utility of resuming the search (i.e., the partnerships with agents from the set A*), the agent always prefers to delay its decision
concerning partnerships of this type until receiving all notifications concerning the potential partnerships associated with a greater immediate utility. The delay never results in a loss of opportunity, since the other agent's decision concerning this opportunity is not affected by agent Ai's willingness to commit to or reject it (but rather by the other agent's estimate of its expected utility if resuming the search, and by the rejection messages it receives for more beneficial potential partnerships). Finally, the agent cannot benefit from delaying a commit message to the agent associated with the highest utility in A*, and thus will always send it a commit message.
(b) We first prove the following lemma, which states that the probability of having two partnering opportunities associated with an identical utility is zero.

LEMMA 2.1. When f is a continuous distribution function,

  lim_{y→x} ( ∫_x^y f(z) dz )^2 = 0.

Proof: Since f is continuous and the interval between x and y is finite, by the intermediate value theorem (found in most calculus texts) there exists a c between x and y such that ∫_x^y f(z) dz = f(c)(y - x) (intuitively, a rectangle with base from z = x to z = y and height f(c) has the same area as the integral on the left-hand side). Therefore

  ( ∫_x^y f(z) dz )^2 = |f(c)|^2 |y - x|^2.

When y → x, f(c) stays bounded due to the continuity of f; moreover, lim_{y→x} f(c) = f(x). Hence

  lim_{y→x} ( ∫_x^y f(z) dz )^2 = f(x)^2 lim_{y→x} |y - x|^2 = 0.

An immediate corollary of the above lemma is that no tie-breaking procedures are required, and an agent in a waiting state is always waiting for a reply from the single agent associated with the highest utility among the agents in the set A* (i.e., no other agent in the set A* is associated with an equal utility). A deadlock can be formed only if we can create a cyclic sequence of agents in which every agent is waiting for a reply
from the subsequent agent in the sequence. However, in our method any agent Ai will be waiting for a reply from another agent Aj, to which it sent a commit message, only if: (1) any agent Ak ∈ A associated with a utility U(Ai, Ak) > U(Ai, Aj) has already rejected the partnership with agent Ai; and (2) agent Aj itself is waiting for a reply from an agent Al where U(Al, Aj) > U(Aj, Ai). Therefore, if we have a sequence of waiting agents, then the utility associated with partnerships between any two subsequent agents in the sequence must increase along the sequence. If the sequence is cyclic, then we have a pattern of the form U(Ai, Al) > U(Al, Aj) > U(Aj, Ai). Since U(Ai, Al) > U(Aj, Ai), agent Ai can be waiting for agent Aj only if it has already been rejected by Al (see (1) above). However, if agent Al has rejected agent Ai, then it has also rejected agent Aj. Therefore, agent Aj cannot be waiting for agent Al to make a decision. The same logic can be applied to any longer sequence. □

The search activity is assumed to be costly [11, 1, 16], in the sense that any agent needs to consume some of its resources in order to locate other agents to interact with and to maintain the interactions themselves. We assume utilities and costs are additive, and that the agents are trying to maximize their overall utility, defined as the utility from the partnership formed minus the aggregated search costs along the search process. The agent's cost of interacting with N other agents (in parallel) is given by the function c(N). The search cost structure is principally a parameter of the environment and is thus shared by all agents. An agent's strategy S(A′) → {commit to some Aj ∈ A′, reject a subset of A′, N} defines, for any given set of partnership opportunities A′, the subset of opportunities that should be immediately declined, the agent to which to send a commit message
(if no pending notification from another agent is expected), or the number of new interactions to initiate (N). Since the search process is two-sided, our goal is to find an equilibrium set of strategies for the agents.

2.1 Strategy Structure
Recall that each agent declines partnerships based on (a) the partnership's immediate utility in comparison to the agent's expected utility from resuming the search; and (b) the achievement of a mutual commitment (at which point it declines the pending partnerships that were not rejected in (a)). Therefore an agent's strategy can be represented by a pair (Nt, xt), where Nt is the number of agents with whom it chooses to interact in search stage t and xt is its reservation value5 (a threshold) for accepting/rejecting the resulting N potential partnerships. The subset A*, thus, includes all partnership opportunities of search stage t associated with a utility equal to or greater than xt. The reservation value xt is actually the expected utility of resuming the search at time t (i.e., U(resume)). The agent will always prefer committing to an opportunity greater than the expected utility of resuming the search, and will always prefer to resume the search otherwise. Since the agents are not limited by a decision horizon, and their search process does not reveal any new information about the market structure (e.g., about the utility distribution of future partnership opportunities), their strategy is stationary: an agent will not accept an opportunity it has rejected beforehand (i.e., x1 = x2 = ... = x), and it will use the same sample size, N1 = N2 = ...
= N, along its search.

2.2 Calculating Acceptance Probabilities
The transition from an instantaneous decision making process to a sequential one introduces several new difficulties in extracting the agents' strategies. Now, in order to estimate the probability of being accepted by any of the other agents, the agent needs to recursively model, while setting its strategy, the probabilities of rejection that other agents might face from the agents they interact with. In the following paragraphs we introduce several complementary definitions and notations, facilitating the formal introduction of the acceptance probabilities. Consider an agent Ai, using a strategy (N, xN), operating in an environment where all other agents are using a strategy (k, xk).5 The probability that agent Ai will receive a commitment message from an agent Aj it interacted with depends on the utility x associated with the potential partnership between them. This probability, denoted Gk(x), can be calculated as:6

  Gk(x) = ( 1 - ∫_x^∞ f(y) Gk(y) dy )^(k-1),   if x ≥ xk
  Gk(x) = 0,                                   otherwise.       (1)

[Footnote 5: Notice that the reservation value used here is different from the reservation price concept (usually used as a buyer's private evaluation). The use of reservation-value based strategies is common in economic search models [21, 17].]

The case where x < xk above is trivial: none of the other agents will accept agent Ai if the utility of such a partnership is smaller than their reservation value xk. However, even when the partnership's utility is greater than or equal to xk, commitment is not guaranteed. In the latter scenario, a commitment message from agent Aj will be received only if agent Aj has been rejected by all other agents in its set A* that were associated with a utility greater than the utility of a partnership with agent Ai. The unique solution to the recursive Equation 1 is:

  Gk(x) = ( 1 + (k-2) ∫_x^∞ f(y) dy )^((1-k)/(k-2)),   k > 2, x ≥ xk
  Gk(x) = exp( -∫_x^∞ f(y) dy ),                       k = 2, x ≥ xk
  Gk(x) = 1,                                           k = 1, x ≥ xk
  Gk(x) = 0,                                           x < xk.       (2)

Notice that, as expected, a partnership opportunity that yields the maximum mutual utility is necessarily accepted by both agents, i.e., lim_{x→∞} Gk(x) = 1. On the other hand, when the utility associated with a potential partnership opportunity is zero (x = 0), the acceptance probability is non-negligible:

  lim_{x→0} Gk(x) = (k - 1)^((1-k)/(k-2))       (3)

This non-intuitive result derives from the fact that there is still a non-negligible probability that the other agent is rejected by all the other agents it interacts with.

2.3 Setting the Agents' Strategies
Using the function Gk(x), we can now formulate and explore the agents' expected utility when using their search strategies. Consider again an agent Ai that is using a sample of size N while all other agents are using a strategy (k, xk). We denote by RN(x) the probability that the maximum utility agent Ai can be guaranteed when interacting with N agents (i.e., the highest utility for which a commit message will be received) is at most x. This can be calculated as the probability that none of the N agents send agent Ai a commit message for a partnership associated with a utility greater than x:

  RN(x) = ( 1 - ∫_{max(x,xk)}^∞ f(y) Gk(y) dy )^N       (4)

Notice that RN(x) is in fact a cumulative distribution function, satisfying lim_{x→∞} RN(x) = 1 and dRN(x)/dx > 0 (the function never takes a zero value, simply because there is always a positive probability that none of the agents commit at all to a partnership with agent Ai). Therefore the derivative of RN(x), denoted rN(x), is the probability distribution function of the maximum utility that can be guaranteed for agent Ai when sampling N other agents:

  rN(x) = dRN(x)/dx = N f(x) Gk(x)^((N+k-2)/(k-1)),   x ≥ xk
  rN(x) = 0,                                          x < xk       (5)

6 The use of the recursive Equation 1 is enabled since we assume that the number of agents is
infinite (thus the probability of having an overlap between the interacting agents, and the effect of such an overlap on the probabilities we calculate, becomes insignificant).

The function rN(x) is essential for calculating VN(xN), the expected utility of agent Ai when using a strategy (N, xN), given the strategy (k, xk) used by the other agents:

  VN(xN) = ∫_{max(xN,xk)}^∞ y rN(y) dy + ( 1 - ∫_{max(xN,xk)}^∞ rN(y) dy ) VN(xN) - c(N)       (6)

The right-hand side of the above equation represents the expected utility of agent Ai from taking an additional search stage. The first term represents the expected utility from mutual commitment scenarios, whereas the second term is the expected utility associated with resuming the search (which equals VN(xN), since nothing has changed for the agent). Using simple mathematical manipulations and substituting rN(x), Equation 6 transforms into:

  VN(x) = ( ∫_{max(x,xk)}^∞ y N f(y) Gk(y)^((N+k-2)/(k-1)) dy - c(N) ) / ( ∫_{max(x,xk)}^∞ N f(y) Gk(y)^((N+k-2)/(k-1)) dy )       (7)

and is further simplified into:

  VN(x) = max(x, xk) + ( ∫_{max(x,xk)}^∞ (1 - Gk(y)^(N/(k-1))) dy - c(N) ) / ( 1 - Gk(max(x, xk))^(N/(k-1)) )       (8)

Equation 8 allows us to prove some important characteristics of the model, as summarized in the following Theorem 2.

THEOREM 2. When the other agents use strategy (k, xk):
(a) An agent's expected utility function VN(x), when using a strategy (N, x), is quasi-concave in x with a unique maximum, obtained for the value xN satisfying:

  VN(xN) = xN       (9)

(b) The value xN satisfies:

  c(N) = ( max(xN, xk) - xN )( 1 - Gk(xk)^(N/(k-1)) ) + ∫_{max(xN,xk)}^∞ (1 - Gk(y)^(N/(k-1))) dy       (10)

The proof is obtained by differentiating VN(x) in Equation 8 with respect to x and setting the derivative to zero. After applying further mathematical manipulations we obtain (9) and (10). Both parts of Theorem 2 can be used
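The closed form in Equation 2, on which Equations 4-10 build, can be checked numerically against the recursion in Equation 1. The sketch below is an illustrative verification only, under assumptions not fixed by the paper: utilities uniform on [0, 1] (so f(y) = 1 and the integrals run up to 1), k > 2, and x ≥ xk so that the cutoff branch never triggers.

```python
def G_closed(x, k):
    # Closed-form acceptance probability (Equation 2, branch k > 2), with f
    # uniform on [0, 1]: the tail integral of f from x to 1 is simply 1 - x.
    return (1.0 + (k - 2) * (1.0 - x)) ** ((1.0 - k) / (k - 2))

def G_recursive(x, k, n=20000):
    # Right-hand side of Equation 1, (1 - int_x^1 f(y) G_k(y) dy)^(k-1),
    # with the integral evaluated by the trapezoid rule on n panels.
    h = (1.0 - x) / n
    total = sum(G_closed(x + i * h, k) for i in range(n + 1))
    integral = h * (total - 0.5 * (G_closed(x, k) + G_closed(1.0, k)))
    return (1.0 - integral) ** (k - 1)

# The closed form reproduces itself through the recursion (Equation 1) ...
for x in (0.2, 0.5, 0.8):
    assert abs(G_closed(x, 5) - G_recursive(x, 5)) < 1e-6

# ... and matches the x -> 0 limit of Equation 3: (k - 1)^((1-k)/(k-2)).
assert abs(G_closed(0.0, 5) - 4 ** (-4.0 / 3)) < 1e-12
```

The same check can be repeated for any k > 2; the k = 2 branch would be verified analogously against exp(-(1 - x)).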
as an efficient means for extracting the optimal reservation value xN of an agent, given the strategies of the other agents in the environment and the number of parallel interactions it uses. Furthermore, in the case of complex distribution functions, where extracting xN from Equation 10 is not immediate, a simple algorithm (principally based on binary search) can be constructed for calculating the agent's optimal reservation value (which equals its expected utility, according to (9)), with a complexity O(log(x̂/ρ)), where ρ is the required precision level for xN and x̂ is the solution to ∫_{x̂}^∞ y N f(y) F(y)^(N-1) dy = c(N). Having the ability to calculate xN, we can now prove the following Proposition 2.1.

PROPOSITION 2.1. An agent operating in an environment where all agents are using a strategy according to the instantaneous parallel search equilibrium (i.e., according to the I-DM model [21]) can only benefit from deviating to the proposed S-DM strategy.

Sketch of proof: For the I-DM model the following holds [21]:

  c(N) = ( N/(2N - 1) ) ∫_{x_N^{I-DM}}^∞ (1 - F(y)^(2N-1)) dy       (11)

We apply the methodology used above in this subsection to construct the expected utility of an agent using the S-DM strategy as a function of its reservation value, assuming all other agents are using the I-DM search strategy. This results in an optimal reservation value for the agent using S-DM satisfying:

  c(N) = ∫_{x_N^{S-DM}}^∞ ( 1 - (1 - 1/N + F(y)^N/N)^N ) dy       (12)

Finally, we prove that the integrand in Equation 11 is smaller than the integrand in Equation 12. Given the fact that both terms equal c(N), we obtain x_N^{S-DM} > x_N^{I-DM}, and consequently (according to Theorem 2) a similar relationship in terms of expected utilities.

Figure 1 illustrates the superiority of the proposed search strategy S-DM, as well as the characteristics of the expected utility function (as reflected in Theorem 2). For
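The binary-search procedure mentioned above can be sketched as follows. This is an illustrative sketch, not the paper's implementation, under assumed simplifications: utilities uniform on [0, 1] (f(y) = 1, F(y) = y), k > 2, and the integral in Equation 10 approximated by a midpoint rule. The right-hand side of Equation 10 decreases in the reservation value, so bisection to precision ρ takes O(log(1/ρ)) iterations.

```python
def reservation_value(N, k, xk, cost, rho=1e-6):
    # Solve Equation 10 for the optimal reservation value x_N by bisection,
    # assuming utilities uniform on [0, 1] and all other agents using (k, xk).
    def G(y):
        # Closed-form G_k (Equation 2, k > 2): int_y^1 f(z) dz = 1 - y.
        return (1 + (k - 2) * (1 - y)) ** ((1 - k) / (k - 2)) if y >= xk else 0.0

    def rhs(x):
        # Right-hand side of Equation 10; decreasing in x, so bisection applies.
        lo, n = max(x, xk), 2000
        h = (1 - lo) / n
        integral = sum((1 - G(lo + (i + 0.5) * h) ** (N / (k - 1))) * h
                       for i in range(n))
        return (max(x, xk) - x) * (1 - G(xk) ** (N / (k - 1))) + integral

    a, b = 0.0, 1.0
    while b - a > rho:                     # O(log(1 / rho)) iterations
        m = (a + b) / 2.0
        a, b = (m, b) if rhs(m) > cost else (a, m)
    return (a + b) / 2.0

# Parameters of the synthetic environment used for Figure 1:
# c(N) = 0.05 + 0.005 N, the agent uses N = 3, the others use k = 25, xk = 0.2.
x_N = reservation_value(3, 25, 0.2, 0.05 + 0.005 * 3)
```

With these parameters the returned reservation value, which by Theorem 2 equals the agent's expected utility, lands above xk = 0.2, consistent with the curve shapes described for Figure 1; raising the search cost pushes it down.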
comparative reasons we use the same synthetic environment that was used for the I-DM model [21]. Here the utilities are assumed to be drawn from a uniform distribution function, and the cost function is taken to be c(N) = 0.05 + 0.005N. The agent uses N = 3, while the other agents use k = 25 and xk = 0.2. The different curves depict the expected utility of the agent as a function of the reservation value x that it uses, when: (a) all agents are using the I-DM strategy (marked I-DM); (b) the agent is using the S-DM strategy while the other agents are using the I-DM strategy (marked I-DM/S-DM); and (c) all agents are using the S-DM strategy (marked S-DM). As expected, according to Equation 8 and Theorem 2, the agent's expected utility remains constant until its reservation value exceeds xk. It then reaches a global maximum where the reservation value satisfies VN(x) = x. From the graph we can see that the agent always has an incentive to deviate from the I-DM strategy to the S-DM strategy (as proven in Proposition 2.1).

[Figure 1: The expected utility VN(x) as a function of the reservation value x used by the agent, for the S-DM, I-DM, and I-DM/S-DM settings.]

3. EQUILIBRIUM DYNAMICS
Since all agents are subject to similar search costs, and their perceived utilities are drawn from the same distribution function, they all share the same strategy in equilibrium. A multi-equilibria scenario may occur; however, as we discuss in the following paragraphs, since all agents share the same preferences/priorities (unlike, for example, in the famous battle-of-the-sexes scenario), we can always identify which equilibrium strategy will be used. Notice that if all agents are using the same sample size N, then the value xN resulting from solving Equation 10 by substituting k = N and xk = xN is a stable reservation value (i.e., none of the agents can benefit from changing just the
value of xN). An equilibrium strategy (N, xN) can be found by identifying an N value for which no single agent has an incentive to use a different number of parallel interactions k (and the new optimal reservation value associated with k according to Equation 10). While this implies an infinite solution space, we can always bound it using Equations 8 and 10. Within the framework of this paper, we demonstrate such a bounding methodology for the common case where c(N) is linear7 or convex, using the following Theorem 3.

THEOREM 3. When c(N) is linear (or convex), then:
(a) When all other agents sample k potential partners over a search round, if an agent's expected utility of sampling k + 1 potential partners, Vk+1(xk+1), is smaller than Vk(xk), then the expected utility of sampling N potential partners, VN(xN), where N > k + 1, is also smaller than Vk(xk).
(b) Similarly, when all other agents sample k potential partners over a search round, if an agent's expected utility of sampling k - 1 potential partners, Vk-1(xk-1), is smaller than the expected utility of sampling k potential partners, Vk(xk), then the expected utility of sampling N potential partners, where N < k - 1, is also smaller than Vk(xk).

Proof: Let us use the notation ci for c(i). Since Vk(xk) = xk for every k (according to Equation 9), the claims are: (a) if xk+1 < xk then xN < xk for all N ≥ k + 1; and (b) if xk-1 < xk then xN < xk for all N ≤ k - 1.
(a) We start by proving that if xk+1 < xk then xk+2 < xk. Assume otherwise, i.e., xk+1 < xk and xk+2 > xk. Then, according to Equation 10, the following holds:

  0 < ck+2 - 2ck+1 + ck < ∫_{xk+2}^∞ (1 - Gk(y)^((k+2)/(k-1))) dy - 2 ∫_{xk}^∞ (1 - Gk(y)^((k+1)/(k-1))) dy + ∫_{xk}^∞ (1 - Gk(y)^(k/(k-1))) dy

where the transition to the inequality is valid since c(i) is
convex. Since the assumption in this proof is that x_{k+2} > x_k, the above can be transformed into:

\int_{x_k}^{\infty}\Big(2G_k(y)^{(k+1)/(k-1)} - G_k(y)^{(k+2)/(k-1)} - G_k(y)^{k/(k-1)}\Big)\,dy > 0    (13)

Now notice that the integrand is actually -G_k(y)^{k/(k-1)}\big(1 - G_k(y)^{1/(k-1)}\big)^2, which is obviously negative, contradicting the initial assumption; thus if x_{k+1} < x_k then necessarily x_{k+2} < x_k. We still need to prove the same for any x_{k+j}. We do so in two steps: first, if x_{k+i} < x_k then x_{k+2i} < x_k; second, if x_{k+i} < x_k and x_{k+i+1} < x_k, then x_{k+2i+1} < x_k. Together these constitute the necessary induction arguments to prove case (a). We start with the even case, using a similar methodology. Assume otherwise, i.e., x_{k+l} < x_k for l = 1, ..., 2i-1 and x_{k+2i} > x_k. According to Equation 10, and the fact that c(i) is convex, the following holds:

\int_{x_k}^{\infty}\Big(2G_k(y)^{(k+i)/(k-1)} - G_k(y)^{(k+2i)/(k-1)} - G_k(y)^{k/(k-1)}\Big)\,dy > 0    (14)

Again the integrand is actually -G_k(y)^{k/(k-1)}\big(1 - G_k(y)^{i/(k-1)}\big)^2, which is obviously negative, contradicting the initial assumption; thus x_{k+2i} < x_k. As for the odd case, we use Equation 10 once for k+i+1 parallel interactions and once for k+2i+1. From the convexity of c_i we obtain c_{k+2i+1} - c_{k+i} - c_{k+i+1} + c_k > 0, and thus:

\int_{x_k}^{\infty}\Big(G_k(y)^{(k+i)/(k-1)} + G_k(y)^{(k+i+1)/(k-1)} - G_k(y)^{(k+2i+1)/(k-1)} - G_k(y)^{k/(k-1)}\Big)\,dy > 0    (15)

7 A linear cost function is most common in agent-based two-sided search applications, since often the cost function can be divided into fixed costs (e.g.
operating the agent per time unit) and variable costs (i.e., the cost of processing a single interaction's data).

This time the integrand in Equation 15 can be rewritten as G_k(y)^{k/(k-1)}\big(1 - G_k(y)^{i/(k-1)}\big)\big(G_k(y)^{(i+1)/(k-1)} - 1\big), which is obviously negative, contradicting the initial assumption; thus x_{k+2i+1} < x_k. Using induction, one can now prove that if x_{k+1} < x_k then x_{k+i} < x_k for every i. This concludes part (a) of the proof. The proof of part (b) of the theorem is obtained in a similar manner; in this case c_k - 2c_{k-i} + c_{k-2i} > 0 and c_k - c_{k-i-1} - c_{k-i} + c_{k-2i-1} > 0.

The above theorem supplies us with a powerful tool for eliminating non-equilibrium N values. It suggests that we can check the stability of a sample size N and the appropriate reservation value x_N simply by calculating the optimal reservation values of a single agent when deviating to samples of sizes N-1 and N+1 (keeping the other agents at strategy (N, x_N)). If both reservation values associated with these two sample sizes are smaller than x_N, then according to Theorem 3 the same holds when deviating to any other sample size k. The process can be further simplified by using V_{N+1}(x_N) > x_N and V_{N-1}(x_N) > x_N as the two elimination rules; this derives from Theorem 3 and the properties of the function V_N(x) established in Theorem 2. Notice that a multi-equilibria scenario may occur; however, it can easily be resolved: if several strategies satisfy the stability condition defined above, the agents will always prefer the one associated with the highest expected utility. Therefore an algorithm that goes over the different N values and checks them according to the rules above can be applied, provided that we can bound the interval in which to search for the equilibrium N. The following Theorem 4 supplies such an upper bound.

THEOREM 4. An upper bound for the equilibrium number of
partners to be considered over a search round is the solution of the equation

A(N) = c(N)    (16)

provided A(N-1) > c(N-1), where we denote A(N) := \int_{0}^{\infty} y\,N f(y)\,G_k(y)^{(N+k-2)/(k-1)}\,dy.

Proof: We denote A(N, x) = \int_{x}^{\infty} y\,N f(y)\,G_k(y)^{(N+k-2)/(k-1)}\,dy, so that A(N) = A(N, 0). From Equation 7:

V_N(x) = \frac{A(N, x) - c(N)}{N \int_{x}^{\infty} f(y)\,G_k(y)^{b}\,dy} = \frac{A(N, x) - c(N)}{\text{positive}}

Clearly A(N) ≥ A(N, x) for every x, since the integrand is positive. Hence if A(N) - c(N) < 0, then A(N, x) - c(N) < 0 for every x, and so V_N(x) < 0 for every x. Next we prove that once A(N) - c(N) becomes negative, it stays negative. Recalling that for any g(y), \frac{d}{dN}\big(g(y)^{b(N)}\big) = g(y)^{b(N)} \log(g(y)) \frac{db}{dN}, we get:

A''(N) = -\frac{1}{(k-1)^2} \int_{0}^{\infty} G_k(y)^{N/(k-1)} \big(\log G_k(y)\big)^2\,dy

which is always negative, since the integrand is nonnegative. Therefore A(N) is concave. Since c(N) is convex, -c(N) is concave, and a sum of concave functions is concave, so A(N) - c(N) is concave. This guarantees that once the concave expression A(N) - c(N) shifts from a positive value to a negative one (with the increase in N), it cannot become positive again. Therefore an N* such that A(N*) = c(N*), with A(N**) > c(N**) for some N** < N*, is an upper bound for N, i.e., V_N(x) < 0 for every N ≥ N*. The condition we specify for N** merely ensures that V_N switches from a positive value to a negative one (and not vice versa), and is trivial to implement.

Given the existence of the upper bound, we can design an algorithm for finding the equilibrium strategy (if one exists). The algorithm extracts the upper bound, N̂, for the equilibrium number of parallel interactions according to Theorem 4. Out of the set of values satisfying the stability condition defined
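Theorem 4 reduces the bound computation to a one-dimensional scan: increase N until A(N) first drops to c(N), with A(N-1) > c(N-1). The sketch below is an assumption-laden illustration rather than the paper's implementation: f is taken as a uniform density on [0, 1], G is a stand-in for the recursive G_k of Section 2 (here simply G(y) = y), c is a hypothetical linear cost function, and A(N) is evaluated by midpoint-rule quadrature.

```python
def upper_bound_N(f, G, c, k, grid=2000, n_max=1000):
    """Return the first N with A(N-1) > c(N-1) and A(N) <= c(N) (Theorem 4),
    where A(N) = integral over [0, 1] of y * N * f(y) * G(y)**((N+k-2)/(k-1))."""
    def A(N):
        e = (N + k - 2) / (k - 1)          # exponent from the theorem
        h = 1.0 / grid                      # midpoint rule on [0, 1]
        return sum(h * y * N * f(y) * G(y) ** e
                   for y in ((i + 0.5) * h for i in range(grid)))
    prev = A(2)
    for N in range(3, n_max + 1):
        cur = A(N)
        if prev > c(N - 1) and cur <= c(N):
            return N
        prev = cur
    return None

# Illustrative environment (all choices hypothetical):
f = lambda y: 1.0                 # uniform utility density on [0, 1]
G = lambda y: y                   # stand-in for the paper's recursive G_k
c = lambda N: 0.1 + 0.1 * N       # linear search-cost function
n_hat = upper_bound_N(f, G, c, k=5)
```

Because A(N) - c(N) is concave (as shown in the proof), the first sign change found by this scan is the only one, so a linear scan is sufficient.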
above, the algorithm chooses the one associated with the highest reservation value according to Equation 10. This is the equilibrium associated with the highest expected utility to all agents, according to Theorem 2.

[Figure 2: The incentive to deviate from strategy (N, x_N); the curves V_{N+1}(x_N), V_N(x_N) and V_{N-1}(x_N) are plotted as a function of the number of parallel interactions N.]

The process is illustrated in Figure 2 for an artificial environment where partnerships' utilities are drawn from a uniform distribution. The cost function used is c(N) = 0.2 + 0.02N. The graph depicts a single agent's expected utility when all other agents are using N parallel interactions (on the horizontal axis) and the appropriate reservation value x_N (calculated according to Equation 10). The different curves depict the expected utility of the agent when it uses the strategy: (a) (N, x_N), like the other agents (marked V_N(x_N)); (b) (N+1, x_N) (marked V_{N+1}(x_N)); and (c) (N-1, x_N) (marked V_{N-1}(x_N)). According to the discussion following Theorem 3, a stable equilibrium satisfies V_N(x_N) > max{V_{N+1}(x_N), V_{N-1}(x_N)}. The strategy satisfying the latter condition in our example is (9, 0.437).

4. RELATED WORK

The two-sided economic search for partnerships in the AI literature is a sub-domain of coalition formation8. While coalition formation models usually consider general coalition sizes [24], the partnership formation model (often referred to as matchmaking) considers environments where agents have a benefit only when forming a partnership, and this benefit cannot be improved by extending the partnership to more than two agents [12, 23] (e.g., in the case of buyers and sellers or peer-to-peer applications). As in the general

8 The use of the term partnership in this context refers to the agreement between two individual agents to cooperate in a pre-defined manner. For example, in the
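The full procedure sketched in Section 3 (scan candidate N values up to the Theorem 4 bound, keep those passing the two elimination rules, and prefer the stable strategy with the highest reservation value) can be outlined in code. Everything below is a hypothetical stand-in: in the paper, V(M, k, x) would be the deviating agent's expected utility from Equation 7 and solve_x(N) the reservation value from Equation 10; the toy unimodal forms exist only to make the sketch runnable.

```python
def find_equilibrium(V, solve_x, n_hat):
    """Scan N = 2..n_hat; keep (N, x_N) for which deviating to N+1 or N-1
    does not pay (V_{N+1}(x_N) <= x_N and V_{N-1}(x_N) <= x_N, the
    Theorem 3 elimination rules); return the stable strategy with the
    highest reservation value."""
    stable = []
    for N in range(2, n_hat + 1):
        xN = solve_x(N)
        if V(N + 1, N, xN) <= xN and V(N - 1, N, xN) <= xN:
            stable.append((N, xN))
    return max(stable, key=lambda s: s[1]) if stable else None

# Toy stand-ins (unimodal in the sample size, peaking at N = 4):
payoff = lambda M: 1.0 - (M - 4) ** 2 / 20.0
V = lambda M, k, x: payoff(M)      # hypothetical deviation utility
solve_x = payoff                   # hypothetical reservation-value solver
eq = find_equilibrium(V, solve_x, n_hat=10)
```

With these toy forms, only N = 4 survives both elimination rules, mirroring how the checks isolate the single stable strategy in the Figure 2 environment.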
buyer-seller application a partnership is defined as an agreed transaction between the two parties [9].

coalition formation case, agents have the incentive to form partnerships when they are incapable of executing a task on their own or when the partnership can improve their individual utilities [14]. Various centralized matching mechanisms can be found in the literature [6, 2, 8]. However, in many MAS environments, in the absence of any reliable central matching mechanism, the matching process is completely distributed. While search in agent-based environments is well recognized to be costly [11, 21, 1], most of the proposed coalition formation mechanisms assume that an agent can scan as many partnership opportunities in its environment as needed, or has access to central matchers or middle agents [6]. The incorporation of costly search in this context is quite rare [21] and, to the best of our knowledge, a distributed two-sided search-for-partners model similar to the S-DM model has not been studied to date. Classical economic search theory ([15, 17], and references therein) widely addresses the problem of a searcher operating in a costly environment, seeking to maximize his long-term utility. In these models, classified as one-sided search, the focus is on establishing the optimal strategies for the searcher, assuming no mutual search activities (i.e., no influence on the environment). Here the sequential search procedure is often applied, allowing the searcher to investigate a single [15] or multiple [7, 19] opportunities at a time. While the latter method is proven to be beneficial for the searcher, it was never used in the two-sided search models that followed (where dual search activities are modeled) [22, 5, 18]. Therefore, in these models, the equilibrium strategies are always developed based on the assumption that the agents interact with others sequentially (i.e., with one agent at a time). A first attempt to integrate the parallel search
into a two-sided search model is given in [21], as detailed in the introduction section. Several of the essences of two-sided search can be found in the strategic theory of bargaining [3]: both coalition formation and matching can be represented as a sequential bargaining game [4] in which payoffs are defined as a function of the coalition structure and can be divided according to a fixed or negotiated division rule. Nevertheless, in the sequential bargaining literature, most emphasis is put on specifying the details of the sequential negotiation process over the division of the utility (or cost) jointly owned by the parties, or the strategy the coalition needs to adopt [20, 4]. The models presented in this area do not associate the coalition formation process with search costs, which is the essence of the analysis that economic search theory aims to supply. Furthermore, even in repeated pairwise bargaining models [10], the agents are always limited to initiating a single bargaining interaction at a time.

5. DISCUSSION AND CONCLUSIONS

The phenomenal growth evidenced in recent years in the number of software agent-based applications, alongside the continuous improvement in agents' processing and communication capabilities, suggests various incentives for agents to improve their search performance by applying advanced search strategies such as parallel search. The multiple-interactions technique is known to be beneficial for agents in both one-sided and two-sided economic search [7, 16, 21], since it allows the agents to decrease their average cost of learning about potential partnerships and their values. In this paper we propose a new parallel two-sided search mechanism that differs from the existing one in that it allows the agents to delay, as necessary, their decisions concerning the acceptance and rejection of potential partnerships. This is in contrast to the existing instantaneous model [21], which forces each agent to make a simultaneous
decision concerning each of the potential partnerships revealed to it during the current search stage.

As discussed throughout the paper, the new method is much more intuitive to the agent than the existing model: an agent will always prefer to keep all options available. Furthermore, as we prove in the preceding sections, an agent's transition to the new search method always results in a better utility. As we prove in Section 2, in spite of the transition to sequential decision making, deadlocks never occur in the proposed method as long as all agents use the proposed strategies. Since our analysis is equilibrium-based, a deviation from the proposed strategies is not beneficial. Similarly, we show that a deviation of a single agent (back) to the instantaneous decision making strategy is not beneficial. The only problem that may arise in the transition from instantaneous to sequential decision making is when an agent fails (technically) to function, endlessly delaying the notification to the agents it interacted with. While equilibrium analyses normally do not consider malfunction as a legitimate strategy, we do wish to emphasize that the malfunctioning-agent problem can be resolved by using a simple timeout for receiving responses and skipping such an agent in the sequential decision process if the timeout is exceeded. Our analysis covers all aspects of the new two-sided search technique, from individual strategy construction through the dynamics that lead to stability (equilibrium). The difficulty in extracting the agents' equilibrium strategies in the new model derives from the need, when setting an agent's strategy, to recursively model the rejections other agents might face from the agents they interact with. This complexity (which does not exist in former models) is resolved by the introduction of the recursive function G_k(x) in Section
2. Using the different theorems and propositions we prove, we proffer efficient tools for calculating the agents' equilibrium strategies. Our ability to produce an upper bound for the number of parallel interactions used in equilibrium (Theorem 4) and to quickly identify (and eliminate) non-equilibrium strategies (Theorem 3) resolves the problem of the computational complexity associated with having to deal with a theoretically infinite strategy space. While the analysis we present is given in the context of software agents, the model we suggest is general, and can be applied to any two-sided economic search environment where the searchers can search in parallel. In particular, in addition to weakly dominating the instantaneous decision making model (as we prove in the analysis section), the proposed method weakly dominates the purely sequential two-sided search model (where each agent interacts with only one other agent at a time) [5]. This derives from the fact that the proposed method is a generalization of the latter (i.e., in the worst case, the agent interacts with one other agent at a time). Naturally, the attempt to integrate search theory techniques into day-to-day applications brings up the applicability question. Justification and legitimacy considerations for this integration were discussed in the wide literature we refer to throughout the paper. The current paper is not focused on re-arguing applicability, but rather on improving the core two-sided search model. We see great importance in future research that will combine bargaining as part of the interaction process. We believe such research can result in many rich variants of our two-sided search model.

6. REFERENCES
[1] Y. Bakos. Reducing buyer search costs: Implications for electronic marketplaces. Management Science, 42(12):1676-1692, June 1997.
[2] G. Becker. A theory of marriage. Journal of Political Economy, 81:813-846, 1973.
[3] K.
Binmore, M. Osborne, and A. Rubinstein. Non-cooperative models of bargaining. In Handbook of Game Theory with Economic Applications, pages 180-220. Elsevier, New York, 1992.
[4] F. Bloch. Sequential formation of coalitions in games with externalities and fixed payoff division. Games and Economic Behavior, 14(1):90-123, 1996.
[5] K. Burdett and R. Wright. Two-sided search with nontransferable utility. Review of Economic Dynamics, 1:220-245, 1998.
[6] K. Decker, K. Sycara, and M. Williamson. Middle-agents for the internet. In Proc. of IJCAI, pages 578-583, 1997.
[7] S. Gal, M. Landsberger, and B. Levykson. A compound strategy for search in the labor market. Int. Economic Review, 22(3):597-608, 1981.
[8] D. Gale and L. Shapley. College admissions and the stability of marriage. American Math. Monthly, 69:9-15, 1962.
[9] M. Hadad and S. Kraus. Sharedplans in electronic commerce. In M. Klusch, editor, Intelligent Information Agents, pages 204-231. Springer Publisher, 1999.
[10] M. Jackson and T. Palfrey. Efficiency and voluntary implementation in markets with repeated pairwise bargaining. Econometrica, 66(6):1353-1388, 1998.
[11] J. Kephart and A. Greenwald. Shopbot economics. JAAMAS, 5(3):255-287, 2002.
[12] M. Klusch. Agent-mediated trading: Intelligent agents and e-business. J. on Data and Knowledge Engineering, 36(3), 2001.
[13] S. Kraus, O. Shehory, and G. Taase. Coalition formation with uncertain heterogeneous information. In Proc. of AAMAS '03, pages 1-8, 2003.
[14] K. Lermann and O. Shehory. Coalition formation for large scale electronic markets. In Proc. of ICMAS 2000, pages 216-222, Boston, 2000.
[15] S. A. Lippman and J. J. McCall. The economics of job search: A survey. Economic Inquiry, 14:155-189, 1976.
[16] E. Manisterski, D. Sarne, and S. Kraus. Integrating parallel interactions into cooperative search. In AAMAS, pages 257-264, 2006.
[17] J. McMillan and M. Rothschild. Search. In R. Aumann and S.
Hart, editors, Handbook of Game Theory with Economic Applications, pages 905-927, 1994.
[18] J. M. McNamara and E. J. Collins. The job search problem as an employer-candidate game. Journal of Applied Probability, 27(4):815-827, 1990.
[19] P. Morgan. Search and optimal sample size. Review of Economic Studies, 50(4):659-675, 1983.
[20] A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97-109, 1982.
[21] D. Sarne and S. Kraus. Agents strategies for the dual parallel search in partnership formation applications. In Proc. of AMEC 2004, LNCS 3435, pages 158-172, 2004.
[22] R. Shimer and L. Smith. Assortative matching and search. Econometrica, 68(2):343-370, 2000.
[23] K. Sycara, S. Widoff, M. Klusch, and J. Lu. Larks: Dynamic matchmaking among heterogeneous software agents in cyberspace. JAAMAS, 5:173-203, 2002.
[24] N. Tsvetovat, K. Sycara, Y. Chen, and J. Ying. Customer coalitions in electronic markets. In Proc. of AMEC 2000, pages 121-138, 2000.

Sequential Decision Making in Parallel Two-Sided Economic Search

ABSTRACT

This paper presents a two-sided economic search model in which agents are searching for beneficial pairwise partnerships. In each search stage, each of the agents is randomly matched with several other agents in parallel, and makes a decision whether to accept a potential partnership with one of them. The distinguishing feature of the proposed model is that the agents are not restricted to maintaining a synchronized (instantaneous) decision protocol and can sequentially accept and reject partnerships within the same search stage. We analyze the dynamics which drive the agents' strategies towards a stable equilibrium in the new model and show that the proposed search strategy weakly dominates the one currently in use for the two-sided parallel economic search model. By identifying several unique
characteristics of the equilibrium, we manage to efficiently bound the strategy space that needs to be explored by the agents and propose an efficient means for extracting the distributed equilibrium strategies in common environments.

1. INTRODUCTION

A two-sided economic search is a distributed mechanism for forming agents' pairwise partnerships [5].1 At every stage of the process, each of the agents is randomly matched with another agent, and the two interact bilaterally in order to learn the benefit encapsulated in a partnership between them. The interaction does not involve bargaining; thus each agent merely needs to choose between accepting and rejecting the partnership with the other agent. A typical market where this kind of two-sided search takes place is the marriage market [22]. Recent literature suggests various software agent-based applications where a two-sided distributed (i.e., with no centralized matching mechanisms) search takes place. An important class of such applications includes secondary markets for exchanging unexploited resources. An exchange mechanism is used in those cases where selling these resources is not the core business of the organization or when the overhead of selling them makes it non-beneficial. For example, through a two-sided search, agents representing different service providers can exchange unused bandwidth [21], and communication satellites can transfer communication with a greater geographical coverage. Two-sided agent-based search can also be found in applications of buyers and sellers in eMarkets and in peer-to-peer applications.

1 Notice that the concept of "search" here is very different from the classical definition of "search" in AI. While AI search is an active process in which an agent finds a sequence of actions that will bring it from the initial state to a goal state, economic search refers to the identification of the best agent to commit to a partnership with.

The two-sided nature of the search
suggests that a partnership between a pair of agents is formed only if it is mutually accepted. By forming a partnership the agents gain an immediate utility and terminate their search. When resuming the search, on the other hand, a more suitable partner might be found, but some resources will need to be consumed to maintain the search process. In this paper we focus on a specific class of two-sided search matching problems, in which the performance of the partnership applies to both parties, i.e., both gain an equal utility [13]. The equal utility scenario is usually applicable in domains where the partners gain from the synergy between them. For example, consider tennis players seeking partners for playing doubles (or a canoe paddler looking for a partner to practice with); here the players are rewarded based entirely on the team's (rather than the individual's) performance. Other examples are the scenario where students need to form pairs for working together on an assignment for which both partners share the same grade, and the scenario where two buyer agents interested in similar or interchangeable products join forces to buy a product together, taking advantage of a discount for quantity (i.e.
each of them enjoys the same reduced price). In all these applications, any two agents can form a partnership, and the performance of any given partnership depends on the skills or characteristics of its members. Furthermore, the equal utility scenario can also hold whenever there is an option for side-payments and the partnership's overall utility is equally split between the two agents forming it [22]. While the two-sided search literature offers comprehensive equilibrium analysis for various models, it assumes that the agents' search is conducted in a purely sequential manner: each agent locates and interacts with one other agent in its environment at a time [5, 22]. Nevertheless, when the search is assigned to autonomous software agents, a better search strategy can be used. Here an agent can take advantage of its unique inherent filtering and information processing capabilities and of its ability to efficiently (in comparison to people) maintain concurrent interactions with several other agents at each stage of its search. Such use of parallel interactions in search is favorable whenever the average cost2 per interaction with another agent, when interacting in parallel with a batch of other agents, is smaller than the cost of maintaining one interaction at a time (i.e., an advantage to size). For example, the analysis of the costs associated with evaluating potential partnerships between service providers reveals both fixed and variable components when using the parallel search; thus the average cost per interaction decreases as the number of parallel interactions increases [21]. Despite the advantages identified for parallel interactions in adjacent domains (e.g., in one-sided economic search [7, 16]), a first attempt to model a repeated pairwise matching process in which agents are capable of maintaining interactions with several other agents at a time was introduced only recently [21]. However, the agents in that seminal model are required to
synchronize their decision making process. Thus each agent, upon reviewing the opportunities available in a specific search stage, has to notify all other agents of its decision, whether a commitment to a partnership (with at most one of them) or a rejection of the partnership (with the rest of them). This inherent restriction imposes a significant limitation on the agents' strategic behavior. In our model, the agents are free to notify the other agents of their decisions in an asynchronous manner. The asynchronous approach allows the agents to re-evaluate their strategy based on each new response they receive from the agents they interact with. This leads to a sequential decision making process in which each agent, upon sending a commit message to one of the other agents, delays its decision concerning the commitment to or rejection of all other potential partnerships until receiving a response from that agent (i.e., the agent still maintains parallel interactions in each search stage, except that its decision making process at the end of the stage is sequential rather than instantaneous). The new model is a much more realistic pairwise model and, as we show in the analysis section, is always preferred by any single agent participating in the process. In the absence of other economic two-sided parallel search models, we use the model that relies on an instantaneous (synchronous) decision making process [21] (denoted I-DM throughout the rest of the paper) as a benchmark for evaluating the usefulness of our proposed sequential (asynchronous) decision making strategy (denoted S-DM). The main contributions of this paper are threefold. First, we formally model and analyze a two-sided search process in which the agents have no temporal decision making constraints concerning the rejection of or commitment to potential partnerships they encounter in parallel (the S-DM model). This model is a general search model which can be applied in various (not necessarily software
agents-based) domains. Second, we prove that the agents' S-DM strategy weakly dominates the I-DM strategy; thus every agent has an incentive to deviate to the S-DM strategy when all other agents are using the I-DM strategy. Finally, by using an innovative recursive presentation of the acceptance probabilities of the different potential partnerships, we identify unique characteristics of the equilibrium strategies in the new model. These are used for supplying an appropriate computational means that facilitates the calculation of the agents' equilibrium strategy. This latter contribution is of special importance, since the transition to the asynchronous mode adds inherent complexity to the model (mainly because each agent now needs to evaluate the probabilities of each other agent being rejected or accepted by each of the agents it interacts with, in a multi-stage sequential process). We manage to extract the agents' new equilibrium strategies without increasing the computational complexity in comparison to the I-DM model. Throughout the paper we demonstrate the different properties of the new model and compare it with the I-DM model using an artificial synthetic environment. In the following section we formally present the S-DM model. An equilibrium analysis and computational means for finding the equilibrium strategy are provided in Section 3. In Section 4 we review related MAS and economic search theory literature. We conclude with a discussion and suggest directions for future research in Section 5.
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 455\n4.\nRELATED WORK\nThe two-sided economic search for partnerships in AI literature is a sub-domain of coalition formation8.\nWhile coalition formation models usually consider general coalition-sizes [24], the partnership formation model (often referred as matchmaking) considers environments where agents have a benefit only when forming a partnership and this benefit cannot be improved by extending the partnership to more than two agents [12, 23] (e.g., in the case of buyers and sellers or peer-to-peer applications).\nAs in the general 8The use of the term\" partnership\" in this context refers to the agreement between two individual agents to cooperate in a pre-defined manner.\nFor example, in the buyer-seller application a partnership is defined as an agreed transaction between the two-parties [9].\ncoalition formation case, agents have the incentive to form partnerships when they are incapable of executing a task by their own or when the partnership can improve their individual utilities [14].\nVarious centralized matching mechanisms can be found in the literature [6, 2, 8].\nHowever, in many MAS environments, in the absence of any reliable central matching mechanism, the matching process is completely distributed.\nWhile the search in agent-based environments is well recognized to be costly [11, 21, 1], most of the proposed coalition formation mechanisms assume that an agent can scan as many partnership opportunities in its environment as needed or have access to central matchers or middle agents [6].\nThe incorporation of costly search in this context is quite rare [21] and to the best of our knowledge, a distributed two-sided search for partners model similar to the S-DM model has not been studied to date.\nClassical economic search theory ([15, 17], and references therein) widely addresses the 
problem of a searcher operating in a costly environment, seeking to maximize its long-term utility. In these models, classified as one-sided search, the focus is on establishing the optimal strategies for the searcher, assuming no mutual search activities (i.e., no influence on the environment). Here the sequential search procedure is often applied, allowing the searcher to investigate a single [15] or multiple [7, 19] opportunities at a time. While the latter method is proven to be beneficial for the searcher, it was never used in the "two-sided" search models that followed (where dual search activities are modeled) [22, 5, 18]. Therefore, in these models, the equilibrium strategies are always developed based on the assumption that the agents interact with others sequentially (i.e., with one agent at a time). A first attempt to integrate parallel search into a two-sided search model is given in [21], as detailed in the introduction section.

Several essential elements of two-sided search can also be found in the strategic theory of bargaining [3]: both coalition formation and matching can be represented as a sequential bargaining game [4] in which payoffs are defined as a function of the coalition structure and can be divided according to a fixed or negotiated division rule. Nevertheless, in the sequential bargaining literature, most emphasis is put on specifying the details of the sequential negotiating process over the division of the utility (or cost) jointly owned by the parties, or the strategy the coalition needs to adopt [20, 4]. The models presented in this area do not associate the coalition formation process with search costs, which is the essence of the analysis that economic search theory aims to supply. Furthermore, even in repeated pairwise bargaining models [10] the agents are always limited to initiating a single bargaining interaction at a time.

Sequential Decision Making in Parallel Two-Sided Economic Search

ABSTRACT

This paper presents a two-sided economic search model in which agents are searching for beneficial pairwise partnerships. In each search stage, each of the agents is randomly matched with several other agents in parallel, and makes a decision whether to accept a potential partnership with one of them. The distinguishing feature of the proposed model is that the agents are not restricted to maintaining a synchronized (instantaneous) decision protocol and can sequentially accept and reject partnerships within the same search stage. We analyze the dynamics which drive the agents' strategies towards a stable equilibrium in the new model and show that the proposed search strategy weakly dominates the one currently in use for the two-sided parallel economic search model. By identifying several unique characteristics of the equilibrium we manage to efficiently bound the strategy space that needs to be explored by the agents and propose an efficient means for extracting the distributed equilibrium strategies in common environments.

1. INTRODUCTION

A two-sided economic search is a distributed mechanism for forming agents' pairwise partnerships [5].1 On every stage of the process, each of the agents is randomly matched with another agent and the two interact bilaterally in order to learn the benefit encapsulated in a partnership between them. The interaction does not involve bargaining, thus each agent merely needs to choose between accepting or rejecting the partnership with the other agent. A typical market where this kind of two-sided search takes place is the marriage market [22].

1 Notice that the concept of "search" here is very different from the classical definition of "search" in AI. While AI search is an active process in which an agent finds a sequence of actions that will bring it from the initial state to a goal state, economic search refers to the identification of the best agent to commit to a partnership with.

Recent literature suggests various software agent-based applications where a two-sided distributed (i.e., with no centralized matching mechanism) search takes place. An important class of such applications includes secondary markets for exchanging unexploited resources. An exchange mechanism is used in those cases where selling these resources is not the core business of the organization or when the overhead of selling them makes it non-beneficial. For example, through a two-sided search, agents representing different service providers can exchange unused bandwidth [21], and communication satellites can transfer communication with a greater geographical coverage. Two-sided agent-based search can also be found in applications of buyers and sellers in eMarkets and in peer-to-peer applications. The two-sided nature of the search suggests that a partnership between a pair of agents is formed only if it is mutually accepted. By forming a partnership the agents gain an immediate utility and terminate their search. When resuming the search, on the other hand, a more suitable partner might be found; however, some resources will need to be consumed for maintaining the search process.

In this paper we focus on a specific class of two-sided search matching problems, in
which the performance of the partnership applies to both parties, i.e., both gain an equal utility [13]. The equal utility scenario is usually applicable in domains where the partners gain from the synergy between them. For example, consider tennis players that seek partners when playing doubles (or a canoe's paddler looking for a partner to practice with). Here the players are rewarded based entirely on the team's (rather than the individual's) performance. Other examples are the scenario where students need to form pairs for working together on an assignment, for which both partners share the same grade, and the scenario where two buyer agents interested in similar or interchangeable products join forces to buy a product together, taking advantage of a discount for quantity (i.e., each of them enjoys the same reduced price). In all these applications, any two agents can form a partnership, and the performance of any given partnership depends on the skills or the characteristics of its members. Furthermore, the equal utility scenario can also hold whenever there is an option for side-payments and the partnership's overall utility is equally split among the two agents forming it [22].

While the two-sided search literature offers comprehensive equilibrium analysis for various models, it assumes that the agents' search is conducted in a purely sequential manner: each agent locates and interacts with one other agent in its environment at a time [5, 22]. Nevertheless, when the search is assigned to autonomous software agents, a better search strategy can be used. Here an agent can take advantage of its unique inherent filtering and information processing capabilities and its ability to efficiently (in comparison to people) maintain concurrent interactions with several other agents at each stage of its search. Such use of parallel interactions in search is favorable whenever the average cost2 per interaction with another agent, when interacting in parallel
with a batch of other agents, is smaller than the cost of maintaining one interaction at a time (i.e., advantage to size). For example, the analysis of the costs associated with evaluating potential partnerships between service providers reveals both fixed and variable components when using the parallel search; thus the average cost per interaction decreases as the number of parallel interactions increases [21]. Despite the advantages identified for parallel interactions in adjacent domains (e.g., in one-sided economic search [7, 16]), a first attempt at modeling a repeated pairwise matching process in which agents are capable of maintaining interactions with several other agents at a time was introduced only recently [21]. However, the agents in that seminal model are required to synchronize their decision making process. Thus each agent, upon reviewing the opportunities available in a specific search stage, has to notify all other agents of its decision whether to commit to a partnership (at most with one of them) or reject the partnership (with the rest of them). This inherent restriction imposes a significant limitation on the agents' strategic behavior. In our model, the agents are free to notify the other agents of their decisions in an asynchronous manner. The asynchronous approach allows the agents to re-evaluate their strategy based on each new response they receive from the agents they interact with. This leads to a sequential decision making process by which each agent, upon sending a commit message to one of the other agents, delays its decision concerning a commitment to or rejection of all other potential partnerships until receiving a response from that agent (i.e., the agent still maintains parallel interactions in each search stage, except that its decision making process at the end of the stage is sequential rather than instantaneous). The new model is a much more realistic pairwise model and, as we show in the analysis section, is always
preferred by any single agent participating in the process. In the absence of other economic two-sided parallel search models, we use the model that relies on an instantaneous (synchronous) decision making process [21] (denoted I-DM throughout the rest of the paper) as a benchmark for evaluating the usefulness of our proposed sequential (asynchronous) decision making strategy (denoted S-DM).

The main contributions of this paper are threefold: First, we formally model and analyze a two-sided search process in which the agents have no temporal decision making constraints concerning the rejection of or commitment to potential partnerships they encounter in parallel (the S-DM model). This model is a general search model which can be applied in various (not necessarily software agent-based) domains. Second, we prove that the agents' S-DM strategy weakly dominates the I-DM strategy, thus every agent has an incentive to deviate to the S-DM strategy when all other agents are using the I-DM strategy. Finally, by using an innovative recursive presentation of the acceptance probabilities of different potential partnerships, we identify unique characteristics of the equilibrium strategies in the new model. These are used for supplying an appropriate computational means that facilitates the calculation of the agents' equilibrium strategy. This latter contribution is of special importance since the transition to the asynchronous mode adds inherent complexity to the model (mainly because now each agent needs to evaluate the probabilities of having each other agent being rejected or accepted by each of the other agents it interacts with, in a multi-stage sequential process). We manage to extract the agents' new equilibrium strategies without increasing the computational complexity in comparison to the I-DM model. Throughout the paper we demonstrate the different properties of the new model and compare it with the I-DM model using an artificial synthetic environment.

In
the following section we formally present the S-DM model. An equilibrium analysis and computational means for finding the equilibrium strategy are provided in Section 3. In Section 4 we review related MAS and economic search theory literature. We conclude with a discussion and suggest directions for future research in Section 5.

2. MODEL AND ANALYSIS

We consider an environment populated with an infinite number of self-interested, fully rational agents of different types3. Any agent Ai can form a partnership with any other agent Aj in the environment, associated with an immediate perceived utility U(Ai, Aj) for both agents. As in many other partnership formation models (see [5, 21]), we assume that the value of U(x, y) (where x and y are any two agents in the environment) is randomly drawn from a continuous population characterized by a probability distribution function (p.d.f.) f(U) and a cumulative distribution function (c.d.f.) F(U), 0 ≤ U < ∞. The agents are assumed to be acquainted with the utility distribution function f(U); however, they cannot tell a priori what utility can be gained from a partnership with any specific agent in their environment. Therefore, the only way by which an agent Ai can learn the value of a partnership with another agent Aj, U(Ai, Aj), is by interacting with agent Aj. Since each agent in two-sided search models has no prior information concerning any of the other agents in its environment, it initiates interactions (i.e., search) with other agents randomly. The nature of the two-sided search application suggests that the agents are satisfied with having a single partner; thus, once a partnership is formed the two agents forming it terminate their search process and leave the environment. The agents are not limited to interacting with a single potential partner agent at a time, but rather can select to interact with several other agents in parallel. We define a search round/stage as the interval in which
the agent interacts with several agents in parallel and learns the utility of forming a partnership with each of them. Based on the learned values, the agent needs to decide whether to commit to or reject each of the potential partnerships available to it. Commitment is achieved by sending a commit message to the appropriate agent, and an agent cannot commit to more than one potential partnership simultaneously. Declining a partnership is achieved by sending a reject message. The communication between the agents is assumed to be asynchronous and each agent can delay its decision, concerning any given potential partnership, as necessary.4 If two agents Ai and Aj mutually commit to a partnership between them, then the partnership is formed and both agents gain the immediate utility U(Ai, Aj) associated with it. If an agent does not form a partnership in a given search stage, it continues to its next search stage and interacts with more agents in a similar manner. Given the option for asynchronous decision making, each individual agent, Ai, follows the following procedure:

1:  loop
2:    Set N (number of parallel interactions for the next search round)
3:    Locate randomly a set A = {A1, ..., AN} of agents to interact with
4:    Evaluate the set of utilities {U(Ai, A1), ..., U(Ai, AN)}
5:    Set A* = {Aj | Aj ∈ A and U(Ai, Aj) > U(resume)}
6:    Send a reject message to each agent in the set {A \ A*}
7:    while (A* ≠ ∅) do
8:      Send a commit message to Aj = argmax_{Al ∈ A*} U(Ai, Al)
9:      Remove Aj from A*
10:     Wait for Aj's decision
11:     if (Aj responded "commit") then
12:       Send "reject" messages to the remaining agents in A*
13:       Terminate search
14:     end if
15:   end while
16: end loop

where U(resume) denotes the expected utility of continuing the search (in the following paragraphs we show that U(resume) is fixed throughout the search and
derives from the agent's strategy). In the above algorithm, any agent Ai first identifies the set A* of other agents it is willing to accept out of those reviewed in the current search stage and sends a reject message to the rest. Then it sends a commit message to the agent Aj ∈ A* that is associated with the partnership yielding the highest utility. If a reject message is received from agent Aj, then this agent is removed from A* and a new commit message is sent according to the same criteria. The process continues until either: (a) the set A* becomes empty, in which case the agent initiates another search stage; or (b) a dual commitment is obtained, in which case the agent sends reject messages to the remaining agents in A*. The method differs from the one used in the I-DM model in the way it handles the commitment messages: in the I-DM model, after evaluating the set of utilities (step 4), the agent instantaneously sends a commit message to the agent associated with the greatest utility and a reject message to all the other agents it interacted with (as a replacement for steps 5-15 in the above procedure). Our proposed S-DM model is much more intuitive, as it allows an agent to "hold" and possibly exploit relatively beneficial opportunities even if its first-priority partnership is rejected by the other agent. In the I-DM model, on the other hand, since reject messages are sent simultaneously alongside the commit message, a reject message from the agent associated with the "best" partnership enforces a new search round. Notice that the two-sided search mechanism above aligns with most other two-sided search mechanisms in the sense that it is based on "random matching" (i.e., in each search round the agent encounters a random sample of agents). While the maintenance of the random matching infrastructure is an interesting research question, it is beyond the scope of this paper. Notwithstanding, we do wish to
emphasize that, given the large number of agents in the environment and the fact that in MAS the turnover rate is quite substantial due to the open nature of the environment (and the interoperability between environments), the probability of ending up interacting with the same agent more than once, when initiating a random interaction, is practically negligible.

THEOREM 1. The S-DM agent's decision making process: (a) is the optimal one (maximizes the utility) for any individual agent in the environment; and (b) guarantees a zero deadlock probability for any given agent in the environment.

Proof: (a) The method is optimal since it cannot be changed in a way that produces a better utility for the agent. Since bargaining is not applicable here (benefits are non-divisible), the agent's strategy is limited to accepting or rejecting offers. The decision to reject a partnership in step 6 is based only on the immediate utility that can be gained from this partnership in comparison to the expected utility of resuming the search (i.e., moving on to the next search stage) and is not affected by the willingness of the other agents to commit to or reject a partnership with Ai. As for partnerships that yield a utility greater than the expected utility of resuming the search (i.e., the partnerships with agents from the set A*), the agent always prefers to delay its decision concerning partnerships of this type until receiving all notifications concerning potential partnerships that are associated with a greater immediate utility. The delay never results in a loss of opportunity, since the other agent's decision concerning this opportunity is not affected by agent Ai's willingness to commit to or reject this opportunity (but rather by the other agent's estimation of its expected utility if resuming the search and the rejection messages it receives for more beneficial potential partnerships). Finally, the agent cannot benefit from delaying a commit
message to the agent associated with the highest utility in A*, thus it will always send it a commit message.

(b) We first prove the following lemma, which states that the probability of having two partnering opportunities associated with an identical utility is zero.

LEMMA 1. For any utility value x, Pr[U = x] = lim_{y→x} ∫_x^y f(z)dz = 0.

Proof: Since f is continuous and the interval between x and y is finite, by the intermediate value theorem for integrals (found in most calculus textbooks) there exists c ∈ [x, y] such that ∫_x^y f(z)dz = f(c)(y − x) (intuitively, a rectangle with base from z = x to z = y and height f(c) has the same area as the integral on the left-hand side). Therefore lim_{y→x} ∫_x^y f(z)dz = lim_{y→x} f(c)(y − x) = 0.

An immediate consequence of the above lemma is that no tie-breaking procedures are required and an agent in a waiting state is always waiting for a reply from the single agent that is associated with the highest utility among the agents in the set A* (i.e., no other agent in the set A* is associated with an equal utility). A deadlock can be formed only if we can create a cyclic sequence of agents in which any agent is waiting for a reply from the subsequent agent in the sequence. However, in our method any agent Ai will be waiting for a reply from another agent Aj, to which it sent a commit message, only if: (1) any agent Ak ∈ A associated with a utility U(Ai, Ak) > U(Ai, Aj) has already rejected the partnership with agent Ai; and (2) agent Aj itself is waiting for a reply from an agent Al where U(Al, Aj) > U(Aj, Ai). Therefore, if we have a sequence of waiting agents, then the utility associated with partnerships between any two subsequent agents in the sequence must increase along the sequence. If the sequence is cyclic, then we have a pattern of the form: U(Ai, Al) > U(Al, Aj) > U(Aj, Ai). Since U(Ai, Al) > U(Aj, Ai), agent Ai can be waiting for agent Aj only if it has already been rejected by Al (see (1) above). However, if agent Al has rejected agent Ai then it has also rejected agent Aj. Therefore, agent Aj cannot be waiting for agent Al to make a decision. The same logic can be applied to any longer sequence. ∎

The search activity is assumed to be costly [11, 1, 16], in the sense that any agent needs to consume some of its resources in order to locate other agents to interact with, and for maintaining the interactions themselves. We assume utilities and costs are additive and that the agents are trying to maximize their overall utility, defined as the utility from the partnership formed minus the aggregated search costs along the search process. The agent's cost of interacting with N other agents (in parallel) is given by the function c(N). The search cost structure is principally a parameter of the environment and thus shared by all agents. An agent's strategy S(Ā) → {commit to Aj ∈ Ā, reject Ā′ ⊂ Ā, N} defines, for any given set of partnership opportunities Ā, the subset of opportunities that should be immediately declined, the agent to which a commit message should be sent (if no pending notification from another agent is expected), and the number of new interactions to initiate (N). Since the search process is two-sided, our goal is to find an equilibrium set of strategies for the agents.

2.1 Strategy Structure

Recall that each agent declines partnerships based on (a) the partnership's immediate utility in comparison to the agent's expected utility from resuming the search; and (b) achieving a mutual commitment (thus declining pending partnerships that were not rejected in (a)). Therefore an agent's strategy can be represented by a pair (Nt, xt), where Nt is the number of agents with whom it chooses to
interact in search stage t and xt is its reservation value5 (a threshold) for accepting\/rejecting the resulting N potential partnerships.\nThe subset A \u2217, thus, will include all partnership opportunities of search stage t that are associated with a utility equal to or greater than xt.\nThe reservation value xt is actually the expected utility for resuming the search at time t (i.e., U (resume)).\nThe agent will always prefer committing to an opportunity greater than the expected utility of resuming the search and will always prefer to resume the search otherwise.\nSince the agents are not limited by a decision horizon, and their search process does not imply any new information about the market structure (e.g., about the utility distribution of future partnership opportunities), their strategy is stationary - an agent will not accept an opportunity it has rejected beforehand (i.e., x1 = x2 =...= x) and will use the same sample size, N1 = N2 =...= N, along its search.\n2.2 Calculating Acceptance Probabilities\nThe transition from instantaneous decision making process to a sequential one introduces several new difficulties in extracting the agents' strategies.\nNow, in order to estimate the probability of being accepted by any of the other agents, the agent needs to recursively model, while setting its strategy, the probabilities of rejections other agents might face from other agents they interact with.\nIn the following paragraphs we introduce several complementary definitions and notations, facilitating the formal introduction of the acceptance probabilities.\nConsider an agent Ai, using a strategy (N, xN) while operating in an environment where all other agents\nare using a strategy (k, xk).\nThe probability that agent Ai will receive a commitment message from agent Aj it interacted with depends on the utility associated with the potential partnership between them, x.\nThis probability, denoted by Gk (x) can be calculated as:6\nThe case where x xI \u2212 DM 
Figure 1 illustrates the superiority of the proposed S-DM search strategy, as well as the characteristics of the expected utility function (as reflected in Theorem 2). For comparability we use the same synthetic environment that was used for the I-DM model [21]: the utilities are assumed to be drawn from a uniform distribution, and the cost function is c(N) = 0.05 + 0.005N. The agent uses N = 3 while the other agents use k = 25 and xk = 0.2. The curves depict the expected utility of the agent as a function of the reservation value x it uses, when: (a) all agents use the I-DM strategy (marked I-DM); (b) the agent uses the S-DM strategy while the other agents use the I-DM strategy (marked I-DM/S-DM); and (c) all agents use the S-DM strategy (marked S-DM). As expected from Equation 8 and Theorem 2, the agent's expected utility remains constant until its reservation value exceeds xk; it then reaches a global maximum at the reservation value satisfying VN(x) = x.
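The fixed point VN(x) = x can be located numerically. The sketch below is a simplified stand-in, not the paper's Equation 8: it assumes a one-sided best-of-N round with utilities drawn from U(0,1) and the Figure 1 cost function c(N) = 0.05 + 0.005N, and it ignores the acceptance probabilities Gk(x); it only illustrates the shape of the fixed-point computation.

```python
def v(x, n):
    """Stand-in expected utility of a round with sample size n and
    reservation value x: keep x if the best of n uniform draws falls
    below it, take the best draw otherwise, minus the search cost."""
    cost = 0.05 + 0.005 * n
    best_below = x ** (n + 1)                      # x * P(best < x) = x * x^n
    best_above = n / (n + 1) * (1 - x ** (n + 1))  # E[best; best >= x]
    return best_below + best_above - cost

def reservation_value(n, iters=60):
    """Bisection on v(x, n) - x, which is positive at 0, negative at 1."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if v(mid, n) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For N = 3 this stand-in yields a reservation value near 0.77; the global maximum of the curve sits exactly at the point where v(x, n) crosses x.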
From the graph we can see that the agent always has an incentive to deviate from the I-DM strategy to the S-DM strategy (as was proven in Proposition 2.1).

Figure 1: The expected utility as a function of the reservation value used by the agent

3. EQUILIBRIUM DYNAMICS

Since all agents are subject to similar search costs, and their perceived utilities are drawn from the same distribution function, they all share the same strategy in equilibrium. A multi-equilibria scenario may occur; however, as we discuss in the following paragraphs, since all agents share the same preferences and priorities (unlike, for example, in the famous "battle of the sexes" scenario), we can always identify which equilibrium strategy will be used. Notice that if all agents use the same sample size N, then the value xN obtained by solving Equation 10 with k = N and xk = xN is a stable reservation value (i.e., no agent can benefit from changing only the value of xN). An equilibrium strategy (N, xN) can be found by identifying an N value for which no single agent has an incentive to use a different number of parallel interactions, k (and the new optimal reservation value that is associated with k according to Equation 10).

454 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
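The equilibrium search just outlined can be sketched schematically. The code below is a stand-in, not the paper's procedure: it replaces Equations 8 and 10 (which involve the acceptance probabilities Gk) with a simplified one-sided best-of-N utility, so it will not reproduce the paper's numbers; it only illustrates scanning N, computing xN as a fixed point, and applying the two single-step deviation checks.

```python
def v(x, n, c0=0.2, c1=0.02):
    """Stand-in round utility with linear cost c(n) = c0 + c1*n."""
    return x ** (n + 1) + n / (n + 1) * (1 - x ** (n + 1)) - (c0 + c1 * n)

def reservation_value(n):
    """Fixed point of v(x, n) = x, by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if v(mid, n) > mid else (lo, mid)
    return (lo + hi) / 2

def equilibrium(n_max=50):
    """Keep (n, x_n) only if deviating to n-1 or n+1 at x_n is not
    profitable (the elimination rules); among the stable strategies,
    return the one with the highest reservation value."""
    stable = []
    for n in range(1, n_max + 1):
        xn = reservation_value(n)
        if v(xn, n + 1) <= xn and (n == 1 or v(xn, n - 1) <= xn):
            stable.append((n, xn))
    return max(stable, key=lambda s: s[1]) if stable else None
```

With the stand-in utility and cost c(N) = 0.2 + 0.02N, the scan settles on a single stable sample size in the mid-range; as in the paper, candidate N values whose neighbors offer a profitable deviation are eliminated.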
While this implies an infinite solution space, we can always bound it using Equations 8 and 10. Within the framework of this paper, we demonstrate such a bounding methodology for the common case where c(N) is linear7 or convex, using the following Theorem 3.

THEOREM 3. When c(N) is linear (or convex), then: (a) when all other agents sample k potential partners over a search round, if an agent's expected utility of sampling k + 1 potential partners, Vk+1(xk+1), is smaller than Vk(xk), then the expected utility of sampling N potential partners, VN(xN), where N > k + 1, is also smaller than Vk(xk); and (b) similarly, when all other agents sample k potential partners over a search round, if an agent's expected utility of sampling k - 1 potential partners, Vk-1(xk-1), is smaller than the expected utility of sampling k potential partners, Vk(xk), then the expected utility of sampling N potential partners, where N < k - 1, is also smaller than Vk(xk).

Both parts are proven by contradiction. Assuming, for instance, that xk+2 > xk+1 although xk+1 < xk, substituting into Equation 10 and using the convexity of c(i) rewrites the resulting expression as an integral whose integrand is negative, contradicting the initial assumption; hence xk+2 < xk+1, and by induction the claim extends to any N > k + 1. The argument for part (b) is symmetric.

7A linear cost function is most common in agent-based two-sided search applications, since the cost function can often be divided into fixed costs (e.g.
operating the agent per time unit) and variable costs (i.e., the cost of processing a single interaction's data).

The above theorem supplies a powerful tool for eliminating non-equilibrium N values. It suggests that we can check the stability of a sample size N and the appropriate reservation value xN simply by calculating the optimal reservation values of a single agent when it deviates to samples of sizes N - 1 and N + 1 (keeping the other agents at strategy (N, xN)). If the reservation values associated with both of these sample sizes are smaller than xN, then according to Theorem 3 the same holds when deviating to any other sample size k. The process can be further simplified by using VN+1(xN) > xN and VN-1(xN) > xN as the two elimination rules; this derives from Theorem 3 and the properties of the function VN(x) established in Theorem 2. Notice that a multi-equilibria scenario may occur, but it is easily resolved: if several strategies satisfy the stability condition defined above, the agents will always prefer the one associated with the highest expected utility. Therefore, an algorithm that goes over the different N values and checks them according to the rules above can be applied, provided that we can bound the interval over which to search for the equilibrium N. The following Theorem 4 supplies such a bound.

To prove it, we show that once A(N) - c(N) becomes negative, it stays negative. The second difference of A(N) is always negative, since the corresponding integrand is nonnegative; therefore A(N) is concave. Since c(N) is convex, -c(N) is concave, and a sum of concave functions is concave, we obtain that
A(N) - c(N) is concave. This guarantees that once the concave expression A(N) - c(N) shifts from a positive value to a negative one (as N increases), it cannot become positive again. Therefore, having N* such that A(N*) = c(N*), and A(N**) > c(N**) for some N** < N*, yields the required bound; the condition we specify for N** merely ensures that VN switches from a positive value to a negative one (and not vice versa), and is trivial to implement. Given the existence of the upper bound, we can design an algorithm for finding the equilibrium strategy (if one exists). The algorithm extracts the upper bound N^ for the equilibrium number of parallel interactions according to Theorem 4. Out of the set of values satisfying the stability condition defined above, the algorithm chooses the one associated with the highest reservation value according to Equation 10; this is the equilibrium associated with the highest expected utility for all agents, according to Theorem 2.

Figure 2: The incentive to deviate from strategy (N, xN)

The process is illustrated in Figure 2 for an artificial environment where partnership utilities are drawn from a uniform distribution. The cost function used is c(N) = 0.2 + 0.02N. The graph depicts a single agent's expected utility when all other agents use N parallel interactions (on the horizontal axis) and the appropriate reservation value xN (calculated according to Equation 10). The curves depict the expected utility of the agent when it uses strategy: (a) (N, xN), like the other agents (marked VN(xN)); (b) (N + 1, xN) (marked VN+1(xN)); and (c) (N - 1, xN) (marked VN-1(xN)). According to the discussion following Theorem 3, a stable equilibrium satisfies VN(xN) > max{VN+1(xN), VN-1(xN)}. The strategy satisfying this condition in our example is (9, 0.437).

4. RELATED
WORK

The two-sided economic search for partnerships in the AI literature is a sub-domain of coalition formation.8 While coalition formation models usually consider general coalition sizes [24], the partnership formation model (often referred to as matchmaking) considers environments where agents benefit only from forming a partnership, and this benefit cannot be improved by extending the partnership to more than two agents [12, 23] (e.g., in the case of buyers and sellers, or peer-to-peer applications). As in the general coalition formation case, agents have an incentive to form partnerships when they are incapable of executing a task on their own or when the partnership can improve their individual utilities [14]. Various centralized matching mechanisms can be found in the literature [6, 2, 8]. However, in many MAS environments, in the absence of any reliable central matching mechanism, the matching process is completely distributed. While search in agent-based environments is well recognized to be costly [11, 21, 1], most of the proposed coalition formation mechanisms assume that an agent can scan as many partnership opportunities in its environment as needed, or has access to central matchers or middle agents [6]. The incorporation of costly search in this context is quite rare [21], and to the best of our knowledge a distributed two-sided search-for-partners model similar to the S-DM model has not been studied to date.

8The use of the term "partnership" in this context refers to the agreement between two individual agents to cooperate in a pre-defined manner. For example, in the buyer-seller application a partnership is defined as an agreed transaction between the two parties [9].

Classical economic search theory ([15, 17], and references therein) widely addresses the problem of a searcher operating in a costly environment, seeking to maximize his long-term utility. In these models, classified as one-sided search, the focus is on establishing
the optimal strategies for the searcher, assuming no mutual search activities (i.e., no influence on the environment). Here the sequential search procedure is often applied, allowing the searcher to investigate a single [15] or multiple [7, 19] opportunities at a time. While the latter method is proven to be beneficial for the searcher, it was never used in the "two-sided" search models that followed (where dual search activities are modeled) [22, 5, 18]. Therefore, in these models the equilibrium strategies are always developed under the assumption that the agents interact with others sequentially (i.e., with one agent at a time). A first attempt to integrate parallel search into a two-sided search model is given in [21], as detailed in the introduction. Several of the essences of two-sided search can be found in the strategic theory of bargaining [3]: both coalition formation and matching can be represented as a sequential bargaining game [4] in which payoffs are defined as a function of the coalition structure and can be divided according to a fixed or negotiated division rule. Nevertheless, in the sequential bargaining literature, most emphasis is put on specifying the details of the sequential negotiating process over the division of the utility (or cost) jointly owned by the parties, or the strategy the coalition needs to adopt [20, 4]. The models presented in this area do not associate the coalition formation process with search costs, which is the essence of the analysis that economic search theory aims to supply. Furthermore, even in repeated pairwise bargaining models [10], the agents are always limited to initiating a single bargaining interaction at a time.
An Adversarial Environment Model for Bounded Rational Agents in Zero-Sum Interactions

Inon Zuckerman1, Sarit Kraus1, Jeffrey S.
Rosenschein2, Gal Kaminka1

1Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel
2The School of Engineering and Computer Science, Hebrew University, Jerusalem, Israel
{zukermi,sarit,galk}@cs.biu.ac.il, jeff@cs.huji.ac.il

ABSTRACT

Multiagent environments are often not cooperative nor collaborative; in many cases, agents have conflicting interests, leading to adversarial interactions. This paper presents a formal Adversarial Environment model for bounded rational agents operating in a zero-sum environment. In such environments, attempts to use classical utility-based search methods can raise a variety of difficulties (e.g., implicitly modeling the opponent as an omniscient utility maximizer, rather than leveraging a more nuanced, explicit opponent model). We define an Adversarial Environment by describing the mental states of an agent in such an environment. We then present behavioral axioms that are intended to serve as design principles for building such adversarial agents. We explore the application of our approach by analyzing log files of completed Connect-Four games, and present an empirical analysis of the axioms' appropriateness.

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Intelligent agents, Multiagent Systems; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods - Modal logic

General Terms: Design, Theory

1. INTRODUCTION

Early research in multiagent systems (MAS) considered cooperative groups of agents; because individual agents had limited resources, or limited access to information (e.g., limited processing power, limited sensor coverage), they worked together by design to solve problems that individually they could not solve, or at least could not solve as efficiently. MAS research, however, soon began to consider interacting agents with individuated interests, as representatives of different humans or organizations with non-identical
interests. When interactions are guided by diverse interests, participants may have to overcome disagreements, uncooperative interactions, and even intentional attempts to damage one another. When these types of interactions occur, environments require appropriate behavior from the agents situated in them. We call these environments Adversarial Environments, and call the clashing agents Adversaries. Models of cooperation and teamwork have been extensively explored in MAS through the axiomatization of mental states (e.g., [8, 4, 5]). However, none of this research dealt with adversarial domains and their implications for agent behavior. Our paper addresses this issue by providing a formal, axiomatized mental-state model for a subset of adversarial domains, namely simple zero-sum adversarial environments. Simple zero-sum encounters exist, of course, in various two-player games (e.g., Chess, Checkers), but they also exist in n-player games (e.g., Risk, Diplomacy), auctions for a single good, and elsewhere. In these latter environments especially, using a utility-based adversarial search (such as the Min-Max algorithm) does not always provide an adequate solution; the payoff function might be quite complex or difficult to quantify, and there are natural computational limitations on bounded rational agents. In addition, traditional search methods (like Min-Max) do not make use of a model of the opponent, which has proven to be a valuable addition to adversarial planning [9, 3, 11]. In this paper, we develop a formal, axiomatized model for bounded rational agents situated in a zero-sum adversarial environment. The model uses different modality operators, and its main foundation is the SharedPlans [4] model of collaborative behavior. We explore environment properties and the mental states of agents to derive behavioral axioms; these behavioral axioms constitute a formal model that serves as a specification and design guideline for agent design in such
settings. We then investigate the behavior of our model empirically using the Connect-Four board game. We show that this game conforms to our environment definition, and analyze players' behavior using a large set of completed match log files. In addition, we use the results presented in [9] to discuss the importance of opponent modeling in our Connect-Four adversarial domain. The paper proceeds as follows. Section 2 presents the model's formalization. Section 3 presents the empirical analysis and its results. We discuss related work in Section 4, and conclude and present future directions in Section 5.

550 978-81-904262-7-5 (RPS) (c) 2007 IFAAMAS

2. ADVERSARIAL ENVIRONMENTS

The adversarial environment model (denoted AE) is intended to guide the design of agents by providing a specification of the capabilities and mental attitudes of an agent in an adversarial environment. We focus here on specific types of adversarial environments, specified as follows:

1. Zero-Sum Interactions: the positive and negative utilities of all agents sum to zero;
2. Simple AEs: all agents in the environment are adversarial agents;
3. Bilateral AEs: AEs with exactly two agents;
4. Multilateral AEs: AEs with three or more agents.

We will work with both bilateral and multilateral instantiations of zero-sum, simple environments. In particular, our adversarial environment model deals with interactions that consist of N agents (N >= 2), where all agents are adversaries and only one agent can succeed. Examples of such environments range from board games (e.g., Chess, Connect-Four, Diplomacy) to certain economic environments (e.g., N-bidder auctions over a single good).

2.1 Model Overview

Our approach is to formalize the mental attitudes and behaviors of a single adversarial agent; we consider how a single agent perceives the AE. The following list specifies the conditions and mental states of an agent in a simple, zero-sum AE:

1. The agent has an individual
intention that its own goal will be completed;

2. The agent has an individual belief that it and its adversaries are pursuing full conflicting goals (defined below); there can be only one winner;
3. The agent has an individual belief that each adversary has an intention to complete its own full conflicting goal;
4. The agent has an individual belief in the (partial) profile of its adversaries.

Item 3 is required since it might be the case that some agent has a full conflicting goal and is currently considering adopting the intention to complete it, but is as yet not committed to achieving it. This might occur because the agent has not yet deliberated about the effects that adopting that intention would have on the other intentions it currently holds. In such cases, it might not consider itself to be in an adversarial environment at all. Item 4 states that the agent should hold some belief about the profiles of its adversaries. The profile represents all the knowledge the agent has about its adversary: its weaknesses, strategic capabilities, goals, intentions, trustworthiness, and more. It can be given explicitly or can be learned from observations of past encounters.

2.2 Model Definitions for Mental States

We use Grosz and Kraus's definitions of the modal operators, predicates, and meta-predicates, as defined in their SharedPlans formalization [4]. We recall here some of the predicates and operators used in that formalization: Int.To(Ai, α, Tn, Tα, C) represents Ai's intention at time Tn to do action α at time Tα in the context of C. Int.Th(Ai, prop, Tn, Tprop, C) represents Ai's intention at time Tn that a certain proposition prop hold at time Tprop in the context of C. The potential intention operators, Pot.Int.To(...)
and Pot.Int.Th(...), are used to represent the mental state in which an agent considers adopting an intention but has not yet deliberated about its interaction with the other intentions it holds. The operator Bel(Ai, f, Tf) represents agent Ai believing the statement expressed in formula f at time Tf. MB(A, f, Tf) represents mutual belief for a group of agents A. A snapshot of the system finds our environment in some state e ∈ E of environmental variable states, and each adversary in any LAi ∈ L of possible local states. At any given time step, the system is in some world w of the set of all possible worlds w ∈ W, where w = E × LA1 × LA2 × ... × LAn, and n is the number of adversaries. For example, in a Texas Hold'em poker game, an agent's local state might be its own set of cards (which is unknown to its adversary), while the environment consists of the betting pot and the community cards (which are visible to both players). A utility function under this formalization is defined as a mapping from a possible world w ∈ W to an element in R (the reals) that expresses the desirability of the world from a single agent's perspective. We usually normalize the range to [0,1], where 0 represents the least desirable possible world and 1 the most desirable. The implementation of the utility function depends on the domain in question. The following list specifies new predicates, functions, variables, and constants used in conjunction with the original definitions for the adversarial environment formalization:

1. φ is a null action (the agent does not do anything).
2. G_Ai is the set of agent Ai's goals. Each goal is a set of predicates whose satisfaction makes the goal complete (we use G*_Ai ∈ G_Ai to represent an arbitrary goal of agent Ai).
3. g_Ai is the set of agent Ai's subgoals. Subgoals are predicates whose satisfaction represents an important milestone toward achievement of the full
goal. g_{G*_Ai} ⊆ g_Ai is the set of subgoals that are important to the completion of goal G*_Ai (we use g*_{G*_Ai} ∈ g_{G*_Ai} to represent an arbitrary subgoal).

4. P^Aj_Ai is the profile object agent Ai holds about agent Aj.
5. C_A is a general set of actions for all agents in A, derived from the environment's constraints. C_Ai ⊆ C_A is the set of agent Ai's possible actions.
6. Do(Ai, α, Tα, w) holds when Ai performs action α over time interval Tα in world w.
7. Achieve(G*_Ai, α, w) is true when goal G*_Ai is achieved following the completion of action α in world w ∈ W, where α ∈ C_Ai.
8. Profile(Aj, P^Aj_Ai) is true when agent Ai holds a profile object for agent Aj.

Definition 1. Full conflict (FulConf) describes a zero-sum interaction in which only a single goal of the goals in conflict can be completed.
FulConf(G*_Ai, G*_Aj) ⇒ (∃α ∈ C_Ai, ∀w, β ∈ C_Aj)(Achieve(G*_Ai, α, w) ⇒ ¬Achieve(G*_Aj, β, w)) ∨ (∃β ∈ C_Aj, ∀w, α ∈ C_Ai)(Achieve(G*_Aj, β, w) ⇒ ¬Achieve(G*_Ai, α, w))

Definition 2. Adversarial Knowledge (AdvKnow) is a function returning a value that represents the amount of knowledge agent Ai has of the profile of agent Aj at time Tn. The higher the value, the more knowledge agent Ai has.
AdvKnow : P^Aj_Ai × Tn → R

Definition 3. Eval is an evaluation function that returns an estimated expected utility value for an agent in A after completing an action from C_A in some world state w.
Eval : A × C_A × w → R

Definition 4. TrH (Threshold) is a numerical constant in the [0,1] range representing a threshold value for the evaluation function (Eval). An action that yields an estimated utility evaluation above TrH is regarded as a highly beneficial action.

The Eval value is an estimate, not the real utility function, which is usually unknown. Using the real utility values, a rational agent could easily obtain the best outcome; agents, however, usually do not have the real utility function, but rather a heuristic estimate of it. Two important properties should hold for the evaluation function:

Property 1. The evaluation function should state that the most desirable world state is one in which the goal is achieved. Therefore, after the goal has been satisfied, no future action can put the agent in a world state with a higher Eval value.
(∀Ai, G*_Ai, α, β ∈ C_Ai, w ∈ W) Achieve(G*_Ai, α, w) ⇒ Eval(Ai, α, w) ≥ Eval(Ai, β, w)

Property 2. The evaluation function should map an action that completes a goal or a subgoal to a value greater than TrH (a highly beneficial action).
(∀Ai, G*_Ai ∈ G_Ai, α ∈ C_Ai, w ∈ W, g*_{G_Ai} ∈ g_{G_Ai}) Achieve(G*_Ai, α, w) ∨ Achieve(g*_{G_Ai}, α, w) ⇒ Eval(Ai, α, w) ≥ TrH.

Definition 5. SetAction. We define a set action (SetAction) as a set of action operations (complex or basic actions) from action sets C_Ai and C_Aj which, according to agent Ai's belief, are attached together by a temporal and consequential relationship, forming a chain of events (an action and its consequent follow-up action).
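Properties 1 and 2 can be illustrated in a toy domain (entirely hypothetical, not from the paper): states are integers in a race to reach 10, actions add 1-3, reaching 10 achieves the goal, and reaching 7 or more is a subgoal milestone. The threshold value below is likewise an assumption.

```python
TRH = 0.9  # assumed threshold for a "highly beneficial" action

def achieves_goal(state):
    return state == 10

def achieves_subgoal(state):
    return state >= 7

def eval_estimate(state, action):
    """Estimated utility in [0, 1] of doing `action` in `state`.
    Property 1: a goal-achieving action gets the maximal value 1.0.
    Property 2: goal- and subgoal-achieving actions score above TrH."""
    nxt = state + action
    if achieves_goal(nxt):
        return 1.0
    if achieves_subgoal(nxt):
        return 0.95
    return min(nxt / 10.0, 1.0) * TRH  # ordinary heuristic, capped below TrH
```

Any action completing the goal dominates all alternatives (Property 1), and any goal- or subgoal-completing action scores above TrH (Property 2), while ordinary moves stay below the threshold.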
(∀α1, ..., αu ∈ C_Ai, β1, ..., βv ∈ C_Aj, w ∈ W) SetAction(α1, ..., αu, β1, ..., βv, w) ⇒ ((Do(Ai, α1, Tα1, w) ⇒ Do(Aj, β1, Tβ1, w)) ⇒ Do(Ai, α2, Tα2, w) ⇒ ... ⇒ Do(Ai, αu, Tαu, w))

The consequential relation might exist due to various environmental constraints (when one action forces the adversary to respond with a specific action) or due to the agent's knowledge about the profile of its adversary.

Property 3. As the knowledge we have about our adversary increases, we gain additional beliefs about its behavior in different situations, which in turn creates new set actions. Formally, if our AdvKnow at time Tn+1 is greater than AdvKnow at time Tn, then every SetAction known at time Tn is also known at time Tn+1.
AdvKnow(P^Aj_Ai, Tn+1) > AdvKnow(P^Aj_Ai, Tn) ⇒ (∀α1, ..., αu ∈ C_Ai, β1, ..., βv ∈ C_Aj) Bel(Aag, SetAction(α1, ..., αu, β1, ..., βv), Tn) ⇒ Bel(Aag, SetAction(α1, ..., αu, β1, ..., βv), Tn+1)

2.3 The Environment Formulation

The following axioms provide the formal definition of a simple, zero-sum Adversarial Environment (AE). Satisfaction of these axioms means that the agent is situated in such an environment. It provides specifications for agent Aag interacting with its set of adversaries A with respect to goals G*_Aag and G*_A at time TCo in some world state w.
AE(Aag, A, G*_Aag, A1, ..., Ak, G*_A1, ..., G*_Ak, Tn, w)

1. Aag has an Int.Th that its goal be completed:
(∃α ∈ C_Aag, Tα) Int.Th(Aag, Achieve(G*_Aag, α), Tn, Tα, AE)

2. Aag believes that it and each of its adversaries Ao are pursuing full conflicting goals:
(∀Ao ∈ {A1, ..., Ak}) Bel(Aag, FulConf(G*_Aag, G*_Ao), Tn)

3. Aag believes that each of its adversaries Ao has the Int.Th that its conflicting goal G*_Ao be completed:
(∀Ao ∈ {A1, ...
, Ak}) (∃β ∈ C_Ao, Tβ) Bel(Aag, Int.Th(Ao, Achieve(G*_Ao, β), TCo, Tβ, AE), Tn)

4. Aag has beliefs about the (partial) profiles of its adversaries:
(∀Ao ∈ {A1, ..., Ak}) (∃P^Ao_Aag ∈ P_Aag) Bel(Aag, Profile(Ao, P^Ao_Aag), Tn)

To build an agent able to operate successfully within such an AE, we must specify behavioral guidelines for its interactions. Using a naive Eval-maximization strategy to a certain search depth will not always yield satisfactory results, for several reasons: (1) the search-horizon problem when searching to a fixed depth; (2) the strong assumption of an optimally rational adversary with unbounded resources; (3) the use of an estimated evaluation function, which will not give optimal results in all world states and can be exploited [9]. The following axioms specify behavioral principles that differentiate between more and less successful agents in the above Adversarial Environment. These axioms should be used as specification principles when designing and implementing agents intended to perform well in such Adversarial Environments. The behavioral axioms represent situations in which the agent adopts a potential intention (Pot.Int.To(...)) to perform an action, which will typically require some means-end reasoning to select a possible course of action. This reasoning leads to the adoption of an Int.To(...)
(see [4]).

A1. Goal Achieving Axiom. The first axiom is the simplest case: when agent Aag believes that it is one action (α) away from achieving its conflicting goal G*_Aag, it should adopt the potential intention to do α and complete its goal.
(∀Aag, α ∈ C_Aag, Tn, Tα, w ∈ W) Bel(Aag, Do(Aag, α, Tα, w) ⇒ Achieve(G*_Aag, α, w)) ⇒ Pot.Int.To(Aag, α, Tn, Tα, w)

This somewhat trivial behavior is the first and strongest axiom: whenever the agent is one action away from completing the goal, it should complete that action. Any fair Eval function would naturally classify α as the maximal-value action (Property 1). However, without explicit axiomatization of this behavior, there might be situations in which the agent, due to its bounded decision resources, decides on another action for various reasons.

A2. Preventive Act Axiom. Being in an adversarial situation, agent Aag might decide to take actions that damage one of its adversary's plans to complete its goal, even if those actions do not explicitly advance Aag toward its own conflicting goal G*_Aag. Such a preventive action takes place when agent Aag believes that its adversary Ao may do an action β that would give Ao a high utility evaluation value (> TrH). Believing that taking action α will prevent the opponent from doing β, it adopts a potential intention to do α.
(∀Aag, Ao ∈ A, α ∈ C_Aag, β ∈ C_Ao, Tn, Tβ, w ∈ W) (Bel(Aag, Do(Ao, β, Tβ, w) ∧ Eval(Ao, β, w) > TrH, Tn) ∧ Bel(Aag, Do(Aag, α, Tα, w) ⇒ ¬Do(Ao, β, Tβ, w), Tn)) ⇒ Pot.Int.To(Aag, α, Tn, Tα, w)

This axiom is a basic component of any adversarial environment. For example, looking
at a Chess board, a player could realize that it is about to be checkmated by its opponent, and therefore make a preventive move. Another example is the Connect Four game: when a player has a row of three chips, its opponent must block it or lose.

A specific instance of A1 occurs when the adversary is one action away from achieving its goal, and an immediate preventive action needs to be taken by the agent. Formally, we have the same beliefs as stated above, with the changed belief that doing action β will cause agent Ao to achieve its goal.

Proposition 1: Prevent-or-lose case.
(∀Aag, Ao ∈ A, α ∈ C_Aag, β ∈ C_Ao, G*_Ao, Tn, T_α, T_β, w ∈ W) Bel(Aag, Do(Ao, β, T_β, w) ⇒ Achieve(G*_Ao, β, w), Tn) ∧ Bel(Aag, Do(Aag, α, T_α, w) ⇒ ¬Do(Ao, β, T_β, w)) ⇒ Pot.Int.To(Aag, α, Tn, T_α, w)

Sketch of proof: Proposition 1 can easily be derived from axiom A1 and property 2 of the Eval function, which states that any action causing the completion of a goal is a highly beneficial action. The preventive-act behavior will occur implicitly when the Eval function equals the real-world utility function. However, since we deal with bounded rational agents and an estimated evaluation function, we need to axiomatize such behavior explicitly, for it will not always emerge implicitly from the evaluation function.

A3. Suboptimal Tactical Move Axiom. In many scenarios a situation may occur where an agent decides not to take the currently most beneficial action available (the action with the maximal utility evaluation value), because it believes that taking another action (with a lower utility evaluation value) might yield, depending on the adversary's response, a future opportunity for a highly beneficial action. This will occur most often when the Eval function is inaccurate and differs to a large extent from the Utility function. Put formally, agent
Aag believes in a certain SetAction that will evolve from its initial action and will yield a highly beneficial value (> TrH) solely for itself.

(∀Aag, Ao ∈ A, Tn, w ∈ W) (∃α_1, ..., α_u ∈ C_Aag, β_1, ..., β_v ∈ C_Ao, T_α1) Bel(Aag, SetAction(α_1, ..., α_u, β_1, ..., β_v), Tn) ∧ Bel(Aag, Eval(Ao, β_v, w) < TrH < Eval(Aag, α_u, w), Tn) ⇒ Pot.Int.To(Aag, α_1, Tn, T_α1, w)

An agent might believe that such a chain of events will occur for various reasons, for instance due to the inevitable nature of the domain. In Chess, for example, we often observe the following: a move causes a check position, which limits the opponent's moves to avoiding the check, to which the first player might react with another check, and so on. The agent might also believe in a chain of events based on its knowledge of its adversary's profile, which allows it to foresee the adversary's moves with high accuracy.

A4. Profile Detection Axiom. The agent can adjust its adversaries' profiles through observation and pattern study (specifically, when there are repeated encounters with the same adversary). However, instead of waiting for profile information to be revealed, an agent can also initiate actions that force its adversary to react in a way that reveals profile knowledge about it. Formally, the axiom states that if all of the agent's actions (γ) are not highly beneficial (< TrH), the agent can do action α at time T_α if it believes that this will result in a non-highly-beneficial action β from its adversary, which in turn teaches it about the adversary's profile, i.e., gives a higher AdvKnow(P^Ao_Aag, T_β).

(∀Aag, Ao ∈ A, α ∈ C_Aag, β ∈ C_Ao, Tn, T_α, T_β, w ∈ W) Bel(Aag, (∀γ ∈ C_Aag) Eval(Aag, γ, w) < TrH, Tn) ∧ Bel(Aag, Do(Aag, α, T_α, w) ⇒ Do(Ao, β, T_β, w), Tn) ∧
Bel(Aag, Eval(Ao, β, w) < TrH) ∧ Bel(Aag, AdvKnow(P^Ao_Aag, T_β) > AdvKnow(P^Ao_Aag, Tn), Tn) ⇒ Pot.Int.To(Aag, α, Tn, T_α, w)

For example, returning to the Chess scenario, consider starting a game against an opponent about whom we know nothing, not even whether it is a human or a computerized player. We might start with a strategy suitable against an average opponent, and adjust our game according to its level of play.

A5. Alliance Formation Axiom. The following behavioral axiom is relevant only in a multilateral instantiation of the adversarial environment (obviously, an alliance cannot be formed in a bilateral, zero-sum encounter). At different points during a multilateral interaction, a group of agents might believe that it is in their best interest to form a temporary alliance. Such an alliance is an agreement that constrains its members' behavior, but is believed by its members to enable them to achieve a higher utility value than is achievable outside the alliance. As an example, consider the classical Risk board game, where each player has the individual goal of being the sole conqueror of the world, a zero-sum game. However, in order to achieve this goal, it might be strategically wise to make short-term ceasefire agreements with other players, or to join forces and attack an opponent who is stronger than the rest.

An alliance's terms define the way its members should act. They are a set of predicates, denoted Terms, agreed upon by the alliance members, which should remain true for the duration of the alliance. For example, the set Terms in the Risk scenario could contain the following predicates:
1. Alliance members will not attack each other on territories X, Y and Z;
2. Alliance members will contribute C units per turn for attacking adversary Ao;
3. Members are obligated to stay part of the alliance until time Tk, or until adversary Ao's army is smaller than
Q.

The set Terms specifies inter-group constraints on each alliance member's (∀A^al_i ∈ A^al ⊆ A) set of actions C^al_i ⊆ C.

Definition 6. Al_val - the total evaluation value that agent Ai will achieve while being part of A^al: the sum of Eval_i (Eval for Ai) over the α actions taken by each member (the acting member is identified via the agent(α) predicate):
Al_val(Ai, C^al, A^al, w) = Σ_{α ∈ C^al} Eval_i(agent(α), α, w)

Definition 7. Al_TrH - a number representing an Al_val threshold; above it, the alliance can be said to be a highly beneficial alliance. The value of Al_TrH is calculated dynamically according to the progress of the interaction, as can be seen in [7].

After an alliance is formed, its members work in their normal adversarial environment, as well as according to the mental states and axioms required for their interactions as part of the alliance. The following Alliance model (AL) specifies the conditions under which the group A^al can be said to be in an alliance, working with a new and constrained set of actions C^al, at time Tn.

AL(A^al, C^al, w, Tn)
1. A^al has a mutual belief (MB) that all members are part of A^al: MB(A^al, (∀A^al_i ∈ A^al) member(A^al_i, A^al), Tn)
2. A^al has a MB that the group should be maintained: MB(A^al, (∀A^al_i ∈ A^al) Int.Th(Ai, member(Ai, A^al), Tn, Tn+1, Co), Tn)
3. A^al has a MB that membership gives its members a high utility value: MB(A^al, (∀A^al_i ∈ A^al) Al_val(A^al_i, C^al, A^al, w) ≥ Al_TrH, Tn)

Members' profiles are a crucial part of successful alliances. We assume that agents that hold more accurate profiles of their adversaries will be more successful in such environments. Such agents will be able to predict when a member is about to breach the alliance's contract (item 2 in the above model), and take countermeasures (when item 3 becomes false). The robustness of the alliance is partly a function of its members' trustfulness measures, objective position estimations, and other profile properties. We should note that an agent can simultaneously be part of more than one alliance.

Such a temporary alliance, where the group members do not have a joint goal but act collaboratively in the interest of their own individual goals, is classified by modern psychologists as a Treatment Group [12] (in contrast to a Task Group, whose members have a joint goal). The Shared Activity model presented in [5] modeled Treatment Group behavior using the same SharedPlans formalization. When comparing the definitions of an alliance and of a Treatment Group, we found an unsurprising resemblance between the two models: the environment models' definitions are almost identical (see SA's definitions in [5]), and their Selfish-Act and Cooperative-Act axioms conform to our adversarial agent's behavior. The main distinction between the two models is the Helpful-behavior act axiom in the Shared Activity model, which cannot be part of ours. That axiom states that an agent will consider taking an action that lowers its Eval value (down to a certain lower bound) if it believes that a group partner will gain a significant benefit. Such behavior cannot occur in a purely adversarial environment (as a zero-sum game is), where alliance members are constantly watching for ways to manipulate their alliance to their own advantage.

A6. Evaluation Maximization Axiom. When all other axioms are inapplicable, the agent proceeds with the action that maximizes the heuristic value as computed by the Eval function.

(∀Aag, Ao ∈ A, α ∈ C_Aag, Tn, w ∈ W) Bel(Aag, (∀γ ∈ C_Aag) Eval(Aag, α, w) ≥ Eval(Aag, γ, w), Tn) ⇒ Pot.Int.To(Aag, α, Tn, T_α, w)

T1. Optimality when Eval = Utility. The above axiomatic model handles situations where the Utility is unknown and the agents
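Read procedurally, axioms A1-A6 form a precedence ordering, with A6 applying only when the others do not. A minimal Python sketch of such an axiom-ordered decision procedure is given below; the Beliefs container and all of its fields (goal_actions, adversary, prevented_by, set_action, probe) are hypothetical stand-ins for the paper's Bel(...) modalities, not part of the formal model:

```python
class Beliefs:
    """Toy belief store; every field is a hypothetical stand-in for a
    Bel(...) modality of the formal model."""
    def __init__(self, goal_actions=(), adversary=None, prevented_by=None,
                 set_action=None, probe=None):
        self.goal_actions = set(goal_actions)         # actions achieving G*_Aag (A1)
        self.adversary = dict(adversary or {})        # beta -> Eval(Ao, beta)   (A2)
        self.prevented_by = dict(prevented_by or {})  # beta -> blocking alpha   (A2)
        self.set_action = set_action                  # believed SetAction chain (A3)
        self.probe = probe                            # profile-revealing action (A4)

def choose_action(actions, beliefs, eval_fn, tr_h):
    # A1 (Goal Achieving): complete the goal if one action away.
    for a in actions:
        if a in beliefs.goal_actions:
            return a
    # A2 (Preventive Act): block an adversary action believed to be
    # highly beneficial for the adversary (Eval > TrH).
    for b, v in beliefs.adversary.items():
        if v > tr_h and beliefs.prevented_by.get(b) in actions:
            return beliefs.prevented_by[b]
    # A3 (Suboptimal Tactical Move): open a believed SetAction chain
    # that ends in a highly beneficial action for us.
    if beliefs.set_action:
        return beliefs.set_action[0]
    # A4 (Profile Detection): with no highly beneficial action available,
    # probe to gain adversary-profile knowledge.
    if beliefs.probe and all(eval_fn(a) < tr_h for a in actions):
        return beliefs.probe
    # A6 (Evaluation Maximization): default to the max-Eval action.
    return max(actions, key=eval_fn)
```

The ordering mirrors the case analysis used later in the proof sketch of Theorem 1 (A5 is omitted here, as it only applies in multilateral encounters).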
are bounded rational agents. The following theorem shows that in bilateral interactions where the agents have the real Utility function (i.e., Eval = Utility) and are rational agents, the axioms provide the same optimal result as classic adversarial search (e.g., Min-Max).

Theorem 1. Let A^e_ag be an unbounded rational AE agent using the Eval heuristic evaluation function, A^u_ag be the same agent using the true Utility function, and Ao be a sole unbounded utility-based rational adversary. Given that Eval = Utility:
(∀α ∈ C_{A^u_ag}, α′ ∈ C_{A^e_ag}, Tn, w ∈ W) Pot.Int.To(A^u_ag, α, Tn, T_α, w) → Pot.Int.To(A^e_ag, α′, Tn, T_α, w) ∧ ((α = α′) ∨ (Utility(A^u_ag, α, w) = Eval(A^e_ag, α′, w)))

Sketch of proof - Given that A^u_ag has the real utility function and unbounded resources, it can generate the full game tree and run the optimal Min-Max algorithm to choose the highest-utility action, which we denote by α. The proof shows that A^e_ag, using the AE axioms, will select the same α, or one of equal utility (when there is more than one action with the same maximal utility), when Eval = Utility.

(A1) Goal achieving axiom - suppose there is an α whose completion achieves A^u_ag's goal. It will obtain the highest utility by Min-Max for A^u_ag. The A^e_ag agent will select α, or another action with the same utility value, via A1. If no such α exists, A^e_ag cannot apply this axiom and proceeds to A2.

(A2) Preventive act axiom - (1) In the basic case (see Proposition 1), if there is a β which leads Ao to achieve its goal, then a preventive action α yields the highest utility for A^u_ag. A^u_ag will choose it through the utility, while A^e_ag will choose it through A2. (2) In the general case, β is a highly beneficial action for Ao and thus yields low utility for A^u_ag, which will guide it to select an α that prevents β, while A^e_ag will choose it through A2.¹ If no such β exists for Ao, then A2 is not applicable, and A^e_ag proceeds to A3.

(A3) Suboptimal tactical move axiom - When using a heuristic Eval function, A^e_ag has only a partial belief in the profile of its adversary (item 4 in the AE model), which may lead it to believe in SetActions. In our case, A^e_ag holds a full profile of its optimal adversary and knows that Ao will behave optimally according to the real utility values on the complete search tree; therefore, no belief about a suboptimal SetAction can exist, rendering this axiom inapplicable. A^e_ag proceeds to A4.

(A4) Profile detection axiom - Given that A^e_ag has the full profile of Ao, none of A^e_ag's actions can increase its knowledge. This axiom will not be applied, and the agent proceeds with A6 (A5 is disregarded because the interaction is bilateral).

(A6) Evaluation maximization axiom - This axiom selects the maximal Eval for A^e_ag. Given that Eval = Utility, the same α that was selected by A^u_ag will be selected.

¹ A case where, following the completion of β, there exists a γ which gives high utility for agent A^u_ag cannot occur, because Ao uses the same utility, and γ's existence would cause it to classify β as a low-utility action.

3. EVALUATION

The main purpose of our experimental analysis is to evaluate the model's behavior and performance in a real adversarial environment. This section investigates whether bounded rational agents situated in such adversarial environments are better off applying our suggested behavioral axioms.

3.1 The Domain

To explore the use of the above model and its behavioral axioms, we used the Connect-Four game as our adversarial environment. Connect-Four is a 2-player, zero-sum game played on a 6x7 matrix-like board. Each turn, a player drops a disc into
one of the 7 columns (the set of 21 discs is usually colored yellow for player 1 and red for player 2; we will use White and Black, respectively, to avoid confusion). The winner is the first player to complete a horizontal, vertical, or diagonal set of four discs of its color. On very rare occasions the game ends in a tie, if all the empty squares are filled but no player has managed to create a 4-disc set.

The Connect-Four game was solved in [1], where it is shown that the first player (playing the white discs) can force a win by starting in the middle column (column 4) and playing optimally. However, the optimal strategy is very complex, and difficult to follow even for sophisticated bounded rational agents, such as human players.

Before we proceed to examine agent behavior, we must first verify that the domain conforms to the adversarial environment definition given above (on which the behavioral axioms are based). First, when playing a Connect-Four game, the agent has an intention to win the game (item 1). Second (item 2), our agent believes that in Connect-Four there can be only one winner (or no winner at all, in the rare occurrence of a tie). In addition, our agent believes that its opponent in the game will try to win (item 3), and we expect it to have some partial knowledge (item 4) about its adversary (this knowledge can vary from nothing, through simple facts such as age, to strategies and weaknesses).

Of course, not all Connect-Four encounters are adversarial. For example, when a parent plays the game with their child, the following situation might occur: the child, having a strong incentive to win, treats the environment as adversarial (it intends to win, understands that there can be only one winner, and believes that its parent is trying to beat it). However, from the parent's point of view the environment may be an educational one, where the goal is not to win the game, but to create enjoyment or practice strategic reasoning. In such
an educational environment, a new set of behavioral axioms might serve the parent's goals better than our suggested adversarial behavioral axioms.

3.2 Axiom Analysis

Having shown that the Connect-Four game is indeed a zero-sum, bilateral adversarial environment, the next step is to look at players' behavior during the game and check whether behaving according to our model does improve performance. To do so, we collected log files of completed Connect-Four games played by human players over the Internet. Our log file data came from Play-by-eMail (PBeM) sites: web sites that host email games, where each move is made through an email exchange between the server and the players. Many such sites' archives contain real competitive interactions, and the sites also maintain ranking systems for their members. Most of the data we used can be found in [6].

As can be learned from [1], Connect-Four has an optimal strategy and gives a considerable advantage to the player who starts the game (whom we call the White player). We concentrate in our analysis on the second player's moves (the Black player). The White player, being the first to act, has the so-called initiative advantage. Having the advantage and a good strategy will keep the Black player busy reacting to White's moves instead of initiating threats. A threat is a combination of three discs of the same color with an empty spot for the fourth, winning disc. An open threat is a threat that can be realized in the opponent's next move. In order for the Black player to win, it must somehow turn the tide, take the advantage, and start presenting threats to the White player.

We explore Black players' behavior and their conformance to our axioms. To do so, we built an application that reads the log files and analyzes the Black player's moves. The application contains two main components: (1) a Min-Max algorithm for the evaluation of moves; (2) an open threats detector for the
discovery of open threats.

The Min-Max algorithm works to a given depth d; for each move α it outputs the heuristic value of the action actually taken by the player, as recorded in the log file, h(α), alongside the maximum heuristic value, maxh(α), that could have been achieved at that point (obviously, if h(α) < maxh(α), the player did not make the heuristically optimal move). The threat detector's job is to notify us if an action was taken in order to block an open threat (not blocking an open threat will most likely cause the player to lose on the opponent's next move).

The heuristic function used by Min-Max to evaluate the player's utility is the following function, which is simple to compute yet provides a reasonable challenge to human opponents.

Definition 8. Let a Group be an adjacent set of four squares, horizontal, vertical, or diagonal, and let Group^n_b (Group^n_w) be a Group with n pieces of the black (white) color and 4−n empty squares. Then:
h = ((Group^1_b × α) + (Group^2_b × β) + (Group^3_b × γ) + (Group^4_b × ∞)) − ((Group^1_w × α) + (Group^2_w × β) + (Group^3_w × γ) + (Group^4_w × ∞))

The values of α, β and γ can vary to form any desired linear combination; however, it is important to choose them with the α < β < γ ordering in mind (we used 1, 4, and 8 as their respective values). A Group of 4 discs of the same color means victory; thus the discovery of such a group results in ∞, ensuring an extreme value.

We now use our estimated evaluation function to evaluate the Black player's actions during the Connect-Four adversarial interaction. Each game in the log file was fed into the application, which processed it and output a reformatted log file containing the h value of each move, the maxh value that could have been achieved, and a notification whenever an open threat was detected. A total of 123 games were analyzed (57
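Definition 8 can be implemented directly. The following minimal Python sketch (the 6x7 list-of-lists board encoding with 'B', 'W', and '.' squares, and all function names, are our own, not the paper's) also includes a depth-limited Min-Max evaluation of the kind used by the analysis application; h is computed from Black's perspective, i.e., black groups minus white groups:

```python
import math
from itertools import product

ROWS, COLS = 6, 7
WEIGHTS = {1: 1, 2: 4, 3: 8, 4: math.inf}  # alpha, beta, gamma, "infinity"

def windows(board):
    # Every Group: 4 adjacent squares, horizontal, vertical, or diagonal.
    for r, c in product(range(ROWS), range(COLS)):
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            cells = [(r + i * dr, c + i * dc) for i in range(4)]
            if all(0 <= x < ROWS and 0 <= y < COLS for x, y in cells):
                yield [board[x][y] for x, y in cells]

def side_score(board, color):
    # Weighted count of Group^n: n discs of `color` and 4-n empty squares.
    total = 0
    for w in windows(board):
        n = w.count(color)
        if n and w.count('.') == 4 - n:
            total += WEIGHTS[n]
    return total

def h(board):
    # Definition 8: black-minus-white weighted group counts.
    return side_score(board, 'B') - side_score(board, 'W')

def legal_moves(board):
    return [c for c in range(COLS) if board[0][c] == '.']

def drop(board, col, color):
    # A move: the disc falls to the lowest empty square of the column.
    new = [row[:] for row in board]
    for r in range(ROWS - 1, -1, -1):
        if new[r][col] == '.':
            new[r][col] = color
            return new

def minmax(board, depth, to_move):
    # Depth-limited Min-Max; Black maximizes h, White minimizes it.
    moves = legal_moves(board)
    if depth == 0 or not moves or math.isinf(abs(h(board))):
        return h(board)
    vals = (minmax(drop(board, c, to_move), depth - 1,
                   'W' if to_move == 'B' else 'B') for c in moves)
    return max(vals) if to_move == 'B' else min(vals)
```

With these weights, a first disc dropped in the middle column scores highest, which is consistent with the optimal opening reported in [1].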
with White winning, and 66 with Black winning). A few additional games were manually excluded from the experiment due to the following problems: a player abandoning the game before the outcome was final, or a bluntly irrational move in the early stages of the game (e.g., not blocking an obvious winning group in the first opening moves). In addition, a single tie game was removed. The simulator was run to a search depth of 3 moves. We now proceed to analyze the games with respect to each behavioral axiom.

Table 1: Average heuristic difference analysis
                                       Black lost    Black won
Avg. min_h                             -17.62        -12.02
Avg. of 3 lowest h moves (min3_h)      -13.20        -8.70

3.2.1 Affirming the Suboptimal Tactical Move Axiom

This section presents the heuristic evaluations of the Min-Max algorithm for each action, and checks the number and extent of suboptimal tactical actions and their implications for performance. Table 1 shows results and insights from the games' heuristic analysis, with search depth equal to 3 (this search depth was selected so that the results would be comparable to [9]; see Section 3.2.3). The table's heuristic data is the difference between the maximal heuristic value available and the heuristic value of the action that was eventually taken by the player (i.e., the closer the number is to 0, the closer the action was to the maximum-heuristic action). The first row presents the difference values of the action that had the maximal difference value among all of the Black player's actions in a given game, averaged over all of Black's winning and losing games (see the respective columns). In games that the Black player lost, its average difference value was -17.62, while in games that the Black player won, its average was -12.02. The second row expands the analysis by considering the 3 highest heuristic-difference actions and averaging them. In that case, we notice an average
heuristic difference of 5 points between games that the Black player lost and games that it won. More importantly, those numbers allowed us to make an educated guess of a threshold value of 11.5 for the TrH constant, which differentiates between normal actions and highly beneficial ones.

After estimating the TrH constant, we proceeded with an analysis of the importance of suboptimal moves. To do so, we grouped the games according to the average of the 3 largest heuristic-difference values among Black's actions (min3_h). As presented in Table 2, we can see the min3_h ranges and the respective percentages of games won. The first row shows that the Black player won only 12% of the games in which its min3_h was below -11.5, i.e., beyond the suggested threshold TrH = 11.5. The second row shows a surprising result: it seems that when min3_h > -4, the Black player rarely wins. Intuition would suggest that games in which the action evaluation values were closer to the maximal values would result in more wins for Black. However, it seems that in the Connect-Four domain, merely responding with somewhat easily expected actions, without initiating a few surprising and suboptimal moves, does not yield good results. The last row sums up the main insight of the analysis: most of Black's wins (83%) came when its min3_h was in the range of -11.5 to -4.

Table 2: Black's winning percentages
                         % of games
min3_h < -11.5           12%
min3_h > -4              5%
-11.5 ≤ min3_h ≤ -4      83%

A close inspection of those Black winning games reveals the following pattern behind the numbers: after standard opening moves, Black suddenly drops a disc into an isolated column, which seems a waste of a move. White continues to build its threats, usually disregarding Black's last move, which in turn uses the isolated disc as an anchor for a future winning threat.

The results show that it was beneficial for the Black player to take suboptimal actions that do not yield the current highest possible heuristic value, but that are not too harmful to its position (i.e., that do not give a highly beneficial value to its adversary). As it turns out, learning the threshold is an important aspect of success: taking wildly risky moves (min3_h < -11.5) or trying to avoid them (min3_h > -4) reduces the Black player's winning chances by a large margin.

3.2.2 Affirming the Profile Monitoring Axiom

For the task of showing the importance of monitoring one's adversaries' profiles, our log files could not be used, because they do not contain repeated interactions between the same players, which are needed to infer the players' knowledge about their adversaries. However, the importance of opponent modeling and its use in attaining tactical advantages has already been studied in various domains ([3, 9] are good examples). In a recent paper, Markovitch and Reger [9] explored the notion of learning and exploiting opponent weaknesses in competitive interactions. They apply simple learning strategies by analyzing examples from past interactions in a specific domain. They also used the Connect-Four adversarial domain, which can now be used to understand the importance of monitoring the adversary's profile. Following the presentation of their theoretical model, they describe an extensive empirical study and measure the agent's performance after learning the weakness model from past examples. One of the domains used as a competitive environment was the same Connect-Four game (Checkers was the second domain). Their heuristic function was identical to ours, with three variations (H1, H2, and H3) distinguished from one another by their linear-combination coefficient values. The search depth for the players was 3 (as in our analysis). Their extensive experiments check and compare various learning strategies, risk
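The min3_h statistic and the Table 2 ranges can be sketched in a few lines (a sketch under our own naming: diffs holds, per move, the value h(α) − maxh(α) ≤ 0, and the range labels are ours):

```python
def min3_h(diffs):
    # Average of the three most negative heuristic differences,
    # i.e., the three moves that strayed furthest from the best move.
    return sum(sorted(diffs)[:3]) / 3

def classify(diffs, tr_h=11.5):
    # Bucket a game into the three min3_h ranges of Table 2.
    m = min3_h(diffs)
    if m < -tr_h:
        return 'too risky'         # Table 2: Black won only 12% of these
    if m > -4:
        return 'too conservative'  # Table 2: Black won only 5% of these
    return 'tactical range'        # Table 2: 83% of Black's wins fall here
```

For example, a game whose three largest deviations average -16 falls into the risky bucket, while one averaging -2 falls into the conservative bucket.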
factors, predefined feature sets, and usage methods. The bottom line is that the Connect-Four domain shows an improvement from a 0.556 winning rate before modeling to 0.69 after modeling (page 22). Their conclusions, showing improved performance when holding and using the adversary's model, justify the effort of monitoring the adversary's profile in continuous and repeated interactions.

An additional point that came up in their experiments is the following: after the opponent weakness model has been learned, the authors describe different methods of integrating it into the agent's decision strategy. Regardless of the specific method chosen, all integration methods might cause the agent to make suboptimal decisions: the agent may prefer actions that are suboptimal at the present decision junction, but that might cause the opponent to react in accordance with its weakness model (as represented by our agent), which in turn will be beneficial for us in the future. The agent's behavior, as demonstrated in [9], further confirms and strengthens our Suboptimal Tactical Move Axiom, as discussed in the previous section.

3.2.3 Additional Insights

The need for the Goal Achieving, Preventive Act, and Evaluation Maximization axioms is obvious, and requires no further verification. However, even with respect to those axioms, a few interesting insights came up in the log analysis. The Goal Achieving and Preventive Act axioms, though theoretically trivial, seem to provide some challenge to human players. In the initial inspection of the logs, we encountered a few games² in which a player, for inexplicable reasons, did not block the other from winning, or failed to execute its own winning move. We can blame these faults on the human's lack of attention, or on a typing error in the move reply; nevertheless, such errors might occur
in bounded rational agents, and the appropriate behavior needs to be axiomatized.

A typical Connect-Four game revolves around generating threats and blocking them. In our analysis we looked for explicit preventive actions, i.e., moves that block a group of 3 discs, or that remove a future threat (within our limited search horizon). We found that in 83% of the games there was at least one preventive action taken by the Black player. We also found that Black averaged 2.8 preventive actions per game in the games it lost, while averaging 1.5 preventive actions per game when winning. It seems that Black requires 1 or 2 preventive actions to build its initial position before starting to present threats; if it does not manage to win, it will usually prevent an extra threat or two before succumbing to White.

4. RELATED WORK

Much research deals with the axiomatization of teamwork and the mental states of individuals: some models use knowledge and belief [10], others have models of goals and intentions [8, 4]. However, all of these formal theories deal with agent teamwork and cooperation. As far as we know, our model is the first to provide a formalized model for explicit adversarial environments and agents' behavior in them.

The classical Min-Max adversarial search algorithm was the first attempt to integrate the opponent into the search space, under the weak assumption of an optimally playing opponent. Since then, much effort has gone into integrating the opponent model into the decision procedure to predict future behavior. The M* algorithm presented by Carmel and Markovitch [2] showed a method of incorporating opponent models into adversarial search, while in [3] they used learning to provide a more accurate opponent model in a 2-player repeated-game environment, where agents' strategies were modeled as finite automata. Additional adversarial planning work was done by Willmott et al.
[13], who provided an adversarial planning approach to the game of Go.

The research mentioned above dealt with adversarial search and the integration of opponent models into classical utility-based search methods. That work shows the importance of opponent modeling and the ability to exploit it to an agent's advantage. However, the basic limitations of those search methods still apply; our model tries to overcome those limitations by presenting a formal model for a new, mental-state-based adversarial specification.

5. CONCLUSIONS

We presented an Adversarial Environment model for a bounded rational agent situated in an N-player, zero-sum environment. We used the SharedPlans formalization to define the model and the axioms that agents can apply as behavioral guidelines. The model is meant to be used as a guideline for designing agents that need to operate in such adversarial environments. We presented empirical results, based on Connect-Four log file analysis, that exemplify the model and the axioms for a bilateral instance of the environment. The results we presented are a first step toward an expanded model that will cover all types of adversarial environments, for example, environments that are non-zero-sum, and environments that contain neutral agents that are not part of the direct conflict. These challenges and more will be dealt with in future research.

² These were later removed from the final analysis.

6. ACKNOWLEDGMENT

This research was supported in part by Israel Science Foundation grants #1211/04 and #898/05.

7. REFERENCES

[1] L. V. Allis. A knowledge-based approach of Connect-Four - the game is solved: White wins. Master's thesis, Free University, Amsterdam, The Netherlands, 1988.
[2] D. Carmel and S. Markovitch. Incorporating opponent models into adversary search. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 120-125, Portland, OR, 1996.
[3] D. Carmel and S.
Markovitch. Opponent modeling in multi-agent systems. In G. Weiß and S. Sen, editors, Adaptation and Learning in Multi-Agent Systems, pages 40-52. Springer-Verlag, 1996.
[4] B. J. Grosz and S. Kraus. Collaborative plans for complex group action. Artificial Intelligence, 86(2):269-357, 1996.
[5] M. Hadad, G. Kaminka, G. Armon, and S. Kraus. Supporting collaborative activity. In Proc. of AAAI-2005, pages 83-88, Pittsburgh, 2005.
[6] http://www.gamerz.net/~pbmserv/.
[7] S. Kraus and D. Lehmann. Designing and building a negotiating automated agent. Computational Intelligence, 11:132-171, 1995.
[8] H. J. Levesque, P. R. Cohen, and J. H. T. Nunes. On acting together. In Proc. of AAAI-90, pages 94-99, Boston, MA, 1990.
[9] S. Markovitch and R. Reger. Learning and exploiting relative weaknesses of opponent agents. Autonomous Agents and Multi-Agent Systems, 10(2):103-130, 2005.
[10] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. MIT Press, Cambridge, MA, 1995.
[11] P. Thagard. Adversarial problem solving: Modeling an opponent using explanatory coherence. Cognitive Science, 16(1):123-149, 1992.
[12] R. W. Toseland and R. F. Rivas. An Introduction to Group Work Practice. Prentice Hall, Englewood Cliffs, NJ, 2nd edition, 1995.
[13] S. Willmott, J. Richardson, A. Bundy, and J.
Levine.\nAn adversarial planning approach to Go.\nLecture Notes in Computer Science, 1558:93-112, 1999.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 557","lvl-2":"An Adversarial Environment Model for Bounded Rational Agents in Zero-Sum Interactions\nABSTRACT\nMultiagent environments are often neither cooperative nor collaborative; in many cases, agents have conflicting interests, leading to adversarial interactions.\nThis paper presents a formal Adversarial Environment model for bounded rational agents operating in a zero-sum environment.\nIn such environments, attempts to use classical utility-based search methods can raise a variety of difficulties (e.g., implicitly modeling the opponent as an
omniscient utility maximizer, rather than leveraging a more nuanced, explicit opponent model).\nWe define an Adversarial Environment by describing the mental states of an agent in such an environment.\nWe then present behavioral axioms that are intended to serve as design principles for building such adversarial agents.\nWe explore the application of our approach by analyzing log files of completed Connect-Four games, and present an empirical analysis of the axioms' appropriateness.\n1.\nINTRODUCTION\nEarly research in multiagent systems (MAS) considered cooperative groups of agents; because individual agents had limited resources, or limited access to information (e.g., limited processing power, limited sensor coverage), they worked together by design to solve problems that individually they could not solve, or at least could not solve as efficiently.\nMAS research, however, soon began to consider interacting agents with individuated interests, as representatives of different humans or organizations with non-identical interests.\nWhen interactions are guided by diverse interests, participants may have to overcome disagreements, uncooperative interactions, and even intentional attempts to damage one another.\nWhen these types of interactions occur, environments require appropriate behavior from the agents situated in them.\nWe call these environments Adversarial Environments, and call the clashing agents Adversaries.\nModels of cooperation and teamwork have been extensively explored in MAS through the axiomatization of mental states (e.g., [8, 4, 5]).\nHowever, none of this research dealt with adversarial domains and their implications for agent behavior.\nOur paper addresses this issue by providing a formal, axiomatized mental state model for a subset of adversarial domains, namely simple zero-sum adversarial environments.\nSimple zero-sum encounters exist of course in various two-player games (e.g., Chess, Checkers), but they also exist in n-player games (e.g.,
Risk, Diplomacy), auctions for a single good, and elsewhere.\nIn these latter environments especially, using a utility-based adversarial search (such as the Min-Max algorithm) does not always provide an adequate solution; the payoff function might be quite complex or difficult to quantify, and there are natural computational limitations on bounded rational agents.\nIn addition, traditional search methods (like Min-Max) do not make use of a model of the opponent, which has proven to be a valuable addition to adversarial planning [9, 3, 11].\nIn this paper, we develop a formal, axiomatized model for bounded rational agents that are situated in a zero-sum adversarial environment.\nThe model uses different modality operators, and its main foundations are the SharedPlans [4] model for collaborative behavior.\nWe explore environment properties and the mental states of agents to derive behavioral axioms; these behavioral axioms constitute a formal model that serves as a specification and design guideline for agent design in such settings.\nWe then investigate the behavior of our model empirically using the Connect-Four board game.\nWe show that this game conforms to our environment definition, and analyze players' behavior using a large set of completed match log files.\n978-81-904262-7-5 (RPS) © 2007 IFAAMAS\nIn addition, we use the results presented in [9] to discuss the importance of opponent modeling in our Connect-Four adversarial domain.\nThe paper proceeds as follows.\nSection 2 presents the model's formalization.\nSection 3 presents the empirical analysis and its results.\nWe discuss related work in Section 4, and conclude and present future directions in Section 5.\n2.\nADVERSARIAL ENVIRONMENTS\nThe adversarial environment model (denoted as AE) is intended to guide the design of agents by providing a specification of the capabilities and mental attitudes of an agent in an adversarial environment.\nWe focus here on specific types of adversarial environments,
specified as follows:\n1.\nZero-Sum Interactions: positive and negative utilities of all agents sum to zero; 2.\nSimple AEs: all agents in the environment are adversarial agents; 3.\nBilateral AEs: AEs with exactly two agents; 4.\nMultilateral AEs: AEs of three or more agents.\nWe consider both bilateral and multilateral instantiations of zero-sum and simple environments.\nIn particular, our adversarial environment model will deal with interactions that consist of N agents (N ≥ 2), where all agents are adversaries, and only one agent can succeed.\nExamples of such environments range from board games (e.g., Chess, Connect-Four, and Diplomacy) to certain economic environments (e.g., N-bidder auctions over a single good).\n2.1 Model Overview\nOur approach is to formalize the mental attitudes and behaviors of a single adversarial agent; we consider how a single agent perceives the AE.\nThe following list specifies the conditions and mental states of an agent in a simple, zero-sum AE:\n1.\nThe agent has an individual intention that its own goal will be completed; 2.\nThe agent has an individual belief that it and its adversaries are pursuing full conflicting goals (defined below)--there can be only one winner; 3.\nThe agent has an individual belief that each adversary has an intention to complete its own full conflicting goal; 4.\nThe agent has an individual belief in the (partial) profile of its adversaries.\nItem 3 is required, since it might be the case that some agent has a full conflicting goal, and is currently considering adopting the intention to complete it, but is, as yet, not committed to achieving it.\nThis might occur because the agent has not yet deliberated about the effects that adopting that intention might have on the other intentions it is currently holding.\nIn such cases, it might not consider itself to even be in an adversarial environment.\nItem 4 states that the agent should hold some belief about the profiles of its adversaries.\nThe
profile represents all the knowledge the agent has about its adversary: its weaknesses, strategic capabilities, goals, intentions, trustworthiness, and more.\nIt can be given explicitly or can be learned from observations of past encounters.\n2.2 Model Definitions for Mental States\nWe use Grosz and Kraus's definitions of the modal operators, predicates, and meta-predicates, as defined in their SharedPlan formalization [4].\nWe recall here some of the predicates and operators that are used in that formalization: Int.To (Ai, α, Tn, Tα, C) represents Ai's intentions at time Tn to do an action α at time Tα in the context of C. Int.Th (Ai, prop, Tn, Tprop, C) represents Ai's intentions at time Tn that a certain proposition prop holds at time Tprop in the context of C.\nThe potential intention operators, Pot.Int.To (...) and Pot.Int.Th (...), are used to represent the mental state when an agent considers adopting an intention, but has not deliberated about the interaction of the other intentions it holds.\nThe operator Bel (Ai, f, Tf) represents agent Ai believing in the statement expressed in formula f, at time Tf.\nMB (A, f, Tf) represents mutual belief for a group of agents A.\nA snapshot of the system finds our environment to be in some state e ∈ E of environmental variable states, and each adversary in any LAi ∈ L of possible local states.\nAt any given time step, the system will be in some world w of the set of all possible worlds w ∈ W, where w = E × LA1 × LA2 × ... × LAn, and n is the number of adversaries.\nFor example, in a Texas Hold 'em poker game, an agent's local state might be its own set of cards (which is unknown to its adversary) while the environment will consist of the betting pot and the community cards (which are visible to both players).\nA utility function under this formalization is defined as a mapping from a possible world w ∈ W to an element in ℝ, which expresses the desirability of the world, from a single agent
perspective.\nWe usually normalize the range to [0,1], where 0 represents the least desirable possible world, and 1 is the most desirable world.\nThe implementation of the utility function is dependent on the domain in question.\nThe following list specifies new predicates, functions, variables, and constants used in conjunction with the original definitions for the adversarial environment formalization:\n1.\nφ is a null action (the agent does not do anything).\n2.\nGAi is the set of agent Ai's goals.\nEach goal is a set of predicates whose satisfaction makes the goal complete (we use G*Ai ∈ GAi to represent an arbitrary goal of agent Ai).\n3.\ngAi is the set of agent Ai's subgoals.\nSubgoals are predicates whose satisfaction represents an important milestone toward achievement of the full goal.\ngG*Ai ⊆ gAi is the set of subgoals that are important to the completion of goal G*Ai (we will use g*G*Ai ∈ gG*Ai to represent an arbitrary subgoal).\n4.\nPAj Ai is the profile object agent Ai holds about agent Aj.\n5.\nCA is a general set of actions for all agents in A which are derived from the environment's constraints.\nCAi ⊆ CA is the set of agent Ai's possible actions.\n6.\nDo (Ai, α, Tα, w) holds when Ai performs action α over time interval Tα in world w. 7.\nAchieve (G*Ai, α, w) is true when goal G*Ai is achieved following the completion of action α in world w ∈ W, where α ∈ CAi.\n8.\nProfile (Ai, PAj Ai) is true when agent Ai holds an object profile for agent Aj.\nDefinition 2.\nAdvKnow--A function returning a value which represents the amount of knowledge agent Ai has on the profile of agent Aj, at time Tn.\nThe higher the value, the more knowledge agent Ai has.\nAdvKnow: PAj Ai × Tn → ℝ Definition 3.\nEval--This evaluation function returns an estimated expected utility value for an agent in A, after completing an action from CA in some world state w. Eval: A × CA × w → ℝ Definition 4.\nTrH--(Threshold) is a numerical constant in the [0,1] range that represents an evaluation function (Eval) threshold value.\nAn action that yields an estimated utility evaluation above the TrH is regarded as a highly beneficial action.\nThe Eval value is an estimation and not the real utility function, which is usually unknown.\nUsing the real utility value for a rational agent would easily yield the best outcome for that agent.\nHowever, agents usually do not have the real utility functions, but rather a heuristic estimate of it.\nThere are two important properties that should hold for the evaluation function: Property 1.\nThe evaluation function should state that the most desirable world state is one in which the goal is achieved.\nTherefore, after the goal has been satisfied, there can be no future action that can put the agent in a world state with higher Eval value.\n(∀ Ai, G*Ai, α, β ∈ CAi, w ∈ W) Achieve (G*Ai, α, w) ⇒ Eval (Ai, α, w) ≥ Eval (Ai, β, w) Property 2.\nThe evaluation function should project an action that causes a completion of a goal or a subgoal to a value which is greater than TrH (a highly beneficial action).\nDefinition 5.\nSetAction--We define a set action (SetAction) as a set of action operations (either complex or basic actions) from some action sets CAi and CAj which, according to agent Ai's belief, are attached together by a temporal and consequential relationship, forming a chain of events (an action, and its following consequent action).\nThe
consequential relation might exist due to various environmental constraints (when one action forces the adversary to respond with a specific action) or due to the agent's knowledge about the profile of its adversary.\nProperty 3.\nAs the knowledge we have about our adversary increases, we will have additional beliefs about its behavior in different situations, which in turn creates new set actions.\nFormally, if our AdvKnow at time Tn+1 is greater than AdvKnow at time Tn, then every SetAction known at time Tn is also known at time Tn+1.\nBel (Aag, SetAction (α1, ..., αu, β1, ..., βv), Tn) ⇒ Bel (Aag, SetAction (α1, ..., αu, β1, ..., βv), Tn+1)\n2.3 The Environment Formulation\nThe following axioms provide the formal definition for a simple, zero-sum Adversarial Environment (AE).\nSatisfaction of these axioms means that the agent is situated in such an environment.\nIt provides specifications for agent Aag to interact with its set of adversaries A with respect to goals G*Aag and G*A at time TCo at some world state w.
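To make the roles of Eval and TrH concrete, the following is a minimal, hypothetical Python sketch (not code from the paper; all names such as choose_action, eval_estimate, and TRH are invented for illustration). It mimics the ordering of the behavioral axioms presented below: complete the goal if possible (A1), block an adversary's highly beneficial action (A2), and otherwise maximize the estimated evaluation (A6).

```python
# Illustrative sketch only -- not the paper's implementation.
# Models a bounded rational agent choosing among actions using the
# estimated evaluation function Eval (Definition 3) and the threshold
# TrH (Definition 4). All names here are hypothetical.

TRH = 0.8  # threshold above which an action counts as "highly beneficial"

def eval_estimate(action, world):
    """Stand-in for the domain-dependent Eval heuristic, normalized to [0, 1]."""
    return world.get(action, 0.0)

def highly_beneficial(actions, world):
    """Actions whose estimated value exceeds TrH."""
    return [a for a in actions if eval_estimate(a, world) > TRH]

def choose_action(actions, world, goal_achieving=None, preventive=None):
    """Pick an action following the axiom ordering sketched in Section 2.3."""
    if goal_achieving in actions:   # A1: one action away from the goal
        return goal_achieving
    if preventive in actions:       # A2: block an adversary's highly
        return preventive           #     beneficial action
    # A6: otherwise maximize the estimated evaluation
    return max(actions, key=lambda a: eval_estimate(a, world))

world = {"a": 0.3, "b": 0.9, "c": 0.5}
print(highly_beneficial(["a", "b", "c"], world))  # only "b" exceeds TrH
print(choose_action(["a", "b", "c"], world))      # no A1/A2 case, so A6 applies
print(choose_action(["a", "b", "c"], world, preventive="a"))  # A2 overrides A6
```

The point of the ordering is that goal-achieving and preventive actions take precedence over naive Eval maximization, which is exactly the behavior the axioms make explicit for bounded agents whose Eval is only an estimate of the true utility.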
AE (Aag, A, G*Aag, A1, ..., Ak, G*A1, ..., G*Ak, Tn, w)\n1.\nAag has an Int.Th that his goal will be completed; 2.\nAag believes that it and its adversaries are pursuing full conflicting goals; 3.\nAag believes that each adversary has an intention to complete its own full conflicting goal; 4.\nAag has beliefs about the (partial) profiles of its adversaries.\nTo build an agent that will be able to operate successfully within such an AE, we must specify behavioral guidelines for its interactions.\nUsing a naive Eval maximization strategy to a certain search depth will not always yield satisfactory results, for several reasons: (1) the search horizon problem when searching to a fixed depth; (2) the strong assumption of an optimally rational adversary with unbounded resources; (3) the use of an estimated evaluation function, which will not give optimal results in all world states and can be exploited [9].\nThe following axioms specify the behavioral principles that differentiate between successful and less successful agents in the above Adversarial Environment.\nThey should be used as specification principles when designing and implementing agents that must perform well in such Adversarial Environments.\nThe behavioral axioms represent situations in which the agent will adopt potential intentions (Pot.Int.To (...)) to perform an action, which will typically require some means-end reasoning to select a possible course of action.\nThis reasoning will lead to the adoption of an Int.To (...)
(see [4]).\nA1.\nGoal Achieving Axiom.\nThe first axiom is the simplest case; when the agent Aag believes that it is one action (α) away from achieving its conflicting goal G*Aag, it should adopt the potential intention to do α and complete its goal.\n(∀ Aag, α ∈ CAag, Tn, Tα, w ∈ W) Bel (Aag, Do (Aag, α, Tα, w) ⇒ Achieve (G*Aag, α, w), Tn) ⇒ Pot.Int.To (Aag, α, Tn, Tα, w)\nThis somewhat trivial behavior is the first and strongest axiom.\nIn any situation, when the agent is an action away from completing the goal, it should complete the action.\nAny fair Eval function would naturally classify α as the maximal value action (Property 1).\nHowever, without explicit axiomatization of such behavior there might be situations where the agent will decide on taking another action for various reasons, due to its bounded decision resources.\nA2.\nPreventive Act Axiom.\nBeing in an adversarial situation, agent Aag might decide to take actions that will damage one of its adversary's plans to complete its goal, even if those actions do not explicitly advance Aag towards its conflicting goal G*Aag.\nSuch preventive action will take place when agent Aag has a belief about the possibility of its adversary Ao doing an action β that will give it a high utility evaluation value (> TrH).\nBelieving that taking action α will prevent the opponent from doing its β, it will adopt a potential intention to do α.\nThis axiom is a basic component of any adversarial environment.\nFor example, looking at a Chess board game, a player could realize that it is about to be checkmated by its opponent, thus making a preventive move.\nAnother example is a Connect-Four game: when a player has a row of three chips, its opponent must block it, or lose.\nA specific instance of A1 occurs when the adversary is one action away from achieving its goal, and immediate preventive action needs to be taken by the agent.\nFormally, we have the same beliefs as stated above, with a changed belief that doing action β will cause agent Ao to achieve its goal.\nProposition 1: Prevent or lose case.\nSketch of proof: Proposition 1 can be easily derived from axiom A1 and Property 2 of the Eval function, which states that any action that causes a completion of a goal is a highly beneficial action.\nThe preventive act behavior will occur implicitly when the Eval function is equal to the real world utility function.\nHowever, being bounded rational agents and dealing with an estimated evaluation function, we need to explicitly axiomatize such behavior, for it will not always occur implicitly from the evaluation function.\nA3.\nSuboptimal Tactical Move Axiom.\nIn many scenarios a situation may occur where an agent will decide not to take the current most beneficial action it can take (the action with the maximal utility evaluation value), because it believes that taking another action (with lower utility evaluation value) might yield (depending on the adversary's response) a future possibility for a highly beneficial action.\nThis will occur most often when the Eval function is inaccurate and differs by a large extent from the Utility function.\nPut formally, agent Aag
believes in a certain SetAction that will evolve according to its initial action and will yield a highly beneficial value (> TrH) solely for it.\nAn agent might believe that a chain of events will occur for various reasons due to the inevitable nature of the domain.\nFor example, in Chess, we often observe the following: a move causes a check position, which in turn limits the opponent's moves to avoiding the check, to which the first player might react with another check, and so on.\nThe agent might also believe in a chain of events based on its knowledge of its adversary's profile, which allows it to foresee the adversary's movements with high accuracy.\nA4.\nProfile Detection Axiom.\nThe agent can adjust its adversaries' profiles through observation and pattern study (specifically, if there are repeated encounters with the same adversary).\nHowever, instead of waiting for profile information to be revealed, an agent can also initiate actions that will force its adversary to react in a way that will reveal profile knowledge about it.\nFormally, the axiom states that if no available action (γ) is a highly beneficial action (its Eval value is below TrH), the agent should consider initiating such profile-revealing actions.\nMembers' profiles are a crucial part of successful alliances.\nWe assume that agents that have more accurate profiles of their adversaries will be more successful in such environments.\nSuch agents will be able to predict when a member is about to breach the alliance's contract (item 2 in the above model), and take countermeasures (when item 3 becomes false).\nThe robustness of the alliance is in part a function of its members' trustfulness measure, objective position estimation, and other profile properties.\nWe should note that an agent can simultaneously be part of more than one alliance.\nSuch a temporary alliance, where the group members do not have a joint goal but act collaboratively for the interest of their own individual goals, is classified as a Treatment Group by modern psychologists [12] (in contrast to a Task Group, where
its members have a joint goal). The Shared Activity model as presented in [5] modeled Treatment Group behavior using the same SharedPlans formalization. When comparing the definitions of an alliance and a Treatment Group, we found an unsurprising resemblance between the two models: the environment models' definitions are almost identical (see SA's definitions in [5]), and their Selfish-Act and Cooperative-Act axioms conform to our adversarial agent's behavior. The main distinction between the two models is the integration of a Helpful-behavior act axiom in the Shared Activity, which cannot be part of ours. This axiom states that an agent will consider taking an action that lowers its Eval value (to a certain lower bound) if it believes that a group partner will gain a significant benefit. Such behavior cannot occur in a pure adversarial environment (as a zero-sum game is), where the alliance members are constantly on watch to manipulate their alliance to their own advantage.

A6. Evaluation Maximization Axiom. In a case where all other axioms are inapplicable, we proceed with the action that maximizes the heuristic value as computed by the Eval function.

T1. Optimality on Eval = Utility. The above axiomatic model handles situations where the Utility is unknown and the agents are bounded rational agents. The following theorem shows that in bilateral interactions, where the agents have the real Utility function (i.e., Eval = Utility) and are rational agents, the axioms provide the same optimal result as classic adversarial search (e.g., Min-Max).

THEOREM 1. Let Aeag be an unbounded rational AE agent using the Eval heuristic evaluation function, Auag be the same agent using the true Utility function, and Ao be a sole unbounded utility-based rational adversary. Given that Eval = Utility:

Sketch of proof: Given that Auag has the real utility function and unbounded resources, it can generate the full game tree and run the optimal Min-Max algorithm to choose
the highest utility value action, which we denote by α. The proof shows that Aeag, using the AE axioms, will select the same α or an action of equal utility (when there is more than one action with the same max utility) when Eval = Utility. (A1) Goal achieving axiom: suppose there is an α such that its completion will achieve Auag's goal. It will obtain the highest utility by Min-Max for Auag. The Aeag agent will select α, or another action with the same utility value, via A1. If such an α does not exist, Aeag cannot apply this axiom, and proceeds to A2. (A2) Preventive act axiom: (1) Looking at the basic case (see Prop. 1), if there is a β which leads Ao to achieve its goal, then a preventive action α will yield the highest utility for Auag. Auag will choose it through the utility, while Aeag will choose it through A2. (2) In the general case, β is a highly beneficial action for Ao and thus yields low utility for Auag, which will guide it to select an α that prevents β, while Aeag will choose it through A2.¹ If such a β does not exist for Ao, then A2 is not applicable, and Aeag can proceed to A3. (A3) Suboptimal tactical move axiom: when using a heuristic Eval function, Aeag has a partial belief in the profile of its adversary (item 4 in the AE model), which may lead it to believe in SetActions (Prop. 1). In our case, Aeag holds a full profile of its optimal adversary and knows that Ao will behave optimally according to the real utility values on the complete search tree; therefore, any belief about a suboptimal SetAction cannot exist, rendering this axiom inapplicable. Aeag will proceed to A4. (A4) Profile detection axiom: given that Aeag has the full profile of Ao, none of Aeag's actions can increase its knowledge. That axiom will not be applied, and the agent will proceed with A6 (A5 is disregarded because the interaction is bilateral). (A6) Evaluation maximization axiom: this axiom will select the max Eval
for Aeag. Given that Eval = Utility, the same α that was selected by Auag will be selected.

3. EVALUATION

The main purpose of our experimental analysis is to evaluate the model's behavior and performance in a real adversarial environment. This section investigates whether bounded rational agents situated in such adversarial environments will be better off applying our suggested behavioral axioms.

¹ A case where, following the completion of β, there exists a γ which gives high utility for agent Auag cannot occur, because Ao uses the same utility, and γ's existence would cause it to classify β as a low-utility action.

3.1 The Domain

To explore the use of the above model and its behavioral axioms, we decided to use the Connect-Four game as our adversarial environment. Connect-Four is a 2-player, zero-sum game which is played on a 6×7 matrix-like board. Each turn, a player drops a disc into one of the 7 columns (each player's set of 21 discs is usually colored yellow for player 1 and red for player 2; we will use White and Black respectively to avoid confusion). The winner is the first player to complete a horizontal, vertical, or diagonal set of four discs of its color. On very rare occasions, the game might end in a tie, if all the empty grid squares are filled but no player has managed to create a 4-disc set. The Connect-Four game was solved in [1], where it is shown that the first player (playing with the white discs) can force a win by starting in the middle column (column 4) and playing optimally. However, the optimal strategy is very complex, and difficult to follow even for complex bounded rational agents, such as human players. Before we can proceed to checking agent behavior, we must first verify that the domain conforms to the adversarial environment's definition as given above (on which the behavioral axioms are based). First, when playing a
Connect-Four game, the agent has an intention to win the game (item 1). Second (item 2), our agent believes that in Connect-Four there can be only one winner (or no winner at all, in the rare occurrence of a tie). In addition, our agent believes that its opponent in the game will try to win (item 3), and we hope it has some partial knowledge (item 4) about its adversary (this knowledge can vary from nothing, through simple facts such as age, to strategies and weaknesses). Of course, not all Connect-Four encounters are adversarial. For example, when a parent plays the game with their child, the following situation might occur: the child, having a strong incentive to win, treats the environment as adversarial (it intends to win, understands that there can be only one winner, and believes that its parent is trying to beat it). However, from the parent's point of view the environment might be an educational one, where the goal is not to win the game, but to create enjoyment or to practice strategic reasoning. In such an educational environment, a new set of behavioral axioms might be more beneficial to the parent's goals than our suggested adversarial behavioral axioms.

3.2 Axiom Analysis

Having shown that the Connect-Four game is indeed a zero-sum, bilateral adversarial environment, the next step is to look at players' behavior during the game and check whether behaving according to our model does improve performance. To do so we collected log files from completed Connect-Four games that were played by human players over the Internet. Our log file data came from Play by eMail (PBeM) sites. These are web sites that host email games, where each move is taken via an email exchange between the server and the players. Many such sites' archives contain real competitive interactions, and the sites also maintain a ranking system for their members. Most of the data we used can be found in [6]. As can be learned from [1], Connect-Four has an optimal strategy
and a considerable advantage for the player who starts the game (whom we call the White player). We concentrate our analysis on the second player's moves (the Black player). The White player, being the first to act, has the so-called initiative advantage. Having the advantage and a good strategy will keep the Black player busy reacting to White's moves instead of initiating threats. A threat is a combination of three discs of the same color with an empty spot for the fourth, winning disc. An open threat is a threat that can be realized in the opponent's next move. In order for the Black player to win, it must somehow turn the tide, take the advantage, and start presenting threats to the White player. We will explore Black players' behavior and their conformance to our axioms. To do so, we built an application that reads log files and analyzes the Black player's moves. The application contains two main components: (1) a Min-Max algorithm for the evaluation of moves; (2) an open-threat detector for discovering open threats. The Min-Max algorithm works to a given depth d, and for each move α outputs the heuristic value of the next action taken by the player as written in the log file, h(α), alongside the maximum heuristic value, maxh(α), that could have been achieved prior to taking the move (obviously, if h(α) ≠ maxh(α), then the player did not make the heuristically optimal move). The threat detector's job is to notify if some action was taken in order to block an open threat (not blocking an open threat will probably cause the player to lose on the opponent's next move). The heuristic function used by Min-Max to evaluate the player's utility is the following function, which is simple to compute, yet provides a reasonable challenge to human opponents: Definition 8. Let a Group be an adjacent set of four squares that are horizontal, vertical, or diagonal, and let Group^n_b (Group^n_w) be a Group with n pieces of the black
(white) color and 4 − n empty squares. The values of α, β, and δ can vary to form any desired linear combination; however, it is important to assign them with the α < β < δ ordering in mind (we used 1, 4, and 8 as their respective values). A group of 4 discs of the same color means victory, thus the discovery of such a group results in a value of ∞ to ensure an extreme value. We now use our estimated evaluation function to evaluate the Black player's actions during the Connect-Four adversarial interaction. Each game from the log file was input into the application, which processed and output a reformatted log file containing the h value of the current move, the maxh value that could have been achieved, and a notification if an open threat was detected. A total of 123 games were analyzed (57 with White winning, and 66 with Black winning). A few additional games were manually excluded from the experiment due to the following problems: a player abandoning the game while the outcome was not final, or a bluntly irrational move in the early stages of the game (e.g., not blocking an obvious winning group in the first opening moves). In addition, a single tie game was also removed. The simulator was run to a search depth of 3 moves. We now proceed to analyze the games with respect to each behavioral axiom.
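The group-counting heuristic described in Definition 8 can be sketched in code. The following is a minimal illustration, not the paper's implementation: the board encoding ('B'/'W'/None cells in a 6×7 grid) and all function names are our own assumptions; only the weights 1, 4, 8 (for α, β, δ) and the ∞ value for a completed 4-group come from the text.

```python
# Sketch of the Connect-Four group-counting heuristic, from Black's
# perspective. The alpha < beta < delta weights 1, 4, 8 follow the paper;
# the board encoding and function names are our own assumptions.
ROWS, COLS = 6, 7
WEIGHTS = {1: 1, 2: 4, 3: 8}  # alpha, beta, delta

def groups(board):
    """Yield every adjacent set of four squares (horizontal, vertical,
    or diagonal) as a list of its four cell values."""
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in dirs:
                cells = []
                for k in range(4):
                    rr, cc = r + dr * k, c + dc * k
                    if 0 <= rr < ROWS and 0 <= cc < COLS:
                        cells.append(board[rr][cc])
                if len(cells) == 4:  # skip windows that run off the board
                    yield cells

def evaluate(board):
    """Positive scores favor Black; a completed 4-group is +/- infinity."""
    score = 0.0
    for g in groups(board):
        b, w = g.count('B'), g.count('W')
        if b and w:              # a mixed group can never be completed
            continue
        if b == 4:
            return float('inf')
        if w == 4:
            return float('-inf')
        if b:
            score += WEIGHTS[b]
        if w:
            score -= WEIGHTS[w]
    return score
```

Plugging such an evaluation into a depth-3 Min-Max search reproduces the kind of h(α) and maxh(α) values the log-analysis application compares.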
Table 1: Average heuristic difference analysis

3.2.1 Affirming the Suboptimal Tactical Move Axiom

This section presents the heuristic evaluations of the Min-Max algorithm for each action, and checks the number and extent of suboptimal tactical actions and their implications for performance. Table 1 shows results and insights from the games' heuristic analysis, with search depth equal to 3 (this search depth was selected so that the results are comparable to [9]; see Section 3.2.3). The table's heuristic data is the difference between the maximal heuristic value available and the heuristic value of the action that was eventually taken by the player (i.e., the closer the number is to 0, the closer the action was to the maximum heuristic action). The first row presents the difference values of the action that had the maximal difference value among all the Black player's actions in a given game, averaged over all of Black's winning and losing games (see the respective columns). In games in which the Black player lost, its average difference value was −17.62, while in games in which the Black player won, its average was −12.02. The second row expands the analysis by considering the 3 highest heuristic difference actions and averaging them. In that case, we notice an average heuristic difference of 5 points between games which the Black player loses and games in which it wins. Nevertheless, the importance of those numbers is that they allowed us to make an educated guess at a threshold value of 11.5 for the TrH constant, which differentiates between normal actions and highly beneficial ones. After finding an approximate TrH constant, we can proceed with an analysis of the importance of suboptimal moves. To do so we took the subset of games in which the minimum heuristic difference value for Black's actions was 11.5. As presented in Table 2, we can see the different
min3h averages of the 3 largest ranges and the respective percentage of games won. The first row shows that the Black player won only 12% of the games in which the average of its 3 highest heuristic-difference actions (min3h) was smaller than the suggested threshold, TrH = 11.5. The second row shows a surprising result: it seems that when min3h > −4 the Black player rarely wins. Intuition would suggest that games in which the action evaluation values were closer to the maximal values would result in more winning games for Black. However, it seems that in the Connect-Four domain, merely responding with somewhat easily expected actions, without initiating a few surprising and suboptimal moves, does not yield good results. The last row sums up the main insight of the analysis: most of Black's wins (83%) came when its min3h was in the range of −11.5 to −4. A close inspection of those Black winning games shows the following pattern behind the numbers: after standard opening moves, Black suddenly drops a disc into an isolated column, which seems a waste of a move. White continues to build its threats, usually disregarding Black's last move, which in turn uses the isolated disc as an anchor for a future winning threat.

Table 2: Black's winning percentages

The results show that it was beneficial for the Black player to take suboptimal actions that do not give the current highest possible heuristic value, but are not too harmful for its position (i.e., do not give a highly beneficial value to its adversary). As it turned out, learning the threshold is an important aspect of success: taking wildly risky moves (min3h < −11.5) or trying to avoid them (min3h > −4) reduces the Black player's winning chances by a large margin.

3.2.2 Affirming the Profile Monitoring Axiom

In the task of showing the importance of monitoring one's adversaries' profiles, our log files could not be used, because they did not contain repeated interactions between players, which are
needed to infer the players' knowledge about their adversaries. However, the importance of opponent modeling and its use in attaining tactical advantages has already been studied in various domains ([3, 9] are good examples). In a recent paper, Markovitch and Reger [9] explored the notion of learning and exploiting opponent weaknesses in competitive interactions. They apply simple learning strategies by analyzing examples from past interactions in a specific domain. Following the presentation of their theoretical model, they describe an extensive empirical study that checks the agent's performance after learning a weakness model from past examples. One of the domains used as a competitive environment was the same Connect-Four game (Checkers was the second domain), which can now be used to understand the importance of monitoring the adversary's profile. Their heuristic function was identical to ours, with three variations (H1, H2, and H3) that are distinguished from one another by their linear combination coefficient values. The search depth for the players was 3 (as in our analysis). Their extensive experiments check and compare various learning strategies, risk factors, predefined feature sets, and usage methods. The bottom line is that the Connect-Four domain shows an improvement from a 0.556 winning rate before modeling to 0.69 after modeling (page 22). Their conclusions, showing improved performance when holding and using the adversary's model, justify the effort to monitor the adversary's profile in continuous and repeated interactions. An additional point that came up in their experiments is the following: after the opponent weakness model has been learned, the authors describe different methods of integrating the opponent weakness model into the agent's decision strategy. Nevertheless, regardless of the specific method they chose to work with, all integration methods might cause the
agent to take suboptimal decisions; they might cause the agent to prefer actions that are suboptimal at the present decision junction, but which might cause the opponent to react in accordance with its weakness model (as represented by our agent), which in turn will be beneficial for us in the future. The agent's behavior as demonstrated in [9] further confirms and strengthens our Suboptimal Tactical Move Axiom as discussed in the previous section.

3.2.3 Additional Insights

The need for the Goal Achieving, Preventive Act, and Evaluation Maximization axioms is obvious, and requires no further verification. However, even with respect to those axioms, a few interesting insights came up in the log analysis. The Goal Achieving and Preventive Act axioms, though theoretically trivial, seem to provide some challenge to a human player. In the initial inspection of the logs, we encountered a few games² where a player, for inexplicable reasons, did not block the other from winning or failed to execute its own winning move. We can blame those faults on the human's lack of attention, or on a typing error in its move reply; nevertheless, such errors might occur in bounded rational agents, and the appropriate behavior needs to be axiomatized. A typical Connect-Four game revolves around generating threats and blocking them. In our analysis we looked for explicit preventive actions, i.e., moves that block a group of 3 discs, or that remove a future threat (within our limited search horizon). We found that in 83% of the games there was at least one preventive action taken by the Black player. It was also found that Black averaged 2.8 preventive actions per game in the games it lost, while averaging 1.5 preventive actions per game when winning. It seems that Black requires 1 or 2 preventive actions to build its initial position, before starting to present threats. If it did
not manage to win, it will usually prevent an extra threat or two before succumbing to White.

4. RELATED WORK

Much research deals with the axiomatization of teamwork and the mental states of individuals: some models use knowledge and belief [10], others have models of goals and intentions [8, 4]. However, all these formal theories deal with agent teamwork and cooperation. As far as we know, our model is the first to provide a formalized model for explicit adversarial environments and agents' behavior in them. The classical Min-Max adversarial search algorithm was the first attempt to integrate the opponent into the search space, with the weak assumption of an optimally playing opponent. Since then, much effort has gone into integrating the opponent model into the decision procedure to predict future behavior. The M* algorithm presented by Carmel and Markovitch [2] showed a method of incorporating opponent models into adversarial search, while in [3] they used learning to provide a more accurate opponent model in a 2-player repeated-game environment, where agents' strategies were modeled as finite automata. Additional adversarial planning work was done by Willmott et al.
[13], who provided an adversarial planning approach to the game of Go. The research mentioned above dealt with adversarial search and the integration of opponent models into classical utility-based search methods. That work shows the importance of opponent modeling and the ability to exploit it to an agent's advantage. However, the basic limitations of those search methods still apply; our model tries to overcome those limitations by presenting a formal model for a new, mental-state-based adversarial specification.

5. CONCLUSIONS

We presented an Adversarial Environment model for a bounded rational agent that is situated in an N-player, zero-sum environment. We used the SharedPlans formalization to define the model and the axioms that agents can apply as behavioral guidelines. The model is meant to be used as a guideline for designing agents that need to operate in such adversarial environments. We presented empirical results, based on Connect-Four log file analysis, that exemplify the model and the axioms for a bilateral instance of the environment. The results we presented are a first step towards an expanded model that will cover all types of adversarial environments, for example, environments that are non-zero-sum, and environments that contain natural agents that are not part of the direct conflict. Those challenges and more will be dealt with in future research.

² These games were later removed from the final analysis.

A Formal Road from Institutional Norms to Organizational Structures

Abstract: Up to now, the way institutions and organizations have
been used in the development of open systems has not often gone further than a useful heuristics. In order to develop systems actually implementing institutions and organizations, formal methods should take the place of heuristic ones. The paper presents a formal semantics for the notion of institution and its components (abstract and concrete norms, empowerment of agents, roles) and defines a formal relation between institutions and organizational structures. As a result, it is shown how institutional norms can be refined to constructs (organizational structures) which are closer to an implemented system. It is also shown how such a refinement process can be fully formalized and is therefore amenable to rigorous verification.

Davide Grossi, Utrecht University, PO Box 80.089, 3508TB Utrecht, The Netherlands, davide@cs.uu.nl
Frank Dignum, Utrecht University, PO Box 80.089, 3508TB Utrecht, The Netherlands, dignum@cs.uu.nl
John-Jules Ch. Meyer, Utrecht University, PO Box 80.089, 3508TB Utrecht, The Netherlands, jj@cs.uu.nl

Categories and Subject Descriptors: F.4.1 [Mathematical Logic and Formal
Languages]: Modal Logic; I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems, Coherence and Coordination.

General Terms: Theory.

1. INTRODUCTION

The opportunity of a technology transfer from the field of organizational and social theory to distributed AI and multiagent systems (MASs) has long been advocated ([8]). In MASs the application of the organizational and institutional metaphors to system design has proven useful for the development of methodologies and tools. In many cases, however, the application of these conceptual apparatuses amounts to mere heuristics guiding the high-level design of the systems. It is our thesis that the application of those apparatuses can be pushed further once their key concepts are treated formally, that is, once notions such as norm, role, structure, etc. obtain a formal semantics. This has been the case for agent programming languages after the relevant concepts borrowed from folk psychology (belief, intention, desire, knowledge, etc.)
have been addressed in comprehensive formal logical theories such as, for instance, BDICTL ([22]) and KARO ([17]). As a matter of fact, those theories have fostered the production of architectures and programming languages. What is lacking at the moment for the design and development of open MASs is, in our opinion, something that can play the role that BDI-like formalisms have played for the design and development of single-agent architectures. The aim of the present paper is to fill this gap with respect to the notion of institution, providing formal foundations for the application of the institutional metaphor and for its relation to the organizational one. The main result of the paper consists in showing how abstract constraints (institutions) can be step by step refined to concrete structural descriptions (organizational structures) of the to-be-implemented system, thus bridging the gap between abstract norms and concrete system specifications. Concretely, in Section 2 a logical framework is presented which provides a formal semantics for the notions of institution, norm, and role, and which supports the account of key features of institutions such as the translation of abstract norms into concrete and implementable ones, the institutional empowerment of agents, and some aspects of the design of norm enforcement. In Section 3 the framework is extended to deal with the notion of the infrastructure of an institution. The extended framework is then studied in relation to the formalism for representing organizational structures presented in [11]. In Section 4 some conclusions follow.

2. INSTITUTIONS

Social theory usually thinks of institutions as the rules of the game ([18, 23]). From an agent perspective institutions are, to paraphrase this quote, the rules of the various games agents can play in order to interact with one another. To assume an institutional perspective on MASs therefore means to think of MASs in normative terms: [...
] law, computer systems, and many other kinds of organizational structure may be viewed as instances of normative systems. We use the term to refer to any set of interacting agents whose behavior can usefully be regarded as governed by norms ([15], p. 276). The normative-system perspective on institutions is, as such, nothing original, and it is already a quite acknowledged position within the community working on electronic institutions, or eInstitutions ([26]). What has not been sufficiently investigated and understood with formal methods is, in our view, the question: what does it amount to, for a MAS, to be put under a set of norms?

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

Or in other words: what does it mean for a designer of an eInstitution to state a set of norms? We advance a precise thesis on this issue, which is also inspired by work in social theory: Now, as the original manner of producing physical entities is creation, there is hardly a better way to describe the production of moral entities than by the word 'imposition' [impositio]. For moral entities do not arise from the intrinsic substantial principles of things but are superadded to things already existent and physically complete ([21], pp.
100-101). By ignoring for a second the philosophical jargon of the seventeenth century, we can easily extract an illuminating message from the excerpt: what institutions do is impose properties on already existing entities. That is to say, institutions provide descriptions of entities by making use of conceptualizations that are not proper to the common descriptions of those entities. For example, that cars have wheels is a common factual property, whereas the fact that cars count as vehicles in some technical legal sense is a property that law imposes on the concept car. To say it with [25], the fact that cars have wheels is a brute fact, while the fact that cars are vehicles is an institutional fact. Institutions build structured descriptions of institutional properties upon brute descriptions of a given domain. At this point, the step toward eInstitutions is natural. eInstitutions impose properties on the possible states of a MAS: they specify what are the states in which an agent i enacts a role r, what are the states in which a certain agent is violating the norms of the institution, etc. They do this by linking some institutional properties of the possible states and transitions of the system (e.g., agent i enacts role r) to some brute properties of those states and transitions (e.g., agent i performs protocol No. 56). An institutional property is therefore a property of system states or system transitions (i.e., a state type or a transition type) that does not belong to a merely technical, or factual, description of the system. To sum up, institutions are viewed as sets of norms (the normative-system perspective), and norms are thought of as the imposition of an institutional description of the system upon its description in terms of brute properties. In a nutshell, institutions are impositions of institutional terminologies upon brute ones. The following sections provide a formal analysis of this thesis and show its explanatory power in delivering
a rigorous understanding of key features of institutions. Because of its suitability for representing complex domain descriptions, the formal framework we will make use of is that of Description Logics (DL). The use of this formalism will also stress the idea of viewing institutions as impositions of domain descriptions.

2.1 Preliminaries: a very expressive DL

The description logic language enabling the necessary expressivity expands the standard description logic language ALC ([3]) with relational operators (⊔, ◦, ¬, id) to express complex transition types, and relational hierarchies (H) to express inclusion between transition types. Following a notational convention common within DL, we denote this language by ALCH(⊔,◦,¬,id).

DEFINITION 1. (Syntax of ALCH(⊔,◦,¬,id)) Transition type and state type constructs are defined by the following BNF:

α := a | α ◦ α | α ⊔ α | ¬α | id(γ)
γ := c | ⊥ | ¬γ | γ ⊓ γ | ∀α.γ

where a and c are atomic transition types and, respectively, atomic state types. It is worth providing the intuitive reading of a couple of the operators and constructs just introduced. In particular, ∀α.γ is to be read as: after all executions of transitions of type α, states of type γ are reached. The operator ◦ denotes the concatenation of transition types. The operator id applies to a state description γ and yields a transition description, namely, the transition ending in γ states. It is the description logic variant of the test operator in Dynamic Logic ([5]). Notice that we use the same symbols ⊔ and ¬ for denoting the boolean operators of disjunction and negation of both state and transition types. Atomic state types c are often indexed by an agent identifier i in order to express agent properties (e.g., dutch(i)), and atomic transition types a are often indexed by a pair
of agent identifiers (i, j) (e.g., PAY(i, j)) denoting the actor and, respectively, the recipient of the transition. By removing the agent identifiers from state types and transition types we obtain state type forms (e.g., dutch or rea(r)) and transition type forms (e.g., PAY).

A terminological box (henceforth TBox) T = ⟨Γ, A⟩ consists of a finite set Γ of state type inclusion assertions (γ1 ⊑ γ2), and of a finite set A of transition type inclusion assertions (α1 ⊑ α2).

The semantics of ALCH(∪,◦,¬,id) is model-theoretical and is given in terms of interpreted transition systems. As usual, state types are interpreted as sets of states and transition types as sets of state pairs.

DEFINITION 2. (Semantics of ALCH(∪,◦,¬,id)) An interpreted transition system m for ALCH(∪,◦,¬,id) is a structure ⟨S, I⟩ where S is a non-empty set of states and I is a function such that:
I(c) ⊆ S
I(a) ⊆ S × S
I(⊥) = ∅
I(¬γ) = S \ I(γ)
I(γ1 ⊓ γ2) = I(γ1) ∩ I(γ2)
I(∀α.γ) = {s ∈ S | ∀t, (s, t) ∈ I(α) ⇒ t ∈ I(γ)}
I(α1 ∪ α2) = I(α1) ∪ I(α2)
I(¬α) = (S × S) \ I(α)
I(α1 ◦ α2) = {(s, s″) | ∃s′, (s, s′) ∈ I(α1) & (s′, s″) ∈ I(α2)}
I(id(γ)) = {(s, s) | s ∈ I(γ)}

An interpreted transition system m is a model of a state type inclusion assertion γ1 ⊑ γ2 if I(γ1) ⊆ I(γ2). It is a model of a transition type inclusion assertion α1 ⊑ α2 if I(α1) ⊆ I(α2). An interpreted transition system m is a model of a TBox T = ⟨Γ, A⟩ if m is a model of each inclusion assertion in Γ and A.
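Definition 2 lends itself to direct computation over finite models. The following Python sketch is our own illustration, not part of the paper: all names (`States`, `forall`, `comp`, the PAY/viol example) are illustrative assumptions. It evaluates the interpretation clauses of Definition 2 over a small hand-built transition system:

```python
from itertools import product

# Illustrative sketch of Definition 2: an interpreted transition system
# over a finite state set, with evaluators for the state-type and
# transition-type constructors of ALCH(∪,◦,¬,id).

States = {0, 1, 2}

def neg_state(g):            # I(¬γ) = S \ I(γ)
    return States - g

def conj(g1, g2):            # I(γ1 ⊓ γ2) = I(γ1) ∩ I(γ2)
    return g1 & g2

def forall(a, g):            # I(∀α.γ): states all of whose α-successors satisfy γ
    return {s for s in States
            if all(t in g for (x, t) in a if x == s)}

def union_t(a1, a2):         # I(α1 ∪ α2) = I(α1) ∪ I(α2)
    return a1 | a2

def neg_t(a):                # I(¬α) = (S × S) \ I(α)
    return set(product(States, States)) - a

def comp(a1, a2):            # I(α1 ◦ α2): relational composition
    return {(s, u) for (s, t) in a1 for (t2, u) in a2 if t == t2}

def id_t(g):                 # I(id(γ)) = {(s, s) | s ∈ I(γ)}
    return {(s, s) for s in g}

# Example model: PAY transitions and a 'viol' state type.
PAY = {(0, 1), (1, 2)}
viol = {2}

# ∀PAY.viol holds in the states from which every PAY transition ends in viol
# (vacuously so in states with no PAY successor).
print(forall(PAY, viol))
```

Checking an inclusion assertion γ1 ⊑ γ2 in such a model then amounts to a subset test between the two computed state sets.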
REMARK 1. (Derived constructs) The correspondence between description logic and dynamic logic is well-known ([3]). In fact, the language presented in Definitions 1 and 2 is a notational variant of the language of Dynamic Logic ([5]) without the iteration operator on transition types. As a consequence, some key constructs are still definable in ALCH(∪,◦,¬,id). In particular we will make use of the following definition of the if-then-else transition type: if γ then α1 else α2 = (id(γ) ◦ α1) ∪ (id(¬γ) ◦ α2). Boolean operators are defined as usual. We will come back to some complexity features of this logic in Section 2.5.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 629

2.2 Institutions as terminologies
We have upheld that institutions impose new system descriptions which are formulated in terms of sets of norms. The step toward a formal grounding of this view of institutions is now short: norms can be thought of as terminological axioms, and institutions as sets of terminological axioms, i.e., terminological boxes. An institution can be specified as a terminological box Ins = ⟨Γins, Ains⟩, where each inclusion statement in Γins and Ains models a norm of the institution. Obviously, not every TBox can be considered to be an institution specification. In particular, an institution specification Ins must have some precise linguistic relationship with the 'brute' descriptions upon which the institution is specified. We denote by Lins the non-logical alphabet containing only institutional state and transition types, and by Lbrute the non-logical alphabet containing those types taken to talk, instead, about 'brute' states and transitions1.

DEFINITION 3. (Institutions as TBoxes) A TBox Ins = ⟨Γins, Ains⟩ is an institution specification if:
1. The non-logical alphabet on which Ins is specified contains elements of both Lins and Lbrute. In symbols:
L(Ins) ⊆ Lins ∪ Lbrute.
2. There exist sets of terminological axioms Γbridge ⊆ Γins and Abridge ⊆ Ains such that either the left-hand side of these axioms is always a description expressed in Lbrute and the right-hand side a description expressed in Lins, or those axioms are definitions. In symbols: if γ1 ⊑ γ2 ∈ Γbridge then either γ1 ∈ Lbrute and γ2 ∈ Lins, or it is the case that also γ2 ⊑ γ1 ∈ Γbridge. The clause for Abridge is analogous.
3. The remaining sets of terminological axioms Γins\Γbridge and Ains\Abridge are all expressed in Lins. In symbols: L(Γins\Γbridge) ⊆ Lins and L(Ains\Abridge) ⊆ Lins.

The definition states that an institution specification needs to be expressed in a language including institutional as well as brute terms (1); that a part of the specification concerns a description of mere institutional terms (3); and that there needs to be a part of the specification which connects institutional terms to brute ones (2). Terminological axioms in Γbridge and Abridge formalize in DL the Searlean notion of counts-as conditional ([25]), that is, rules stating what kind of meaning an institution gives to certain brute facts and transitions (e.g., checking box No. 4 in form No. 2 counts as accepting your personal data to be used for research purposes). A formal theory of counts-as statements has been thoroughly developed in a series of papers, among which [10, 13]. The technical content of the present paper heavily capitalizes on that work. Notice also that, given the semantics presented in Definition 2, if institutions can be specified via TBoxes then the meaning of such specifications is a set of interpreted transition systems, i.e., the models of those TBoxes. These transition systems can in turn be thought of as all the possible MASs which model the specified institution.

REMARK 2. (Lbrute from a designer's
perspective) From a design perspective, the language Lbrute has to be thought of as the language in which a designer would specify a system instantiating a given institution2. Definition 3 shows that for such a design task it is necessary to formally specify an explicit bridge between the concepts used in the description of the actual system and the institutional 'abstract' concepts. We will come back to this issue in Section 3.

1 Symbols from Lins and Lbrute will be indexed (especially with agent identifiers) to add some syntactic sugar.
2 To make a concrete example, the AMELI middleware [7] can be viewed as a specification tool at the Lbrute level.

2.3 From abstract to concrete norms
To illustrate Definition 3, and show its explanatory power, an example follows which depicts an essential phenomenon of institutions.

EXAMPLE 1. (From abstract to concrete norms) Consider an institution supposed to regulate access to a set of public web services. It may contain the following norm: it is forbidden to discriminate access on the basis of citizenship. Suppose now a system has to be built which complies with this norm. The first question is: what does it mean, concretely, to discriminate on the basis of citizenship? The system designer should make some concrete choices for interpreting the norm, and these choices should be kept track of in order to explicitly link the abstract norm to its concrete interpretation. The problem can be represented as follows. The abstract norm is formalized by Formula 1 by making use of a standard reduction technique for deontic notions (see [16]): the statement it is forbidden to discriminate on the basis of citizenship amounts to the statement after every execution of a transition of type DISCR(i, j) the system always ends up in a violation state. Together with the norm, some intuitive background knowledge about the discrimination action also needs to be formalized. Here, as well as in the rest of the examples in the paper, we provide
just that part of the formalization which is strictly functional to showing how the formalism works in practice. Formulae 2 and 3 express two effect laws: if the requester j is Dutch then after all executions of transitions of type DISCR(i, j) j is accepted by i, whereas if it is not, all executions of transitions of the same type have as an effect that it is not accepted. All formulae have to be read as schemata determining a finite number of subsumption expressions depending on the number of agents i, j considered.

∀DISCR(i, j).viol ≡ ⊤ (1)
dutch(j) ⊑ ∀DISCR(i, j).accepted(j) (2)
¬dutch(j) ⊑ ∀DISCR(i, j).¬accepted(j) (3)

The rest of the axioms concern the translation of the abstract type DISCR(i, j) to concrete transition types. Formula 4 refines it by making explicit that a precise if-then-else procedure counts as a discriminatory act of agent i. Formulae 5 and 6 specify which messages of i to j count as acceptance and rejection. If the designer uses transition types SEND(msg33, i, j) and SEND(msg38, i, j) for the concrete system specification, then Formulae 5 and 6 can be thought of as bridge axioms connecting notions belonging to the institutional alphabet (to accept, and to reject) to concrete ones (to send specific messages). Finally, Formulae 7 and 8 state two intuitive effect laws concerning the ACCEPT(i, j) and REJECT(i, j) types.

if dutch(j) then ACCEPT(i, j) else REJECT(i, j) ⊑ DISCR(i, j) (4)
SEND(msg33, i, j) ⊑ ACCEPT(i, j) (5)
SEND(msg38, i, j) ⊑ REJECT(i, j) (6)
∀ACCEPT(i, j).accepted(j) ≡ ⊤ (7)
∀REJECT(i, j).¬accepted(j) ≡ ⊤ (8)

It is easy to see, on the grounds of the semantics given in Definition 2, that the following concrete inclusion statement holds w.r.t.
the specified institution:

if dutch(j) then SEND(msg33, i, j) else SEND(msg38, i, j) ⊑ DISCR(i, j) (9)

This scenario exemplifies a pervasive feature of human institutions which, as extensively argued in [10], should be incorporated by electronic ones. Current formal approaches to institutions, such as ISLANDER [6], do not allow for the formal specification of explicit translations of abstract norms into concrete ones, and focus only on norms that can be specified at the concrete system specification level. What Example 1 shows is that the problem of the abstractness of norms in institutions can be formally addressed and can be given a precise formal semantics. The scenario also suggests that the designer can obtain a different institution by just modifying the sets of bridge axioms, without touching the terminological axioms expressed only in the institutional language Lins. In fact, a same set of abstract norms can be translated to different and even incompatible sets of concrete norms. This translation can nevertheless not be arbitrary ([1]).

EXAMPLE 2. (Acceptable and unacceptable translations of abstract norms) Reconsider the scenario sketched in Example 1. The transition type DISCR(i, j) has been translated to a complex procedure composed of concrete transition types. Would any translation do? Consider an alternative institution specification Ins′ containing Formulae 1-3 and the following translation rule:

PAY(j, i, €10) ⊑ DISCR(i, j) (10)

Would this formula be an acceptable translation of the abstract norm expressed in Formula 1? The axiom states that transitions where i receives €10 from j count as transitions of type DISCR(i, j). Needless to say, this is not intuitive, because the abstract transition type DISCR(i, j) obeys some intuitive conceptual constraints
(Formulae 2 and 3) that all its translations should also obey. In fact, the following inclusions would then hold in Ins′:

dutch(j) ⊑ ∀PAY(j, i, €10).accepted(j) (11)
¬dutch(j) ⊑ ∀PAY(j, i, €10).¬accepted(j) (12)

These properties of the transition type PAY(j, i, €10) look at least awkward: if an agent is Dutch then by paying €10 it would be accepted, while if it is not Dutch the same action would make it not accepted. The problem is that the meaning of 'paying' is not intuitively subsumed by the meaning of 'discriminating'. In other words, a transition type PAY(j, i, €10) does not intuitively yield the effects that a sub-type of DISCR(i, j) yields. It is on the contrary perfectly intuitive that Formula 9 obeys the constraints in Formulae 2 and 3, which it does, as can be easily checked on the grounds of the semantics.

It is worth stressing that, without providing a model-theoretic semantics for the translation rules linking the institutional notions to the brute ones, it would not be so straightforward to model the logical constraints to which the translations are subjected (Example 2). This is precisely the advantage of viewing translation rules as specific terminological axioms, i.e., Γbridge and Abridge, working as a bridge between two languages (Definition 3). In [12], we have thoroughly compared this approach with approaches such as [9], which conceive of translation rules as inference rules.

The two examples have shown how our approach can account for some essential features of institutions. In the next section the same framework is applied to provide a formal analysis of the notion of role.

2.4 Institutional modules and roles
Viewing institutions as the impositions of institutional descriptions on systems' states and transitions allows for analyzing the normative system perspective itself (i.e., institutions as sets of norms) at a finer granularity. We have seen that the terminological axioms specifying an
institution concern complex descriptions of new institutional notions. Some of the institutional state types occurring in the institution specification play a key role in structuring the specification of the institution itself. The paradigmatic example in this sense ([25]) are facts such as agent i enacts role r, which will be denoted by state types rea(i, r). By stating how an agent can enact and 'deact' a role r, and what normative consequences follow from the enactment of r, an institution describes expected forms of agents' behavior while at the same time abstracting from the concrete agents taking part in the system. The sets of norms specifying an institution can be clustered on the grounds of the rea state types. For each relevant institutional state type (e.g., rea(i, r)), the terminological axioms which define an institution, i.e., its norms, can be clustered in (possibly overlapping) sets of three different types: the axioms specifying how states of that institutional type can be reached (e.g., how an agent i can enact the role r); how states of that type can be left (e.g., how an agent i can 'deact' the role r); and what kind of institutional consequences those states bear (e.g., what rights and powers agent i acquires by enacting role r). Borrowing the terminology from work in legal and institutional theory ([23, 25]), these clusters of norms can be called, respectively, institutive, terminative and status modules.

Status modules. We call status modules those sets of terminological axioms which specify the institutional consequences of the occurrence of a given institutional state-of-affairs, for instance, the fact that agent i enacts role r.
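The three-way clustering just described can be sketched mechanically. The Python fragment below is an illustration of ours, not the paper's: the dict-based axiom encoding and the string matching on `rea(` are simplifying assumptions standing in for a proper DL parser.

```python
# Illustrative sketch: clustering the terminological axioms of an
# institution around role-enactment state types rea(i, r).
# 'premise'/'effect' are our own labels for the two sides of an axiom.

axioms = [
    # institutive: how rea(i, r) states are reached
    {"premise": "¬rea(i,r) ⊓ cond(i,r)", "effect": "∀ENACT(i,r).rea(i,r)"},
    # terminative: how rea(i, r) states are left
    {"premise": "rea(i,r)", "effect": "∀DEACT(i,r).¬rea(i,r)"},
    # status: consequences borne by rea(i, r) states
    {"premise": "rea(i,buyer) ⊓ rea(j,seller) ⊓ win_bid(i,j,b)",
     "effect": "∀¬PAY(i,j,b).viol(i)"},
]

def cluster(axioms):
    modules = {"institutive": [], "terminative": [], "status": []}
    for ax in axioms:
        if ".rea(" in ax["effect"]:        # positive rea effect: reached
            modules["institutive"].append(ax)
        elif ".¬rea(" in ax["effect"]:     # negated rea effect: left
            modules["terminative"].append(ax)
        elif "rea(" in ax["premise"]:      # rea in the premise: status
            modules["status"].append(ax)
    return modules

m = cluster(axioms)
print({k: len(v) for k, v in m.items()})
```

As the paper notes, the clusters may overlap in general; this sketch assigns each axiom to the first matching module only.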
EXAMPLE 3. (A status module for roles) Enacting a role within an institution bears some institutional consequences that are grouped under the notion of status: by playing a role an agent acquires a specific status. Some of these consequences are deontic and concern the obligations, rights and permissions under which the agent puts itself once it enacts the role. An example which pertains to the normative description of the status of both the buyer and the seller roles is the following:

rea(i, buyer) ⊓ rea(j, seller) ⊓ win_bid(i, j, b) ⊑ ∀¬PAY(i, j, b).viol(i) (13)

If agent i enacts the buyer role and j the seller role and i wins bid b, then if i does not perform a transition of type PAY(i, j, b), i.e., does not pay to j the price corresponding to bid b, the system ends up in a state that the institution classifies as a violation state with i being the violator. Notice that Formula 13 formalizes at the same time an obligation pertaining to the role buyer and a right pertaining to the role seller. Of particular interest are then those consequences that attribute powers to agents enacting specific roles:

rea(i, buyer) ⊓ rea(j, seller) ⊑ ∀BID(i, j, b).bid(i, j, b) (14)
SEND(i, j, msg49) ⊑ BID(i, j, b) (15)

If agent i enacts the buyer role and j the seller role, every time agent i bids b to j this action results in an institutional state testifying that the corresponding bid has been placed by i (Formula 14). Formula 15 states how the bidding action can be executed by sending a specific message to j (SEND(i, j, msg49)).

Some observations are in order. As readers acquainted with deontic logic have probably already noticed, our treatment of the notion of obligation (Formula 13) again makes use of a standard reduction approach ([16]). More interesting is instead how the notion of institutional power is modeled. Essentially, the empowerment phenomenon is analyzed in
terms of two rules: one specifying the institutional effects of an institutional action (Formula 14), and one translating the institutional transition type into a brute one (Formula 15). Systems of rules of this type empower the agents enacting some relevant role by establishing a connection between the brute actions of the agents and some institutional effect. Whether the agents are actually able to execute the required 'brute' actions is a different issue, since agent i can be in some states (or even all states) unable to effectuate a SEND(i, j, msg49) transition. This is the case also in human societies: priests are empowered to give rise to marriages, but if a priest is not in a position to perform the required speech acts he is actually unable to marry anybody. There is a difference between being entitled to make a bid and being in a position to make a bid ([4]). In other words, Formulae 14 and 15 express only that agents playing the buyer role are entitled to make bids. The actual possibility of performing the required 'brute' actions is not an institutional issue, but rather an issue concerning the implementation of an institution in a concrete system. We address this issue extensively in Section 3³.

Institutive modules. We call institutive modules those sets of terminological axioms of an institution specification describing how states with certain institutional properties can be reached, for instance, how an agent i can reach a state in which it enacts role r. They can be seen as procedures that the institution defines in order for the agents to bring institutional states of affairs about.

EXAMPLE 4. (An institutive module for roles) The fact that an agent i enacts a role r (rea(i, r)) is the effect of a corresponding enactment action ENACT(i, r) performed under certain circumstances (Formula 16), namely that the agent does not already enact the role, and that the agent satisfies given conditions (cond(i, r)), which might for instance pertain to the
computational capabilities required for an agent to play the chosen role, or its capability to interact with some specific system infrastructures. Formula 17 specifies instead the procedure counting as an action of type ENACT(i, r). Such a procedure is performed through a system infrastructure s, which notifies i that it has been registered as enacting role r after i has sent the necessary piece of data d (SEND(i, s, d)), e.g., a valid credit card number.

¬rea(i, r) ⊓ cond(i, r) ⊑ ∀ENACT(i, r).rea(i, r) (16)
SEND(i, s, d) ◦ NOTIFY(s, i) ⊑ ENACT(i, r) (17)

Terminative modules. Analogously, we call terminative modules those sets of terminological axioms stating how a state with certain institutional properties can be left. Rules of this kind state for instance how an agent can stop enacting a certain role. They can thus be thought of as procedures that the institution defines in order for the agent to see to it that certain institutional states stop holding.

EXAMPLE 5. (A terminative module for roles) Terminative modules for roles specify, for instance, how a transition of type DEACT(i, r) can be executed which has as consequence the reaching of a state of type ¬rea(i, r):

rea(i, r) ⊑ ∀DEACT(i, r).¬rea(i, r) (18)
SEND(i, s, msg9) ⊑ DEACT(i, r) (19)

That is to say, i deacting a role r always leads to a state where i does not enact role r; and i sending message No. 9 to a specific interface infrastructure s counts as i deacting role r.

3 See in particular Example 6 and Definition 5.
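The institutive pattern of Formulae 16-17 can be checked semantically on a concrete model. The toy transition system below is our own construction (the states, interpretations and variable names are assumptions, not the paper's): Formula 17 is enforced by taking ENACT to be exactly the composed SEND ◦ NOTIFY procedure, and Formula 16 is then verified model-theoretically via the semantics of Definition 2.

```python
# Toy model (our construction) for Formulae 16-17: state 0 satisfies
# ¬rea(i,r) ⊓ cond(i,r); state 3 satisfies rea(i,r).
States = {0, 1, 2, 3}
rea  = {3}          # I(rea(i,r))
cond = {0}          # I(cond(i,r))

SEND   = {(0, 1)}   # i sends the data d to infrastructure s
NOTIFY = {(1, 3)}   # s notifies i of the registration

def comp(a1, a2):   # relational composition, I(α1 ◦ α2)
    return {(s, u) for (s, t) in a1 for (t2, u) in a2 if t == t2}

# Formula 17: SEND ◦ NOTIFY ⊑ ENACT. Here ENACT is taken to be exactly
# the composed procedure, so the inclusion holds trivially.
ENACT = comp(SEND, NOTIFY)

def forall(a, g):   # I(∀α.γ)
    return {s for s in States if all(t in g for (x, t) in a if x == s)}

# Formula 16: ¬rea ⊓ cond ⊑ ∀ENACT.rea, checked as a subset test.
lhs = (States - rea) & cond
print("Formula 16 holds in the toy model:", lhs <= forall(ENACT, rea))
```

The same pattern, with DEACT in place of ENACT and the complement of rea as the effect, would check the terminative module of Formulae 18-19.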
Examples 3-5 have shown how roles can be formalized in our framework, thereby getting a formal semantics: roles are also sets of terminological axioms, namely those concerning state types of the sort rea(i, r). It is worth noticing that this modeling option is aligned with work on social theory addressing the concept of role, such as [20].

2.5 Tractable specifications of institutions
In the previous sections we fully deployed the expressivity of the language introduced in Section 2.1 and used its semantics to provide a formal understanding of many essential aspects of institutions in terms of transition systems. This section spends a few words on the viability of performing automated reasoning in the logic presented.

The satisfiability problem⁴ in logic ALCH(∪,◦,¬,id) is undecidable, since transition type inclusion axioms correspond to a version of what in Description Logic are known as role-value maps, and logics extending ALC with role-value maps are known to be undecidable ([3]). Tractable (i.e., polynomial time decidable) fragments of logic ALCH(∪,◦,¬,id) can however be isolated which still exhibit some key expressive features. One of them is logic ELH(◦). It is obtained from description logic EL, which contains only state type intersection ⊓, existential restriction ∃ and ⊤⁵, extended with the ⊥ state type and with transition type inclusion axioms of a complex form: a1 ◦ ... ◦ an ⊑ a (with n a finite number). Logic ELH(◦) is also a fragment of the well-investigated description logic EL++, whose satisfiability problem has been shown in [2] to be decidable in polynomial time. Despite the very limited expressivity of this fragment, some rudimentary institutional specifications can still be successfully represented. Specifically, institutive and terminative modules can be represented which contain transition type inclusion axioms. Restricted versions of status modules can also be represented, enabling two essential
deontic notions: it is possible (respectively, impossible) to reach a violation state by performing a transition of a certain type, and it is possible (respectively, impossible) to reach a legal state by performing a transition of a certain type. To this aim, language Lins would need to be expanded with a set of state types {legal(i)}0≤i≤n whose intuitive meaning is to denote legal states as opposed to states of type viol(i). Fragments like ELH(◦) could be used as target logics within theory approximation approaches ([24]), by aiming at compiling TBoxes expressed in ALCH(∪,◦,¬,id) into approximations in those fragments.

4 This problem amounts to checking whether a state description γ is satisfiable w.r.t. a given TBox T, i.e., to checking if there exists a model m of T such that ∅ ⊂ I(γ). Notice that language ALCH(∪,◦,¬,id) contains negation and intersection of arbitrary state types. It is well known that if these operators are available then all the most typical reasoning tasks at the TBox level can be reduced to the satisfiability problem.
5 Notice therefore that EL is a seriously restricted fragment of ALC, since it does not contain the negation operator for state types (the operators ∪ and ∀ thus remain undefinable).

3. FROM NORMS TO STRUCTURES
3.1 Infrastructures
In discussing Example 3 we observed how being entitled to make a bid does not imply being in a position to make a bid. In other words, an institution can empower agents by means of appropriate rules, but this empowerment can remain dead letter. Similar observations apply also to deontic notions: agents might be allowed to perform certain transactions under some relevant conditions, but they might be unable to do so under those same conditions. We refer to this kind of problems as infrastructural. The implementation of an institution in a concrete system therefore calls for the design of appropriate infrastructures or artifacts ([19]). The formal specification of an infrastructure amounts to the formal specification of interaction requirements, that is to say, the specification of which relevant transition types are executable and under what conditions.

DEFINITION 4. (Infrastructures as TBoxes) An infrastructure Inf = ⟨Γinf, Ainf⟩ for institution Ins is a TBox on Lbrute such that for all a ∈ L(Abridge) there exist terminological axioms in Γinf of the following form: γ ≡ ∃a.⊤ (a is executable exactly in γ states) and γ ≡ ∃¬a.⊤
(the negation of a is executable exactly in γ states).

In other words, an infrastructure specification states all and only the conditions under which an atomic brute transition type, and its negation, are executable, for the types occurring in the brute alphabet of the bridge axioms of Ins. It states what can concretely be done and under what conditions.

EXAMPLE 6. (Infrastructure specification) Consider the institution specified in Example 1. A simple infrastructure Inf for that institution could contain for instance the following terminological axioms, for any pair of different agents i, j and message type msg:

⊤ ≡ ∃SEND(msg33, i, j).⊤ (20)

The formula states that it is always in the possibilities of agent i to send message No. 33 to agent j. It then follows on the grounds of Example 1 that agent i can always accept agent j:

⊤ ≡ ∃ACCEPT(i, j).⊤ (21)

Notice that the executability condition is just ⊤. We call a concrete institution specification CIns an institution specification Ins coupled with an infrastructure specification Inf.

DEFINITION 5. (Concrete institution) A concrete institution obtained by joining the institution Ins = ⟨Γins, Ains⟩ and the infrastructure Inf = ⟨Γinf, Ainf⟩ is a TBox CIns = ⟨Γ, A⟩ such that Γ = Γins ∪ Γinf and A = Ains ∪ Ainf.

Obviously, different infrastructures can be devised for a same institution, giving rise to different concrete institutions; this makes precise implementation choices explicit. Of particular relevance are the implementation choices concerning abstract norms like the one represented in Formula 13. A designer can choose to regiment such a norm ([15]), i.e., make violation states unreachable, via an appropriate infrastructure.

EXAMPLE 7. (Regimentation via infrastructure specification) Consider Example 3 and suppose the following translation rules to be also part of the institution:

BNK(i, j, b) ∪ CC(i, j, b) ≡ PAY(i, j, b) (22)
condition_pay(i, j, b) ≡ rea(i, buyer) ⊓
rea(j, seller) ⊓ win_bid(i, j, b) (23)

The first formula states how the payment can be concretely carried out (via bank transfer or credit card), and the second just provides a concrete label grouping the institutional state types relevant for the norm. In order to specify a regimentation at the infrastructural level, it is enough to state that:

condition_pay(i, j, b) ≡ ∃(BNK(i, j, b) ∪ CC(i, j, b)).⊤ (24)
¬condition_pay(i, j, b) ≡ ∃¬(BNK(i, j, b) ∪ CC(i, j, b)).⊤ (25)

In other words, in states of type condition_pay(i, j, b) the only executable brute actions are BNK(i, j, b) or CC(i, j, b) and, therefore, PAY(i, j, b) would necessarily be executed. As a result, the following inclusion does not hold with respect to the corresponding concrete institution: condition_pay(i, j, b) ⊑ ∃¬PAY(i, j, b).viol(i).

3.2 Organizational Structures
This section briefly summarizes and adapts the perspective and results on organizational structures presented in [14, 11]. We refer to that work for a more comprehensive exposition. Organizational structures typically concern the way agents interact within organizations. These interactions can be depicted as the links of a graph defined on the set of roles of the organization. Such links are then to be labeled on the basis of the type of interaction they stand for. First of all, it should be clear whether a link denotes that a certain interaction between two roles can, or ought to, or may, etc., take place. Secondly, links should be labeled according to the transition type α they refer to and the conditions γ in which that transition can, ought to, may, etc., take place. Links in a formal specification of an organizational structure therefore stand for statements of the kind: role r can (ought to, may) execute α w.r.t.
role s if γ is the case. For the sake of simplicity, the following definition will consider only the can and ought-to interaction modalities. State and transition types in Lins ∪ Lbrute will be used to label the links of the structure. Interaction modalities can therefore be of an institutional kind or of a brute kind.

DEFINITION 6. (Organizational structure) An organizational structure is a multi-graph:
OS = ⟨Roles, {Cp}p∈Mod, {Op}p∈Mod⟩
where:
• Mod denotes a set of pairs p = γ : α, that is, a set of state type (condition) and transition type (action) pairs of Lins ∪ Lbrute, with α being an atomic transition type indexed with a pair (i, j) denoting placeholders for the actor and the recipient of the transition;
• C (can) denotes links to be interpreted in terms of the executability of the related α in γ, whereas O (ought) denotes links to be interpreted in terms of the obligation to execute the related α in γ.

By the expressions (r, s) ∈ Cγ:α and (r, s) ∈ Oγ:α we therefore mean: agents enacting role r can and, respectively, ought to interact with agents enacting role s by performing α in states of type γ. As shown in [11], such formal representations of organizational structures are of use for investigating the structural properties (robustness, flexibility, etc.)
that a given organization exhibits.

At this point all the formal means are in place which allow us to formally represent institutions as well as organizational structures. The next and final step of the work consists in providing a formal relation between the two frameworks. This formal relation will make explicit how institutions are related to organizational structures and vice versa. In particular, it will become clear how a normative conception of the notion of role relates to a structural one, that is, how the view of roles as sets of norms (specifying how an agent can enact and deact the role, and what social status it obtains by doing that) relates to the view of roles as positions within social structures.

3.3 Relating institutions to organizations
To translate a given concrete institution into a corresponding organizational structure we need a function t assigning pairs of roles to axioms. Let us denote with Sub the set of all state type inclusion statements γ1 ⊑ γ2 that can be expressed on Lins ∪ Lbrute. Function t is a partial function Sub → Roles × Roles such that, for any x ∈ Sub:
if x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ (executability), or
x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀¬α.viol(i) (obligation),
then t(x) = (r, s), where α is an atomic transition type indexed with the pair (i, j).
That is to say, executability and obligation laws containing the enactment configuration rea(i, r) ⊓ rea(j, s) as a premise and concerning transitions of type α, with i the actor and j the recipient of the α transition, are translated into role pairs (r, s).

DEFINITION 7. (Correspondence of specifications) A concrete institution CIns = ⟨Γ, A⟩ is said to correspond to an organizational structure OS (and vice versa) if, for every x ∈ Γ:
• x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ iff t(x) ∈
Cγ:α
• x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀(¬α).viol(i) iff t(x) ∈ Oγ:α

Intuitively, function t takes axioms from Γ (i.e., the set of state type terminological axioms of CIns) and yields pairs of roles. Definition 7 labels the yielded pairs according to the syntactic form of the translated axioms. More concretely, axioms of the form rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ (executability laws) are translated into the pair (r, s) belonging to the executability dimension (i.e., C) of the organizational structure w.r.t. the execution of α under circumstances γ. Analogously, axioms of the form rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀(¬α).viol(i) (obligation laws) are translated into the pair (r, s) belonging to the obligation dimension (i.e., O) of the organizational structure w.r.t. the execution of α under circumstances γ. Leaving technicalities aside, function t thus distills the terminological and infrastructural constraints of CIns into structural ones. The institutive, terminative and status modules of roles are translated into definitions of positions within an OS. From a design perspective the interpretation of Definition 7 is twofold. On the one hand (from left to right), it can make explicit what the structural consequences of a given institution supported by a given infrastructure are. On the other hand (from right to left), it can make explicit what kind of institution is actually implemented by a given organizational structure. Let us see this in some more detail. Given a concrete institution CIns, Definition 7 allows a designer to be aware of the impact that specific terminological choices (in particular, the choice of certain bridge axioms) and infrastructural ones have at the structural level. Notice that Definition 7 supports the inference of links in a structure. By checking whether a given inclusion statement of the relevant syntactic form follows from CIns (i.e., the
so-called subsumption problem of DL), it is possible, via t, to add new links to the corresponding organizational structure. This can be done recursively by just adding any newly inferred inclusion x to the previous set of axioms Γ, thus obtaining an updated institutional specification containing Γ ∪ {x}. This process can be thought of as the inference of structural links from institutional specifications. In other words, it is possible to use institution specifications as inference tools for structural specifications. For instance, the infrastructural choice formalized in Example 7 implies that, for the pair of roles (buyer, seller), it is always the case that (buyer, seller) ∈ C⊤:PAY(i,j,b). This link follows from the links (buyer, seller) ∈ C⊤:BNK(i,j,b) and (buyer, seller) ∈ C⊤:CC(i,j,b) on the grounds of the bridge axioms of the institution (Formula 22). Suppose now that a designer is interested in a system which, besides implementing an institution, also incorporates an organizational structure enjoying desirable structural properties such as flexibility or robustness⁶. By relating structural links to state type inclusions it is therefore possible to check whether adding a link in OS results in a stronger institutional specification, that is, whether the corresponding inclusion statement is not already implied by Ins. To draw a parallel with what was just said in the previous paragraph, this process can be thought of as the inference of norms and infrastructural constraints from the specification of organizational structures. To give a simple example, consider again Example 6 but from a reversed perspective. Suppose a designer wants a fully connected graph in the dimension C⊤:SEND(i,j) of the organizational structure. Exploiting Definition 7, we would obtain a number of executability laws in the fashion of Formula 20 for all roles in Roles (thus |Roles|² axioms). Definition 7 establishes a correspondence between two essentially
different perspectives on the design of open systems, allowing feedback between the two to be formally analyzed. One last observation is in order. While, given a concrete institution, an organizational structure can in principle be fully specified (by checking, for each of the finitely many relevant inclusion statements, whether or not it is implied by the institution), it is not possible to obtain a full terminological specification from an organizational structure. This is due to the fact that in Definition 6 the strictly terminological information contained in the specification of an institution (eminently, the set of transition type axioms A and therefore the bridge axioms) is lost in moving to a structural description. This shows, in turn, that the added value of the specification of institutions lies precisely in the terminological link they establish between institutional and brute, i.e., system-level, notions.

4. CONCLUSIONS

The paper aimed at providing a comprehensive formal analysis of the institutional metaphor and its relation to the organizational one. The predominant formal tool has been description logic. TBoxes have been used to represent the specifications of institutions (Definition 3) and their infrastructures (Definition 5), providing a transition system semantics for a number of institutional notions (Examples 1-7). Multi-graphs have then been used to represent the specification of organizational structures (Definition 6). The last result presented concerned the definition of a formal correspondence between institution and organization specifications (Definition 7), which provides a formal way of switching between the two paradigms. All in all, these results deliver a way of relating abstract system specifications (i.e., institutions as sets of norms) to specifications that are closer to an implemented system (i.e., organizational structures).

5. REFERENCES
[1] G.
Azzoni. Il cavallo di Caligola. In Ontologia sociale: potere deontico e regole costitutive, pages 45-54. Quodlibet, Macerata, Italy, 2003.
[2] F. Baader, S. Brandt, and C. Lutz. Pushing the EL envelope. In Proceedings of IJCAI'05, Edinburgh, UK, 2005. Morgan Kaufmann Publishers.
[3] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. Patel-Schneider. The Description Logic Handbook. Cambridge University Press, Cambridge, 2002.
[4] C. Castelfranchi. The micro-macro constitution of power. ProtoSociology, 18:208-268, 2003.
⁶ In [11] it is shown how these and analogous properties can be precisely measured within the type of structures presented in Definition 6.
[5] D. Harel, D. Kozen, and J. Tiuryn. Dynamic logic. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic: Volume II, pages 497-604. Reidel, Dordrecht, 1984.
[6] M. Esteva, D. de la Cruz, and C. Sierra. ISLANDER: an electronic institutions editor. In Proceedings of AAMAS'02, pages 1045-1052, New York, NY, USA, 2002. ACM Press.
[7] M. Esteva, J. Rodríguez-Aguilar, B. Rosell, and J. Arcos. AMELI: An agent-based middleware for electronic institutions. In Proceedings of AAMAS'04, New York, US, July 2004.
[8] M. S. Fox. An organizational view of distributed systems. IEEE Transactions on Systems, Man, and Cybernetics, 11(1):70-80, 1981.
[9] C. Ghidini and F. Giunchiglia. A semantics for abstraction. In R. de Mántaras and L. Saitta, editors, Proceedings of ECAI'04, pages 343-347, 2004.
[10] D. Grossi, H. Aldewereld, J. Vázquez-Salceda, and F. Dignum. Ontological aspects of the implementation of norms in agent-based electronic institutions. Computational & Mathematical Organization Theory, 12(2-3):251-275, April 2006.
[11] D. Grossi, F. Dignum, V. Dignum, M. Dastani, and L.
Royakkers. Structural evaluation of agent organizations. In Proceedings of AAMAS'06, pages 1110-1112, Hakodate, Japan, May 2006. ACM Press.
[12] D. Grossi, F. Dignum, and J.-J. C. Meyer. Context in categorization. In L. Serafini and P. Bouquet, editors, Proceedings of CRR'05, volume 136 of CEUR Workshop Proceedings, Paris, June 2005.
[13] D. Grossi, J.-J. Meyer, and F. Dignum. Classificatory aspects of counts-as: An analysis in modal logic. Journal of Logic and Computation, October 2006. doi:10.1093/logcom/exl027.
[14] J. F. Hübner, J. S. Sichman, and O. Boissier. Moise+: Towards a structural, functional, and deontic model for MAS organization. In Proceedings of AAMAS'02, Bologna, Italy, July 2002. ACM Press.
[15] A. J. I. Jones and M. Sergot. On the characterization of law and computer systems: The normative systems perspective. In Deontic Logic in Computer Science, pages 275-307, 1993.
[16] J. Krabbendam and J.-J. C. Meyer. Contextual deontic logics. In P. McNamara and H. Prakken, editors, Norms, Logics and Information Systems, pages 347-362, Amsterdam, 2003. IOS Press.
[17] J.-J. Meyer, F. de Boer, R. M. van Eijk, K. V. Hindriks, and W. van der Hoek. On programming KARO agents. Logic Journal of the IGPL, 9(2), 2001.
[18] D. C. North. Institutions, Institutional Change and Economic Performance. Cambridge University Press, Cambridge, 1990.
[19] A. Omicini, A. Ricci, M. Viroli, C. Castelfranchi, and L. Tummolini. Coordination artifacts: Environment-based coordination for intelligent agents. In Proceedings of AAMAS'04, 2004.
[20] I. Pörn. Action theory and social science: Some formal models. Reidel Publishing Company, Dordrecht, The Netherlands, 1977.
[21] S. Pufendorf. De Jure Naturae et Gentium. Amsterdam, 1688. English translation, Clarendon, 1934.
[22] A. S. Rao and M. P. Georgeff. Modeling rational agents within a BDI-architecture. In J. Allen, R. Fikes, and E.
Sandewall, editors, Proceedings of KR'91, pages 473-484. Morgan Kaufmann, San Mateo, CA, USA, 1991.
[23] D. W. P. Ruiter. A basic classification of legal institutions. Ratio Juris, 10:357-371, 1997.
[24] M. Schaerf and M. Cadoli. Tractable reasoning via approximation. Artificial Intelligence, 74(2):249-310, 1995.
[25] J. Searle. The Construction of Social Reality. Free Press, 1995.
[26] J. Vázquez-Salceda. The Role of Norms and Electronic Institutions in Multi-Agent Systems. Birkhäuser Verlag AG, 2004.

A Formal Road from Institutional Norms to Organizational Structures

ABSTRACT

Up to now, the way institutions and organizations have been used in the development of open systems has not often gone further than a useful heuristics. In order to develop systems actually implementing institutions and organizations, formal methods should take the place of heuristic ones. The paper presents a formal semantics for the notion of institution and its components (abstract and concrete norms, empowerment of agents, roles) and defines a formal relation between institutions and organizational structures. As a result, it is shown how institutional norms can be refined to constructs (organizational structures) which are closer to an implemented system. It is also shown how such a refinement process can be fully formalized and is therefore amenable to rigorous verification.

1. INTRODUCTION

The opportunity of a "technology transfer" from the field of organizational and social theory to distributed AI and multiagent systems (MASs) has long been advocated ([8]). In MASs the application of the organizational and institutional metaphors to system design has proven to be useful for the development of methodologies and tools. In many cases, however, the application of these conceptual apparatuses amounts to mere heuristics guiding the high-level design of the systems. It is our thesis that the application of those apparatuses can be pushed further once their key concepts are treated formally, that is, once notions such as norm, role, structure, etc. obtain a formal semantics. This has been the case for agent programming languages after the relevant concepts borrowed from folk psychology (belief, intention, desire, knowledge, etc.)
have been addressed in comprehensive formal logical theories such as, for instance, BDICTL ([22]) and KARO ([17]). As a matter of fact, those theories have fostered the production of architectures and programming languages. What is lacking at the moment for the design and development of open MASs is, in our opinion, something that can play the role that BDI-like formalisms have played for the design and development of single-agent architectures. The aim of the present paper is to fill this gap with respect to the notion of institution, providing formal foundations for the application of the institutional metaphor and for its relation to the organizational one. The main result of the paper consists in showing how abstract constraints (institutions) can be step by step refined to concrete structural descriptions (organizational structures) of the to-be-implemented system, thus bridging the gap between abstract norms and concrete system specifications. Concretely, in Section 2 a logical framework is presented which provides a formal semantics for the notions of institution, norm, and role, and which supports the account of key features of institutions such as the translation of abstract norms into concrete and implementable ones, the institutional empowerment of agents, and some aspects of the design of norm enforcement. In Section 3 the framework is extended to deal with the notion of the infrastructure of an institution. The extended framework is then studied in relation to the formalism for representing organizational structures presented in [11]. In Section 4 some conclusions follow.

2. INSTITUTIONS

Social theory usually thinks of institutions as "the rules of the game" ([18, 23]). From an agent perspective institutions are, to paraphrase this quote, "the rules of the various games agents can play in order to interact with one another". To assume an institutional perspective on MASs means therefore to think of MASs in normative terms: [...]
law, computer systems, and many other kinds of organizational structure may be viewed as instances of normative systems. We use the term to refer to any set of interacting agents whose behavior can usefully be regarded as governed by norms ([15], p. 276).

The normative system perspective on institutions is, as such, nothing original, and it is already quite an acknowledged position within the community working on electronic institutions, or eInstitutions ([26]). What has not been sufficiently investigated and understood with formal methods is, in our view, the question: what does it amount to, for a MAS, to be put under a set of norms? Or, in other words: what does it mean for a designer of an eInstitution to state a set of norms? We advance a precise thesis on this issue, which is also inspired by work in social theory:

Now, as the original manner of producing physical entities is creation, there is hardly a better way to describe the production of moral entities than by the word `imposition' [impositio]. For moral entities do not arise from the intrinsic substantial principles of things but are superadded to things already existent and physically complete ([21], pp.
100-101).

By ignoring for a second the philosophical jargon of the seventeenth century, we can easily extract an illuminating message from the excerpt: what institutions do is to impose properties on already existing entities. That is to say, institutions provide descriptions of entities by making use of conceptualizations that are not proper of the common descriptions of those entities. For example, that cars have wheels is a common factual property, whereas the fact that cars count as vehicles in some technical legal sense is a property that law imposes on the concept "car". To say it with [25], the fact that cars have wheels is a brute fact, while the fact that cars are vehicles is an institutional fact. Institutions build structured descriptions of institutional properties upon brute descriptions of a given domain. At this point, the step toward eInstitutions is natural. eInstitutions impose properties on the possible states of a MAS: they specify what are the states in which an agent i enacts a role r, what are the states in which a certain agent is violating the norms of the institution, etc. They do this by linking some institutional properties of the possible states and transitions of the system (e.g., agent i enacts role r) to some brute properties of those states and transitions (e.g., agent i performs protocol No.
56). An institutional property is therefore a property of system states or system transitions (i.e., a state type or a transition type) that does not belong to a merely technical, or factual, description of the system. To sum up, institutions are viewed as sets of norms (normative system perspective), and norms are thought of as the imposition of an institutional description of the system upon its description in terms of brute properties. In a nutshell, institutions are impositions of institutional terminologies upon brute ones. The following sections provide a formal analysis of this thesis and show its explanatory power in delivering a rigorous understanding of key features of institutions. Because of its suitability for representing complex domain descriptions, the formal framework we will make use of is that of Description Logics (DL). The use of this formalism will also stress the idea of viewing institutions as the impositions of domain descriptions.

2.1 Preliminaries: a very expressive DL

The description logic language enabling the necessary expressivity expands the standard description logic language ALC ([3]) with relational operators (⊔, ◦, ¬, id) to express complex transition types, and relational hierarchies (H) to express inclusion between transition types. Following a notational convention common within DL, we denote this language with ALCH(⊔, ◦, ¬, id). Here a and c denote atomic transition types and atomic state types, respectively. It is worth providing the intuitive reading of a couple of the operators and constructs just introduced. In particular, ∀α.γ has to be read as: "after all executions of transitions of type α, states of type γ are reached". The operator ◦ denotes the concatenation of transition types. The operator id applies to a state description γ and yields a transition description, namely, the transitions ending in γ states. It is the description logic variant of the
test operator in Dynamic Logic ([5]). Notice that we use the same symbols ⊔ and ¬ for denoting the boolean operators of disjunction and negation of both state and transition types. Atomic state types c are often indexed by an agent identifier i in order to express agent properties (e.g., dutch(i)), and atomic transition types a are often indexed by a pair of agent identifiers (i, j) (e.g., PAY(i, j)) denoting the actor and, respectively, the recipient of the transition. By removing the agent identifiers from state types and transition types we obtain state type forms (e.g., dutch or rea(r)) and transition type forms (e.g., PAY). A terminological box (henceforth TBox) T = ⟨Γ, A⟩ consists of a finite set Γ of state type inclusion assertions (γ1 ⊑ γ2), and of a finite set A of transition type inclusion assertions (α1 ⊑ α2). The semantics of ALCH(⊔, ◦, ¬, id) is model-theoretical and is given in terms of interpreted transition systems. As usual, state types are interpreted as sets of states and transition types as sets of state pairs. An interpreted transition system m is a model of a state type inclusion assertion γ1 ⊑ γ2 if I(γ1) ⊆ I(γ2). It is a model of a transition type inclusion assertion α1 ⊑ α2 if I(α1) ⊆ I(α2). An interpreted transition system m is a model of a TBox T = ⟨Γ, A⟩ if m is a model of each inclusion assertion in Γ and A.
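As a minimal illustration of these model-theoretic clauses, the following Python sketch encodes an interpreted transition system (state types as sets of states, transition types as sets of state pairs) and checks whether it models a TBox. The tiny three-state system and the type names (dutch, accepted, BNK, PAY) are invented for illustration; they are not taken from the paper's examples.

```python
def interprets_state_incl(I_state, g1, g2):
    """m is a model of g1 ⊑ g2 iff I(g1) ⊆ I(g2)."""
    return I_state[g1] <= I_state[g2]

def interprets_trans_incl(I_trans, a1, a2):
    """m is a model of a1 ⊑ a2 iff I(a1) ⊆ I(a2)."""
    return I_trans[a1] <= I_trans[a2]

def is_model_of_tbox(I_state, I_trans, Gamma, A):
    """m models T = ⟨Γ, A⟩ iff it models every assertion in Γ and A."""
    return (all(interprets_state_incl(I_state, g1, g2) for (g1, g2) in Gamma)
            and all(interprets_trans_incl(I_trans, a1, a2) for (a1, a2) in A))

# A toy interpreted transition system over states {0, 1, 2}:
# state types are sets of states, transition types sets of state pairs.
I_state = {"dutch": {0, 1}, "accepted": {0, 1, 2}}
I_trans = {"BNK": {(0, 1)}, "PAY": {(0, 1), (1, 2)}}

# Both dutch ⊑ accepted and BNK ⊑ PAY hold in this system.
ok = is_model_of_tbox(I_state, I_trans,
                      [("dutch", "accepted")], [("BNK", "PAY")])
```

The subset checks use Python's `<=` operator on sets, directly mirroring the clause I(γ1) ⊆ I(γ2).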
REMARK 1. (Derived constructs) The correspondence between description logic and dynamic logic is well known ([3]). In fact, the language presented in Definitions 1 and 2 is a notational variant of the language of Dynamic Logic ([5]) without the iteration operator on transition types. As a consequence, some key constructs are still definable in ALCH(⊔, ◦, ¬, id). In particular, we will make use of the following definition of the if-then-else transition type: if γ then α1 else α2 = (id(γ) ◦ α1) ⊔ (id(¬γ) ◦ α2). Boolean operators are defined as usual. We will come back to some complexity features of this logic in Section 2.5.

2.2 Institutions as terminologies

We have upheld that institutions "impose" new system descriptions which are formulated in terms of sets of norms. The step toward a formal grounding of this view of institutions is now short: norms can be thought of as terminological axioms, and institutions as sets of terminological axioms, i.e., terminological boxes. An institution can be specified as a terminological box Ins = ⟨ΓIns, AIns⟩, where each inclusion statement in ΓIns and AIns models a norm of the institution. Obviously, not every TBox can be considered to be an institution specification. In particular, an institution specification Ins must have a precise linguistic relationship with the `brute' descriptions upon which the institution is specified. We denote by GIns the non-logical alphabet containing only institutional state and transition types, and by Gbrute the non-logical alphabet containing those types taken to talk about, instead, `brute' states and transitions¹.

DEFINITION 3. (Institutions as TBoxes) A TBox Ins = ⟨ΓIns, AIns⟩ is an institution specification if:
1. The non-logical alphabet on which Ins is specified contains elements of both GIns and Gbrute. In symbols: G(Ins)
⊆ GIns ∪ Gbrute.
2. There exist sets of terminological axioms Γbridge ⊆ ΓIns and Abridge ⊆ AIns such that either the left-hand side of these axioms is always a description expressed in Gbrute and the right-hand side a description expressed in GIns, or those axioms are definitions. In symbols: if γ1 ⊑ γ2 ∈ Γbridge then either γ1 ∈ Gbrute and γ2 ∈ GIns, or it is the case that also γ2 ⊑ γ1 ∈ Γbridge. The clause for Abridge is analogous.
3. The remaining sets of terminological axioms ΓIns \ Γbridge and AIns \ Abridge are all expressed in GIns. In symbols: G(ΓIns \ Γbridge) ⊆ GIns and G(AIns \ Abridge) ⊆ GIns.

The definition states that an institution specification needs to be expressed in a language including institutional as well as brute terms (1); that a part of the specification concerns a description of merely institutional terms (3); and that there needs to be a part of the specification which connects institutional terms to brute ones (2). Terminological axioms in Γbridge and Abridge formalize in DL the Searlean notion of counts-as conditional ([25]), that is, rules stating what kind of meaning an institution gives to certain brute facts and transitions (e.g., checking box No. 4 in form No.
2 counts as accepting your personal data being used for research purposes). A formal theory of counts-as statements has been thoroughly developed in a series of papers, among which [10, 13]. The technical content of the present paper heavily capitalizes on that work. Notice also that, given the semantics presented in Definition 2, if institutions can be specified via TBoxes then the meaning of such specifications is a set of interpreted transition systems, i.e., the models of those TBoxes. These transition systems can in turn be thought of as all the possible MASs which model the specified institution.

2.3 From abstract to concrete norms

To illustrate Definition 3, and show its explanatory power, an example follows which depicts an essential phenomenon of institutions.

EXAMPLE 1. (From abstract to concrete norms) Consider an institution supposed to regulate access to a set of public web services. It may contain the following norm: "it is forbidden to discriminate access on the basis of citizenship". Suppose now a system has to be built which complies with this norm. The first question is: what does it mean, concretely, "to discriminate on the basis of citizenship"? The system designer should make some concrete choices for interpreting the norm, and these choices should be kept track of in order to explicitly link the abstract norm to its concrete interpretation. The problem can be represented as follows. The abstract norm is formalized by Formula 1 by making use of a standard reduction technique for deontic notions (see [16]): the statement "it is forbidden to discriminate on the basis of citizenship" amounts to the statement "after every execution of a transition of type DISCR(i, j) the system always ends up in a violation state". Together with the norm, some intuitive background knowledge about the discrimination action needs to be formalized. Here, as well as in the rest of the examples in the paper, we provide just that part of the
formalization which is strictly functional to showing how the formalism works in practice. Formulae 2 and 3 express two effect laws: if the requester j is Dutch then after all executions of transitions of type DISCR(i, j), j is accepted by i, whereas if j is not Dutch, all executions of transitions of the same type have the effect that j is not accepted. All formulae have to be read as schemata determining a finite number of subsumption expressions depending on the number of agents i, j considered. The rest of the axioms concern the translation of the abstract type DISCR(i, j) into concrete transition types. Formula 4 refines it by making explicit that a precise if-then-else procedure counts as a discriminatory act of agent i. Formulae 5 and 6 specify which messages of i to j count as acceptance and rejection. If the designer uses transition types SEND(msg33, i, j) and SEND(msg38, i, j) for the concrete system specification, then Formulae 5 and 6 can be thought of as bridge axioms connecting notions belonging to the institutional alphabet (to accept, and to reject) to concrete ones (to send specific messages). Finally, Formulae 7 and 8 state two intuitive effect laws concerning the ACCEPT(i, j) and REJECT(i, j) types. It is easy to see, on the grounds of the semantics exposed in Definition 2, that the following concrete inclusion statement holds:

This scenario exemplifies a pervasive feature of human institutions which, as extensively argued in [10], should be incorporated by electronic ones. Current formal approaches to institutions, such as ISLANDER [6], do not allow for the formal specification of explicit translations of abstract norms into concrete ones, and focus only on norms that can be specified at the concrete system specification level. What Example 1 shows is that the problem of the abstractness of norms in institutions can be formally addressed and given a precise formal semantics. The scenario suggests that it is possible for the designer to obtain a different institution by just modifying the sets of bridge axioms, without touching the terminological axioms expressed only in the institutional language Lins. In fact, it is the case that a same set of abstract norms can be translated into different and even incompatible sets of concrete norms. This translation can nevertheless not be arbitrary ([1]).

EXAMPLE 2. (Acceptable and unacceptable translations of abstract norms) Reconsider the scenario sketched in Example 1. The transition type DISCR(i, j) has been translated into a complex procedure composed of concrete transition types. Would any translation do? Consider an alternative institution specification Ins' containing Formulae 1-3 and the following translation rule:

Would this formula be an acceptable translation of the abstract norm expressed in Formula 1? The axiom states that transitions where i receives $10 from j count as transitions of type DISCR(i, j). Needless to say this is not intuitive, because the abstract transition type DISCR(i, j) obeys some intuitive conceptual constraints (Formulae 2 and 3) that all its translations should also obey. In fact, the following inclusions would then hold in Ins':

Indeed, these
properties of the transition type PAY(j, i, $10) look at least awkward: if an agent is Dutch then by paying $10 it would be accepted, while if it were not Dutch the same action would make it not accepted. The problem is that the meaning of 'paying' is not intuitively subsumed by the meaning of 'discriminating'. In other words, a transition type PAY(j, i, $10) does not intuitively yield the effects that a sub-type of DISCR(i, j) yields. It is on the contrary perfectly intuitive that Formula 9 obeys the constraints in Formulae 2 and 3, which it does, as can be easily checked on the grounds of the semantics. It is worth stressing that without providing a model-theoretic semantics for the translation rules linking the institutional notions to the brute ones, it would not be so straightforward to model the logical constraints to which the translations are subjected (Example 2). This is precisely the advantage of viewing translation rules as specific terminological axioms, i.e., Γbridge and Abridge, working as a bridge between two languages (Definition 3). In [12], we have thoroughly compared this approach with approaches such as [9], which conceive of translation rules as inference rules. The two examples have shown how our approach can account for some essential features of institutions. In the next section the same framework is applied to provide a formal analysis of the notion of role.

2.4 Institutional modules and roles

Viewing institutions as the impositions of institutional descriptions on systems' states and transitions allows for analyzing the normative system perspective itself (i.e., institutions as sets of norms) at a finer granularity. We have seen that the terminological axioms specifying an institution concern complex descriptions of new institutional notions. Some of the institutional state types occurring in the institution specification play a key role in structuring the specification of the institution itself. The paradigmatic
example in this sense ([25]) is given by facts such as "agent i enacts role r", which will be denoted by state types rea(i, r). By stating how an agent can enact and 'deact' a role r, and what normative consequences follow from the enactment of r, an institution describes expected forms of agents' behavior while at the same time abstracting from the concrete agents taking part in the system. The sets of norms specifying an institution can be clustered on the grounds of the rea state types. For each relevant institutional state type (e.g., rea(i, r)), the terminological axioms which define an institution, i.e., its norms, can be clustered in (possibly overlapping) sets of three different types: the axioms specifying how states of that institutional type can be reached (e.g., how an agent i can enact the role r); how states of that type can be left (e.g., how an agent i can 'deact' the role r); and what kind of institutional consequences those states bear (e.g., what rights and powers agent i acquires by enacting role r). Borrowing the terminology from work in legal and institutional theory ([23, 25]), these clusters of norms can be called, respectively, institutive, terminative and status modules.

Status modules. We call status modules those sets of terminological axioms which specify the institutional consequences of the occurrence of a given institutional state of affairs, for instance, the fact that agent i enacts role r.

EXAMPLE 3. (A status module for roles) Enacting a role within an institution bears some institutional consequences that are grouped under the notion of status: by playing a role an agent acquires a specific status. Some of these consequences are deontic and concern the obligations, rights and permissions under which the agent puts itself once it enacts the role. An example which pertains to the normative description of the status of both the "buyer" and "seller" roles is the following:

If agent i enacts the buyer role and j the
seller role, every time agent i bids b to j, this action results in an institutional state testifying that the corresponding bid has been placed by i (Formula 14). Formula 15 states how the bidding action can be executed by sending a specific message to j (SEND(i, j, msg49)). Some observations are in order. As readers acquainted with deontic logic have probably already noticed, our treatment of the notion of obligation (Formula 13) again makes use of a standard reduction approach ([16]). More interesting is instead how the notion of institutional power is modeled. Essentially, the empowerment phenomenon is analyzed in terms of two rules: one specifying the institutional effects of an institutional action (Formula 14), and one translating the institutional transition type into a brute one (Formula 15). Systems of rules of this type empower the agents enacting some relevant role by establishing a connection between the brute actions of the agents and some institutional effect. Whether the agents are actually able to execute the required 'brute' actions is a different issue, since agent i can be in some states (or even all states) unable to effectuate a SEND(i, j, msg49) transition. This is the case also in human societies: priests are empowered to give rise to marriages, but if a priest is not in a state of performing the required speech acts he is actually unable to marry anybody. There is a difference between "being entitled" to make a bid and "being in state of" making a bid ([4]). In other words, Formulae 14 and 15 express only that agents playing the buyer role are entitled to make bids. The actual possibility of performing the required 'brute' actions is not an institutional issue, but rather an issue concerning the implementation of an institution in a concrete system. We address this issue extensively in Section 3 (see in particular Example 6 and Definition 5).

Institutive modules. We call institutive modules those sets of terminological axioms of an institution specification describing how states with certain institutional properties can be reached, for instance, how an agent i can reach a state in which it enacts role r. They can be seen as procedures that the institution defines in order for the agents to bring institutional states of affairs about. Formula 17 specifies instead the procedure counting as an action of type ENACT(i, r). Such a procedure is performed through a system infrastructure s, which notifies i that it has been registered as enacting role r after i sends the necessary piece of data d (SEND(i, s, d)), e.g., a valid credit card number.

Terminative modules. Analogously, we call terminative modules those sets of terminological axioms stating how a state with certain institutional properties can be left. Rules of this kind state, for instance, how an agent can stop enacting a certain role. They can thus be thought of as procedures that the institution defines in order for the agent to see to it that certain institutional states stop holding.

EXAMPLE 5. (A terminative module for roles) Terminative modules for roles specify, for instance, how a transition of type DEACT(i, r) can be executed which has as consequence the reaching of a state of type ¬rea(i, r). That is to say, i deacting a role r always leads to a state where i does not enact role r; and i sending message No. 9 to a specific interface infrastructure s counts as i deacting role r.
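The clustering of an institution's norms into institutive, terminative and status modules described above can also be viewed as a simple data-structuring exercise. The following Python fragment is only an illustrative sketch, not part of the paper's formal machinery: the axiom strings, the AXIOMS list and the cluster function are our own hypothetical encodings, tagging each norm with the rea(i, r) state type it concerns and with the kind of module it belongs to.

```python
# Minimal sketch: clustering an institution's terminological axioms
# into institutive, terminative and status modules for a role.
# All names and string encodings here are hypothetical illustrations.

from collections import defaultdict

# Each axiom is a triple: (module kind, role-enactment state type, content).
# 'institutive' : how a rea(i, r) state can be reached (cf. institutive modules)
# 'terminative' : how a rea(i, r) state can be left    (cf. Example 5)
# 'status'      : institutional consequences of rea(i, r) (cf. Example 3)
AXIOMS = [
    ("institutive", "rea(i, buyer)",
     "SEND(i, s, d) counts as ENACT(i, buyer)"),
    ("terminative", "rea(i, buyer)",
     "SEND(i, s, msg9) counts as DEACT(i, buyer)"),
    ("status", "rea(i, buyer)",
     "rea(i, buyer) & rea(j, seller): BID(i, j, b) yields bid(i, j, b)"),
    ("status", "rea(i, buyer)",
     "bid placed => obligation to pay on acceptance"),
]

def cluster(axioms):
    """Group axioms into (state type, module kind) -> list of axiom contents."""
    modules = defaultdict(list)
    for kind, state_type, content in axioms:
        modules[(state_type, kind)].append(content)
    return dict(modules)

modules = cluster(AXIOMS)
# In this toy example the status module of rea(i, buyer) holds two norms.
print(len(modules[("rea(i, buyer)", "status")]))  # → 2
```

Such a grouping mirrors the (possibly overlapping) clusters of norms discussed above; it says nothing, of course, about the logical semantics of the axioms themselves, which is given by the TBox machinery of Section 2.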
Examples 3-5 have shown how roles can be formalized in our framework, thereby getting a formal semantics: roles too are sets of terminological axioms, concerning state types of the sort rea(i, r). It is worth noticing that this modeling option is aligned with work on social theory addressing the concept of role, such as [20].

2.5 Tractable specifications of institutions

In the previous sections we fully deployed the expressivity of the language introduced in Section 2.1 and used its semantics to provide a formal understanding of many essential aspects of institutions in terms of transition systems. This section spends a few words on the viability of performing automated reasoning in the logic presented. The satisfiability problem in logic ALC(∪, ◦, ¬, id) is undecidable, since transition type inclusion axioms correspond to a version of what in Description Logic are known as "role-value maps", and logics extending ALC with role-value maps are known to be undecidable ([3]). (The satisfiability problem amounts to checking whether a state description γ is satisfiable w.r.t. a given TBox T, i.e., to checking whether there exists a model m of T such that ∅ ⊂ I(γ). Notice that the language ALC(∪, ◦, ¬, id) contains negation and intersection of arbitrary state types; it is well-known that if these operators are available then all the most typical reasoning tasks at the TBox level can be reduced to the satisfiability problem.) Tractable (i.e., polynomial-time decidable) fragments of logic ALC(∪, ◦, ¬, id) can however be isolated which still exhibit some key expressive features. One of them is the logic EL(◦). It is obtained from the description logic EL, which contains only state type intersection ⊓, existential restriction ∃ and ⊤, by adding the ⊥ state type and transition type inclusion axioms of the complex form α1 ◦ ... ◦ αn ⊑ α, with n a finite number. Notice that EL is a severely restricted fragment of ALC, since it does not contain the negation operator for state types, so that the operators ⊔ and ∀ remain undefinable. The logic EL(◦) is also a fragment of the well-investigated description logic EL++, whose satisfiability problem has been shown in [2] to be decidable in polynomial time. Despite the very limited expressivity of this fragment, some rudimentary institutional specifications can still be successfully represented. Specifically, institutive and terminative modules can be represented which contain transition type inclusion axioms. Restricted versions of status modules can also be represented, enabling two essential deontic notions: "it is possible (respectively, impossible) to reach a violation state by performing a transition of a certain type", and "it is possible (respectively, impossible) to reach a legal state by performing a transition of a certain type". To this aim the language Lins would need to be expanded with a set of state types {legal(i)} (0 ≤ i ≤ n) whose intuitive meaning is to denote legal states as opposed to states of type viol(i). Fragments like EL(◦) could be used as target logics within theory approximation approaches ([24]), by aiming at compiling TBoxes expressed in ALC(∪, ◦, ¬, id) into approximations in those fragments.

3. FROM NORMS TO STRUCTURES

3.1 Infrastructures

In discussing Example 3 we observed how "being entitled" to make a bid does not imply "being in state of" making a bid. In other words, an institution can empower agents by means of appropriate rules, but this empowerment can remain dead letter. Similar observations apply also to deontic notions: agents might be allowed to perform certain transactions under some relevant conditions, but they might be unable to do so under those same conditions. We refer to this kind of problem as infrastructural. The implementation of an institution in a concrete system therefore calls for the design of appropriate infrastructures or artifacts ([19]). The formal specification of an infrastructure amounts to the formal specification of interaction requirements, that is to say, the specification of which relevant transition types are executable and under what conditions.

DEFINITION 4. (Infrastructures as TBoxes) An infrastructure Inf = ⟨Γinf, Ainf⟩ for institution Ins is a TBox on Lbrute such that for every α ∈ L(Abridge) there exist terminological axioms in Γinf of the forms γ ≡ ∃α.⊤ (α is executable exactly in γ states) and γ' ≡ ∃¬α.⊤ (the negation of α is executable exactly in γ' states).

In other words, an infrastructure specification states all and only the conditions under which an atomic brute transition type occurring in the brute alphabet of the bridge axioms of Ins, and its negation, are executable. It states what can concretely be done and under what conditions.

EXAMPLE 6. (Infrastructure specification) Consider the institution specified in Example 1. A simple infrastructure Inf for that institution could contain, for instance, the following terminological axioms for any pair of different agents i, j and message type msg:

Notice that the executability condition is just ⊤. We call a concrete institution specification CIns an institution specification Ins coupled with an infrastructure specification Inf.

DEFINITION 5. (Concrete institution) A concrete institution obtained by joining the institution Ins = ⟨Γins, Ains⟩ and the infrastructure Inf = ⟨Γinf, Ainf⟩ is a TBox CIns = ⟨Γ, A⟩ such that Γ = Γins ∪ Γinf and A = Ains ∪ Ainf.

Obviously, different infrastructures can be devised for a same institution, giving rise to different concrete institutions; this makes precise implementation choices explicit. Of particular relevance are the implementation choices concerning abstract norms like the one represented in Formula 13. A designer can choose to regiment such a norm ([15]), i.e., make violation states unreachable, via an appropriate infrastructure.

EXAMPLE 7. (Regimentation via infrastructure specification) Consider Example 3 and suppose the following translation rule to also be part of the institution:

In other words, in states of type condition_pay(i, j, b) the only executable brute actions are BANK(i, j, b) or CC(i, j, b) and, therefore, PAY(i, j, b) would necessarily be executed. As a result, the following inclusion does not hold with respect to the corresponding concrete institution: condition_pay(i, j, b) ⊑ ∃
¬PAY(i, j, b).viol(i).

3.2 Organizational Structures

This section briefly summarizes and adapts the perspective and results on organizational structures presented in [14, 11]. We refer to that work for a more comprehensive exposition. Organizational structures typically concern the way agents interact within organizations. These interactions can be depicted as the links of a graph defined on the set of roles of the organization. Such links are then to be labeled on the basis of the type of interaction they stand for. First of all, it should be clear whether a link denotes that a certain interaction between two roles can, or ought to, or may, etc., take place. Secondly, links should be labeled according to the transition type α they refer to and the conditions γ in which that transition can, ought to, or may take place. Links in a formal specification of an organizational structure therefore stand for statements of the kind: "role r can (ought to, may) execute α w.r.t. role s if γ is the case". For the sake of simplicity, the following definition will consider only the "can" and "ought-to" interaction modalities. State and transition types in Lins ∪ Lbrute will be used to label the links of the structure. Interaction modalities can therefore be of an institutional kind or of a brute kind.

DEFINITION 6. (Organizational structure)

where:

• Mod denotes a set of pairs p = γ:α, that is, a set of state type (condition) and transition type (action) pairs of Lins ∪ Lbrute, with α being an atomic transition type indexed with a pair (i, j) denoting placeholders for the actor and the recipient of the transition;

• C ("can") denotes links to be interpreted in terms of the executability of the related α in γ, whereas O ("ought") denotes links to be interpreted in terms of the obligation to execute the related α in γ.

By the expressions (r, s) ∈ Cγ:α and (r, s) ∈ Oγ:α we therefore mean: agents enacting role r can and, respectively, ought to interact with agents enacting role s by performing α in states of type γ. As shown in [11], such formal representations of organizational structures are of use for investigating the structural properties (robustness, flexibility, etc.)
that a given organization exhibits. At this point all the formal means are in place which allow us to formally represent institutions as well as organizational structures. The next and final step of the work consists in providing a formal relation between the two frameworks. This formal relation will make explicit how institutions are related to organizational structures and vice versa. In particular, it will become clear how a normative conception of the notion of role relates to a structural one, that is, how the view of roles as sets of norms (specifying how an agent can enact and deact the role, and what social status it obtains by doing that) relates to the view of roles as positions within social structures.

3.3 Relating institutions to organizations

To translate a given concrete institution into a corresponding organizational structure we need a function t assigning pairs of roles to axioms. Let us denote with Sub the set of all state type inclusion statements γ1 ⊑ γ2 that can be expressed on Lins ∪ Lbrute. Function t is a partial function t : Sub → Roles × Roles such that, for any x ∈ Sub, if x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ (executability) or x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀¬α.viol(i) (obligation), then t(x) = (r, s), where α is an atomic transition type indexed with a pair (i, j). That is to say, executability and obligation laws containing the enactment configuration rea(i, r) ⊓ rea(j, s) as a premise and concerning transitions of type α, with i actor and j recipient of the α transition, are translated into role pairs (r, s).

DEFINITION 7. (Correspondence of specifications) A concrete institution CIns = ⟨Γ, A⟩ is said to correspond to an organizational structure OS (and vice versa) if, for every x ∈ Γ:

• x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ iff t(x) ∈ Cγ:α
• x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀¬α.viol(i) iff t(x) ∈ Oγ:α

Intuitively, function t takes axioms from Γ (i.e., the set of state type terminological axioms of CIns) and yields pairs of roles. Definition 7 labels the yielded pairs according to the syntactic form of the translated axioms. More concretely, axioms of the form rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ (executability laws) are translated into the pair (r, s) belonging to the executability dimension (i.e., C) of the organizational structure w.r.t. the execution of α under circumstances γ. Analogously, axioms of the form rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀¬α.viol(i) (obligation laws) are translated into the pair (r, s) belonging to the obligation dimension (i.e., O) of the organizational structure w.r.t. the execution of α under circumstances γ. Leaving technicalities aside, function t thus distills the terminological and infrastructural constraints of CIns into structural ones. The institutive, terminative and status modules of roles are translated into definitions of positions within an OS. From a design perspective the interpretation of Definition 7 is twofold. On the one hand (from left to right), it can make explicit what the structural consequences are of a given institution supported by a given infrastructure. On the other hand (from right to left), it can make explicit what kind of institution is actually implemented by a given organizational structure. Let us see this in some more detail. Given a concrete institution CIns, Definition 7 allows a designer to be aware of the impact that specific terminological choices (in particular, the choice of certain bridge axioms) and infrastructural ones have at a structural level. Notice that Definition 7 supports the inference of links in a structure. By checking whether a given inclusion statement of the relevant syntactic form follows from CIns (i.e., the
so-called subsumption problem of DL), it is possible, via t, to add new links to the corresponding organizational structure. This can be done recursively, by just adding any newly inferred inclusion x to the previous set of axioms Γ, thus obtaining an updated institutional specification containing Γ ∪ {x}. This process can be thought of as the inference of structural links from institutional specifications. In other words, it is possible to use institution specifications as inference tools for structural specifications. For instance, the infrastructural choice formalized in Example 7 implies that for the pair of roles (buyer, seller) it is always the case that (buyer, seller) ∈ C⊤:PAY(i,j,b). This link follows from the links (buyer, seller) ∈ C⊤:BANK(i,j,b) and (buyer, seller) ∈ C⊤:CC(i,j,b) on the grounds of the bridge axioms of the institution (Formula 22). Suppose now that a designer is interested in a system which, besides implementing an institution, also incorporates an organizational structure enjoying desirable structural properties such as flexibility or robustness. By relating structural links to state type inclusions it is therefore possible to check whether adding a link in OS results in a stronger institutional specification, that is, whether the corresponding inclusion statement is not already implied by Ins. To draw a parallelism with what was just said in the previous paragraph, this process can be thought of as the inference of norms and infrastructural constraints from the specification of organizational structures. To give a simple example, consider again Example 6 but from a reversed perspective. Suppose a designer wants a fully connected graph in the dimension C⊤:SEND(i,j) of the organizational structure. Exploiting Definition 7, we would obtain a number of executability laws in the fashion of Formula 20 for all roles in Roles (thus 2|Roles| axioms). Definition 7 establishes a correspondence between two essentially different perspectives on the design of open systems, allowing feedbacks between the two to be formally analyzed. One last observation is in order. While, given a concrete institution, an organizational structure can in principle be fully specified (by checking for all the finitely many relevant inclusion statements whether they are implied or not by the institution), it is not possible to obtain a full terminological specification from an organizational structure. This is due to the fact that in Definition 6 the strictly terminological information contained in the specification of an institution (eminently, the set of transition type axioms A, and therefore the bridge axioms) is lost while moving to a structural description. This shows, in turn, that the added value of the specification of institutions lies precisely in the terminological link they establish between institutional and brute, i.e., system-level, notions.

4. CONCLUSIONS

The paper aimed at providing a comprehensive formal analysis of the institutional metaphor and its relation to the organizational one. The predominant formal tool has been description logic. TBoxes have been used to represent the specifications of institutions (Definition 3) and their infrastructures (Definition 4), thereby providing a transition system semantics for a number of institutional notions (Examples 1-7). Multi-graphs have then been used to represent the specification of organizational structures (Definition 6). The last result presented concerned the definition of a formal correspondence between institution and organization specifications (Definition 7), which provides a formal way of switching between the two paradigms. All in all, these results deliver a way of relating abstract system specifications (i.e., institutions as sets of norms) to specifications that are closer to an implemented system (i.e., organizational structures).

Temporal Linear Logic as a Basis for Flexible Agent Interactions

Duc Q.
Pham, James Harland
School of CS&IT, RMIT University
GPO Box 2476V, Melbourne, 3001, Australia
{qupham,jah}@cs.rmit.edu.au

ABSTRACT

Interactions between agents in an open system such as the Internet require a significant degree of flexibility. A crucial aspect of the development of such methods is the notion of commitments, which provides a mechanism for coordinating interactive behaviors among agents. In this paper, we investigate an approach to model commitments with tight integration with protocol actions. This means that there is no need to have an explicit mapping from protocol actions to operations on commitments and an external mechanism to process and enforce commitments. We show how agents can reason about commitments and protocol actions to achieve the end results of protocols using a reasoning system based on temporal linear logic, which incorporates both temporal and resource-sensitive reasoning. We also discuss the application of this framework to scenarios such as online commerce.

Categories and Subject Descriptors: I.2.11 [Distributed Artificial Intelligence]: Intelligent Agents; D.3.2 [Programming Languages]: Language Classifications

General Terms: Theory, Design

1. INTRODUCTION AND MOTIVATION

Recently, software development has evolved toward the development of intelligent, interconnected systems working in a distributed manner. The agent paradigm has become well suited as a design metaphor to deal with complex systems comprising many components, each having their own thread of control and purposes, and involved in dynamic and complex interactions. In multi-agent environments, agents often need to interact with each other to fulfill their goals. Protocols are used to regulate interactions. In traditional approaches to protocol specification, like those using Finite State Machines or Petri Nets, protocols are often predetermined legal sequences of interactive behaviors. In frequently changing environments such as the Internet, such fixed
sequences can quickly become outdated and are prone to failure.\nTherefore, agents are required to adapt their interactive behaviors to succeed, and interactions among agents should not be constructed rigidly.\nTo achieve flexibility, as characterized by Yolum and Singh in [11], interaction protocols should ensure that agents have autonomy over their interactive behaviors and be free from any unnecessary constraints.\nAlso, agents should be allowed to adjust their interactive actions to take advantage of opportunities or handle exceptions that arise during interaction.\nFor example, consider the scenario below for online sales.\nA merchant Mer has 200 cricket bats available for sale with a unit price of 10 dollars.\nA customer Cus has $50.\nCus has a goal of obtaining from Mer a cricket bat at some time.\nThere are two options for Cus to pay.\nIf Cus uses credit payment, Mer needs a bank Ebank to check Cus's credit.\nIf Cus's credit is approved, Ebank will arrange the credit payment.\nOtherwise, Cus may then take the option to pay via PayPal.\nThe interaction ends when goods are delivered and payment is arranged.\nA flexible approach to this example should include several features.\nFirstly, the payment method used by Cus should be at Cus's own choice, and have the property that if Cus's credit check results in a disapproval, this exception should also be handled automatically by Cus's switching to PayPal.\nSecondly, there should be no unnecessary constraint on the order in which actions are performed, such as which of making payments and sending the cricket bat should come first.\nThirdly, choosing a sequence of interactive actions should be based on reasoning about the intrinsic meanings of protocol actions, which are based on the notion of commitment, i.e.
which refers to a strong promise to other agent(s) to undertake some courses of action.\nCurrent approaches [11, 12, 10, 1] to achieving flexibility using the notion of commitment make use of an abstract layer of commitments.\nHowever, in these approaches, a mapping from protocol actions onto operations on commitments as well as handling and enforcement mechanisms of commitments must be externally provided.\n124 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nExecution of protocol actions also requires concurrent execution of operations on related commitments.\nAs a result, the overhead of processing the commitment layer makes specification and execution of protocols more complicated and error prone.\nThere is also a lack of a logic to naturally express aspects of resources, internal and external choices, as well as time, of protocols.\nRather than creating another layer of commitment outside protocol actions, we try to achieve a modeling of commitments that is integrated with protocol actions.\nBoth commitments and protocol actions can then be reasoned about in one consistent system.\nIn order to achieve that, we specify protocols in a declarative manner, i.e.
what is to be achieved rather than how agents should interact.\nA key to this is using logic.\nTemporal logic, in particular, is suitable for describing and reasoning about temporal constraints, while linear logic [3] is quite suitable for modeling resources.\nWe suggest using a combination of linear logic and temporal logic to construct a commitment based interaction framework which allows both temporal and resource-related reasoning for interaction protocols.\nThis provides a natural manipulation and reasoning mechanism as well as internal enforcement mechanisms for commitments based on proof search.\nThis paper is organized as follows.\nSection 2 discusses the background material of linear logic, temporal linear logic and commitments.\nSection 3 introduces our modeling framework and specification of protocols.\nSection 4 discusses how our framework can be used for an example of online sale interactions between a merchant, a bank and a customer.\nWe then discuss the advantages and limitations of using our framework to model interaction protocols and achieve flexibility in Section 5.\nSection 6 presents our conclusions and items of further work.\n2.\nBACKGROUND In order to increase the agents' autonomy over their interactive behaviors, protocols should be specified in terms of what is to be achieved rather than how the agents should act.\nIn other words, protocols should be specified in a declarative manner.\nUsing logic is central to this specification process.\n2.1 Linear Logic Logic has been used as a formalism to model and reason about agent systems.\nLinear logic [3] is well-known for modeling resources as well as updating processes.\nIt has been considered in agent systems to support agent negotiation and planning by means of proof search [5, 8].\nIn real life, resources are consumed and new resources are created.\nIn a logic such as classical or temporal logic, however, a direct mapping of resources onto formulas is troublesome.\nIf we model resources like A as
one dollar and B as a chocolate bar, then A \u21d2 B in classical logic is read as from one dollar we can get a chocolate bar.\nThis causes problems, as the implication allows one to get a chocolate bar (B is true) while retaining one dollar (A remains true).\nIn order to resolve such resource-formula mapping issues, Girard proposed the constraint that formulas are used exactly once and can no longer be freely added or removed in derivations, hence treating linear logic formulas as resources.\nIn linear logic, a linear implication A \u22b8 B, however, allows A to be removed after deriving B, which means the dollar is gone after using one dollar to buy a chocolate bar.\nClassical conjunction (and) and disjunction (or) are recast over different uses of contexts - multiplicative as combining and additive as sharing - to come up with four connectives.\nA \u2297 A (multiplicative conjunction) means that one has two As at the same time, which is different from A \u2227 A = A. Hence, \u2297 allows a natural expression of proportion.\nA \u2118 B (multiplicative disjunction) means that if not A then B, or vice versa, but not both A and B.\nThe ability to specify choices via the additive connectives is a particularly useful feature of linear logic.\nA & B (additive conjunction) stands for one's own choice, either of A or B but not both.\nA \u2295 B (additive disjunction) stands for the possibility of either A or B, but we don't know which.\nAs remarked in [5], & and \u2295 allow choices to be made clear between internal choices (one's own) and external choices (others' choice).\nFor instance, to specify that the choice of places A or B for goods' delivery is ours as the supplier, we use A & B; if it is the client's, we use A \u2295 B.\nIn agent systems, this duality between inner and outer choices is manifested by one agent having the power to choose between alternatives and the other having to react to whatever choice is made.\nMoreover, during interaction, the ability to
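The resource reading of linear implication sketched above - consuming A to produce B, rather than keeping A true - can be illustrated operationally by treating a state as a multiset of atoms and an implication as a rewrite rule. This is only a minimal sketch of the resource intuition, not the sequent calculus itself; the atom names `dollar` and `choc_bar` are ours:

```python
from collections import Counter

def apply_rule(state, consumed, produced):
    """Apply a linear implication as a rewrite: remove the consumed
    multiset of atoms, then add the produced one.  Fails (rather than
    silently keeping the antecedent, as classical implication would)
    if the resources are not available."""
    state = Counter(state)
    for atom, n in Counter(consumed).items():
        if state[atom] < n:
            raise ValueError(f"missing resource: {atom}")
        state[atom] -= n
    state += Counter(produced)  # in-place Counter addition drops zero counts
    return +state

# dollar -o choc_bar: after buying, the dollar is gone.
s0 = Counter({"dollar": 2})
s1 = apply_rule(s0, ["dollar"], ["choc_bar"])
# s1 is now {'dollar': 1, 'choc_bar': 1}: one dollar spent, one bar gained
```

Under the classical reading A ⇒ B, by contrast, the same step would leave both counts of `dollar` intact while making `choc_bar` available.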
match consumption and supply of resources among agents can simplify the specification of resource allocations.\nLinear logic is a natural mechanism to provide this ability [5].\nIn addition, it is emphasized in [8] that linear logic can be used to model agent states as sets of consumable resources and, particularly, linear implication can be used to model transitions among states and capabilities of agents.\n2.2 Temporal Linear Logic While linear logic provides advantages for modeling and reasoning about resources, it does not deal naturally with time constraints.\nTemporal logic, on the other hand, is a formal system which addresses the description of and reasoning about the changes of truth values of logic expressions over time [2].\nTemporal logic can be used for specification and verification of concurrent and reactive programs [2].\nTemporal Linear Logic (TLL) [6] is the result of introducing temporal logic into linear logic and hence is resource-conscious as well as dealing with time.\nThe temporal operators used are \u25cb (next), \u25a1 (anytime), and \u25c7 (sometime) [6].\nFormulas with no temporal operators can be considered as being available only at present.\nAdding \u25cb to a formula A, i.e. \u25cbA, means that A can be used only at the next time point and exactly once.\nSimilarly, \u25a1A means that A can be used at any time and exactly once.\n\u25c7A means that A can be used once at some time.\nThough both \u25a1 and \u25c7 refer to a point in time, the choice of which time is different.\nRegarding \u25a1, the choice is an internal choice, as appropriate to one's own capability.\nWith \u25c7, the choice is externally decided by others.\n2.3 Commitment The concept of social commitment has been recognized as fundamental to agent interaction.\nIndeed, social commitment provides intrinsic meanings of protocol actions and states [11].\nIn particular, persistence in commitments introduces into agents' consideration a certain level of predictability of other agents' actions, which is important when agents deal with issues of inter-dependencies, global constraints or resources sharing [7].\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 125\nCommitment based approaches associate protocol actions with operations on commitments and protocol states with the set of effective commitments [11].\nCompleting the protocol is done via means-end reasoning on commitment operations to bring the current state to final states where all commitments are resolved.\nFrom then, the corresponding legal sequences of interactive actions are determined.\nHence, these approaches systematically enhance a variety of legal computations [11].\nCommitments can be reduced to a more fundamental form known as pre-commitments.\nA pre-commitment here refers to a potential commitment that specifies what the owner agent is willing to commit to [4], like performing some actions or achieving a particular state.\nAgents can negotiate about pre-commitments by sending proposals of them to others.\nThe others can respond by agreeing or disagreeing with the proposal or proposing another pre-commitment.\nOnce a pre-commitment is agreed, it then becomes a commitment and the process moves from negotiation phase to
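For reference, the connectives and temporal operators introduced in Sections 2.1 and 2.2 can be collected in one table. This restates the readings above in conventional linear-logic notation; the `\parr` and `\with` macros come from the cmll LaTeX package, and `\multimap`, `\Box`, `\Diamond` from amssymb:

```latex
\documentclass{article}
\usepackage{amssymb,cmll}
\begin{document}
\begin{tabular}{ll}
$A \otimes B$   & $A$ and $B$ available together (multiplicative conjunction) \\
$A \parr B$     & if not $A$ then $B$, and vice versa (multiplicative disjunction) \\
$A \with B$     & $A$ or $B$, at the owner's internal choice (additive conjunction) \\
$A \oplus B$    & $A$ or $B$, the choice made externally (additive disjunction) \\
$A \multimap B$ & consume $A$ to produce $B$ (linear implication) \\
$\bigcirc A$    & $A$ usable exactly once, at the next time point \\
$\Box A$        & $A$ usable exactly once, at an internally chosen time \\
$\Diamond A$    & $A$ usable exactly once, at an externally chosen time \\
\end{tabular}
\end{document}
```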
commitment phase, in which the agents act to fulfill their commitments.\n3.\nMODELING AGENT INTERACTIONS Protocols are normally viewed as external to agents and are essentially a set of commitments externally imposed on participating agents.\nWe take an internal view of protocols, i.e. from the view of participating agents, by putting the specification of commitments locally at the respective agents according to their roles.\nSuch an approach enables agents to manage their own protocol commitments.\nIndeed, agents no longer accept and follow a given set of commitments but can reason about which commitments of theirs to offer and which commitments of others to take, while considering the current needs and the environment.\nProtocols arise as commitments are then linked together via agents' reasoning based on proof search during the interaction.\nAlso, ongoing changes in the environment are taken as input into the generation of protocols by agent reasoning.\nThis is the reverse of other approaches, which try to make the specification flexible to accommodate changes in the environment.\nHence, it is a step closer to enabling emergent protocols, which makes protocols more dynamic and flexible to the context.\nIn a nutshell, services are what agents are capable of providing to other agents.\nCommitments can then be seen to arise from combinations of services, i.e.
an agent's capabilities.\nHence, our approach shifts specifying a set of protocol commitments to specifying sets of pre-commitments as capabilities for each agent.\nCommitments can then be reasoned about and manipulated by the same logic mechanism as is used for the agents' actions, resources and goals, which greatly simplifies the system.\nOur framework uses TLL as a means of specifying interaction protocols.\nWe encode various concepts such as resource, capability and commitment in TLL.\nThe symmetry between a formula and its negation in TLL is explored as a way to model resources and commitments.\nWe then discuss the central role of pre-commitments, and how they are specified at each participating agent.\nIt then remains for agents to reason about pre-commitments to form protocol commitments, which are subsequently discharged.\n3.1 Modeling resources and capabilities A unit of consumable resources is modeled as a proposition in linear logic.\nNumeric figures can be used to abbreviate a multiplicative conjunction of the same instances.\nFor example, 2 dollar = dollar \u2297 dollar.\nSimilarly, \u25cb^3 A is a shorthand for \u25cb\u25cb\u25cbA.\nIn order to address the dynamic manipulation of resources, we also include information about the location and ownership in the encoding of resources, to address the relocation and changes in possession of resources during agent interaction.\nThat resource A is located at agent \u03b1 and owned by agent \u03b2 is expressed via a shorthand notation as A@\u03b1\u03b2, which is treated as a logic proposition in our framework.\nThis notation can be later extended to a more complex logic construct to reason about changes in location and ownership.\nIn our running example, a cricket bat cricket b being located at and owned by agent Mer is denoted as cricket b@MM.\nAfter a successful sale to the customer agent Cus, the cricket bat will be relocated to and owned by agent Cus.\nThe formula cricket b@CC will replace the formula cricket b@MM to reflect the changes.\nOur treatment of unlimited resources is to model them as a number \u03c3 of copies of the resource's formula, such that the number \u03c3 is chosen to be extremely large relative to the context.\nFor instance, to indicate that the merchant Mer can issue an unlimited number of sale quotes at any time, we use \u03c3 \u25a1sale quote@MM.\nDeclaration of actions is also modeled in a similar manner as of resources.\nThe capabilities of agents refer to producing, consuming, relocating and changing ownership of resources.\nCapabilities are represented by describing the state before and after performing them.\nThe general representation form is \u0393 \u22b8 \u0394, in which \u0393 describes the conditions before and \u0394 describes the conditions after.\nThe linear implication \u22b8 indeed ensures that the conditions before will be transformed into the conditions after.\nMoreover, some capabilities can be applied any number of times in the interaction context and their formulas are also preceded by the number \u03c3.\nTo take an example, we consider the capability of agent Mer of selling a cricket bat for 10 dollars.\nThe conditions before are 10 dollars and a payment method from agent Cus: 10$@CC \u2297 pay m@CC.\nGiven these, by applying the capability, Mer will gain 10 dollars (10$@MM) and commit to providing a cricket bat (cricket b@MM\u22a5) so that Cus will get a cricket bat (cricket b@CC).\nTogether, the capability is encoded as 10$@CC \u2297 pay m@CC \u22b8 10$@MM \u2297 cricket b@CC \u2297 cricket b@MM\u22a5.\n3.2 Modeling commitments We discuss the modeling of various types of commitments, their fulfillments and enforcement mechanisms.\nDue to duality in linear logic, positive formulas can be regarded as formulas in supply and negative formulas can be regarded as formulas in demand.\nHence, we take an approach of modeling non-conditional or base commitments as negative formulas.\nIn particular, by turning a formula into its negative form, a base commitment to derive the resources or carry out the actions associated with the formula is created.\nIn the above example, a commitment of agent Mer to provide a cricket bat (cricket b@MM) is cricket b@MM\u22a5.\nA base commitment is fulfilled (discharged) whenever the committing agent successfully brings about the respective resources or carries out the actions as required by the commitment.\nIn TLL modeling, this means that the corresponding positive formula is derived.\nResolution of commitments can then be naturally carried out by inference in TLL.\nFor example, cricket b@MM will fulfil the commitment cricket b@MM\u22a5 and both formulas are automatically removed, as cricket b@MM \u2297 cricket b@MM\u22a5 \u22b8 \u22a5.\nUnder a further assumption that agents are expected to resolve all formulas in demand (removing negative formulas), this creates a driving pressure on agents to resolve base commitments.\nThis pressure then becomes a natural and internal enforcement mechanism for base commitments.\nA commitment with conditions (or conditional commitment) can be modeled by connecting the conditions to base commitments via a linear implication.\nA general form is \u0393 \u22b8 \u0394 where \u0393 is the condition part and \u0394 includes base commitments.\nIf the condition \u0393 is derived, by consuming \u0393, the linear implication will ensure that \u0394 results, which means the base commitments in \u0394 become effective.\nIf the conditions can not be achieved, the linear implication can not be applied and hence the commitment part in the conditional commitment is still inactive.\nIn our approach, conditional commitments are specified in their potential form as pre-commitments of participating agents.\nPre-commitments are negotiated among agents via proposals and, upon being accepted, will form conditional commitments among the engaged agents.\nConditional
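The supply/demand reading of positive and negative formulas can be sketched with signed atoms: a base commitment is recorded as a negative occurrence, and deriving the matching positive formula cancels it, mirroring the deduction com \u2297 com\u22a5 \u22b8 \u22a5. This is an illustrative multiset model of that one step, not the TLL proof system; the atom names are ours:

```python
from collections import Counter

def cancel(state):
    """Resolve base commitments: each positive atom cancels one matching
    negative occurrence (written here with a leading '-'), modeling the
    deduction com (x) com-perp -o bottom."""
    state = Counter(state)
    for atom in [a for a in state if a.startswith("-")]:
        pos = atom[1:]
        k = min(state[atom], state[pos])  # how many pairs can cancel
        state[atom] -= k
        state[pos] -= k
    return +state  # drop zero counts

# Mer's commitment to provide a cricket bat is a formula in demand;
# producing the bat discharges it, leaving only the remaining supply.
state = Counter({"-cricket_b@M": 1, "cricket_b@M": 1, "10$@M": 1})
state = cancel(state)
assert "-cricket_b@M" not in state  # commitment discharged
```

Unresolved negative atoms left in the state correspond to the "driving pressure" described above: the agent still owes those formulas.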
commitments are interpreted as follows: the condition \u0393 is required of the proposed agent and the commitment part \u0394 is the responsibility of the owner (proposing) agent.\nIndeed, such an interpretation and the encoding of \u22b8 realize the notion of a conditional commitment: the owner agent is willing to commit to deriving \u0394 given that the proposed agent satisfies the conditions \u0393.\nConditional commitments, pre-commitments and capabilities all have similar encodings.\nHowever, their differences lie in the phases of commitment that they are in.\nCapabilities are used internally by the owner agent and do not involve any commitment.\nPre-commitments can be regarded as capabilities intended for forming conditional commitments.\nUpon being accepted, pre-commitments will turn into conditional commitments and bring the two engaged agents into a commitment phase.\nAs an example, consider that Mer has a capability of selling cricket bats: (10$@CC \u2297 pay m@CC) \u22b8 (10$@MM \u2297 cricket b@MM\u22a5 \u2297 cricket b@CC).\nWhen Mer proposes its capability to Cus, the capability acts as a pre-commitment.\nWhen the proposal gets accepted, that pre-commitment will turn into a conditional commitment in which Mer commits to fulfilling the base commitment cricket b@MM\u22a5 (which leads to having cricket b@CC) upon the condition that Cus derives 10$@CC \u2297 pay m@CC (which leads to having 10$@MM).\nBreakable commitments, which are in place to provide agents with the desired flexibility to remove themselves from their commitments (cancel commitments), are also modeled naturally in our framework.\nA base commitment Com\u22a5 is turned into a breakable base commitment (cond \u2295 Com)\u22a5.\nThe extra token cond reflects the agent's internal deliberation about when the commitment to derive Com is broken.\nOnce cond is produced, due to the logic deduction cond \u2297 (cond \u2295 Com)\u22a5 \u22b8 \u22a5, the commitment (cond \u2295 Com)\u22a5 is removed, hence breaking the commitment of
deriving Com.\nMoreover, a breakable conditional commitment is modeled as A \u22b8 (1 & B), instead of A \u22b8 B.\nWhen the condition A is provided, the linear implication brings about (1 & B) and it is now up to the owner agent's internal choice whether 1 or B results.\nIf the agent chooses 1, which practically means nothing is derived, then the conditional commitment is deliberately broken.\n3.3 Protocol Construction Given the modeling of various interaction concepts like resource, action, capability, and commitment, we will discuss how protocols can be specified.\nIn our framework, each agent is encoded with the resources, actions, capabilities, pre-commitments, and any pending commitments that it has.\nPre-commitments, which stem from services the agents are capable of providing, are designated to be fair exchanges.\nIn a pre-commitment, all the requirements of the other party are put in the condition part and all the effects to be provided by the owner agent are put in the commitment part to make up a trade-off.\nSuch a design allows agents to freely propose pre-commitments to any interested parties.\nAn example of a pre-commitment is that of an agent Merchant regarding a sale of a cricket bat: [10$@CC \u2297 pay m@CC \u22b8 10$@MM \u2297 cricket b@CC \u2297 cricket b@MM\u22a5].\nThe condition is the requirement that the customer agent provides 10 dollars, which is assumed to be the price of a cricket bat, via a payment method.\nThe exchange is the cricket bat for the customer (cricket b@CC) and hence is fair to the merchant.\nProtocols are specified in terms of sets of pre-commitments at participating agents.\nGiven some initial interaction commitments, a protocol emerges as agents reason about which pre-commitments to offer and accept in order to fulfill these commitments.\nGiven such a protocol specification, we then discuss how interaction might take place.\nAn interaction can start with a request or a proposal.\nWhen an agent can not achieve some commitments by
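The request/proposal exchange described here can be sketched as two checks: a requested formula is matched against the effect side of each stored pre-commitment, and a received proposal is accepted only if its effects cover some pending commitment. The rule and atom names below are ours, and simple membership tests stand in for the TLL proof search that decides applicability in the actual framework:

```python
# A pre-commitment is (conditions, effects); both sides are lists of atoms.
# Hypothetical rule names; real applicability is decided by proof search.
PRE_COMMITMENTS = {
    "rule1": (["10$@C", "pay_m@C"], ["10$@M", "cricket_b@C"]),
    "rule2": (["credit_paym@M"], ["credit_paid@M"]),
}

def find_proposals(requested_atom):
    """On REQUEST: return the pre-commitments whose effect side can
    produce the requested formula (candidates for a PROPOSE reply)."""
    return [name for name, (_, effects) in PRE_COMMITMENTS.items()
            if requested_atom in effects]

def accept(proposal_effects, pending_commitments):
    """On PROPOSE: ACCEPT iff the proposal's effects would resolve at
    least one of the recipient's pending commitments."""
    return any(c in proposal_effects for c in pending_commitments)

# Cus requests a cricket bat; Mer finds rule1 to propose; Cus accepts.
candidates = find_proposals("cricket_b@C")
accepted = accept(["10$@M", "cricket_b@C"], ["cricket_b@C"])
```

A failure notice corresponds to `find_proposals` returning an empty list, and REFUSE to `accept` returning `False`.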
itself, it can make a request of them or propose a relevant pre-commitment to an appropriate agent to fulfill them.\nThe choice of which pre-commitments depends on whether such pre-commitments can produce the formulas to fulfill the agent's pending commitments.\nWhen an agent receives a request, it searches for pre-commitments that can together produce the required formulas of the request.\nThose pre-commitments found will be used as proposals to the requesting agent.\nOtherwise, a failure notice will be returned.\nWhen a proposal is received, the recipient agent also performs a search, with the inclusion of the proposal, for a proof of those formulas that can resolve its commitments.\nIf the search result is positive, the proposal is accepted and becomes a commitment.\nThe recipient then attempts to fulfill the conditions of the commitments.\nOtherwise, the proposal is refused and no commitment is formed.\nThroughout the interaction, proof search plays a vital role in protocol construction.\nProof search reveals that some commitments can not be resolved locally or that some pre-commitments can be used to resolve pending commitments, which prompts the agent to make a request or proposal respectively.\nProof search also determines which pre-commitments are relevant to fulfillment of a request, which helps agents to decide which pre-commitments to propose to answer the request.\nMoreover, whether a received proposal is relevant to any pending commitments or not is also determined by a search for a proof of these commitments with an inclusion of the proposal.\nConditions of proposals can be resolved by proof search as it links them with the agents' current resources and capabilities as well as any relevant pre-commitments.\nTherefore, it can be seen that proof search performed by participating agents can link up their respective pre-commitments and turn them into commitments as
appropriate, which gives rise to protocol formation.\nWe will demonstrate this via our running example in section 4.\n3.4 Interactive Messages Agents interact by sending messages.\nWe address agent interaction in a simple model which contains messages of type requests, proposals, acceptance, refusal and failure notice.\nWe denote Source to Destination: prior to each message to indicate the source and destination of the message.\nFor example, Cust to Mer: denotes that the message is sent from agent Cust to agent Mer.\nRequest messages start with the key word REQUEST: REQUEST + formula.\nFormulas in request messages are of commitments.\nProposal messages are preceded with PROPOSE.\nFormulas are of capabilities.\nFor example, \u03b1 to \u03b2: PROPOSE \u0393 \u22b8 \u0394 is a proposal from agent \u03b1 to agent \u03b2.\nThere are messages that agents use to respond to a proposal.\nAgents can indicate an acceptance: ACCEPT, or a refusal: REFUSE.\nTo signal a failure in fulfilling a request or proposal, agents reply with that request or proposal message appended with FAIL.\n3.5 Generating Interactions As we have seen, temporal linear logic provides an elegant means for encoding the various concepts of agent interaction in a commitment based specification framework.\nAppropriate interaction is generated as agents negotiate their specified pre-commitments to fulfill their goals.\nThe association among pre-commitments at participating agents and the monitoring of commitments to ensure that all are discharged are performed by proof search.\nIn the next section, we will demonstrate how specification and generation of interactions in our framework might work.\n4.\nEXAMPLE We return to the online sales scenario introduced in Section 1.\n4.1 Specifying Protocol We design a set of pre-commitments and capabilities to implement the above scenario.\nFor simplicity, we refer to them as rules.\nRules at agent Mer Mer has available at any time 200 cricket bats for sale and can issue sale
quotes at any time: 200 \u25a1cricket b@MM \u2297 \u03c3 \u25a1sale quote@MM.\nRule 1: Mer commits to offering a cricket bat (cricket b@MM\u22a5) to Cus (cricket b@CC) if Cus pays 10 dollars (10$@CC) either via Paypal or credit card.\nThe choice is at Cus.\n\u03c3 [10$@CC \u2297 (Paypal paid@MM \u2295 credit paid@MM) \u22b8 10$@MM \u2297 cricket b@CC \u2297 cricket b@MM\u22a5] Rule 2: If EBank carries out the credit payment to Mer then the requirement of credit payment at Mer is fulfilled: \u03c3 [credit paym@MB \u22b8 credit paid@MM] Rule 3: If Ebank informs Mer of its disapproval of Cus's credit then Mer will also let Cus know.\n\u03c3 [credit not appr@MB \u22b8 credit not appr@CB] Rules at agent Ebank Rule 4: Upon receiving a sale quote from Mer, at the next time point, Ebank commits to either informing Mer that Cus's credit is not approved (\u25cbcredit not appr@MB) or arranging a credit payment to Mer (\u25cbcredit paym@MB).\nThe decision is dependent on the credibility of Cus and hence is external (\u2295) to Ebank and Mer: \u03c3 [sale quote@MM \u22b8 \u25cb(credit not appr@MB \u2295 credit paym@MB)] Rules at agent Cus Cus has an amount of 50 dollars available at any time, which can be used for credit payment or cash payment: \u25a1 50$@CC.\nCus has a commitment of obtaining a cricket bat at some time: [\u25c7 cricket b@CC]\u22a5.\nRule 5: Cus will pay Mer via Paypal if there is an indication from EBank that Cus's credit is not approved: \u03c3 [credit not appr@CB \u22b8 Paypal paid@MM] 4.2 Description of the interaction Cus requests from Mer a cricket bat at some time.\nMer replies with a proposal in which each cricket bat costs 10 dollars.\nCus needs to prepare 10 dollars and payment can be made by credit or via Paypal.\nAssuming that Cus only pays via Paypal if credit payment fails, Cus will let Mer charge by credit.\nMer will then ask EBank to arrange a credit payment.\nEBank proposes that Mer gives a quote of sale and, depending on Cus's credibility, at the next time point either credit payment will be arranged or a disapproval of Cus's credit will be informed.\nMer accepts and fulfills the conditions.\nIf the first case happens, credit payment is done.\nIf the second case happens, credit payment has failed and Cus may backtrack to take the option of paying via Paypal.\nOnce payment is arranged, Mer will apply its original proposal to satisfy Cus's request of a cricket bat, hence removing one cricket bat and adding 10 dollars into its set of resources.\n4.3 Interaction 1.\nCus can not fulfill its commitment of [\u25c7 cricket b@CC]\u22a5 and hence makes a request to Merchant: C to M: REQUEST [\u25c7 cricket b@CC]\u22a5 2.\nTo meet the request, Mer searches for applicable rules.\nOne application of rule 1 can derive cricket b@CC, and from cricket b@CC, \u25c7 cricket b@CC can be derived.\nMer will propose rule 1 at a time instance n1 to Cus as a pre-commitment.\nM to C: PROPOSE \u25cb^n1 [10$@CC \u2297 (Paypal paid@MM \u2295 credit paid@MM) \u22b8 10$@MM \u2297 cricket b@CC \u2297 cricket b@MM\u22a5] With similar analysis, Cus determines that, given that the conditions can be satisfied, the proposal can help to derive its request.\nHence, C to M: ACCEPT Cus analyzes the conditions of the accepted proposal by proof search: \u25cb^n1 10$@CC \u2297 (\u25cb^n1 Paypal paid@MM \u2295 \u25cb^n1 credit paid@MM) derives \u25cb^n1 (10$@CC \u2297 (Paypal paid@MM \u2295 credit paid@MM)) -(*)- From (*), one way to satisfy the conditions is for Cus to derive, at the next n1 time points, 10 dollars (\u25cb^n1 10$@CC); and to choose paying via Paypal (\u25cb^n1 Paypal paid@MM) OR by credit payment (\u25cb^n1 credit paid@MM).\n3.\nDeriving \u25cb^n1 10$@CC: as Cus has 50 dollars, it can make use of 10 dollars: \u25a1 10$@CC derives \u25cb^n1 10$@CC.\nThere are two options for the payment method; the choice is at agent Cus.\nWe assume that Cus prefers credit payment.\n4.\nDeriving \u25cb^n1 credit paid@MM: Cus can not derive this formula by itself, hence, it will make a request to Mer: C to M: REQUEST [\u25cb^n1 credit paid@MM]\u22a5.\n5.\nRule 2 at Mer is applicable but Mer can not derive its condition (\u25cb^n1 credit paym@MB).\nHence, Mer will further make a request to EBank.\nM to E: REQUEST [\u25cb^n1 credit paym@MB]\u22a5 Ebank searches and finds rule 4 applicable.\nBecause credit paym@MB will be available one time point after the rule's application time, Ebank proposes to Mer an instance of rule 4 at the next n1-1 time points.\n6.\nB to M: PROPOSE \u25cb^n1-1 [sale quote@MM \u22b8 \u25cb(credit not appr@MB \u2295 credit paym@MB)] With similar analysis, Mer accepts the proposal.\nM to B: ACCEPT The rule condition is fulfilled by Mer, as \u25a1sale quote@MM derives \u25cb^n1-1 sale quote@MM.\nHence, Ebank then applies the proposal to derive \u25cb^n1-1 \u25cb(credit not appr@MB \u2295 credit paym@MB).\n\u2295 indicates the choice is external to both agents.\nThere are two cases: Cus's credit is approved or disapproved.\nFor simplicity, we show only the case where Cus's credit is approved.\nAt the next (n1-1) time points, \u25cb^n1-1 \u25cb(credit not appr@MB \u2295 credit paym@MB) becomes \u25cb^n1-1 \u25cbcredit paym@MB = \u25cb^n1 credit paym@MB.\nAs a result, at the next n1 time points, Ebank will arrange the credit payment.\n7.\nMer fulfills Cus's initial request.\nWhen either \u25cb^n1 Paypal paid@MM (if Cus pays via Paypal) or \u25cb^n1 credit paid@MM (if Cus pays by credit card) is derived, \u25cb^n1 (Paypal paid@MM \u2295 credit paid@MM) is also derived, hence the payment method is arranged.\nTogether with the other condition \u25cb^n1 10$@CC being satisfied, this allows the initial proposal to be applied by Mer to derive \u25cb^n1 cricket b@CC and a commitment \u25cb^n1 cricket b@MM\u22a5 for Mer, which is also resolved by the resource cricket b@MM available at Mer.\nAny value of n1 such that n1 \u2212 1 \u2265 0 \u21d4 n1 \u2265 1 will allow Mer to fulfill Cus's initial request of [\u25c7 cricket b@CC]\u22a5.\nThe interaction ends as all commitments are resolved.\n4.4 Flexibility The desired flexibility has been
achieved in the example.\nIt is Cus's own decision to proceed with the preferred payment method.\nAlso, the non-determinism of whether Cus's credit is disapproved or credit payment is made to Mer is faithfully represented.\nIf an exception happens such that Cus's credit is not approved, credit not appr@CB is produced and Cus can backtrack to paying via Paypal.\nRule 5 will then be utilized to allow Cus to handle the exception by paying via Paypal.\nMoreover, in order to specify that making payments and sending cricket bats can be in any order, we can add \u25c7 in front of the payment method in rule 1 as follows: \u03c3 [10$@CC \u2297 \u25c7(Paypal paid@MM \u2295 credit paid@MM) \u22b8 10$@MM \u2297 cricket b@CC \u2297 cricket b@MM\u22a5].\nThis addition in the condition of the rule means that the time of payment can be any time of Cus's choice, as long as Cus pays, and hence the time order between making payments and sending goods becomes flexible.\n5.\nENCODING ISSUES 5.1 Advantages of TLL framework Our TLL framework has demonstrated natural and expressive specification of agent interaction protocols.\nLinear implication (\u22b8) expresses a causal relationship, which makes it natural to model a removal or a consumption, especially of resources, together with its consequences.\nHence, in our framework, resource transformation is modeled as a linear implication from the consumed resources to the produced resources.\nResource relocation is modeled as a linear implication from a resource at one agent to that resource at the other agent.\nLinear implication also ensures that fulfillment of the conditions of a conditional commitment will cause the commitments to happen.\nMoreover, state updates of agents result from a linear implication from the old state to the current state.\nTemporal operators (\u25cb, \u25a1 and \u25c7) and their combinations help to specify the time of actions and of resource availability, and express the time order of events.\nParticularly, precise time points are described by the use
of the operator \u25cb or multiple copies of it.\nBased on this ability to specify correct time points for actions or events, their time order or sequencing can also be captured.\nAlso, a sense of duration is simulated by spreading copies of the resources' or actions' formulas across multiple adjacent time points.\nMoreover, uncertainty in time can be represented and reasoned about by the use of \u25c7 and \u25a1 and their combinations with \u25cb.\n\u25c7 can be used to express outer non-determinism while \u25a1 expresses inner non-determinism.\nThese time properties of resources, actions and events are correctly respected throughout the agent reasoning process based on the sequent calculus rules.\nFurthermore, the centrality of the notion of commitment in agent interaction has been recognized in many frameworks [11, 12, 1, 10, 4].\nThe Sixth Intl.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 129\nHowever, to the best of our knowledge, modeling commitments directly at the propositional level of such a resource-conscious and time-aware logic as TLL is first investigated in our framework.\nOur framework models base commitments as negative formulas and conditional commitments via the use of linear implication and\/or negative formulas.\nThe modeling of commitments has a number of advantages:\n\u2022 Commitments are represented directly at the propositional logic level or via a logic connective rather than via a non-logical construct as in [11], which makes the treatment of commitments more natural and simple, and allows the use of readily available proof search systems, such as the sequent calculus, for handling commitments.\nExisting logic connectives like \u2297, \u22b8, \u2295 and & are also readily available for describing the relationships among commitments.\n\u2022 Fulfillment of commitments then becomes deriving the corresponding positive formulas or condition formulas, which simply reduces to a proof search task.\nAlso, given the required formulas, fulfillment of commitments can be implemented
easily and automatically as the deduction com \u2297 com\u22a5 \u22a2 \u22a5.\n\u2022 The enforcement of commitments is also internal and simply implemented via the assumption that agents are driven to remove all negative formulas for base commitments, and via the use of linear implication for conditional commitments.\nRegarding making protocol specification more flexible, our approach marks a number of significant points.\nFirstly, flexibility of protocol specifications in our framework comes from the expressive power of the connectives of TLL.\n& and \u2295 refer to internal and external choices of agents over resources and actions, while \u25a1 and \u25c7 refer to internal and external choices in the time domain.\nGiven that flexibility includes the ability to make a sensible choice, having the choices expressed explicitly in the specification of interaction protocols provides agents with an opportunity to reason about the right choices during interaction and hence exploit the flexibility in them.\nSecondly, instead of being sequences of interactive actions, protocols are structured on commitments, which are more abstract than protocol actions.\nExecution of protocols is then based on fulfilling commitments.\nHence, unnecessary constraints on which particular interactive actions are executed by which agents, and on the order among them, are removed, which is a step toward flexibility as compared to traditional approaches.\nOn the other hand, in the presence of changes introduced externally, agents have the freedom to explore new sets of interactive actions or to skip some interactive actions ahead as long as they still fulfill the protocol's commitments.\nThis brings more flexibility to the overall level of agents' interactive behaviors, and thus to the protocol.\nThirdly, the protocol is specified in a declarative manner, essentially as a set of pre-commitments at each participating agent.\nTo achieve goals, agents use reasoning based on the TLL sequent calculus to construct proofs of
goals from pre-commitments and state formulas.\nThis essentially gives agents autonomy in the utilization of pre-commitments, and hence agents can adapt the way they use them to deal flexibly with changing environments.\nIn particular, as proof construction by agents selects a sequence of pre-commitments for interaction, being able to select from all possible combinations of pre-commitments in proof search gives more chances and flexibility than selecting from only a few fixed and predefined sequences.\nIt is then also more likely that agents can handle exceptions or explore opportunities that arise.\nMoreover, as the actual order of pre-commitments is determined by the proof construction process rather than predefined, agents can flexibly change the order to suit new situations.\nFourthly, changes in the environment can be regarded as removing formulas from or adding formulas to the state formulas.\nBecause the proof construction by agents takes into account the current state formulas when it picks up pre-commitments, changes in the state formulas will be reflected in the choice of which relevant pre-commitments to proceed with.\nHence, the agents have flexibility in deciding what to do to deal with changes.\nLastly, specifying protocols in our framework follows a modular approach, which adds ease and flexibility to the process of designing protocols.\nProtocols are specified by placing a set of pre-commitments at each participating agent according to their roles.\nEach pre-commitment can indeed be specified as a process in its own right, with its condition formulas as input and its commitment part's formulas as output.\nExecution of each conditional commitment is a relatively independent thread, and these threads are linked together by the proof search to fulfill agents' commitments.\nAs a result, with such a design of pre-commitments, one pre-commitment can be added or removed without interfering with the others, achieving a modular design of protocols.\n5.2 Limitations of TLL
Framework on Modeling\nAs all the temporal operators in TLL refer to concrete time points, we cannot express durations in time faithfully.\nOne major disadvantage of simulating the duration of an event by spreading copies of that event continuously over adjacent time points (like A \u2297 \u25cb A \u2297 \u25cb^2 A \u2297 ... \u2297 \u25cb^10 A) is that it requires the time range to be provided explicitly.\nHence, a notion like until cannot be naturally expressed in TLL.\nCommitments of agents can be in conflict, especially when resolving all of them requires more resources or actions than the agents have.\nOur work has not covered handling commitments that are in conflict.\nAnother troublesome aspect of this approach is that the rules for interaction require some detailed knowledge of the formulas of temporal linear logic.\nClearly it would be beneficial to have a visually-based tool, similar to UML diagrams, which would allow non-experts to specify the appropriate rules without having to learn the details of the formulas themselves.\n6.\nCONCLUSIONS AND FURTHER WORK\nThis paper uses TLL for specifying interaction protocols.\nIn particular, TLL is used to model the concepts of resource, capability, pre-commitment and commitment in tight integration, as well as their manipulation with respect to time.\nAgents then make use of proof search techniques to perform the desired interactions.\nIn particular, the approach allows protocol specifications to capture the meaning of interactive actions via commitments, and to capture the internal and external choices of agents about resources, commitments and time, as well as updating processes.\nThe proof construction mechanism provides agents with the ability to dynamically select appropriate pre-commitments, and hence helps agents gain flexibility both in choosing the most suitable interactive actions and in their order, taking
into consideration on-going changes in the environment.\nMany other approaches to modeling protocols also use the commitment concept to bring more meaning into agents' interactive actions.\nApproaches based on commitment machines [11, 12, 10, 1] suffer from a number of issues.\nThese approaches use logic systems that are limited in their expressiveness in modeling resources.\nAlso, as an extra abstract layer of commitments is created, more tasks are created accordingly.\nIn particular, there must be a human-designed mapping between protocol actions and operations on commitments, as well as between control variables (fluents) and phases of commitment achievement.\nMoreover, external mechanisms must be in place to comprehend and handle the operations and resolution of commitments, as well as the enforcement of the notion of commitment on its abstract data type representations.\nThis requires another execution in the commitment layer in conjunction with the actual execution of the protocol.\nNot only do these extra tasks create an overhead, but they also make the specification and execution of protocols more error prone.\nSimilar works in [8] and [9] explore the advantages of linear logic and TLL respectively by using partial deduction techniques to help agents figure out the missing capabilities or resources and, based on that, to negotiate with other agents about cooperation strategies.\nOur approach differs in bringing the concept of commitment into the modeling of interaction, and in providing a more natural and detailed map for specifying interaction, especially about choices, time and updating, using full propositional TLL.\nMoreover, we emphasize the use of pre-commitments as interaction rules with a full set of TLL inference rules to provide the advantages of proof construction in achieving flexible interaction.\nOur further work will include using TLL to verify various properties of interaction protocols such as liveness and safety.\nAlso, we will investigate developing an
execution mechanism for such TLL specifications in our framework.\nAcknowledgments\nWe are very thankful to Michael Winikoff for many stimulating and helpful discussions of this material.\nWe also would like to acknowledge the support of the Australian Research Council under grant DP0663147.\n7.\nREFERENCES\n[1] A. K. Chopra and M. P. Singh.\nContextualizing commitment protocol.\nIn AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1345-1352, New York, NY, USA, 2006.\nACM Press.\n[2] E. A. Emerson.\nTemporal and modal logic.\nHandbook of Theoretical Computer Science, B, Chapter 16:995-1072, 1990.\n[3] J.-Y. Girard.\nLinear logic.\nTheoretical Computer Science, 50:1-102, 1987.\n[4] A. Haddadi.\nCommunication and Cooperation in Agent Systems: A Pragmatic Theory.\nSpringer-Verlag, Berlin Heidelberg, 1995.\n[5] J. Harland and M. Winikoff.\nAgent negotiation as proof search in linear logic.\nIn AAMAS '02: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 938-939, New York, NY, USA, 2002.\nACM Press.\n[6] T. Hirai.\nTemporal Linear Logic and Its Applications.\nPhD thesis, Graduate School of Science and Technology, Kobe University, 2000.\n[7] N. R. Jennings.\nCommitments and conventions: The foundation of coordination in multi-agent systems.\nThe Knowledge Engineering Review, 8(3):223-250, 1993.\n[8] P. K\u00fcngas.\nLinear logic, partial deduction and cooperative problem solving.\nIn J. A. Leite, A. Omicini, L. Sterling, and P. Torroni, editors, Declarative Agent Languages and Technologies, First International Workshop, DALT 2003, Melbourne, Victoria, July 15th, 2003.\nWorkshop Notes, pages 97-112, 2003.\n[9] P. K\u00fcngas.\nTemporal linear logic for symbolic agent negotiation.\nLecture Notes in Artificial Intelligence, 3157:23-32, 2004.\n[10] M. Venkatraman and M. P.
Singh.\nVerifying compliance with commitment protocols.\nAutonomous Agents and Multi-Agent Systems, 2(3):217-236, 1999.\n[11] P. Yolum and M. P. Singh.\nCommitment machines.\nIn Proceedings of the 8th International Workshop on Agent Theories, Architectures, and Languages (ATAL-01), pages 235-247.\nSpringer-Verlag, 2002.\n[12] P. Yolum and M. P. Singh.\nFlexible protocol specification and execution: applying event calculus planning using commitments.\nIn AAMAS '02: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 527-534, New York, NY, USA, 2002.\nACM Press.\nAPPENDIX\nA. TEMPORAL SEQUENT RULES FOR TLL\n[The sequent calculus rules for the temporal operators \u25cb, \u25a1 and \u25c7 appear here in the original paper; they are garbled beyond recovery in this extraction and are omitted.]","lvl-3":"Temporal Linear Logic as a Basis for Flexible Agent Interactions\nABSTRACT\nInteractions between agents in an open system such as the Internet require a significant degree of flexibility.\nA crucial aspect of the development of such methods is the notion of commitments, which provides a mechanism for coordinating interactive behaviors among agents.\nIn this paper, we investigate an approach to model commitments with tight integration with protocol actions.\nThis means that there is no need to have an explicit mapping from protocol actions to operations on commitments, nor an external mechanism to process and enforce commitments.\nWe show how agents can reason
about commitments and protocol actions to achieve the end results of protocols using a reasoning system based on temporal linear logic, which incorporates both temporal and resource-sensitive reasoning.\nWe also discuss the application of this framework to scenarios such as online commerce.\n1.\nINTRODUCTION AND MOTIVATION\nRecently, software development has evolved toward the development of intelligent, interconnected systems working in a distributed manner.\nThe agent paradigm has become well suited as a design metaphor to deal with complex systems comprising many components, each having its own thread of control and purpose and involved in dynamic and complex interactions.\nIn multi-agent environments, agents often need to interact with each other to fulfill their goals.\nProtocols are used to regulate interactions.\nIn traditional approaches to protocol specification, like those using Finite State Machines or Petri Nets, protocols are often predetermined legal sequences of interactive behaviors.\nIn frequently changing environments such as the Internet, such fixed sequences can quickly become outdated and are prone to failure.\nTherefore, agents are required to adapt their interactive behaviors to succeed, and interactions among agents should not be constructed rigidly.\nTo achieve flexibility, as characterized by Yolum and Singh in [11], interaction protocols should ensure that agents have autonomy over their interactive behaviors, and be free from any unnecessary constraints.\nAlso, agents should be allowed to adjust their interactive actions to take advantage of opportunities or handle exceptions that arise during interaction.\nFor example, consider the scenario below for online sales.\nA merchant Mer has 200 cricket bats available for sale with a unit price of 10 dollars.\nA customer Cus has $50.\nCus has a goal of obtaining from Mer a cricket bat at some time.\nThere are two options for Cus to pay.\nIf Cus uses credit payment, Mer needs a bank Ebank to
check Cus's credit.\nIf Cus's credit is approved, Ebank will arrange the credit payment.\nOtherwise, Cus may then take the option to pay via PayPal.\nThe interaction ends when goods are delivered and payment is arranged.\nA flexible approach to this example should include several features.\nFirstly, the payment method used by Cus should be at Cus's own choice and have the property that if Cus's credit check results in a disapproval, this exception should also be handled automatically by Cus's switching to PayPal.\nSecondly, there should be no unnecessary constraint on the order in which actions are performed, such as which of making payments and sending the cricket bat should come first.\nThirdly, choosing a sequence of interactive actions should be based on reasoning about the intrinsic meanings of protocol actions, which are based on the notion of commitment, i.e. a strong promise to other agent(s) to undertake some course of action.\nCurrent approaches [11, 12, 10, 1] to achieve flexibility using the notion of commitment make use of an abstract layer of commitments.\nHowever, in these approaches, a mapping from protocol actions onto operations on commitments, as well as handling and enforcement mechanisms for commitments, must be externally provided.\nExecution of protocol actions also requires concurrent execution of operations on related commitments.\nAs a result, the overhead of processing the commitment layer makes specification and execution of protocols more complicated and error prone.\nThere is also a lack of a logic to naturally express aspects of resources, internal and external choices, as well as the timing of protocols.\nRather than creating another layer of commitment outside protocol actions, we try to achieve a modeling of commitments that is integrated with protocol actions.\nBoth commitments and protocol actions can then be reasoned about in one consistent system.\nIn order to achieve that, we specify protocols in a declarative
manner, i.e. what is to be achieved rather than how agents should interact.\nA key to this is using logic.\nTemporal logic, in particular, is suitable for describing and reasoning about temporal constraints while linear logic [3] is quite suitable for modeling resources.\nWe suggest using a combination of linear logic and temporal logic to construct a commitment-based interaction framework which allows both temporal and resource-related reasoning for interaction protocols.\nThis provides a natural manipulation and reasoning mechanism as well as internal enforcement mechanisms for commitments based on proof search.\nThis paper is organized as follows.\nSection 2 discusses the background material of linear logic, temporal linear logic and commitments.\nSection 3 introduces our modeling framework and specification of protocols.\nSection 4 discusses how our framework can be used for an example of online sale interactions between a merchant, a bank and a customer.\nWe then discuss the advantages and limitations of using our framework to model interaction protocols and achieve flexibility in Section 5.\nSection 6 presents our conclusions and items of further work.\n2.\nBACKGROUND\nIn order to increase the agents' autonomy over their interactive behaviors, protocols should be specified in terms of what is to be achieved rather than how the agents should act.\nIn other words, protocols should be specified in a declarative manner.\nUsing logic is central to this specification process.\n2.1 Linear Logic\nLogic has been used as a formalism to model and reason about agent systems.\nLinear logic [3] is well-known for modeling resources as well as updating processes.\nIt has been considered in agent systems to support agent negotiation and planning by means of proof search [5, 8].\nIn real life, resources are consumed and new resources are created.\nIn logics such as classical or temporal logic, however, a direct mapping of resources onto formulas is troublesome.\nIf we model
resources like A as \"one dollar\" and B as \"a chocolate bar\", then A \u21d2 B in classical logic is read as \"from one dollar we can get a chocolate bar\".\nThis causes problems, as the implication allows one to get a chocolate bar (B is true) while retaining the dollar (A remains true).\nIn order to resolve such resource-formula mapping issues, Girard proposed constraints under which formulas are used exactly once and can no longer be freely added or removed in derivations, hence treating linear logic formulas as resources.\nIn linear logic, a linear implication A \u22b8 B, however, allows A to be removed after deriving B, which means the dollar is gone after using one dollar to buy a chocolate bar.\nClassical conjunction (and) and disjunction (or) are recast over different uses of contexts - multiplicative as combining and additive as sharing - to come up with four connectives.\nA \u2297 A (multiplicative conjunction) means that one has two As at the same time, which is different from A \u2227 A = A.
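The resource reading of these connectives can be made concrete with a small multiset model. The following Python sketch is our own illustration (the apply_rule helper and the resource names are not from the paper): applying a linear implication consumes its left-hand resources, so the dollar is gone once the chocolate bar is derived, and two copies of A are genuinely more than one.

```python
from collections import Counter

def apply_rule(state, consumed, produced):
    """Apply a linear implication (tensor of `consumed`) -o (tensor of `produced`):
    consumed resources are removed from the state and produced ones are added.
    Raises ValueError if a consumed resource is unavailable (the rule cannot fire)."""
    state = Counter(state)
    for r in consumed:
        if state[r] == 0:
            raise ValueError(f"resource {r!r} not available")
        state[r] -= 1
    state.update(produced)
    return +state  # unary + drops zero-count entries

# "one dollar" buys "a chocolate bar": the dollar is consumed, not retained.
after = apply_rule({"dollar": 2}, ["dollar"], ["chocolate"])
assert after == Counter({"dollar": 1, "chocolate": 1})

# Unlike classical A /\ A = A, the tensor A (x) A is two distinct copies of A.
assert Counter({"A": 2}) != Counter({"A": 1})
```

In this multiset reading, the state plays the role of the linear context: firing a rule rewrites the context rather than accumulating truths, which is exactly the behavior classical implication fails to capture.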
Hence, \u2297 allows a natural expression of proportion.\nA \u214b B (multiplicative disjunction) means that if not A then B, and vice versa, but not both A and B.\nThe ability to specify choices via the additive connectives is a particularly useful feature of linear logic.\nA & B (additive conjunction) stands for one's own choice, either A or B but not both.\nA \u2295 B (additive disjunction) stands for the possibility of either A or B, but we don't know which.\nAs remarked in [5], & and \u2295 allow choices to be made clear between internal choices (one's own) and external choices (others' choice).\nFor instance, to specify that the choice of places A or B for goods' delivery is ours as the supplier, we use A & B; if it is the client's, we use A \u2295 B.\nIn agent systems, this duality between inner and outer choices is manifested by one agent having the power to choose between alternatives and the other having to react to whatever choice is made.\nMoreover, during interaction, the ability to match consumption and supply of resources among agents can simplify the specification of resource allocations.\nLinear logic is a natural mechanism to provide this ability [5].\nIn addition, it is emphasized in [8] that linear logic is used to model agent states as sets of consumable resources and, particularly, linear implication is used to model transitions among states and capabilities of agents.\n2.2 Temporal Linear Logic\nWhile linear logic provides advantages for modeling and reasoning about resources, it does not deal naturally with time constraints.\nTemporal logic, on the other hand, is a formal system which addresses the description and reasoning about the changes of truth values of logic expressions over time [2].\nTemporal logic can be used for specification and verification of concurrent and reactive programs [2].\nTemporal Linear Logic (TLL) [6] is the result of introducing temporal logic into linear logic and hence is resource-conscious as well as deals with
time.\nThe temporal operators used are \u25cb (next), \u25a1 (anytime), and \u25c7 (sometime) [6].\nFormulas with no temporal operators can be considered as being available only at present.\nAdding \u25cb to a formula A, i.e. \u25cbA, means that A can be used only at the next time point and exactly once.\nSimilarly, \u25a1A means that A can be used at any time and exactly once.\n\u25c7A means that A can be used once at some time.\nThough both \u25a1 and \u25c7 refer to a point in time, the choice of which time is different.\nRegarding \u25a1, the choice is an internal choice, as appropriate to one's own capability.\nWith \u25c7, the choice is externally decided by others.\n2.3 Commitment\nThe concept of social commitment has been recognized as fundamental to agent interaction.\nIndeed, social commitment provides the intrinsic meanings of protocol actions and states [11].\nIn particular, persistence in commitments introduces into agents' consideration a certain level of predictability of other agents' actions, which is important when agents deal with issues of inter-dependencies, global constraints or resource sharing [7].\nCommitment-based approaches associate protocol actions with operations on commitments and protocol states with the set of effective commitments [11].\nCompleting the protocol is done via means-end reasoning on commitment operations to bring the current state to final states where all commitments are resolved.\nFrom there, the corresponding legal sequences of interactive actions are determined.\nHence, these approaches systematically enhance a variety of legal computations [11].\nCommitments can be reduced to a more fundamental form known as pre-commitments.\nA pre-commitment here refers to a potential commitment that specifies what the owner agent is willing to commit to [4], such as performing some actions or achieving a particular state.\nAgents can negotiate about pre-commitments by sending proposals of them to others.\nThe others can respond by agreeing or disagreeing with the proposal or by proposing another pre-commitment.\nOnce a pre-commitment is agreed, it then becomes a commitment and the process moves from the negotiation phase to the commitment phase, in which the agents act to fulfill their commitments.\n3.\nMODELING AGENT INTERACTIONS\n3.1 Modeling resources and capabilities\n3.2 Modeling commitments\n3.3 Protocol Construction\n3.4 Interactive Messages\n3.5 Generating Interactions\n4.\nEXAMPLE\n4.1 Specifying Protocol\nRules at agent Mer\nRules at agent Ebank\nRules at agent Cus\n4.2 Description of the interaction\n4.3 Interaction\n4.4 Flexibility\n5.\nENCODING ISSUES\n5.1 Advantages of TLL framework\n5.2 Limitations of TLL Framework on Modeling\n6.\nCONCLUSIONS AND FURTHER WORK\nAcknowledgments\n7.\nREFERENCES\nAPPENDIX A. TEMPORAL SEQUENT RULES FOR TLL\n","lvl-4":"Temporal Linear Logic as a Basis for Flexible Agent Interactions\nABSTRACT\nInteractions between agents in an open system such as the Internet require a significant degree of flexibility.\nA crucial aspect of the development of such methods is the notion of commitments, which provides a mechanism for coordinating interactive behaviors among agents.\nIn this paper, we investigate an approach to model commitments with tight integration with protocol actions.\nThis means that there is no need to have an explicit mapping from protocol actions to operations on commitments, nor an external mechanism to process and enforce commitments.\nWe show how agents can reason about commitments and protocol actions to achieve the end results of protocols using a reasoning system based on temporal linear logic, which incorporates both temporal and resource-sensitive reasoning.\nWe also discuss the application of this framework to scenarios such as online commerce.\n1.\nINTRODUCTION AND MOTIVATION\nThe agent paradigm has become well suited as a design metaphor to deal with complex systems comprising many components, each having its own thread of control and purpose and involved in dynamic and complex interactions.\nIn multi-agent environments, agents often need to interact with each other to fulfill their goals.\nProtocols are used to regulate interactions.\nIn traditional approaches to protocol specification, like those using Finite State Machines or Petri Nets, protocols are often predetermined legal sequences of interactive behaviors.\nTherefore, agents are
required to adapt their interactive behaviors to succeed, and interactions among agents should not be constructed rigidly.\nTo achieve flexibility, as characterized by Yolum and Singh in [11], interaction protocols should ensure that agents have autonomy over their interactive behaviors, and be free from any unnecessary constraints.\nAlso, agents should be allowed to adjust their interactive actions to take advantage of opportunities or handle exceptions that arise during interaction.\nFor example, consider the scenario below for online sales.\nCus has a goal of obtaining from Mer a cricket bat at some time.\nThere are two options for Cus to pay.\nIf Cus uses credit payment, Mer needs a bank Ebank to check Cus's credit.\nIf Cus's credit is approved, Ebank will arrange the credit payment.\nOtherwise, Cus may then take the option to pay via PayPal.\nThe interaction ends when goods are delivered and payment is arranged.\nA flexible approach to this example should include several features.\nSecondly, there should be no unnecessary constraint on the order in which actions are performed, such as which of making payments and sending the cricket bat should come first.\nThirdly, choosing a sequence of interactive actions should be based on reasoning about the intrinsic meanings of protocol actions, which are based on the notion of commitment, i.e.
a strong promise to other agent(s) to undertake some course of action.\nCurrent approaches [11, 12, 10, 1] to achieve flexibility using the notion of commitment make use of an abstract layer of commitments.\nHowever, in these approaches, a mapping from protocol actions onto operations on commitments, as well as handling and enforcement mechanisms for commitments, must be externally provided.\nExecution of protocol actions also requires concurrent execution of operations on related commitments.\nAs a result, the overhead of processing the commitment layer makes specification and execution of protocols more complicated and error prone.\nThere is also a lack of a logic to naturally express aspects of resources, internal and external choices, as well as the timing of protocols.\nRather than creating another layer of commitment outside protocol actions, we try to achieve a modeling of commitments that is integrated with protocol actions.\nBoth commitments and protocol actions can then be reasoned about in one consistent system.\nIn order to achieve that, we specify protocols in a declarative
manner, i.e. what is to be achieved rather than how agents should interact.\nA key to this is using logic.\nTemporal logic, in particular, is suitable for describing and reasoning about temporal constraints while linear logic [3] is quite suitable for modeling resources.\nWe suggest using a combination of linear logic and temporal logic to construct a commitment-based interaction framework which allows both temporal and resource-related reasoning for interaction protocols.\nThis provides a natural manipulation and reasoning mechanism as well as internal enforcement mechanisms for commitments based on proof search.\nSection 2 discusses the background material of linear logic, temporal linear logic and commitments.\nSection 3 introduces our modeling framework and specification of protocols.\nSection 4 discusses how our framework can be used for an example of online sale interactions between a merchant, a bank and a customer.\nWe then discuss the advantages and limitations of using our framework to model interaction protocols and achieve flexibility in Section 5.\nSection 6 presents our conclusions and items of further work.\n2.\nBACKGROUND\nIn order to increase the agents' autonomy over their interactive behaviors, protocols should be specified in terms of what is to be achieved rather than how the agents should act.\nIn other words, protocols should be specified in a declarative manner.\nUsing logic is central to this specification process.\n2.1 Linear Logic\nLogic has been used as a formalism to model and reason about agent systems.\nLinear logic [3] is well-known for modeling resources as well as updating processes.\nIt has been considered in agent systems to support agent negotiation and planning by means of proof search [5, 8].\nIn real life, resources are consumed and new resources are created.\nIn logics such as classical or temporal logic, however, a direct mapping of resources onto formulas is troublesome.\nIf we model resources like A as \"one dollar\" and B as \"a
chocolate bar\", then A ⇒ B in classical logic is read as \"from one dollar we can get a chocolate bar\".\nThis causes problems, as the implication allows us to obtain a chocolate bar (B is true) while still retaining the dollar (A remains true).\nIn order to resolve such resource-formula mapping issues, Girard proposed constraints under which formulas are used exactly once and can no longer be freely added or removed in derivations, hence treating linear logic formulas as resources.\nIn linear logic, a linear implication A ⊸ B, in contrast, allows A to be removed after deriving B, which means the dollar is gone after using one dollar to buy a chocolate bar.\nClassical conjunction (and) and disjunction (or) are recast over different uses of contexts - multiplicative as combining and additive as sharing - to come up with four connectives.\nThe ability to specify choices via the additive connectives is a particularly useful feature of linear logic.\nA & (additive conjunction) B stands for one's own choice, either A or B but not both.\nIn agent systems, this duality between inner and outer choices is manifested by one agent having the power to choose between alternatives and the other having to react to whatever choice is made.\nMoreover, during interaction, the ability to match consumption and supply of resources among agents can simplify the specification of resource allocations.\nLinear logic is a natural mechanism to provide this ability [5].\nIn addition, it is emphasized in [8] that linear logic can be used to model agent states as sets of consumable resources and, particularly, linear implication can be used to model transitions among states and capabilities of agents.\n2.2 Temporal Linear Logic\nWhile linear logic provides advantages for modeling and reasoning about resources, it does not deal naturally with time constraints.\nTemporal logic, on the other hand, is a formal system which addresses the description of, and reasoning about, the changes of truth values of logic expressions over time [2].\nTemporal logic can be used for specification and verification of concurrent and reactive programs
[2].\nTemporal Linear Logic (TLL) [6] is the result of introducing temporal logic into linear logic and hence is resource-conscious as well as able to deal with time.\nThe temporal operators used are ○ (next), □ (anytime), and ◇ (sometime) [6].\nFormulas with no temporal operators can be considered as being available only at present.\nAdding ○ to a formula A, i.e. ○A, means that A can be used only at the next time point and exactly once.\nSimilarly, □A means that A can be used at any time and exactly once.\n◇A means that A can be used once at some time.\nThough both □ and ◇ refer to a point in time, the choice of which time is different.\nRegarding □, the choice is an internal choice, as appropriate to one's own capability.\nWith ◇, the choice is externally decided by others.\n2.3 Commitment\nThe concept of social commitment has been recognized as fundamental to agent interaction.\nIndeed, social commitment provides intrinsic meanings of protocol actions and states [11].\nIn particular, persistence in commitments introduces into agents' consideration a certain level of predictability of other agents' actions, which is important when agents deal with issues of inter-dependencies, global constraints or
resource sharing [7].\nCommitment-based approaches associate protocol actions with operations on commitments and protocol states with the set of effective commitments [11].\nCompleting the protocol is done via means-end reasoning on commitment operations to bring the current state to final states where all commitments are resolved.\nFrom there, the corresponding legal sequences of interactive actions are determined.\nHence, these approaches systematically enhance a variety of legal computations [11].\nCommitments can be reduced to a more fundamental form known as pre-commitments.\nA pre-commitment here refers to a potential commitment that specifies what the owner agent is willing to commit to [4], like performing some actions or achieving a particular state.\nAgents can negotiate about pre-commitments by sending proposals of them to others.\nOnce a pre-commitment is agreed, it then becomes a commitment and the process moves from the negotiation phase to the commitment phase, in which the agents act to fulfill their commitments.","lvl-2":"Temporal Linear Logic as a Basis for Flexible Agent Interactions\nABSTRACT\nInteractions between agents in an open system such as the Internet require a significant degree of flexibility.\nA crucial aspect of the development of such methods is the notion of commitments, which provides a mechanism for coordinating interactive behaviors among agents.\nIn this paper, we investigate an approach to model commitments with tight integration with protocol actions.\nThis means that there is no need to have an explicit mapping from protocol actions to operations on commitments and an external mechanism to process and enforce commitments.\nWe show how agents can reason about commitments and protocol actions to achieve the end results of protocols using a reasoning system based on temporal linear logic, which incorporates both temporal and resource-sensitive reasoning.\nWe also discuss the application of this framework to scenarios such
as online commerce.\n1.\nINTRODUCTION AND MOTIVATION\nRecently, software development has evolved toward the development of intelligent, interconnected systems working in a distributed manner.\nThe agent paradigm has become well suited as a design metaphor to deal with complex systems comprising many components, each having their own thread of control and purposes and involved in dynamic and complex interactions.\nIn multi-agent environments, agents often need to interact with each other to fulfill their goals.\nProtocols are used to regulate interactions.\nIn traditional approaches to protocol specification, like those using Finite State Machines or Petri Nets, protocols are often predetermined legal sequences of interactive behaviors.\nIn frequently changing environments such as the Internet, such fixed sequences can quickly become outdated and are prone to failure.\nTherefore, agents are required to adapt their interactive behaviors to succeed, and interactions among agents should not be constructed rigidly.\nTo achieve flexibility, as characterized by Yolum and Singh in [11], interaction protocols should ensure that agents have autonomy over their interactive behaviors and are free from any unnecessary constraints.\nAlso, agents should be allowed to adjust their interactive actions to take advantage of opportunities or handle exceptions that arise during interaction.\nFor example, consider the scenario below for online sales.\nA merchant Mer has 200 cricket bats available for sale with a unit price of 10 dollars.\nA customer Cus has $50.\nCus has a goal of obtaining from Mer a cricket bat at some time.\nThere are two options for Cus to pay.\nIf Cus uses credit payment, Mer needs a bank Ebank to check Cus's credit.\nIf Cus's credit is approved, Ebank will arrange the credit payment.\nOtherwise, Cus may then take the option to pay via PayPal.\nThe interaction ends when goods are delivered and payment is arranged.\nA flexible approach to this example should include
several features.\nFirstly, the payment method used by Cus should be at Cus's own choice, and if Cus's credit check results in a disapproval, this exception should be handled automatically by Cus switching to PayPal.\nSecondly, there should be no unnecessary constraint on the order in which actions are performed, such as which of making payments and sending the cricket bat should come first.\nThirdly, choosing a sequence of interactive actions should be based on reasoning about the intrinsic meanings of protocol actions, which are based on the notion of commitment, i.e. a strong promise to other agent(s) to undertake some courses of action.\nCurrent approaches [11, 12, 10, 1] to achieving flexibility using the notion of commitment make use of an abstract layer of commitments.\nHowever, in these approaches, a mapping from protocol actions onto operations on commitments, as well as handling and enforcement mechanisms for commitments, must be externally provided.\nExecution of protocol actions also requires concurrent execution of operations on related commitments.\nAs a result, the overhead of processing the commitment layer makes specification and execution of protocols more complicated and error-prone.\nThere is also the lack of a logic that naturally expresses resources, internal and external choices, and the timing of protocols.\nRather than creating another layer of commitments outside protocol actions, we aim for a modeling of commitments that is integrated with protocol actions.\nBoth commitments and protocol actions can then be reasoned about in one consistent system.\nIn order to achieve that, we specify protocols in a declarative manner, i.e.
what is to be achieved rather than how agents should interact.\nA key to this is using logic.\nTemporal logic, in particular, is suitable for describing and reasoning about temporal constraints, while linear logic [3] is quite suitable for modeling resources.\nWe suggest using a combination of linear logic and temporal logic to construct a commitment-based interaction framework which allows both temporal and resource-related reasoning for interaction protocols.\nThis provides a natural manipulation and reasoning mechanism, as well as internal enforcement mechanisms for commitments, based on proof search.\nThis paper is organized as follows.\nSection 2 discusses the background material of linear logic, temporal linear logic and commitments.\nSection 3 introduces our modeling framework and specification of protocols.\nSection 4 discusses how our framework can be used for an example of online sale interactions between a merchant, a bank and a customer.\nWe then discuss the advantages and limitations of using our framework to model interaction protocols and achieve flexibility in Section 5.\nSection 6 presents our conclusions and items of further work.\n2.\nBACKGROUND\nIn order to increase the agents' autonomy over their interactive behaviors, protocols should be specified in terms of what is to be achieved rather than how the agents should act.\nIn other words, protocols should be specified in a declarative manner.\nUsing logic is central to this specification process.\n2.1 Linear Logic\nLogic has been used as a formalism to model and reason about agent systems.\nLinear logic [3] is well known for modeling resources as well as updating processes.\nIt has been considered in agent systems to support agent negotiation and planning by means of proof search [5, 8].\nIn real life, resources are consumed and new resources are created.\nIn logics such as classical or temporal logic, however, a direct mapping of resources onto formulas is troublesome.\nIf we model resources like A
as \"one dollar\" and B as \"a chocolate bar\", then A ⇒ B in classical logic is read as \"from one dollar we can get a chocolate bar\".\nThis causes problems, as the implication allows us to obtain a chocolate bar (B is true) while still retaining the dollar (A remains true).\nIn order to resolve such resource-formula mapping issues, Girard proposed constraints under which formulas are used exactly once and can no longer be freely added or removed in derivations, hence treating linear logic formulas as resources.\nIn linear logic, a linear implication A ⊸ B, in contrast, allows A to be removed after deriving B, which means the dollar is gone after using one dollar to buy a chocolate bar.\nClassical conjunction (and) and disjunction (or) are recast over different uses of contexts - multiplicative as combining and additive as sharing - to come up with four connectives.\nA ⊗ (multiplicative conjunction) A means that one has two As at the same time, which is different from A ∧ A = A. Hence, ⊗ allows a natural expression of quantity.\nA ⅋ (multiplicative disjunction) B means that if not A then B, or vice versa, but not both A and B.\nThe ability to specify choices via the additive connectives is a particularly useful feature of linear logic.\nA & (additive conjunction) B stands for one's own choice, either A or B but not both.\nA ⊕ (additive disjunction) B stands for the possibility of either A or B, but we don't know which.\nAs remarked in [5], & and ⊕ allow choices to be made clear between internal choices (one's own) and external choices (others').\nFor instance, to specify that the choice of places A or B for goods' delivery is ours as the supplier, we use A & B; if it is the client's, we use A ⊕ B.\nIn agent systems, this duality between inner and outer choices is manifested by one agent having the power to choose between alternatives and the other having to react to whatever choice is made.\nMoreover, during interaction, the ability
to match consumption and supply of resources among agents can simplify the specification of resource allocations.\nLinear logic is a natural mechanism to provide this ability [5].\nIn addition, it is emphasized in [8] that linear logic can be used to model agent states as sets of consumable resources and, particularly, linear implication can be used to model transitions among states and capabilities of agents.\n2.2 Temporal Linear Logic\nWhile linear logic provides advantages for modeling and reasoning about resources, it does not deal naturally with time constraints.\nTemporal logic, on the other hand, is a formal system which addresses the description of, and reasoning about, the changes of truth values of logic expressions over time [2].\nTemporal logic can be used for specification and verification of concurrent and reactive programs [2].\nTemporal Linear Logic (TLL) [6] is the result of introducing temporal logic into linear logic and hence is resource-conscious as well as able to deal with time.\nThe temporal operators used are ○ (next), □ (anytime), and ◇ (sometime) [6].\nFormulas with no temporal operators can be considered as being available only at present.\nAdding ○ to a formula A, i.e.
○A, means that A can be used only at the next time point and exactly once.\nSimilarly, □A means that A can be used at any time and exactly once.\n◇A means that A can be used once at some time.\nThough both □ and ◇ refer to a point in time, the choice of which time is different.\nRegarding □, the choice is an internal choice, as appropriate to one's own capability.\nWith ◇, the choice is externally decided by others.\n2.3 Commitment\nThe concept of social commitment has been recognized as fundamental to agent interaction.\nIndeed, social commitment provides intrinsic meanings of protocol actions and states [11].\nIn particular, persistence in commitments introduces into agents' consideration a certain level of predictability of other agents' actions, which is important when agents deal with issues of inter-dependencies, global constraints or resource sharing [7].\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nCommitment-based approaches associate protocol actions with operations on commitments and protocol states with the set of effective commitments [11].\nCompleting the protocol is done via means-end reasoning on commitment operations to bring the current state to final states where all commitments are resolved.\nFrom there, the corresponding legal sequences of interactive actions are determined.\nHence, these approaches systematically enhance a variety of legal computations [11].\nCommitments can be reduced to a more fundamental form known as pre-commitments.\nA pre-commitment here refers to a potential commitment that specifies what the owner agent is willing to commit to [4], like performing some actions or achieving a particular state.\nAgents can negotiate about pre-commitments by sending proposals of them to others.\nThe others can respond by agreeing or disagreeing with the proposal or by proposing another pre-commitment.\nOnce a pre-commitment is agreed, it then becomes a commitment and the process moves
from the negotiation phase to the commitment phase, in which the agents act to fulfill their commitments.\n3.\nMODELING AGENT INTERACTIONS\nProtocols are normally viewed as external to agents and are essentially a set of commitments externally imposed on participating agents.\nWe take an internal view of protocols, i.e. the view of the participating agents, by putting the specification of commitments locally at the respective agents according to their roles.\nSuch an approach enables agents to manage their own protocol commitments.\nIndeed, agents no longer accept and follow a given set of commitments but can reason about which commitments of theirs to offer and which commitments of others to take, while considering their current needs and the environment.\nProtocols then arise as commitments are linked together via the agents' reasoning, based on proof search, during the interaction.\nAlso, ongoing changes in the environment are taken as input into the generation of protocols by agent reasoning.\nThis is the reverse of other approaches, which try to make the specification flexible enough to accommodate changes in the environment.\nHence, it is a step closer to enabling emergent protocols, which makes protocols more dynamic and flexible to the context.\nIn a nutshell, services are what agents are capable of providing to other agents.\nCommitments can then be seen to arise from combinations of services, i.e.
an agent's capabilities.\nHence, our approach shifts from specifying a set of protocol commitments to specifying sets of pre-commitments as capabilities for each agent.\nCommitments can then be reasoned about and manipulated by the same logic mechanism as is used for the agents' actions, resources and goals, which greatly simplifies the system.\nOur framework uses TLL as a means of specifying interaction protocols.\nWe encode various concepts such as resource, capability and commitment in TLL.\nThe symmetry between a formula and its negation in TLL is explored as a way to model resources and commitments.\nWe then discuss the central role of pre-commitments, and how they are specified at each participating agent.\nIt then remains for agents to reason about pre-commitments to form protocol commitments.\n3.1 Modeling resources and capabilities\nA unit of consumable resources is modeled as a proposition in linear logic.\nNumeric figures can be used to abbreviate a multiplicative conjunction of the same instances.\nFor example, 2 dollar = dollar ⊗ dollar.\nMoreover, ○³A is a shorthand for ○○○A.
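As a minimal illustrative sketch (our own, not part of the paper's formal system), the "formulas as resources" reading can be mimicked by multiset rewriting: a state is a multiset of atomic propositions, and applying a capability Γ ⊸ Δ consumes Γ and produces Δ exactly once. The function `apply_capability` and the atoms `dollar`/`chocolate_bar` are illustrative names echoing the example of Section 2.1; the additive and temporal connectives are deliberately ignored here.

```python
from collections import Counter

def apply_capability(state, conditions, effects):
    """Apply a linear implication Γ ⊸ Δ to a multiset state: consume the
    multiset Γ (conditions) and produce Δ (effects).  Returns the new state,
    or None when Γ is not available; linearity means every consumed resource
    is used exactly once."""
    state = Counter(state)
    need = Counter(conditions)
    if any(state[r] < n for r, n in need.items()):
        return None  # the conditions Γ cannot be met from the current resources
    state.subtract(need)
    state.update(effects)
    return +state  # unary + drops zero/negative counts

# "2 dollar = dollar ⊗ dollar": multiplicity encodes ⊗ of identical atoms.
wallet = Counter({"dollar": 2})

# The linear implication dollar ⊸ chocolate_bar of Section 2.1:
# the dollar is gone once the chocolate bar is derived.
after = apply_capability(wallet, ["dollar"], ["chocolate_bar"])
print(sorted(after.elements()))   # ['chocolate_bar', 'dollar']

# A second application spends the remaining dollar; a third fails.
after2 = apply_capability(after, ["dollar"], ["chocolate_bar"])
print(sorted(after2.elements()))  # ['chocolate_bar', 'chocolate_bar']
print(apply_capability(after2, ["dollar"], ["chocolate_bar"]))  # None
```

The failed third application is the point of the encoding: unlike a classical implication, the capability cannot fire while retaining its precondition.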
In order to address the dynamic manipulation of resources, we also include information about location and ownership in the encoding of resources, to address the relocation and changes in possession of resources during agent interaction.\nThat resource A is located at agent α and owned by agent β is expressed via the shorthand notation A@αβ, which is treated as a logic proposition in our framework.\nThis notation can later be extended to a more complex logic construct to reason about changes in location and ownership.\nIn our running example, a cricket bat cricket_b being located at and owned by agent Mer is denoted as cricket_b@MM.\nAfter a successful sale to the customer agent Cus, the cricket bat will be relocated to and owned by agent Cus.\nThe formula cricket_b@CC will then replace the formula cricket_b@MM to reflect the changes.\nOur treatment of unlimited resources is to model them as a number σ of copies of the resource's formula, where σ is chosen to be extremely large relative to the context.\nFor instance, to indicate that the merchant Mer can issue an unlimited number of sale quotes at any time, we use σ□sale_quote@MM.\nDeclaration of actions is modeled in a similar manner to resources.\nThe capabilities of agents refer to producing, consuming, relocating and changing ownership of resources.\nCapabilities are represented by describing the state before and after performing them.\nThe general representation form is Γ ⊸ Δ, in which Γ describes the conditions before and Δ describes the conditions after.\nThe linear implication ⊸ ensures that the conditions before are transformed into the conditions after.\nMoreover, some capabilities can be applied any number of times in the interaction context, and their formulas are also preceded by the number σ.\nTo take an example, we consider the capability of agent Mer of selling a cricket bat for 10
dollars.\nThe conditions before are 10 dollars and a payment method from agent Cus: 10$@CC ⊗ pay_m@CC.\nGiven these, by applying the capability, Mer will gain 10 dollars (10$@MM) and commit to providing a cricket bat (cricket_b@MM⊥) so that Cus will get a cricket bat (cricket_b@CC).\nTogether, the capability is encoded as 10$@CC ⊗ pay_m@CC ⊸ 10$@MM ⊗ cricket_b@MM⊥ ⊗ cricket_b@CC.\n3.2 Modeling commitments\nWe discuss the modeling of various types of commitments, their fulfillment and their enforcement mechanisms.\nDue to duality in linear logic, positive formulas can be regarded as formulas in supply and negative formulas can be regarded as formulas in demand.\nHence, we take the approach of modeling non-conditional, or base, commitments as negative formulas.\nIn particular, by turning a formula into its negative form, a base commitment to derive the resources or carry out the actions associated with the formula is created.\nIn the above example, the commitment of agent Mer to provide a cricket bat (cricket_b@MM) is cricket_b@MM⊥, which is subsequently discharged.\nA base commitment is fulfilled (discharged) whenever the committing agent successfully brings about the respective
resources or carries out the actions as required by the commitment.\nIn TLL modeling, this means that the corresponding positive formula is derived.\nResolution of commitments can then be naturally carried out by inference in TLL.\nFor example, cricket_b@MM will fulfill the commitment cricket_b@MM⊥, and both formulas are automatically removed, as cricket_b@MM ⊗ cricket_b@MM⊥ ⊢ ⊥.\nUnder the further assumption that agents are expected to resolve all formulas in demand (removing negative formulas), this creates a driving pressure on agents to resolve base commitments.\nThis pressure then becomes a natural and internal enforcement mechanism for base commitments.\nA commitment with conditions (or conditional commitment) can be modeled by connecting the conditions to base commitments via a linear implication.\nA general form is Γ ⊸ Δ, where Γ is the condition part and Δ includes base commitments.\nIf the condition Γ is derived, by consuming Γ, the linear implication will ensure that Δ results, which means the base commitments in Δ become effective.\nIf the conditions cannot be achieved, the linear implication cannot be applied and hence the commitment part of the conditional commitment remains inactive.\nIn our approach, conditional commitments are specified in their potential form as pre-commitments of the participating agents.\nPre-commitments are negotiated among agents via proposals and, upon being accepted, will form conditional commitments among the engaged agents.\nConditional commitments are interpreted such that the condition Γ is required of the proposed agent and the commitment part Δ is the responsibility of the owner (proposing) agent.\nIndeed, such an interpretation and the encoding of ⊸ realize the notion of a conditional commitment: the owner agent is willing to commit to deriving Δ given that the proposed agent satisfies the conditions
Γ.\nConditional commitments, pre-commitments and capabilities all have similar encodings.\nHowever, their differences lie in the phases of commitment that they are in.\nCapabilities are used internally by the owner agent and do not involve any commitment.\nPre-commitments can be regarded as capabilities intended for forming conditional commitments.\nUpon being accepted, pre-commitments will turn into conditional commitments and bring the two engaged agents into a commitment phase.\nAs an example, consider that Mer has a capability of selling cricket bats: (10$@CC ⊗ pay_m@CC) ⊸ (10$@MM ⊗ cricket_b@MM⊥ ⊗ cricket_b@CC).\nWhen Mer proposes its capability to Cus, the capability acts as a pre-commitment.\nWhen the proposal gets accepted, that pre-commitment will turn into a conditional commitment in which Mer commits to fulfilling the base commitment cricket_b@MM⊥ (which leads to having cricket_b@CC) upon the condition that Cus derives 10$@CC ⊗ pay_m@CC (which leads to having 10$@MM).\nBreakable commitments, which are in place to provide agents with the desired flexibility to release themselves from their commitments (cancel commitments), are also modeled naturally in our framework.\nA base commitment Com⊥ is turned into a breakable base commitment (cond ⊕ Com)⊥.\nThe extra token cond reflects the agent's internal deliberation about when the commitment to derive Com is broken.\nOnce cond is produced, due to the logical deduction cond ⊗ (cond ⊕ Com)⊥ ⊢ ⊥, the commitment (cond ⊕ Com)⊥ is removed, hence breaking the commitment to derive Com.\nMoreover, a breakable conditional commitment is modeled as A ⊸ (1 & B) instead of A ⊸ B.\nWhen the condition A is provided, the linear implication brings about (1 & B), and it is now up to the owner agent's internal choice whether 1 or B results.\nIf the agent chooses 1, which practically means nothing is derived, then the conditional commitment is deliberately broken.\n3.3 Protocol Construction\nGiven the modeling of various interaction concepts
like resource, action, capability and commitment, we now discuss how protocols can be specified.\nIn our framework, each agent is encoded with the resources, actions, capabilities, pre-commitments and any pending commitments that it has.\nPre-commitments, which stem from services the agents are capable of providing, are designed to be fair exchanges.\nIn a pre-commitment, all the requirements of the other party are put in the condition part and all the effects to be provided by the owner agent are put in the commitment part to make up a trade-off.\nSuch a design allows agents to freely propose pre-commitments to any interested parties.\nAn example of a pre-commitment is that of agent Mer regarding a sale of a cricket bat: [10$@CC ⊗ pay_m@CC ⊸ 10$@MM ⊗ ◇cricket_b@CC ⊗ cricket_b@MM⊥].\nThe condition is the requirement that the customer agent provides 10 dollars, which is assumed to be the price of a cricket bat, via a payment method.\nThe exchange is the cricket bat for the customer (◇cricket_b@CC) and hence is fair to the merchant.\nProtocols are specified in terms of sets of pre-commitments at the participating agents.\nGiven some initial interaction commitments, a protocol emerges as agents reason about which pre-commitments to offer and accept in order to fulfill these commitments.\nGiven such a protocol specification, we then discuss how an interaction might take place.\nAn interaction can start with a request or a proposal.\nWhen an agent cannot achieve some commitments by itself, it can make a request for them or propose a relevant pre-commitment to an appropriate agent to fulfill them.\nThe choice of pre-commitments depends on whether such pre-commitments can produce the formulas to fulfill the agent's pending commitments.\nWhen an agent receives a request, it searches for pre-commitments that can together produce the required formulas of the request.\nThose pre-commitments found will be used as proposals to the requesting
agents.\nOtherwise, a failure notice will be returned.\nWhen a proposal is received, the recipient agent also performs a search, with the inclusion of the proposal, for a proof of those formulas that can resolve its commitments.\nIf the search result is positive, the proposal is accepted and becomes a commitment.\nThe recipient then attempts to fulfill the conditions of the commitments.\nOtherwise, the proposal is refused and no commitment is formed.\nThroughout the interaction, proof search plays a vital role in protocol construction.\nProof search reveals that some commitments cannot be resolved locally or that some pre-commitments can be used to resolve pending commitments, which prompts the agent to make a request or a proposal respectively.\nProof search also determines which pre-commitments are relevant to the fulfillment of a request, which helps agents decide which pre-commitments to propose in answer to the request.\nMoreover, whether a received proposal is relevant to any pending commitments or not is also determined by a search for a proof of these commitments with the inclusion of the proposal.\nConditions of proposals can be resolved by proof search as it links them with the agents' current resources and capabilities as well as any relevant pre-commitments.\nTherefore, it can be seen that proof search performed by the participating agents can link up their respective pre-commitments and turn them into commitments as appropriate, which gives rise to protocol formation.\nWe will demonstrate this via our running example in Section 4.\n3.4 Interactive Messages\nAgents interact by sending messages.\nWe address agent interaction in a simple model which contains messages of type request, proposal, acceptance, refusal and failure notice.\nWe denote \"Source to Destination:\" prior to each message to indicate the source and destination of the message.\nFor example, \"Cus to Mer:\" denotes
that the message is sent from agent Cus to agent Mer.\nRequest messages start with the keyword REQUEST: \"REQUEST + formula\".\nFormulas in request messages are of commitments.\nProposal messages are preceded by \"PROPOSE\".\nFormulas are of capabilities.\nFor example, α to β: \"PROPOSE Γ ⊸ Δ\" is a proposal from agent α to agent β.\nThere are messages that agents use to respond to a proposal.\nAgents can indicate an acceptance: \"ACCEPT\", or a refusal: \"REFUSE\".\nTo signal a failure in fulfilling a request or proposal, agents reply with that request or proposal message appended with \"FAIL\".\n3.5 Generating Interactions\nAs we have seen, temporal linear logic provides an elegant means for encoding the various concepts of agent interaction in a commitment-based specification framework.\nAppropriate interaction is generated as agents negotiate their specified pre-commitments to fulfill their goals.\nThe association among pre-commitments at the participating agents and the monitoring of commitments to ensure that all are discharged are performed by proof search.\nIn the next section, we will demonstrate how specification and generation of interactions in our framework might work.\n4.\nEXAMPLE\nWe return to the online sales scenario introduced in Section 1.\n4.1 Specifying Protocol\nWe design a set of pre-commitments and capabilities to implement the above scenario.\nFor simplicity, we refer to them as rules.\nRules at agent Mer\nMer has available at any time 200 cricket bats for sale and can issue sale quotes at any time: 200□cricket_b@MM and σ□sale_quote@MM.\nRules at agent Ebank\nRule 4: Upon receiving a sale quote from Mer, at the next time point, Ebank commits to either informing Mer that Cus's credit is not approved (◇credit_not_appr@MB) or arranging a credit payment to Mer (◇credit_paym@MB).\nThe decision is dependent on the credibility of Cus and hence is external (⊕) to Ebank and Mer: σ□[quote@MM ⊸ ○(◇credit_not_appr@MB ⊕ ◇credit_paym@MB)]\nRules at agent Cus\nCus has an amount of 50 dollars available at any time, which can be
used for credit payment or cash payment: □50$@CC.\nCus has a commitment of obtaining a cricket bat at some time: [◇cricket_b@CC]⊥.\nRule 5: Cus will pay Mer via Paypal if there is an indication from EBank that Cus's credit is not approved: σ□[credit_not_appr@CB ⊸ ◇Paypal_paid@MM]\n4.2 Description of the interaction\nCus requests from Mer a cricket bat at some time.\nMer replies with a proposal in which each cricket bat costs 10 dollars.\nCus needs to prepare 10 dollars, and payment can be made by credit or via Paypal.\nAssuming that Cus only pays via Paypal if credit payment fails, Cus will let Mer charge by credit.\nMer will then ask EBank to arrange a credit payment.\nEBank proposes that Mer give a quote of sale and, depending on Cus's credibility, at the next time point either a credit payment will be arranged or a disapproval of Cus's credit will be reported.\nMer accepts and fulfills the conditions.\nIn the first case, the credit payment is done.\nIn the second case, the credit payment fails and Cus may backtrack to take the option of paying via Paypal.\nOnce payment is arranged, Mer will apply its original proposal to satisfy Cus's request for a cricket bat, hence removing one cricket bat from, and adding 10 dollars to, its set of resources.\n4.3 Interaction\n1.\nCus cannot fulfill its commitment [◇cricket_b@CC]⊥ and hence makes a request to Mer: C to M: REQUEST [◇cricket_b@CC]⊥\n2.\nTo meet the request, Mer searches for applicable rules.\nOne application of rule 1 can derive ◇cricket_b@CC, and Mer proposes an instance of rule 1 to Cus (*).\nWith similar analysis, Cus determines that, given that the conditions can be satisfied, the proposal can help to derive its request.\nHence, C to M: ACCEPT.\nCus analyzes the conditions of the accepted proposal by proof search.\nFrom (*), one way to satisfy the conditions is for Cus to derive, at the next n1 time points, 10 dollars (○^n1 10$@CC), and to choose paying via Paypal (○^n1 Paypal_paid@MM) OR by credit payment (○^n1 credit_paid@MM).\n3.\nDeriving ○^n1 10$@CC: as Cus has 50 dollars, it can make use of 10 dollars: □50$@CC ⊢ □10$@CC ⊢ ○^n1 10$@CC.\nThere are two options for the payment method; the choice is at agent Cus.\nWe assume that Cus prefers credit payment.\n4.\nDeriving ○^n1 credit_paid@MM: Cus cannot derive this formula by itself; hence, it will make a request to Mer: C to M: REQUEST [○^n1 credit_paid@MM]⊥.\n5.\nRule 2 at Mer is applicable, but Mer cannot derive its condition (○^n1 credit_paym@MB).\nHence, Mer will further make a request to EBank: M to E: REQUEST [○^n1 credit_paym@MB]⊥\nEbank searches and finds rule 4 applicable.\nBecause credit_paym@MB will be available one time point after the rule's application time, Ebank proposes to Mer an instance of rule 4 at the next n1-1 time points.\n6.\nB to M: PROPOSE ○^(n1-1)[quote@MM ⊸ ○(◇credit_not_appr@MB ⊕ ◇credit_paym@MB)]\nWith similar analysis, Mer accepts the proposal: M to B: ACCEPT.\nThe rule condition is fulfilled by Mer as □quote@MM ⊢ ○^(n1-1) quote@MM.\nHence, Ebank then applies the proposal to derive ○^(n1-1) ○(◇credit_not_appr@MB ⊕ ◇credit_paym@MB).\n⊕ indicates that the choice is external to both agents.\nThere are two cases: Cus's credit is approved or disapproved.\nFor simplicity, we show only the case where Cus's credit is approved.\nAt the next (n1-1) time points, ○^(n1-1) ○(◇credit_not_appr@MB ⊕ ◇credit_paym@MB) becomes ○^(n1-1) ○◇credit_paym@MB ⊢ ○^n1 credit_paym@MB.\nAs a result, at the next n1 time points, Ebank will arrange the credit payment.\n7.\nMer fulfills Cus's initial request.\nWhen any of
O^{n1}Paypal paid@MM (if Cus pays via Paypal) or O^{n1}credit paid@MM (if Cus pays by credit card) is derived, O^{n1}(credit paym@MM ⊕ Paypal paid@MM) is also derived; hence the payment method is arranged.\nTogether with the other condition, 10$@CC, being satisfied, this allows the initial proposal to be applied by Mer to derive O^{n1}cricket b@CC and a commitment O^{n1}cricket b@M⊥ for Mer, which is also resolved by the resource 0cricket b@MM available at Mer.\nAny value of n1 such that n1 - 1 > 0, i.e. n1 > 1, will allow Mer to fulfill Cus's initial request [Ocricket b@CC]⊥.\nThe interaction ends as all commitments are resolved.\n4.4 Flexibility\nThe desired flexibility has been achieved in the example.\nIt is Cus's own decision to proceed with the preferred payment method.\nAlso, the non-determinism of whether Cus's credit is disapproved or the credit payment is made to Mer is faithfully represented.\nIf an exception happens such that Cus's credit is not approved, credit not appr@CB is produced and Cus can backtrack to paying via Paypal.\nRule 5 will then be utilized to allow Cus to handle the exception by paying via Paypal.\nMoreover, in order to specify that making payments and sending cricket bats can occur in any order, we can add O in front of the payment method in rule 1 as follows: σ 0 [10$@CC ⊗ O (Paypal paid@MM ⊕ credit paid@MM)\nThis addition in the condition of the rule means that the time of payment can be any time of Cus's choice, as long as Cus pays; hence, the time order between making payments and sending goods becomes flexible.\n5.\nENCODING ISSUES\n5.1 Advantages of the TLL Framework\nOur TLL framework has demonstrated natural and expressive specification of agent interaction protocols.\nLinear implication (--) expresses causal relationships, which makes it natural to model a removal or a consumption, especially of resources, together with its consequences.\nHence, in our framework, resource transformation is modeled as a linear implication of 
the consumed resources to the produced resources.\nResource relocation is modeled as a linear implication from a resource at one agent to that resource at the other agent.\nLinear implication also ensures that fulfillment of the conditions of a conditional commitment will cause the commitment to happen.\nMoreover, state updates of agents result from a linear implication from the old state to the current state.\nTemporal operators (O, 0 and O) and their combinations help to specify the time of actions and of resource availability, and to express the time order of events.\nIn particular, precise time points are described by the use of the O operator or multiple copies of it.\nBased on this ability to specify correct time points for actions or events, their time order or sequencing can also be captured.\nAlso, a sense of duration is simulated by spreading copies of the resource or action formulas across multiple adjacent time points.\nMoreover, uncertainty in time can be represented and reasoned about by the use of 0 and O and their combinations with O. 0 can be used to express outer non-determinism while O expresses inner non-determinism.\nThese time properties of resources, actions and events are correctly respected throughout the agent reasoning process based on sequent calculus rules.\nFurthermore, the centrality of the notion of commitment in agent interaction has been recognized in many frameworks [11, 12, 1, 10, 4].\nHowever, to the best of our knowledge, modeling commitments directly at the propositional level of such a resource-conscious and time-aware logic as TLL is 
first investigated in our framework.\nOur framework models base commitments as negative formulas and conditional commitments via the use of linear implication and\/or negative formulas.\nThe modeling of commitments has a number of advantages: • Commitments are represented directly at the propositional logic level or via a logic connective, rather than via a non-logical construct as in [11], which makes the treatment of commitments more natural and simple and allows the use of readily available proof search systems, such as sequent calculus, for handling commitments.\nExisting logic connectives like ⊗, &, ⊕ and -- are also readily available for describing the relationships among commitments.\n• Fulfillment of commitments then becomes deriving the corresponding positive formulas or condition formulas, which simply reduces to a proof search task.\nAlso, given the required formulas, fulfillment of commitments can be implemented easily and automatically as deduction (com ⊗ com⊥ -- 1).\n• The enforcement of commitments is also internal, and is implemented simply via the assumption that agents are driven to remove all negative formulas for base commitments, and via the use of linear implication for conditional commitments.\nRegarding making protocol specification more flexible, our approach has marked a number of significant points.\nFirstly, flexibility of protocol specifications in our framework comes from the expressive power of the connectives of TLL.\n& and ⊕ refer to internal and external choices of agents on resources and actions, while ❑ and O refer to internal and external choices in the time domain.\nGiven that flexibility includes the ability to make a sensible choice, having the choices expressed explicitly in the specification of interaction protocols provides agents with an opportunity to reason about the right choices during 
interaction and hence explore the flexibility in them.\nSecondly, instead of being sequences of interactive actions, protocols are structured on commitments, which are more abstract than protocol actions.\nExecution of protocols is then based on fulfilling commitments.\nHence, unnecessary constraints on which particular interactive actions are executed by which agents, and on the order among them, are removed, which is a step toward flexibility as compared to traditional approaches.\nOn the other hand, in the presence of changes introduced externally, agents have the freedom to explore new sets of interactive actions, or to skip some interactive actions, as long as they still fulfill the protocol's commitments.\nThis brings more flexibility to the overall level of agents' interactive behaviors, and thus to the protocol.\nThirdly, the protocol is specified in a declarative manner, essentially as a set of pre-commitments at each participating agent.\nTo achieve goals, agents use reasoning based on the TLL sequent calculus to construct proofs of goals from pre-commitments and state formulas.\nThis essentially gives agents autonomy in the utilization of pre-commitments, and hence agents can adapt the ways they use these to deal flexibly with changing environments.\nIn particular, as proof construction by agents selects a sequence of pre-commitments for interaction, being able to select from all the possible combinations of pre-commitments in proof search gives more opportunities and flexibility than selecting from only a few fixed and predefined sequences.\nIt is then also more likely that agents can handle exceptions or explore opportunities that arise.\nMoreover, as the actual order of pre-commitments is determined by the proof construction process rather than predefined, agents can flexibly change the order to suit new situations.\nFourthly, changes in the environment can be regarded as removing formulas from or adding formulas to the state formulas.\nBecause the proof construction by 
agents takes into account the current state formulas when it picks pre-commitments, changes in the state formulas will be reflected in the choice of which relevant pre-commitments to proceed with.\nHence, the agents have flexibility in deciding what to do to deal with changes.\nLastly, specifying protocols in our framework follows a modular approach, which adds ease and flexibility to the design process of protocols.\nProtocols are specified by placing a set of pre-commitments at each participating agent according to their roles.\nEach pre-commitment can indeed be specified as a process in its own right, with condition formulas as its input and the commitment part's formulas as its output.\nExecution of each conditional commitment is a relatively independent thread, and these are linked together by the proof search to fulfill agents' commitments.\nAs a result, with such a design of pre-commitments, one pre-commitment can be added or removed without interfering with the others, thereby achieving a modular design of the protocols.\n5.2 Limitations of TLL Framework on Modeling\nAs all the temporal operators in TLL refer to concrete time points, we cannot express durations in time faithfully.\nOne major disadvantage of simulating the duration of an event by spreading copies of that event over adjacent time points continuously (like OA ⊗ O^2 A ⊗ ... ⊗ O^n A) is that it requires the time range to be provided explicitly.\nHence, a notion like until cannot be naturally expressed in TLL.\nCommitments of agents can be in conflict, especially when resolving all of them requires more resources or actions than the agents have.\nOur work has not covered handling commitments that are in conflict.\nAnother troublesome aspect of this approach is that the rules for interaction require some detailed knowledge of the formulas of temporal linear logic.\nClearly it would be beneficial to have a visually-based tool, similar to UML diagrams, which would allow non-experts to specify the appropriate rules 
without having to learn the details of the formulas themselves.\n6.\nCONCLUSIONS AND FURTHER WORK\nThis paper uses TLL for specifying interaction protocols.\nIn particular, TLL is used to model the concepts of resource, capability, pre-commitment and commitment with tight integration, as well as their manipulation with respect to time.\nAgents then make use of proof search techniques to perform the desired interactions.\nIn particular, the approach allows protocol specifications to capture the meaning of interactive actions via commitments, and to capture the internal and external choices of agents about resources, commitments and time, as well as updating processes.\nThe proof construction mechanism provides agents with the ability to dynamically select appropriate pre-commitments, and hence helps agents gain flexibility both in choosing the interactive actions that are most suitable and in their order, taking into consideration on-going changes in the environment.\nMany other approaches to modeling protocols also use the commitment concept to bring more meaning into agents' interactive actions.\nApproaches based on commitment machines [11, 12, 10, 1] suffer from a number of issues.\nThese approaches use logic systems that are limited in their expressiveness for modeling resources.\nAlso, as an extra abstract layer of commitments is created, more tasks are created accordingly.\nIn particular, there must be a human-designed mapping between protocol actions and operations on commitments, as well as between control variables (fluents) and phases of commitment achievement.\nMoreover, external mechanisms must be in place to comprehend and handle operations on and resolution of commitments, as well as enforcement of the notion of commitment on its abstract data type representations.\nThis requires another execution in the commitment layer in conjunction with 
the actual execution of the protocol.\nNot only do these extra tasks create overhead, but they also make the specification and execution of protocols more error prone.\nSimilar works in [8] and [9] explore the advantages of linear logic and TLL, respectively, by using partial deduction techniques to help agents figure out the missing capabilities or resources and, based on that, to negotiate with other agents about cooperation strategies.\nOur approach differs in bringing the concept of commitment into the modeling of interaction, and in providing a more natural and detailed map for specifying interaction, especially regarding choices, time and updating, using the full propositional TLL.\nMoreover, we emphasize the use of pre-commitments as interaction rules, with a full set of TLL inference rules, to provide the advantages of proof construction in achieving flexible interaction.\nOur further work will include using TLL to verify various properties of interaction protocols such as liveness and safety.\nAlso, we will investigate developing an execution mechanism for such TLL specifications in our framework.\nAcknowledgments\nWe are very thankful to Michael Winikoff for many stimulating and helpful discussions of this material.\nWe also would like to acknowledge the support of the Australian Research Council under grant DP0663147.\n7.\nREFERENCES\nAPPENDIX A. TEMPORAL SEQUENT RULES FOR TLL 
"} {"id":"I-4","title":"Meta-Level Coordination for Solving Negotiation Chains in Semi-Cooperative Multi-Agent Systems","abstract":"A negotiation chain is formed when multiple related negotiations are spread over multiple agents. In order to appropriately order and structure the negotiations occurring in the chain so as to optimize the expected utility, we present an extension to a single-agent concurrent negotiation framework. This work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance. We introduce a pre-negotiation phase that allows agents to transfer meta-level information. Using this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship of flexibility and success probability. This more accurate model helps the agent in choosing a better negotiation solution in the global negotiation chain context. The agent can also use this information to allocate appropriate time for each negotiation, and hence to find a good ordering of all related negotiations. 
The experimental data shows that these mechanisms improve the agents' and the system's overall performance significantly.","lvl-1":"Meta-Level Coordination for Solving Negotiation Chains in Semi-Cooperative Multi-Agent Systems Xiaoqin Zhang Computer and Information Science Department University of Massachusetts at Dartmouth x2zhang@umassd.edu Victor Lesser Computer Science Department University of Massachusetts at Amherst lesser@cs.umass.edu ABSTRACT A negotiation chain is formed when multiple related negotiations are spread over multiple agents.\nIn order to appropriately order and structure the negotiations occurring in the chain so as to optimize the expected utility, we present an extension to a single-agent concurrent negotiation framework.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nWe introduce a pre-negotiation phase that allows agents to transfer meta-level information.\nUsing this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship of flexibility and success probability.\nThis more accurate model helps the agent in choosing a better negotiation solution in the global negotiation chain context.\nThe agent can also use this information to allocate appropriate time for each negotiation, and hence to find a good ordering of all related negotiations.\nThe experimental data shows that these mechanisms improve the agents' and the system's overall performance significantly.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems General Terms Algorithms, Performance, Experimentation 1.\nINTRODUCTION Sophisticated negotiation for task and resource allocation is crucial for the next generation of multi-agent systems (MAS) 
applications.\nGroups of agents need to efficiently negotiate over multiple related issues concurrently in a complex, distributed setting where there are deadlines by which the negotiations must be completed.\nThis is an important research area in which very little work has been done.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nThere is no single global goal in such systems, either because each agent represents a different organization\/user, or because it is difficult\/impossible to design one single global goal.\nThis issue arises due to multiple concurrent tasks, resource constraints and uncertainties, and thus no agent has sufficient knowledge or computational resources to determine what is best for the whole system [11].\nAn example of such a system would be a virtual organization [12] (i.e. 
a supply chain) dynamically formed in an electronic marketplace such as the one developed by the CONOISE project [5].\nTo accomplish tasks continuously arriving in the virtual organization, cooperation and sub-task relocation are needed and preferred.\nThere is no single global goal since each agent may be involved in multiple virtual organizations.\nMeanwhile, the performance of each individual agent is tightly related to other agents' cooperation and the virtual organization's overall performance.\nThe negotiation in such systems is not a zero-sum game: a deal that increases both agents' utilities can be found through efficient negotiation.\nAdditionally, there are multiple encounters among agents since new tasks are arriving all the time.\nIn such negotiations, price may or may not be important, since it may be fixed by a long-term contract.\nOther factors like quality and delivery time are important too.\nReputation mechanisms in the system make cheating unattractive from a long-term viewpoint due to multiple encounters among agents.\nIn such systems, agents are self-interested because they primarily focus on their own goals; but they are also semi-cooperative, meaning they are willing to be truthful and collaborate with other agents to find solutions that are beneficial to all participants, including themselves, though they will not voluntarily sacrifice their own utility in exchange for others' benefits.\nAnother major difference between this work and other work on negotiation is that negotiation, here, is not viewed as a stand-alone process.\nRather, it is one part of the agent's activity, tightly interleaved with the planning, scheduling and execution of the agent's activities, which also may relate to other negotiations.\nBased on this recognition, this work on negotiation is concerned more with the meta-level decision-making process in negotiation than with the basic protocols or languages.\nThe goal of this research is to develop a set 
of macro-strategies that allow the agents to effectively manage multiple related negotiations, including, but not limited to, the following issues: how much time should be spent on each negotiation, how much flexibility (see the formal definition in Formula 3) should be allocated for each negotiation, and in what order the negotiations should be performed.\n50 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nThese macro-strategies are different from the micro-strategies that direct an individual negotiation thread, such as whether the agent should concede and how much the agent should concede, etc. [3].\nIn this paper we extend a multi-linked negotiation model [10] from a single-agent perspective to a multi-agent perspective, so that a group of agents involved in chains of interrelated negotiations can find nearly-optimal macro negotiation strategies for pursuing their negotiations.\nThe remainder of this paper is structured in the following manner.\nSection 2 describes the basic negotiation process and briefly reviews a single agent's model of multi-linked negotiation.\nSection 3 introduces a complex supply-chain scenario.\nSection 4 details how to solve the problems arising in the negotiation chain.\nSection 5 reports on the experimental work.\nSection 6 discusses related work and Section 7 presents conclusions and areas of future work.\n2.\nBACKGROUND ON MULTI-LINKED NEGOTIATION In this work, the negotiation process between any pair of agents is based on an extended version of the contract net [6]: the initiator agent announces the proposal, including multiple features; the responding agent evaluates it and responds with either a yes\/no answer or a counter proposal with some features modified.\nThis process can go back and forth until an agreement is reached or the agents decide to stop.\nIf an agreement is reached and one agent cannot fulfill the commitment, it needs to pay the other party a decommitment penalty as specified in the commitment.\nA negotiation starts with a 
proposal, which announces that a task (t) needs to be performed and includes the following attributes: 1.\nearliest start time (est): the earliest start time of task t; task t cannot be started before time est. 2.\ndeadline (dl): the latest finish time of the task; the task needs to be finished before the deadline dl.\n3.\nminimum quality requirement (minq): the task needs to be finished with a quality achievement no less than minq.\n4.\nregular reward (r): if the task is finished as the contract requested, the contractor agent will get reward r. 5.\nearly finish reward rate (e): if the contractor agent can finish the task earlier than dl, it will get an extra early finish reward proportional to this rate.\n6.\ndecommitment penalty rate (p): if the contractor agent cannot perform the task as promised in the contract, or if the contractee agent needs to cancel the contract after it has been confirmed, it needs to pay a decommitment penalty (p∗r) to the other agent.\nThe above attributes are also called attributes-in-negotiation, which are the features of the subject (issue) to be negotiated, and they are domain-dependent.\nAnother type of attribute 1 is the attribute-of-negotiation, which describes the negotiation process itself and is domain-independent, such as: 1 These attributes are similar to those used in project management; however, the multi-linked negotiation problem cannot be reduced to a project management problem or a scheduling problem.\nThe multi-linked negotiation problem has two dimensions: the negotiations, and the subjects of negotiations.\nThe negotiations are interrelated and the subjects are interrelated; the attributes of negotiations and the attributes of the subjects are interrelated as well.\nThis two-dimensional complexity of interrelationships distinguishes it from the classic project management or scheduling problem, where all tasks to be scheduled are local tasks and no negotiation is needed.\n1.\nnegotiation duration 
(δ(v)): the maximum time allowed for negotiation v to complete, either reaching an agreed-upon proposal (success) or no agreement (failure).\n2.\nnegotiation start time (α(v)): the start time of negotiation v. α(v) is an attribute that needs to be decided by the agent.\n3.\nnegotiation deadline ( (v)): negotiation v needs to be finished before this deadline (v).\nThe negotiation is no longer valid after time (v), which is the same as a failure outcome of this negotiation.\n4.\nsuccess probability (ps(v)): the probability that v is successful.\nIt depends on a set of attributes, including both attributes-in-negotiation (e.g. reward, flexibility, etc.) and attributes-of-negotiation (e.g. negotiation start time, negotiation deadline, etc.).\nAn agent involved in multiple related negotiation processes needs to reason about how to manage these negotiations in terms of ordering them and choosing appropriate values for their features.\nThis is the multi-linked negotiation problem [10]: DEFINITION 2.1.\nA multi-linked negotiation problem is defined as an undirected graph (more specifically, a forest, i.e. a set of rooted trees): M = (V, E), where V = {v} is a finite set of negotiations, and E = {(u, v)} is a set of binary relations on V.\n(u, v) ∈ E denotes that negotiation u and negotiation v are directly linked.\nThe relationships among the negotiations are described by a forest, a set of rooted trees {Ti}.\nThere is a relation operator associated with every non-leaf negotiation v (denoted as ρ(v)), which describes the relationship between negotiation v and its children.\nThis relation operator has two possible values: AND and OR.\nThe AND relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires all its child nodes to have successful accomplishments.\nThe OR relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires at least one child node to have 
successful accomplishment, where the multiple child nodes represent alternatives for accomplishing the same goal.\nThe multi-linked negotiation problem is a local optimization problem.\nTo solve a multi-linked negotiation problem is to find a negotiation solution (φ, ϕ) with optimized expected utility EU(φ, ϕ), which is defined as: EU(φ, ϕ) = Σ_{i=1}^{2^n} P(χi, ϕ) ∗ (R(χi, ϕ) − C(χi, φ, ϕ)) (1) A negotiation ordering φ defines a partial order over all negotiation issues.\nA feature assignment ϕ is a mapping function that assigns a value to each attribute that needs to be decided in the negotiation.\nA negotiation outcome χ for a set of negotiations {vj}, (j = 1, ..., n) specifies the result for each negotiation, either success or failure.\nThere are a total of 2^n different outcomes for n negotiations: {χi}, (i = 1, ..., 2^n).\nP(χi, ϕ) denotes the probability of the outcome χi given the feature assignment ϕ, which is calculated based on the success probability of each negotiation.\nR(χi, ϕ) denotes the agent's utility increase given the outcome χi and the feature assignment ϕ, and C(χi, φ, ϕ) is the sum of the decommitment penalties of those negotiations which are successful but need to be abandoned because of the failure of other directly related negotiations; these directly related negotiations are performed concurrently with this negotiation, or after it, according to the negotiation ordering φ.\n[Figure 1: A Complex Negotiation Chain Scenario] A 
heuristic search algorithm [10] has been developed to solve the single agent's multi-linked negotiation problem; it produces nearly-optimal solutions.\nThis algorithm is used as the core of the decision-making for each individual agent in the negotiation chain scenario.\nIn the rest of the paper, we present our work on how to improve the local solution of a single agent in the global negotiation chain context.\n3.\nNEGOTIATION CHAIN PROBLEM The negotiation chain problem occurs in a multi-agent system, where each agent represents an individual, a company, or an organization, and there is no absolute authority in the system.\nEach agent has its own utility function for defining the implications of achieving its goals.\nThe agent is designed to optimize its expected utility given its limited information, computational and communication resources.\nDynamic tasks arrive at individual agents, most tasks requiring the coordination of multiple agents.\nEach agent has the scheduling and planning ability to manage its local activities; some of these activities are related to other agents' activities.\nNegotiation is used to coordinate the scheduling of these mutually related activities.\nThe negotiation is tightly connected with the agent's local scheduling\/planning processes and is also related to other negotiations.\nAn agent may be involved in multiple related negotiations with multiple other agents, and each of the other agents may be involved in related negotiations with others too.\nFigure 1 describes a complex negotiation chain scenario.\nThe Store, the PC Manufacturer, the Memory Producer and the Distribution Center are all involved in multi-linked negotiation problems.\nFigure 2 shows a distributed model of part of the negotiation chain described in Figure 1.\nEach agent has a local optimization problem - the multi-linked negotiation problem (represented as an and-or tree), which can be solved using the model and procedures described in Section 2.\nHowever, the local 
optimal solution may not be optimal in the global context, given that the local model is neither complete nor accurate.\nThe dashed line in Figure 2 represents the connection of these local optimization problems through the common negotiation subject.\nA negotiation chain problem O is a group of tightly-coupled local optimization problems: O = {O1, O2, ...On}, where Oi denotes the local optimization problem (multi-linked negotiation problem) of agent Ai. Agent Ai's local optimal solution S^lo_i maximizes the expected local utility based on incomplete information and assumptions about other agents' local strategies (we define such incomplete information and imperfect assumptions of agent i as Ii): U^exp_i(S^lo_i, Ii) ≥ U^exp_i(S^x_i, Ii) for all x ≠ lo.\nHowever, the combination of these local optimal solutions {S^lo_i} : < S^lo_1, S^lo_2, ...S^lo_n > can be sub-optimal to a set of better local optimal solutions {S^blo_i} : < S^blo_1, S^blo_2, ...S^blo_n > if the global utility can be improved without any agent's local utility being decreased by using {S^blo_i}.\nIn other words, {S^lo_i} is dominated by {S^blo_i} ({S^lo_i} ≺ {S^blo_i}) iff: Ui(< S^lo_1, S^lo_2, ...S^lo_n >) ≤ Ui(< S^blo_1, S^blo_2, ...S^blo_n >) for i = 1, ...n and Σ_{i=1}^{n} Ui(< S^lo_1, S^lo_2, ...S^lo_n >) < Σ_{i=1}^{n} Ui(< S^blo_1, S^blo_2, ...S^blo_n >).\nThere are multiple sets of better local optimal solutions: {S^blo1_i}, {S^blo2_i}, ... 
Some of them may be dominated by others. A set of better local optimal solutions {S_i^{blo_g}} that is not dominated by any other set is called best local optimal. If a set of best local optimal solutions {S_i^{blo_g}} dominates all others, {S_i^{blo_g}} is called globally local optimal. However, sometimes the globally local optimal set does not exist; instead, there exist multiple sets of best local optimal solutions. Even if the globally local optimal solution does exist in theory, finding it may not be realistic: the agents are making decisions concurrently, and constructing perfect local information and assumptions about other agents (I_i) in this dynamic environment is a very difficult and sometimes even impossible task. The goal of this work is to improve each agent's local model about other agents (I_i) through meta-level coordination. As I_i becomes more accurate, the agent's local optimal solution to its local multi-linked negotiation problem becomes a better local optimal solution in the context of the global negotiation chain problem. We are not arguing that this holds universally in all situations, but our experimental work shows that the sum of the agents' utilities in the system improved by 95% on average when meta-level coordination was used to improve each agent's local model I_i. In this work, we focus on improving the agent's local model in two directions. One direction is to build a better function to describe the relationship between the success probability of a negotiation and the flexibility allocated to it. The other direction is to find how to allocate time more efficiently for each negotiation in the negotiation chain context.
4. NEW MECHANISM - META-LEVEL COORDINATION
In order for an agent to get a better local model about other agents in the negotiation chain context, we introduce a pre-negotiation phase into the local negotiation process. During the
pre-negotiation phase, agents communicate with other agents who have task-contracting relationships with them; they transfer meta-level information before deciding how and when to do the negotiations. Each agent tells other agents what types of tasks it will ask them to perform and the probability distributions of some parameters of those tasks, e.g. the earliest start times and the deadlines. When these probability distributions are not available directly, agents can learn such information from their past experience. In the experiment described later, such distribution information is learned rather than being directly told by other agents. Specifically, each agent provides the following information to other related agents:
• Whether additional negotiation is needed in order to make a decision on the contracting task, and if so, how many more negotiations are needed. negCount represents the total number of additional negotiations needed for a task, including additional negotiations needed for its subtasks that happen among other agents. In a negotiation chain situation, this information is propagated and updated through the chain until every agent has accurate information.
The Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
[Figure 2: Distributed Model of Negotiation Chains]
Let subNeg(T) be the set of subtasks of task T that require additional negotiations; then:

negCount(T) = |subNeg(T)| + Σ_{t∈subNeg(T)} negCount(t)   (2)

For example, in the scenario described in Figure 1, for the Distribution Center, task Order Hardware consists of three subtasks that need additional negotiations with other agents: Order Chips, Order Memory and Deliver Hardware.
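The recursive count in equation (2), applied to this running example, can be sketched as follows; the dictionary encoding of "which subtasks need additional negotiation" is our own.

```python
# A minimal sketch of the negCount propagation in equation (2). Task
# names follow the paper's Figure 1 scenario; the dictionary below is
# an illustrative encoding, not the paper's data structure.

def neg_count(task, sub_neg):
    """Total number of additional negotiations needed for `task`:
    its own negotiable subtasks plus, recursively, the negotiations
    those subtasks require among other agents."""
    subtasks = sub_neg.get(task, [])
    return len(subtasks) + sum(neg_count(t, sub_neg) for t in subtasks)

# Subtasks requiring additional negotiations, per the running example.
sub_neg = {
    "Order Hardware": ["Order Chips", "Order Memory", "Deliver Hardware"],
    "Order Computer": ["Deliver Computer", "Order Hardware"],
}

print(neg_count("Order Hardware", sub_neg))  # 3
print(neg_count("Order Computer", sub_neg))  # 2 + 0 + 3 = 5
```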
However, no further negotiations are needed for other agents to make decisions on these subtasks, hence the negCount for each of these subtasks is 0. The following information is sent to the PC Manufacturer by the Distribution Center: negCount(Order Hardware) = 3. For the PC Manufacturer, task Order Computer contains two subtasks that require additional negotiations: Deliver Computer and Order Hardware. When the PC Manufacturer receives the message from the Distribution Center, it updates its local information: negCount(Order Computer) = 2 + negCount(Deliver Computer) + negCount(Order Hardware) = 2 + 0 + 3 = 5, and sends the updated information to the Store Agent.
• Whether there are other tasks competing with this task and what the likelihood of conflict is. Conflict means that, given all constraints, the agent cannot accomplish all tasks on time and needs to reject some tasks. The likelihood of conflict Pc_ij between a task of type i and another task of type j is calculated based on the statistical model of each task's parameters, including earliest start time (est), deadline (dl), task duration (dur) and slack time (sl), using the formula [7]:

Pc_ij = P(dl_i − est_j ≤ dur_i + dur_j ∧ dl_j − est_i ≤ dur_i + dur_j)

When there are more than two types of tasks, the likelihood of no conflict between task i and the rest of the tasks is calculated as:

PnoConflict(i) = Π_{j=1, j≠i}^{n} (1 − Pc_ij)

For example, the Memory Producer tells the Distribution Center about the task Order Memory. Its local decision does not involve additional negotiation with other agents (negCount = 0); however, there is another task from the Store Agent that competes with this task, thus the likelihood of no conflict is 0.5 (PnoConflict = 0.5). On the other hand, the CPU Producer tells the Distribution Center about the task Order Chips: its local decision does not involve additional negotiation with other agents, and there are no other tasks competing with this task (PnoConflict = 1.0) given the current environment setting.
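One way to estimate Pc_ij and PnoConflict(i) from the learned parameter distributions is a simple Monte Carlo sketch, as below; the uniform parameter ranges and sampler names are placeholders for whatever distributions the agents actually learn, not the paper's statistical model.

```python
# Illustrative Monte Carlo estimate of the pairwise conflict likelihood
# Pc_ij (formula [7]) and of PnoConflict(i). Task-type samplers return
# (est, dl, dur) tuples; the ranges below are invented placeholders.
import random

def conflict_prob(task_i, task_j, trials=20000, rng=random.Random(0)):
    """P(the two tasks cannot both fit): dl_i - est_j <= dur_i + dur_j
    and dl_j - est_i <= dur_i + dur_j, over sampled parameters."""
    hits = 0
    for _ in range(trials):
        est_i, dl_i, dur_i = task_i(rng)
        est_j, dl_j, dur_j = task_j(rng)
        if (dl_i - est_j <= dur_i + dur_j) and (dl_j - est_i <= dur_i + dur_j):
            hits += 1
    return hits / trials

def p_no_conflict(task_i, others, **kw):
    """Product over the other task types of (1 - Pc_ij)."""
    p = 1.0
    for task_j in others:
        p *= 1.0 - conflict_prob(task_i, task_j, **kw)
    return p

# Placeholder samplers for two competing task types.
order_memory = lambda rng: (rng.uniform(0, 5), rng.uniform(10, 30), 4)
purchase_memory = lambda rng: (rng.uniform(0, 5), rng.uniform(10, 30), 4)

print(round(p_no_conflict(order_memory, [purchase_memory]), 2))
```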
Based on the above information, the Distribution Center knows that task Order Memory needs more flexibility than task Order Chips in order to be successful in negotiation. Meanwhile, the Distribution Center tells the PC Manufacturer that task Order Hardware involves further negotiation with other agents (negCount = 3), and that its local decision depends on other agents' decisions. This piece of information helps the PC Manufacturer allocate appropriate flexibility for task Order Hardware in negotiation. In this work, we introduce a short learning period for agents to learn the characteristics of those incoming tasks, including est, dl, dur and sl, which are used to calculate Pc_ij and PnoConflict for the meta-level coordination.
[Figure 3: Task Structures of PC Manufacturer and Distribution Center]
While the system runs, agents continually monitor these characteristics. An updated message is sent to related agents when there is a significant change in the meta-level information. Next we describe how the agent uses the meta-level information transferred during the pre-negotiation phase. This information is used to improve the agent's local model; more specifically, it enters the agent's local decision-making process by affecting the values of some features. In particular, we are concerned with two features that have strong implications for the agent's macro strategy for the multi-linked
negotiations, and hence also affect the performance of a negotiation chain significantly. The first is the amount of flexibility specified in the negotiation parameters. The second feature we explore is the time allocated for the negotiation process to complete. The time allocated for each negotiation affects the possible ordering of those negotiations, and it also affects the negotiation outcome. Details are discussed in the following sections.
4.1 Flexibility and Success Probability
Agents not only need to deal with complex negotiation problems, they also need to handle their own local scheduling and planning processes, which are interleaved with the negotiation process. Figure 3 shows the local task structures of the PC Manufacturer and the Distribution Center. Some of these tasks can be performed locally by the PC Manufacturer, such as Get Software and Install Software, while other tasks (non-local tasks) such as Order Hardware and Deliver Computer need to be performed by other agents. The PC Manufacturer needs to negotiate with the Distribution Center and the Transporter about whether they can perform these tasks, and if so, when and how they will perform them. When the PC Manufacturer negotiates with other agents about a non-local task, it needs to have the other agents' arrangements fit into its local schedule. Since the PC Manufacturer is dealing with multiple non-local tasks simultaneously, it also needs to ensure that the commitments on these non-local tasks are consistent with each other. For example, the deadline of task Order Hardware cannot be later than the start time of task Deliver Computer.
[Figure 4: A Sample Local Schedule of the PC Manufacturer]
Figure 4 shows a sample local schedule of the PC Manufacturer. According to this schedule, as long as task Order Hardware is performed during time [11, 28] and task Deliver Computer is performed during time [34, 40], there exists a feasible schedule for all tasks, and task Order Computer can be finished by time 40, which is the deadline promised to the Customer. The time ranges allocated for task Order Hardware and task Deliver Computer are called consistent ranges; the negotiations on these tasks can be performed independently within these ranges without worrying about conflict. Notice that each task should be allocated a time range large enough to accommodate the estimated task process time. The larger the range is, the more likely the negotiation will succeed, because it is easier for the other agent to find a local schedule for this task. Then the question is, how big should this time range be? We define a quantitative measure called flexibility. Given a task t, suppose the allocated time range for t is [est, dl], where est is the earliest start time and dl stands for the deadline:

flexibility(t) = (dl − est − process_time(t)) / process_time(t)   (3)

Flexibility is an important attribute because it directly affects the possible outcome of the negotiation. The success probability of a negotiation can be described as a function of the flexibility. In this work, we adopt the following formula for the success probability function based on the flexibility of the negotiation issue:

ps(v) = pbs(v) × (2/π) × arctan(f(v) + c)   (4)

This function describes a phenomenon where initially the likelihood of a successful negotiation increases significantly as the flexibility grows, and then levels off, which mirrors our experience from previous experiments. pbs(v) is the basic success probability of negotiation v when the flexibility f(v) is very large; c is a parameter used to adjust the relationship.
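Equations (3) and (4) can be sketched numerically as follows; the parameter values pbs = 1.0 and c = 1.0 are illustrative defaults, not the paper's tuned settings.

```python
# Numeric sketch of equations (3) and (4): task flexibility and the
# arctan-shaped success-probability model. Default parameter values
# are illustrative.
import math

def flexibility(est, dl, process_time):
    """Equation (3): spare time in [est, dl] relative to process time."""
    return (dl - est - process_time) / process_time

def success_prob(f, pbs=1.0, c=1.0):
    """Equation (4): rises quickly with flexibility f, then levels off
    toward the basic success probability pbs."""
    return pbs * (2 / math.pi) * math.atan(f + c)

# Order Hardware in the sample schedule: range [11, 28], 11 time units.
f = flexibility(11, 28, 11)   # (28 - 11 - 11) / 11 ≈ 0.545
for flex in (0.0, f, 5.0, 50.0):
    print(round(success_prob(flex), 3))  # increasing, approaching pbs = 1.0
```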
Different function patterns can result from different parameter values, as shown in Figure 5. This function describes the agent's assumption about how the other agent involved in this negotiation would respond to this particular negotiation request when it has flexibility f(v). This function is part of the agent's local model about other agents. To improve the accuracy of this function and make it closer to reality, the agent adjusts these two values according to the meta-level information transferred during the pre-negotiation phase. The value of c depends on whether there is further negotiation involved and whether there are other tasks competing with this task for common resources. If so, more flexibility is needed for this issue and hence c should be assigned a smaller value. In our implementation, the following procedure is used to calculate c based on the meta-level information negCount and PnoConflict:

if (PnoConflict > 0.99)   // no other competing task
    c = Clarge − negCount
else                      // competing task exists
    c = Csmall

[Figure 5: Different Success Probability Functions]
This procedure works as follows: when there is no other competing task, c depends on the number of additional negotiations needed. The more additional negotiations are needed, the smaller the value of c, hence more flexibility will be assigned to this issue to ensure negotiation success. If no more negotiation is needed, c is assigned a large number Clarge, meaning that less flexibility is needed for this issue. When there are other competing tasks, c is assigned a small number Csmall, meaning that more flexibility is needed for this issue. In our experimental work, we set Clarge to 5 and Csmall to 1. These values were selected according to our experience; a more practical approach would be to have agents learn and dynamically adjust these values, which is part of our future work. pbs is calculated based on PnoConflict, f(v) (the flexibility of v in previous
negotiation), and c, using the reverse format of equation 4:

pbs(v) = min(1.0, PnoConflict(v) × (π/2) / arctan(f(v) + c))   (5)

For example, based on the scenario described above, the agents have the following values for c and pbs based on the meta-level information transferred:
• PC Manufacturer, Order Hardware: pbs = 1.0, c = 2;
• Distribution Center, Order Chips: pbs = 1.0, c = 5;
• Store Agent, Order Memory: pbs = 0.79, c = 1;
Figure 5 shows the different patterns of the success probability function given different parameter values. Based on such patterns, the Store Agent would allocate more flexibility to the task Order Memory to increase the likelihood of success in negotiation. In the agent's further negotiation process, formula 4 with different parameter values is used in reasoning about how much flexibility should be allocated to a certain issue. The pre-negotiation communication occurs before negotiation, but not before every negotiation session. Agents only need to communicate when the environment changes, for example, when new types of tasks are generated, the characteristics of tasks change, or the negotiation partner changes. If no major change happens, the agent can just use the current knowledge from previous communications. The communication and computation overhead of this pre-negotiation mechanism is very small, given the simple information collection procedure and the short message to be transferred. We will discuss the effect of this mechanism in Section 5.
4.2 Negotiation Duration and Deadline
In the agent's local model, there are two attributes that describe how soon the agent expects the other agent to reply to the negotiation v: the negotiation duration δ(v) and the negotiation deadline of v.

Table 1: Examples of negotiations (δ(v): negotiation duration, s.p.: success probability)
index  task-name         δ(v)  reward  s.p.  penalty
1      Order Hardware    4     6       0.99  3
2      Order Chips       4     1       0.99  0.5
3      Order Memory      4     1       0.80  0.5
4      Deliver Hardware  4     1       0.70  0.5

These two attributes affect the negotiation solution. Part of the negotiation solution is a negotiation ordering φ, which specifies in what order the multiple negotiations should be performed. In order to control the negotiation process, every negotiation should be finished before its negotiation deadline, and the negotiation duration is the time allocated for this negotiation. If a negotiation cannot be finished during the allocated time, the agent has to stop this negotiation and consider it a failure. The decision about the negotiation order depends on the success probability, reward, and decommitment penalty of each negotiation. A good negotiation order should reduce the risk of decommitment and hence reduce the decommitment penalty. A search algorithm to find such a negotiation order is described in [10]. For example, Table 1 shows some of the negotiations for the Distribution Center and their related attributes. Given enough time (negotiation deadline greater than 16), the best negotiation order is: 4 → 3 → 2 → 1. The most uncertain negotiation (4: Deliver Hardware) is performed first. The negotiation with the highest penalty (1: Order Hardware) is performed after all related negotiations (2, 3, and 4) have been completed, so as to reduce the risk of decommitment. If the negotiation deadline is less than 12 and greater than 8, the following negotiation order is preferred: (4, 3, 2) → 1, which means negotiations 4, 3 and 2 can be performed in parallel, and 1 needs to be performed after them. If the negotiation deadline is less than 8, then all negotiations have to be performed in parallel, because there is no time for sequencing negotiations. In the original model for a single agent [10], the negotiation deadline of v is assumed to be given by the agent who initiates the
contract. The negotiation duration δ(v) is an estimate of how long the negotiation takes, based on experience. However, the situation is not that simple in a negotiation chain problem. Consider the following scenario. When the Customer posts a contract for task Purchase Computer, it could require the Store Agent to reply by time 20. Time 20 can be considered the negotiation deadline for Purchase Computer. When the Store Agent negotiates with the PC Manufacturer about Order Computer, what negotiation deadline should it specify? How long the negotiation on Order Computer takes depends on how the PC Manufacturer handles its local multiple negotiations: whether it replies to the Store Agent first or waits until all other related negotiations have been settled. However, the ordering of negotiations depends on the negotiation deadline of Order Computer, which should be provided by the Store Agent. The negotiation deadline of Order Computer for the PC Manufacturer is actually decided by the negotiation duration of Order Computer for the Store Agent: how much time the Store Agent would like to spend on the negotiation over Order Computer is its duration, and it also determines the negotiation deadline for the PC Manufacturer. Now the question arises: how should an agent decide how much time to spend on each negotiation, given that this actually affects the other agents' negotiation decisions? The original model does not handle this question, since it assumes the negotiation duration δ(v) is known. Here we propose three different approaches to handle this issue.
1. same-deadline policy. Use the same negotiation deadline for all related negotiations, which means allocating all available time to all negotiations:

δ(v) = total available time

For example, if the negotiation deadline for Purchase Computer is 20, the Store Agent will tell the PC Manufacturer to reply by 20 for Order Computer (ignoring the communication delay). This strategy allows
every negotiation to have the largest possible duration; however, it also eliminates the possibility of performing negotiations in sequence - all negotiations need to be performed in parallel because the total available time is the same as the duration of each negotiation.
2. meta-info-deadline policy. Allocate time for each negotiation according to the meta-level information transferred in the pre-negotiation phase. A more complicated negotiation, which involves further negotiations, should be allocated additional time. For example, the PC Manufacturer allocates a duration of 12 for the negotiation Order Hardware, and a duration of 4 for Deliver Computer. The reason is that the negotiation with the Distribution Center about Order Hardware is more complicated because it involves further negotiations between the Distribution Center and other agents. In our implementation, we use the following procedure to decide the negotiation duration δ(v):

if (negCount(v) >= 3)        // three or more additional negotiations needed
    δ(v) = (negCount(v) − 1) × basic_neg_cycle
else if (negCount(v) > 0)    // one or two additional negotiations needed
    δ(v) = 2 × basic_neg_cycle
else                         // no additional negotiation
    δ(v) = basic_neg_cycle + 1

basic_neg_cycle represents the minimum time needed for a negotiation cycle (proposal-think-reply), which is 3 in our system setting, including communication delay. One additional time unit is allocated for the simplest negotiation because it allows the agent to perform a more complicated reasoning process while thinking. Again, the structure of this procedure was selected according to experience, and it could be learned and adjusted by agents dynamically.
3. evenly-divided-deadline policy. Evenly divide the available time among the n related negotiations:

δ(v) = total available time / n

For example, if the current time is 0 and the negotiation deadline for Order Computer is 21, given two other related negotiations, Order Hardware and Deliver Computer, each negotiation is allocated a duration of 7.
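The three policies can be sketched as below; basic_neg_cycle = 3 follows the system setting stated above, while the grouping of negotiations and the negCount values in the example dictionary are illustrative.

```python
# Sketch of the three duration-allocation policies of Section 4.2.
# The `related` mapping (negotiation name -> negCount) is an
# illustrative encoding, not the paper's data structure.
BASIC_NEG_CYCLE = 3  # minimum proposal-think-reply cycle, incl. communication delay

def same_deadline(total_time, related):
    # Policy 1: every negotiation gets all available time (all in parallel).
    return {v: total_time for v in related}

def meta_info_deadline(related):
    # Policy 2: allocate time according to negCount from pre-negotiation.
    out = {}
    for v, count in related.items():
        if count >= 3:            # three or more additional negotiations needed
            out[v] = (count - 1) * BASIC_NEG_CYCLE
        elif count > 0:           # one or two additional negotiations needed
            out[v] = 2 * BASIC_NEG_CYCLE
        else:                     # no additional negotiation
            out[v] = BASIC_NEG_CYCLE + 1
    return out

def evenly_divided_deadline(total_time, related):
    # Policy 3: split the available time evenly among the n negotiations.
    return {v: total_time / len(related) for v in related}

related = {"Order Hardware": 3, "Deliver Computer": 0, "Order Computer": 5}
print(same_deadline(21, related))           # every duration = 21
print(meta_info_deadline(related))          # 6, 4, 12
print(evenly_divided_deadline(21, related)) # 7.0 each
```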
Intuitively, we feel that strategy 1 may not be a good one, because performing all negotiations in parallel increases the risk of decommitment and hence also the decommitment penalties. However, it is not very clear how strategies 2 and 3 perform, and we discuss experimental results in Section 5.
5. EXPERIMENTS
To verify and evaluate the mechanisms presented for the negotiation chain problem, we implemented the scenario described in Figure 1. New tasks were randomly generated with decommitment penalty rate p ∈ [0, 1], early finish reward rate e ∈ [0, 0.3], and deadline dl ∈ [10, 60] (this range allows different flexibilities for the sub-contracted tasks), and arrived at the Store Agent periodically. We performed two sets of experiments to study how the success probability functions and negotiation deadlines affect the negotiation outcome, the agents' utilities and the system's overall utility.

Table 2: Parameter Values Without/With Meta-level Information
                  fixed-flex  meta-info-flex
negotiation       pbs         pbs    c
Order Computer    0.95        1.0    0
Order Memory (1)  0.95        0.79   1
Order Hardware    0.95        1.0    2
Deliver Computer  0.95        1.0    1
Deliver Hardware  0.95        1.0    5
Order Chips       0.95        1.0    1
Order Memory (2)  0.95        0.76   1

[Figure 6: Different Flexibility Policies]
In this experiment, agents need to make decisions on negotiation ordering and feature assignment for multiple attributes, including earliest start time, deadline, promised finish time, and the attributes of the negotiation itself. To focus on the study of flexibility, the regular rewards for each type of task are fixed and not under negotiation. Here we only describe how agents handle the negotiation durations and negotiation deadlines, because these are the attributes affected by the pre-negotiation phase. All other
attributes involved in negotiation are handled according to how they affect the feasibility of the local schedule (time-related attributes), how they affect the negotiation success probability (time- and cost-related attributes), and how they affect the expected utility. A search algorithm [10] and a set of partial-order scheduling algorithms are used to handle these attributes. We tried two different flexibility policies:
1. fixed-flexibility policy: the agent uses a fixed value as the success probability (ps(v) = pbs(v)), according to its local knowledge and estimation.
2. meta-info-flexibility policy: the agent uses the function ps(v) = pbs(v) × (2/π) × arctan(f(v) + c) to model the success probability. It also adjusts the parameters pbs(v) and c according to the meta-level information obtained in the pre-negotiation phase, as described in Section 4.
Table 2 shows the values of these parameters for some negotiations. Figure 6 shows the results of this experiment. This set of experiments includes 10 system runs, and each run is for 1000 simulated time units. In the first 200 time units, agents learn about the task characteristics, which are used to calculate the conflict probabilities Pc_ij. At time 200, agents perform meta-level information communication, and in the next 800 time units, agents use the meta-level information in their local reasoning processes. The data was collected over the 800 time units after the pre-negotiation phase [2].
[Figure 7: Different Negotiation Deadline Policies]
One Purchase Computer task is generated every 20 time units, and two Purchase Memory tasks are generated every 20 time units. The deadline for task Purchase Computer is randomly generated in the range [30, 60]; the deadline for task Purchase Memory is in the range [10, 30]. The decommitment penalty rate is randomly generated in the range [0, 1]. This setting creates multiple concurrent negotiation chain situations; there is one
long chain: Customer - Store - PC Manufacturer - Distribution Center - Producers - Transporter, and two short chains: Customer - Store - Memory Producer. This demonstrates that the mechanism is capable of handling multiple concurrent negotiation chains. All agents perform better in this example (gain more utility) when they use the meta-level information to adjust their local control through the parameters in the success probability function (meta-info-flex policy). Especially for the agents in the middle of the negotiation chain, such as the PC Manufacturer and the Distribution Center, the flexibility policy makes a significant difference. When an agent has a better understanding of the global negotiation scenario, it is able to allocate more flexibility to those tasks that involve complicated negotiations and resource contention. Therefore, the success probability increases and fewer tasks are rejected or canceled (90% of the tasks were successfully negotiated when using meta-level information, compared to 39% when no pre-negotiation was used), resulting in both the agents and the system achieving better performance.
In the second set of experiments, we compare the three negotiation deadline policies described in Section 4.2, all using the meta-info flexibility policy described above. The initial result shows that the same-deadline policy and the meta-info-deadline policy perform almost the same when the system workload is moderate: tasks can be accommodated given sufficient flexibility. In this situation, with either policy, most negotiations are successful and there are few decommitment occurrences, so the ordering of negotiations does not make much difference. Therefore, in this second set of experiments, we increased the number of new tasks generated to raise the average workload in the system. One Purchase Computer task is generated every 15 time units, three Purchase Memory tasks are generated
every 15 time units, and one Deliver Gift task (directly from the Customer to the Transporter) is generated every 10 time units. This setup generates a higher level of system workload, which results in some tasks not being completed no matter what negotiation ordering is used. In this situation, we found that the meta-info-deadline policy performs much better than the same-deadline policy (see Figure 7). When an agent uses the same-deadline policy, all negotiations have to be performed in parallel. In the case that one negotiation fails, all related tasks have to be canceled, and the agent needs to pay multiple decommitment penalties. When the agent uses the meta-info-deadline policy, complicated negotiations are allocated more time and, correspondingly, simpler negotiations are allocated less time. This also has the effect of allowing some negotiations to be performed in sequence. The consequence of sequencing negotiations is that, if there is a failure, an agent can simply cancel the other related negotiations that have not yet started. In this way, the agent does not have to pay decommitment penalties for those canceled negotiations, because no commitment has been established yet. The evenly-divided-deadline policy performs much worse than the meta-info-deadline policy: the agent allocates negotiation time evenly among the related negotiations, hence the complicated negotiations do not get enough time to complete. The above experimental results show that the meta-level information transferred among agents during the pre-negotiation phase is critical in building a more accurate model of
[2] We only measure the utility collected after the learning phase because the learning phase is relatively short compared to the evaluation phase; also, during the learning phase no meta-level information is used, so some of the policies would be invalid.
the negotiation problem. The reasoning process based on this more accurate model produces an efficient negotiation solution, which improves the agents' and the system's overall utility significantly. This conclusion holds for environments where the system faces a moderately heavy load and tasks have relatively tight deadlines (our experimental setup produces such an environment); efficient negotiation is especially important in such environments.
6. RELATED WORK
Fatima, Wooldridge and Jennings [1] studied multiple issues in negotiation in terms of the agenda and negotiation procedure. However, this work is limited in that it only considers a single agent's perspective, without any understanding that the agent may be part of a negotiation chain. Mailler and Lesser [4] have presented an approach to a distributed resource allocation problem where the negotiation chain scenario occurs. It models the negotiation problem as a distributed constraint optimization problem (DCOP), and a cooperative mediation mechanism is used to centralize relevant portions of the DCOP. In our work, the negotiation involves more complicated issues such as reward, penalty and utility; also, we adopt a distributed approach where no centralized control is needed. A mediator-based partially centralized approach has been applied to the coordination and scheduling of complex task networks [8], which differs from our work in that the system is completely cooperative and the individual utility of a single agent is not considered at all. A combinatorial auction [2, 9] could be another approach to solving the negotiation chain problem. However, in a combinatorial auction the agent does not reason about the ordering of negotiations, which would lead to a problem similar to those we discussed when the same-deadline policy is used.
7. CONCLUSION AND FUTURE WORK
In this paper, we have solved negotiation chain problems by extending our multi-linked negotiation model from the
perspective of a single agent to multiple agents. Instead of solving the negotiation chain problem in a centralized manner, we adopt a distributed approach where each agent has an extended local model and decision-making process. We have introduced a pre-negotiation phase that allows agents to transfer meta-level information on related negotiation issues. Using this information, an agent can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability. This more accurate model helps the agent choose an appropriate negotiation solution. The experimental data shows that these mechanisms improve the agents' and the system's overall performance significantly. In future extensions of this work, we would like to develop mechanisms to verify how reliable the agents are. We also recognize that the current approach of applying the meta-level information is mainly heuristic, so we would like to develop a learning mechanism that enables an agent to learn, from previous experience, how to use such information to adjust its local model. To further verify the distributed approach, we would like to develop a centralized approach, so we can evaluate how good the solution from the distributed approach is compared to the optimal solution found by the centralized approach.
8. REFERENCES
[1] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Optimal negotiation strategies for agents with incomplete information. In Revised Papers from the 8th International Workshop on Intelligent Agents VIII, pages 377-392. Springer-Verlag, 2002.
[2] L. Hunsberger and B. J. Grosz. A combinatorial auction for collaborative planning. In Proceedings of the Fourth International Conference on Multi-Agent Systems (ICMAS-2000), 2000.
[3] N. R. Jennings, P. Faratin, T. J. Norman, P. O'Brien, B. Odgers, and J. L.
Alty.\nImplementing a business process management system using ADEPT: A real-world case study.\nInt. Journal of Applied Artificial Intelligence, 2000.\n[4] R. Mailler and V. Lesser.\nA Cooperative Mediation-Based Protocol for Dynamic, Distributed Resource Allocation.\nIEEE Transactions on Systems, Man, and Cybernetics, Part C, Special Issue on Game-theoretic Analysis and Stochastic Simulation of Negotiation Agents, 2004.\n[5] T. J. Norman, A. Preece, S. Chalmers, N. R. Jennings, M. Luck, V. D. Dang, T. D. Nguyen, V. Deora, J. Shao, A. Gray, and N. Fiddian.\nAgent-based formation of virtual organisations.\nInt. J. Knowledge Based Systems, 17(2-4):103-111, 2004.\n[6] T. Sandholm and V. Lesser.\nIssues in automated negotiation and electronic commerce: Extending the contract net framework.\nIn Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), pages 328-335, 1995.\n[7] J. Shen, X. Zhang, and V. Lesser.\nDegree of Local Cooperation and its Implication on Global Utility.\nIn Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004), July 2004.\n[8] M. Sims, H. Mostafa, B. Horling, H. Zhang, V. Lesser, and D. Corkill.\nLateral and Hierarchical Partial Centralization for Distributed Coordination and Scheduling of Complex Hierarchical Task Networks.\nIn AAAI 2006 Spring Symposium on Distributed Plan and Schedule Management, 2006.\n[9] W. Walsh, M. Wellman, and F. Ygge.\nCombinatorial auctions for supply chain formation.\nIn Second ACM Conference on Electronic Commerce, 2000.\n[10] X. Zhang, V. Lesser, and S. Abdallah.\nEfficient management of multi-linked negotiation based on a formalized model.\nAutonomous Agents and Multi-Agent Systems, 10(2):165-205, 2005.\n[11] X. Zhang, V. Lesser, and T.
Wagner.\nIntegrative negotiation among agents situated in organizations.\nIEEE Transactions on Systems, Man, and Cybernetics, Part C, Special Issue on Game-theoretic Analysis and Stochastic Simulation of Negotiation Agents, 36(1):19-30, January 2006.\n[12] Q. Zheng and X. Zhang.\nAutomatic formation and analysis of multi-agent virtual organization.\nJournal of the Brazilian Computer Society: Special Issue on Agents Organizations, 11(1):74-89, July 2005.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 57","lvl-3":"Meta-Level Coordination for Solving Negotiation Chains in Semi-Cooperative Multi-Agent Systems\nABSTRACT\nA negotiation chain is formed when multiple related negotiations are spread over multiple agents.\nIn order to appropriately order and structure the negotiations occurring in the chain so as to optimize the expected utility, we present an extension to a single-agent concurrent negotiation framework.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nWe introduce a pre-negotiation phase that allows agents to transfer meta-level information.\nUsing this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability.\nThis more accurate model helps the agent in choosing a better negotiation solution in the global negotiation chain context.\nThe agent can also use this information to allocate appropriate time for each negotiation, and hence to find a good ordering of all related negotiations.\nThe experimental data shows that these mechanisms improve the agents' and the system's overall performance significantly.\n1.\nINTRODUCTION\nSophisticated negotiation for task and resource allocation is crucial for the next
generation of multi-agent systems (MAS) applications.\nGroups of agents need to efficiently negotiate over multiple related issues concurrently in a complex, distributed setting where there are deadlines by which the negotiations must be completed.\nThis is an important research area where there has been very little work done.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nThere is no single global goal in such systems, either because each agent represents a different organization\/user, or because it is difficult\/impossible to design one single global goal.\nThis issue arises due to multiple concurrent tasks, resource constraints and uncertainties, and thus no agent has sufficient knowledge or computational resources to determine what is best for the whole system [11].\nAn example of such a system would be a virtual organization [12] (i.e.
a supply chain) dynamically formed in an electronic marketplace such as the one developed by the CONOISE project [5].\nTo accomplish tasks continuously arriving in the virtual organization, cooperation and sub-task relocation are needed and preferred.\nThere is no single global goal since each agent may be involved in multiple virtual organizations.\nMeanwhile, the performance of each individual agent is tightly related to other agents' cooperation and the virtual organization's overall performance.\nThe negotiation in such systems is not a zero-sum game; a deal that increases both agents' utilities can be found through efficient negotiation.\nAdditionally, there are multiple encounters among agents since new tasks are arriving all the time.\nIn such negotiations, price may or may not be important, since it may be fixed by a long-term contract.\nOther factors like quality and delivery time are important too.\nReputation mechanisms in the system make cheating unattractive from a long-term viewpoint due to the multiple encounters among agents.\nIn such systems, agents are self-interested because they primarily focus on their own goals; but they are also semi-cooperative, meaning they are willing to be truthful and collaborate with other agents to find solutions that are beneficial to all participants, including themselves, though an agent will not voluntarily sacrifice its own utility in exchange for others' benefits.\nAnother major difference between this work and other work on negotiation is that negotiation, here, is not viewed as a stand-alone process.\nRather, it is one part of the agent's activity which is tightly interleaved with the planning, scheduling and execution of the agent's activities, which also may relate to other negotiations.\nBased on this recognition, this work on negotiation is concerned more with the meta-level decision-making process in negotiation than with the basic protocols or languages.\nThe goal of this research is to develop a set of
macro-strategies that allow the agents to effectively manage multiple related negotiations, including, but not limited to, the following issues: how much time should be spent on each negotiation, how much flexibility (see the formal definition in Formula 3) should be allocated for each negotiation, and in what order the negotiations should be performed.\nThese macro-strategies are different from the micro-strategies that direct an individual negotiation thread, such as whether the agent should concede and how much it should concede, etc. [3].\nIn this paper we extend a multi-linked negotiation model [10] from a single-agent perspective to a multi-agent perspective, so that a group of agents involved in chains of interrelated negotiations can find nearly-optimal macro negotiation strategies for pursuing their negotiations.\nThe remainder of this paper is structured in the following manner.\nSection 2 describes the basic negotiation process and briefly reviews a single agent's model of multi-linked negotiation.\nSection 3 introduces a complex supply-chain scenario.\nSection 4 details how to solve the problems arising in the negotiation chain.\nSection 5 reports on the experimental work.\nSection 6 discusses related work and Section 7 presents conclusions and areas of future work.\n2.\nBACKGROUND ON MULTI-LINKED NEGOTIATION\nIn this work, the negotiation process between any pair of agents is based on an extended version of the contract net [6]: the initiator agent announces a proposal including multiple features; the responding agent evaluates it and responds with either a yes\/no answer or a counter proposal with some features modified.\nThis process can go back and forth until an agreement is reached or the agents decide to stop.\nIf an agreement is reached and one agent cannot fulfill the commitment, it needs to pay the other party a decommitment penalty as specified in the commitment.\nA negotiation starts with a proposal, which announces that a task (t)
needs to be performed, and includes the following attributes:\n1.\nearliest start time (est): the earliest start time of task t; task t cannot be started before time est.\n2.\ndeadline (dl): the latest finish time of the task; the task needs to be finished before the deadline dl.\n3.\nminimum quality requirement (minq): the task needs to be finished with a quality achievement no less than minq.\n4.\nregular reward (r): if the task is finished as the contract requested, the contractor agent will get reward r.\n5.\nearly finish reward rate (e): if the contractor agent can finish the task earlier than dl, it will get an extra early finish reward proportional to this rate.\n6.\ndecommitment penalty rate (p): if the contractor agent cannot perform the task as promised in the contract, or if the contractee agent needs to cancel the contract after it has been confirmed, it needs to pay a decommitment penalty (p \u2217 r) to the other agent.\nThe above attributes are also called attributes-in-negotiation; they are the features of the subject (issue) to be negotiated, and they are domain-dependent.\nAnother type of attribute1 is the attribute-of-negotiation, which describes the negotiation process itself and is domain-independent, such as: 1These attributes are similar to those used in project management; however, the multi-linked negotiation problem cannot be reduced to a project management problem or a scheduling problem.\nThe multi-linked negotiation problem has two dimensions: the negotiations, and the subjects of negotiations.\nThe negotiations are interrelated and the subjects are interrelated; the attributes of negotiations and the attributes of the subjects are interrelated as well.\nThis two-dimensional complexity of interrelationships distinguishes it from the classic project management problem or scheduling problem, where all tasks to be scheduled are local tasks and no negotiation is needed.\n1.\nnegotiation duration (\u03b4 (v)): the maximum time allowed for
negotiation v to complete, either reaching an agreed-upon proposal (success) or no agreement (failure).\n2.\nnegotiation start time (\u03b1 (v)): the start time of negotiation v. \u03b1 (v) is an attribute that needs to be decided by the agent.\n3.\nnegotiation deadline (e (v)): negotiation v needs to be finished before this deadline e (v).\nThe negotiation is no longer valid after time e (v), which is the same as a failure outcome of this negotiation.\n4.\nsuccess probability (ps (v)): the probability that v is successful.\nIt depends on a set of attributes, including both attributes-in-negotiation (i.e. reward, flexibility, etc.) and attributes-of-negotiation (i.e. negotiation start time, negotiation deadline, etc.).\nAn agent involved in multiple related negotiation processes needs to reason about how to manage these negotiations in terms of ordering them and choosing appropriate values for their features.\nThis is the multi-linked negotiation problem [10].\nA relation operator (\u03c1 (v)) describes the relationship between negotiation v and its children.\nThis relation operator has two possible values: AND and OR.\nThe AND relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires that all of its child nodes be successfully accomplished.\nThe OR relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires that at least one child node be successfully accomplished; the multiple child nodes represent alternatives for accomplishing the same goal.\nThe multi-linked negotiation problem is a local optimization problem.\nTo solve a multi-linked negotiation problem is to find a negotiation solution (\u03c6, \u03d5) with optimized expected utility EU (\u03c6, \u03d5), which is defined as:\nEU (\u03c6, \u03d5) = \u2211i P (\u03c7i, \u03d5) \u00b7 (R (\u03c7i, \u03d5) - C (\u03c7i, \u03c6, \u03d5))\nA negotiation ordering \u03c6 defines a partial order of all negotiation issues.\nA feature assignment \u03d5 is a mapping function that assigns a value to each attribute that needs to be decided in the
negotiation.\nA negotiation outcome \u03c7 for a set of negotiations {vj}, (j = 1,..., n) specifies the result for each negotiation, either success or failure.\nThere are a total of 2^n different outcomes for n negotiations: {\u03c7i}, (i = 1,..., 2^n).\nP (\u03c7i, \u03d5) denotes the probability of the outcome \u03c7i given the feature assignment \u03d5, which is calculated based on the success probability of each negotiation.\nR (\u03c7i, \u03d5) denotes the agent's utility increase given the outcome \u03c7i and the feature assignment \u03d5, and C (\u03c7i, \u03c6, \u03d5) is the sum of the decommitment penalties of those negotiations that are successful but need to be abandoned because of the failure of other directly related negotiations; these directly related negotiations are performed concurrently with or after this negotiation according to the negotiation ordering \u03c6.\nFigure 1: A Complex Negotiation Chain Scenario\nA heuristic search algorithm [10] has been developed to solve the single agent's multi-linked negotiation problem that produces nearly-optimal solutions.\nThis algorithm is used as the core of the decision-making for each individual agent in the negotiation chain scenario.\nIn the rest of the paper, we present our work on how to improve the local solution of a single agent in the global negotiation chain context.\n3.\nNEGOTIATION CHAIN PROBLEM\n4.\nNEW MECHANISM - META-LEVEL COORDINATION\n4.1 Flexibility and Success Probability\n4.2 Negotiation Duration and Deadline\n5.\nEXPERIMENTS\n
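To make the expected-utility definition concrete, the following sketch enumerates the 2^n outcomes and sums P(χi) · (R(χi) − C(χi)) for a deliberately simplified case: independent negotiations, an AND relationship over all of them, a fixed reward, and a fixed per-negotiation decommitment penalty. The function and parameter names are illustrative, not from the paper.

```python
from itertools import product

def expected_utility(negotiations, reward, penalty):
    """Enumerate all 2^n success/failure outcomes chi_i and sum
    P(chi_i) * (R(chi_i) - C(chi_i)), as in the multi-linked
    negotiation model (simplified: independent negotiations and an
    AND relationship over all of them)."""
    n = len(negotiations)  # each entry is a success probability p_s(v)
    eu = 0.0
    for outcome in product([True, False], repeat=n):  # one chi_i per iteration
        # P(chi_i): product of per-negotiation success/failure probabilities
        p = 1.0
        for ps, success in zip(negotiations, outcome):
            p *= ps if success else (1.0 - ps)
        if all(outcome):
            # AND satisfied: full reward, no decommitment penalty
            r, c = reward, 0.0
        else:
            # Overall failure: successful negotiations must be abandoned,
            # each paying the decommitment penalty
            r = 0.0
            c = penalty * sum(outcome)
        eu += p * (r - c)
    return eu

# Two negotiations with success probabilities 0.9 and 0.8,
# task reward 10, decommitment penalty 2 per abandoned agreement.
print(expected_utility([0.9, 0.8], reward=10.0, penalty=2.0))
```

The enumeration is exponential in n, which is why the paper relies on a heuristic search rather than exhaustive evaluation for larger negotiation sets.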
6.\nRELATED WORK\nFatima, Wooldridge and Jennings [1] studied multiple issues in negotiation in terms of the agenda and negotiation procedure.\nHowever, this work is limited because it considers only a single agent's perspective, without any recognition that the agent may be part of a negotiation chain.\nMailler and Lesser [4] have presented an approach to a distributed resource allocation problem where the negotiation chain scenario occurs.\nIt models the negotiation problem as a distributed constraint optimization problem (DCOP), and a cooperative mediation mechanism is used to centralize relevant portions of the DCOP.\nIn our work, the negotiation involves more complicated issues such as reward, penalty and utility; also, we adopt a distributed approach where no centralized control is needed.\nA mediator-based partially centralized approach has been applied to the coordination and scheduling of complex task networks [8]; it differs from our work in that the system is completely cooperative and the individual utility of a single agent is not considered at all.\nA combinatorial auction [2, 9] could be another approach to solving the negotiation chain problem.\nHowever, in a combinatorial auction, the agent does not reason about the ordering of negotiations.\nThis would lead to a problem similar to those we discussed when the same-deadline policy is used.\n7.\nCONCLUSION AND FUTURE WORK In this paper, we have solved negotiation chain problems by extending our multi-linked negotiation model from the perspective of a single agent to multiple agents.\nInstead of solving the negotiation chain problem in a centralized manner, we adopt a distributed approach where each agent has an extended local model and decision-making process.\nWe have introduced a pre-negotiation phase that allows agents to transfer meta-level information on related negotiation issues.\nUsing this information, the agent
can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability.\nThis more accurate model helps the agent in choosing the appropriate negotiation solution.\nThe experimental data shows that these mechanisms improve the agent's and the system's overall performance significantly.\nIn future extensions of this work, we would like to develop mechanisms to verify how reliable the agents are.\nWe also recognize that the current approach of applying the meta-level information is mainly heuristic, so we would like to develop a learning mechanism that enables the agent to learn how to use such information to adjust its local model based on previous experience.\nTo further verify this distributed approach, we would like to develop a centralized approach, so that we can evaluate how good the solution from the distributed approach is compared to the optimal solution found by the centralized approach.","lvl-4":"Meta-Level Coordination for Solving Negotiation Chains in Semi-Cooperative Multi-Agent Systems\nABSTRACT\nA negotiation chain is formed when multiple related negotiations are spread over multiple agents.\nIn order to appropriately order and structure the negotiations occurring in the chain so as to optimize the expected utility, we present an extension to a single-agent concurrent negotiation framework.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nWe introduce a pre-negotiation phase that allows agents to transfer meta-level information.\nUsing this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability.\nThis more accurate model helps the agent in choosing a better negotiation solution in the
global negotiation chain context.\nThe agent can also use this information to allocate appropriate time for each negotiation, and hence to find a good ordering of all related negotiations.\nThe experimental data shows that these mechanisms improve the agents' and the system's overall performance significantly.\n1.\nINTRODUCTION\nSophisticated negotiation for task and resource allocation is crucial for the next generation of multi-agent systems (MAS) applications.\nGroups of agents need to efficiently negotiate over multiple related issues concurrently in a complex, distributed setting where there are deadlines by which the negotiations must be completed.\nThis is an important research area where there has been very little work done.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nThere is no single global goal in such systems, either because each agent represents a different organization\/user, or because it is difficult\/impossible to design one single global goal.\nThis issue arises due to multiple concurrent tasks, resource constraints and uncertainties, and thus no agent has sufficient knowledge or computational resources to determine what is best for the whole system [11].\nTo accomplish tasks continuously arriving in the virtual organization, cooperation and sub-task relocation are needed and preferred.\nThere is no single global goal since each agent may be involved in multiple virtual organizations.\nMeanwhile, the performance of each individual agent is tightly related to other agents' cooperation and the virtual organization's overall performance.\nThe negotiation in such systems is not a zero-sum game; a deal that increases both agents' utilities can be found through efficient negotiation.\nAdditionally, there are multiple encounters
among agents since new tasks are arriving all the time.\nIn such negotiations, price may or may not be important, since it may be fixed by a long-term contract.\nOther factors like quality and delivery time are important too.\nReputation mechanisms in the system make cheating unattractive from a long-term viewpoint due to the multiple encounters among agents.\nAnother major difference between this work and other work on negotiation is that negotiation, here, is not viewed as a stand-alone process.\nRather, it is one part of the agent's activity which is tightly interleaved with the planning, scheduling and execution of the agent's activities, which also may relate to other negotiations.\nBased on this recognition, this work on negotiation is concerned more with the meta-level decision-making process in negotiation than with the basic protocols or languages.\nThese macro-strategies are different from the micro-strategies that direct an individual negotiation thread, such as whether the agent should concede and how much it should concede, etc. [3].\nIn this paper we extend a multi-linked negotiation model [10] from a single-agent perspective to a multi-agent perspective, so that a group of agents involved in chains of interrelated negotiations can find nearly-optimal macro negotiation strategies for pursuing their negotiations.\nSection 2 describes the basic negotiation process and briefly reviews a single agent's model of multi-linked negotiation.\nSection 3 introduces a complex supply-chain scenario.\nSection 4 details how to solve the problems arising in the negotiation chain.\nSection 5 reports on the experimental work.\nSection 6 discusses related work and Section 7 presents conclusions and areas of future work.\n2.\nBACKGROUND ON MULTI-LINKED NEGOTIATION\nThis process can go back and forth until an agreement is reached or the agents decide to stop.\nIf an agreement is reached and one agent cannot fulfill
the commitment, it needs to pay the other party a decommitment penalty as specified in the commitment.\nA negotiation starts with a proposal, which announces that a task (t) needs to be performed, and includes the following attributes:\n2.\ndeadline (dl): the latest finish time of the task; the task needs to be finished before the deadline dl.\n3.\nminimum quality requirement (minq): the task needs to be finished with a quality achievement no less than minq.\n4.\nregular reward (r): if the task is finished as the contract requested, the contractor agent will get reward r.\n5.\nearly finish reward rate (e): if the contractor agent can finish the task earlier than dl, it will get an extra early finish reward proportional to this rate.\nThe multi-linked negotiation problem has two dimensions: the negotiations, and the subjects of negotiations.\nThe negotiations are interrelated and the subjects are interrelated; the attributes of negotiations and the attributes of the subjects are interrelated as well.\nThis two-dimensional complexity of interrelationships distinguishes it from the classic project management problem or scheduling problem, where all tasks to be scheduled are local tasks and no negotiation is needed.\n1.\nnegotiation duration (\u03b4 (v)): the maximum time allowed for negotiation v to complete, either reaching an agreed-upon proposal (success) or no agreement (failure).\n2.\nnegotiation start time (\u03b1 (v)): the start time of negotiation v. \u03b1 (v) is an attribute that needs to be decided by the agent.\n3.\nnegotiation deadline (e (v)): negotiation v needs to be finished before this deadline e (v).\nThe negotiation is no longer valid after time e (v), which is the same as a failure outcome of this negotiation.\n4.\nsuccess probability (ps (v)): the probability that v is successful.\nIt depends on a set of attributes, including both attributes-in-negotiation (i.e. reward, flexibility, etc.) and attributes-of-negotiation (i.e.
negotiation start time, negotiation deadline, etc.).\nAn agent involved in multiple related negotiation processes needs to reason about how to manage these negotiations in terms of ordering them and choosing appropriate values for their features.\nThis is the multi-linked negotiation problem [10].\nA relation operator (\u03c1 (v)) describes the relationship between negotiation v and its children.\nThe AND relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires that all of its child nodes be successfully accomplished.\nThe OR relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires that at least one child node be successfully accomplished; the multiple child nodes represent alternatives for accomplishing the same goal.\nThe multi-linked negotiation problem is a local optimization problem.\nTo solve a multi-linked negotiation problem is to find a negotiation solution (\u03c6, \u03d5) with optimized expected utility EU (\u03c6, \u03d5), which is defined as:\nEU (\u03c6, \u03d5) = \u2211i P (\u03c7i, \u03d5) \u00b7 (R (\u03c7i, \u03d5) - C (\u03c7i, \u03c6, \u03d5))\nA negotiation ordering \u03c6 defines a partial order of all negotiation issues.\nA feature assignment \u03d5 is a mapping function that assigns a value to each attribute that needs to be decided in the negotiation.\nA negotiation outcome \u03c7 for a set of negotiations {vj}, (j = 1,..., n) specifies the result for each negotiation, either success or failure.\nThere are a total of 2^n different outcomes for n negotiations: {\u03c7i}, (i = 1,..., 2^n).\nP (\u03c7i, \u03d5) denotes the probability of the outcome \u03c7i given the feature assignment \u03d5, which is calculated based on the success probability of each negotiation.\n
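The AND/OR relation operator ρ(v) described above can be illustrated with a small recursive sketch. The class and field names are assumptions for illustration only, and child negotiations are treated as mutually independent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Negotiation:
    """A node in a negotiation tree; `relation` ('AND' or 'OR') plays
    the role of the relation operator rho(v). Names are illustrative,
    not taken from the paper."""
    ps: float                 # local success probability p_s(v)
    relation: str = "AND"
    children: List["Negotiation"] = field(default_factory=list)

def success_probability(v: Negotiation) -> float:
    """Probability that the commitment on v is accomplished, assuming
    independent children: AND requires every child to succeed,
    OR requires at least one child to succeed."""
    if not v.children:
        return v.ps
    child_ps = [success_probability(c) for c in v.children]
    if v.relation == "AND":
        combined = 1.0
        for p in child_ps:
            combined *= p            # all children must succeed
    else:
        combined = 1.0
        for p in child_ps:
            combined *= (1.0 - p)    # P(all children fail)
        combined = 1.0 - combined    # OR: at least one succeeds
    return v.ps * combined

# An AND node whose own negotiation succeeds with probability 0.9,
# with two subcontracted negotiations at 0.8 and 0.7.
root = Negotiation(ps=0.9, relation="AND",
                   children=[Negotiation(ps=0.8), Negotiation(ps=0.7)])
print(success_probability(root))
```

Multiplying the node's own p_s(v) by the combined child probability is one simple way to model a commitment that needs both its own negotiation and its children to succeed; the paper's model derives p_s(v) from attributes such as flexibility rather than treating it as a fixed constant.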
Figure 1: A Complex Negotiation Chain Scenario\nA heuristic search algorithm [10] has been developed to solve the single agent's multi-linked negotiation problem that produces nearly-optimal solutions.\nThis algorithm is used as the core of the decision-making for each individual agent in the negotiation chain scenario.\nIn the rest of the paper, we present our work on how to improve the local solution of a single agent in the global negotiation chain context.\n6.\nRELATED WORK\nFatima, Wooldridge and Jennings [1] studied multiple issues in negotiation in terms of the agenda and negotiation procedure.\nHowever, this work is limited because it considers only a single agent's perspective, without any recognition that the agent may be part of a negotiation chain.\nMailler and Lesser [4] have presented an approach to a distributed resource allocation problem where the negotiation chain scenario occurs.\nIt models the negotiation problem as a distributed constraint optimization problem (DCOP), and a cooperative mediation mechanism is used to centralize relevant portions of the DCOP.\nIn our work, the negotiation involves more complicated issues such as reward, penalty and utility; also, we adopt a distributed approach where no centralized control is needed.\nA mediator-based partially centralized approach has been applied to the coordination and scheduling of complex task networks [8]; it differs from our work in that the system is completely cooperative and the individual utility of a single agent is not considered at all.\nA combinatorial auction [2, 9] could be another approach to solving the negotiation chain problem.\nHowever, in a combinatorial auction, the agent does not reason about the ordering of negotiations.\nThis would lead to a problem similar to those we discussed when the same-deadline policy is used.\n7.\nCONCLUSION AND FUTURE WORK In this paper, we have solved negotiation chain problems by extending our multi-linked negotiation
model from the perspective of a single agent to multiple agents.\nInstead of solving the negotiation chain problem in a centralized manner, we adopt a distributed approach where each agent has an extended local model and decision-making process.\nWe have introduced a pre-negotiation phase that allows agents to transfer meta-level information on related negotiation issues.\nUsing this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability.\nThis more accurate model helps the agent in choosing the appropriate negotiation solution.\nThe experimental data shows that these mechanisms improve the agent's and the system's overall performance significantly.\nIn future extensions of this work, we would like to develop mechanisms to verify how reliable the agents are.","lvl-2":"Meta-Level Coordination for Solving Negotiation Chains in Semi-Cooperative Multi-Agent Systems\nABSTRACT\nA negotiation chain is formed when multiple related negotiations are spread over multiple agents.\nIn order to appropriately order and structure the negotiations occurring in the chain so as to optimize the expected utility, we present an extension to a single-agent concurrent negotiation framework.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nWe introduce a pre-negotiation phase that allows agents to transfer meta-level information.\nUsing this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability.\nThis more accurate model helps the agent in choosing a better negotiation solution in the global negotiation chain context.\nThe agent can also use this information to allocate
appropriate time for each negotiation, and hence to find a good ordering of all related negotiations.\nThe experimental data shows that these mechanisms improve the agents' and the system's overall performance significantly.\n1.\nINTRODUCTION\nSophisticated negotiation for task and resource allocation is crucial for the next generation of multi-agent systems (MAS) applications.\nGroups of agents need to efficiently negotiate over multiple related issues concurrently in a complex, distributed setting where there are deadlines by which the negotiations must be completed.\nThis is an important research area where there has been very little work done.\nThis work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance.\nThere is no single global goal in such systems, either because each agent represents a different organization\/user, or because it is difficult\/impossible to design one single global goal.\nThis issue arises due to multiple concurrent tasks, resource constraints and uncertainties, and thus no agent has sufficient knowledge or computational resources to determine what is best for the whole system [11].\nAn example of such a system would be a virtual organization [12] (i.e.
a supply chain) dynamically formed in an electronic marketplace such as the one developed by the CONOISE project [5].\nTo accomplish tasks continuously arriving in the virtual organization, cooperation and sub-task relocation are needed and preferred.\nThere is no single global goal since each agent may be involved in multiple virtual organizations.\nMeanwhile, the performance of each individual agent is tightly related to other agents' cooperation and the virtual organization's overall performance.\nThe negotiation in such systems is not a zero-sum game; a deal that increases both agents' utilities can be found through efficient negotiation.\nAdditionally, there are multiple encounters among agents since new tasks are arriving all the time.\nIn such negotiations, price may or may not be important, since it may be fixed by a long-term contract.\nOther factors like quality and delivery time are important too.\nReputation mechanisms in the system make cheating unattractive from a long-term viewpoint due to the multiple encounters among agents.\nIn such systems, agents are self-interested because they primarily focus on their own goals; but they are also semi-cooperative, meaning they are willing to be truthful and collaborate with other agents to find solutions that are beneficial to all participants, including themselves, though an agent will not voluntarily sacrifice its own utility in exchange for others' benefits.\nAnother major difference between this work and other work on negotiation is that negotiation, here, is not viewed as a stand-alone process.\nRather, it is one part of the agent's activity which is tightly interleaved with the planning, scheduling and execution of the agent's activities, which also may relate to other negotiations.\nBased on this recognition, this work on negotiation is concerned more with the meta-level decision-making process in negotiation than with the basic protocols or languages.\nThe goal of this research is to develop a set of
macro-strategies that allow agents to effectively manage multiple related negotiations, including, but not limited to, the following issues: how much time should be spent on each negotiation, how much flexibility (see the formal definition in Formula 3) should be allocated to each negotiation, and in what order the negotiations should be performed.\nThese macro-strategies are different from the micro-strategies that direct an individual negotiation thread, such as whether the agent should concede and how much it should concede [3].\nIn this paper we extend a multi-linked negotiation model [10] from a single-agent perspective to a multi-agent perspective, so that a group of agents involved in chains of interrelated negotiations can find nearly-optimal macro negotiation strategies for pursuing their negotiations.\nThe remainder of this paper is structured as follows.\nSection 2 describes the basic negotiation process and briefly reviews a single agent's model of multi-linked negotiation.\nSection 3 introduces a complex supply-chain scenario.\nSection 4 details how to solve the problems arising in the negotiation chain.\nSection 5 reports on the experimental work.\nSection 6 discusses related work, and Section 7 presents conclusions and areas of future work.\n2.\nBACKGROUND ON MULTI-LINKED NEGOTIATION\nIn this work, the negotiation process between any pair of agents is based on an extended version of the contract net [6]: the initiator agent announces a proposal including multiple features; the responding agent evaluates it and responds with either a yes/no answer or a counter-proposal with some features modified.\nThis process can go back and forth until an agreement is reached or the agents decide to stop.\nIf an agreement is reached and one agent cannot fulfill the commitment, it needs to pay the other party a decommitment penalty as specified in the commitment.\nA negotiation starts with a proposal, which announces that a task (t)
needs to be performed and includes the following attributes:\n1.\nearliest start time (est): the earliest start time of task t; task t cannot be started before time est.\n2.\ndeadline (dl): the latest finish time of the task; the task needs to be finished before the deadline dl.\n3.\nminimum quality requirement (minq): the task needs to be finished with a quality achievement no less than minq.\n4.\nregular reward (r): if the task is finished as the contract requested, the contractor agent will get reward r.\n5.\nearly finish reward rate (e): if the contractor agent can finish the task earlier than dl, it will get an extra early finish reward proportional to this rate.\n6.\ndecommitment penalty rate (p): if the contractor agent cannot perform the task as promised in the contract, or if the contractee agent needs to cancel the contract after it has been confirmed, the responsible agent needs to pay a decommitment penalty (p ∗ r) to the other agent.\nThe above attributes are also called attributes-in-negotiation; they are features of the subject (issue) to be negotiated, and they are domain-dependent.\nAnother type of attribute1 is the attribute-of-negotiation, which describes the negotiation process itself and is domain-independent, such as:\n1These attributes are similar to those used in project management; however, the multi-linked negotiation problem cannot be reduced to a project management problem or a scheduling problem.\nThe multi-linked negotiation problem has two dimensions: the negotiations and the subjects of the negotiations.\nThe negotiations are interrelated and the subjects are interrelated; the attributes of the negotiations and the attributes of the subjects are interrelated as well.\nThis two-dimensional complexity of interrelationships distinguishes it from the classic project management or scheduling problem, where all tasks to be scheduled are local tasks and no negotiation is needed.\n1.\nnegotiation duration (δ(v)): the maximum time allowed for
negotiation v to complete, either reaching an agreed-upon proposal (success) or no agreement (failure).\n2.\nnegotiation start time (α(v)): the start time of negotiation v; α(v) is an attribute that needs to be decided by the agent.\n3.\nnegotiation deadline (ε(v)): negotiation v needs to be finished before this deadline ε(v).\nThe negotiation is no longer valid after time ε(v), which is the same as a failure outcome of this negotiation.\n4.\nsuccess probability (ps(v)): the probability that v is successful.\nIt depends on a set of attributes, including both attributes-in-negotiation (e.g. reward, flexibility) and attributes-of-negotiation (e.g. negotiation start time, negotiation deadline).\nAn agent involved in multiple related negotiation processes needs to reason about how to manage these negotiations, in terms of ordering them and choosing appropriate values for their features.\nThis is the multi-linked negotiation problem [10].\nEach negotiation v is also associated with a relation operator ρ(v), which describes the relationship between negotiation v and its children.\nThis relation operator has two possible values: AND and OR.\nThe AND relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires all its children nodes to be successfully accomplished.\nThe OR relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires at least one child node to be successfully accomplished, where the multiple children nodes represent alternative ways to accomplish the same goal.\nThe multi-linked negotiation problem is a local optimization problem.\nTo solve a multi-linked negotiation problem is to find a negotiation solution (φ, ϕ) with optimized expected utility EU(φ, ϕ), which is defined as:\nEU(φ, ϕ) = Σ_{i=1}^{2^n} P(χ_i, ϕ) · [R(χ_i, ϕ) − C(χ_i, φ, ϕ)]\nA negotiation ordering φ defines a partial order of all negotiation issues.\nA feature assignment ϕ is a mapping function that assigns a value to each attribute that needs to be decided in the
negotiation.\nA negotiation outcome χ for a set of negotiations {v_j} (j = 1, …, n) specifies the result for each negotiation, either success or failure.\nThere are a total of 2^n different outcomes for n negotiations: {χ_i} (i = 1, …, 2^n).\nP(χ_i, ϕ) denotes the probability of the outcome χ_i given the feature assignment ϕ, which is calculated based on the success probability of each negotiation.\nR(χ_i, ϕ) denotes the agent's utility increase given the outcome χ_i and the feature assignment ϕ, and C(χ_i, φ, ϕ) is the sum of the decommitment penalties of those negotiations that are successful but need to be abandoned because of the failure of other directly related negotiations; these directly related negotiations are performed concurrently with, or after, this negotiation according to the negotiation ordering φ.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nFigure 1: A Complex Negotiation Chain Scenario\nA heuristic search algorithm [10] has been developed to solve the single agent's multi-linked negotiation problem; it produces nearly-optimal solutions.\nThis algorithm is used as the core of the decision-making for each individual agent in the negotiation chain scenario.\nIn the rest of the paper, we present our work on how to improve the local solution of a single agent in the global negotiation chain context.\n3.\nNEGOTIATION CHAIN PROBLEM\nThe negotiation chain problem occurs in a multi-agent system where each agent represents an individual, a company, or an organization, and there is no absolute authority in the system.\nEach agent has its own utility function for defining the implications of achieving its goals.\nThe agent is designed to optimize its expected utility given its limited information, computational, and communication resources.\nDynamic tasks arrive to individual agents, with most tasks requiring the coordination of multiple
agents.\nEach agent has the scheduling and planning ability to manage its local activities; some of these activities are related to other agents' activities.\nNegotiation is used to coordinate the scheduling of these mutually related activities.\nThe negotiation is tightly connected with the agent's local scheduling/planning processes and is also related to other negotiations.\nAn agent may be involved in multiple related negotiations with multiple other agents, and each of the other agents may be involved in related negotiations with others too.\nFigure 1 describes a complex negotiation chain scenario.\nThe Store, the PC Manufacturer, the Memory Producer, and the Distribution Center are all involved in multi-linked negotiation problems.\nFigure 2 shows a distributed model of part of the negotiation chain described in Figure 1.\nEach agent has a local optimization problem - the multi-linked negotiation problem (represented as an and-or tree) - which can be solved using the model and procedures described in Section 2.\nHowever, the local optimal solution may not be optimal in the global context, given that the local model is neither complete nor accurate.\nThe dashed line in Figure 2 represents the connection of these local optimization problems through the common negotiation subject.\nA negotiation chain problem O is a group of tightly-coupled local optimization problems: O = {O_1, O_2, …, O_n}, where O_i denotes the local optimization problem (multi-linked negotiation problem) of agent A_i.\nAgent A_i's local optimal solution S_lo_i maximizes the expected local utility based on incomplete information and assumptions about other agents' local strategies; we denote such incomplete information and imperfect assumptions of agent A_i as I_i.\nDifferent information and assumptions lead to different local optimal solutions {S_blo_i^1}, …, {S_blo_i^m}.\nSome of them may be dominated by others.\nA set of better local optimal solutions {S_blo_i} that is not dominated by any other is called best local optimal.\nIf a set of best local optimal solutions {S_blo_i} dominates all others, {S_blo_i}
is called globally local optimal.\nHowever, sometimes the globally local optimal set does not exist; instead, there exist multiple sets of best local optimal solutions.\nEven if the globally local optimal solution does exist in theory, finding it may not be realistic: given that the agents are making decisions concurrently, constructing perfect local information and assumptions about other agents (I_i) in this dynamic environment is very difficult and sometimes even impossible.\nThe goal of this work is to improve each agent's local model about other agents (I_i) through meta-level coordination.\nAs I_i becomes more accurate, the agent's local optimal solution to its local multi-linked negotiation problem becomes a better local optimal solution in the context of the global negotiation chain problem.\nWe are not arguing that this statement is universally valid in all situations, but our experimental work shows that the sum of the agents' utilities in the system improved by 95% on average when meta-level coordination was used to improve each agent's local model I_i.\nIn this work, we focus on improving the agent's local model in two directions.\nOne direction is to build a better function to describe the relationship between the success probability of a negotiation and the flexibility allocated to it.\nThe other direction is to find how to allocate time more efficiently for each negotiation in the negotiation chain context.\n4.\nNEW MECHANISM - META-LEVEL COORDINATION\nIn order for an agent to obtain a better local model of other agents in the negotiation chain context, we introduce a pre-negotiation phase into the local negotiation process.\nDuring the pre-negotiation phase, agents communicate with other agents who have task contracting relationships with them; they transfer meta-level information before they decide how and when to do the negotiations.\nEach agent tells other agents what types of tasks it will ask
them to perform, and the probability distributions of some parameters of those tasks, e.g. the earliest start times and the deadlines.\nWhen these probability distributions are not available directly, agents can learn such information from their past experience.\nIn the experiment described later, such distribution information is learned rather than being directly told by other agents.\nSpecifically, each agent provides the following information to other related agents:\n• Whether additional negotiation is needed in order to make a decision on the contracting task, and if so, how many more negotiations are needed.\nnegCount represents the total number of additional negotiations needed for a task, including the additional negotiations needed for its subtasks that happen among other agents.\nIn a negotiation chain situation, this information is propagated and updated through the chain until every agent has accurate information.\nFigure 2: Distributed Model of Negotiation Chains\nLet subNeg(T) be a set of subtasks of task T that require additional negotiations; then we have:\nnegCount(T) = |subNeg(T)| + Σ_{t ∈ subNeg(T)} negCount(t)\nFor example, in the scenario described in Figure 1, for the Distribution Center, task Order Hardware consists of three subtasks that need additional negotiations with other agents: Order Chips, Order Memory, and Deliver Hardware.\nHowever, no further negotiations are needed for other agents to make decisions on these subtasks, hence the negCount for these subtasks is 0.\nThe following information is sent to the PC Manufacturer by the Distribution Center: negCount(Order Hardware) = 3.\nFor the PC Manufacturer, task Order Computer contains two subtasks that require additional negotiations: Deliver Computer and Order Hardware.\nWhen the PC Manufacturer receives the message from the Distribution Center, it updates its local information: negCount(Order Computer) = 2 + negCount(Order Hardware) + negCount(Deliver Computer) = 5, and sends the updated information to the Store Agent.\n•
Whether there are other tasks competing with this task, and what the likelihood of conflict is.\nConflict means that, given all constraints, the agent cannot accomplish all tasks on time and needs to reject some tasks.\nThe likelihood of conflict Pc_ij between a task of type i and another task of type j is calculated based on the statistical model of each task's parameters, including earliest start time (est), deadline (dl), task duration (dur), and slack time (sl), using a formula from [7].\nWhen there are more than two types of tasks, the likelihood of no conflict between task i and the rest of the tasks is calculated as: PnoConflict(i) = Π_{j=1, j≠i}^{n} (1 − Pc_ij).\nFor example, the Memory Producer tells the Distribution Center about the task Order Memory: its local decision does not involve additional negotiation with other agents (negCount = 0); however, there is another task from the Store Agent that competes with this task, so the likelihood of no conflict is 0.5 (PnoConflict = 0.5).\nOn the other hand, the CPU Producer tells the Distribution Center about the task Order Chips: its local decision does not involve additional negotiation with other agents, and there are no other tasks competing with this task (PnoConflict = 1.0) given the current environment setting.\nBased on the above information, the Distribution Center knows that task Order Memory needs more flexibility than task Order Chips in order to be successful in negotiation.\nMeanwhile, the Distribution Center tells the PC Manufacturer that task Order Hardware involves further negotiation with other agents (negCount = 3), and that its local decision depends on other agents' decisions.\nThis piece of information helps the PC Manufacturer allocate appropriate flexibility for task Order Hardware in negotiation.\nFigure 3: Task Structures of the PC Manufacturer and the Distribution Center\nIn this work, we introduce a short period for agents to learn the characteristics of those incoming
tasks, including est, dl, dur, and sl, which are used to calculate Pc_ij and PnoConflict for the meta-level coordination.\nDuring system performance, agents continually monitor these characteristics.\nAn updated message will be sent to related agents when there is a significant change in the meta-level information.\nNext we describe how the agent uses the meta-level information transferred during the pre-negotiation phase.\nThis information is used to improve the agent's local model; more specifically, it is used in the agent's local decision-making process by affecting the values of some features.\nIn particular, we are concerned with two features that have strong implications for the agent's macro strategy for the multi-linked negotiations, and hence also significantly affect the performance of a negotiation chain.\nThe first is the amount of flexibility specified in the negotiation parameters.\nThe second feature we explore is the time allocated for the negotiation process to complete.\nThe time allocated for each negotiation affects the possible ordering of the negotiations, and it also affects the negotiation outcome.\nDetails are discussed in the following sections.\n4.1 Flexibility and Success Probability\nAgents not only need to deal with complex negotiation problems; they also need to handle their own local scheduling and planning processes, which are interleaved with the negotiation process.\nFigure 3 shows the local task structures of the PC Manufacturer and the Distribution Center.\nSome of these tasks can be performed locally by the PC Manufacturer, such as Get Software and Install Software, while other tasks (non-local tasks), such as Order Hardware and Deliver Computer, need to be performed by other agents.\nThe PC Manufacturer needs to negotiate with the Distribution Center and the Transporter about whether they can perform these tasks, and if so, when and how they will perform them.\nWhen the PC Manufacturer negotiates with other
agents about a non-local task, it needs to have the other agents' arrangements fit into its local schedule.\nSince the PC Manufacturer is dealing with multiple non-local tasks simultaneously, it also needs to ensure that the commitments on these non-local tasks are consistent with each other.\nFor example, the deadline of task Order Hardware cannot be later than the start time of task Deliver Computer.\nFigure 4: A Sample Local Schedule of the PC Manufacturer\nFigure 4 shows a sample local schedule of the PC Manufacturer.\nAccording to this schedule, as long as task Order Hardware is performed during time [11, 28] and task Deliver Computer is performed during time [34, 40], there exists a feasible schedule for all tasks, and task Order Computer can be finished by time 40, which is the deadline promised to the Customer.\nThe time ranges allocated for task Order Hardware and task Deliver Computer are called consistent ranges; the negotiations on these tasks can be performed independently within these ranges without worrying about conflict.\nNotice that each task should be allocated a time range that is large enough to accommodate the estimated task process time.\nThe larger the range is, the more likely the negotiation will succeed, because it is easier for the other agent to find a local schedule for this task.\nThe question, then, is how big this time range should be.\nWe define a quantitative measure called flexibility.\nGiven a task t, suppose the allocated time range for t is [est, dl], where est is the earliest start time and dl stands for the deadline; then:\nf(t) = (dl − est − process_time(t)) / process_time(t) (3)\nFlexibility is an important attribute because it directly affects the possible outcome of the negotiation.\nThe success probability of a negotiation can be described as a function of the flexibility.\nIn this work, we adopt the following formula for the success probability function based on the flexibility of the negotiation issue:\nps(v) = pbs(v) · (2/π) · arctan(f(v) + c) (4)\nThis function describes a phenomenon where initially the likelihood of a successful
negotiation increases significantly as the flexibility grows, and then levels off afterward, which mirrors our experience from previous experiments.\npbs is the basic success probability of negotiation v when the flexibility f(v) is very large.\nc is a parameter used to adjust the relationship.\nDifferent function patterns result from different parameter values, as shown in Figure 5.\nThis function describes the agent's assumption about how the other agent involved in this negotiation would respond to this particular negotiation request when it has flexibility f(v).\nThis function is part of the agent's local model of other agents.\nTo improve the accuracy of this function and make it closer to reality, the agent adjusts these two values according to the meta-level information transferred during the pre-negotiation phase.\nThe value of c depends on whether there is further negotiation involved and whether there are other tasks competing with this task for common resources.\nIf so, more flexibility is needed for this issue, and hence c should be assigned a smaller value.\nIn our implementation, a simple procedure is used to calculate c based on the meta-level information negCount and PnoConflict.\nIt works as follows: when there is no other competing task, c depends on the number of additional negotiations needed.\nThe more additional negotiations that are needed, the smaller the value of c, and hence the more flexibility that will be assigned to this issue to ensure negotiation success.\nIf no more negotiation is needed, c is assigned a large number Clarge, meaning that less flexibility is needed for this issue.\nWhen there are other competing tasks, c is assigned a small number Csmall, meaning that more flexibility is needed for this issue.\nFigure 5: Different Success Probability Functions\nIn our experimental work, we set Clarge to 5 and Csmall to 1.\nThese values were selected according to our experience; however, a more practical
approach would be to have agents learn and dynamically adjust these values.\nThis is also part of our future work.\npbs is calculated based on PnoConflict, f(v) (the flexibility of v in the previous negotiation), and c, using the reverse form of equation 4.\nFor example, based on the scenario described above, the agents have the following values for c and pbs based on the meta-level information transferred:\n• PC Manufacturer, Order Hardware: pbs = 1.0, c = 2;\n• Distribution Center, Order Chips: pbs = 1.0, c = 5;\n• Store Agent, Order Memory: pbs = 0.79, c = 1.\nFigure 5 shows the different patterns of the success probability function given different parameter values.\nBased on such patterns, the Store Agent would allocate more flexibility to the task Order Memory to increase the likelihood of success in negotiation.\nIn the agent's further negotiation process, formula 4 with different parameter values is used in reasoning about how much flexibility should be allocated to a certain issue.\nThe pre-negotiation communication occurs before negotiation, but not before every negotiation session.\nAgents only need to communicate when the environment changes, for example, when new types of tasks are generated, the characteristics of tasks change, or the negotiation partner changes, etc.
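The parameter-adjustment step above can be sketched in code. This is a minimal illustration, not the authors' implementation: the exact mapping from negCount to c is not given in the text, so the linear rule below (c = Clarge − negCount, floored at Csmall) is an assumption chosen only because it reproduces the example values quoted above (c = 5 for negCount = 0, c = 2 for negCount = 3, and c = Csmall = 1 when competing tasks exist).

```python
import math

C_LARGE, C_SMALL = 5, 1  # values used in the experiments described above


def steepness_c(neg_count, p_no_conflict):
    """Pick the parameter c of equation 4 from meta-level information.

    Hypothetical rule: competing tasks force c = Csmall; otherwise c
    shrinks linearly as more additional negotiations are involved.
    """
    if p_no_conflict < 1.0:              # other tasks compete for resources
        return C_SMALL
    return max(C_LARGE - neg_count, C_SMALL)


def success_probability(p_bs, flexibility, c):
    """Equation 4: ps(v) = pbs(v) * (2/pi) * arctan(f(v) + c)."""
    return p_bs * (2.0 / math.pi) * math.atan(flexibility + c)
```

For instance, steepness_c(3, 1.0) yields c = 2, matching the PC Manufacturer's Order Hardware negotiation; and since success_probability grows with both flexibility and c, a task with c = Csmall must be given more flexibility to reach the same success probability, which is exactly why the Store Agent allocates more flexibility to Order Memory.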
If no major change happens, the agent can just use the current knowledge from previous communications.\nThe communication and computation overhead of this pre-negotiation mechanism is very small, given the simple information collection procedure and the short messages to be transferred.\nWe will discuss the effect of this mechanism in Section 5.\n4.2 Negotiation Duration and Deadline\nIn the agent's local model, there are two attributes that describe how soon the agent expects the other agent to reply to negotiation v: the negotiation duration δ(v) and the negotiation deadline ε(v).\nTable 1: Examples of negotiations (δ(v): negotiation duration, s.p.: success probability)\nThese two important attributes affect the negotiation solution.\nPart of the negotiation solution is a negotiation ordering φ, which specifies in what order the multiple negotiations should be performed.\nIn order to control the negotiation process, every negotiation should be finished before its negotiation deadline, and the negotiation duration is the time allocated for this negotiation.\nIf a negotiation cannot be finished during the allocated time, the agent has to stop this negotiation and consider it a failure.\nThe decision about the negotiation order depends on the success probability, reward, and decommitment penalty of each negotiation.\nA good negotiation order should reduce the risk of decommitment and hence reduce the decommitment penalty.\nA search algorithm described in [10] has been developed to find such a negotiation order.\nFor example, Table 1 shows some of the negotiations for the Distribution Center and their related attributes.\nGiven enough time (negotiation deadline greater than 16), the best negotiation order is: 4 → 3 → 2 → 1.\nThe most uncertain negotiation (4: Deliver Hardware) is performed first.\nThe negotiation with the highest penalty (1: Order Hardware) is performed after all related negotiations (2, 3, and 4) have been completed so as
to reduce the risk of decommitment.\nIf the negotiation deadline is less than 12 and greater than 8, the following negotiation order is preferred: (4, 3, 2) → 1, which means negotiations 4, 3, and 2 can be performed in parallel, and 1 needs to be performed after them.\nIf the negotiation deadline is less than 8, then all negotiations have to be performed in parallel, because there is no time for sequencing negotiations.\nIn the original single-agent model [10], the negotiation deadline ε(v) is assumed to be given by the agent who initiates the contract.\nThe negotiation duration δ(v) is an estimate, based on experience, of how long the negotiation takes.\nHowever, the situation is not that simple in a negotiation chain problem.\nConsider the following scenario.\nWhen the customer posts a contract for task Purchase Computer, it could require the Store Agent to reply by time 20.\nTime 20 can be considered the negotiation deadline for Purchase Computer.\nWhen the Store Agent negotiates with the PC Manufacturer about Order Computer, what negotiation deadline should it specify?\nHow long the negotiation on Order Computer takes depends on how the PC Manufacturer handles its local multiple negotiations: whether it replies to the Store Agent first or waits until all other related negotiations have been settled.\nHowever, the ordering of negotiations depends on the negotiation deadline on Order Computer, which should be provided by the Store Agent.\nThe negotiation deadline of Order Computer for the PC Manufacturer is actually decided based on the negotiation duration of Order Computer for the Store Agent.\nHow much time the Store Agent would like to spend on the negotiation Order Computer is its duration, which also determines the negotiation deadline for the PC Manufacturer.\nNow the question arises: how should an agent decide how much time to spend on each negotiation, given that this decision actually affects the other agents' negotiation decisions?\nThe original model
does not handle this question, since it assumes the negotiation duration δ(v) is known.\nHere we propose three different approaches to handle this issue.\n1.\nsame-deadline policy.\nUse the same negotiation deadline for all related negotiations, which means allocating all available time to every negotiation.\nFor example, if the negotiation deadline for Purchase Computer is 20, the Store Agent will tell the PC Manufacturer to reply by 20 for Order Computer (ignoring the communication delay).\nThis strategy allows every negotiation to have the largest possible duration; however, it also eliminates the possibility of performing negotiations in sequence - all negotiations need to be performed in parallel because the total available time is the same as the duration of each negotiation.\n2.\nmeta-info-deadline policy.\nAllocate time for each negotiation according to the meta-level information transferred in the pre-negotiation phase.\nA more complicated negotiation, which involves further negotiations, should be allocated additional time.\nFor example, the PC Manufacturer allocates a duration of 12 for the negotiation Order Hardware, and a duration of 4 for Deliver Computer.\nThe reason is that the negotiation with the Distribution Center about Order Hardware is more complicated because it involves further negotiations between the Distribution Center and other agents.\nIn our implementation, a procedure based on negCount is used to decide the negotiation duration δ(v), where basic_neg_cycle represents the minimum time needed for a negotiation cycle (proposal-think-reply), which is 3 in our system setting, including communication delay.\nOne additional time unit is allocated for the simplest negotiation because it allows the agent to perform a more complicated reasoning process while thinking.\nAgain, the structure of this procedure was selected according to experience, and it could be learned and adjusted by agents dynamically.\n3.\nevenly-divided-deadline policy.\nEvenly divide the
available time among the n related negotiations: δ(v) = (total available time) / n.\nFor example, if the current time is 0 and the negotiation deadline for Order Computer is 21, given two other related negotiations (Order Hardware and Deliver Computer), each negotiation is allocated a duration of 7.\nIntuitively, we feel that policy 1 may not be a good one, because performing all negotiations in parallel increases the risk of decommitment and hence also the decommitment penalties.\nHowever, it is not clear how policies 2 and 3 perform, and we discuss experimental results in Section 5.\n5.\nEXPERIMENTS\nTo verify and evaluate the mechanisms presented for the negotiation chain problem, we implemented the scenario described in Figure 1.\nNew tasks were randomly generated with decommitment penalty rate p ∈ [0, 1], early finish reward rate e ∈ [0, 0.3], and deadline dl ∈ [10, 60] (this range allows different flexibilities to be available for the sub-contracted tasks), and arrived at the Store Agent periodically.\nWe performed two sets of experiments to study
how the success probability functions and negotiation deadlines affect the negotiation outcomes, the agents' utilities, and the system's overall utility.\nIn this experiment, agents need to make decisions on negotiation ordering and feature assignment for multiple attributes, including earliest start time, deadline, promised finish time, and the attributes-of-negotiation.\nTo focus on the study of flexibility, in this experiment the regular rewards for each type of task are fixed and not under negotiation.\nHere we only describe how agents handle the negotiation durations and negotiation deadlines, because they are the attributes affected by the pre-negotiation phase.\nAll other attributes involved in negotiation are handled according to how they affect the feasibility of the local schedule (time-related attributes), how they affect the negotiation success probability (time- and cost-related attributes), and how they affect the expected utility.\nA search algorithm [10] and a set of partial-order scheduling algorithms are used to handle these attributes.\nWe tried two different flexibility policies.\n1.\nfixed-flexibility policy: the agent uses a fixed value as the success probability (ps(v) = pbs(v)), according to its local knowledge and estimation.\n2.\nmeta-info-flexibility policy: the agent uses the function ps(v) = pbs(v) · (2/π) · arctan(f(v) + c) to model the success probability.\nIt also adjusts the parameters (pbs(v) and c) according to the meta-level information obtained in the pre-negotiation phase, as described in Section 4.\nTable 2 shows the values of these parameters for some negotiations.\nFigure 6: Different Flexibility Policies\nFigure 6 shows the results of this experiment.\nThis set of experiments includes 10 system runs, and each run is for 1000 simulated time units.\nIn the first 200 time units, agents are learning about the task characteristics, which will be
used to calculate the conflict probabilities pc_ij.\nAt time 200, agents perform meta-level information communication, and in the next 800 time units, agents use the meta-level information in their local reasoning process.\nThe data was collected over the 800 time units after the pre-negotiation phase 2.\n[Figure 7: Different Negotiation Deadline Policies]\nOne Purchase Computer task is generated every 20 time units, and two Purchase Memory tasks are generated every 20 time units.\nThe deadline for task Purchase Computer is randomly generated in the range [30, 60]; the deadline for task Purchase Memory is in the range [10, 30].\nThe decommitment penalty rate is randomly generated in the range [0, 1].\nThis setting creates multiple concurrent negotiation chain situations; there is one long chain:\nThis demonstrates that this mechanism is capable of handling multiple concurrent negotiation chains.\nAll agents perform better in this example (gain more utility) when they use the meta-level information to adjust their local control through the parameters in the success probability function (the meta-info-flexibility policy).\nEspecially for those agents in the middle of the negotiation chain, such as the PC Manufacturer and the Distribution Center, the flexibility policy makes a significant difference.\nWhen the agent has a better understanding of the global negotiation scenario, it is able to allocate more flexibility to those tasks that involve complicated negotiations and resource contention.\nTherefore, the success probability increases and fewer tasks are rejected or canceled (90% of the tasks were successfully negotiated when using meta-level information, compared to 39% when no pre-negotiation is used), resulting in both the agent and the system achieving better performance.\nIn the second set of experiments, we compare the three negotiation deadline policies described in Section 4.2 when using the meta-info-flexibility policy described 
above.\nThe initial result shows that the same-deadline policy and the meta-info-deadline policy perform almost the same when the system workload is moderate and tasks can be accommodated given sufficient flexibility.\nIn this situation, with either policy, most negotiations are successful and there are few decommitments, so the ordering of negotiations does not make much difference.\nTherefore, in this second set of experiments, we increase the number of new tasks generated to raise the average workload in the system.\nOne Purchase Computer task is generated every 15 time units, three Purchase Memory tasks are generated every 15 time units, and one Deliver Gift task (directly from the customer to the Transporter) is generated every 10 time units.\n(Footnote 2: We only measure the utility collected after the learning phase because the learning phase is relatively short compared to the evaluation phase; also, during the learning phase no meta-level information is used, so some of the policies are invalid.)\nThis setup generates a higher level of system workload, which results in some tasks not being completed no matter what negotiation ordering is used.\nIn this situation, we found that the meta-info-deadline policy performs much better than the same-deadline policy (see Figure 7).\nWhen an agent uses the same-deadline policy, all negotiations have to be performed in parallel.\nIn the case that one negotiation fails, all related tasks have to be canceled, and the agent needs to pay multiple decommitment penalties.\nWhen the agent uses the meta-info-deadline policy, complicated negotiations are allocated more time and, correspondingly, simpler negotiations are allocated less time.\nThis also has the effect of allowing some negotiations to be performed in sequence.\nThe consequence of sequencing negotiations is that, if there is a failure, an agent can simply 
cancel the other related negotiations that have not yet started.\nIn this way, the agent does not have to pay a decommitment penalty for those canceled negotiations, because no commitment has been established yet.\nThe evenly-divided-deadline policy performs much worse than the meta-info-deadline policy.\nIn the evenly-divided-deadline policy, the agent allocates negotiation time evenly among the related negotiations, so a complicated negotiation does not get enough time to complete.\nThe above experimental results show that the meta-level information transferred among agents during the pre-negotiation phase is critical in building a more accurate model of the negotiation problem.\nThe reasoning process based on this more accurate model produces an efficient negotiation solution, which improves the agent's and the system's overall utility significantly.\nThis conclusion holds for environments where the system faces a moderately heavy load and tasks have relatively tight deadlines (our experimental setup produces such an environment); efficient negotiation is especially important in such environments.\n6.\nRELATED WORK\nFatima, Wooldridge and Jennings [1] studied multiple issues in negotiation in terms of the agenda and negotiation procedure.\nHowever, this work is limited in that it considers only a single agent's perspective, without any notion that the agent may be part of a negotiation chain.\nMailler and Lesser [4] have presented an approach to a distributed resource allocation problem where the negotiation chain scenario occurs.\nIt models the negotiation problem as a distributed constraint optimization problem (DCOP), and a cooperative mediation mechanism is used to centralize relevant portions of the DCOP.\nIn our work, the negotiation involves more complicated issues such as reward, penalty, and utility; also, we adopt a distributed approach where no centralized control is needed.\nA mediator-based, partially centralized approach has been 
applied to the coordination and scheduling of complex task networks [8]; this differs from our work in that the system is completely cooperative and the individual utility of a single agent is not considered at all.\nA combinatorial auction [2, 9] could be another approach to solving the negotiation chain problem.\nHowever, in a combinatorial auction the agent does not reason about the ordering of negotiations.\nThis would lead to a problem similar to those we discussed when the same-deadline policy is used.\n7.\nCONCLUSION AND FUTURE WORK In this paper, we have solved negotiation chain problems by extending our multi-linked negotiation model from the perspective of a single agent to multiple agents.\nInstead of solving the negotiation chain problem in a centralized manner, we adopt a distributed approach where each agent has an extended local model and decision-making process.\nWe have introduced a pre-negotiation phase that allows agents to transfer meta-level information on related negotiation issues.\nUsing this information, an agent can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability.\nThis more accurate model helps the agent in choosing the appropriate negotiation solution.\nThe experimental data show that these mechanisms improve the agent's and the system's overall performance significantly.\nIn future extensions of this work, we would like to develop mechanisms to verify how reliable the agents are.\nWe also recognize that the current approach to applying the meta-level information is mainly heuristic, so we would like to develop a learning mechanism that enables an agent to learn from previous experience how to use such information to adjust its local model.\nTo further verify this distributed approach, we would like to develop a centralized approach, so we can evaluate how good the solution from the distributed approach is compared to the optimal solution found by the 
centralized approach.","keyphrases":["negoti chain","multipl agent","agent","negoti framework","pre-negoti","flexibl","semi-cooper multi-agent system","multi-link negoti","distribut set","multipl concurr task","virtual organ","sub-task reloc","reput mechan","complex suppli-chain scenario"],"prmu":["P","P","P","P","P","P","M","M","U","M","U","U","M","U"]} {"id":"I-16","title":"An Advanced Bidding Agent for Advertisement Selection on Public Displays","abstract":"In this paper we present an advanced bidding agent that participates in first-price sealed bid auctions to allocate advertising space on BluScreen -- an experimental public advertisement system that detects users through the presence of their Bluetooth enabled devices. Our bidding agent is able to build probabilistic models of both the behaviour of users who view the adverts, and the auctions that it participates within. It then uses these models to maximise the exposure that its adverts receive. We evaluate the effectiveness of this bidding agent through simulation against a range of alternative selection mechanisms including a simple bidding strategy, random allocation, and a centralised optimal allocation with perfect foresight. Our bidding agent significantly outperforms both the simple bidding strategy and the random allocation, and in a mixed population of agents it is able to expose its adverts to 25% more users than the simple bidding strategy. Moreover, its performance is within 7.5% of that of the centralised optimal allocation despite the highly uncertain environment in which it must operate.","lvl-1":"An Advanced Bidding Agent for Advertisement Selection on Public Displays Alex Rogers1 , Esther David2 , Terry R. Payne1 and Nicholas R. 
Jennings1 1 Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, UK.\n{acr,trp,nrj}@ecs.soton.ac.uk 2 Ashkelon College, Ashkelon, Israel.\nastrdod@ash-college.ac.il ABSTRACT In this paper we present an advanced bidding agent that participates in first-price sealed bid auctions to allocate advertising space on BluScreen - an experimental public advertisement system that detects users through the presence of their Bluetooth enabled devices.\nOur bidding agent is able to build probabilistic models of both the behaviour of users who view the adverts, and the auctions that it participates within.\nIt then uses these models to maximise the exposure that its adverts receive.\nWe evaluate the effectiveness of this bidding agent through simulation against a range of alternative selection mechanisms including a simple bidding strategy, random allocation, and a centralised optimal allocation with perfect foresight.\nOur bidding agent significantly outperforms both the simple bidding strategy and the random allocation, and in a mixed population of agents it is able to expose its adverts to 25% more users than the simple bidding strategy.\nMoreover, its performance is within 7.5% of that of the centralised optimal allocation despite the highly uncertain environment in which it must operate.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Intelligent agents General Terms Algorithms, Design, Theory 1.\nINTRODUCTION Electronic displays are increasingly being used within public environments, such as airports, city centres and retail stores, in order to advertise commercial products, or to entertain and inform passersby.\nRecently, researchers have begun to investigate how the content of such displays may be varied dynamically over time in order to increase its variety, relevance and exposure [9].\nParticular research attention has focused on the need to take into account the dynamic nature of the display's 
audience, and to this end, a number of interactive public displays have been proposed.\nThese displays have typically addressed the needs of a closed set of known users with pre-defined interests and requirements, and have facilitated communication with these users through the active use of handheld devices such as PDAs or phones [3, 7].\nAs such, these systems assume prior knowledge about the target audience, and require either that a single user has exclusive access to the display, or that users carry specific tracking devices so that their presence can be identified [6, 11].\nHowever, these approaches fail to work in public spaces, where no prior knowledge regarding the users who may view the display exists, and where such displays need to react to the presence of several users simultaneously.\nBy contrast, Payne et al. have developed an intelligent public display system, named BluScreen, that detects and tracks users through the Bluetooth enabled devices that they carry with them every day [8].\nWithin this system, a decentralised multi-agent auction mechanism is used to efficiently allocate advertising time on each public display.\nEach advert is represented by an individual advertising agent that maintains a history of users who have already been exposed to the advert.\nThis agent then seeks to acquire advertising cycles (during which it can display its advert on the public displays) by submitting bids to a marketplace agent that implements a sealed-bid auction.\nThe value of these bids is based upon the number of users who are currently present in front of the screen, the history of these users, and an externally derived estimate of the value of exposing an advert to a user.\nIn this paper, we present an advanced bidding agent that significantly extends the sophistication of this approach.\nIn particular, we consider the more general setting in which it is impossible to determine an a priori valuation for exposing an advert to a user.\nThis is likely to be the 
case for BluScreen installations within private organisations where the items being advertised are forthcoming events or news items of interest to employees and visitors, and thus have no direct monetary value (indeed, in this case bidding is likely to be conducted in some virtual currency).\nIn addition, it is also likely to be the case within new commercial installations where limited market experience makes estimating a valuation impossible.\nIn both cases, it is more appropriate to assume that an advertising agent will be assigned a total advertising budget, and that it will have a limited period of time in which to spend this budget (particularly so where the adverts are for forthcoming events).\nThe advertising agent is then simply tasked with using this budget to maximum effect (i.e. to achieve the maximum possible advert exposure within this time period).\nNow, in order to achieve this goal, the advertising agent must be capable of modelling the behaviour of the users in order to predict the number who will be present in any future advertising cycle.\nIn addition, it must also understand the auction environment in which it competes, in order that it may make best use of its limited budget.\n978-81-904262-7-5 (RPS) (c) 2007 IFAAMAS\nThus, in developing an advanced bidding agent that achieves this, we advance the state of the art in four key ways: 1.\nWe enable the advertising agents to model the arrival and departure of users as independent Poisson processes, and to make maximum likelihood estimates of the rates of these processes based on their observations.\nWe show how these agents can then calculate the expected number of users who will be present during any future advertising cycle.\n2.\nUsing a decision theoretic approach we enable the advertising agents to model the probability of winning any given auction when a specific amount is bid.\nThe cumulative form of the gamma distribution is used to represent this probability, and its parameters are fitted 
using observations of both the closing price of previous auctions, and the bids that the advertising agent itself submits.\n3.\nWe show that our explicit assumption that the advertising agent derives no additional benefit from showing an advert to a single user more than once causes the expected utility of each future advertising cycle to be dependent on the expected outcome of all the auctions that precede it.\nWe thus present a stochastic optimisation algorithm based upon simulated annealing that enables the advertising agent to calculate the optimal sequence of bids that maximises its expected utility.\n4.\nFinally, we demonstrate that this advanced bidding strategy outperforms a simple strategy with none of these features (within a heterogeneous population the advertising agents who use the advanced bidding strategy are able to expose their adverts to 25% more users than those using the simple bidding strategy), and we show that its performance is within 7.5% of that of a centralised optimiser with perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.\nThe remainder of this paper is organised as follows: Section 2 discusses related work where agents and auction-based marketplaces are used to allocate advertising space.\nSection 3 describes the prototype BluScreen system that motivates our work.\nIn section 4 we present a detailed description of the auction allocation mechanism, and in section 5 we describe our advanced bidding strategy for the advertising agents.\nIn section 6 we present an empirical validation of our approach, and finally, we conclude in section 7.\n2.\nRELATED WORK The commercial attractiveness of targeted advertising has been amply demonstrated on the internet, where recommendation systems and contextual banner adverts are the norm [1].\nThese systems typically select content based upon prior knowledge of the individual viewing the material, and such systems work well on personal devices where the 
owner's preferences and interests can be gathered and cached locally, or within interactive environments which utilise some form of credential to identify the user (e.g. e-commerce sites such as Amazon.com).\nAttempts to apply these approaches within the real world have been much more limited.\nGerding et al. present a simulated system (CASy) whereby a Vickrey auction mechanism is used to sell advertising space within a modelled electronic shopping mall [2].\nThe auction is used to rank a set of possible advertisements provided by different retail outlets, and the top-ranking advertisements are selected for presentation on public displays.\nFeedback is provided through subsequent sales information, allowing the model to build up a profile of a user's preferences.\n[Figure 1: A deployed BluScreen prototype.]\nHowever, unlike the BluScreen system that we consider here, it is not suitable for advertising to many individuals simultaneously, as it requires explicit interaction with a single user to acquire the user's preferences.\nBy contrast, McCarthy et al. 
have presented a prototype implementation of a system (GroupCast) that attempts to respond to a group of individuals by assuming a priori profiles of several members of the audience [7].\nUser identification is based on infrared badges and embedded sensors within an office environment.\nWhen several users pass by the display, a centralised system compares the users' profiles to identify common areas of interest, and content that matches this common interest is shown.\nThus, whilst CASy is a simulated system that allows advertisers to compete for the attention of a single user, GroupCast is a prototype system that detects the presence of groups of users and selects content to match their profiles.\nDespite their similarities, neither system addresses the setting that interests us here: how to allocate advertising space between competing advertisers who face an audience of multiple individuals about whom there is no a priori profile information.\nThus, in the next section we describe the prototype BluScreen system that motivates our work.\n3.\nTHE BLUSCREEN PROTOTYPE BluScreen is based on the notion of a scalable, extendable advertising framework whereby adverts can be efficiently displayed to as many relevant users as possible, within a knowledge-poor environment.\nTo achieve these goals, several requirements have been identified: 1.\nAdverts should be presented to as diverse an audience as possible, whilst minimising the number of times the advert is presented to any single user.\n2.\nUsers should be identified by existing, ubiquitous consumer devices, so that future deployments within public arenas will not require uptake of new hardware.\n3.\nThe number of displays should be scalable, such that adverts appear on different displays at different times.\n4.\nKnowledge about the observed behaviour and composition of the audience should be exploited to facilitate inference of user interests, which the system can then act upon.\nTo date, a prototype system that 
addresses the first two goals has been demonstrated [8].\nThis system uses a 23-inch flat-screen display deployed within an office environment to advertise events and news items.\nRather than requiring the deployment of specialised hardware, such as active badges (see [11] for details), BluScreen detects the presence of users in the vicinity of each display through the Bluetooth-enabled devices that they carry with them every day1.\n(Footnote 1: Devices must be in discovery mode to be detectable.)\nTable 1: Number of Bluetooth devices observed at different frequencies over a six-month sample period.\nDevice Type | Samples | Unique Devices\nOccasional | < 10 | 135\nFrequent | 10 - 1000 | 70\nPersistent | > 1000 | 6\nThis approach is attractive since the Bluetooth wireless protocol is characterised by its relative maturity, market penetration, and emphasis on short-range communication.\nTable 1 summarises the number of devices detected by this prototype installation over a six month period.\nOf the 212 Bluetooth devices detected, approximately 70 were detected regularly, showing that Bluetooth is a suitable proxy for detecting individuals in front of the screen.\nIn order to achieve a scalable and extendable solution, a multi-agent systems design philosophy is adopted whereby a number of different agent types interact (see figure 2).\nThe interactions of these agents are implemented through a web services protocol2, and they constitute a decentralised marketplace that allocates advertising space in an efficient and timely manner.\nIn more detail, the responsibilities of each agent type are: Bluetooth Device Detection Agent: This agent monitors the environment in the vicinity of a BluScreen display and determines the number and identity of any Bluetooth devices that are close by.\nIt keeps historical records of the arrival and departure of Bluetooth devices, and makes this information available to 
advertising agents as requested.\nMarketplace Agent: This agent facilitates the sale of advertising space to the advertising agents.\nA single marketplace agent represents each BluScreen display, and access to this screen is divided into discrete advertising cycles of fixed duration.\nBefore the start of each advertising cycle, the marketplace agent holds a sealed-bid auction (see section 4 for more details).\nThe winner of this auction is allocated access to the display during the next cycle.\nAdvertising Agent: This agent represents a single advert and is responsible for submitting bids to the marketplace agent in order that it may be allocated advertising cycles, and thus display its advert to users.\nIt interacts with the device detection agent in order to collect information regarding the number and identity of users who are currently in front of the display.\nOn the basis of this information, its past experiences, and its bidding strategy, it calculates the value of the bid that it should submit to the marketplace agent.\n[Figure 2: The BluScreen agent architecture for a single display. 1) Device presence detected; 2) Bids based on predicted future device presence; 3) Winning agent displays advert on the screen.]\nThus, having described the prototype BluScreen system, we next go on to describe the details of the auction mechanism that we consider in this work, and then the advanced bidding agent that bids within this auction.\n4.\nTHE AUCTION MECHANISM As described above, BluScreen is designed to efficiently allocate advertising cycles in a distributed and timely manner.\nThus, one-shot sealed-bid auctions are used as the market mechanism of the marketplace agent.\nIn previous work, each advertising agent was assumed to have an externally derived estimate of the value of exposing an advert to a user.\nUnder this assumption, a second-price sealed-bid auction was shown to be effective, since advertising agents have a simple strategy of truthfully bidding their valuation in each auction [8].\n(Footnote 2: This is implemented on a distributed Mac OS X based system using the Bonjour networking protocol for service discovery.)\nHowever, as described earlier, in this paper we consider the more general setting in which it is impossible to determine an a priori valuation for exposing an advert to a single user.\nThis may be because the BluScreen installation is within a private organisation where what is being advertised (e.g. news items or forthcoming events) has no monetary value, or it may be a new commercial installation where limited market experience makes estimating such a valuation impossible.\nIn the absence of such a valuation, the attractive economic properties of the second-price auction cannot be achieved in practice, and thus, in our work there is no need to limit our attention to the second-price auction.\nIndeed, since these auctions are actually extremely rare within real-world settings [10], in this work we consider the more widely adopted first-price auction, since this increases the applicability of our results.\nThus, in more detail, we consider an instance of a BluScreen installation with a single display screen that is managed by a single marketplace agent3.\nWe consider that access to the display screen is divided into discrete advertising cycles, each of length tc, and a first-price sealed-bid auction is held immediately prior to the start of each advertising cycle.\nThe marketplace agent announces the start and deadline of the auction, and collects sealed bids from each advertising agent.\nAt the closing time of the auction the marketplace agent announces to all participants and observers the amount of the winning bid, and informs the winning advertising agent that it was successful (the identity of the winning advertising agent is not announced 
to all observers).\nIn the case that no bids are placed within any auction, a default advert is displayed.\n(Footnote 3: This assumption of having a single BluScreen instance is made to simplify our task of validating the correctness and the efficiency of the proposed mechanism and strategy; generalising these results to the case of multiple screens is the aim of our future work.)\nHaving described the market mechanism that the marketplace agent implements, we now go on to describe and evaluate an advanced bidding strategy for the advertising agents to adopt.\n5.\nADVANCED BIDDING STRATEGY As described above, we consider the case in which the advertising agents do not have an externally derived estimate of the value of exposing the advert to a single user.\nRather, they have a constrained budget, B, and a limited period of interest during which they wish to display their advert.\nTheir goal is then to find the appropriate amount to bid within each auction in this period, in order to maximise the exposure of their advert.\nIn attempting to achieve this goal the advertising agent is faced with a high level of uncertainty about future events.\nIt will be uncertain of the number of users who will be present during any advertising cycle since, even if the number of users currently present 
closing price, and the success or otherwise of the bids that it itself submitted, to build a probabilistic model of the bid required to win the auction.\nThe agent then uses these two models to calculate its expected utility in each advertising cycle, and in turn, determine the optimal sequence of bids that maximises this utility given its constrained budget.\nHaving calculated this sequence of bids, then the first bid in the sequence is actually used in the auction for the next advertising cycle.\nHowever, at the close of this cycle, the process is repeated with a new optimal sequence of bids being calculated in order take to account of what actually happened in the preceding auction (i.e. whether the bid was successful or not, and how many users arrived or departed).\nThus, in the next three subsections we describe these two probabilistic models, and their application within the bidding strategy of the advertising agent.\n5.1 Predicting the Number of Users In order to predict the number of users that will be present in any future advertising cycle, it is necessary to propose a probabilistic model for the behaviour of the users.\nThus, our advanced bidding strategy assumes that their arrival and departures are determined by two independent Poisson processes4 with arrival rate, \u03bba, and departure rate, \u03bbd.\nThis represents a simple model that is commonly applied within queuing theory5 [5], yet is one that we believe well describes the case where BluScreen displays are placed in communal areas where people meet and congregate.\nGiven the history of users'' arrivals and departures obtained from the device detection agent, the advertising agent makes a maximum likelihood estimation of the values of \u03bba and \u03bbd.\nIn more detail, if the advertising agent has observed n users arriving within a time period t, then the maximum likelihood estimation for the arrival rate \u03bba is simply given by: \u03bba = n t (1) Likewise, if an agent observes n users 
each with durations of stay of t1, t2, ..., tn time periods, then the maximum likelihood estimation for the departure rate λd is given by:\n1/λd = (1/n) Σ(i=1..n) ti (2)\n(Footnote 4: Given a Poisson process with rate parameter λ, the number of events, n, within an interval of time t is given by: P(n) = e^(−λt) (λt)^n / n!. In addition, the probability of having to wait a period of time, t, before the next event is determined by: P(t) = λ e^(−λt).)\n(Footnote 5: Note, however, that in queuing theory it is typically the arrival rate and service times of customers that are modelled as Poisson processes. Our users are not actually modelled as a queue, since the duration of their stay is independent of that of the other users.)\n[Figure 3: Example showing how to predict the number of users who see an advert shown in an advertising cycle of length tc, commencing at time t in the future: (i) the n users currently present; (ii) the λa t users arriving before the cycle; (iii) the λa tc users arriving during the cycle.]\nIn environments where these rates are subject to change, the agent can use a limited time window over which observations are used to estimate these rates.\nAlternatively, in situations where cyclic changes in these rates are likely to occur (i.e. 
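The maximum likelihood estimates of equations (1) and (2) reduce to simple sample statistics; a minimal sketch follows, where the observation values are illustrative, not taken from the paper:

```python
# Maximum-likelihood estimates for the Poisson arrival/departure model:
# eq. (1): lambda_a = n / t for n arrivals observed over a period t;
# eq. (2): 1 / lambda_d is the mean observed duration of stay.

def estimate_arrival_rate(num_arrivals: int, period: float) -> float:
    """Eq. (1): lambda_a = n / t."""
    return num_arrivals / period

def estimate_departure_rate(durations_of_stay: list[float]) -> float:
    """Eq. (2): the reciprocal of lambda_d is the mean duration of stay."""
    mean_stay = sum(durations_of_stay) / len(durations_of_stay)
    return 1.0 / mean_stay

lam_a = estimate_arrival_rate(12, 60.0)           # 12 arrivals in 60 time units
lam_d = estimate_departure_rate([4.0, 6.0, 5.0])  # observed stays (illustrative)
print(lam_a, lam_d)  # both 0.2 per time unit for these example observations
```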
changing arrival and departure rates at different times of the day, as may be seen in areas where commuters pass through), the agent can estimate separate values over each hour-long period.\nHaving estimated the arrival and departure rates of users, and knowing the number of users who are present at the current time, the advertising agent is then able to predict the number of users who are likely to be present in any future advertising cycle6.\nThus, we consider the problem of predicting this number for an advertising cycle of duration tc that starts at a time t in the future, given that n users are currently present (see figure 3).\nThis number is composed of three factors: (i) the fraction of the n users initially present who do not leave in the interval 0 ≤ τ < t before the advertising cycle commences, (ii) users that arrive in the interval 0 ≤ τ < t and are still present when the advertising cycle commences, and finally, (iii) users that arrive during the course of the advertising cycle, t ≤ τ < t + tc.\nNow, considering case (i) above, the probability of one of the n users still being present when the advertising cycle starts is given by ∫_t^∞ λd e^(−λd τ) dτ = e^(−λd t).\nThus we expect n e^(−λd t) of these users to be present.\nIn case (ii), we expect λa t new users to arrive before the advertising cycle commences, and the probability that any of these will still be there when it actually does so is given by (1/t) ∫_0^t e^(−λd (t−τ)) dτ = (1/(λd t)) (1 − e^(−λd t)).\nThus we expect (λa/λd) (1 − e^(−λd t)) of these users to be present.\nFinally, in case (iii) we expect λa tc users to arrive during the course of the advertising cycle.\nThus, the combination of these three factors gives an expression for the expected number of users who will be present within an advertising cycle of length tc, commencing 
Note that as t increases, the result becomes less dependent upon the initial number of users, n. The mean number of users present at any time is simply λa/λd, and the mean number of users exposed to an advert in any advertising cycle is given by λa (tc + 1/λd).

5.2 Predicting the Probability of Winning
In addition to estimating the number of users who will be present in any advertising cycle, an effective bidding agent must also be able to predict the probability of winning an auction given that it submits any specified bid. This is a common problem for bidding agents, and approaches can generally be classified as game theoretic or decision theoretic. Since our advertising agents are unaware of the number and identity of the competing advertising agents, the game theoretic approach is precluded. Thus, we take a decision theoretic approach similar to that adopted within continuous double auctions, where bidding agents estimate the market price of goods by observing transaction prices [4]. Our advertising agent therefore uses a parameterised function to describe the probability of winning the auction given any submitted bid, P(b). This function must have support [0, ∞), since bids must be positive. In addition, we expect it to exhibit an 's'-shaped curve, whereby the probability of winning an auction is small when the submitted bid is very low, close to one when the bid is very high, and there is a transition point that characterises the change from a losing to a winning bid.

6 Note that we do not require a user to be present for the entire advertising cycle in order to be counted as present.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

To this end, we use the cumulative form of the gamma distribution for this
function:

P(b) = γ(k, b/θ) / Γ(k)    (4)

where Γ(k) is the standard gamma function, and γ(k, b/θ) is the lower incomplete gamma function. This function has the necessary properties described above, and has two parameters, k and θ. The transition point where P(b) = 0.5 is given by kθ, and the sharpness of the transition is described by kθ². In figure 4 we show examples of this function for three different values of k and θ.

The advertising agent chooses the most appropriate values of k and θ by fitting the probability function to observations of previous auctions. An observation is a pair {bi, oi} consisting of a bid, bi, and an auction outcome, oi. Each auction generates at least one pair in which bi is equal to the closing price of the auction and oi = 1. In addition, another pair is generated for each unsuccessful bid submitted by the advertising agent itself, and in this case oi = 0. Thus, having collected N such pairs7, the agent finds the values of k and θ by evaluating:

arg min(k,θ) Σ(i=1..N) (oi − γ(k, bi/θ)/Γ(k))²    (5)

This expression cannot be evaluated analytically, but its minimum can be found using a simple numerical gradient method, whereby the values of k and θ are initially estimated using their relationship to the transition point described above. The gradient of the expression is then numerically evaluated at these points, and new estimates of k and θ are calculated by making a fixed-size move in the gradient direction. This process is repeated until k and θ have converged to an appropriate degree of accuracy.
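Equation (4) is the regularised lower incomplete gamma function, which can be evaluated with its standard power series when no statistics library is available. The sketch below is our own illustration of P(b); in practice something like scipy.special.gammainc would normally be used instead of a truncated series:

```python
import math

def win_probability(b, k, theta, terms=200):
    """Cumulative gamma distribution P(b) = gamma(k, b/theta) / Gamma(k),
    the probability of winning an auction with bid b (eq. 4).

    Evaluated via the power series
    P(k, x) = x^k e^(-x) * sum_{n>=0} x^n / Gamma(k + n + 1),
    which converges for the moderate x encountered here."""
    if b <= 0:
        return 0.0
    x = b / theta
    total, term = 0.0, 1.0 / math.gamma(k + 1)
    for n in range(terms):
        total += term
        term *= x / (k + n + 1)   # Gamma(k+n+2) = (k+n+1) * Gamma(k+n+1)
    return (x ** k) * math.exp(-x) * total
```

With k = 10 and θ = 1, as used later in the paper's example, a bid of 10 yields a win probability close to 0.5, matching the stated transition point at kθ.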
5.3 Expected Utility of an Advertising Cycle
The goal of the advertising agent is to gain the maximum exposure for its advert given its constrained budget. We define the utility of any advertising cycle as the expected number of users who will see the advert for the first time during that cycle, and hence we explicitly assume that no additional utility is derived by showing the advert to any user more than once8. Thus, we can use the results of the previous two sections to calculate the expected utility of each advertising cycle remaining within the advertising agent's period of interest.

7 In the case that no unsuccessful bids have been observed, there is no evidence of where the transition point between successful and unsuccessful bids is likely to occur. Thus, in this case, an additional pair with value {α min(b1 ... bn), 0} is automatically created. Here α ∈ [0, 1] determines how far below the lowest successful bid the advertising agent believes the transition point to be. We have typically used α = 0.5 within our experiments.

8 As noted before, we assume that a user has seen the advert if they are present during any part of the advertising cycle, and we do not differentiate between users who see the entire advert and users who see a fraction of it.

Figure 4: Cumulative gamma distribution representing the probability of winning an auction (θ = 1 and k = 5, 10 & 20).

In the first advertising cycle, this expected utility is simply determined by the probability of the advertising agent winning the auction given that it submits a bid b1, and the number of users, n, who are currently in front of the BluScreen display but have not seen the advert before. Thus, the expected utility of this advertising cycle is simply:

u1 = P(b1) Nn,0    (6)

Now, in the second advertising cycle, the expected utility will clearly depend on the outcome of the auction for the first. If the first auction was indeed won by the agent, then there will be no users who have yet to see the advert present at the start of the second advertising cycle. Thus, in this case, the expected number of new users who will see the advert in the second advertising cycle is described by N0,0 (i.e.
only newly arriving users will contribute any utility). By contrast, if the first auction was not won by the agent, then the expected number of users who have yet to see the advert is given by Nn,tc, where tc is the length of the preceding advertising cycle (i.e. exactly the case described in section 5.1, where there are n users initially present and the advertising cycle starts at a time tc in the future). Thus, the expected utility of the second advertising cycle is given by:

u2 = P(b2) [P(b1) N0,0 + (1 − P(b1)) Nn,tc]    (7)

We can generalise this result by noting that the number of users expected to be present within any future advertising cycle will depend on the number of cycles since an auction was last won (since at this point the number of users who are present but have not seen the advert must be equal to zero). Thus, we must sum over all possible ways in which this can occur, and weight each by its probability. Hence, the general case for any advertising cycle is described by the rather complex expression:

ui = P(bi) [ Σ(j=1..i−1) N0,(i−j−1)tc P(bj) Π(m=j+1..i−1) (1 − P(bm)) + Nn,(i−1)tc Π(m=1..i−1) (1 − P(bm)) ]    (8)

Thus, given this expression, the goal of the advertising agent is to calculate the sequence of bids over the c remaining auctions such that the total expected utility is maximised, whilst ensuring that the remaining budget, B, is not exceeded:

arg max(b1..bc) Σ(i=1..c) ui  such that  Σ(i=1..c) bi = B    (9)

Figure 5: Total expected utility of the advertising agent over a continuous range of values of b1 for a number of discrete values of budget, B, when there are just two auction cycles.

Having calculated this sequence, a bid of b1 is submitted in the next auction. Once the outcome of this auction is known, the process repeats, with a new optimal sequence of bids being calculated for the remaining advertising cycles of the agent's period of interest.
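Equation (8) can be evaluated directly by summing over the possible cycles in which an auction was last won. The sketch below (function and parameter names are our own; `win_prob` stands in for the fitted gamma CDF of section 5.2, and N reimplements equation (3) inline so the example is self-contained):

```python
import math

def cycle_utilities(bids, n, t_c, lambda_a, lambda_d, win_prob):
    """Expected utility of each remaining advertising cycle (eq. 8).

    bids: candidate bid sequence b_1..b_c; win_prob(b): probability of
    winning with bid b; n: users currently present but unexposed."""
    def N(n0, t):  # expected users in a cycle starting at time t (eq. 3)
        return (n0 * math.exp(-lambda_d * t)
                + (lambda_a / lambda_d) * (1 - math.exp(-lambda_d * t))
                + lambda_a * t_c)

    p = [win_prob(b) for b in bids]
    utils = []
    for i in range(1, len(bids) + 1):
        # no auction won so far: n initial users may still be unexposed
        total = N(n, (i - 1) * t_c) * math.prod(1 - p[m] for m in range(i - 1))
        # last win was in cycle j: only users arriving since then count
        for j in range(1, i):
            tail = math.prod(1 - p[m] for m in range(j, i - 1))
            total += N(0, (i - j - 1) * t_c) * p[j - 1] * tail
        utils.append(p[i - 1] * total)
    return utils
```

For i = 1 this reduces to equation (6), and for i = 2 to equation (7), which provides a useful sanity check on the indexing.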
5.4 Optimal Sequence of Bids
Solving for the optimal sequence of bids expressed in equation 9 cannot be performed analytically. Instead, we develop a numerical routine to perform this maximisation. However, it is informative to initially consider the simple case of just two auctions.

5.4.1 Two Auction Example
In this case the expected utility of the advertising agent is simply given by u1 + u2 (as described in equations 6 and 7), and the bidding sequence is solely dependent on b1 (since b2 = B − b1). Thus, we can plot the total expected utility against b1 and graphically determine the optimal value of b1 (and thus also b2). To this end, figure 5 shows an example calculated using the parameter values λa = 1/120, λd = 1/480 and tc = 120. In this case, we assume that k = 10 and θ = 1, and thus, given that kθ describes the midpoint of the cumulative gamma distribution, a bid of 10 represents a 50% chance of winning any auction (i.e. P(10) = 0.5). In addition, we assume that n = λa/λd = 4, and thus the initial number of users present is equal to the mean number that we expect to find present at any time.

The plot indicates that when the budget is small, the maximum utility is achieved at the extreme values of b1. This corresponds to bidding in just one of the two auctions (i.e. b1 = 0 and b2 = B, or b1 = B and b2 = 0). However, as the budget increases, the plot passes through a transition whereby the maximum utility occurs at the midpoint of the x-axis, corresponding to bidding equally in both auctions (i.e. b1 = b2 = B/2).
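The sweep behind figure 5 is easy to reproduce. Since evaluating the fitted gamma CDF of section 5.2 needs a special-function library, the sketch below substitutes a logistic curve with the same transition point at a bid of 10; that substitution, and all names here, are our own assumptions:

```python
import math

def two_auction_utility(b1, B, n, t_c, lam_a, lam_d, P):
    """Total expected utility u1 + u2 (eqs. 6 and 7) when the budget B
    is split as (b1, B - b1) across two auctions."""
    def N(n0, t):  # expected users in a cycle starting at time t (eq. 3)
        return (n0 * math.exp(-lam_d * t)
                + (lam_a / lam_d) * (1 - math.exp(-lam_d * t))
                + lam_a * t_c)
    b2 = B - b1
    u1 = P(b1) * N(n, 0)
    u2 = P(b2) * (P(b1) * N(0, 0) + (1 - P(b1)) * N(n, t_c))
    return u1 + u2

# logistic stand-in for the gamma CDF, transition near a bid of 10
P = lambda b: 1 / (1 + math.exp(-(b - 10)))

# sweep b1 over integer splits of a large budget, as in figure 5;
# with B = 40 the optimum lies at the even split (b1 = 20)
best = max(range(0, 41),
           key=lambda b1: two_auction_utility(b1, 40, 4, 120, 1/120, 1/480, P))
```

With a small budget (e.g. B = 5), the same sweep instead peaks at the extremes, reproducing the transition described above.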
This is simply understood by the fact that continuing to allocate the budget to a single auction yields diminishing returns as the probability of actually winning that auction approaches one. In this case, the plot is completely symmetrical, since the number of users present at the start is equal to its expected value (i.e. n = λa/λd). If, however, n < λa/λd, the plot is skewed such that when the budget is small, it should be allocated to the second auction (since more users are expected to arrive before this advertising cycle commences). Conversely, when n > λa/λd, the entire budget should be allocated to the first auction (since the users who are currently present are likely to depart in the near future). However, in both cases, a transition occurs whereby, given sufficient budget, it is preferable to allocate the budget evenly between both auctions9.

9 In fact, one auction is still slightly preferred, but the difference in expected utility between this and an even allocation is negligible.

Figure 6: Stochastic optimisation algorithm to calculate the optimal sequence of bids in the general case of multiple auctions.

    temp ← 1
    rate ← 0.995
    b_old ← initial random allocation
    U_old ← Evaluate(b_old)
    WHILE temp > 0.0001
        i, j ← random integer indexes within b
        t ← random real number between 0 and b_i
        b_new ← b_old
        b_new_i ← b_old_i − t
        b_new_j ← b_old_j + t
        U_new ← Evaluate(b_new)
        IF rand < exp((U_new − U_old)/temp) THEN
            b_old ← b_new
            U_old ← U_new
        ENDIF
        temp ← temp × rate
    ENDWHILE

5.4.2 General Case
In general, the behaviour seen in the previous example characterises the optimal bidding behaviour of the advertising agent. If there is sufficient budget, bidding equally in all auctions results in the maximum expected utility. However, typically this is not possible, and thus utility is maximised by concentrating the available budget on a subset of the available auctions. The choice of this subset is determined by
a number of factors. If there are very few users currently present, it is optimal to allocate the budget to later auctions, in the expectation that more users will arrive. Conversely, if there are many users present, a significant proportion of the budget should be allocated to the first auction, to ensure that it is indeed won and that these users see the advert. Finally, since no utility is derived by showing the advert to a single user more than once, the budget should be allocated such that there are intervals between showings of the advert, in order that new users may arrive.

Now, due to the complex form of the expression for the expected utility of the agent (shown in equation 8), it is not possible to analytically calculate the optimal sequence of bids. However, the inverse problem (that of calculating the expected utility for any given sequence of bids) is easy. Thus, we use a stochastic optimisation routine based on simulated annealing to solve the maximisation problem. This algorithm starts by assuming some initial random allocation of bids (normalised such that the total of all the bids is equal to the budget B). It then makes small adjustments to this allocation by randomly transferring budget from one auction to another. If this transfer results in an increase in expected utility, it is accepted. If it results in a decrease in expected utility, it may still be accepted, but with a probability that is determined by a temperature parameter. This temperature parameter is annealed such that the probability of accepting such transfers decreases over time. In figure 6 we present this algorithm in pseudo-code.
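The pseudo-code of figure 6 maps directly onto the following sketch; the explicit normalisation step and the overflow guard on the acceptance test are our own additions:

```python
import math
import random

def anneal_bids(budget, cycles, evaluate, temp=1.0, rate=0.995, floor=1e-4):
    """Simulated-annealing search for a bid sequence summing to `budget`
    (a sketch of figure 6; `evaluate` returns the total expected utility
    of a candidate bid sequence)."""
    b_old = [random.random() for _ in range(cycles)]
    scale = budget / sum(b_old)
    b_old = [b * scale for b in b_old]       # normalise so bids sum to B
    u_old = evaluate(b_old)
    while temp > floor:
        i, j = random.randrange(cycles), random.randrange(cycles)
        t = random.uniform(0, b_old[i])      # transfer t from bid i to bid j
        b_new = list(b_old)
        b_new[i] -= t
        b_new[j] += t
        u_new = evaluate(b_new)
        # improvements are always accepted; worse moves with prob e^(dU/temp)
        if random.random() < math.exp(min(0.0, (u_new - u_old) / temp)):
            b_old, u_old = b_new, u_new
        temp *= rate
    return b_old, u_old
```

Because each move transfers budget between two bids, the budget constraint of equation (9) is maintained throughout the search.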
6. EVALUATION
In order to evaluate the effectiveness of the advanced bidding strategy developed within this paper, we compare its performance to three alternative mechanisms. One of these mechanisms represents a simple alternative bidding strategy for the advertising agents, whilst the other two are centralised allocation mechanisms that represent the upper and lower bounds on the overall performance of the system.

Figure 7: Comparison of four different allocation mechanisms for allocating advertising cycles to advertising agents. Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.

In more detail, the four mechanisms that we compare are:

Random Allocation: Rather than implementing the auction mechanism, the advertising cycle is randomly allocated to one of the advertising agents.

Simple Bidding Strategy: We implement the full auction mechanism, but with a population of advertising agents that employ a simple bidding strategy. These advertising agents do not attempt to model the users or the auction environment in which they bid; rather, they simply allocate their remaining budget evenly over the remaining advertising cycles.

Advanced Bidding Strategy: We implement the full auction mechanism with a population of advertising agents using the probabilistic models and the bidding strategy described here.

Optimal Allocation: Rather than implementing the auction mechanism, the advertising cycle is allocated to the advertising agent that will derive the maximum utility from it, given perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.

Using these four alternative allocation mechanisms, we ran repeated simulations of two hours of operation of the entire BluScreen environment for a default set of parameters, whereby the arrival and departure rates of the users are given by λa = 1/120s and λd = 1/480s, and the length of an advertising cycle is
120s. Each advertising agent is assigned an advert with a period of interest drawn from a Poisson distribution with a mean of 8 advertising cycles, and these agents are initially allocated a budget equal to 10 times their period of interest. For each simulation run, we measure the mean normalised exposure of each advert: that is, the fraction of the users detected by the BluScreen display during the advertising agent's period of interest who were actually exposed to the agent's advert. Thus, a mean normalised exposure of 1 indicates that the agent managed to expose its advert to all of the users who were present during its period of interest (and a mean normalised exposure of 0 means that no users were exposed to the advert).

Figure 7 shows the results of this experiment. We first observe the general result that as the number of advertising agents increases, and thus the competition between them increases, the mean normalised exposure of all allocation mechanisms decreases. We then observe that in all cases there is no statistically significant improvement in using the simple bidding strategy compared to random allocation (p > 0.25 in Student's t-test). Since this simple bidding strategy does not take account of the number of users present, and in general simply increases its bid price in each auction until it does in fact win one, this is not unexpected. However, in all cases the advanced bidding strategy does indeed significantly outperform the simple bidding strategy (p < 0.0005 in Student's t-test), and its performance is within 7.5% of that of the optimal allocation that has perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.

In addition, we present results of experiments performed over a range of parameter values, and also with a mixed population of advertising agents using both the advanced and simple bidding strategies. This is an important scenario, since advertisers may wish to
supply their own bidding agents, and thus a homogeneous population is not guaranteed. In each case, keeping all other parameters fixed, we varied one parameter; these results are shown in figure 8. In general, we see similar trends to before. Increasing the departure rate causes a decrease in the mean normalised exposure, since advertising agents have fewer opportunities to expose users to their adverts. Increasing the period of interest of each agent decreases the mean normalised exposure, since more advertising agents are now competing for the same users. Finally, increasing the arrival rate of the users causes the results of the simple and advanced bidding strategies to approach one another, since the variance in the number of users who are present during any advertising cycle decreases, and thus modelling their behaviour provides less gain. However, in all cases, the advanced bidding strategy significantly outperforms the simple one (p < 0.0005 in Student's t-test). On average, we observe that advertising agents who use the advanced bidding strategy are able to expose their adverts to 25% more users than those using the simple bidding strategy.

Finally, we show that a rational advertising agent, who has a choice of bidding strategy, would always opt for the advanced bidding strategy over the simple bidding strategy, regardless of the composition of the population in which it finds itself. Figure 9 shows the average normalised exposure of the advertising agents when the population is composed of different fractions of the two bidding strategies. In each case, the advanced bidding strategy shows a significant gain in performance compared to the simple bidding strategy (p < 0.0005 in Student's t-test), and thus gains improved exposure over all population compositions.

7. CONCLUSIONS
In this paper, we presented an advanced bidding strategy for use by advertising agents within the BluScreen advertising system. This bidding strategy enabled
advertising agents to model and predict the arrival and departure of users, and also to model their success within a first-price sealed bid auction by observing both the bids that they themselves submitted and the winning bid.

Figure 8: Comparison of an evenly mixed population of advertising agents using simple and advanced bidding strategies over a range of parameter settings: (a) departure rate (λd), (b) mean period of interest (cycles), and (c) arrival rate (λa). Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.

Figure 9: Comparison of an unevenly mixed population of advertising agents using simple and advanced bidding strategies. Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.

The expected utility, measured as the number of users to whom the advertising agent exposes its advert, was shown to depend on these factors, and resulted in a complex expression where the expected utility of each auction depended on the success or otherwise of earlier auctions. We presented an algorithm based upon simulated annealing to solve for the optimal bidding strategy, and in simulation this bidding strategy was shown to significantly outperform a simple bidding strategy that had none of these features. Its performance closely approached that of a central optimal allocation with perfect
knowledge of the arrival and departure of users, despite the uncertain environment in which the strategy must operate.

Our future work in this area consists of extending this bidding strategy to richer environments where there are multiple interrelated display screens, where maintaining profiles of users allows a richer matching of user to advert, and where alternative auction mechanisms are applied (we are particularly interested in introducing a 'pay per user' auction setting similar to the 'pay per click' auctions employed by internet search websites). This work will continue to be done in conjunction with the deployment of more BluScreen prototypes in order to gain further real-world experience.

8. ACKNOWLEDGEMENTS
The authors would like to thank Heather Packer and Matthew Sharifi (supported by the ALADDIN project - www.aladdinproject.org) for their help in developing the deployed prototype.

9. REFERENCES
[1] A. Amiri and S. Menon. Efficient scheduling of internet banner advertisements. ACM Transactions on Internet Technology, 3(4):334-346, 2003.
[2] S. M. Bohte, E. Gerding, and H. L. Poutre. Market-based recommendation: Agents that compete for consumer attention. ACM Transactions on Internet Technology, 4(4):420-448, 2004.
[3] K. Cheverst, A. Dix, D. Fitton, C. Kray, M. Rouncefield, C. Sas, G. Saslis-Lagoudakis, and J. G. Sheridan. Exploring bluetooth based mobile phone interaction with the hermes photo display. In Proc. of the 7th Int. Conf. on Human Computer Interaction with Mobile Devices & Services, pages 47-54, Salzburg, Austria, 2005.
[4] S. Gjerstad and J. Dickhaut. Price formation in double auctions. Games and Economic Behavior, (22):1-29, 1998.
[5] D. Gross and C. M. Harris. Fundamentals of Queueing Theory. Wiley, 1998.
[6] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 34(8):57-66, 2001.
[7] J. F. McCarthy, T. J. Costa, and E. S.
Liongosari. Unicast, outcast & groupcast: Three steps toward ubiquitous, peripheral displays. In Proc. of the 3rd Int. Conf. on Ubiquitous Computing, pages 332-345, Atlanta, USA, 2001.
[8] T. R. Payne, E. David, M. Sharifi, and N. R. Jennings. Auction mechanisms for efficient advertisement selection on public displays. In Proc. of the 17th European Conf. on Artificial Intelligence, pages 285-289, Trentino, Italy, 2006.
[9] A. Ranganathan and R. H. Campbell. Advertising in a pervasive computing environment. In Proc. of the 2nd Int. Workshop on Mobile Commerce, pages 10-14, Atlanta, Georgia, USA, 2002.
[10] M. Rothkopf, T. Teisberg, and E. Kahn. Why are Vickrey auctions rare? Journal of Political Economy, 98(1):94-109, 1990.
[11] R. Want, A. Hopper, V. Falcao, and J. Gibbons. The active badge location system. ACM Transactions on Information Systems, 10(1):91-102, 1992.","lvl-3":"An Advanced Bidding Agent for Advertisement Selection on Public Displays
ABSTRACT
In this paper we present an advanced bidding agent that participates in first-price sealed bid auctions to allocate advertising space on BluScreen, an experimental public advertisement system that detects users through the presence of their Bluetooth enabled devices. Our bidding agent is able to build probabilistic models of both the behaviour of users who view the adverts and the auctions in which it participates. It then uses these models to maximise the exposure that its adverts receive. We evaluate the effectiveness of this bidding agent through simulation against a range of alternative selection mechanisms, including a simple bidding strategy, random allocation, and a centralised optimal allocation with perfect foresight. Our bidding agent significantly outperforms both the simple bidding strategy and the random allocation, and in a mixed population of agents it is able to expose its
adverts to 25% more users than the simple bidding strategy. Moreover, its performance is within 7.5% of that of the centralised optimal allocation, despite the highly uncertain environment in which it must operate.

1. INTRODUCTION
Electronic displays are increasingly being used within public environments, such as airports, city centres and retail stores, in order to advertise commercial products, or to entertain and inform passersby. Recently, researchers have begun to investigate how the content of such displays may be varied dynamically over time in order to increase its variety, relevance and exposure [9]. Particular research attention has focused on the need to take into account the dynamic nature of the display's audience, and to this end, a number of interactive public displays have been proposed. These displays have typically addressed the needs of a closed set of known users with pre-defined interests and requirements, and have facilitated communication with these users through the active use of handheld devices such as PDAs or phones [3, 7]. As such, these systems assume prior knowledge about the target audience, and require either that a single user has exclusive access to the display, or that users carry specific tracking devices so that their presence can be identified [6, 11]. However, these approaches fail to work in public spaces, where no prior knowledge exists regarding the users who may view the display, and where such displays need to react to the presence of several users simultaneously. By contrast, Payne et al.
have developed an intelligent public display system, named BluScreen, that detects and tracks users through the Bluetooth enabled devices that they carry with them every day [8]. Within this system, a decentralised multi-agent auction mechanism is used to efficiently allocate advertising time on each public display. Each advert is represented by an individual advertising agent that maintains a history of users who have already been exposed to the advert. This agent then seeks to acquire advertising cycles (during which it can display its advert on the public displays) by submitting bids to a marketplace agent who implements a sealed bid auction. The value of these bids is based upon the number of users who are currently present in front of the screen, the history of these users, and an externally derived estimate of the value of exposing an advert to a user.

In this paper, we present an advanced bidding agent that significantly extends the sophistication of this approach. In particular, we consider the more general setting in which it is impossible to determine an a priori valuation for exposing an advert to a user. This is likely to be the case for BluScreen installations within private organisations, where the items being advertised are forthcoming events or news items of interest to employees and visitors, and thus have no direct monetary value (indeed, in this case bidding is likely to be conducted in some virtual currency). In addition, it is also likely to be the case within new commercial installations, where limited market experience makes estimating a valuation impossible. In both cases, it is more appropriate to assume that an advertising agent will be assigned a total advertising budget, and that it will have a limited period of time in which to spend this budget (particularly so where the adverts are for forthcoming events). The advertising agent is then simply tasked with using this budget to maximum effect (i.e.
to achieve the maximum possible advert exposure within this time period). Now, in order to achieve this goal, the advertising agent must be capable of modelling the behaviour of the users in order to predict the number who will be present in any future advertising cycle. In addition, it must also understand the auction environment in which it competes, in order that it may make best use of its limited budget. Thus, in developing an advanced bidding agent that achieves this, we advance the state of the art in four key ways:

1. We enable the advertising agents to model the arrival and departure of users as independent Poisson processes, and to make maximum likelihood estimates of the rates of these processes based on their observations. We show how these agents can then calculate the expected number of users who will be present during any future advertising cycle.

2. Using a decision theoretic approach, we enable the advertising agents to model the probability of winning any given auction when a specific amount is bid. The cumulative form of the gamma distribution is used to represent this probability, and its parameters are fitted using observations of both the closing price of previous auctions and the bids that the advertising agent itself submits.

3. We show that our explicit assumption that the advertising agent derives no additional benefit by showing an advert to a single user more than once causes the expected utility of each future advertising cycle to be dependent on the expected outcome of all the auctions that precede it. We thus present a stochastic optimisation algorithm based upon simulated annealing that enables the advertising agent to calculate the optimal sequence of bids that maximises its expected utility.

4. Finally, we demonstrate that this advanced bidding strategy outperforms a simple strategy with none of these features (within a heterogeneous population, the advertising agents who use the advanced bidding strategy are able
to expose their adverts to 25% more users than those using the simple bidding strategy), and we show that it performs within 7.5% of a centralised optimiser with perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.

The remainder of this paper is organised as follows: Section 2 discusses related work where agents and auction-based marketplaces are used to allocate advertising space. Section 3 describes the prototype BluScreen system that motivates our work. In section 4 we present a detailed description of the auction allocation mechanism, and in section 5 we describe our advanced bidding strategy for the advertising agents. In section 6 we present an empirical validation of our approach, and finally, we conclude in section 7.

2. RELATED WORK
The commercial attractiveness of targeted advertising has been amply demonstrated on the internet, where recommendation systems and contextual banner adverts are the norm [1]. These systems typically select content based upon prior knowledge of the individual viewing the material, and such systems work well on personal devices, where the owner's preferences and interests can be gathered and cached locally, or within interactive environments which utilise some form of credential to identify the user (e.g. e-commerce sites such as Amazon.com). Attempts to apply these approaches within the real world have been much more limited. Gerding et al.
present a simulated system (CASy) whereby a Vickrey auction mechanism is used to sell advertising space within a modelled electronic shopping mall [2]. The auction is used to rank a set of possible advertisements provided by different retail outlets, and the top-ranking advertisements are selected for presentation on public displays. Feedback is provided through subsequent sales information, allowing the model to build up a profile of a user's preferences.

Figure 1: A deployed BluScreen prototype.

However, unlike the BluScreen system that we consider here, it is not suitable for advertising to many individuals simultaneously, as it requires explicit interaction with a single user to acquire the user's preferences. By contrast, McCarthy et al. have presented a prototype implementation of a system (GroupCast) that attempts to respond to a group of individuals by assuming a priori profiles of several members of the audience [7]. User identification is based on infrared badges and embedded sensors within an office environment. When several users pass by the display, a centralised system compares the users' profiles to identify common areas of interest, and content that matches this common interest is shown. Thus, whilst CASy is a simulated system that allows advertisers to compete for the attention of a single user, GroupCast is a prototype system that detects the presence of groups of users and selects content to match their profiles. Despite their similarities, neither system addresses the setting that interests us here: how to allocate advertising space between competing advertisers who face an audience of multiple individuals about whom there is no a priori profile information. Thus, in the next section we describe the prototype BluScreen system that motivates our work.

3. THE BLUSCREEN PROTOTYPE

4. THE AUCTION MECHANISM

5. ADVANCED BIDDING STRATEGY
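The first-price sealed-bid mechanism named by the headings above can be sketched as follows. This is a minimal illustrative sketch: the class and method names, the dictionary-based bid interface, and the random tie-breaking rule are our own assumptions, not the paper's implementation.

```python
import random

DEFAULT_ADVERT = "default"

class MarketplaceAgent:
    """One first-price sealed-bid auction per advertising cycle: the
    highest bidder wins the cycle, pays its own bid, and the winning
    amount (but not the winner's identity) is announced to everyone."""

    def run_auction(self, sealed_bids):
        """sealed_bids: dict mapping advert id -> bid amount.
        Returns (winner_id, price_paid, public_announcement)."""
        if not sealed_bids:
            # No bids placed: the display falls back to a default advert.
            return DEFAULT_ADVERT, 0.0, 0.0
        best = max(sealed_bids.values())
        # Break ties uniformly at random among the highest bidders
        # (the tie-breaking rule is our assumption; the paper does not specify one).
        winners = [a for a, b in sealed_bids.items() if b == best]
        winner = random.choice(winners)
        # First-price rule: the winner pays its own bid, and only the
        # winning amount is made public.
        return winner, best, best
```

For example, `run_auction({"a": 2.0, "b": 3.5})` allocates the cycle to `b` at a price of 3.5, and 3.5 is the amount that observers learn.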
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

5.1 Predicting the Number of Users

5.2 Predicting the Probability of Winning

5.3 Expected Utility of an Advertising Cycle

5.4 Optimal Sequence of Bids

5.4.1 Two Auction Example

5.4.2 General Case

6. EVALUATION

7. CONCLUSIONS

In this paper, we presented an advanced bidding strategy for use by advertising agents within the BluScreen advertising system. This bidding strategy enabled advertising agents to model and predict the arrival and departure of users, and also to model their success within a first-price sealed-bid auction by observing both the bids that they themselves submitted and the winning bid. The expected utility, measured as the number of users to whom the advertising agent exposes its advert, was shown to depend on these factors, and resulted in a complex expression in which the expected utility of each auction depended on the success or otherwise of earlier auctions. We presented an algorithm based upon simulated annealing to solve for the optimal bidding strategy, and in simulation, this bidding strategy was shown to significantly outperform a simple bidding strategy that had none of these features.

Figure 8: Comparison of an evenly mixed population of advertising agents using simple and advanced bidding strategies over a range of parameter settings. Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.

Figure 9: Comparison of an unevenly mixed population of advertising agents using simple and advanced bidding strategies. Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.

Its performance closely approached that of a central optimal allocation, with perfect knowledge of the arrival
and departure of users, despite the uncertain environment in which the strategy must operate. Our future work in this area consists of extending this bidding strategy to richer environments where there are multiple interrelated display screens, where maintaining profiles of users allows a richer matching of user to advert, and where alternative auction mechanisms are applied (we are particularly interested in introducing a 'pay per user' auction setting similar to the 'pay per click' auctions employed by internet search websites). This work will continue to be done in conjunction with the deployment of more BluScreen prototypes in order to gain further real world experience.

3. THE BLUSCREEN PROTOTYPE

BluScreen is
based on the notion of a scalable, extendable advertising framework whereby adverts can be efficiently displayed to as many relevant users as possible, within a knowledge-poor environment. To achieve these goals, several requirements have been identified:

1. Adverts should be presented to as diverse an audience as possible, whilst minimising the number of times the advert is presented to any single user.

2. Users should be identified by existing ubiquitous consumer devices, so that future deployments within public arenas will not require uptake of new hardware.

3. The number of displays should be scalable, such that adverts appear on different displays at different times.

4. Knowledge about the observed behaviour and composition of the audience should be exploited to facilitate inference of user interests which can be exploited by the system.

To date, a prototype system that addresses the first two goals has been demonstrated [8]. This system uses a 23-inch flat-screen display deployed within an office environment to advertise events and news items. Rather than requiring the deployment of specialised hardware, such as active badges (see [11] for details), BluScreen detects the presence of users in the vicinity of each display through the Bluetooth-enabled devices that they carry with them every day.

Table 1: Number of Bluetooth devices observed at different frequencies over a six-month sample period.

This approach is attractive since the Bluetooth wireless protocol is characterised by its relative maturity, market penetration, and emphasis on short-range communication. Table 1 summarises the number of devices detected by this prototype installation over a six-month period. Of the 212 Bluetooth devices detected, approximately 70 were detected regularly, showing that Bluetooth is a suitable proxy for detecting individuals in front of the screen. In order to achieve a scalable and extendable solution, a multi-agent systems design philosophy is adopted
whereby a number of different agent types interact (see Figure 2). The interactions of these agents are implemented through a web services protocol², and they constitute a decentralised marketplace that allocates advertising space in an efficient and timely manner. In more detail, the responsibilities of each agent type are:

Bluetooth Device Detection Agent: This agent monitors the environment in the vicinity of a BluScreen display and determines the number and identity of any Bluetooth devices that are close by. It keeps historical records of the arrival and departure of Bluetooth devices, and makes this information available to advertising agents as requested.

Marketplace Agent: This agent facilitates the sale of advertising space to the advertising agents. A single marketplace agent represents each BluScreen display, and access to this screen is divided into discrete advertising cycles of fixed duration. Before the start of each advertising cycle, the marketplace agent holds a sealed-bid auction (see Section 4 for more details). The winner of this auction is allocated access to the display during the next cycle.

Advertising Agent: This agent represents a single advert and is responsible for submitting bids to the marketplace agent so that it may be allocated advertising cycles, and thus display its advert to users. It interacts with the device detection agent in order to collect information regarding the number and identity of users who are currently in front of the display. On the basis of this information, its past experiences, and its bidding strategy, it calculates the value of the bid that it should submit to the marketplace agent.

Thus, having described the prototype BluScreen system, we next go on to describe the details of the auction mechanism that we consider in this work, and then the advanced bidding agent that bids within this auction.

4. THE AUCTION MECHANISM

As described above, BluScreen is designed to efficiently
allocate advertising cycles in a distributed and timely manner. Thus, one-shot sealed-bid auctions are used as the market mechanism of the marketplace agent. In previous work, each advertising agent was assumed to have an externally derived estimate of the value of exposing an advert to a user. Under this assumption, a second-price sealed-bid auction was shown to be effective, since advertising agents have a simple strategy of truthfully bidding their valuation in each auction [8].

²This is implemented on a distributed Mac OS X based system, using the Bonjour networking protocol for service discovery.

Figure 2: The BluScreen agent architecture for a single display.

However, as described earlier, in this paper we consider the more general setting in which it is impossible to determine an a priori valuation for exposing an advert to a single user. This may be because the BluScreen installation is within a private organisation where what is being advertised (e.g. news items or forthcoming events) has no monetary value, or it may be a new commercial installation where limited market experience makes estimating such a valuation impossible. In the absence of such a valuation, the attractive economic properties of the second-price auction cannot be achieved in practice, and thus, in our work there is no need to limit our attention to the second-price auction. Indeed, since these auctions are actually extremely rare within real-world settings [10], in this work we consider the more widely adopted first-price auction, since this increases the applicability of our results. Thus, in more detail, we consider an instance of a BluScreen installation with a single display screen that is managed by a single marketplace agent³. We consider that access to the display screen is divided into discrete advertising cycles, each of length tc, and a first-price sealed-bid auction is held immediately prior to the start of each advertising cycle. The marketplace agent announces the
start and deadline of the auction, and collects sealed bids from each advertising agent. At the closing time of the auction, the marketplace agent announces to all participants and observers the amount of the winning bid, and informs the winning advertising agent that it was successful (the identity of the winning advertising agent is not announced to all observers). In the case that no bids are placed within any auction, a default advert is displayed. Having described the market mechanism that the marketplace agent implements, we now go on to describe and evaluate an advanced bidding strategy for the advertising agents to adopt.

³This assumption of having a single BluScreen instance is made to simplify our task of validating the correctness and the efficiency of the proposed mechanism and strategy; generalising these results to the case of multiple screens is the aim of our future work.

5. ADVANCED BIDDING STRATEGY

As described above, we consider the case in which the advertising agents do not have an externally derived estimate of the value of exposing the advert to a single user. Rather, they have a constrained budget, B, and a limited period of interest during which they wish to display their advert. Their goal is then to find the appropriate amount to bid within each auction in this period, in order to maximise the exposure of their advert. In attempting to achieve this goal, the advertising agent is faced with a high level of uncertainty about future events. It will be uncertain of the number of users who will be present during any advertising cycle since, even if the number of users currently present is known, some may leave before the advert commences, and others may arrive. Moreover, the amount that must be bid to ensure that an auction is won is uncertain, since it depends on the number and behaviour of the competing advertising agents. Thus, we enable the agent to use its observations of the arrival and departure of users to build a probabilistic model, based upon independent Poisson processes, that describes the number of users who are likely to be exposed to any advert. In addition, we enable the agent to observe the outcome of previous advertising cycle auctions, and to use the observations of the closing price, and the success or otherwise of the bids that it itself submitted, to build a probabilistic model of the bid required to win the auction. The agent then uses these two models to calculate its expected utility in each advertising cycle, and in turn, to determine the optimal sequence of bids that maximises this utility given its constrained budget. Having calculated this sequence of bids, the first bid in the sequence is actually used in the auction for the next advertising cycle. However, at the close of this cycle, the process is repeated, with a new optimal sequence of bids being calculated in order to take account of what actually happened in the preceding auction (i.e.
whether the bid was successful or not, and how many users arrived or departed).\nThus, in the next three subsections we describe these two probabilistic models, and their application within the bidding strategy of the advertising agent.\n5.1 Predicting the Number of Users\nIn order to predict the number of users that will be present in any future advertising cycle, it is necessary to propose a probabilistic model for the behaviour of the users.\nThus, our advanced bidding strategy assumes that their arrivals and departures are determined by two independent Poisson processes4 with arrival rate, \u03bba, and departure rate, \u03bbd.\nThis represents a simple model that is commonly applied within queuing theory5 [5], yet is one that we believe well describes the case where BluScreen displays are placed in communal areas where people meet and congregate.\nGiven the history of users' arrivals and departures obtained from the device detection agent, the advertising agent makes a maximum likelihood estimation of the values of \u03bba and \u03bbd.\nIn more detail, if the advertising agent has observed n users arriving within a time period t, then the maximum likelihood estimate of the arrival rate \u03bba is simply given by:\n\u03bba = n \/ t (1)\nLikewise, if an agent observes n users each with a duration of stay of t1, t2,..., tn time periods, then the maximum likelihood estimate of the departure rate \u03bbd is given by:\n\u03bbd = n \/ (t1 + t2 + ... + tn) (2)\n4Given a Poisson distribution with rate parameter \u03bb, the number of events, n, within an interval of time t is given by:\nP(n) = (\u03bbt)^n e^(\u2212\u03bbt) \/ n!\nIn addition, the probability of having to wait a period of time, t, before the next event is determined by:\nP(t) = \u03bb e^(\u2212\u03bbt)\n5Note however that in queuing theory it is typically the arrival rate and service times of customers that are modelled as Poisson processes.\nOur users are not actually modelled as a queue since the duration of their stay is independent of that of the other users.\nFigure 3: Example showing how to predict the number of users who see an advert shown in an advertising 
cycle of length tc, commencing at time t in the future.\nIn environments where these rates are subject to change, the agent can use a limited time window over which observations are used to estimate these rates.\nAlternatively, in situations where cyclic changes in these rates are likely to occur (i.e. changing arrival and departure rates at different times of the day, as may be seen in areas where commuters pass through), the agent can estimate separate values over each hour-long period.\nHaving estimated the arrival and departure rate of users, and knowing the number of users who are present at the current time, the advertising agent is then able to predict the number of users who are likely to be present in any future advertising cycle6.\nThus, we consider the problem of predicting this number for an advertising cycle of duration tc that starts at a time t in the future, given that n users are currently present (see figure 3).\nThis number will be composed of three factors: (i) the fraction of the n users that are initially present who do not leave in the interval, 0 <\u03c4 \u03bba \/ \u03bbd the entire budget should be allocated to the first auction (since the users who are currently present are likely to depart in the near future).\nHowever, in both cases, a transition occurs whereby given sufficient budget it is preferable to allocate the budget evenly between both auctions9.\nFigure 6: Stochastic optimisation algorithm to calculate the optimal sequence of bids in the general case of multiple auctions.\n5.4.2 General Case\nIn general, the behaviour seen in the previous example characterises the optimal bidding behaviour of the advertising agent.\nIf there is sufficient budget, bidding equally in all auctions results in the maximum expected utility.\nHowever, typically this is not possible and thus utility is maximised by concentrating what budget is available into a subset of the available auctions.\nThe choice of this subset is determined by a number of 
factors.\nIf there are very few users currently present, it is optimal to allocate the budget to later auctions in the expectation that more users will arrive.\nConversely, if there are many users present, a significant proportion of the budget should be allocated to the first auction to ensure that it is indeed won, and these users see the advert.\nFinally, since no utility is derived by showing the advert to a single user more than once, the budget should be allocated such that there are intervals between showings of the advert, in order that new users may arrive.\nNow, due to the complex form of the expression for the expected utility of the agent (shown in equation 8) it is not possible to analytically calculate the optimal sequence of bids.\nHowever, the inverse problem (that of calculating the expected utility for any given sequence of bids) is easy.\nThus, we can use a stochastic optimisation routine based on simulated annealing to solve the maximisation problem.\nThis algorithm starts by assuming some initial random allocation of bids (normalised such that the total of all the bids is equal to the budget B).\nIt then makes small adjustments to this allocation by randomly transferring the budget from one auction to another.\nIf this transfer results in an increase in expected utility, then it is accepted.\nIf it results in a decrease in expected utility, it might still be accepted, but with a probability that is determined by a temperature parameter.\nThis temperature parameter is annealed such that the probability of accepting such transfers decreases over time.\nIn figure 6 we present this algorithm in pseudo-code.\n6.\nEVALUATION\nIn order to evaluate the effectiveness of the advanced bidding strategy developed within this paper we compare its performance to three alternative mechanisms.\nOne of these mechanisms represents a simple alternative bidding strategy for the advertising agents, whilst the other two are centralised allocation mechanisms that 
represent the upper and lower bounds to the overall performance of the system.\nFigure 7: Comparison of four different allocation mechanisms for allocating advertising cycles to advertising agents.\nResults are averaged over 50 simulation runs and error bars indicate the standard error in the mean.\nIn more detail, the four mechanisms that we compare are: Random Allocation: Rather than implementing the auction mechanism, the advertising cycle is randomly allocated to one of the advertising agents.\nSimple Bidding Strategy: We implement the full auction mechanism but with a population of advertising agents that employ a simple bidding strategy.\nThese advertising agents do not attempt to model the users or the auction environment in which they bid, but rather, they simply evenly allocate their remaining budget over the remaining advertising cycles.\nAdvanced Bidding Strategy: We implement the full auction mechanism with a population of advertising agents using the probabilistic models and the bidding strategy described here.\nOptimal Allocation: Rather than implementing the auction mechanism, the advertising cycle is allocated to the advertising agent that will derive the maximum utility from it, given perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.\nUsing these four alternative allocation mechanisms, we ran repeated simulations of two hours of operation of the entire BluScreen environment for a default set of parameters whereby the arrival and departure rates of the users are given by \u03bba = 1\/120s and \u03bbd = 1\/480s, and the length of an advertising cycle is 120s.\nEach advertising agent is assigned an advert with a period of interest drawn from a Poisson distribution with a mean of 8 advertising cycles, and these agents are initially allocated a budget equal to 10 times their period of interest.\nFor each simulation run, we measure the 
mean normalised exposure of each advert.\nThat is, the fraction of users who were detected by the BluScreen display during the period of interest of the advertising agent who were actually exposed to the agent's advert.\nThus a mean normalised exposure of 1 indicates that the agent managed to expose its advert to all of the users who were present during its period of interest (and a mean normalised exposure of 0 means that no users were exposed to the advert).\nFigure 7 shows the results of this experiment.\nWe first observe the general result that as the number of advertising agents increases, and thus the competition between them increases, the mean normalised exposure of all allocation mechanisms decreases.\nWe then observe that in all cases, there is no statistically significant improvement in using the simple bidding strategy compared to random allocation (p > 0.25 in Student's t-test).\nSince this simple bidding strategy does not take account of the number of users present, and in general, simply increases its bid price in each auction until it does in fact win one, this is not unexpected.\nHowever, in all cases the advanced bidding strategy does indeed significantly outperform the simple bidding agent (p < 0.0005 in Student's t-test), and its performance is within 7.5% of that of the optimal allocation that has perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.\nIn addition, we present results of experiments performed over a range of parameter values, and also with a mixed population of advertising agents using both the advanced and simple bidding strategies.\nThis is an important scenario since advertisers may wish to supply their own bidding agents, and thus, a homogeneous population is not guaranteed.\nIn each case, keeping all other parameters fixed, we varied one parameter, and these results are shown in figure 8.\nIn general, we see similar trends to before.\nIncreasing the departure rate 
causes a decrease in the mean normalised exposure since advertising agents have fewer opportunities to expose users to their adverts.\nIncreasing the period of interest of each agent decreases the mean normalised exposure, since more advertising agents are now competing for the same users.\nFinally, increasing the arrival rate of the users causes the results of the simple and advanced bidding strategies to approach one another, since the variance in the number of users who are present during any advertising cycle decreases, and thus, modelling their behaviour provides less gain.\nHowever, in all cases, the advanced bidding strategy significantly outperforms the simple one (p < 0.0005 in Student's t-test).\nOn average, we observe that advertising agents who use the advanced bidding strategy are able to expose their adverts to 25% more users than those using the simple bidding strategy.\nFinally, we show that a rational advertising agent, who has a choice of bidding strategy, would always opt to use the advanced bidding strategy over the simple bidding strategy, regardless of the composition of the population that it finds itself in.\nFigure 9 shows the average normalised exposure of the advertising agents when the population is composed of different fractions of the two bidding strategies.\nIn each case, the advanced bidding strategy shows a significant gain in performance compared to the simple bidding strategy (p < 0.0005 in Student's t-test), and thus, gains improved exposure over all population compositions.\nFigure 8: Comparison of an evenly mixed population of advertising agents using simple and advanced bidding strategies over a range of parameter settings.\nResults are averaged over 50 simulation runs and error bars indicate the standard error in the mean.\nFigure 9: Comparison of an unevenly mixed population of advertising agents using simple and advanced bidding strategies.\nResults are averaged over 50 simulation runs and error bars indicate the standard error in the mean.\n7.\nCONCLUSIONS\nIn this paper, we presented an advanced bidding strategy for use by advertising agents within the BluScreen advertising system.\nThis bidding strategy enabled advertising agents to model and predict the arrival and departure of users, and also to model their success within a first-price sealed bid auction by observing both the bids that they themselves submitted and the winning bid.\nThe expected utility, measured as the number of users to whom the advertising agent exposes its advert, was shown to depend on these factors, and resulted in a complex expression where the expected utility of each auction depended on the success or otherwise of earlier auctions.\nWe presented an algorithm based upon simulated annealing to solve for the optimal bidding strategy, and in simulation, this bidding strategy was shown to significantly outperform a simple bidding strategy that had none of these features.\nIts performance closely approached that of a central optimal allocation, with perfect knowledge of the arrival and departure of users, despite the uncertain environment in which the strategy must operate.\nOur future work in this area consists of extending this bidding strategy to richer environments where there are multiple interrelated display screens, where maintaining profiles of users allows a richer matching of user to advert, and where alternative auction mechanisms are applied (we are particularly interested in introducing a 'pay per user' auction setting similar to the 'pay per click' auctions employed by internet search websites).\nThis work will continue to be done in conjunction with the deployment of more BluScreen prototypes in order to gain further real world experience.","keyphrases":["advanc bid agent","bid agent","auction","bluscreen","experiment public advertis system","bluetooth","probabilist 
model","centralis optim alloc","distribut artifici intellig","decentralis multi-agent auction mechan","independ poisson process","decis theoret approach","stochast optimis algorithm","public displai"],"prmu":["P","P","P","P","P","P","P","P","U","M","U","U","U","M"]} {"id":"I-5","title":"Towards Self-organising Agent-based Resource Allocation in a Multi-Server Environment","abstract":"Distributed applications require distributed techniques for efficient resource allocation. These techniques need to take into account the heterogeneity and potential unreliability of resources and resource consumers in distributed environments. In this paper we propose a distributed algorithm that solves the resource allocation problem in distributed multi-agent systems. Our solution is based on the self-organisation of agents, which does not require any facilitator or management layer. The resource allocation in the system is a purely emergent effect. We present results of the proposed resource allocation mechanism in the simulated static and dynamic multi-server environment.","lvl-1":"Towards Self-organising Agent-based Resource Allocation in a Multi-Server Environment Tino Schlegel1, Ryszard Kowalczyk2 Swinburne University of Technology Faculty of Information and Communication Technologies Hawthorn, 3122 Victoria, Australia {tschlegel1, rkowalczyk2}@ict.swin.edu.au ABSTRACT Distributed applications require distributed techniques for efficient resource allocation.\nThese techniques need to take into account the heterogeneity and potential unreliability of resources and resource consumers in distributed environments.\nIn this paper we propose a distributed algorithm that solves the resource allocation problem in distributed multi-agent systems.\nOur solution is based on the self-organisation of agents, which does not require any facilitator or management layer.\nThe resource allocation in the system is a purely emergent effect.\nWe present results of the proposed resource 
allocation mechanism in the simulated static and dynamic multi-server environment.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Coherence and coordination General Terms Algorithms 1.\nINTRODUCTION With the increasing popularity of distributed computing technologies such as Grid [12] and Web services [20], the Internet is becoming a powerful computing platform where different software peers (e.g., agents) can use existing computing resources to perform tasks.\nIn this sense, each agent is a resource consumer that acquires a certain amount of resources for the execution of its tasks.\nIt is difficult for a central resource allocation mechanism to collect and manage the information about all shared resources and resource consumers to effectively perform the allocation of resources.\nHence, distributed solutions of the resource allocation problem are required.\nResearchers have recognised these requirements [10] and proposed techniques for distributed resource allocation.\nA promising class of such distributed approaches is based on economic market models [4], inspired by principles of real stock markets.\nEven though those approaches are distributed, they usually require a facilitator for pricing, resource discovery and dispatching jobs to resources [5, 9].\nAnother mainly unsolved problem of those approaches is the fine-tuning of price, time and budget constraints to enable efficient resource allocation in large, dynamic systems [22].\nIn this paper we propose a distributed solution of the resource allocation problem based on self-organisation of the resource consumers in a system with limited resources.\nIn our approach, agents dynamically allocate tasks to servers that provide a limited amount of resources, and autonomously select the execution platform for a task rather than asking a resource broker to do the allocation.\nAll control needed for our algorithm is distributed among the agents in the system.\nThey 
continuously optimise the resource allocation process over their lifetime, adapting to changes in the availability of shared resources by learning from past allocation decisions.\nThe only information available to all agents is resource load and allocation success information from past resource allocations.\nAdditional resource load information about servers is not disseminated.\nThe basic concept of our solution is inspired by inductive reasoning and bounded rationality introduced by W. Brian Arthur [2].\nThe proposed mechanism does not require a central controlling authority, resource management layer or introduce additional communication between agents to decide which task is allocated on which server.\nWe demonstrate that this mechanism performs well in dynamic systems with a large number of tasks and can easily be adapted to various system sizes.\nIn addition, the overall system performance is not affected in case agents or servers fail or become unavailable.\nThe proposed approach provides an easy way to implement distributed resource allocation and takes into account multi-agent system tendencies toward autonomy, heterogeneity and unreliability of resources and agents.\nThis proposed technique can be easily supplemented by techniques for queuing or rejecting resource allocation requests of agents [11].\nSuch self-managing capabilities of software agents allow a reliable resource allocation even in an environment with unreliable resource providers.\nThis can be achieved by the mutual interactions between agents by applying techniques from complex system theory.\nSelf-organisation of all agents leads to a self-organisation of the system resources and is an emergent property of the system [21].\n74 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nThe remainder of the paper is structured as follows: The next section gives an overview of the related work already done in the area of load balancing, resource allocation or scheduling.\nSection 3 describes the model of a multi-agent 
environment that was used to conduct simulations for a performance evaluation.\nSections 4 and 5 describe the distributed resource allocation algorithm and present various experimental results.\nA summary, conclusion and outlook to future work conclude the paper.\n2.\nRELATED WORK Resource allocation is an important problem in the area of computer science.\nOver the past years, solutions based on different assumptions and constraints have been proposed by different research groups [7, 3, 15, 10].\nGenerally speaking, resource allocation is a mechanism or policy for the efficient and effective management of the access to a limited resource or set of resources by its consumers.\nIn the simplest case, resource consumers ask a central broker or dispatcher for available resources where the resource consumer will be allocated.\nThe broker usually has full knowledge about all system resources.\nAll incoming requests are directed to the broker, who is the sole decision maker.\nIn those approaches, the resource consumer cannot influence the allocation decision process.\nLoad balancing [3] is a special case of the resource allocation problem using a broker that tries to be fair to all resources by balancing the system load equally among all resource providers.\nThis mechanism works best in a homogeneous system.\nA simple distributed technique for resource management is capacity planning by refusing or queuing incoming agents to avoid resource overload [11].\nFrom the resource owner's perspective, this technique is important to prevent overload at the resource but it is not sufficient for effective resource allocation.\nThis technique can only provide a good supplement for distributed resource allocation mechanisms.\nMost of today's techniques for resource allocation in grid computing toolkits like Globus [12] or Condor-G [13] coordinate the resource allocation with an auctioneer, arbitrator, dispatcher, scheduler or manager.\nThose coordinators usually need to have global 
knowledge on the state of all system resources.\nAn example of a dynamic resource allocation algorithm is the Cactus project [1] for the allocation of computationally very expensive jobs.\nThe value of distributed solutions for the resource allocation problem has been recognised by research [10].\nInspired by the principles in stock markets, economic market models have been developed for trading resources for the regulation of supply and demand in the grid.\nThese approaches use different pricing strategies such as posted price models, different auction methods or a commodity market model.\nUsers try to purchase cheap resources required to run the job while providers try to make as much profit as possible and operate the available resources at full capacity.\nA collection of different distributed resource allocation techniques based on market models is presented in Clearwater [10].\nBuyya et al. developed a resource allocation framework based on the regulation of supply and demand [4] for Nimrod-G [6] with the main focus on job deadlines and budget constraints.\nThe Agent based Resource Allocation Model (ARAM) for grids is designed to schedule computationally expensive jobs using agents.\nA drawback of this model is the extensive use of message exchange between agents for periodic monitoring and information exchange within the hierarchical structure.\nSubtasks of a job migrate through the network until they find a resource that meets the price constraints.\nThe job's migration itinerary is determined by the resources, which connect them in different topologies [17].\nThe proposed mechanism in this paper eliminates the need for periodic information exchange about resource loads and does not need a connection topology between the resources.\nThere has been considerable work on decentralised resource allocation techniques using game theory published over recent years.\nMost of them are formulated as repetitive games in an idealistic and simplified environment.\nFor example, 
Arthur [2] introduced the so-called El Farol bar problem that does not allow a perfect, logical and rational solution.\nIt is an ill-defined decision problem that assumes and models inductive reasoning.\nIt is probably one of the most studied examples of complex adaptive systems derived from the human way of deciding ill-defined problems.\nA variation of the El Farol problem is the so-called minority game [8].\nIn this repetitive decision game, an odd number of agents have to choose between two resources based on past success information, each trying to allocate itself at the resource with the minority.\nGalstyan et al. [14] studied a variation with more than two resources, changing resource capacities and information from neighbour agents.\nThey showed that agents can adapt effectively to changing capacities in this environment using a set of simple look-up tables (strategies) per agent.\nAnother distributed technique that is employed for solving the resource allocation problem is based on reinforcement learning [18].\nSimilar to our approach, a set of agents compete for a limited number of resources based only on prior individual experience.\nIn that work, the system objective is to maximise system throughput while ensuring fairness to resources, measured as the average processing time per job unit.\nA resource allocation approach for sensor networks based on self-organisation techniques and reinforcement learning is presented in [16] with a main focus on the optimisation of energy consumption of network nodes.\nWe [19] proposed a self-organising load balancing approach for a single server with a focus on optimising the communication costs of mobile agents.\nA mobile agent will reject a migration to a remote agent server, if it expects the destination server to be already overloaded by other agents or server tasks.\nAgents make their decisions themselves based on forecasts of the server utilisation.\nIn this paper a solution for a multi-server environment is presented 
without consideration of communication or migration costs.\n3.\nMODEL DESCRIPTION We model a distributed multi-agent system as a network of servers L = {l1, ... , lm}, agents A = {a1, ... , an} and tasks T = {T1, ..., Tm}.\nEach agent has a number of tasks Ti that need to be executed during its lifetime.\nA task Ti requires U(Ti, t) resources for its execution at time t, independent of its execution server.\nResources for the execution of tasks are provided by each server li.\nThe task's execution location in general is specified by the map L : T \u00d7 t \u2192 L.\nAn agent has to know about the existence of server resources in order to allocate tasks at those resources.\nWe write LS (ai) to address the set of resources known by agent ai.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 75\nFigure 1: An illustration of our multi-server model with exclusive and shared resources for the agent execution.\nResources in the system can be used by all agents for the execution of tasks.\nThe amount of provided resources C(li, t) of each server can vary over time.\nThe resource utilisation of a server li at time t is calculated using equation 1, by adding the resource consumption U(Tj, t) of each task Tj that is executed at the resource at time t. 
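The bookkeeping just described can be sketched in a few lines of Python (an illustrative sketch only, not the authors' implementation; the class and field names are assumptions):

```python
# Illustrative sketch of the model: a server l provides capacity C(l, t),
# and its utilisation U(l, t) is the sum of the consumption U(T, t) of
# the tasks T currently allocated to it (equation 1). Time is omitted.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    consumption: int  # U(T, t), in abstract resource units (memory, CPU cycles)

@dataclass
class Server:
    name: str
    capacity: int  # C(l, t)
    tasks: list = field(default_factory=list)  # tasks with L(T, t) = l

    def utilisation(self) -> int:
        # Equation (1): sum the consumption of all tasks executing here
        return sum(task.consumption for task in self.tasks)

    def has_free_resources(self, demand: int) -> bool:
        # An agent only considers this server if the utilisation plus
        # the task's demand stays within the last known capacity
        return self.utilisation() + demand <= self.capacity

server = Server("l1", capacity=100, tasks=[Task("T1", 30), Task("T2", 50)])
print(server.utilisation())           # 80
print(server.has_free_resources(25))  # False: 80 + 25 exceeds 100
```

In the model the capacity C(li, t) may vary over time; the time argument is dropped here for brevity.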
All resource units used in our model represent real metrics such as memory or processor cycles.\nU(li, t) = \u2211j=1..n U(Tj, t) | L(Tj, t) = li (1)\nIn addition to the case where the total amount of system resources is sufficient to execute all tasks, we are also interested in the case where not enough system resources are provided to fulfil all allocation requests.\nThat is, the overall shared resource capacity is lower than the amount of resources requested by the agents.\nIn this case, some agents must postpone their allocation requests until free resources are expected.\nThe multi-agent system model used for our simulations is illustrated in Fig. 1.\n4.\nSELF-ORGANISING RESOURCE ALLOCATION The resource allocation algorithm as described in this section is integrated in each agent.\nThe only information required in order to make a resource allocation decision for a task is the server utilisation from completed task allocations at those servers.\nThere is no additional information dissemination about server resource utilisation or information about free resources.\nOur solution demonstrates that agents can self-organise in a dynamic environment without active monitoring information that causes a lot of network traffic overhead.\nAdditionally, we do not have any central controlling authority.\nAll behaviour that leads to the resource allocation is created by the effective competition of the agents for shared resources and is a purely emergent effect.\nThe agents in our multi-agent system compete for resources or a set of resources to execute tasks.\nThe collective action of these agents changes the environment and, as time goes by, they have to adapt to these changes to compete more effectively in the newly created environment.\nOur approach is based on different agent beliefs, represented by predictors and different information about their environment.\nAgents prefer a task allocation at a server with free resources.\nHowever, there is no way to be sure of the amount of free server 
resources in advance.\nAll agents have the same preferences, and an agent will allocate a task on a server if it expects enough free resources for its execution.\nThere is no communication between agents.\nActions taken by agents influence the actions of other agents indirectly.\nThe applied mechanism is inspired by inductive reasoning and bounded rationality principles [2].\nIt is derived from the human way of deciding ill-defined problems.\nHumans tend to keep in mind many hypotheses and act on the most plausible one.\nTherefore, each agent keeps track of the performance of a private collection of its predictors and selects the one that is currently most promising for decision making.\n4.1 Resource Allocation Algorithm This section describes the decision mechanism for our self-organising resource allocation.\nAll necessary control is integrated in the agents themselves.\nThere is no higher controlling authority, management layer for decision support or information distribution.\nAll agents have a set of predictors for each resource to forecast the future resource utilisation of these servers for potential task allocation.\nTo do so, agents use historical information from past task allocations at those resources.\nBased on the forecasted resource utilisation, the agent will make its resource allocation decision.\nAfter the task has finished its execution and returned the results back to the agent, the predictor performances are evaluated and the history information is updated.\nAlgorithm 1 shows the resource allocation algorithm for each agent.\nThe agent first predicts the next step's resource load for each server with historical information (lines 3-7).\nIf the predicted resource load plus the task's resource consumption is below the last known server capacity, this server is added to the list of candidates for the allocation.\nThe agent then evaluates if any free shared resources for the task allocation are expected.\nIn the case that no free resources are expected (line 
9), the agent will explore resources by allocating the task at a randomly selected server from all not predictable servers to gather resource load information.\nThis is the standard case at the beginning of the agent life-cycle as there is no information about the environment available.\nThe resource load prediction itself uses a set of r predictors P(a, l) := {pi|1 \u2264 i \u2264 r} per server.\nOne predictor pA \u2208 P of each set is called active predictor, which forecasts the next steps resource load.\nEach predictor is a function P : H \u2192 \u2135+ \u222a {0} from the space of history data H to a non-negative integer, which is the forecasted value.\nFor example, a predictor could forecast a resource load equal to the average amount of occupied resources during the last execution at this resource.\nA history H of resource load information is a list of up to m history items hi = (xi, yi), comprising the observation date xi and the observed value yi.\nThe most recent history item is h0.\nHm(li) = ((x0, y0), ..., (xk, yk))| 0 \u2264 k < m (2) Our algorithm uses a set of predictors rather than only one, to avoid that all agents make the same decision based on the predicted value leading to an invalidation of their beliefs.\nImagine that only one shared resource is known by a number of agents using one predictor forecasting the 76 The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) ResourceLoad Time (a) Predictor6 Predictor7 Predictor 8 Predictor9 Predictor10 Predictor2 Predictor 4 Predictor 3 Predictor5 Predictor1 (b) Figure 2: (a) Collected resource load information from previous task allocations that is used for future predictions.\n(b) Predictor``s probability distribution for selecting the new active predictor.\nsame value as the last known server resource utilisation.\nAll agents that allocated a task at a server that was slightly overloaded would dismiss another allocation at this server as they expect the server to be 
In such a scenario, the server would end up with a large amount of free resources. A set of different predictors that predict different values avoids this situation of invalidated agent beliefs [19]. An example of the collected resource load information from the last 5 visits of an agent at a shared resource can be seen in Fig. 2(a). It shows that the resource was visited frequently, which means free resources for execution were available and an exploration of other servers was unnecessary. This may change in the future, as the resource load has increased significantly of late.

If the set of servers predicted to have free resources available is not empty (line 13), the agent selects one of them for the allocation. We have implemented two alternative algorithms for the selection of a server for the task allocation.

Algorithm 1 Resource allocation algorithm of an agent
1  L ← ∅  // servers with free resources
2  u ← U(T, t+1)  // task resource consumption
3  for all P(a, l) | l ∈ L_S(a) do
4    U(l) ← resourceLoadPrediction(P(a, l), t+1)
5    if U(l) + u ≤ C(l) then
6      L ← L ∪ {P(a, l)}
7    end if
8  end for
9  if L = ∅ then
10   // all unpredictable shared resources
11   E ← L_S \ {l ∈ L_S(a) | P(a, l) ∈ L}
12   allocationServer ← a random element of E
13 else
14   allocationServer ← serverSelection(L)
15 end if
16 return allocationServer

Algorithm 2 shows the first method, a non-deterministic selection according to the predictability of the server resource utilisation. A probability distribution is calculated from the confidence levels of the resource predictions. The confidence level depends on three factors: the accuracy of the active predictor, the amount of historical information about the server, and the average age of the history information (see Eq. 3). The server with the highest confidence level has the biggest chance to be selected as
the active server.

G(P) = w_1 · size(H)/m + w_2 · Age(H)/max(Age(H)) + w_3 · g(p)/max(g(p))   (3)

where the w_i are weights, size(H) is the number of items in the history, m is the maximal number of history values, Age(H) is the average age of the historical data, and g(p) is defined in Eq. 4.

Algorithm 2 serverSelection(L) - best predictable server
1 for all P(a, l) ∈ L do
2   calculate G(P)
3 end for
4 transform all G(P) into a probability distribution
5 return l ∈ L_S selected according to the probability distribution

Algorithm 3 serverSelection(L) - most free resources
1 for all P(a, l) ∈ L do
2   c(l) ← C(l) − U(l)
3 end for
4 return l ∈ L_S | c(l) is maximum

The second selection method, shown in Algorithm 3, is deterministic: from the set L of servers with expected free resources, the server with the most expected free resources is chosen. If all agents predicted the most free resources for one particular server, all of them would allocate their task at this server, which would invalidate the agents' beliefs. However, our experiments show that different individual history information and the non-deterministic active predictor selection usually prevent this situation.

If the resource allocation algorithm does not return any server (Alg. 1, line 16), an allocation at a resource is not recommended, and the agent will not allocate the task. This case happens only if a resource load prediction is possible for all servers but no free resources are expected.

After the task execution has finished, the evaluation process described in Algorithm 4 is performed. This process is divided into three cases. First, the task was not allocated at a resource. In this case, the agent cannot decide whether the decision not to allocate the task was correct. The agent then removes old historical data. This is necessary for a successful adaptation in the future.
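Equation 3 and the roulette-wheel selection of Algorithm 2 can be sketched as follows. This is a minimal Python sketch under our own naming, assuming equal weights w_i and non-negative confidence levels as input to the roulette wheel (the paper leaves the weight values open).

```python
import random

def confidence(size_H, m, age_H, max_age, g_p, max_g, w=(1/3, 1/3, 1/3)):
    """Eq. 3: weighted sum of the normalised history size, average history
    age and predictor rating g(p). Each term is scaled into [0, 1]."""
    w1, w2, w3 = w
    return (w1 * size_H / m
            + w2 * age_H / max_age
            + w3 * g_p / max_g)

def select_server(candidates, rng=random):
    """Algorithm 2: roulette-wheel pick proportional to confidence levels.

    `candidates` maps server id -> confidence level G(P); levels are
    assumed non-negative here."""
    servers = list(candidates)
    weights = [candidates[s] for s in servers]
    if sum(weights) == 0:          # no usable confidence: pick uniformly
        return rng.choice(servers)
    return rng.choices(servers, weights=weights, k=1)[0]
```

The deterministic alternative of Algorithm 3 would simply replace the roulette wheel by `max(candidates, key=...)` over the expected free capacities C(l) − U(l).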
If the agent never deleted old historical information, the prediction would always forecast that no free resources are available, and the agent would never again allocate a task at one of these resources. Old historical information is therefore removed from the agent's resource history using a decay rate. The decay rate is a cumulative distribution function that gives the probability that a history item is deleted after it has reached a certain age.

Figure 3: Decay rate of historical information.

The current implementation uses a constant probability density function over a configurable domain; Figure 3 shows an example of the resulting cumulative distribution function. Depending on the environment, the probability density function must be altered: if the number of potential servers per agent is high, historical information must be kept longer to avoid exploration of already-known resources, while a dynamic environment requires more up-to-date information to make reliable predictions.

The second case in the evaluation process (Alg. 4, line 5) describes the actions taken after a server was visited for the first time. The agent creates a new predictor set for this server and records the historical information. All predictors for this set are chosen randomly from some predefined set.

g(p) = Σ_{i=0}^{l} r_i   (4)

where r_i = 1 if the i-th decision was correct, r_i = 0 if the i-th outcome is unknown, and r_i = −1 if the i-th decision was wrong.

The general case (Alg. 4, line 8) is the evaluation after the agent allocated the task at a resource. The agent evaluates all predictors of the predictor set for this resource by predicting the resource load with every predictor based on the old historical data. Predictors that made a correct prediction, meaning that the resource allocation decision was correct, receive a positive rating. This is the case if the resource was not overloaded and free resources for execution were predicted, or if the resource was overloaded and the predictor would have prevented the allocation. All predictors that predicted values which would have led to wrong decisions receive negative ratings. In all other cases, including the case that no prediction was possible, a neutral rating is given. Based on these performance ratings, the confidence levels are calculated using Equation 4. The confidence of all predictors that cannot predict with the current historical information about the server is set to zero, to prevent their selection as the new active predictor. These values are transformed into a probability distribution, according to which the new active predictor is chosen, implemented as a roulette wheel selection. Figure 2(b) illustrates the probabilities of a set of 10 predictors, calculated from the predictor confidence levels. Even though predictor 9 has the highest selection probability, it was not chosen by the roulette wheel selection process as the active predictor in this example. This non-deterministic predictor selection prevents the invalidation of the agents' beliefs in case agents have the same set of predictors.

The prediction accuracy, that is, the error of the prediction compared to the observed value, is not taken into consideration. Suppose the active predictor predicts slightly above the resource capacity, which prevents an allocation at a resource although enough resources for the execution would in fact have been available. A less accurate prediction far below the capacity would have led to the correct decision and is therefore preferred.

The last action of the evaluation algorithm (Alg. 4, line 22) updates the history with the latest resource load information of the server. The oldest history item is overwritten if m history values are already recorded for the server.

Algorithm 4 Decision evaluation
1  if l ∈ L_E then
2    for all P(a, l) | l ∈ L_S(a) do
3      evaporate old historical data
4    end for
5  else if P(a, l) = null then
6    create(P(a, l))
7    update H(l)
8  else
9    for all p ∈ P(a, l) do
10     pred ← resourceLoadPrediction(p)
11     if (U(l) ≤ C(l) and pred + U(a, t) ≤ C(l)) or (U(l) > C(l) and pred + U(a, t) > C(l)) then
12       addPositiveRating(p)
13     else if (U(l) ≤ C(l) and pred + U(a, t) > C(l)) or (U(l) > C(l) and pred + U(a, t) ≤ C(l)) then
14       addNegativeRating(p)
15     else
16       addNeutralRating(p)
17     end if
18   end for
19   calculate all g(p); g(p) ← 0 if p cannot predict
20   transform all g(p) into a probability distribution
21   p_A ← p ∈ P(a, l) selected according to this probability distribution
22   update H(l)
23 end if

4.2 Remarks and Limitations of the Approach

Our prediction mechanism uses a number of different types of simple predictors rather than one sophisticated predictor. This ensures that agents can compete more effectively in a changing environment, since different types of predictors are suitable for different situations. Therefore, all predictors are evaluated after each decision and a new active predictor is selected. This non-deterministic selection of the new active predictor ensures that the agents' beliefs are not invalidated, which would happen if all agents made the same decision; this is especially important if there is only one shared resource available and each agent merely chooses whether or not to use it [19].

Our self-organising approach is robust against failures of resources or agents in the system. If they join or leave, the system self-organises quickly and adapts to the new conditions. There is no classical bottleneck or single point of failure as in centralised mechanisms. The limitation is the reliance on historical resource utilisation information about other servers: a forecast of the resource utilisation of a remote server is only possible if an agent has a sufficient amount of historical information about that shared resource.
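The evaluation step of Algorithm 4 — rating each predictor against the observed outcome, summing the ratings into g(p) (Eq. 4), and roulette-selecting the new active predictor — can be sketched as follows. Again, this is an illustrative Python reconstruction with our own names; shifting negative confidences into non-negative weights is one plausible way to realise the paper's "transform into a probability distribution", not necessarily the authors' implementation.

```python
import random

def rate(prediction, consumption, observed_load, capacity):
    """Return +1 / 0 / -1 depending on whether the predictor's forecast
    would have led to the correct allocation decision (Alg. 4, lines 11-17)."""
    if prediction is None:
        return 0                                    # unknown outcome
    would_allocate = prediction + consumption <= capacity
    was_free = observed_load <= capacity
    return 1 if would_allocate == was_free else -1

def g(ratings):
    """Eq. 4: confidence of a predictor as the sum of its recent ratings."""
    return sum(ratings)

def pick_active_predictor(g_values, rng=random):
    """Roulette-wheel selection over confidences shifted to non-negative
    weights (g(p) can be negative since ratings range over -1..+1)."""
    shift = min(g_values)
    weights = [gv - shift + 1e-9 for gv in g_values]  # epsilon keeps all > 0
    return rng.choices(range(len(g_values)), weights=weights, k=1)[0]
```

Note that `rate` rewards decision correctness rather than numeric accuracy, matching the paper's remark that an inaccurate forecast leading to the right allocation decision is preferred over an accurate one leading to the wrong decision.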
If the number of servers per agent is very large, there is no efficient way to gather historical information about all remote servers. This problem occurs if the amount of provided shared resources is limited and not sufficient for all resource consumers. In this case, an agent would randomly try all known servers until it finds one with free resources or none is left. In the worst case, by the time all servers have been tried, the historical information about them is already outdated.

5. EXPERIMENTAL EVALUATION

The first part of this section gives a short overview of the setup of our simulation environment; in the remainder, the results of the experiments are presented and discussed. All experiments are conducted in a special test-bed that simulates and models a multi-agent system. We have implemented this test-bed in the Java programming language, independent of any specific agent toolkit. It allows a variety of experiments in stable as well as dynamic environments with a configurable number of agents, tasks and servers. An event-driven model is used to trigger all activities in the system. For all simulations, we limited the number of history items for each server to 10, the number of performance ratings per predictor to 10, and assigned 10 predictors to every predictor set of each agent. All predictors are chosen randomly from a predefined set of 32 predictors of the following types, differing in their cycle or window sizes:

- n-cycle predictor: p(n) = y_n uses the n-th-last history value
- n-mean predictor: p(n) = (1/n) · Σ_{i=1}^{n} y_i uses the mean value of the n last history values
- n-linear regression predictor: p(n, t) = a · t + b uses the linear regression value from the last n history values, where a and b are calculated using linear regression
with least-squares fitting over the last n history items
- n-distribution predictor: uses a random value from the frequency distribution of the n last history values
- n-mirror predictor: p(n) = 2 · H̄ − y_n uses the mirror image of the n-th-last history value around the mean of all history values

The efficiency of our proposed self-organising resource allocation is assessed by the resource load development of each server over the simulation, as well as by the total resource load cumulated over all shared resources. The resource load of each server is calculated using Equation 1 as the sum of the resource consumption of all agents currently executing at this server. The total resource load of the system is the sum of the resource loads of all resources. The self-organising resource allocation algorithm has random elements; therefore, the presented results show mean values and standard deviations calculated over 100 repeated experiments.

5.1 Experimental Setup

The following parameters have an impact on the resource allocation process. We give an overview of the parameters and a short description.

- Agents: The number of agents involved in the resource allocation. This number varies in the experiments between 650 and 750, depending on the total amount of available system resources.
- Resource consumption: Each task consumes server resources for its execution. The resource consumption is assigned randomly from an interval to each task prior to its allocation. It is specified in resource units, which correspond to real-world metrics like memory or processor cycles.
- Agent home server: All agents are located on a home agent server. The resources of these servers are not considered in our simulation and do not affect the resource allocation performance.
- Server resources: The experiments use servers with different amounts of available shared resources. The first experiment is conducted in a static
server environment that provides a constant amount of shared resources, while the other experiment varies the available server resources during the simulation. The total amount of resources remains constant in both experiments.
- Execution time: The time a task needs for its execution, independent of the execution platform. For this time, the task consumes the assigned amount of server resources. This parameter is randomly assigned before the execution.
- Task creation time: The time before the next task is created after a successful or unsuccessful completion. This parameter influences the age of the historical information about resources and has a major influence on the length of the initial adaptation phase. It is randomly assigned after the task was completed.

5.2 Experimental Results

This section shows results from selected experiments that demonstrate the performance of our proposed resource allocation mechanism. The first experiment shows the performance in a stable environment, where a number of agents allocate tasks to servers that provide a constant amount of resources. The second experiment was conducted in a dynamic server environment with a constant number of agents.

The first experiment runs our model in a stable 3-server environment that provides a total amount of 7000 resource units. The resource capacity of each server remains constant over the experiment. We used 650 agents with execution times between 1 and 15 time units and task creation times in the interval [0, 30] time units. The task's resource consumption is randomly assigned from the interval [1, 45] resource units. Figure 4 shows the results from 100 repetitions of this experiment. Figure 4(a) shows that the total amount of provided resources is, on average, larger than the resource demand. At the beginning of the experiment, all agents allocate their tasks randomly at one of the available servers and explore the available
capacities and resource utilisations for about 150 time units. During this initial exploration phase, the average resource load of each server is at a similar level. This causes an overload situation at server 1, because of its low capacity of shared resources, and a large amount of free resources on server 2. Agents that allocated tasks to server 1 detect the overload situation and randomly explore other available servers, finding free resources at server 2. After this learning period, the agents have self-organised themselves in this stable environment and found a stable solution for the allocation of all tasks.

Figure 4: Results of experiment 1 in a static 3-server environment averaged over 100 repetitions: (a) total resource load versus total shared resource capacity; (b)-(d) resource load of servers 0-2.

The standard deviations of the resource loads are small for each server, which indicates that our distributed approach finds stable solutions in almost every run. This experiment used Algorithm 2 for the selection of the active server. We also ran the same experiment with the most-free-resources mechanism for selecting the active server; the resource allocation for each server is similar, and the absolute amount of free resources per server is almost the same.

Experiment 2 was conducted in a dynamic 3-server environment with 750 agents. The amount of resources of server 0 and server 1 changes periodically, while the total
amount of available resources remains constant. Server 0 has an initial capacity of 1000 units; server 1 starts with a capacity of 4000 units. The change in capacity starts after 150 time units, which is approximately the end of the learning phase. Figure 5(b, c, d) shows the behaviour of our self-organising resource allocation in this environment. All agents use the deterministic most-free-resources mechanism to select the active server. It can be seen in Fig. 5(b) and 5(c) that the amount of resources allocated to server 0 and server 1 changes periodically with the amount of provided resources. This shows that agents can sense available resources in this dynamic environment and are able to adapt to these changes. The resource load development of server 2 (see Fig. 5(d)) also shows a periodic change, because some agents try to allocate tasks to this server when their previously favoured server reduces its amount of shared resources. The total resource load of all shared resources is constant over the experiment, which indicates that all agents allocate their tasks to one of the shared resources (cf. Fig.
4(a)).

6. CONCLUSIONS AND FUTURE WORK

In this paper, a self-organising distributed resource allocation technique for multi-agent systems was presented. We enable agents to select the execution platform for their tasks themselves, before each execution at run-time. In our approach, the agents compete for an allocation at one of the available shared resources.

Figure 5: Results of experiment 2 in a dynamic server environment averaged over 100 repetitions: (a) total resource load versus total shared resource capacity; (b)-(d) resource load of the individual servers.

Agents sense their server environment and adapt their actions to compete more efficiently in the newly created environment. This process is adaptive and has strong feedback, as allocation decisions indirectly influence the decisions of other agents. The resource allocation is a purely emergent effect. Our mechanism demonstrates that resource allocation can be achieved through the effective competition of individual and autonomous agents: they need neither coordination or information from a higher authority, nor additional direct communication between agents. The mechanism was inspired by inductive reasoning and bounded rationality principles, which enable the agents to adapt their strategies and compete effectively in a dynamic environment. If a server becomes unavailable, the agents adapt quickly to the new situation by exploring other resources, or they remain at the home server if an allocation is not possible. Especially in dynamic and scalable environments such as grid systems, a robust and distributed mechanism for
resource allocation is required. Our self-organising resource allocation approach was evaluated in a number of simulation experiments in a dynamic environment of agents and server resources. The presented results for this new approach to strategic migration optimisation are very promising and justify further investigation in a real multi-agent system environment. It is a distributed, scalable and easy-to-understand policy for the regulation of supply and demand of resources. All control is implemented in the agents; a simple decision mechanism based on the differing beliefs of the agents creates an emergent behaviour that leads to effective resource allocation. The approach can easily be extended or supported by resource balancing or queuing mechanisms provided by the resources.

Our approach adapts to changes in the environment, but it is not evolutionary: there is no discovery of new strategies by the agents, and the set of predictors stays the same over their whole lifetime. We believe that such evolution could further improve the system's behaviour over the long term and could be investigated in the future. The evolution would be slow and selective, and would not influence the system behaviour in the short-term period covered by our experimental results.

In the near future, we will investigate whether an automatic adaptation of the decay rate of historical information in our algorithm is possible and can improve the resource allocation performance. The decay rate is currently predefined and must be altered manually depending on the environment. A large number of shared resources requires older historical information to be kept, to avoid too frequent resource exploration; in contrast, a dynamic environment with varying capacities requires more up-to-date information to make reliable predictions. We are aware of the long learning phase in environments with a large number of
shared resources known by each agent. If the agents request more resources than all servers together provide, all agents will randomly explore all known servers. This process of acquiring resource load information about all servers can take a long time when not enough shared resources for all tasks are provided. In the worst case, by the time all servers have been explored, the historical information of some servers is already outdated and the exploration starts again. In this situation, it is difficult for an agent to efficiently gather historical information about all remote servers. This issue needs more investigation in the future.

7. REFERENCES
[1] G. Allen, W. Benger, T. Dramlitsch, T. Goodale, H.-C. Hege, G. Lanfermann, A. Merzky, T. Radke, E. Seidel, and J. Shalf. Cactus Tools for Grid Applications. In Cluster Computing, volume 4, pages 179-188, Hingham, MA, USA, 2001. Kluwer Academic Publishers.
[2] W. B. Arthur. Inductive Reasoning and Bounded Rationality. American Economic Review (Papers and Proceedings), 84(2):406-411, May 1994.
[3] T. Bourke. Server Load Balancing. O'Reilly Media, 1st edition, August 2001.
[4] R. Buyya. Economic-based Distributed Resource Management and Scheduling for Grid Computing. PhD thesis, Monash University, Melbourne, Australia, May 2002.
[5] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger. Economic Models for Resource Management and Scheduling in Grid Computing. Special Issue on Grid Computing Environments of the Journal Concurrency and Computation, 13-15(14):1507-1542, 2002.
[6] R. Buyya, S. Chapin, and D. DiNucci. Architectural Models for Resource Management in the Grid. In Proceedings of the First International Workshop on Grid Computing, pages 18-35. Springer LNCS, 2000.
[7] T. L. Casavant and J. G.
Kuhl. A taxonomy of scheduling in general-purpose distributed computing systems. IEEE Transactions on Software Engineering, 14(2):141-154, February 1988.
[8] D. Challet and Y. Zhang. Emergence of Cooperation and Organization in an Evolutionary Game. Physica A, 407(246), 1997.
[9] K.-P. Chow and Y.-K. Kwok. On load balancing for distributed multiagent computing. In IEEE Transactions on Parallel and Distributed Systems, volume 13, pages 787-801. IEEE, August 2002.
[10] S. H. Clearwater. Market-based Control: A Paradigm for Distributed Resource Allocation. World Scientific, Singapore, 1996.
[11] C. Flüs. Capacity Planning of Mobile Agent Systems: Designing Efficient Intranet Applications. PhD thesis, Universität Duisburg-Essen (Germany), Feb. 2005.
[12] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. International Journal of Supercomputing Applications, 11(2):115-129, 1997.
[13] J. Frey, T. Tannenbaum, I. Foster, M. Livny, and S. Tuecke. Condor-G: A Computation Management Agent for Multi-Institutional Grids. Cluster Computing, 5(3):237-246, 2002.
[14] A. Galstyan, S. Kolar, and K. Lerman. Resource allocation games with changing resource capacities. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pages 145-152, Melbourne, Australia, 2003. ACM Press, New York, NY, USA.
[15] C. Georgousopoulos and O. F. Rana. Combining state and model-based approaches for mobile agent load balancing. In SAC '03: Proceedings of the 2003 ACM Symposium on Applied Computing, pages 878-885, New York, NY, USA, 2003. ACM Press.
[16] G. Mainland, D. C. Parkes, and M. Welsh. Decentralized Adaptive Resource Allocation for Sensor Networks. In Proceedings of the 2nd USENIX Symposium on Networked Systems Design and Implementation (NSDI '05), May 2005.
[17] S. Manvi, M. Birje, and B.
Prasad. An Agent-based Resource Allocation Model for Computational Grids. Multiagent and Grid Systems - An International Journal, 1(1):17-27, 2005.
[18] A. Schaerf, Y. Shoham, and M. Tennenholtz. Adaptive Load Balancing: A Study in Multi-Agent Learning. In Journal of Artificial Intelligence Research, volume 2, pages 475-500, 1995.
[19] T. Schlegel, P. Braun, and R. Kowalczyk. Towards Autonomous Mobile Agents with Emergent Migration Behaviour. In Proceedings of the Fifth International Joint Conference on Autonomous Agents & Multi Agent Systems (AAMAS 2006), Hakodate (Japan), pages 585-592. ACM Press, May 2006.
[20] W3C. Web services activity, 2002. http://www.w3.org/2002/ws - last visited 23.10.2006.
[21] M. M. Waldrop. Complexity: The Emerging Science at the Edge of Order and Chaos. Simon & Schuster, 1st edition, 1992.
[22] R. Wolski, J. S. Plank, J. Brevik, and T. Bryan. Analyzing Market-Based Resource Allocation Strategies for the Computational Grid. In International Journal of High Performance Computing Applications, volume 15, pages 258-281. Sage Science Press, 2001.

Towards Self-organising Agent-based Resource Allocation in a Multi-Server Environment

ABSTRACT

Distributed applications require distributed techniques for efficient resource allocation. These techniques need to take into account the heterogeneity and potential unreliability of resources and resource consumers in distributed environments. In this paper we propose a distributed algorithm that solves the resource allocation problem in distributed multi-agent systems. Our solution is based on the self-organisation of agents and does not require any facilitator or management layer; the resource allocation in the system is a purely emergent effect. We present results of the proposed resource allocation mechanism in a simulated static and dynamic multi-server
environment.

1. INTRODUCTION

With the increasing popularity of distributed computing technologies such as Grid [12] and Web services [20], the Internet is becoming a powerful computing platform where different software peers (e.g., agents) can use existing computing resources to perform tasks. In this sense, each agent is a resource consumer that acquires a certain amount of resources for the execution of its tasks. It is difficult for a central resource allocation mechanism to collect and manage the information about all shared resources and resource consumers needed to perform the allocation effectively; hence, distributed solutions of the resource allocation problem are required. Researchers have recognised these requirements [10] and proposed techniques for distributed resource allocation. A promising kind of such distributed approaches is based on economic market models [4], inspired by the principles of real stock markets. Even though those approaches are distributed, they usually require a facilitator for pricing, resource discovery and dispatching jobs to resources [5, 9]. Another largely unsolved problem of these approaches is the fine-tuning of price, time and budget constraints to enable efficient resource allocation in large, dynamic systems [22].

In this paper we propose a distributed solution of the resource allocation problem based on the self-organisation of resource consumers in a system with limited resources. In our approach, agents dynamically allocate tasks to servers that provide a limited amount of resources, and each agent selects the execution platform for its task autonomously rather than asking a resource broker to do the allocation. All control needed for our algorithm is distributed among the agents in the system. They optimise the resource allocation process continuously over their lifetime, reacting to changes in the availability of shared resources by learning from past allocation decisions. The only information available to
all agents is the resource load and allocation success information from past resource allocations; additional resource load information about servers is not disseminated. The basic concept of our solution is inspired by the inductive reasoning and bounded rationality introduced by W. Brian Arthur [2]. The proposed mechanism does not require a central controlling authority or resource management layer, nor does it introduce additional communication between agents to decide which task is allocated on which server. We demonstrate that this mechanism performs well in dynamic systems with a large number of tasks and can easily be adapted to various system sizes. In addition, the overall system performance is not affected when agents or servers fail or become unavailable. The proposed approach provides an easy way to implement distributed resource allocation and takes into account multi-agent system tendencies toward autonomy, heterogeneity and unreliability of resources and agents. The technique can easily be supplemented by techniques for queuing or rejecting resource allocation requests of agents [11]. Such self-managing capabilities of software agents allow a reliable resource allocation even in an environment with unreliable resource providers. This can be achieved through the mutual interactions between agents, applying techniques from complex system theory. Self-organisation of all agents leads to a self-organisation of the

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

2. RELATED WORK

Resource allocation is an important problem in the area of computer science. Over the past years, solutions based on different assumptions and constraints have been proposed by different research groups [7, 3, 15, 10]. Generally speaking, resource allocation is a mechanism or policy for the efficient and effective management of the access to a limited resource, or set of resources, by its consumers. In the simplest case, resource consumers ask a central broker or dispatcher for
available resources to which the resource consumer will be allocated.\nThe broker usually has full knowledge about all system resources.\nAll incoming requests are directed to the broker, who is the sole decision maker.\nIn those approaches, the resource consumer cannot influence the allocation decision process.\nLoad balancing [3] is a special case of the resource allocation problem using a broker that tries to be fair to all resources by balancing the system load equally among all resource providers.\nThis mechanism works best in a homogeneous system.\nA simple distributed technique for resource management is capacity planning by refusing or queuing incoming agents to avoid resource overload [11].\nFrom the resource owner's perspective, this technique is important to prevent overload at the resource, but it is not sufficient for effective resource allocation.\nThis technique can only provide a good supplement to distributed resource allocation mechanisms.\nMost of today's techniques for resource allocation in grid computing toolkits like Globus [12] or Condor-G [13] coordinate the resource allocation with an auctioneer, arbitrator, dispatcher, scheduler or manager.\nThose coordinators usually need to have global knowledge of the state of all system resources.\nAn example of a dynamic resource allocation algorithm is the Cactus project [1] for the allocation of computationally very expensive jobs.\nThe value of distributed solutions for the resource allocation problem has been recognised by research [10].\nInspired by the principles of stock markets, economic market models have been developed for trading resources to regulate supply and demand in the grid.\nThese approaches use different pricing strategies such as posted price models, different auction methods or a commodity market model.\nUsers try to purchase cheap resources required to run the job while providers try to make as much profit as possible and operate the available resources at full capacity.\nA
collection of different distributed resource allocation techniques based on market models is presented in Clearwater [10].\nBuyya et al. developed a resource allocation framework based on the regulation of supply and demand [4] for Nimrod-G [6] with the main focus on job deadlines and budget constraints.\nThe Agent-based Resource Allocation Model (ARAM) for grids is designed to schedule computationally expensive jobs using agents.\nA drawback of this model is the extensive use of message exchange between agents for periodic monitoring and information exchange within the hierarchical structure.\nSubtasks of a job migrate through the network until they find a resource that meets the price constraints.\nThe job's migration itinerary is determined by the resources, which are connected in different topologies [17].\nThe mechanism proposed in this paper eliminates the need for periodic information exchange about resource loads and does not need a connection topology between the resources.\nThere has been considerable work on decentralised resource allocation techniques using game theory published over recent years.\nMost of them are formulated as repetitive games in an idealistic and simplified environment.\nFor example, Arthur [2] introduced the so-called El Farol bar problem, which does not allow a perfect, logical and rational solution.\nIt is an ill-defined decision problem that assumes and models inductive reasoning.\nIt is probably one of the most studied examples of complex adaptive systems derived from the human way of deciding ill-defined problems.\nA variation of the El Farol problem is the so-called minority game [8].\nIn this repetitive decision game, an odd number of agents have to choose between two resources based on past success information, each trying to allocate itself at the resource chosen by the minority.\nGalstyan et al.
[14] studied a variation with more than two resources, changing resource capacities and information from neighbour agents.\nThey showed that agents can adapt effectively to changing capacities in this environment using a set of simple look-up tables (strategies) per agent.\nAnother distributed technique employed for solving the resource allocation problem is based on reinforcement learning [18].\nSimilar to our approach, a set of agents compete for a limited number of resources based only on prior individual experience.\nIn that paper, the system objective is to maximise system throughput while ensuring fairness to resources, measured as the average processing time per job unit.\nA resource allocation approach for sensor networks based on self-organisation techniques and reinforcement learning is presented in [16] with a main focus on the optimisation of the energy consumption of network nodes.\nWe [19] proposed a self-organising load balancing approach for a single server with a focus on optimising the communication costs of mobile agents.\nA mobile agent will reject a migration to a remote agent server if it expects the destination server to be already overloaded by other agents or server tasks.\nAgents make their decisions themselves based on forecasts of the server utilisation.\nIn this paper a solution for a multi-server environment is presented, without consideration of communication or migration costs.\n3.\nMODEL DESCRIPTION\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.\nSELF-ORGANISING RESOURCE ALLOCATION\n4.1 Resource Allocation Algorithm\n4.2 Remarks and Limitation of the Approach\n5.\nEXPERIMENTAL EVALUATION\n5.1 Experimental Setup\n5.2 Experimental Results\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper a self-organising distributed resource allocation technique for multi-agent systems was presented.\nWe enable agents to select the execution platform for their tasks themselves before each execution at run-time.\nIn our approach the agents compete for an allocation at one of the available shared resources.\nFigure 5: Results of experiment 2 in a dynamic server environment averaged over 100 repetitions.\nAgents sense their server environment and adapt their actions to compete more efficiently in the newly created environment.\nThis process is adaptive and exhibits strong feedback, as allocation decisions indirectly influence the decisions of other agents.\nThe resource allocation is a purely emergent effect.\nOur mechanism demonstrates that resource allocation can be done by the effective competition of individual and autonomous agents.\nNeither do they need coordination or information from a higher authority, nor is additional direct communication between agents required.\nThis mechanism was inspired by inductive reasoning and bounded rationality principles, which enable the agents to adapt their strategies to compete effectively in a dynamic environment.\nIn case a server becomes unavailable, the agents can adapt quickly to this new situation by exploring new resources or remaining at the home server if an allocation is not possible.\nEspecially in dynamic and scalable environments such as grid systems, a robust and distributed mechanism for resource allocation is required.\nOur self-organising resource allocation approach was evaluated with a number of simulation experiments in a dynamic environment of agents and server resources.\nThe presented results for this new approach to strategic migration optimisation are very promising and justify further investigation in a real multi-agent system environment.\nIt is a distributed, scalable and
easy-to-understand policy for the regulation of supply and demand of resources.\nAll control is implemented in the agents.\nA simple decision mechanism based on different beliefs of the agent creates an emergent behaviour that leads to effective resource allocation.\nThis approach can easily be extended or supported by resource balancing\/queuing mechanisms provided by the resources.\nOur approach adapts to changes in the environment, but it is not evolutionary.\nThere is no discovery of new strategies by the agents.\nThe set of predictors stays the same over the agent's whole lifetime.\nWe believe that an evolutionary discovery of new strategies could further improve the system's behaviour over a long-term period and could be investigated in the future.\nSuch evolution would be very slow and selective and would not influence the system behaviour in the short-term period covered by our experimental results.\nIn the near future we will investigate whether an automatic adaptation of the decay rate of historical information in our algorithm is possible and can improve the resource allocation performance.\nThe decay rate is currently predefined and must be altered manually depending on the environment.\nA large number of shared resources requires older historical information to avoid overly frequent resource exploration.\nIn contrast, a dynamic environment with varying capacities requires more up-to-date information to make more reliable predictions.\nWe are aware of the long learning phase in environments with a large number of shared resources known by each agent.\nIf agents request more resources than all servers together provide as shared resources, all agents will randomly explore all known servers.\nThis process of acquiring resource load information about all servers can take a long time when not enough shared resources are provided for all tasks.\nIn the worst case, by the time all servers have been explored, the historical information of some servers could already be outdated and
the exploration starts again.\nIn this situation, it is difficult for an agent to efficiently gather historical information about all remote servers.\nThis issue needs more investigation in the future.","lvl-4":"Towards Self-organising Agent-based Resource Allocation in a Multi-Server Environment\nABSTRACT\nDistributed applications require distributed techniques for efficient resource allocation.\nThese techniques need to take into account the heterogeneity and potential unreliability of resources and resource consumers in a distributed environments.\nIn this paper we propose a distributed algorithm that solves the resource allocation problem in distributed multiagent systems.\nOur solution is based on the self-organisation of agents, which does not require any facilitator or management layer.\nThe resource allocation in the system is a purely emergent effect.\nWe present results of the proposed resource allocation mechanism in the simulated static and dynamic multi-server environment.\n1.\nINTRODUCTION\nIn this sense, each agent is a resource consumer that acquires a certain amount of resources for the execution of its tasks.\nIt is difficult for a central resource allocation mechanism to collect and manage the information about all shared resources and resource consumers to effectively perform the allocation of resources.\nHence, distributed solutions of the resource allocation problem are required.\nResearchers have recognised these requirements [10] and proposed techniques for distributed resource allocation.\nA promising kind of such distributed approaches are based on economic market models [4], inspired by principles of real stock markets.\nEven if those approaches are distributed, they usually require a facilitator for pricing, resource discovery and dispatching jobs to resources [5, 9].\nAnother mainly unsolved problem of those approaches is the fine-tuning of price and time, budget constraints to enable efficient resource allocation in large, dynamic 
systems [22].\nIn this paper we propose a distributed solution of the resource allocation problem based on self-organisation of the resource consumers in a system with limited resources.\nIn our approach, agents dynamically allocate tasks to servers that provide a limited amount of resources.\nIn our approach, agents select autonomously the execution platform for the task rather than ask a resource broker to do the allocation.\nAll control needed for our algorithm is distributed among the agents in the system.\nThey optimise the resource allocation process continuously over their lifetime to changes in the availability of shared resources by learning from past allocation decisions.\nThe only information available to all agents are resource load and allocation success information from past resource allocations.\nAdditional resource load information about servers is not disseminated.\nThe proposed mechanism does not require a central controlling authority, resource management layer or introduce additional communication between agents to decide which task is allocated on which server.\nWe demonstrate that this mechanism performs well dynamic systems with a large number of tasks and can easily be adapted to various system sizes.\nIn addition, the overall system performance is not affected in case agents or servers fail or become unavailable.\nThe proposed approach provides an easy way to implement distributed resource allocation and takes into account multi-agent system tendencies toward autonomy, heterogeneity and unreliability of resources and agents.\nThis proposed technique can be easily supplemented by techniques for queuing or rejecting resource allocation requests of agents [11].\nSuch self-managing capabilities of software agents allow a reliable resource allocation even in an environment with unreliable resource providers.\nThis can be achieved by the mutual interactions between agents by applying techniques from complex system theory.\nSelforganisation of all 
agents leads to a self-organisation of the\n2.\nRELATED WORK\nResource allocation is an important problem in the area of computer science.\nGenerally speaking, resource allocation is a mechanism or policy for the efficient and effective management of the access to a limited resource or set of resources by its consumers.\nIn the simplest case, resource consumers ask a central broker or dispatcher for available resources where the resource consumer will be allocated.\nThe broker usually has full knowledge about all system resources.\nIn those approaches, the resource consumer cannot influence the allocation decision process.\nLoad balancing [3] is a special case of the resource allocation problem using a broker that tries to be fair to all resources by balancing the system load equally among all resource providers.\nThis mechanism works best in a homogeneous system.\nA simple distributed technique for resource management is capacity planning by refusing or queuing incoming agents to avoid resource overload [11].\nFrom the resource owner perspective, this technique is important to prevent overload at the resource but it is not sufficient for effective resource allocation.\nThis technique can only provide a good supplement for distributed resource allocation mechanisms.\nThose coordinators usually need to have global knowledge on the state of all system resources.\nAn example of a dynamic resource allocation algorithm is the Cactus project [1] for the allocation of computational very expensive jobs.\nThe value of distributed solutions for the resource allocation problem has been recognised by research [10].\nInspired by the principles in stock markets, economic market models have been developed for trading resources for the regulation of supply and demand in the grid.\nUsers try to purchase cheap resources required to run the job while providers try to make as much profit as possible and operate the available resources at full capacity.\nA collection of different 
distributed resource allocation techniques based on market models is presented in Clearwater [10].\nBuyya et al. developed a resource allocation framework based on the regulation of supply and demand [4] for Nimrod-G [6] with the main focus on job deadlines and budget constraints.\nThe Agent based Resource Allocation Model (ARAM) for grids is designed to schedule computational expensive jobs using agents.\nDrawback of this model is the extensive use of message exchange between agents for periodic monitoring and information exchange within the hierarchical structure.\nSubtasks of a job migrate through the network until they find a resource that meets the price constraints.\nThe job's migration itinerary is determined by the resources in connecting them in different topologies [17].\nThe proposed mechanism in this paper eliminates the need of periodic information exchange about resource loads and does not need a connection topology between the resources.\nThere has been considerable work on decentralised resource allocation techniques using game theory published over recent years.\nIt is an ill-defined decision problem that assumes and models inductive reasoning.\nIn this repetitive decision game, an odd number of agents have to choose between two resources based on past success information trying to allocate itself at the resource with the minority.\nGalstyan et al. 
[14] studied a variation with more than two resources, changing resource capacities and information from neighbour agents.\nThey showed that agents can adapt effectively to changing capacities in this environment using a set of simple look-up tables (strategies) per agent.\nAnother distributed technique that is employed for solving the resource allocation problem is based on reinforcement learning [18].\nSimilar to our approach, a set of agents compete for a limited number of resources based only on prior individual experience.\nIn this paper, the system objective is to maximise system throughput while ensuring fairness to resources, measured as the average processing time per job unit.\nA resource allocation approach for sensor networks based on self-organisation techniques and reinforcement learning is presented in [16] with main focus on the optimisation of energy consumption of network nodes.\nWe [19] proposed a self-organising load balancing approach for a single server with focus on optimising the communication costs of mobile agents.\nA mobile agent will reject a migration to a remote agent server, if it expects the destination server to be already overloaded by other agents or server tasks.\nAgents make their decisions themselves based on forecasts of the server utilisation.\nIn this paper a solution for a multi-server environment is presented without consideration of communication or migration costs.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper a self-organising distributed resource allocation technique for multi-agent systems was presented.\nWe enable agents to select the execution platform for their tasks themselves before each execution at run-time.\nIn our approach the agents compete for an allocation at one of the\nFigure 5: Results of experiment 2 in a dynamic server environment averaged over 100 repetitions.\navailable shared resource.\nAgents sense their server environment and adopt their action to compete more efficient in the new created 
environment.\nThis process is adaptive and has a strong feedback as allocation decisions influence indirectly decisions of other agents.\nThe resource allocation is a purely emergent effect.\nOur mechanism demonstrates that resource allocation can be done by the effective competition of individual and autonomous agents.\nNeither do they need coordination or information from a higher authority nor is an additional direct communication between agents required.\nThis mechanism was inspired by inductive reasoning and bounded rationality principles which enables the agents' adaptation of their strategies to compete effectively in a dynamic environment.\nIn the case of a server becomes unavailable, the agents can adapt quickly to this new situation by exploring new resources or remain at the home server if an allocation is not possible.\nEspecially in dynamic and scalable environments such as grid systems, a robust and distributed mechanism for resource allocation is required.\nOur self-organising resource allocation approach was evaluated with a number of simulation experiments in a dynamic environment of agents and server resources.\nThe presented results for this new approach for strategic migration optimisation are very promising and justify further investigation in a real multi-agent system environment.\nIt is a distributed, scalable and easy-to-understand policy for the regulation of supply and demand of resources.\nAll control is implemented in the agents.\nA simple decision mechanism based on different beliefs of the agent creates an emergent behaviour that leads to effective resource allocation.\nThis approach can be easily extended or supported by resource balancing\/queuing mechanisms provided by resources.\nOur approach adapts to changes in the environment but it is not evolutionary.\nThere is no discovery of new strategies by the agents.\ninvestigated in the future.\nIn the near future we will investigate if an automatic adaptation of the decay rate of 
historical information our algorithm is possible and can improve the resource allocation performance.\nA large number of shared resources requires older historical information to avoid a too frequently resources exploration.\nIn contrast, a dynamic environment with varying capacities requires more up-to-date information to make more reliable predictions.\nWe are aware of the long learning phase in environments with a large number of shared resources known by each agent.\nIn the case that more resources are requested by agents than shared resources are provided by all servers, all agents will randomly explore all known servers.\nThis process of acquiring resource load information about all servers can take a long time in the case that no not enough shared resources for all tasks are provided.\nIn this situation, it is difficult for an agent to efficiently gather historical information about all remote servers.\nThis issue needs more investigation in the future.","lvl-2":"Towards Self-organising Agent-based Resource Allocation in a Multi-Server Environment\nABSTRACT\nDistributed applications require distributed techniques for efficient resource allocation.\nThese techniques need to take into account the heterogeneity and potential unreliability of resources and resource consumers in a distributed environments.\nIn this paper we propose a distributed algorithm that solves the resource allocation problem in distributed multiagent systems.\nOur solution is based on the self-organisation of agents, which does not require any facilitator or management layer.\nThe resource allocation in the system is a purely emergent effect.\nWe present results of the proposed resource allocation mechanism in the simulated static and dynamic multi-server environment.\n1.\nINTRODUCTION\nWith the increasing popularity of distributed computing technologies such as Grid [12] and Web services [20], the Internet is becoming a powerful computing platform where different software peers (e.g., 
agents) can use existing computing resources to perform tasks.\nIn this sense, each agent is a resource consumer that acquires a certain amount of resources for the execution of its tasks.\nIt is difficult for a central resource allocation mechanism to collect and manage the information about all shared resources and resource consumers to effectively perform the allocation of resources.\nHence, distributed solutions of the resource allocation problem are required.\nResearchers have recognised these requirements [10] and proposed techniques for distributed resource allocation.\nA promising kind of such distributed approaches are based on economic market models [4], inspired by principles of real stock markets.\nEven if those approaches are distributed, they usually require a facilitator for pricing, resource discovery and dispatching jobs to resources [5, 9].\nAnother mainly unsolved problem of those approaches is the fine-tuning of price and time, budget constraints to enable efficient resource allocation in large, dynamic systems [22].\nIn this paper we propose a distributed solution of the resource allocation problem based on self-organisation of the resource consumers in a system with limited resources.\nIn our approach, agents dynamically allocate tasks to servers that provide a limited amount of resources.\nIn our approach, agents select autonomously the execution platform for the task rather than ask a resource broker to do the allocation.\nAll control needed for our algorithm is distributed among the agents in the system.\nThey optimise the resource allocation process continuously over their lifetime to changes in the availability of shared resources by learning from past allocation decisions.\nThe only information available to all agents are resource load and allocation success information from past resource allocations.\nAdditional resource load information about servers is not disseminated.\nThe basic concept of our solution is inspired by inductive 
reasoning and bounded rationality introduced by W. Brian Arthur [2].\nThe proposed mechanism does not require a central controlling authority, resource management layer or introduce additional communication between agents to decide which task is allocated on which server.\nWe demonstrate that this mechanism performs well dynamic systems with a large number of tasks and can easily be adapted to various system sizes.\nIn addition, the overall system performance is not affected in case agents or servers fail or become unavailable.\nThe proposed approach provides an easy way to implement distributed resource allocation and takes into account multi-agent system tendencies toward autonomy, heterogeneity and unreliability of resources and agents.\nThis proposed technique can be easily supplemented by techniques for queuing or rejecting resource allocation requests of agents [11].\nSuch self-managing capabilities of software agents allow a reliable resource allocation even in an environment with unreliable resource providers.\nThis can be achieved by the mutual interactions between agents by applying techniques from complex system theory.\nSelforganisation of all agents leads to a self-organisation of the\n978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS\nsystem resources and is an emergent property of the system [21].\nThe remainder of the paper is structured as follows: The next section gives an overview of the related work already done in the area of load balancing, resource allocation or scheduling.\nSection 3 describes the model of a multi-agent environment that was used to conduct simulations for a performance evaluation.\nSections 4 and 5 describe the distributed resource allocation algorithm and presents various experimental results.\nA summary, conclusion and outlook to future work finish this paper.\n2.\nRELATED WORK\nResource allocation is an important problem in the area of computer science.\nOver the past years, solutions based on different assumptions and constraints 
have been proposed by different research groups [7, 3, 15, 10].\nGenerally speaking, resource allocation is a mechanism or policy for the efficient and effective management of the access to a limited resource or set of resources by its consumers.\nIn the simplest case, resource consumers ask a central broker or dispatcher for available resources where the resource consumer will be allocated.\nThe broker usually has full knowledge about all system resources.\nAll incoming requests are directed to the broker who is the solely decision maker.\nIn those approaches, the resource consumer cannot influence the allocation decision process.\nLoad balancing [3] is a special case of the resource allocation problem using a broker that tries to be fair to all resources by balancing the system load equally among all resource providers.\nThis mechanism works best in a homogeneous system.\nA simple distributed technique for resource management is capacity planning by refusing or queuing incoming agents to avoid resource overload [11].\nFrom the resource owner perspective, this technique is important to prevent overload at the resource but it is not sufficient for effective resource allocation.\nThis technique can only provide a good supplement for distributed resource allocation mechanisms.\nMost of today's techniques for resource allocation in grid computing toolkits like Globus [12] or Condor-G [13] coordinate the resource allocation with an auctioneer, arbitrator, dispatcher, scheduler or manager.\nThose coordinators usually need to have global knowledge on the state of all system resources.\nAn example of a dynamic resource allocation algorithm is the Cactus project [1] for the allocation of computational very expensive jobs.\nThe value of distributed solutions for the resource allocation problem has been recognised by research [10].\nInspired by the principles in stock markets, economic market models have been developed for trading resources for the regulation of supply and 
demand in the grid.\nThese approaches use different pricing strategies such as posted price models, different auction methods or a commodity market model.\nUsers try to purchase cheap resources required to run the job while providers try to make as much profit as possible and operate the available resources at full capacity.\nA collection of different distributed resource allocation techniques based on market models is presented in Clearwater [10].\nBuyya et al. developed a resource allocation framework based on the regulation of supply and demand [4] for Nimrod-G [6] with the main focus on job deadlines and budget constraints.\nThe Agent based Resource Allocation Model (ARAM) for grids is designed to schedule computational expensive jobs using agents.\nDrawback of this model is the extensive use of message exchange between agents for periodic monitoring and information exchange within the hierarchical structure.\nSubtasks of a job migrate through the network until they find a resource that meets the price constraints.\nThe job's migration itinerary is determined by the resources in connecting them in different topologies [17].\nThe proposed mechanism in this paper eliminates the need of periodic information exchange about resource loads and does not need a connection topology between the resources.\nThere has been considerable work on decentralised resource allocation techniques using game theory published over recent years.\nMost of them are formulated as repetitive games in an idealistic and simplified environment.\nFor example, Arthur [2] introduced the so called El Farol bar problem that does not allow a perfect, logical and rational solution.\nIt is an ill-defined decision problem that assumes and models inductive reasoning.\nIt is probably one of the most studied examples of complex adaptive systems derived from the human way of deciding ill-defined problems.\nA variation of the El Farol problem is the so called minority game [8].\nIn this repetitive 
decision game, an odd number of agents have to choose between two resources based on past success information trying to allocate itself at the resource with the minority.\nGalstyan et al. [14] studied a variation with more than two resources, changing resource capacities and information from neighbour agents.\nThey showed that agents can adapt effectively to changing capacities in this environment using a set of simple look-up tables (strategies) per agent.\nAnother distributed technique that is employed for solving the resource allocation problem is based on reinforcement learning [18].\nSimilar to our approach, a set of agents compete for a limited number of resources based only on prior individual experience.\nIn this paper, the system objective is to maximise system throughput while ensuring fairness to resources, measured as the average processing time per job unit.\nA resource allocation approach for sensor networks based on self-organisation techniques and reinforcement learning is presented in [16] with main focus on the optimisation of energy consumption of network nodes.\nWe [19] proposed a self-organising load balancing approach for a single server with focus on optimising the communication costs of mobile agents.\nA mobile agent will reject a migration to a remote agent server, if it expects the destination server to be already overloaded by other agents or server tasks.\nAgents make their decisions themselves based on forecasts of the server utilisation.\nIn this paper a solution for a multi-server environment is presented without consideration of communication or migration costs.\n3.\nMODEL DESCRIPTION\nWe model a distributed multi-agent system as a network of servers L = {l1,..., lm}, agents A = {a1,..., an} and tasks T = {T1,..., Tm}.\nEach agent has a number of tasks Ti that needs to be executed during its lifetime.\nA task Ti requires U (Ti, t) resources for its execution at time t independent from its execution server.\nResources for the 
execution of tasks are provided by each server li. The task's execution location is in general specified by the map L: T × t → L. An agent has to know about the existence of server resources in order to allocate tasks at those resources. We write LS(ai) to address the set of resources known by agent ai.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 75

Figure 1: An illustration of our multi-server model with exclusive and shared resources for the agent execution.

Resources in the system can be used by all agents for the execution of tasks. The amount of resources C(li, t) provided by each server can vary over time. The resource utilisation of a server li at time t is calculated using equation 1, by adding the resource consumption U(Tj, t) of each task Tj that is executed at the resource at time t:

U(li, t) = Σ {Tj : L(Tj, t) = li} U(Tj, t)    (1)

All resource units used in our model represent real metrics such as memory or processor cycles. In addition to the case that the total amount of system resources is sufficient to execute all tasks, we are also interested in the case that not enough system resources are provided to fulfil all allocation requests; that is, the overall shared resource capacity is lower than the amount of resources requested by agents. In this case, some agents must wait with their allocation requests until free resources are expected. The multi-agent system model used for our simulations is illustrated in Fig.
1.

4. SELF-ORGANISING RESOURCE ALLOCATION

The resource allocation algorithm described in this section is integrated in each agent. The only information required to make a resource allocation decision for a task is the server utilisation observed from completed task allocations at those servers. There is no additional dissemination of information about server resource utilisation or about free resources. Our solution demonstrates that agents can self-organise in a dynamic environment without active monitoring information, which would cause considerable network traffic overhead. Additionally, we do not have any central controlling authority. All behaviour that leads to the resource allocation is created by the effective competition of the agents for shared resources and is a purely emergent effect.
The agents in our multi-agent system compete for resources, or a set of resources, to execute tasks. The collective action of these agents changes the environment and, as time goes by, they have to adapt to these changes to compete more effectively in the newly created environment. Our approach is based on different agent beliefs, represented by predictors, and on different information about the environment. Agents prefer to allocate a task at a server with free resources. However, there is no way to be sure of the amount of free server resources in advance. All agents have the same preferences, and an agent will allocate a task on a server if it expects enough free resources for the task's execution. There is no communication between agents. Actions taken by agents influence the actions of other agents indirectly. The applied mechanism is inspired by inductive reasoning and bounded rationality principles [2]. It is derived from the human way of deciding ill-defined problems. Humans tend to keep in mind many hypotheses and act on the most plausible one. Therefore, each agent keeps track of the performance of a private collection of its predictors and selects the one
that is currently most promising for decision making.

4.1 Resource Allocation Algorithm

This section describes the decision mechanism for our self-organising resource allocation. All necessary control is integrated in the agents themselves. There is no higher controlling authority, management layer for decision support, or information distribution. All agents have a set of predictors for each resource to forecast the future resource utilisation of these servers for potential task allocations. To do so, agents use historical information from past task allocations at those resources. Based on the forecasted resource utilisation, the agent makes its resource allocation decision. After the task has finished its execution and returned the results back to the agent, the predictor performances are evaluated and the history information is updated. Algorithm 1 shows the resource allocation algorithm for each agent. The agent first predicts the next step's resource load for each server with historical information (lines 3-7). If the predicted resource load plus the task's resource consumption is below the last known server capacity, this server is added to the list of candidates for the allocation. The agent then evaluates whether any free shared resources for the task allocation are expected. In the case that no free resources are expected (line 9), the agent will explore resources by allocating the task at a server selected randomly from all servers whose load cannot yet be predicted, in order to gather resource load information. This is the standard case at the beginning of the agent life-cycle, as no information about the environment is available yet. The resource load prediction itself uses a set of r predictors P(a, l) := {pi | 1 ≤ i ≤ r}

[...]

For m > 0, a commitment profile x ∈ Xπ1 × ··· × Xπn is a τ-extortion of order m in G given π if x is a τ-extortion of order m − 1 with

φ(yπ1, ..., yπm, xπm+1, ..., xπn) ⪯πm φ(xπ1, ..., xπm, xπm+1, ..., xπn)

for all commitment profiles y in X with (yπ1, ..., yπm, xπm+1, ..., xπn) a τ-extortion of order m − 1. A τ-extortion is a commitment profile that is a τ-extortion of order m for all m with 0 ≤ m ≤ n. Furthermore, we say that a (mixed) strategy profile σ is τ-extortionable if there is some τ-extortion x with φ(x) = σ. Thus, an extortion of order 1 is a commitment profile in which player π1 makes a commitment that maximizes his payoff, given fixed commitments of the other players. An extortion of order m is an extortion of order m − 1 that maximizes player πm's payoff, given fixed commitments of the players πm+1 through πn. For the type of conditional commitments, we have that any conditional commitment profile f is an extortion of order 0, and an extortion of an order m greater than 0 is any extortion of order m − 1 for which

(gπ1, ..., gπm, fπm+1, ..., fπn) ⪯πm (fπ1, ..., fπm, fπm+1, ..., fπn),

for each conditional commitment profile g such that gπ1, ..., gπm, fπm+1, ...
, fπn is an extortion of order m − 1.

To illustrate the concept of an extortion for conditional commitments, consider the three-player game in Figure 5 and assume (Row, Col, Mat) to be the order in which the players commit. Row chooses the top or bottom row, Col the left or right column, and Mat the left or right matrix:

    Mat: left matrix               Mat: right matrix
           l          r                   l          r
    t  (1, 4, 0)  (1, 4, 0)        t  (4, 1, 1)  (4, 0, 0)
    b  (3, 3, 2)  (0, 0, 2)        b  (3, 3, 2)  (0, 0, 2)

Figure 5: A three-player strategic game

Figure 6: A conditional extortion f of order 1 (left) and an extortion g of order 3 (right).

Figure 6 depicts the possible conditional commitments of the players in extensive form, with the left branch corresponding to Row's strategy of playing the top row. Let f and g be the conditional commitment strategies indicated by the thick lines in the left and right figures, respectively. Both f and g are extortions of order 1. In both f and g, Row guarantees himself the higher payoff given the conditional commitments of Mat and Col. Only g, however, is also an extortion of order 2. To appreciate that f is not, consider the conditional commitment profile h in which Row chooses top and Col chooses right no matter how Row decides, i.e., h is such that hRow = t and hCol(t) = hCol(b) = r.
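These order-by-order claims can also be checked mechanically. The following sketch is our own illustration, not part of the paper: it hard-codes our transcription of the Figure 5 payoffs, enumerates all 2 · 4 · 16 = 128 conditional commitment profiles for the commitment order (Row, Col, Mat), and filters them order by order, directly following the definition of a τ-extortion given above.

```python
from itertools import product

# Our transcription of the Figure 5 game: Row picks a row, Col a column,
# Mat a matrix; payoff[(row, col, mat)] = (u_Row, u_Col, u_Mat).
payoff = {
    ('t', 'l', 'L'): (1, 4, 0), ('t', 'r', 'L'): (1, 4, 0),
    ('b', 'l', 'L'): (3, 3, 2), ('b', 'r', 'L'): (0, 0, 2),
    ('t', 'l', 'R'): (4, 1, 1), ('t', 'r', 'R'): (4, 0, 0),
    ('b', 'l', 'R'): (3, 3, 2), ('b', 'r', 'R'): (0, 0, 2),
}
ROW, COL, MAT = ('t', 'b'), ('l', 'r'), ('L', 'R')

# Conditional commitments: Row commits to a row, Col to a map row -> column,
# and Mat to a map (row, column) -> matrix.
row_commits = ROW
col_commits = [dict(zip(ROW, v)) for v in product(COL, repeat=2)]
mat_commits = [dict(zip(product(ROW, COL), v)) for v in product(MAT, repeat=4)]

def outcome(f):
    """Strategy profile induced by a conditional commitment profile f."""
    r, c_map, m_map = f
    c = c_map[r]
    return (r, c, m_map[(r, c)])

def extortions(order):
    """All conditional extortions of the given order for the order (Row, Col, Mat)."""
    exts = list(product(row_commits, col_commits, mat_commits))  # order 0: everything
    for m in range(1, order + 1):
        prev = exts
        # Keep f if no order-(m-1) extortion that deviates only in the first m
        # commitments gives player m a strictly higher payoff than f does.
        exts = [f for f in prev
                if all(payoff[outcome(g)][m - 1] <= payoff[outcome(f)][m - 1]
                       for g in prev if g[m:] == f[m:])]
    return exts

full = extortions(3)
assert full                                          # extortions exist
assert all(payoff[outcome(f)][2] < 2 for f in full)  # no extortion gives Mat two
```

Under this transcription, every full extortion induces the strategy profile in which Row plays top, Col plays left and Mat plays the right matrix, with payoff (4, 1, 1) — consistent with the informal argument later in this section that Mat cannot expect a payoff higher than one.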
Then, (hRow, hCol, fMat) is also an extortion of order 1, but it yields Col a higher payoff than f does. We leave it to the reader to check that, by contrast, g is an extortion of order 3, and therewith an extortion per se.

4.1 Promises and Threats

One way of understanding conditional extortions is by conceiving of them as combinations of precisely one promise and a number of threats. From the strategy profiles that can still be realized given the conditional commitments of the players that have committed before him, a player tries to enforce the strategy profile that yields him as much payoff as possible. Hence, he chooses his commitment so as to render deviations from the path that leads to this strategy profile as unattractive as possible ('threats') and the desired strategy profile as appealing as possible ('promises') for the relevant players. If (sπ1, ..., sπn) is such a desirable strategy profile for player πi and fπi his conditional commitment, the value of fπi(sπ1, ..., sπi−1) could be taken as his promise, whereas the values of fπi for all other (tπ1, ...
, tπi−1) could be seen as constituting his threats. The higher the payoff to the other players in a strategy profile a player aims for, the easier it is for him to formulate an effective threat. However, making appropriate threats in this respect does not merely come down to minimizing the payoffs to the players to commit later wherever possible. A player should also take into account the commitments, promises and threats the following players can make on the basis of his and his predecessors' commitments. This is what makes extortionate reasoning sometimes so complicated, especially in situations with more than two players. For example, in the game of Figure 5, there is no conditional extortion that ensures Mat a payoff of two. To appreciate this, consider the possible commitments Mat can make in case Row plays top and Col plays left (tl) and in case Row plays top and Col plays right (tr). If Mat commits to the right matrix in both cases, he virtually promises Row a payoff of four, leaving himself with a payoff of at most one. Otherwise, he puts Col in a position to deter Row from choosing bottom by threatening to choose the right column if Row does so. Again, Mat cannot expect a payoff higher than one. In short, no matter how Mat conditionally commits, he will either enable Col to threaten Row into playing top or fail to lure Row into playing the bottom row.

4.2 Benign Backward Induction

The solutions that extortions provide can also be obtained by modeling the situation as an extensive form game and applying a backward-inductive type of argument. The actions of the players in any such extensive form game are then given by their conditional commitments, which they choose sequentially. For higher types of commitment, such as conditional commitments, such 'meta-games', however, grow exponentially in the number of strategies available to the players and are generally much larger than the original game. The correspondence between the
backward induction solutions in the meta-game and the extortions of the original strategic game rather signifies that the concept of an extortion is defined properly.
First we define the concept of benign backward induction in general, relative to a game in strategic form together with an ordering of the players. Intuitively, it reflects the idea that each player chooses, for each possible combination of actions of his predecessors, the action that yields him the highest payoff, given that his successors do similarly. The concept is called benign backward induction because it implies that a player, when indifferent between a number of actions, chooses the one that benefits his predecessors most. For an ordering π of the players, we let πR denote its reversal (πn, ..., π1).

Definition 4.2. (Benign backward induction) Let G be a strategic game and π an ordering of its players. A benign backward induction of order 0 is any conditional commitment profile f subject to π. For m > 0, a conditional commitment profile f is a benign backward induction (solution) of order m if f is a benign backward induction of order m − 1 and

(gπRn, ..., gπRm+1, gπRm, ..., gπR1) ⪯πRm (gπRn, ..., gπRm+1, fπRm, ..., fπR1)

for any benign backward induction (gπRn, ..., gπRm+1, gπRm, ..., gπR1) of order m − 1. A conditional commitment profile f is a benign backward induction if it is a benign backward induction of order k for each k with 0 ≤ k ≤ n.

For games with a finite action set for each player, the following result follows straightforwardly from Kuhn's Theorem (cf. [6, p.
99]). In particular, this result holds if the players' actions are commitments of a finite type.

Fact 4.3. For each finite game and each ordering of the players, benign backward inductions exist.

For each game, each ordering of its players and each commitment type, we can define another game G∗ with the actions of each player i given by his τ-commitments Xi in G. The utility of a strategy profile (xπ1, ..., xπn) for a player i in G∗ can then be equated to his utility of the strategy profile φ(xπn, ..., xπ1) in G. We now find that the extortions of G can be retrieved as the paths of the benign backward induction solutions of the game G∗ for the ordering πR of the players, provided that the commitment type is finite.

Theorem 4.4. Let G = (N, (Ai)i∈N, (ui)i∈N) be a game and π an ordering of its players with which the finite commitment type τ associates the tuple ⟨Xπ1, ..., Xπn, φ⟩. Let further G∗ = ⟨N, (Xπi)i∈N, (u∗πi)i∈N⟩, where u∗πi(xπn, ..., xπ1) = uπi(φ(xπ1, ..., xπn)), for each τ-commitment profile (xπ1, ..., xπn). Then, a τ-commitment profile (xπ1, ..., xπn) is a τ-extortion in G given π if and only if there is some benign backward induction f in G∗ given πR with f̄ = (xπn, ..., xπ1).

Proof. Assume that f is a benign backward induction in G∗ relative to πR. Then, f̄ = (xπn, ..., xπ1), for some commitment profile (xπ1, ..., xπn) of G relative to π. We show by induction that (xπ1, ..., xπn) is an extortion of order m, for all m with 0 ≤ m ≤ n. For m = 0, the proof is trivial. For the induction step, consider an arbitrary commitment profile (yπ1, ...
, yπn) such that (yπ1, ..., yπm, xπm+1, ..., xπn) is an extortion of order m − 1. In virtue of the induction hypothesis, there is a benign backward induction g of order m − 1 in G∗ with ḡ = (xπn, ..., xπm+1, yπm, ..., yπ1). As f is also a benign backward induction of order m, (gπn, ..., gπ1) ⪯∗πm (gπn, ..., gπm+1, fπm, ..., fπ1). Hence, (xπn, ..., xπm+1, yπm, ..., yπ1) ⪯∗πm (xπn, ..., xπ1). By definition of u∗πm, then also φ(yπ1, ..., yπm, xπm+1, ..., xπn) ⪯πm φ(xπ1, ..., xπn). We may conclude that x is an extortion of order m.
For the only-if direction, assume that x is an extortion of G given π. We prove that there is a benign backward induction f(∗) in G∗ for πR with f̄(∗) = x. In virtue of Fact 4.3, there is a benign backward induction h in G∗ given πR. Now define f(∗) in such a way that f(∗)πi(zπn, ..., zπi+1) = xπi if (zπn, ..., zπi+1) = (xπn, ..., xπi+1), and f(∗)πi(zπn, ..., zπi+1) = hπi(zπn, ..., zπi+1) otherwise. We prove by induction on m that f(∗) is a benign backward induction of order m, for each m with 0 ≤ m ≤ n. The basis is trivial. So assume that f(∗) is a benign backward induction of order m − 1 in G∗ given πR and consider an arbitrary benign backward induction g of order m − 1 in G∗ given πR. Let ḡ be given by (yπn, ..., yπ1). Either (yπn, ..., yπm+1) = (xπn, ..., xπm+1), or this is not the case. If the latter, it can readily be appreciated that (gπn, ..., gπm+1, f(∗)πm, ..., f(∗)π1) = (gπn, ...
, gπm+1, hπm, ..., hπ1). Having assumed that h is a benign backward induction, subsequently (gπn, ..., gπ1) ⪯∗πm (gπn, ..., gπm+1, hπm, ..., hπ1), and so (gπn, ..., gπ1) ⪯∗πm (gπn, ..., gπm+1, f(∗)πm, ..., f(∗)π1). Hence, f(∗) is a benign backward induction of order m. In the former case the reasoning is slightly different. Then, (gπn, ..., gπ1) = (xπn, ..., xπm+1, yπm, ..., yπ1). It follows that (gπn, ..., gπm+1, f(∗)πm, ..., f(∗)π1) = (f(∗)πn, ..., f(∗)π1) = (xπn, ..., xπ1). In virtue of the induction hypothesis, (yπ1, ..., yπn) is an extortion of order m − 1 in G given π. As the reasoning takes place under the assumption that x is an extortion in G given π, we also have φ(yπ1, ..., yπm, xπm+1, ..., xπn) ⪯πm φ(xπ1, ..., xπn). Then, (xπn, ..., xπm+1, yπm, ..., yπ1) ⪯∗πm (xπn, ..., xπ1), by definition of u∗. We may conclude that (gπn, ..., gπ1) ⪯∗πm (gπn, ..., gπm+1, f(∗)πm, ...
, f(∗)π1), signifying that f(∗) is a benign backward induction of order m.
As an immediate consequence of Theorem 4.4 and Fact 4.3 we also have the following result.

Corollary 4.5. Let τ be a finite commitment type. Then, τ-extortions exist for each strategic game and for each ordering of the players.

4.3 Commitment Order

In the case of unconditional commitments, it is not always favorable to be the first to commit. This is well illustrated by the familiar game rock-paper-scissors. If, on the other hand, the players are in a position to make conditional commitments in this particular game, moving first is an advantage. More generally, we find that it can never harm to move first in a two-player game with conditional commitments.

Theorem 4.6. Let G be a two-player strategic game involving player i. Further let f be an extortion of G in which i commits first, and g an extortion in which i commits second. Then, ḡ ⪯i f̄.

Proof sketch. Let f be a conditional extortion in G given π. It suffices to show that there is some conditional extortion h of order 1 in G given π with h̄ = f̄. Assume for a contradiction that there is no such extortion of order 1 in G given π. Then there must be some b∗ ∈ Aj such that f̄ ≺j (b∗, a), for all a ∈ Ai. (Otherwise we could define (gj, gi) such that gj = fj(fi), gi(gj) = fi, and for any other b ∈ Aj, gi(b) = a∗, where a∗ is an action in Ai such that (b, a∗) ⪯j f̄. Then g would be an extortion of order 1 in G given π with ḡ = f̄.) Now consider a conditional commitment profile h for G and π such that hj(a) = b∗, for all a ∈ Ai. Let further hi be such that (a, hj) ⪯i (hi, hj), for all a ∈ Ai. Then, h is an extortion of order 1 in G given π. Observe that (h̄i, h̄j) = (hi, b∗). Hence, f̄ ≺j h̄, contradicting the assumption that f is an extortion in G given π.

Theorem 4.6 does not generalize to games
with more than two players. Consider the three-player game in Figure 7, with extensive forms as in Figure 8. Here, Row and Mat have identical preferences. The latter's extortionate powers relative to Col, however, are very weak if he is to commit first: any conditional commitment he makes puts Col in a situation in which she can enforce a payoff of two, leaving Mat and Row in the cold with a payoff of one. However, if Mat is last to commit and Row first, then the latter can exploit his strategic powers, threaten Col so that she plays left, and guarantee both himself and Mat a payoff of two.

4.4 Pareto Efficiency

Another issue concerns the Pareto efficiency of the strategy profiles extortionable through conditional commitments. We say that a strategy profile s (weakly) Pareto dominates another strategy profile t if t ⪯i s for all players i and t ≺i s for some. Moreover, a strategy profile s is (weakly) Pareto efficient if it is not (weakly) Pareto dominated by any other strategy profile. We extend this terminology to conditional commitment profiles by saying that a conditional commitment profile f is (weakly) Pareto efficient or (weakly) Pareto dominates another conditional commitment profile if f̄ is or does so. We now have the following result.

    Mat: left matrix               Mat: right matrix
           l          r                   l          r
    t  (0, 1, 0)  (0, 0, 0)        t  (2, 1, 2)  (0, 0, 0)
    b  (0, 0, 0)  (1, 2, 1)        b  (0, 0, 0)  (1, 2, 1)

Figure 7: A three-person game.

Figure 8: It is not always better to commit early than late, even in the case of conditional or inductive commitments.

Theorem 4.7. In each game, Pareto efficient conditional extortions exist. Moreover, any strategy profile that Pareto dominates an extortion is also extortionable through a conditional commitment.

Proof sketch. Since, in virtue of Corollary 4.5, extortions generally exist in each game, it suffices to recognize that the second
claim holds. Let s be the strategy profile (sπ1, ..., sπn). Let further the conditional extortion f be Pareto dominated by s. An extortion g with ḡ = s can then be constructed by adopting all threats of f while promising s. I.e., for all players πi we have gπi(sπ1, ..., sπi−1) = sπi and gπi(tπ1, ..., tπi−1) = fπi(tπ1, ..., tπi−1), for all other tπ1, ..., tπi−1. As s Pareto dominates f̄, the threats of f remain effective as threats of g, given that s is being promised.
This result hints at a difference between (benign) backward induction and extortions. In general, the solutions of benign backward inductions can be Pareto dominated by outcomes that are not benign backward induction solutions. Therefore, although every extortion can be seen as a benign backward induction in a larger game, it is not the case that all formal properties of extortions are shared by benign backward inductions in general.

5. OTHER COMMITMENT TYPES

Conditional and unconditional commitments are only two possible commitment types. The definition also provides for types of commitment that allow for committing on commitments, thus achieving a finer adjustment of promises and threats. Similarly, it subsumes commitments on and to mixed strategies. In this section we comment on some of these possibilities.

5.1 Inductive Commitments

Apart from making commitments conditional on the actions of the players to commit later, one could also commit on the commitments of the following players. Informally, such commitments would have the form of "if you only dare to commit in such and such a way, then I do such and such; otherwise, I promise to act so and so." For a strategic game G and an ordering π of the players, we define the inductive commitments of the players inductively. The inductive commitments available to π1 coincide with the actions that are available to him. An inductive commitment for
player πi+1 is a function mapping each profile of inductive commitments of the players π1 through πi to one of his basic actions. Formally, we define the type of inductive commitments Fπ1, ..., Fπn such that, for each player πi in a game G and given π: Fπ1 =df Aπ1 and Fπi+1 =df Aπi+1^(Fπ1 × ··· × Fπi), i.e., the set of functions from Fπ1 × ··· × Fπi to Aπi+1. Let f̄πi = fπi(fπ1, ..., fπi−1), for each player πi, and have f̄ denote the pure strategy profile (f̄π1, ..., f̄πn).
Inductive commitments have a greater extortionate power than conditional commitments. To appreciate this, consider once more the game in Figure 5. We found that the strategy profile in which Row chooses bottom and Col and Mat both choose left is not extortionable through conditional commitments. By means of inductive commitments, however, this is possible. Let f be the inductive commitment profile such that fRow is Row choosing the bottom row (b), fCol is the column player choosing the left column (l) no matter how Row decides, and fMat is defined such that fMat(fRow, fCol) = r if fRow = t and fCol(b) = r, and fMat(fRow, fCol) = l otherwise. Instead of showing formally that f is an inductive extortion of the strategy profile (b, l, l), we point out informally how this can be done. We argued that, in order to exact a payoff of two by means of a conditional extortion, Mat would have to lure Row into choosing the bottom row without at the same time putting Col in a position to successfully threaten Row not to choose bottom. This, we found, is an impossibility if the players can only make conditional commitments. By contrast, if Mat can commit on commitments, he can undermine Col's efforts to threaten Row by playing the right matrix if Col were to do so. Yet, Mat can still force Row to choose the bottom row in case Col desists from making this threat. As can readily be observed, in any game, the
inductive commitments of the first two players to commit coincide with their conditional commitments. Hence, as an immediate consequence of Theorem 4.6, it can never harm to be the first to commit to an inductive commitment in the two-player case. Similarly, we find that the game depicted in Figure 7 also serves as an example showing that, in case there are more than two players, it is not always better to commit to an inductive commitment early. In this example, the strategic position of Mat is so weak if he is to commit first that even the possibility to commit inductively does not strengthen it, whereas, in a similar fashion as with conditional commitments, Row can enforce a payoff of two for both himself and Mat if he is the first to commit.

5.2 Mixed Commitment Types

So far we have merely considered commitments to and on pure strategies. A natural extension would be to also consider commitments to and on mixed strategies. We distinguish between conditional, unconditional as well as inductive mixed commitments. We find that they are generally quite incomparable with their pure counterparts: in some situations a player can achieve more using a mixed commitment, in others using a pure commitment type. A complicating factor with mixed commitment types is that they can result in a mixed strategy profile being played. This means that the distinction between promises and threats, as delineated in Section 4.1, becomes blurred for mixed commitment types. The type of mixed unconditional commitments associates with each game G and ordering π of its players the tuple Σπ1, ...
, Σπn, id. The two-player case has been studied extensively (e.g., [2, 16]). As a matter of fact, von Neumann's famous minimax theorem shows that for two-player zero-sum games it does not matter which player commits first. If the second player to commit plays a mixed strategy that ensures his security level, the first player to commit can do no better than to do so as well [14]. In the game of Figure 5 we found that, with conditional commitments, Mat is unable to enforce an outcome that awards him a payoff of two. Recall that the reason for this failure is that any effort to deter Row from choosing the top row is flawed, as it would put Col in an excellent position to threaten Row not to choose the bottom row. If Mat has inductive commitments at his disposal, however, this is a possibility. We now find that, in case the players can dispose of unconditional mixed strategies, Mat is in a much similar position. He could randomize uniformly between the left and the right matrix. Then, Row's expected utility is 2½ if he plays the top row, no matter how Col randomizes. The expected payoff of Col does not exceed 2½ either, in case Row chooses top. By committing purely to the left column, Col entices Row to play bottom, as Row's expected utility then amounts to 3. This ensures an expected utility of three for Col as well. However, a player is not always better off with unconditional mixed commitments than with pure conditional commitments. For an example, consider the game in Figure 2. Using pure conditional commitments, a player can there ensure a payoff of three, whereas with unconditional mixed commitments 2½ would be the most he could achieve. Neither is it in general advantageous to commit first to a mixed strategy in a three-player game. To appreciate this, consider once more the game in Figure 7. Again, committing to a mixed strategy will not achieve much for Mat if he is to move first, and as before the other players have no reason to
commit to anything other than a pure strategy. This holds for all players if Row commits first, Col second and Mat last, be it that in this case Mat obtains the best payoff he can get.

Analogous to conditional and inductive commitments, one can also define the types of mixed conditional and mixed inductive commitments. With the former, a player can condition his mixed strategies on the mixed strategies of the players to commit after him. These tend to be very large objects and, knowing little about them yet, we shelve their formal analysis for future research. Conceptually, it might not be immediately clear how such mixed conditional commitments can be made with credibility. For one, when one's commitments are conditional on a particular mixed strategy being played, how can it be recognized that it was in fact this mixed strategy that was played rather than another one? If this proves to be impossible, how can one know how one's conditional commitments are to be effectuated? A possible answer would be that all depends on the circumstances in which the commitments were made. For example, if the different agents can submit their mixed conditional commitments to an independent party, the latter can execute the randomizations and determine the unique mixed strategy profile that their commitments induce.

6. SUMMARY AND CONCLUSION

In some situations agents can strengthen their strategic position by committing themselves to a particular course of action. There are various types of commitment, e.g., pure, mixed and conditional. Which type of commitment an agent is in a position to make essentially depends on the situation under consideration. If the agents commit in a particular order, there is a tactic common to making commitments of any type, which we have formalized by means of the concept of an extortion. This generic concept of extortion can be analyzed in abstracto. Moreover, on its basis the various commitment types can be compared formally and
systematically. We have seen that the type of commitment an agent can make has a profound impact on what an agent can achieve in a game-like situation. In some situations a player is much helped if he is in a position to commit conditionally, whereas in others mixed commitments would be more profitable. This raises the question as to the characteristic formal features of the situations in which it is advantageous for a player to be able to make commitments of a particular type. Another issue which we leave for future research is the computational complexity of finding an extortion for the different commitment types.

7. REFERENCES

[1] A. K. Chopra and M. Singh. Contextualizing commitment protocols. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 1345-1352. ACM Press, 2006.
[2] V. Conitzer and T. Sandholm. Computing the optimal strategy to commit to. In Proceedings of the 7th ACM Conference on Electronic Commerce (ACM-EC), pages 82-90. ACM Press, 2006.
[3] J. C. Harsanyi. A simplified bargaining model for the n-person cooperative game. International Economic Review, 4(2):194-220, 1963.
[4] R. D. Luce and H. Raiffa. Games and Decisions: Introduction and Critical Survey. Wiley, 1957.
[5] J. Nash. Two-person cooperative games. Econometrica, 21:128-140, 1953.
[6] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.
[7] D. Samet. How to commit to cooperation, 2005. Invited talk at the 4th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS).
[8] T. Sandholm and V. Lesser. Leveled-commitment contracting: A backtracking instrument for multiagent systems. AI Magazine, 23(3):89-100, 2002.
[9] T. C. Schelling. The Strategy of Conflict. Harvard University Press, 1960.
[10] R.
Selten. Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit. Zeitschrift für die gesamte Staatswissenschaft, 121:301-324, 1965.
[11] M. P. Singh. An ontology for commitments in multiagent systems: Toward a unification of normative concepts. Artificial Intelligence and Law, 7(1):97-113, 1999.
[12] M. Tennenholtz. Program equilibrium. Games and Economic Behavior, 49:363-373, 2004.
[13] E. van Damme and S. Hurkens. Commitment robust equilibria and endogenous timing. Games and Economic Behavior, 15:290-311, 1996.
[14] J. von Neumann and O. Morgenstern. The Theory of Games and Economic Behavior. Princeton University Press, 1944.
[15] H. von Stackelberg. Marktform und Gleichgewicht. Julius Springer Verlag, 1934.
[16] B. von Stengel and S. Zamir. Leadership with commitment to mixed strategies. CDAM Research Report LSE-CDAM-2004-01, London School of Economics, 2003.

The Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 07), page 115.
the commitments of the players that committed earlier.\nOn this basis, we investigate for different commitment types whether it is advantageous to commit earlier rather than later, and how the outcomes obtained through extortions relate to backward induction and Pareto efficiency.\n1.\nINTRODUCTION\nOn one view, the least one may expect of game theory is that it provides an answer to the question which actions maximize an agent's expected utility in situations of interactive decision making.\n* This material is based upon work supported by the Deutsche Forschungsgemeinschaft under grant BR 2312\/3 -1.\nA slightly divergent view is expounded by Schelling when he states that \"strategy [...] is not concerned with the efficient application of force but with the exploitation ofpotential force\" [9, page 5].\nFrom this perspective, the formal model of a game in strategic form only outlines the strategic features of an interactive situation.\nApart from merely choosing and performing an action from a set of actions, there may also be other courses open to an agent.\nE.g., the strategic lie of the land may be such that a promise, a threat, or a combination of both would be more conductive to his ends.\nThe potency of a promise, however, essentially depends on the extent the promisee can be convinced of the promiser's resolve to see to its fulfillment.\nLikewise, a threat only succeeds in deterring an agent if the latter can be made to believe that the threatener is bound to execute the threat, should it be ignored.\nIn this sense, promises and threats essentially involve a commitment on the part of the one who makes them, thus purposely restricting his freedom of choice.\nPromises and threats epitomize one of the fundamental and at first sight perhaps most surprising phenomena in game theory: it may occur that a player can improve his strategic position by limiting his own freedom of action.\nBy commitments we will understand such limitations of one's action 
space.\nAction itself could be seen as the ultimate commitment.\nPerforming a particular action means doing so to the exclusion of all other actions.\nCommitments come in different forms and it may depend on the circumstances which ones can and which ones cannot credibly be made.\nBesides simply committing to the performance of an action, an agent might make his commitment conditional on the actions of other agents, as, e.g., the kidnapper does, when he promises to set free a hostage on receiving a ransom, while threatening to cut off another toe, otherwise.\nSome situations even allow for commitments on commitments or for commitments to randomized actions.\nBy focusing on the selection of actions rather than on commitments, it might seem that the conception of game theory as mere interactive decision theory is too narrow.\nIn this respect, Schelling's view might seem to evince a more comprehensive understanding of what game theory tries to accomplish.\nOne might object, that commitments could be seen as the actions of a larger game.\nIn reply to this criticism Schelling remarks: While it is instructive and intellectually satisfying to see how such tactics as threats, commitments, and promises can be absorbed in an enlarged, abstract \"supergame\" (game in \"normal form\"), it should be emphasized that we cannot learn anything about those tactics by studying games that are already in normal form.\n[...] What we want is a theory that systematizes the study of the various universal ingredients that make up the move-structure of games; too abstract a model will miss them.\n[9, pp. 
156-7]\nOur concern is with these commitment tactics, be it that our analysis is confined to situations in which the players can commit in a given order and where we assume the commitments the players can make are given.\nDespite Schelling's warning for too abstract a framework, our approach will be based on the formal notion of an extortion, which we will propose in Section 4 as a uniform tactic for a comprehensive class of situations in which commitments can be made sequentially.\nOn this basis we tackle such issues as the usefulness of certain types of commitment in different situations (strategic games) or whether it is better to commit early rather than late.\nWe also provide a framework for the assessment of more general game theoretic matters like the relationship of extortions to backward induction or Pareto efficiency.\nInsight into these matters has proved itself invaluable for a proper understanding of diplomatic policy during the Cold War.\nNowadays, we believe, these issues are equally significant for applications and developments in such fields as multiagent systems, distributed computing and electronic markets.\nFor example, commitments have been argued to be of importance for interacting software agents as well as for mechanism design.\nIn the former setting, the inability to re-program a software agent on the fly can be seen as a commitment to its specification and thus exploited to strengthen its strategic position in a multiagent setting.\nA mechanism, on the other hand, could be seen as a set of commitments that steers the players' behavior in a certain desired way (see, e.g., [2]).\nOur analysis is conceptually similar to that of Stackelberg or leadership games [15], which have been extensively studied in the economic literature (cf., [16]).\nThese games analyze situations in which a leader commits to a pure or mixed strategy, and a number of followers, who then act simultaneously.\nOur approach, however, differs in that it is assumed that the 
players all move in a particular order--first, second, third and so on--and that it is specifically aimed at incorporating a wide range of possible commitments, in particular conditional commitments.\nAfter briefly discussing related work in Section 2, we present the formal game theoretic framework, in which we define the notions of a commitment type as well as conditional and unconditional commitments (Section 3).\nIn Section 4 we propose the generic concept of an extortion, which for each commitment type captures the idea of an optimal commitment profile.\nWe point out an equivalence between extortions and backward induction solutions, and investigate whether it is advantageous to commit earlier rather than later and how the outcomes obtained through extortions relate to Pareto efficiency.\nSection 5 briefly reviews some other commitment types, such as inductive, mixed and mixed conditional commitments.\nThe paper concludes with an overview of the results and an outlook for future research in Section 6.\n2.\nRELATED WORK\nCommitment is a central concept in game theory.\nThe possibility to make commitments distinguishes cooperative from noncooperative game theory [4, 6].\nLeadership games, as mentioned in the introduction, analyze commitments to pure or mixed strategies in what is essentially a two-player setting [15, 16].\nInformally, Schelling [9] has emphasized the importance of promises, threats and the like for a proper understanding of social interaction.\nOn a more formal level, threats have also figured in bargaining theory.\nNash's threat game [5] and Harsanyi's rational threats [3] are two important early examples.\nAlso, commitments have played a significant role in the theory of equilibrium selection (see, e.g., [13].\nOver the last few years, game theory has become almost indispensable as a research tool for computer science and (multi) agent research.\nCommitments have by no means gone unnoticed (see,\nFigure 1: Committing to a dominated strategy can 
be advantageous.\ne.g., [1, 11]).\nRecently, also the strategic aspects of commitments have attracted the attention of computer scientists.\nThus, Conitzer and Sandholm [2] have studied the computational complexity of computing the optimal strategy to commit to in normal form and Bayesian games.\nSandholm and Lesser [8] employ levelled commitments for the design of multiagent systems in which contractual agreements are not fully binding.\nAnother connection between commitments and computer science has been pointed out by Samet [7] and Tennenholtz [12].\nTheir point of departure is the observation that programs can be used to formulate commitments that are conditional on the programs of other systems.\nOur approach is similar to the Stackleberg setting in that we assume an order in which the players commit.\nWe, however, consider a number of different commitment types, among which conditional commitments, and propose a generic solution concept.\n3.\nCOMMITMENTS\n3.1 Strategic Games\n3.2 Conditional Commitments\n110 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.3 Commitment Types\n4.\nEXTORTIONS\n4.1 Promises and Threats\n4.2 Benign Backward Induction\n112 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.3 Commitment Order\n4.4 Pareto Efficiency\n5.\nOTHER COMMITMENT TYPES\n5.1 Inductive Commitments\n5.2 Mixed Commitments Types\n114 The Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n6.\nSUMMARY AND CONCLUSION\nIn some situations agents can strengthen their strategic position by committing themselves to a particular course of action.\nThere are various types of commitment, e.g., pure, mixed and conditional.\nWhich type of commitment an agent is in a position in to make essentially depends on the situation under consideration.\nIf the agents commit in a particular order, there is a tactic common to making commitments of any type, which we have formalized by means the concept of an extortion.\nThis generic concept of extortion can be analyzed in abstracto.\nMoreover, on its basis the various commitment types can be compared formally and systematically.\nWe have seen that the type of commitment an agent can make has a profound impact on what an agent can achieve in a gamelike situation.\nIn some situations a player is much helped if he is in a position to commit conditionally, whereas in others mixed commitments would be more profitable.\nThis raises the question as to the characteristic formal features of the situations in which it is advantageous for a player to be able to make commitments of a particular type.\nAnother issue which we leave for future research is the computational complexity of finding an extortion for the different commitment types.","lvl-4":"Commitment and Extortion *\nABSTRACT\nMaking commitments, e.g., through promises and threats, enables a player to exploit the strengths of his own strategic position as well as the weaknesses of that of his opponents.\nWhich commitments a player can make with credibility depends on the circumstances.\nIn some, a player can only commit to the performance of an action, in others, he can commit himself conditionally on the actions of the other players.\nSome situations even allow for commitments on commitments or for commitments to randomized actions.\nWe explore the formal properties of these types of (conditional) 
commitment and their interrelationships.\nSo as to preclude inconsistencies among conditional commitments, we assume an order in which the players make their commitments.\nCentral to our analyses is the notion of an extortion, which we define, for a given order of the players, as a profile that contains, for each player, an optimal commitment given the commitments of the players that committed earlier.\nOn this basis, we investigate for different commitment types whether it is advantageous to commit earlier rather than later, and how the outcomes obtained through extortions relate to backward induction and Pareto efficiency.\n1.\nINTRODUCTION\nOn one view, the least one may expect of game theory is that it provides an answer to the question which actions maximize an agent's expected utility in situations of interactive decision making.\nFrom this perspective, the formal model of a game in strategic form only outlines the strategic features of an interactive situation.\nApart from merely choosing and performing an action from a set of actions, there may also be other courses open to an agent.\nE.g., the strategic lie of the land may be such that a promise, a threat, or a combination of both would be more conductive to his ends.\nLikewise, a threat only succeeds in deterring an agent if the latter can be made to believe that the threatener is bound to execute the threat, should it be ignored.\nIn this sense, promises and threats essentially involve a commitment on the part of the one who makes them, thus purposely restricting his freedom of choice.\nPromises and threats epitomize one of the fundamental and at first sight perhaps most surprising phenomena in game theory: it may occur that a player can improve his strategic position by limiting his own freedom of action.\nBy commitments we will understand such limitations of one's action space.\nAction itself could be seen as the ultimate commitment.\nPerforming a particular action means doing so to the exclusion of 
all other actions.\nCommitments come in different forms and it may depend on the circumstances which ones can and which ones cannot credibly be made.\nBesides simply committing to the performance of an action, an agent might make his commitment conditional on the actions of other agents, as, e.g., the kidnapper does, when he promises to set free a hostage on receiving a ransom, while threatening to cut off another toe, otherwise.\nSome situations even allow for commitments on commitments or for commitments to randomized actions.\nBy focusing on the selection of actions rather than on commitments, it might seem that the conception of game theory as mere interactive decision theory is too narrow.\nIn this respect, Schelling's view might seem to evince a more comprehensive understanding of what game theory tries to accomplish.\nOne might object, that commitments could be seen as the actions of a larger game.\n[...] What we want is a theory that systematizes the study of the various universal ingredients that make up the move-structure of games; too abstract a model will miss them.\n[9, pp. 
156-7]\nOur concern is with these commitment tactics, be it that our analysis is confined to situations in which the players can commit in a given order and where we assume the commitments the players can make are given.\nDespite Schelling's warning for too abstract a framework, our approach will be based on the formal notion of an extortion, which we will propose in Section 4 as a uniform tactic for a comprehensive class of situations in which commitments can be made sequentially.\nOn this basis we tackle such issues as the usefulness of certain types of commitment in different situations (strategic games) or whether it is better to commit early rather than late.\nWe also provide a framework for the assessment of more general game theoretic matters like the relationship of extortions to backward induction or Pareto efficiency.\nFor example, commitments have been argued to be of importance for interacting software agents as well as for mechanism design.\nIn the former setting, the inability to re-program a software agent on the fly can be seen as a commitment to its specification and thus exploited to strengthen its strategic position in a multiagent setting.\nA mechanism, on the other hand, could be seen as a set of commitments that steers the players' behavior in a certain desired way (see, e.g., [2]).\nThese games analyze situations in which a leader commits to a pure or mixed strategy, and a number of followers, who then act simultaneously.\nAfter briefly discussing related work in Section 2, we present the formal game theoretic framework, in which we define the notions of a commitment type as well as conditional and unconditional commitments (Section 3).\nIn Section 4 we propose the generic concept of an extortion, which for each commitment type captures the idea of an optimal commitment profile.\nSection 5 briefly reviews some other commitment types, such as inductive, mixed and mixed conditional commitments.\n2.\nRELATED WORK\nCommitment is a central concept 
in game theory.\nThe possibility to make commitments distinguishes cooperative from noncooperative game theory [4, 6].\nLeadership games, as mentioned in the introduction, analyze commitments to pure or mixed strategies in what is essentially a two-player setting [15, 16].\nInformally, Schelling [9] has emphasized the importance of promises, threats and the like for a proper understanding of social interaction.\nOn a more formal level, threats have also figured in bargaining theory.\nNash's threat game [5] and Harsanyi's rational threats [3] are two important early examples.\nAlso, commitments have played a significant role in the theory of equilibrium selection (see, e.g., [13].\nOver the last few years, game theory has become almost indispensable as a research tool for computer science and (multi) agent research.\nCommitments have by no means gone unnoticed (see,\nFigure 1: Committing to a dominated strategy can be advantageous.\ne.g., [1, 11]).\nRecently, also the strategic aspects of commitments have attracted the attention of computer scientists.\nThus, Conitzer and Sandholm [2] have studied the computational complexity of computing the optimal strategy to commit to in normal form and Bayesian games.\nSandholm and Lesser [8] employ levelled commitments for the design of multiagent systems in which contractual agreements are not fully binding.\nAnother connection between commitments and computer science has been pointed out by Samet [7] and Tennenholtz [12].\nTheir point of departure is the observation that programs can be used to formulate commitments that are conditional on the programs of other systems.\nOur approach is similar to the Stackleberg setting in that we assume an order in which the players commit.\nWe, however, consider a number of different commitment types, among which conditional commitments, and propose a generic solution concept.\n6.\nSUMMARY AND CONCLUSION\nIn some situations agents can strengthen their strategic position by committing 
themselves to a particular course of action.\nThere are various types of commitment, e.g., pure, mixed and conditional.\nWhich type of commitment an agent is in a position in to make essentially depends on the situation under consideration.\nIf the agents commit in a particular order, there is a tactic common to making commitments of any type, which we have formalized by means the concept of an extortion.\nThis generic concept of extortion can be analyzed in abstracto.\nMoreover, on its basis the various commitment types can be compared formally and systematically.\nWe have seen that the type of commitment an agent can make has a profound impact on what an agent can achieve in a gamelike situation.\nIn some situations a player is much helped if he is in a position to commit conditionally, whereas in others mixed commitments would be more profitable.\nThis raises the question as to the characteristic formal features of the situations in which it is advantageous for a player to be able to make commitments of a particular type.\nAnother issue which we leave for future research is the computational complexity of finding an extortion for the different commitment types.","lvl-2":"Commitment and Extortion *\nABSTRACT\nMaking commitments, e.g., through promises and threats, enables a player to exploit the strengths of his own strategic position as well as the weaknesses of that of his opponents.\nWhich commitments a player can make with credibility depends on the circumstances.\nIn some, a player can only commit to the performance of an action, in others, he can commit himself conditionally on the actions of the other players.\nSome situations even allow for commitments on commitments or for commitments to randomized actions.\nWe explore the formal properties of these types of (conditional) commitment and their interrelationships.\nSo as to preclude inconsistencies among conditional commitments, we assume an order in which the players make their commitments.\nCentral to 
our analyses is the notion of an extortion, which we define, for a given order of the players, as a profile that contains, for each player, an optimal commitment given the commitments of the players that committed earlier.\nOn this basis, we investigate for different commitment types whether it is advantageous to commit earlier rather than later, and how the outcomes obtained through extortions relate to backward induction and Pareto efficiency.\n1.\nINTRODUCTION\nOn one view, the least one may expect of game theory is that it provides an answer to the question which actions maximize an agent's expected utility in situations of interactive decision making.\n* This material is based upon work supported by the Deutsche Forschungsgemeinschaft under grant BR 2312\/3 -1.\nA slightly divergent view is expounded by Schelling when he states that \"strategy [...] is not concerned with the efficient application of force but with the exploitation ofpotential force\" [9, page 5].\nFrom this perspective, the formal model of a game in strategic form only outlines the strategic features of an interactive situation.\nApart from merely choosing and performing an action from a set of actions, there may also be other courses open to an agent.\nE.g., the strategic lie of the land may be such that a promise, a threat, or a combination of both would be more conductive to his ends.\nThe potency of a promise, however, essentially depends on the extent the promisee can be convinced of the promiser's resolve to see to its fulfillment.\nLikewise, a threat only succeeds in deterring an agent if the latter can be made to believe that the threatener is bound to execute the threat, should it be ignored.\nIn this sense, promises and threats essentially involve a commitment on the part of the one who makes them, thus purposely restricting his freedom of choice.\nPromises and threats epitomize one of the fundamental and at first sight perhaps most surprising phenomena in game theory: it may occur 
that a player can improve his strategic position by limiting his own freedom of action.\nBy commitments we will understand such limitations of one's action space.\nAction itself could be seen as the ultimate commitment.\nPerforming a particular action means doing so to the exclusion of all other actions.\nCommitments come in different forms and it may depend on the circumstances which ones can and which ones cannot credibly be made.\nBesides simply committing to the performance of an action, an agent might make his commitment conditional on the actions of other agents, as, e.g., the kidnapper does, when he promises to set free a hostage on receiving a ransom, while threatening to cut off another toe, otherwise.\nSome situations even allow for commitments on commitments or for commitments to randomized actions.\nBy focusing on the selection of actions rather than on commitments, it might seem that the conception of game theory as mere interactive decision theory is too narrow.\nIn this respect, Schelling's view might seem to evince a more comprehensive understanding of what game theory tries to accomplish.\nOne might object, that commitments could be seen as the actions of a larger game.\nIn reply to this criticism Schelling remarks: While it is instructive and intellectually satisfying to see how such tactics as threats, commitments, and promises can be absorbed in an enlarged, abstract \"supergame\" (game in \"normal form\"), it should be emphasized that we cannot learn anything about those tactics by studying games that are already in normal form.\n[...] What we want is a theory that systematizes the study of the various universal ingredients that make up the move-structure of games; too abstract a model will miss them.\n[9, pp. 
156-7]\nOur concern is with these commitment tactics, be it that our analysis is confined to situations in which the players can commit in a given order and where we assume the commitments the players can make are given.\nDespite Schelling's warning for too abstract a framework, our approach will be based on the formal notion of an extortion, which we will propose in Section 4 as a uniform tactic for a comprehensive class of situations in which commitments can be made sequentially.\nOn this basis we tackle such issues as the usefulness of certain types of commitment in different situations (strategic games) or whether it is better to commit early rather than late.\nWe also provide a framework for the assessment of more general game theoretic matters like the relationship of extortions to backward induction or Pareto efficiency.\nInsight into these matters has proved itself invaluable for a proper understanding of diplomatic policy during the Cold War.\nNowadays, we believe, these issues are equally significant for applications and developments in such fields as multiagent systems, distributed computing and electronic markets.\nFor example, commitments have been argued to be of importance for interacting software agents as well as for mechanism design.\nIn the former setting, the inability to re-program a software agent on the fly can be seen as a commitment to its specification and thus exploited to strengthen its strategic position in a multiagent setting.\nA mechanism, on the other hand, could be seen as a set of commitments that steers the players' behavior in a certain desired way (see, e.g., [2]).\nOur analysis is conceptually similar to that of Stackelberg or leadership games [15], which have been extensively studied in the economic literature (cf., [16]).\nThese games analyze situations in which a leader commits to a pure or mixed strategy, and a number of followers, who then act simultaneously.\nOur approach, however, differs in that it is assumed that the 
players all move in a particular order (first, second, third and so on) and that it is specifically aimed at incorporating a wide range of possible commitments, in particular conditional commitments. After briefly discussing related work in Section 2, we present the formal game-theoretic framework, in which we define the notion of a commitment type as well as conditional and unconditional commitments (Section 3). In Section 4 we propose the generic concept of an extortion, which for each commitment type captures the idea of an optimal commitment profile. We point out an equivalence between extortions and backward induction solutions, and investigate whether it is advantageous to commit earlier rather than later and how the outcomes obtained through extortions relate to Pareto efficiency. Section 5 briefly reviews some other commitment types, such as inductive, mixed and mixed conditional commitments. The paper concludes with an overview of the results and an outlook on future research in Section 6.

2. RELATED WORK

Commitment is a central concept in game theory. The possibility to make commitments distinguishes cooperative from non-cooperative game theory [4, 6]. Leadership games, as mentioned in the introduction, analyze commitments to pure or mixed strategies in what is essentially a two-player setting [15, 16]. Informally, Schelling [9] has emphasized the importance of promises, threats and the like for a proper understanding of social interaction. On a more formal level, threats have also figured in bargaining theory; Nash's threat game [5] and Harsanyi's rational threats [3] are two important early examples. Commitments have also played a significant role in the theory of equilibrium selection (see, e.g., [13]). Over the last few years, game theory has become almost indispensable as a research tool for computer science and (multi)agent research, and commitments have by no means gone unnoticed (see, e.g., [1, 11]). Recently, the strategic aspects of commitments have also attracted the attention of computer scientists. Thus, Conitzer and Sandholm [2] have studied the computational complexity of computing the optimal strategy to commit to in normal-form and Bayesian games. Sandholm and Lesser [8] employ levelled commitments for the design of multiagent systems in which contractual agreements are not fully binding. Another connection between commitments and computer science has been pointed out by Samet [7] and Tennenholtz [12]. Their point of departure is the observation that programs can be used to formulate commitments that are conditional on the programs of other systems. Our approach is similar to the Stackelberg setting in that we assume an order in which the players commit. We, however, consider a number of different commitment types, among them conditional commitments, and propose a generic solution concept.

3. COMMITMENTS

By committing, an agent can improve his strategic position. It may even be advantageous to commit to a strategy that is strongly dominated, i.e., one for which there is another strategy that yields a better payoff no matter how the other agents act. Consider for example the 2 × 2 game in Figure 1, in which one player, Row, chooses rows and another, Col, chooses columns. The entries in the matrix indicate the payoffs to Row and Col, respectively.

Figure 1: Committing to a dominated strategy can be advantageous.

Then, top-left is the solution obtained by iterated elimination of strongly dominated strategies: for Row, playing top is always better than playing bottom, and assuming that Row will therefore never play bottom, left is always better than right for Col.
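The reasoning around Figure 1 can be checked mechanically. The payoff matrix below is an assumption for illustration (the actual Figure 1 entries are not reproduced in the text): it is chosen so that top strongly dominates bottom for Row, iterated elimination yields top-left with a payoff of one to Row, and a credible commitment to the dominated strategy bottom yields him two.

```python
# Hypothetical payoffs consistent with the Figure 1 discussion
# (the actual matrix is not reproduced in the text): each entry is
# (Row's payoff, Col's payoff).
game = {
    ("top", "left"): (1, 3), ("top", "right"): (3, 0),
    ("bottom", "left"): (0, 0), ("bottom", "right"): (2, 1),
}

def col_best_response(row_action):
    """Col's best reply once Row's action is fixed (or credibly committed)."""
    return max(("left", "right"), key=lambda c: game[(row_action, c)][1])

# Top strongly dominates bottom for Row ...
assert all(game[("top", c)][0] > game[("bottom", c)][0] for c in ("left", "right"))
# ... so iterated elimination ends in top-left, giving Row a payoff of one.
assert col_best_response("top") == "left" and game[("top", "left")][0] == 1
# But if Row credibly commits to the dominated strategy bottom,
# Col's best reply is right, and Row attains a payoff of two instead.
assert col_best_response("bottom") == "right"
assert game[("bottom", col_best_response("bottom"))][0] == 2
```

With these numbers, Col committing to left would likewise steer Row to top and secure Col his best outcome, while simultaneous unannounced commitments end in bottom-left, the worst cell for both.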
However, if Row succeeds in convincing Col of his commitment to play bottom, the latter had better choose the right column. Thus, Row attains a payoff of two instead of one. Along a similar line of reasoning, however, Col would wish to commit to the left column, as convincing Row of this commitment guarantees him his most desirable outcome. If, on the other hand, both players actually commit themselves in this way but without convincing the other party of their having done so, the game ends in misery for both. Important types of commitments, however, cannot simply be analyzed as unconditional commitments to actions. The essence of a threat, for example, is deterrence: if successful, it is not carried out. (This is also the reason why the credibility of a threat is not necessarily undermined if putting it into effect means that the threatener is also harmed.) By contrast, promises are made to entice and, as such, are meant to be fulfilled. Thus, both threats and promises would be strategically void if they were unconditional. Figure 2 shows an example, in which Col can guarantee himself a payoff of three by threatening to choose the right column if Row chooses top. (This will suffice to deter Row, and there is no need for an additional promise on the part of Col.)
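A minimal sketch of this threat, with assumed payoffs (the actual Figure 2 matrix is not shown here); the numbers are picked to be consistent with the surrounding discussion, including the (1, 3) outcome mentioned later in the text.

```python
# Hypothetical payoffs consistent with the Figure 2 discussion:
# (Row's payoff, Col's payoff) for each cell.
game = {
    ("top", "left"): (2, 2), ("top", "right"): (0, 1),
    ("bottom", "left"): (1, 3), ("bottom", "right"): (0, 0),
}

# Col's conditional commitment (threat): right if Row plays top, left otherwise.
threat = {"top": "right", "bottom": "left"}

def row_best_reply(col_commitment):
    """Row's optimal action once Col's conditional commitment is credible."""
    return max(("top", "bottom"), key=lambda r: game[(r, col_commitment[r])][0])

# Facing the threat, Row chooses between payoffs 0 (top) and 1 (bottom) ...
assert game[("top", threat["top"])][0] == 0
assert game[("bottom", threat["bottom"])][0] == 1
# ... so Row plays bottom, and Col secures a payoff of three.
r = row_best_reply(threat)
assert r == "bottom" and game[(r, threat[r])][1] == 3
```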
He cannot do so by merely committing unconditionally, and neither can Row if he were to commit first. In the case of conditional commitments, however, a particular kind of inconsistency can arise: it is not in general the case that any two commitments can both be credible. In a 2 × 2 game, it could occur that Row commits conditionally to playing top if Col plays left, and bottom otherwise.

Figure 2: The column player Col can guarantee himself a payoff of three by threatening to play right if the row player Row plays top.

If now Col simultaneously were able to commit to the conditional strategy of playing right if Row plays top, and left otherwise, there is no strategy profile that can be played without one of the players' bluffs being called. To get around this problem, one can write down conditional commitments in the form of rules and define appropriate fixed-point constructions, as suggested by Samet [7] and Tennenholtz [12]. Since checking the semantic equivalence of two commitments (or commitment conditions) is undecidable in general, Tennenholtz bases his definition of program equilibrium on syntactic equivalence. We, by contrast, try to steer clear of fixed-point constructions by assuming that the players make their commitments in a particular order. Each player can then make his commitments dependent on the actions of the players to commit after him, but not on the commitments of the players that committed before. On the issue of how this order comes about we do not enter here; rather, we assume it to be determined by the circumstances, which may force or permit some players to commit earlier and others later. We will find that it is not always beneficial to commit earlier rather than later, or vice versa. Another point to heed is that we only consider the case in which the commitments are absolutely binding: we do not take into account commitments that can be violated. Intuitively, this could be understood as meaning that the possibility
of violation fatally undermines the credibility of the commitment. We also assume commitments to be complete, in the sense that they fully lay down a player's behavior in all foreseeable circumstances. These assumptions imply that the outcome of the game is entirely determined by the commitments the players make. Although these might be implausible assumptions for some situations, we had better study the idealized case first, before tackling the complications of the more general case. To make these concepts formally precise, we first have to fix some notation.

3.1 Strategic Games

A strategic game is a tuple (N, (Ai)i∈N, (ui)i∈N), where N = {1, ..., n} is a finite set of players, Ai is a set of actions available to player i, and ui a real-valued utility function for player i on the set of (pure) strategy profiles S = A1 × · · · × An. We call a game finite if for all players i the action set Ai is finite. A mixed strategy σi for a player i is a probability distribution over Ai. We write Σi for the set of mixed strategies available to player i, and Σ = Σ1 × · · · × Σn for the set of mixed strategy profiles. We further have σ(a) and σi(a) denote the probability of action a in mixed strategy profile σ or mixed strategy σi, respectively. In settings involving expected utility, we will generally assume that utility functions represent von Neumann-Morgenstern preferences. For a player i and (mixed) strategy profiles σ and τ we write σ ≼i τ if ui(σ) ≤ ui(τ).

3.2 Conditional Commitments

Relative to a strategic game (N, (Ai)i∈N, (ui)i∈N) and an ordering π = (π1, ..., πn) of the players, we define the set Fπi of (pure) conditional commitments of a player πi as the set of functions from Aπ1 × · · · × Aπi−1 to Aπi. For π1 we have the set of conditional commitments coincide with Aπ1. By a conditional commitment profile f we understand any combination of conditional commitments in Fπ1 × · · · × Fπn. Intuitively, π reflects the sequential order in which the players can make their commitments, with πn committing first, πn−1 second, and so on. Each player can condition his action on the actions of all players that are to commit after him. In this manner, each conditional commitment profile f can be seen to determine a unique strategy profile, denoted by f̄, which will be played if all players stick to their conditional commitments. More formally, the strategy profile f̄ = (f̄π1, ..., f̄πn) for a conditional commitment profile f is defined inductively as f̄π1 =df fπ1 and f̄πi+1 =df fπi+1(f̄π1, ..., f̄πi). The sequence f̄π1, (f̄π1, f̄π2), ..., (f̄π1, ..., f̄πn) will be called the path of f. E.g., in the two-player game of Figure 2 and given the order (Row, Col), Row has two conditional commitments, top and bottom, which we will henceforth denote t and b. Col, on the other hand, has four conditional commitments, corresponding to the different functions mapping strategies of Row to those of Col.
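The inductive definition of f̄ can be sketched directly; the encoding of commitments as functions of the tuple of earlier players' actions is ours, but follows the definition above.

```python
# A conditional commitment profile determines a unique strategy profile
# (written f-bar in the text), computed by unfolding the commitments in
# the order pi_1, pi_2, ...  Sketch for the two-player game of Figure 2
# with order (Row, Col).

def induced_profile(commitments):
    """commitments[i] maps the tuple of earlier players' actions to an action."""
    path = []
    for commit in commitments:
        path.append(commit(tuple(path)))
    return tuple(path)

# Row's commitment: play top (it may ignore the empty prefix).
f_row = lambda prefix: "t"
# Col's conditional commitment f with f(t) = l and f(b) = r.
f_col = lambda prefix: {"t": "l", "b": "r"}[prefix[0]]

# This reproduces the example below: (t, f)-bar = (t, f(t)) = (t, l).
assert induced_profile([f_row, f_col]) == ("t", "l")
```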
If we consider a conditional commitment f for Col such that f(t) = l and f(b) = r, then (t, f) is a conditional commitment profile and (t, f)‾ = (t, f(t)) = (t, l). There is a natural way in which a strategic game G together with an ordering (π1, ..., πn) of the players can be interpreted as an extensive form game with perfect information (see, e.g., [4, 6])¹, in which π1 chooses his action first, π2 second, and so on. Observe that under this assumption the strategies in the extensive form game and the conditional commitments in the strategic game G with ordering π are mathematically the same objects. Applying backward induction to the extensive form game yields subgame perfect equilibria, which arguably provide appropriate solutions in this setting. From the perspective of conditional commitments, however, players move in reverse order. We will argue that under this interpretation other strategy profiles should be singled out as appropriate. To illustrate this point, consider once more the game in Figure 2 and observe that neither player can improve on the outcome obtained via iterated strong dominance by committing unconditionally to some strategy. Situations like this, in which players can make unconditional commitments in a fixed order, can fruitfully be analyzed as extensive form games, and the most lucrative unconditional commitment can be found through backward induction. Figure 3 shows the extensive form associated with the game of Figure 2. The strategies available to the row player are the same as in the strategic form: choosing the top or the bottom row. The strategies for the column player in the extensive game are given by the four functions that map strategies of the row player in the strategic game to one of his own. Transforming this extensive form back into a strategic game (see Figure 4), we find that there exists a second equilibrium besides the one found by means of backward induction. This equilibrium with outcome (1, 3), indicated by the thick lines in Figure 3, has been argued to be unacceptable in the sequential game, as it would involve an incredible threat by Col: once Row has played top, Col finds himself confronted with a fait accompli. He had better make the best of a bad bargain and opt for the left column after all. This is in essence the line of thought Selten followed in his famous argument for subgame perfect equilibria [10]. If, however, the strategies of Col in the extensive form are thought of as the conditional commitments he can make in case

¹For a formal definition of a game in extensive form, the reader may consult one of the standard textbooks, such as [4] or [6]. In this paper all formal definitions are based on strategic games and orderings of the players only.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Figure 3: Extensive form obtained from the strategic game of Figure 2 when the row player chooses an action first. The backward induction solution is indicated by dashed lines, the conditional commitment solution by solid ones. (The horizontal dotted lines do not indicate information sets, but merely indicate which players are to move when.)

Figure 4: The strategic game corresponding to the extensive form of Figure 3.

he moves first, the situation is radically different. Thus we also assume that it is possible for Col to make credible the threat to choose the right column if Row were to play top, so as to ensure that the latter is always better off playing the bottom row. If Col can make a conditional commitment of playing the right column if Row chooses top, and the left column otherwise, this leaves Row with the easy choice between a payoff of zero or one, and Col may expect a payoff of three. This line of reasoning can be generalized to yield an algorithm for finding optimal conditional commitments for general two-player games:

1. Find a strategy profile s = (sπ1, sπ2) with maximum payoff to player π2, and set fπ1 = sπ1 and fπ2(sπ1) = sπ2.
2. For each tπ1 ∈ Aπ1 with tπ1 ≠ sπ1, find a strategy tπ2 ∈ Aπ2 that minimizes uπ1(tπ1, tπ2), and set fπ2(tπ1) = tπ2.
3. If uπ1(tπ1, fπ2(tπ1)) ≤ uπ1(sπ1, sπ2) for all tπ1 ≠ sπ1, return f.
4. Otherwise, find the strategy profile (s′π1, s′π2) with the highest payoff to π2 among the ones that have not yet been considered. Set fπ1 = s′π1 and fπ2(s′π1) = s′π2, and continue with Step 2.

Generalizing the idea underlying this algorithm, we present in Section 4 the concept of an extortion, which applies to games with any number of players. For any order of the players an extortion contains, for each player, an optimal commitment given the commitments of the players that committed earlier.

3.3 Commitment Types

So far, we have distinguished between conditional and unconditional commitments. If made sequentially, both of them determine a unique strategy profile in a given strategic game. This notion of sequential commitment allows for generalization and gives rise to the following definition of a (sequential) commitment type.

DEFINITION 3.1. (Sequential commitment type) A (sequential) commitment type τ associates with each strategic game G and each ordering π of its players a tuple (Xπ1, ..., Xπn, φ), where Xπ1, ..., Xπn are (abstract) sets of commitments and φ is a function mapping each profile in X = Xπ1 × · · · × Xπn to a (mixed) strategy profile of G. A commitment type (Xπ1, ..., Xπn, φ) is finite whenever Xπi is finite for each i with 1 ≤ i ≤ n.
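The four-step procedure above admits a direct implementation. The sketch below follows the steps literally; the Figure 2 payoffs used in the test are assumed (the actual matrix is not reproduced in the text), chosen to be consistent with the surrounding discussion.

```python
from itertools import product

def optimal_conditional_commitment(A1, A2, u1, u2):
    """Sketch of the four-step procedure above: player pi_2 commits
    conditionally, then pi_1 chooses an action. Returns (f_pi1, f_pi2)."""
    # Candidate profiles, most lucrative for pi_2 first (Steps 1 and 4).
    candidates = sorted(product(A1, A2), key=lambda s: -u2(*s))
    for s1, s2 in candidates:
        f2 = {s1: s2}
        # Step 2: off-path, threaten with the action minimizing pi_1's payoff.
        for t1 in A1:
            if t1 != s1:
                f2[t1] = min(A2, key=lambda t2: u1(t1, t2))
        # Step 3: return f if pi_1 has no profitable deviation from s1.
        if all(u1(t1, f2[t1]) <= u1(s1, s2) for t1 in A1 if t1 != s1):
            return s1, f2
    return None  # no candidate satisfied Step 3 (defensive fallback)

# Assumed Figure 2 payoffs: (Row, Col) entries; Col extorts (bottom, left).
u = {("t", "l"): (2, 2), ("t", "r"): (0, 1),
     ("b", "l"): (1, 3), ("b", "r"): (0, 0)}
s1, f2 = optimal_conditional_commitment(
    ["t", "b"], ["l", "r"],
    lambda a, b: u[(a, b)][0], lambda a, b: u[(a, b)][1])
# Col threatens right after top, plays left after bottom; Row plays bottom.
assert (s1, f2["t"], f2["b"]) == ("b", "r", "l")
```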
Thus, the type of unconditional commitments associates with a game and an ordering π of its players the tuple (Sπ1, ..., Sπn, id), where id is the identity function. Similarly, (Fπ1, ..., Fπn, ¯) is the tuple associated with the same game by the type of (pure) conditional commitments, where ¯ maps each conditional commitment profile f to the strategy profile f̄ it determines.

4. EXTORTIONS

In the introduction, we argued informally how players could improve their position by committing conditionally. How well they can do could be analyzed by means of an extensive game with the actions of each player being defined as the possible commitments he can make. Here, we introduce for each commitment type a corresponding notion of extortion, which is defined relative to a strategic game and an ordering of the players. Extortions are meant to capture the concept of a profile that contains, for each player, an optimal commitment given the commitments of the players that committed earlier. A complicating factor is that in finding a player's optimal commitment, one should not only take into account how such a commitment affects other players' actions, but also how it enables them to make their commitments.

DEFINITION 4.1. (τ-extortion) Let τ be a commitment type associating the tuple (Xπ1, ..., Xπn, φ) with a strategic game G and an ordering π of its players. Any commitment profile x in X is a τ-extortion of order 0. For m > 0, a commitment profile x in X is a τ-extortion of order m if x is a τ-extortion of order m−1 and

φ(yπ1, ..., yπm, xπm+1, ..., xπn) ≼πm φ(xπ1, ..., xπm, xπm+1, ..., xπn)

for all commitment profiles y in X with (yπ1, ..., yπm, xπm+1, ..., xπn) a τ-extortion of order m−1. A τ-extortion is a commitment profile that is a τ-extortion of order m for all m with 0 ≤ m ≤ n. Furthermore, we say that a (mixed) strategy profile σ is τ-extortionable if there is some τ-extortion x with φ(x) = σ.
Thus, an extortion of order 1 is a commitment profile in which player π1 makes a commitment that maximizes his payoff, given fixed commitments of the other players. An extortion of order m is an extortion of order m−1 that maximizes player πm's payoff, given fixed commitments of the players πm+1 through πn. For the type of conditional commitments we have that any conditional commitment profile f is an extortion of order 0, and an extortion of order m greater than 0 is any extortion of order m−1 for which

(gπ1, ..., gπm, fπm+1, ..., fπn)‾ ≼πm (fπ1, ..., fπm, fπm+1, ..., fπn)‾

for each conditional commitment profile g such that (gπ1, ..., gπm, fπm+1, ..., fπn) is an extortion of order m−1. To illustrate the concept of an extortion for conditional commitments, consider the three-player game in Figure 5 and assume (Row, Col, Mat) to be the order in which the players commit.

Figure 5: A three-player strategic game.

Figure 6: A conditional extortion f of order 1 (left) and an extortion g of order 3 (right).

Figure 6 depicts the possible conditional commitments of the players in extensive forms, with the left branch corresponding to Row's strategy of playing the top row. Let f and g be the conditional commitment profiles indicated by the thick lines in the left and right figures, respectively. Both f and g are extortions of order 1. In both f and g, Row guarantees himself the higher payoff given the conditional commitments of Mat and Col. Only g, however, is also an extortion of order 2. To appreciate that f is not, consider the conditional commitment profile h in which Row chooses top and Col chooses right no matter how Row decides, i.e., h is such that hRow = t and hCol(t) = hCol(b) = r.
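The order-by-order definition can be checked by brute force on a small game. The sketch below enumerates all conditional commitment profiles for a 2 × 2 game with assumed Figure 2 payoffs (the same illustrative numbers as before, not taken from the paper's figure) and recovers the extortion in which Col threatens right after top.

```python
from itertools import product

# Assumed (Row, Col) payoffs for the game of Figure 2, for illustration.
u = {("t", "l"): (2, 2), ("t", "r"): (0, 1),
     ("b", "l"): (1, 3), ("b", "r"): (0, 0)}

# Order (Row, Col): Col commits first, conditionally on Row's action.
profiles = [(r, dict(zip("tb", img)))          # (Row's action, Col's commitment)
            for r in "tb" for img in product("lr", repeat=2)]
outcome = lambda p: (p[0], p[1][p[0]])          # strategy profile a profile induces

# Extortions of order 1: Row's commitment is optimal given Col's.
order1 = [p for p in profiles
          if all(u[outcome(p)][0] >= u[outcome((r, p[1]))][0] for r in "tb")]
# Extortions of order 2: order-1 extortions maximizing Col's payoff.
best = max(u[outcome(p)][1] for p in order1)
order2 = [p for p in order1 if u[outcome(p)][1] == best]

# Every extortion induces the outcome (bottom, left), with payoffs (1, 3).
assert {outcome(p) for p in order2} == {("b", "l")}
assert all(u[outcome(p)] == (1, 3) for p in order2)
```

This agrees with the algorithmic account given earlier for two-player games, illustrating on a small scale the correspondence between the two views.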
Then, (hRow, hCol, fMat) is also an extortion of order 1, but yields Col a higher payoff than f does. We leave it to the reader to check that, by contrast, g is an extortion of order 3, and therewith an extortion per se.

4.1 Promises and Threats

One way of understanding conditional extortions is by conceiving of them as combinations of precisely one promise and a number of threats. From the strategy profiles that can still be realized given the conditional commitments of the players that have committed before him, a player tries to enforce the strategy profile that yields him as much payoff as possible. Hence, he chooses his commitment so as to render deviations from the path that leads to this strategy profile as unattractive as possible ('threats') and the desired strategy profile itself as appealing as possible ('promises') for the relevant players. If (sπ1, ..., sπn) is such a desirable strategy profile for player πi and fπi his conditional commitment, the value of fπi(sπ1, ..., sπi−1) could be taken as his promise, whereas the values of fπi for all other (tπ1, ..., tπi−1) could be seen as constituting his threats. The higher the payoff to the other players in a strategy profile a player aims for, the easier it is for him to formulate an effective threat. However, making appropriate threats in this respect does not merely come down to minimizing the payoffs of the players to commit later wherever possible. A player should also take into account the commitments, promises and threats the following players can make on the basis of his and his predecessors' commitments. This is what makes extortionate reasoning sometimes so complicated, especially in situations with more than two players. For example, in the game of Figure 5, there is no conditional extortion that ensures Mat a payoff of two. To appreciate this, consider the possible commitments Mat can make in case Row plays top and Col plays left (tl) and in case Row plays top and Col plays right (tr). If Mat commits to the right matrix in both cases, he virtually promises Row a payoff of four, leaving himself with a payoff of at most one. Otherwise, he puts Col in a position to deter Row from choosing bottom by threatening to choose the right column if the latter does so. Again, Mat cannot expect a payoff higher than one. In short, no matter how Mat conditionally commits, he will either enable Col to threaten Row into playing top or fail to lure Row into playing the bottom row.

4.2 Benign Backward Induction

The solutions extortions provide can also be obtained by modeling the situation as an extensive form game and applying a backward inductive type of argument. The actions of the players in any such extensive form game are then given by their conditional commitments, which they choose sequentially. For higher commitment types, such as conditional commitments, such 'meta-games', however, grow exponentially in the number of strategies available to the players and are generally much larger than the original game. The correspondence between the backward induction solutions of the meta-game and the extortions of the original strategic game rather signifies that the concept of an extortion is defined properly. First we define the concept of benign backward induction in general, relative to a game in strategic form together with an ordering of the players. Intuitively it reflects the idea that each player chooses, for each possible combination of actions of his predecessors, the action that yields the highest payoff, given that his successors do similarly. The concept is called benign backward induction because it implies that a player, when indifferent between a number of actions, chooses the one that benefits his predecessors most. For an ordering π of the players, we have πᴿ denote its reversal (πn, ..., π1).

DEFINITION 4.2. (Benign backward induction) Let G be a strategic game and π an ordering of its players. A benign backward induction of order 0 is any conditional commitment profile f subject to π. For m > 0, a conditional commitment profile f is a benign backward induction (solution) of order m if f is a benign backward induction of order m−1 and

(gπᴿn, ..., gπᴿm+1, gπᴿm, ..., gπᴿ1)‾ ≼πᴿm (gπᴿn, ..., gπᴿm+1, fπᴿm, ..., fπᴿ1)‾

for any benign backward induction (gπᴿn, ..., gπᴿm+1, gπᴿm, ..., gπᴿ1) of order m−1. A conditional commitment profile f is a benign backward induction if it is a benign backward induction of order k for each k with 0 ≤ k ≤ n.

Each such tuple specifies the number of information items in a category c that can be reached through an acquaintance Aj, i.e., the information provision ability of Aj with respect to the information category c. As can be noticed, each tuple corresponds either to the agent itself (specifying the pieces of information classified in c available in its local repository) or to an acquaintance of the agent (recording the pieces of information in category c available to the acquaintance agent and to agents that can be reached through this acquaintance). The routing index is exploited for the propagation of queries to the right agents: those that are either more likely to provide answers or that know someone that can provide the requested pieces of information. Considering an agent Ai, the profile model of one of its acquaintances Aj is a set of tuples ⟨c, p⟩ maintained by Ai. Such a tuple specifies the probability p that the acquaintance Aj is interested in pieces of information in category c; subsequently, such a probability is also denoted by p(Aj, c). Formally, the profile model of an acquaintance Aj of Ai is the set of such tuples ⟨c, p(Aj, c)⟩ over the categories c. Profile models are exploited by the agents to decide where to 'advertise' their information provision abilities. Given two acquainted agents Ai and Aj, the information searching and sharing process proceeds as
it is depicted in Figure 1: Initially, each agent has no knowledge about the information provision abilities of its acquaintances, and it also possesses no information about their interests. When a query about a category c is sent to Ai from the agent Aj, then Ai has to update the profile of Aj concerning the category c, increasing the probability that Aj is interested in information in c. When this probability is greater than a threshold value (due to the queries about c that Aj has sent to Ai), then Ai assesses that it is highly probable for Aj to be interested in information in category c.

Figure 1. Typical pattern for information sharing between two acquaintances (numbers show the sequence of tasks).

This leads Ai to inform Aj about its information provision abilities as far as the category c is concerned. This information is used by Aj to update its index about Ai. This index is exploited by Aj to further propagate queries, and it is further propagated to those interested in c. Moreover, the profile of Aj maintained by Ai guides Ai to propagate changes concerning its information provision abilities to Aj. The above method has the following features: (a) It combines routing indices and token-based information sharing techniques for efficient information searching and sharing, without imposing an overlay network structure. (b) It can be used by agents to adapt safely and effectively to dynamic networks. (c) It supports the acquisition and exploitation of different types of locally available information for the 'tuning' process. (d) It extends the token-based method for information sharing (as it was originally proposed in [12,13]) in two respects: first, to deal with categories of information represented by means of ontology concepts and not with specific pieces of information, and second, to guide agents to advertise information that is semantically similar to the information requested, by using a semantic
similarity measure between information categories. Therefore, it paves the way for the use of token-based methods for semantic peer-to-peer systems. This is further described in Section 4.3. (e) It provides a more sophisticated way for agents to update routing indices than that originally proposed in [2]. This is done by gathering and exploiting acquaintances' profiles for effective information sharing, avoiding unnecessary and cyclic updates that may result in misleading information about agents' information provision abilities. This is further described in the next subsection.

4.2 Routing Indices

As already specified, given a network of agents and the set N(Ai) of an agent Ai's acquaintances, the routing index (RI) of Ai is a collection of indexing tuples ⟨Aj, c, n⟩, at most one for each acquaintance Aj and category c. The key idea is that, given such an index and a request concerning c, Ai will forward this request to Aj if the resources available through Aj (i.e., the information provision abilities of Aj with respect to c) can best serve this request. To compute the information provision abilities of Aj with respect to c, all tuples concerning the agents in N(Aj) ∪ {Aj} − {Ai} must be aggregated. Crespo and Garcia-Molina [2] examine various types of aggregations. In this paper, given some tuples ⟨c, n1⟩, ⟨c, n2⟩, ... maintained by the agent Aj, their aggregation is the tuple ⟨c, n1 + n2 + · · ·⟩. This gives information concerning the pieces of information that can be provided through Aj, but it does not distinguish what each of Aj's acquaintances can provide: this is an inherent feature of routing indices. Without considering the interests of its acquaintances, an agent may compute aggregations concerning the agents in N(A) ∪ {A} − {Aj} and advertise/share its information provision abilities to each agent Aj in N(A). For instance, given the network configuration depicted in Figure 2 and a category c, agent A sends the aggregation of the tuples concerning the agents in N(A) ∪ {A} − {A2} to agent A2, which records the corresponding tuple. Similarly, the aggregation of the tuples concerning the agents in N(A) ∪ {A} − {Ak} is sent to the agent Ak, which also records the corresponding tuple. It must be noticed that A2 and Ak record the information provision abilities of A each from their own point of view. Every time the tuple that models the information provision abilities of an agent changes, the aggregation has to be re-computed and the new aggregation sent to the appropriate neighbors in the way described above. Then, its neighbors have to propagate these updates to their acquaintances, and so on.

Figure 2. Aggregating and sharing information provision indices.

Routing indices may be misleading and lead to inefficiency in arbitrary graphs containing cycles. The exploitation of acquaintances' profiles can provide solutions to these deficiencies. Each agent propagates its information provision abilities concerning a category c only to those acquaintances that have a high interest in this category. As has been mentioned, an agent expresses its interest in a category by propagating queries about it. Therefore, indices concerning a category c are propagated in the inverse direction along the paths on which queries about c are propagated. Indices are propagated as long as the agents in the path have a high interest in c. Queries cannot be propagated in a cyclic fashion, since an agent serves and propagates only queries that it has not served at a previous time point. Therefore, due to their relation to queries, indices are not propagated in a cyclic fashion either. However, there is still a specific case where cycles cannot be avoided. Such a case is shown in Figure 3.

Figure 3. Cyclic pattern for the sharing of indices.

While the propagation of one query causes the propagation of the information provision abilities of agents in a non-cyclic way (since the agent A recognizes that this query has already been served), a second query causes the propagation of the information provision abilities of A to other agents in the network, causing, in conjunction with the propagation of indices due to the first query, a cyclic update of indices.

4.3 Profiles

The
key assumption behind the exploitation of acquaintances'' profiles, as it was originally proposed in [12,13], is that for an agent to pass a specific information item, this agent has a high interest on it or to related information.\nAs already said, in our case, acquaintances'' profiles are created based on received queries and specify the interests of acquaintances to specific information categories.\nGiven the query sent from to , has to record not only the interest of to , but Ak A2 A Notation Acquaintance relation Flow of query Flow of indices due to Flow of query Flow of indices due to 250 The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) the interest of to all the related classes, given their semantic similarity to To measure the similarity between two ontology classes we use the similarity function [0,1] [7]: = otherwise cc 01.0 1 cofsubconceptaiscif ji ji where is the length of the shortest path between and in the graph spanned by the sub concept relation and the minimal level in the hierarchy of either or .\nand are parameters scaling the contribution of shortest path length and , respectively.\nBased on previous works we choose =0.2 and =0.6 as optimal values.\nIt must be noticed that we measure similarity between sub-concepts, assigning a very low similarity value between concepts that are not related by the sub-concept relation.\nThis is due to that, each query about information in category can be answered by information in any sub-category of close enough to Given a threshold value 0.3, 0.3 indicates that an agent interested in is also interested in , while <0.3 indicates that an agent interested in is unlikely to be interested in .\nThis threshold value was chosen after some empirical experiments with ontologies.\nThe update of ``s assessment on pc based on an incoming query from is computed by leveraging Bayes Rule as follows [12,13]: , and ( ) If is the last in the || 1 || 2 ),( If is not the last in the Then 
probabilities must be normalized to ensure that , 1 )( , According to the first case of the equation, the probability that the agent that has propagated a query about to be interested about information in , is updated based on the similarity between and .\nThe second case updates the interests of agents other than the requesting one, in a way that ensures that normalization works.\nIt must be noticed that in contrast to [12,13], the computation has been changed in favour to the agent that passed the query.\nThe profiles of acquaintances enable an agent to decide where and which advertisements to be sent.\nSpecifically, for each and for which is greater than a threshold value (currently set to 0.5), the agent aggregates the vectors ( ) of each agent ( ) { }-{ }and sends the tuple ( , ) to .\nAlso, given a high , when a change to an index concerning occurs (e.g. due to a change in ``s local repository, or due to that the set of its acquaintances changed), sends the updated aggregated index entry to .\nDoing so, the agent which is highly interested to pieces of information in category updates its index so as to become aware of the information provision abilities of as far as the category is concerned.\n4.4 Tuning Tuning is performed seamlessly to searching: As agents propagate queries to be served, their profiles are getting updated by their acquaintances.\nAs their profiles are getting updated, agents receive the aggregated indices of their acquaintances, becoming aware of their information provision abilities on information categories to which they are probably interested.\nGiven these indices, agents further propagate queries to acquaintances that are more likely to serve queries, and so on.\nConcerning the routing index and the profiles maintained by an agent , it must be pointed that does not need to record all possible tuples, i.e. 
it records only those tuples that are of particular interest for searching and sharing information, depending on its own expertise and interests and those of its acquaintances. Initially, agents do not possess profiles of their acquaintances. For the indices there are two alternatives: either agents do not initially possess any information about their acquaintances' local repositories (the no-initialization case), or they do (the initialization case). Given a query, agents propagate it to those acquaintances that have the highest information provision abilities. In the no-initialization case, where an agent does not initially possess information about its acquaintances' abilities, it may initially propagate a query to all of them, resulting in a pure flooding approach, or it may propagate the query randomly to a percentage of them. In the initialization case, where an agent initially possesses information about its acquaintances' local repositories, it can propagate queries to all, or to a percentage, of those that can best serve the request. We considered both cases in our experiments.

Given a static setting where agents do not shift their expertise and the distribution of information pieces does not change, the network will eventually reach a state where no information concerning agents' information provision abilities needs to be propagated and no agents' profiles need to be updated: queries are propagated only to those agents that lead to a near-to-the-maximum benefit of the system in a very efficient way. In a dynamic setting, agents may shift their expertise or interests, they may leave the network at will, or welcome new agents that join the network and bring new information provision abilities, new interests and new types of queries. In this paper we study settings where agents may leave or join the network. This requires agents to adapt safely and effectively.
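The propagation choice described above can be sketched as a small selection routine. All names here are illustrative; an empty routing index corresponds to the no-initialization case.

```python
import random


def choose_acquaintances(routing_index, neighbors, category, fraction, rng=random):
    """Select the acquaintances a query about `category` is forwarded to.

    routing_index[(neighbor, category)] estimates how many matching items
    are reachable through that neighbor.
    """
    k = max(1, int(len(neighbors) * fraction))
    known = [a for a in neighbors if (a, category) in routing_index]
    if not known:
        # No information yet: forward to a random percentage of acquaintances
        # (fraction = 1.0 would reproduce pure flooding).
        return rng.sample(neighbors, k)
    # Otherwise forward to the top-k acquaintances by estimated provision ability.
    ranked = sorted(known, key=lambda a: routing_index[(a, category)], reverse=True)
    return ranked[:k]
```

As the routing index fills in during tuning, the same call gradually shifts from random forwarding to informed forwarding without any change in the caller.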
Towards this goal, in case an agent does not receive a reply from one of its acquaintances within a given time interval, it retracts all the indices and the profile concerning the missing acquaintance and repropagates to other agents the queries that had been sent to the missing agent since the last successful handshake. In case a new agent joins the network, its acquaintances that become aware of its presence propagate to the newcomer all the queries that they have processed in the last few time points (currently set to 6). This is done so as to inform the newcomer about their interests and initiate information sharing.

5. EXPERIMENTAL SETUP
To validate the proposed approach we have built a prototype that simulates large networks, and to test the scalability of our approach we have run several experiments with various types of networks. Here we present results from three network types with |N| = 100, |N| = 500 and |N| = 1000 that provide representative cases. Networks are constructed by randomly distributing |N| agents in an area, each with a visibility ratio equal to R. The acquaintances of an agent are those that are visible to the agent and those from which the agent is visible (since edges in the network are bidirectional). Details about the networks are given in Table 1. The column avg(|N(A)|) shows the average number of acquaintances per agent, and the column |T| shows the number of queries per network type. Note that the TypeA network is denser than the others, which are much larger. Each experiment ran 40 times. In each run the network is provided with a new set of randomly generated queries that originate from randomly chosen agents. The agents search and gather knowledge that they further use and enrich, tuning the network gradually, run by run. Each run lasts a number of rounds that depends on the TTL of the queries and on the parameters that determine the dynamics of the network.
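The network construction just described can be sketched as a random geometric graph. The square area side used below is a guessed value chosen only for illustration; the text states only the visibility ratio R.

```python
import random


def build_network(n_agents, area_side, radius, seed=0):
    """Scatter agents uniformly in a square area and link every pair within
    the visibility radius; edges are bidirectional, so the acquaintances of
    an agent are those it sees plus those that see it."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, area_side), rng.uniform(0, area_side))
           for _ in range(n_agents)]
    neigh = {a: set() for a in range(n_agents)}
    for a in range(n_agents):
        for b in range(a + 1, n_agents):
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            if dx * dx + dy * dy <= radius * radius:
                neigh[a].add(b)
                neigh[b].add(a)
    return neigh


# A TypeA-like configuration: 100 agents, visibility radius 10
# (area_side = 40 is an assumed value, not taken from the paper).
network = build_network(100, area_side=40.0, radius=10.0)
avg_degree = sum(len(v) for v in network.values()) / len(network)
```

Varying area_side relative to the radius tunes the density of the resulting network, which is how dense (TypeA) and sparse (TypeC) topologies can be obtained.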
To end a run, all queries must either have been served (i.e. 100% of the information items requested must have been found) or be unfulfilled (i.e. have exceeded their TTL). Note that in a dynamic setting this ending criterion causes some queries to be lost: this is the case when the only remaining active queries have been propagated to agents that left the network without their acquaintances being aware of it.

Table 1: Network types
         |N|     R    N     avg(|N(A)|)   |T|
TypeA    100     10   25    50            363
TypeB    500     10   125   20            1690
TypeC    1000    10   250   10            3330

Information used in the experiments is synthetic and is classified into 15 distinct categories: each agent's expertise comprises a unique information category. For the category in its expertise each agent holds at most 1000 information pieces, the exact number of which is determined randomly. At each run a constant number of queries is generated, depending on the type of network used (last column in Table 1). At each run, each query is randomly assigned to an originator agent and is set to request a random number of information items, classified in a sub-category of the originator agent's expertise. This sub-category is chosen randomly and the requested items are fewer than 6000. The TTL of any query is set to 6. In such a setting, the demand for information items is much higher than the agents' information provision abilities, given the TTL of the queries: the maximum benefit in any experimental case is much less than 60% (this was done so as to challenge the 'tuning' task in settings where queries cannot be served in the first hop or after 2-3 hops). Given that agents are initially not aware of acquaintances' local repositories (the no-initialization case), we have run several evaluation experiments for each network type depending on the percentage of acquaintances to which a query can be propagated by an agent. These types of experiments are denoted by
TypeX-Y, where X denotes the type of network and Y the percentage of acquaintances: here we present results for Y equal to 10, 20 or 50. For instance, TypeA-10 denotes a setting with a network of TypeA where each query is propagated to at most 10% of an agent's acquaintances. The exact number of acquaintances is chosen randomly per agent, and queries are propagated only to those acquaintances that are likely to best serve the request.

[Figure 4: Results for static networks as agents gather information about acquaintances' abilities and interests — i-messages per run, q-messages per run, benefit per run, and message gain per run for TypeA-10, TypeB-20 (with and without initialization), TypeC-50, and TypeB-20 without RIs.]

Figures 4 and 5 show experiments for static and dynamic networks of TypeA-10 (a dense network with a low percentage of acquaintances), TypeB-20 (a quite dense network with a low percentage of acquaintances), with and without initialization, and TypeC-50 (a less dense network with a quite high percentage of acquaintances).

[Figure 5: Results for dynamic networks as agents gather information about acquaintances' abilities and interests — i-messages per run, q-messages per run, benefit per run, and message gain per run for TypeB-20, TypeB-20 without RIs, TypeC-50, TypeC-50 without RIs, and TypeC-50 (static).]

To demonstrate the advantages of our method we have also considered networks without routing indices for the TypeC-50 and TypeB-20 networks: agents in these networks, similarly to [12,13], share information concerning their local repositories based on
their assessments of acquaintances' interests.

Results computed in each experiment show the number of query-propagation messages (q-messages), the number of messages for the update of indices (i-messages), the benefit of the system, i.e. the average ratio of the information pieces provided to the number of pieces requested per query, and the message gain, i.e. the ratio of the benefit to the total number of messages. The horizontal axis in each diagram corresponds to the runs. As shown in Figure 4, as agents search and share information from run 1 to run 40, they manage to increase the benefit of the system while drastically reducing the number of messages. Also (not shown here due to space reasons), the number of unfulfilled queries decreases, while the number of served queries gradually increases. The experiments show: (a) an effective tuning of the networks as time passes and more queries are posed to the network, even if agents maintain models of only a small percentage of their acquaintances; and (b) that 'tuning' can greatly facilitate the scalability of the information searching and sharing tasks in networks.

To show whether initial knowledge about acquaintances' local repositories (the initialization case) affects the effective tuning of the network, we provide representative results from the TypeB-20 network. As shown in Figure 4, the tuning task in this case does not manage to achieve the benefit of the system reported for the no-initialization case. On the contrary, while tuning affects the q-messages drastically, the i-messages are not affected in the same way: the i-messages in the initialization case are fewer than those in the TypeB-20 no-initialization case. This is shown more clearly by the message gain of both approaches: the message gain of TypeB-20 with initialization is higher than that of TypeB-20 without initialization. Therefore, initial knowledge concerning the local information of acquaintances can be used for guiding searching and tuning at the initial stages of the tuning task only if we need to gain efficiency.
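The two derived metrics above can be computed directly from per-query counts. This is a straightforward sketch; the field names are illustrative.

```python
def benefit(queries):
    """Average ratio of information pieces provided to pieces requested per query."""
    return sum(q["provided"] / q["requested"] for q in queries) / len(queries)


def message_gain(queries, q_messages, i_messages):
    """Ratio of the system's benefit to the total number of messages exchanged."""
    return benefit(queries) / (q_messages + i_messages)
```

For example, two queries that obtained 50 and 25 of 100 requested items give a benefit of 0.375; with 75 messages exchanged in total, the message gain is 0.005.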
Gaining efficiency (i.e. decreasing the number of required messages) comes at the cost of losing effectiveness (i.e. a lower benefit). This is due to the fact that, as agents possess information about acquaintances' local repositories, the tuning process enables the further exchange of messages concerning agents' information provision abilities only in cases where agents' profiles provide evidence for such a need. However, initial information about acquaintances' local repositories may mislead the searching process, resulting in low benefit. In case we need to gain effectiveness at the cost of reducing efficiency, this type of local knowledge does not suffice. Considering also the information sharing method without routing indices (the 'without RIs' cases), we can see that for static networks it requires more messages without managing to tune the system, while the benefit is nearly the same as that reported for our method. This is shown clearly in the message gain diagrams in Figure 4.

Figure 5 provides results for dynamic networks. These are results from a representative case of our experiments where more than 25% of (randomly chosen) nodes leave the network in each run during the experiment. After a random number of rounds, a new node may replace one that left. This newcomer has no information about the network. Approximately 25% of the nodes that leave the network are not replaced for 50% of the experiment, and approximately 50% are not replaced for more than 35% of the experiment. In such a highly dynamic setting, with very scarce information resources distributed in the network, Figure 5 shows that the tuning approach manages to keep the benefit at acceptable levels, while still drastically reducing the number of i-messages. However, as can be expected, this reduction is not as drastic as in the corresponding static
cases. Figure 5 shows that the message gain for the dynamic case is comparable to the message gain for the corresponding (TypeC-50) static case, which proves the value of this approach for dynamic settings. The comparison to the case where no routing indices are exploited reveals the same results as in the static case, at the cost of a large number of messages. Finally, it must be pointed out that the maximum number of messages per query required by the proposed method is nearly 12, which is less than that reported by other efforts.

6. CONCLUSIONS
This paper presents a method for semantic query processing in large networks of agents that combines routing indices with information sharing methods. The presented method enables agents to keep records of acquaintances' interests, to advertise their information provision abilities to those that have a high interest in them, and to maintain indices for routing queries to those agents that have the requested information provision abilities. Specifically, the paper demonstrates through extensive performance experiments: (a) how networks of agents can be 'tuned' so as to provide requested information effectively, increasing the benefit and the efficiency of the system; (b) how different types of local knowledge (the number, local information repositories, percentage, interests and information provision abilities of acquaintances) can guide agents to effectively answer queries, balancing between efficiency and efficacy; (c) that the proposed tuning task manages to increase the efficiency of information searching and sharing in highly dynamic and large networks; and (d) that the information gathered and maintained by agents supports efficient and effective information searching and sharing: initial information about acquaintances' information provision abilities is not necessary, and a small percentage of acquaintances suffices. Further work concerns experimenting with real data and ontologies, differences in ontologies
between agents, shifts in expertise and the parallel construction of overlay structure.

7. REFERENCES
[1] Cooper, B.F., Garcia-Molina, H. Ad hoc, self-supervising peer-to-peer search networks. ACM Transactions on Information Systems, 23(2), April 2005, 169-200.
[2] Crespo, A., Garcia-Molina, H. Routing indices for peer-to-peer systems. In ICDCS, July 2002.
[3] Goldman, C., and Zilberstein, S. Decentralized control of cooperative systems: Categorization and complexity analysis. Journal of Artificial Intelligence Research, 22 (2004), 143-174.
[4] Goldman, C., and Zilberstein, S. Optimizing information exchange in cooperative multi-agent systems. In AAMAS, July 2003.
[5] Haase, P., Siebes, R., van Harmelen, F. Peer selection in peer-to-peer networks with semantic topologies. Lecture Notes in Computer Science, Springer, Volume 3226, 2004, 108-125.
[6] Haase, P., Broekstra, J., Ehrig, M., Menken, M., Mika, P., Plechawski, M., Pyszlak, P., Schnizler, B., Siebes, R., Staab, S., Tempich, C. Bibster — a semantics-based bibliographic peer-to-peer system. In ISWC 2004, 122-136.
[7] Li, Y., Bandar, Z., and McLean, D. An approach for measuring semantic similarity between words using multiple information sources. IEEE Transactions on Knowledge and Data Engineering, 15(4), 2003, 871-882.
[8] Loser, A., Staab, S., Tempich, C. Semantic social overlay networks. To appear, 2006/2007.
[9] Nejdl, W., Wolpers, M., Siberski, W., Schmitz, C., Schlosser, M., Brunkhorst, I., Loser, A. Super-peer-based routing and clustering strategies for RDF-based peer-to-peer networks. In WWW 2003, 536-543.
[10] Tempich, C., Staab, S., Wranik, A. REMINDIN': Semantic query routing in peer-to-peer networks based on social metaphors. In WWW 2004, 640-649.
[11] Xu, Y., Scerri, P., Yu, B., Lewis, M., and Sycara, K. A POMDP approach to token-based team coordination. In AAMAS 2005 (July 25-29, Utrecht), ACM Press.
[12] Xu, Y., Lewis, M., Sycara, K., and Scerri, P. Information sharing in large scale teams, 2004.
[13] Xu, Y., Liao, E., Scerri, P., Yu, B., Lewis, M., and Sycara, K.
Towards flexible coordination of large scale multi-agent systems. Springer, 2005.
[14] Xu, Y., Scerri, P., Yu, B., Okamoto, S., Lewis, M., and Sycara, K. An integrated token-based algorithm for scalable coordination. In AAMAS 2005, 407-414.
[15] Xuan, P., Lesser, V., Zilberstein, S. Communication decisions in multi-agent cooperation: Model and experiments. In Agents 2001, 616-623.
[16] Yu, B., and Singh, M. Searching social networks. In AAMAS 2003.
[17] Zhang, Y., Volz, R., Ioerger, T.R., Yen, J. A decision theoretic approach for designing proactive communication in multi-agent teamwork. In 2004, 64-71.
[18] Zhang, H., Croft, W.B., Levine, B., Lesser, V. A multi-agent approach for peer-to-peer-based information retrieval systems. In AAMAS 2004, 456-464.
[19] Zhang, H., Lesser, V. Multi-agent based peer-to-peer information retrieval systems with concurrent search sessions. In 2006, 305-31.
of cooperative agents is a hard problem in the general case: The computation of an optimal policy, when each agent possesses an approximate partial view of the state of the environment and when agents' observations and activities are interdependent (i.e. one agent's actions affect the observations and the state of an other) [3], is hard.\nThis fact, has resulted to efforts that either require agents to have a global view of the system [15], to heuristics [4], to precomputation of agents' information needs and information provision capabilities for proactive communication [17], to localized reasoning processes built on incoming information [12,13,14], and to mathematical frameworks for coordination whose optimal policies can be approximated [11] for small (sub -)\nnetworks of associated agents.\nOn the other hand, there is a lot of research on semantic peer to peer search networks and social networks [1,5,6,8,9,10,16,18,19] many of which deal with tuning a network of peers for effective information searching and sharing.\nThey do it mostly by imposing logical and semantic overlay structures.\nHowever, as far as we know there is no work that demonstrates the effectiveness of a gradual tuning process in large-scale dynamic networks that studies the impact of the information gathered by agents as more and more queries are issued and served in concurrent sessions in the network.\nThe main issue in this paper concerns ` tuning' a network of agents, each with a specific expertise, for efficient and effective information searching and sharing, without altering the topology or imposing an overlay structure via clustering, introduction of shortcut indices, or re-wiring.\n` Tuning' is the task of sharing and gathering the necessary knowledge for agents to propagate requests to the right acquaintances, minimizing the searching effort, increasing the efficiency and the benefit of the system.\nSpecifically, this paper proposes a method for information searching and sharing in 
dynamic and large scale networks, which combines routing indices with token-based methods for information sharing in large-scale multi-agent systems.\nThis paper is structured as follows: Section 2 presents related work and motivates the proposed method.\nSection 3 states the problem and section 4 presents in detail the individual techniques and the overall proposed method.\nSection 5 presents the experimental setup and results, and section 6 concludes the paper, sketching future work.\n2.\nRELATED WORK\nInformation provision and sharing can be considered to be a decentralized partially-observable Markov decision process [3,4,11,14].\nIn the general case, decentralized control of largescale dynamic systems of cooperative agents is a hard problem.\nOptimal solutions can only be approximated by means of heuristics, by relaxations of the original problem or by centralized solutions.\nThe computation of an optimal control policy is simple given that global states can be factored, that the probability of transitions and observations are independent, the observations combined determine the global state of the system and the reward function can be easily defined as the sum of local reward functions [3].\nHowever, in a large-scale dynamic system with decentralized control it is very hard for agents to possess accurate partial views of the environment, and it is even more hard for agents to possess\na global view of the environment.\nFurthermore, agents' observations cannot be assumed independent, as one agent's actions can affect the observations of others: For instance, when one agent joins\/leaves the system, then this may affect other agents' assessment of neighbours' information provision abilities.\nFurthermore, the probabilities of transitions can be dependent too; something that increases the complexity of the problem: For example, when an agent sends a query to another agent, then this may affect the state of the latter, as far as the assessed interests of the 
former are concerned.\nConsidering independent activities and observations, authors in [4] propose a decision-theoretic solution treating standard action and information exchange as explicit choices that the decision maker must make.\nThey approximate the solution using a myopic algorithm.\nTheir work differs in the one reported here in the following aspects: First, it aims at optimizing communication, while the goal here is to tune the network for effective information sharing, reducing communication and increasing system's benefit.\nSecond, the solution is approximated using a myopic algorithm, but authors do not demonstrate how suboptimal are the solutions computed (something we neither do), given their interest to the optimal solution.\nThird, they consider that transitions and observations made by agents are independent, which, as already discussed, is not true in the general case.\nLast, in contrast to their approach where agents broadcast messages, here agents decide not only when to communicate, but to whom to send a message too.\nToken based approaches are promising for scaling coordination and therefore information provision and sharing to large-scale systems effectively.\nIn [11] authors provide a mathematical framework for routing tokens, providing also an approximation to solving the original problem in case of independent agents' activities.\nThe proposed method requires a high volume of computations that authors aim to reduce by restricting its application to static logical teams of associated agents.\nIn accordance to this approach, in [12,13,14], information sharing is considered only for static networks and self-tuning of networks is not demonstrated.\nAs it will be shown in section 5, our experiments show that although these approaches can handle information sharing in dynamic networks, they require a larger amount of messages in comparison to the approach proposed here and cannot tune the network for efficient information sharing.\nProactive 
communication has been proposed in [17] as a result of a dynamic decision theoretic determination of communication strategies.\nThis approach is based on the specification of agents as \"providers\" and \"needers\": This is done by a plan-based precomputation of information needs and provision abilities of agents.\nHowever, this approach cannot scale to large and dynamic networks, as it would be highly inefficient for each agent to compute and determine its potential needs and information provision abilities given its potential interaction with 100s of other agents.\nViewing information retrieval in peer-to-peer systems from a multi-agent system perspective, the approach proposed in [18] is based on a language model of agents' documents collection.\nExploiting the models of other agents in the network, agents construct their view of the network which is being used for forming routing decisions.\nInitially, agents build their views using the models of their neighbours.\nThen, the system reorganizes by forming clusters of agents with similar content.\nClusters are being exploited during information retrieval using a kNN approach and a gradient search scheme.\nAlthough this work aims at tuning a network for efficient information provision (through reorganization), it does not demonstrate the effectiveness of the approach with respect to this issue.\nMoreover, although during reorganization and retrieval they measure the similarity of content between agents, a more fine grained approach is needed that would allow agents to measure similarities of information items or sub-collections of information items.\nHowever, it is expected that this will complicate re-organization.\nBased on their work on peer-to-peer systems, H.Zhand and V.Lesser in [19] study concurrent search sessions.\nDealing with static networks, they focus on minimizing processing and communication bottlenecks: Although we deal with concurrent search sessions, their work is orthogonal to ours, which may be 
further extended towards incorporating such features in the future.\nConsidering research in semantic peer-to-peer systems1, most of the approaches exploit what can be loosely stated a \"routing index\".\nA major question concerning information searching is \"what information has to be shared between peers, when, and what adjustments have to be made so as queries to be routed to trustworthy information sources in the most effective and efficient way\".\nREMINDIN' [10] peers gather information concerning the queries that have been answered successfully by other peers, so as to subsequently select peers to forward requests to: This is a lazy learning approach that does not involve advertisement of peer information provision abilities.\nThis results in a tuning process where the overall recall increases over time, while the number of messages per query remains about the same.\nHere, agents actively advertise their information provision abilities based on the assessed interests of their peers: This results in a much lower number of messages per query than those reported in REMINDIN'.\nIn [5,6] peers, using a common ontology, advertise their expertise, which is being exploited for the formation of a semantic overlay network: Queries are propagated in this network depending on their similarity with peers' expertise.\nIt is on the receiver's side to decide whether it shall accept or not an advertisement, based on the similarity between expertise descriptions.\nAccording to our approach, agents advertise selectively their information provision abilities about specific topics to their neighbours with similar information interests (and only to these).\nHowever, this is done as time passes and while agents' receive requests from their peers.\nThe gradual creation of overlay networks via re-wiring, shortcuts creation [1,8,16] or clustering of peers [17,9] are tuning approaches that differ fundamentally from the one proposed here: Through local interactions, we aim at tuning 
the network for efficient information provision by gathering routing information gradually, as queries are being propagated in the network and 1 General research in peer-to-peer systems concentrates either on network topologies or on distribution of documents: Approaches do not aim to optimize advertising, and search mostly requires common keys for nodes and their contents.\nThey generate a substantial overhead in highly dynamic settings, where nodes join\/leave the system.\n248 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nagents advertise their information provision abilities given the interests of their neighbours.\nGiven the success of this method, we shall study how the addition of logical paths and gradual evolution of the network topology can further increase the effectiveness of the proposed method.\n3.\nPROBLEM STATEMENT\n4.\nINFORMATION SEARCHING AND SHARING\n4.1 Overall Method\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 249\n4.2 Routing Indices\n4.3 Profiles\n250 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.4 Tuning\n5.\nEXPERIMENTAL SETUP\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 251\n252 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 253\n6.\nCONCLUSIONS\nThis paper presents a method for semantic query processing in large networks of agents that combines routing indices with information sharing methods.\nThe presented method enables agents to keep records of acquaintances' interests, to advertise their information provision abilities to those that have a high interest on them, and to maintain indices for routing queries to those agents that have the requested information provision abilities.\nSpecifically, the paper demonstrates through extensive performance experiments: (a) How networks of agents can be ` tuned' so as to provide requested information effectively, increasing the benefit and the efficiency of the system.\n(b) How different types of local knowledge (number, local information repositories, percentage, interests and information provision abilities of acquaintances) can guide agents to effectively answer queries, balancing between efficiency and efficacy.\n(c) That the proposed \"tuning\" task manages to increase the efficiency of information searching and sharing in highly dynamic and large networks.\n(d) That the information gathered and maintained by agents supports efficient and effective information searching and sharing: Initial information about acquaintances information provision abilities is not necessary and a small percentage of acquaintances suffices.\nFurther work concerns experimenting with real data and ontologies, differences in ontologies between agents, shifts in expertise and the parallel construction of overlay structure.","lvl-4":"Information Searching and Sharing in Large-Scale Dynamic Networks\nABSTRACT\nFinding the right agents in a large and dynamic network to provide the needed resources in a timely fashion, is a long standing problem.\nThis paper presents a method for information searching and sharing that combines routing indices with tokenbased methods.\nThe proposed method enables 
agents to search effectively by acquiring their neighbors' interests, advertising their information provision abilities and maintaining indices for routing queries, in an integrated way.\nSpecifically, the paper demonstrates through performance experiments how static and dynamic networks of agents can be ` tuned' to answer queries effectively as they gather evidence for the interests and information provision abilities of others, without altering the topology or imposing an overlay structure to the network of acquaintances.\n1.\nINTRODUCTION\nnetworks of associated agents.\nOn the other hand, there is a lot of research on semantic peer to peer search networks and social networks [1,5,6,8,9,10,16,18,19] many of which deal with tuning a network of peers for effective information searching and sharing.\nThey do it mostly by imposing logical and semantic overlay structures.\nHowever, as far as we know there is no work that demonstrates the effectiveness of a gradual tuning process in large-scale dynamic networks that studies the impact of the information gathered by agents as more and more queries are issued and served in concurrent sessions in the network.\nThe main issue in this paper concerns ` tuning' a network of agents, each with a specific expertise, for efficient and effective information searching and sharing, without altering the topology or imposing an overlay structure via clustering, introduction of shortcut indices, or re-wiring.\n` Tuning' is the task of sharing and gathering the necessary knowledge for agents to propagate requests to the right acquaintances, minimizing the searching effort, increasing the efficiency and the benefit of the system.\nSpecifically, this paper proposes a method for information searching and sharing in dynamic and large scale networks, which combines routing indices with token-based methods for information sharing in large-scale multi-agent systems.\nThis paper is structured as follows: Section 2 presents related work and 
motivates the proposed method.\nSection 3 states the problem and Section 4 presents in detail the individual techniques and the overall proposed method.\nSection 5 presents the experimental setup and results, and Section 6 concludes the paper, sketching future work.\n2.\nRELATED WORK\nInformation provision and sharing can be considered to be a decentralized partially-observable Markov decision process [3,4,11,14].\nIn the general case, decentralized control of large-scale dynamic systems of cooperative agents is a hard problem.\nOptimal solutions can only be approximated by means of heuristics, by relaxations of the original problem or by centralized solutions.\nHowever, in a large-scale dynamic system with decentralized control it is very hard for agents to possess accurate partial views of the environment, and it is even harder for agents to possess a global view of the environment.\nFurthermore, agents' observations cannot be assumed independent, as one agent's actions can affect the observations of others: For instance, when one agent joins\/leaves the system, this may affect other agents' assessment of neighbours' information provision abilities.\nConsidering independent activities and observations, the authors of [4] propose a decision-theoretic solution treating standard action and information exchange as explicit choices that the decision maker must make.\nThey approximate the solution using a myopic algorithm.\nTheir work differs from the one reported here in the following aspects: First, it aims at optimizing communication, while the goal here is to tune the network for effective information sharing, reducing communication and increasing the system's benefit.\nThird, they consider that transitions and observations made by agents are independent, which, as already discussed, is not true in the general case.\nLast, in contrast to their approach where agents broadcast messages, here agents decide not only when to communicate, but to whom to send a message 
too.\nToken-based approaches are promising for scaling coordination, and therefore information provision and sharing, to large-scale systems effectively.\nIn [11], the authors provide a mathematical framework for routing tokens, also providing an approximation to solving the original problem in the case of independent agents' activities.\nThe proposed method requires a high volume of computations, which the authors aim to reduce by restricting its application to static logical teams of associated agents.\nIn line with this approach, in [12,13,14], information sharing is considered only for static networks and self-tuning of networks is not demonstrated.\nAs will be shown in Section 5, our experiments show that although these approaches can handle information sharing in dynamic networks, they require a larger number of messages in comparison to the approach proposed here and cannot tune the network for efficient information sharing.\nProactive communication has been proposed in [17] as a result of a dynamic decision-theoretic determination of communication strategies.\nThis approach is based on the specification of agents as "providers" and "needers": This is done by a plan-based precomputation of information needs and provision abilities of agents.\nHowever, this approach cannot scale to large and dynamic networks, as it would be highly inefficient for each agent to compute and determine its potential needs and information provision abilities given its potential interaction with hundreds of other agents.\nViewing information retrieval in peer-to-peer systems from a multi-agent system perspective, the approach proposed in [18] is based on a language model of agents' document collections.\nExploiting the models of other agents in the network, agents construct their view of the network, which is used to form routing decisions.\nInitially, agents build their views using the models of their neighbours.\nThen, the system reorganizes by forming clusters of agents with 
similar content.\nClusters are exploited during information retrieval using a kNN approach and a gradient search scheme.\nAlthough this work aims at tuning a network for efficient information provision (through reorganization), it does not demonstrate the effectiveness of the approach with respect to this issue.\nMoreover, although during reorganization and retrieval they measure the similarity of content between agents, a more fine-grained approach is needed that would allow agents to measure similarities of information items or sub-collections of information items.\nBased on their work on peer-to-peer systems, H. Zhang and V. Lesser in [19] study concurrent search sessions.\nConsidering research in semantic peer-to-peer systems1, most of the approaches exploit what can loosely be called a "routing index".\nA major question concerning information searching is "what information has to be shared between peers, when, and what adjustments have to be made so that queries are routed to trustworthy information sources in the most effective and efficient way".\nIn REMINDIN' [10], peers gather information concerning the queries that have been answered successfully by other peers, so as to subsequently select peers to forward requests to: This is a lazy learning approach that does not involve advertisement of peer information provision abilities.\nThis results in a tuning process where the overall recall increases over time, while the number of messages per query remains about the same.\nHere, agents actively advertise their information provision abilities based on the assessed interests of their peers: This results in a much lower number of messages per query than those reported in REMINDIN'.\nIn [5,6] peers, using a common ontology, advertise their expertise, which is exploited for the formation of a semantic overlay network: Queries are propagated in this network depending on their similarity with peers' expertise.\nAccording to our approach, agents advertise 
selectively their information provision abilities about specific topics to their neighbours with similar information interests (and only to these).\nHowever, this is done as time passes and while agents receive requests from their peers.\nThey generate a substantial overhead in highly dynamic settings, where nodes join\/leave the system.\nAgents advertise their information provision abilities given the interests of their neighbours.\nGiven the success of this method, we shall study how the addition of logical paths and gradual evolution of the network topology can further increase the effectiveness of the proposed method.\n6.\nCONCLUSIONS\nThis paper presents a method for semantic query processing in large networks of agents that combines routing indices with information sharing methods.\nThe presented method enables agents to keep records of acquaintances' interests, to advertise their information provision abilities to those that have a high interest in them, and to maintain indices for routing queries to those agents that have the requested information provision abilities.\nSpecifically, the paper demonstrates through extensive performance experiments: (a) How networks of agents can be 'tuned' so as to provide requested information effectively, increasing the benefit and the efficiency of the system.\n(b) How different types of local knowledge (number, local information repositories, percentage, interests and information provision abilities of acquaintances) can guide agents to effectively answer queries, balancing between efficiency and efficacy.\n(c) That the proposed "tuning" task manages to increase the efficiency of information searching and sharing in highly dynamic and large networks.\n(d) That the information gathered and maintained by agents supports efficient and effective information searching and sharing: Initial information about acquaintances' information provision abilities is not necessary, and a small 
percentage of acquaintances suffices.\nFurther work concerns experimenting with real data and ontologies, differences in ontologies between agents, shifts in expertise and the parallel construction of overlay structure.","lvl-2":"Information Searching and Sharing in Large-Scale Dynamic Networks\nABSTRACT\nFinding the right agents in a large and dynamic network to provide the needed resources in a timely fashion is a long-standing problem.\nThis paper presents a method for information searching and sharing that combines routing indices with token-based methods.\nThe proposed method enables agents to search effectively by acquiring their neighbors' interests, advertising their information provision abilities and maintaining indices for routing queries, in an integrated way.\nSpecifically, the paper demonstrates through performance experiments how static and dynamic networks of agents can be 'tuned' to answer queries effectively as they gather evidence for the interests and information provision abilities of others, without altering the topology or imposing an overlay structure on the network of acquaintances.\n1.\nINTRODUCTION\nConsidered as a decentralized control problem, information searching and sharing in large-scale systems of cooperative agents is a hard problem in the general case: The computation of an optimal policy, when each agent possesses an approximate partial view of the state of the environment and when agents' observations and activities are interdependent (i.e. 
one agent's actions affect the observations and the state of another) [3], is hard.\nThis fact has led to efforts that either require agents to have a global view of the system [15], or resort to heuristics [4], to precomputation of agents' information needs and information provision capabilities for proactive communication [17], to localized reasoning processes built on incoming information [12,13,14], or to mathematical frameworks for coordination whose optimal policies can be approximated [11] for small (sub-)networks of associated agents.\nOn the other hand, there is a lot of research on semantic peer-to-peer search networks and social networks [1,5,6,8,9,10,16,18,19], many of which deal with tuning a network of peers for effective information searching and sharing.\nThey do it mostly by imposing logical and semantic overlay structures.\nHowever, as far as we know, there is no work that demonstrates the effectiveness of a gradual tuning process in large-scale dynamic networks that studies the impact of the information gathered by agents as more and more queries are issued and served in concurrent sessions in the network.\nThe main issue in this paper concerns 'tuning' a network of agents, each with a specific expertise, for efficient and effective information searching and sharing, without altering the topology or imposing an overlay structure via clustering, introduction of shortcut indices, or re-wiring.\n'Tuning' is the task of sharing and gathering the necessary knowledge for agents to propagate requests to the right acquaintances, minimizing the searching effort, increasing the efficiency and the benefit of the system.\nSpecifically, this paper proposes a method for information searching and sharing in dynamic and large-scale networks, which combines routing indices with token-based methods for information sharing in large-scale multi-agent systems.\nThis paper is structured as follows: Section 2 presents related work and motivates the proposed 
method.\nSection 3 states the problem and Section 4 presents in detail the individual techniques and the overall proposed method.\nSection 5 presents the experimental setup and results, and Section 6 concludes the paper, sketching future work.\n2.\nRELATED WORK\nInformation provision and sharing can be considered to be a decentralized partially-observable Markov decision process [3,4,11,14].\nIn the general case, decentralized control of large-scale dynamic systems of cooperative agents is a hard problem.\nOptimal solutions can only be approximated by means of heuristics, by relaxations of the original problem or by centralized solutions.\nThe computation of an optimal control policy is simple given that global states can be factored, that the probabilities of transitions and observations are independent, that the observations combined determine the global state of the system, and that the reward function can be easily defined as the sum of local reward functions [3].\nHowever, in a large-scale dynamic system with decentralized control it is very hard for agents to possess accurate partial views of the environment, and it is even harder for agents to possess a global view of the environment.\nFurthermore, agents' observations cannot be assumed independent, as one agent's actions can affect the observations of others: For instance, when one agent joins\/leaves the system, this may affect other agents' assessment of neighbours' information provision abilities.\nFurthermore, the probabilities of transitions can be dependent too, which increases the complexity of the problem: For example, when an agent sends a query to another agent, this may affect the state of the latter, as far as the assessed interests of the former are concerned.\nConsidering independent activities and observations, the authors of [4] propose a decision-theoretic solution treating standard action and information exchange as explicit choices that the decision maker must make.\nThey approximate 
the solution using a myopic algorithm.\nTheir work differs from the one reported here in the following aspects: First, it aims at optimizing communication, while the goal here is to tune the network for effective information sharing, reducing communication and increasing the system's benefit.\nSecond, the solution is approximated using a myopic algorithm, but the authors do not demonstrate how suboptimal the computed solutions are (something we do not do either), given their interest in the optimal solution.\nThird, they consider that transitions and observations made by agents are independent, which, as already discussed, is not true in the general case.\nLast, in contrast to their approach where agents broadcast messages, here agents decide not only when to communicate, but to whom to send a message too.\nToken-based approaches are promising for scaling coordination, and therefore information provision and sharing, to large-scale systems effectively.\nIn [11], the authors provide a mathematical framework for routing tokens, also providing an approximation to solving the original problem in the case of independent agents' activities.\nThe proposed method requires a high volume of computations, which the authors aim to reduce by restricting its application to static logical teams of associated agents.\nIn line with this approach, in [12,13,14], information sharing is considered only for static networks and self-tuning of networks is not demonstrated.\nAs will be shown in Section 5, our experiments show that although these approaches can handle information sharing in dynamic networks, they require a larger number of messages in comparison to the approach proposed here and cannot tune the network for efficient information sharing.\nProactive communication has been proposed in [17] as a result of a dynamic decision-theoretic determination of communication strategies.\nThis approach is based on the specification of agents as "providers" and "needers": This is done by a plan-based 
precomputation of information needs and provision abilities of agents.\nHowever, this approach cannot scale to large and dynamic networks, as it would be highly inefficient for each agent to compute and determine its potential needs and information provision abilities given its potential interaction with hundreds of other agents.\nViewing information retrieval in peer-to-peer systems from a multi-agent system perspective, the approach proposed in [18] is based on a language model of agents' document collections.\nExploiting the models of other agents in the network, agents construct their view of the network, which is used to form routing decisions.\nInitially, agents build their views using the models of their neighbours.\nThen, the system reorganizes by forming clusters of agents with similar content.\nClusters are exploited during information retrieval using a kNN approach and a gradient search scheme.\nAlthough this work aims at tuning a network for efficient information provision (through reorganization), it does not demonstrate the effectiveness of the approach with respect to this issue.\nMoreover, although during reorganization and retrieval they measure the similarity of content between agents, a more fine-grained approach is needed that would allow agents to measure similarities of information items or sub-collections of information items.\nHowever, it is expected that this will complicate re-organization.\nBased on their work on peer-to-peer systems, H. Zhang and V. Lesser in [19] study concurrent search sessions.\nDealing with static networks, they focus on minimizing processing and communication bottlenecks: Although we deal with concurrent search sessions, their work is orthogonal to ours, which may be further extended towards incorporating such features in the future.\nConsidering research in semantic peer-to-peer systems1, most of the approaches exploit what can loosely be called a "routing index".\nA major question concerning information 
searching is "what information has to be shared between peers, when, and what adjustments have to be made so that queries are routed to trustworthy information sources in the most effective and efficient way".\nIn REMINDIN' [10], peers gather information concerning the queries that have been answered successfully by other peers, so as to subsequently select peers to forward requests to: This is a lazy learning approach that does not involve advertisement of peer information provision abilities.\nThis results in a tuning process where the overall recall increases over time, while the number of messages per query remains about the same.\nHere, agents actively advertise their information provision abilities based on the assessed interests of their peers: This results in a much lower number of messages per query than those reported in REMINDIN'.\nIn [5,6] peers, using a common ontology, advertise their expertise, which is exploited for the formation of a semantic overlay network: Queries are propagated in this network depending on their similarity with peers' expertise.\nIt is up to the receiver to decide whether or not to accept an advertisement, based on the similarity between expertise descriptions.\nAccording to our approach, agents advertise their information provision abilities about specific topics selectively, to their neighbours with similar information interests (and only to these).\nHowever, this is done as time passes and while agents receive requests from their peers.\nThe gradual creation of overlay networks via re-wiring, shortcut creation [1,8,16] or clustering of peers [17,9] are tuning approaches that differ fundamentally from the one proposed here: They generate a substantial overhead in highly dynamic settings, where nodes join\/leave the system.\nThrough local interactions, we aim at tuning the network for efficient information provision by gathering routing information gradually, as queries are being propagated in the network and agents advertise their information provision abilities given the interests of their neighbours.\n1 General research in peer-to-peer systems concentrates either on network topologies or on distribution of documents: Approaches do not aim to optimize advertising, and search mostly requires common keys for nodes and their contents.\nGiven the success of this method, we shall study how the addition of logical paths and gradual evolution of the network topology can further increase the effectiveness of the proposed method.\n3.\nPROBLEM STATEMENT\nLet N = {A1, A2, ..., An} be the set of agents in the system.\nThe network of agents is modelled as a graph G = (N, E), where N is the set of agents and E is a set of bidirectional edges denoted as non-ordered pairs (Ai, Aj).\nThe neighbourhood of an agent Ai includes all the one-hop-away agents (i.e. its acquaintance agents) Aj such that (Ai, Aj) ∈ E.\nThe set of acquaintances of Ai is denoted by N(Ai).\nEach agent maintains (a) an ontology that represents categories of information, (b) indices of information pieces available in its local database and to other agents, and (c) a profile model for some of its acquaintances.\nIndices and profile models are described in detail in Section 4.\nOntology concepts represent categories that classify the information pieces available.\nIt is assumed that agents in the network share the same ontology, but each agent has a set of information items in its local repository, which are classified under the concepts of its expertise.\nThe set of concepts is denoted by C.\nIt is assumed that the sets of items in agents' local repositories are non-overlapping.\nFinally, it is assumed that there is a set of queries T = {t1, ..., tk}.\nEach query is represented by a tuple <id, a, c, path, ttl>, where id is the unique identity of the query, a is a non-negative integer representing the maximum number of 
information pieces requested, c is the specific category to which these pieces must belong, path is the path in the network of agents through which the query has been propagated (initially it contains the originator of the query, and each agent appends its id to the path before propagating the query), and ttl is a positive integer that specifies the maximum number of hops that the query can travel.\nIn case this limit is exceeded and the corresponding number of information pieces has not been found, the query is considered "unfulfilled".\nHowever, even in this case, a (possibly high) percentage of the requested pieces of information may have been found.\nThe problem that this article deals with is as follows: Given a network of agents G = (N, E) and a set of queries T, agents must retrieve the pieces of information requested by the queries, in concurrent search sessions, and further 'tune' the network so as to answer future similar queries in the most effective and efficient way, increasing the benefit of the system and reducing the number of communication messages required.\nThe benefit of the system is the ratio of the number of information pieces retrieved to the number of information pieces requested.\nThe efficiency of the system is measured by the number of messages needed for searching and for updating the indices and profiles maintained.\n'Tuning' the network requires agents to acquire the necessary information about acquaintances' interests and information provision abilities (i.e. the routing and profiling tuples detailed in Section 4), so as to route queries and further share information in the most efficient way.\nThis must be done seamlessly with searching: i.e. 
agents in the network must share\/acquire the necessary information while searching, increasing the benefit and efficiency gradually, as more queries are posed.\n4.\nINFORMATION SEARCHING AND SHARING\n4.1 Overall Method\nGiven a network G = (N, E) of agents and a set of queries T, each agent maintains indices for routing queries to the "right agents", as well as acquaintances' profiles for advertising its information provision abilities to those interested.\nTo capture information about pieces of information accessible by the agents, each agent A maintains a routing index that is realized as a set of tuples of the form <Ai, c, s>.\nEach such tuple specifies the number s of information items in category c that can be reached by A through Ai, such that Ai ∈ N(A) ∪ {A}.\nThis specifies the information provision abilities of Ai to A with respect to the information category c.\nAs can be noticed, each tuple corresponds either to the agent A itself (specifying the pieces of information classified under c available in its local repository) or to an acquaintance of the agent (recording the pieces of information in category c available to the acquaintance agent and to agents that can be reached through this acquaintance).\nThe routing index is exploited for the propagation of queries to the "right" agents: those that are either more likely to provide answers or that know someone that can provide the requested pieces of information.\nFigure 1.\nTypical pattern for information sharing between two acquaintances (numbers show the sequence of tasks).\nConsidering an agent Ai, the profile model of one of its acquaintances Aj, denoted by Pij, is a set of tuples <Aj, c, p> maintained by Ai.\nSuch a tuple specifies the probability p that the acquaintance Aj is interested in pieces of information in category c; subsequently, this probability is also denoted by pc(i,j).\nFormally, the profile model of an acquaintance Aj of Ai is Pij = {<Aj, c, pc(i,j)> | Aj ∈ N(Ai) and c ∈ 
C}.\nProfile models are exploited by the agents to decide where to 'advertise' their information provision abilities.\nGiven two acquaintances Ai and Aj in G, the information searching and sharing process proceeds as depicted in Figure 1: Initially, each agent has no knowledge about the information provision abilities of its acquaintances, and it possesses no information about their interests either.\nWhen the query <id, a, c, path, ttl> is sent to Ai from the agent Aj, then Ai has to update the profile of Aj concerning the category c, increasing the probability pc(i,j) that Aj is interested in information in c.\nWhen this probability is greater than a threshold value (due to the queries about c that Aj has sent to Ai), then Ai assesses that it is highly probable for Aj to be interested in information in category c.\nThis leads Ai to inform Aj about its information provision abilities as far as the category c is concerned.\nThis information is used by Aj to update its index about Ai.\nThis index is exploited by Aj to further propagate queries, and it is further propagated to those interested in c. 
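The exchange just described (profile update on an incoming query, advertisement once the assessed interest crosses a threshold, and the recipient's routing-index update) can be sketched as follows. This is a minimal sketch, not the paper's implementation: the class and method names, the threshold value of 0.5, and the simple additive profile bump are illustrative assumptions standing in for the Bayes-rule update the paper applies.

```python
from collections import defaultdict

# Assumed threshold; the paper only requires "greater than a threshold value".
THRESHOLD = 0.5

class Agent:
    """Illustrative sketch of the Figure-1 exchange between acquaintances Ai and Aj."""

    def __init__(self, name):
        self.name = name
        self.local = defaultdict(int)        # category c -> items in local repository
        self.routing_index = {}              # (acquaintance, c) -> reachable count s
        self.profiles = defaultdict(float)   # (acquaintance, c) -> interest probability pc(i,j)

    def provision_ability(self, c):
        # Items reachable through this agent: local items plus everything indexed for c.
        return self.local[c] + sum(s for (_, cat), s in self.routing_index.items() if cat == c)

    def on_query(self, sender, c):
        # (1) A query about c from `sender` raises its assessed interest in c.
        #     A plain additive bump stands in for the paper's Bayes-rule update.
        p = min(1.0, self.profiles[(sender.name, c)] + 0.2)
        self.profiles[(sender.name, c)] = p
        # (2) Once the sender's interest exceeds the threshold, advertise abilities for c.
        if p > THRESHOLD:
            sender.record_advertisement(self.name, c, self.provision_ability(c))

    def record_advertisement(self, advertiser, c, s):
        # (3) The interested acquaintance updates its routing index for the advertiser.
        self.routing_index[(advertiser, c)] = s

a_i, a_j = Agent("Ai"), Agent("Aj")
a_i.local["music"] = 7
for _ in range(3):            # repeated queries about "music" from Aj ...
    a_i.on_query(a_j, "music")
# ... push the assessed interest over the threshold, so Ai advertises
# and Aj can now route "music" queries through Ai.
```

With the placeholder bump, the third query pushes Ai's assessment of Aj's interest above the 0.5 threshold, which triggers the advertisement and fills Aj's routing index.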
Moreover, the profile of Aj maintained by Ai guides Ai to propagate changes concerning its information provision abilities to Aj.\nThe above method has the following features: (a) It combines routing indices and token-based information sharing techniques for efficient information searching and sharing, without imposing an overlay network structure.\n(b) It can be used by agents to adapt safely and effectively to dynamic networks.\n(c) It supports the acquisition and exploitation of different types of locally available information for the 'tuning' process.\n(d) It extends the token-based method for information sharing (as it was originally proposed in [12,13]) in two respects: First, to deal with categories of information represented by means of ontology concepts and not with specific pieces of information, and second, to guide agents to advertise information that is semantically similar to the information requested, by using a semantic similarity measure between information categories.\nTherefore, it paves the way for the use of token-based methods for semantic peer-to-peer systems.\nThis is further described in Section 4.3.\n(e) It provides a more sophisticated way for agents to update routing indices than that originally proposed in [2].\nThis is done by gathering and exploiting acquaintances' profiles for effective information sharing, avoiding unnecessary and cyclic updates that may result in misleading information about agents' information provision abilities.\nThis is further described in the next sub-section.\n4.2 Routing Indices\nAs already specified, given a network of agents G = (N, E) and the set N(A) of agent A's acquaintances, the routing index (RI) of A, denoted by RI(A), is a collection of at most |C| · |N(A) ∪ {A}| indexing tuples <Ai, c, s>.\nThe key idea is that, given such an index and a request concerning c, A will forward this request to Ak if the resources available via Ak (i.e. 
the information abilities of Ak to A) can best serve this request.\nTo compute the information abilities of Ak to A, all tuples concerning all agents in N(Ak) - {A} must be aggregated.\nCrespo and Garcia-Molina [2] examine various types of aggregations.\nIn this paper, given some tuples <A1, c, s1>, ..., <Am, c, sm> maintained by the agent Ak, their aggregation is the tuple <Ak, c, s1 + ... + sm>.\nThis gives information concerning the pieces of information that can be provided through Ak, but it does not distinguish what each of Ak's acquaintances can provide: This is an inherent feature of routing indices.\nWithout considering the interests of its acquaintances, Ak may compute aggregations concerning the agents in N(Ak) ∪ {Ak} - {Ai} and advertise\/share its information provision abilities to each agent Ai in N(Ak).\nFor instance, given the network configuration depicted in Figure 2 and a category c, agent Ak sends the aggregation of the tuples concerning the agents in N(Ak) ∪ {Ak} - {A2} (denoted as aggregation(Ak, A1, c)) to agent A2, which records the resulting tuple for Ak.\nSimilarly, the aggregation of the tuples concerning the agents in N(Ak) ∪ {Ak} - {A1} (denoted as aggregation(Ak, A2, c)) is sent to the agent A1, which also records the resulting tuple for Ak.\nIt must be noticed that A1 and A2 record the information provision abilities of Ak each from "its own point of view".\nEvery time the tuple that models the information provision abilities of an agent changes, the aggregations have to be re-computed and sent to the appropriate neighbours in the way described above.\nThen, its neighbours have to propagate these updates to their acquaintances, and so on.\nFigure 2.\nAggregating and sharing information provision indices.\nRouting indices may be misleading and lead to inefficiency in arbitrary graphs containing cycles.\nThe exploitation of acquaintances' profiles can provide solutions to these deficiencies.\nEach agent propagates its information provision abilities concerning a category c only to those 
acquaintances that have a high interest in this category.\nAs has been mentioned, an agent "expresses" its interest in a category by propagating queries about it.\nTherefore, indices concerning a category c are propagated in the inverse direction along the paths through which queries about c are propagated.\nIndices are propagated as long as agents in the path have a high interest in c. Queries cannot be propagated in a cyclic fashion, since an agent serves and propagates only queries that it has not served at a previous time point.\nTherefore, due to their relation to queries, indices are not propagated in a cyclic fashion either.\nHowever, there is still a specific case where cycles cannot be avoided.\nSuch a case is shown in Figure 3:\nFigure 3.\nCyclic pattern for the sharing of indices.\nWhile the propagation of the query q' causes the propagation of the information provision abilities of agents in a non-cyclic way (since the agent A recognizes that q' has been served), the query q causes the propagation of the information abilities of A to other agents in the network, causing, in conjunction with the propagation of indices due to q', a cyclic update of indices.\n4.3 Profiles\nThe key assumption behind the exploitation of acquaintances' profiles, as it was originally proposed in [12,13], is that for an agent to pass on a specific information item, this agent must have a high interest in it or in related information.\nAs already said, in our case, acquaintances' profiles are created based on received queries and specify the interests of acquaintances in specific information categories.\nGiven the query <id, a, c, path, ttl> sent from Aj to Ai, Ai has to record not only the interest of Aj in c, but 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nalso the interest of Aj in all the related classes, given their semantic similarity to c. To measure the similarity between two ontology classes we use the similarity function sim: C × C → [0,1] [7]:\nsim(ck, cl) = e^(-α·l) · (e^(β·h) - e^(-β·h)) \/ (e^(β·h) + e^(-β·h))\nwhere l is the length of the shortest path between ck and cl in the graph spanned by the sub-concept relation and h is the minimal level in the hierarchy of either ck or cl. α and β are parameters scaling the contribution of the shortest path length l and the depth h, respectively.\nBased on previous works we choose α = 0.2 and β = 0.6 as optimal values.\nIt must be noticed that we measure similarity between sub-concepts, assigning a very low similarity value to concepts that are not related by the sub-concept relation.\nThis is because each query about information in category ck can be answered by information in any sub-category of ck that is close enough to ck. Given a threshold value of 0.3, sim(ck, cl) ≥ 0.3 indicates that an agent interested in ck is also interested in cl, while sim(ck, cl) < 0.3 indicates that an agent interested in ck is unlikely to be interested in cl.\nThis threshold value was chosen after some empirical experiments with ontologies.\nThe update of Ai's assessment of Aj's interest in a category, based on an incoming query <id, a, c, path, ttl> from Aj, is computed by leveraging Bayes' rule as in [12,13].\nAccording to the first case of the update equation, the probability that the agent that has propagated a query about c is interested in information in a category c' is updated based on the similarity between c and c'.\nThe second case updates the interests of agents other than the requesting one, in a way that ensures that normalization works.\nIt must be noticed that, in contrast to [12,13], the computation has been changed in favour of the agent that passed the query.\nThe profiles of acquaintances enable an agent to decide which advertisements should be sent and to whom.\nSpecifically, for each Aj ∈ N (Ai)
and each category c for which Aj's assessed interest in c is greater than a threshold value (currently set to 0.5), the agent Ai aggregates the tuples (Ak, c, s) of each agent Ak ∈ N (Ai) ∪ {Ai} - {Aj} and sends the resulting tuple (Ai, c, s') to Aj. Also, given a high assessed interest of Aj in c, when an index entry concerning c changes (e.g. due to a change in Ai's local repository, or because the set of its acquaintances changed), Ai sends the updated aggregated index entry to Aj. In this way, the agent Aj, which is highly interested in pieces of information in category c, updates its index so as to become aware of the information provision abilities of Ai as far as the category c is concerned.\n4.4 Tuning\nTuning is performed seamlessly with searching: as agents propagate queries to be served, their profiles are updated by their acquaintances.\nAs profiles are updated, agents receive the aggregated indices of their acquaintances, becoming aware of their information provision abilities in the information categories in which they are probably interested.\nGiven these indices, agents further propagate queries to the acquaintances that are most likely to serve them, and so on.\nConcerning the routing index and the profiles maintained by an agent A, it must be pointed out that A does not need to record all possible tuples, i.e.
|C| × |N (A) ∪ {A}| of them: it records only those that are of particular interest for searching and sharing information, depending on its own expertise and interests and those of its acquaintances.\nInitially, agents do not possess profiles of their acquaintances.\nFor indices there are two alternatives: either agents do not initially possess any information about acquaintances' local repositories (this is the \"no initialization of indices\" case), or they do (this is the \"initialization of indices\" case).\nGiven a query, agents propagate this query to those acquaintances that have the highest information provision abilities.\nIn the \"no initialization of indices\" case, where an agent does not initially possess information about its acquaintances' abilities, it may initially propagate a query to all of them, resulting in a pure flooding approach; or it may propagate the query randomly to a percentage of them.\nIn the \"initialization of indices\" case, where an agent initially possesses information about its acquaintances' local repositories, it can propagate queries to all or to a percentage of those that can best serve the request.\nWe considered both cases in our experiments.\nGiven a static setting where agents do not shift their expertise and the distribution of information pieces does not change, the network will eventually reach a state where no information concerning agents' information provision abilities will need to be propagated and no agents' profiles will need to be updated: queries will be propagated only to those agents that lead to a near-to-the-maximum benefit of the system in a very efficient way.\nIn a dynamic setting, agents may shift their expertise or their interests, they may leave the network at will, or the network may welcome new agents that join and bring new information provision abilities, new interests and new types of queries.\nIn this paper we study settings where agents may leave or join the network.\nThis requires agents to adapt safely and
effectively.\nTowards this goal, in case an agent does not receive a reply from one of its acquaintances within a given time interval, it retracts all the indices and the profile concerning the missing acquaintance and re-propagates to other agents the queries that have been sent to the missing agent since the last successful handshake.\nIn case a new agent joins the network, the acquaintances that become aware of its presence propagate to the newcomer all the queries that they have processed in the last Q time points (currently Q is set to 6).\nThis is done so as to inform the newcomer about their interests and initiate information sharing.\n5.\nEXPERIMENTAL SETUP\nTo validate the proposed approach we have built a prototype that simulates large networks.\nTo test the scalability of our approach we have run several experiments with various types of networks.\nHere we present results from 3 network types with |N| = 100, |N| = 500 and |N| = 1000 agents that provide representative cases.\nNetworks are constructed by randomly distributing |N| agents in an n × n area, each with a \"visibility\" ratio equal to r.\nThe acquaintances of an agent are those that are \"visible\" to the agent and those from which the agent is visible (since edges in the network are bidirectional).\nDetails about the networks are given in Table 1.\nThe column avg (|N (A)|) shows the average number of acquaintances per agent in the network and the column |T| shows the number of queries per network type.\nIt must be noticed that the TypeA network is more \"dense\" than the others, which are much larger than it.\nEach experiment ran 40 times.\nIn each run the network is provided with a new set of randomly generated queries originating from randomly chosen agents.\nThe agents search and gather knowledge that they further use and enrich, tuning the network gradually, run by run.\nEach
run lasts a number of rounds that depends on the ttl of queries and on the parameters that determine the dynamics of the network: to end a run, all queries must have either been \"served\" (i.e. 100% of the information items requested must have been found) or been left \"unfulfilled\" (i.e. have exceeded their ttl).\nIt must be noticed that in a dynamic setting this ending criterion causes some of the queries to be \"lost\".\nThis happens when the only remaining \"active\" queries have been propagated to agents that left the network without their acquaintances being aware of it.\nThe information used in the experiments is synthetic and is classified into 15 distinct categories: each agent's expertise comprises a unique information category.\nFor the category in its expertise each agent holds at most 1000 information pieces, the exact number of which is determined randomly.\nAt each run a constant number of queries is generated, depending on the type of network used (last column in Table 1).\nAt each run, each query is randomly assigned to an originator agent and is set to request a random number of information items, classified in a sub-category of the query-originator agent's expertise.\nThis sub-category is chosen at random and the requested items number fewer than 6000.\nThe ttl for any query is set to 6.\nIn such a setting, the demand for information items is much higher than the agents' information provision abilities, given the ttl of queries: the maximum benefit in any experimental case is much less than 60% (this was done so as to challenge the `tuning' task in settings where queries cannot be served in the first hop or after 2-3 hops).\nGiven that agents are initially not aware of acquaintances' local repositories (the \"no initialization of indices\" case), we have run several evaluation experiments for each network type depending on the percentage of acquaintances to which a query can be
propagated by an agent.\nThese types of experiments are denoted by TypeX-Y, where X denotes the type of network and Y the percentage of acquaintances: here we present results for Y equal to 10, 20 or 50.\nFor instance, TypeA-10 denotes a setting with a network of TypeA where each query is propagated to at most 10% of an agent's acquaintances.\nFigure 4.\nResults for static networks as agents gather information about acquaintances' abilities and interests.\nThe exact number of acquaintances is randomly chosen per agent and queries are propagated only to those acquaintances that are likely to best serve the request.\nFigures 4 and 5 show experiments for static and dynamic networks of TypeA-10 (a dense network with a low percentage of acquaintances), TypeB-20 (a quite dense network with a low percentage of acquaintances), with and without initialization of indices, and TypeC-50 (a less dense network with a quite high percentage of acquaintances).\nTo demonstrate the advantages of our method we have also considered networks without routing indices for the TypeC-50 and TypeB-20 networks: agents in these networks, similarly to [12,13], share information concerning their local repositories based on their assessments of acquaintances' interests.\nFigure 5.\nResults for dynamic networks as agents gather information about acquaintances' abilities and interests.\nThe results computed in each experiment show the number of query-propagation messages (q-messages), the number of messages for the update of indices (i-messages), the benefit of the system, i.e. the average ratio of the number of information pieces provided to the number of pieces requested per query, and the message gain, i.e.
the ratio of the benefit to the total number of messages.\nThe horizontal axis in each diagram corresponds to the runs.\nAs shown in Figure 4, as agents search and share information from run 1 to run 40, they manage to increase the benefit of the system while drastically reducing the number of messages.\nAlso (not shown here due to space reasons), the number of unfulfilled queries decreases, while the number of served queries increases gradually.\nExperiments show: (a) an effective tuning of the networks as time passes and more queries are posed to the network, even if agents maintain the models of only a small percentage of their acquaintances; and (b) that `tuning' can greatly facilitate the scalability of the information searching and sharing tasks in networks.\nTo show whether initial knowledge about acquaintances' local repositories (the \"initialization of indices\" case) affects the effective tuning of the network, we provide representative results from the TypeB-20 network.\nAs shown in Figure 4, the tuning task in this case does not manage to achieve the benefit of the system reported for the \"no initialization of indices\" case.\nOn the other hand, while tuning affects the i-messages drastically, the q-messages are not affected in the same way: the q-messages in the \"initialization of indices\" case are fewer than those in the TypeB-20 \"no initialization of indices\" case.\nThis is shown more clearly by the message gain of the two approaches: the message gain of the TypeB-20 \"initialization of indices\" case is higher than the message gain of the TypeB-20 experiment with \"no initialization of indices\".\nTherefore, initial knowledge concerning the local information of acquaintances can be used for guiding searching and tuning at the initial stages of the tuning task only if we need to gain efficiency (i.e. decrease the number of required messages) at the cost of losing effectiveness (i.e.
have a lower benefit): this is due to the fact that, as agents possess information about acquaintances' local repositories, the \"tuning\" process enables the further exchange of messages concerning agents' information provision abilities only in cases where agents' profiles provide evidence for such a need.\nHowever, initial information about acquaintances' local repositories may mislead the searching process, resulting in low benefit.\nIn case we need to gain effectiveness at the cost of reducing efficiency, this type of local knowledge does not suffice.\nConsidering also the information sharing method without routing indices (the \"without RIs\" cases), we can see that for static networks it requires more q-messages without managing to \"tune\" the system, while the benefit is nearly the same as the one reported by our method.\nThis is shown clearly in the \"message gain\" diagrams in Figure 4.\nFigure 5 provides results for dynamic networks.\nThese are results from a particular representative case of our experiments where more than 25% of the (randomly chosen) nodes leave the network in each run during the experiment.\nAfter a random number of
rounds, a new node may replace the one that left.\nThis newcomer has no information about the network.\nApproximately 25% of the nodes that leave the network are not replaced for 50% of the experiment, and approximately 50% are not replaced for more than 35% of the experiment.\nIn such a highly dynamic setting with very scarce information resources distributed in the network, as Figure 5 shows, the tuning approach has managed to keep the benefit at acceptable levels, while still drastically reducing the number of i-messages.\nHowever, as can be expected, this reduction is not as drastic as it was in the corresponding static cases.\nFigure 5 shows that the message gain for the dynamic case is comparable to the message gain for the corresponding (TypeC-50) static case, which proves the value of this approach for dynamic settings.\nThe comparison to the case where no routing indices are exploited reveals the same results as in the static case, at the cost of a large number of messages.\nFinally, it must be pointed out that the maximum number of messages per query required by the proposed method is nearly 12, which is less than that reported by other efforts.\n6.\nCONCLUSIONS\nThis paper presents a method for semantic query processing in large networks of agents that combines routing indices with information sharing methods.\nThe presented method enables agents to keep records of acquaintances' interests, to advertise their information provision abilities to those that have a high interest in them, and to maintain indices for routing queries to those agents that have the requested information provision abilities.\nSpecifically, the paper demonstrates through extensive performance experiments: (a) How networks of agents can be `tuned' so as to provide requested information effectively, increasing the benefit and the efficiency of the system.\n(b) How different types of local knowledge (number,
local information repositories, percentage, interests and information provision abilities of acquaintances) can guide agents to effectively answer queries, balancing between efficiency and efficacy.\n(c) That the proposed \"tuning\" task manages to increase the efficiency of information searching and sharing in highly dynamic and large networks.\n(d) That the information gathered and maintained by agents supports efficient and effective information searching and sharing: Initial information about acquaintances' information provision abilities is not necessary and a small percentage of acquaintances suffices.\nFurther work concerns experimenting with real data and ontologies, differences in ontologies between agents, shifts in expertise and the parallel construction of overlay structure.","keyphrases":["inform search and share","perform","social network","cooper agent","peer to peer search network","peer-to-peer system","dynam and larg scale network","decentr partial-observ markov decis process","decentr control","myopic algorithm","knn approach","gradient search scheme","artifici social system","scalabl","robust","depend"],"prmu":["P","P","M","M","M","U","M","U","U","U","U","M","U","U","U","U"]} {"id":"I-29","title":"Distributed Management of Flexible Times Schedules","abstract":"We consider the problem of managing schedules in an uncertain, distributed environment. We assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution. The goal is to maximize the joint quality obtained from the activities executed by all agents, given that, during execution, unexpected events will force changes to some prescribed activities and reduce the utility of executing others. 
We describe an agent architecture for solving this problem that couples two basic mechanisms: (1) a flexible times representation of the agent's schedule (using a Simple Temporal Network) and (2) an incremental rescheduling procedure. The former hedges against temporal uncertainty by allowing execution to proceed from a set of feasible solutions, and the latter acts to revise the agent's schedule when execution is forced outside of this set of solutions or when execution events reduce the expected value of this feasible solution set. Basic coordination with other agents is achieved simply by communicating schedule changes to those agents with inter-dependent activities. Then, as time permits, the core local problem solving infra-structure is used to drive an inter-agent option generation and query process, aimed at identifying opportunities for solution improvement through joint change. Using a simulator to model the environment, we compare the performance of our multi-agent system with that of an expected optimal (but non-scalable) centralized MDP solver.","lvl-1":"Distributed Management of Flexible Times Schedules Stephen F. 
Smith, Anthony Gallagher, Terry Zimmerman, Laura Barbulescu, Zachary Rubinstein The Robotics Institute, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh PA 15024 {sfs,anthonyg,wizim,laurabar,zbr}@cs.cmu.edu ABSTRACT We consider the problem of managing schedules in an uncertain, distributed environment.\nWe assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution.\nThe goal is to maximize the joint quality obtained from the activities executed by all agents, given that, during execution, unexpected events will force changes to some prescribed activities and reduce the utility of executing others.\nWe describe an agent architecture for solving this problem that couples two basic mechanisms: (1) a flexible times representation of the agent's schedule (using a Simple Temporal Network) and (2) an incremental rescheduling procedure.\nThe former hedges against temporal uncertainty by allowing execution to proceed from a set of feasible solutions, and the latter acts to revise the agent's schedule when execution is forced outside of this set of solutions or when execution events reduce the expected value of this feasible solution set.\nBasic coordination with other agents is achieved simply by communicating schedule changes to those agents with inter-dependent activities.\nThen, as time permits, the core local problem solving infra-structure is used to drive an inter-agent option generation and query process, aimed at identifying opportunities for solution improvement through joint change.\nUsing a simulator to model the environment, we compare the performance of our multi-agent system with that of an expected optimal (but non-scalable) centralized MDP solver.\nCategories and Subject Descriptors I.2.11 [Computing Methodologies]: Artificial Intelligence - Distributed Artificial Intelligence General Terms Algorithms, Design 
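A minimal sketch of the flexible-times idea named in the abstract, assuming a schedule stored as per-activity start-time windows; the function name and data shapes are illustrative, not the paper's implementation (which drives execution from a full Simple Temporal Network):

```python
def dispatch(windows, durations, t0=0):
    """Greedily execute a flexible-times schedule.

    `windows` is a list of (earliest_start, latest_start) pairs and
    `durations` the corresponding activity durations.  Each activity
    starts at the later of its earliest start and the previous finish;
    if that start exceeds the latest-start bound, execution has left
    the feasible solution set and rescheduling is signalled by
    returning None.
    """
    t, starts = t0, []
    for (est, lst), d in zip(windows, durations):
        start = max(t, est)
        if start > lst:
            return None  # outside the feasible set: reschedule
        starts.append(start)
        t = start + d
    return starts
```

A delay that stays inside the windows is absorbed without invoking the scheduler; only an excursion outside them forces revision, which is the coupling the abstract describes.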
1.\nINTRODUCTION\nThe practical constraints of many application environments require distributed management of executing plans and schedules.\nSuch factors as geographical separation of executing agents, limitations on communication bandwidth, constraints relating to chain of command and the high tempo of execution dynamics may all preclude any single agent from obtaining a complete global view of the problem, and hence necessitate collaborative yet localized planning and scheduling decisions.\nIn this paper, we consider the problem of managing and executing schedules in an uncertain and distributed environment as defined by the DARPA Coordinators program.\nWe assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution.\nThe team goal is to maximize the total quality of all activities executed by all agents, given that unexpected events will force changes to pre-scheduled activities and alter the utility of executing others as execution unfolds.\nTo provide a basis for distributed coordination, each agent is aware of dependencies between its scheduled activities and those of other agents.\nEach agent is also given a pre-computed set of local contingency (fall-back) options.\nCentral to our approach to solving this multi-agent problem is an incremental flexible-times scheduling framework.\nIn a flexible-times representation of an agent's schedule, the execution intervals associated with scheduled activities are not fixed, but instead are allowed to float within imposed time and activity sequencing constraints.\nThis representation allows the explicit use of slack as a hedge against simple forms of executional uncertainty (e.g., activity durations), and its underlying implementation as a Simple Temporal Network (STN) model provides efficient updating and consistency enforcement mechanisms.\nThe advantages of flexible times frameworks have
been demonstrated in various centralized planning and scheduling contexts (e.g., [12, 8, 9, 10, 11]).\nHowever, their use in distributed problem solving settings has been quite sparse ([7] is one exception), and prior approaches to multi-agent scheduling (e.g., [6, 13, 5]) have generally operated with fixed-times representations of agent schedules.\nWe define an agent architecture centered around incremental management of a flexible times schedule.\nThe underlying STN-based representation is used (1) to loosen the coupling between executor and scheduler threads, (2) to retain a basic ability to absorb unexpected executional delays (or speedups), and (3) to provide a basic criterion for detecting the need for schedule change.\n484 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nFigure 1: A two agent C TAEMS problem.\nLocal change is accomplished by an incremental scheduler, designed to maximize quality while attempting to minimize schedule change.\nTo this schedule management infra-structure, we add two mechanisms for multi-agent coordination.\nBasic coordination with other agents is achieved by simple communication of local schedule changes to other agents with interdependent activities.\nLayered over this is a non-local option generation and evaluation process (similar in some respects to [5]), aimed at identification of opportunities for global improvement through joint changes to the schedules of multiple agents.\nThis latter process uses analysis of detected conflicts in the STN as a basis for generating options.\nThe remainder of the paper is organized as follows.\nWe begin by briefly summarizing the general distributed scheduling problem of interest in our work.\nNext, we introduce the agent architecture we have developed to solve this problem and sketch its operation.\nIn the following sections, we describe the components of the architecture in more detail, considering in turn issues relating to executing agent schedules, incrementally revising agent schedules and
coordinating schedule changes among multiple agents.\nWe then give some experimental results to indicate current system performance.\nFinally, we conclude with a brief discussion of current research plans.\n2.\nTHE COORDINATORS PROBLEM\nAs indicated above, the distributed schedule management problem that we address in this paper is that put forth by the DARPA Coordinators program.\nThe Coordinators problem is concerned generally with the collaborative execution of a joint mission by a team of agents in a highly dynamic environment.\nA mission is formulated as a network of tasks, which are distributed among the agents by the MASS simulator such that no agent has a complete, objective view of the whole problem.\nInstead, each agent receives only a subjective view containing just the portion of the task network that relates to ground tasks that it is responsible for and any remote tasks that have interdependencies with these local tasks.\nA pre-computed initial schedule is also distributed to the agents, and each agent's schedule indicates which of its local tasks should be executed and when.\nEach task has an associated quality value which accrues if it is successfully executed within its constraints, and the overall goal is to maximize the quality obtained during execution.\nFigure 2: Subjective view for Agent 2.\nAs execution proceeds, agents must react to unexpected results (e.g., task delays, failures) and changes to the mission (e.g., new tasks, deadline changes) generated by the simulator, recognize when scheduled tasks are no longer feasible or desirable, and coordinate with each other to take corrective, quality-maximizing rescheduling actions that keep execution of the overall mission moving forward.\nProblems are formally specified using a version of the TAEMS language (Task Analysis, Environment Modeling and Simulation) [4] called C TAEMS [1].\nWithin C TAEMS, tasks are represented hierarchically, as shown in the example in Figure 1.\nAt the highest, most
abstract level, the root of the tree is a special task called the task group.\nOn successive levels, tasks constitute aggregate activities, which can be decomposed into sets of subtasks and\/or primitive activities, termed methods.\nMethods appear at the leaf level of C TAEMS task structures and are those that are directly executable in the world.\nEach declared method m can only be executed by a specified agent (denoted by ag : AgentN in Figure 1) and each agent can be executing at most one method at any given time (i.e. agents are unit-capacity resources).\nMethod durations and quality are typically specified as discrete probability distributions, and hence known with certainty only after they have been executed.1 It is also possible for a method to fail unexpectedly in execution, in which case the reported quality is zero.\nFor each task, a quality accumulation function qaf is defined, which specifies when and how a task accumulates quality as its subtasks (methods) are executed.\nFor example, a task with a min qaf will accrue the quality of its child with lowest quality if all its children execute and accumulate positive quality.\nTasks with sum or max qafs acquire quality as soon as one child executes with positive quality; as their qaf names suggest, their respective values ultimately will be the total or maximum quality of all children that executed.\nA sync-sum task will accrue quality only for those children that commence execution concurrently with the first child that executes, while an exactly-one task accrues quality only if precisely one of its children executes.\nInter-dependencies between tasks\/methods in the problem are modeled via non-local effects (nles).\nTwo types of nles can be specified: hard and soft.\n1 For simplicity, Figures 1 and 2 show only fixed values for method quality and duration.\nHard nles express causal preconditions: for example, the
enables nle in Figure 1 stipulates that the target method M5 cannot be executed until the source M4 accumulates quality.\nSoft nles, which include facilitates and hinders, are not required constraints; however, when they are in play, they amplify (or dampen) the quality and duration of the target task.\nAny given task or method a can also be constrained by an earliest start time and a deadline, specifying the window in which a can be feasibly executed.\na may also inherit these constraints from ancestor tasks at any higher level in the task structure, and its effective execution window will be defined by the tightest of these constraints.\nFigure 1 shows the complete objective view of a simple 2 agent problem.\nFigure 2 shows the subjective view available to agent 2 for the same problem.\nIn what follows, we will sometimes use the term activity to refer generically to both task and method nodes.\n3.\nOVERVIEW OF APPROACH\nOur solution framework combines two basic principles for coping with the problem of managing multi-agent schedules in an uncertain and time stressed execution environment.\nFirst is the use of an STN-based flexible times representation of solution constraints, which allows execution to be driven by a set of schedules rather than a single point solution.\nThis provides a basic hedge against temporal uncertainty and can be used to modulate the need for solution revision.\nThe second principle is to first respond locally to exceptional events, and then, as time permits, explore non-local options (i.e., options involving change by 2 or more agents) for global solution improvement.\nThis provides a means for keeping pace with execution, and for tying the amount of effort spent in more global multi-agent solution improvement to the time available.\nBoth local and non-local problem solving time is further minimized by the use of a core incremental scheduling procedure.\nFigure 3: Agent Architecture.\nOur solution framework is made concrete in the agent
architecture depicted in Figure 3.\nIn its most basic form, an agent comprises four principal components - an Executor, a Scheduler, a Distributed State Manager (DSM), and an Options Manager - all of which share a common model of the current problem and solution state that couples a domain-level representation of the subjective c taems task structure to an underlying STN.\nAt any point during operation, the currently installed schedule dictates the timing and sequence of domain-level activities that will be initiated by the agent.\nThe Executor, running in its own thread, continually monitors the enabling conditions of various pending activities, and activates the next pending activity as soon as all of its causal and temporal constraints are satisfied.\nWhen execution results are received back from the environment (MASS) and\/or changes to assumed external constraints are received from other agents, the agent's model of current state is updated.\nIn cases where this update leads to inconsistency in the STN or it is otherwise recognized that the current local schedule might now be improved, the Scheduler, running on a separate thread, is invoked to revise the current solution and install a new schedule.\nWhenever local schedule constraints change, either in response to a current state update or through manipulation by the Scheduler, the DSM is invoked to communicate these changes to interested agents (i.e., those agents that share dependencies and have overlapping subjective views).\nAfter responding locally to a given state update and communicating consequences, the agent will use any remaining computation time to explore possibilities for improvement through joint change.\nThe Option Manager utilizes the Scheduler (in this case in hypothetical mode) to generate one or more non-local options, i.e., identifying changes to the schedule of one or more other agents that will enable the local agent to raise the quality of its schedule.\nThese options are formulated and
communicated as queries to the appropriate remote agents, who in turn hypothetically evaluate the impact of proposed changes from their local perspective. In those cases where global improvement is verified, joint changes are committed to. In the following sections we consider the mechanics of these components in more detail.

4. THE SCHEDULER
As indicated above, our agent scheduler operates incrementally. Incremental scheduling frameworks are ideally suited for domains requiring tight scheduler-execution coupling: rather than recomputing a new schedule in response to every change, they respond quickly to execution events by localizing changes and making adjustments to the current schedule to accommodate the event. There is an inherent bias toward schedule stability, which provides better support for continuity in execution. This latter property is also advantageous in multi-agent settings, since solution stability tends to minimize the ripple across different agents' schedules. The coupling of incremental scheduling with flexible-times scheduling adds additional leverage in an uncertain, multi-agent execution environment. As mentioned earlier, slack can be used as a hedge against uncertain method execution times. It also provides a basis for softening the impact of inter-dependencies across agents. In this section, we summarize the core scheduler that we have developed to solve the Coordinators problem. In subsequent sections we discuss its use in managing execution and coordinating with other agents.

The Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

4.1 STN Solution Representation
To maintain the range of admissible values for the start and end times of the various methods in a given agent's schedule, all problem and scheduling constraints impacting these times are encoded in an underlying Simple Temporal Network (STN) [3]. An STN represents temporal constraints as a graph G = <N, E>, where nodes in N represent
the set of time points of interest, and edges in E are distances between pairs of time points in N. A special time point, called calendar zero, grounds the network and has the value 0. Constraints on activities (e.g., release time, due time, duration) and relationships between activities (e.g., parent-child relation, enables) are uniformly represented as temporal constraints (i.e., edges) between relevant start and finish time points. An agent's schedule is designated as a total ordering of selected methods by posting precedence constraints between the end and start points of each ordered pair. As new methods are inserted into a schedule or external state updates require adjustments to existing constraints (e.g., substitution of an actual duration constraint, tightening of a deadline), the network propagates constraints and maintains lower and upper bounds on all time points in the network. This is accomplished efficiently via the use of a standard all-pairs shortest path algorithm; in our implementation, we take advantage of an incremental procedure based on [2]. As bounds are updated, a consistency check is made for the presence of negative cycles; the absence of any such cycle ensures the continued temporal feasibility of the network (and hence the schedule). Otherwise a conflict has been detected, and some amount of constraint retraction is necessary to restore feasibility.

4.2 Maintaining High-Quality Schedules
The scheduler consists of two basic components: a quality propagator and an activity allocator that work in a tightly integrated loop. The quality propagator analyzes the activity hierarchy and collects a set of methods that (if scheduled) would maximize the quality of the agent's local problem. The methods are collected without regard for resource contention; in essence, the quality propagator optimally solves a relaxed problem where agents are capable of performing an infinite number of activities at once. The allocator selects methods
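The negative-cycle test above can be sketched in a few lines. This is a minimal illustration, not the agent's implementation: it uses the standard distance-graph encoding of an STN, where an edge (u, v, w) asserts time(v) - time(u) <= w, and a plain Bellman-Ford pass from calendar zero in place of the incremental propagation procedure of [2]. All names are illustrative.

```python
def stn_consistent(num_points, edges):
    """Return shortest-path distances from time point 0 (calendar zero),
    i.e. upper bounds on each time point, or None on a negative cycle."""
    INF = float("inf")
    dist = [INF] * num_points
    dist[0] = 0
    for _ in range(num_points - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:  # any further relaxation implies a negative cycle
        if dist[u] + w < dist[v]:
            return None
    return dist

# One method: point 0 = calendar zero, 1 = start, 2 = finish.
edges = [
    (1, 0, -2),   # release time 2:  start - zero >= 2
    (0, 2, 10),   # deadline 10:     finish - zero <= 10
    (1, 2, 5),    # max duration 5:  finish - start <= 5
    (2, 1, -3),   # min duration 3:  finish - start >= 3
]
print(stn_consistent(3, edges))                # consistent: [0, 7, 10]
print(stn_consistent(3, edges + [(0, 2, 4)]))  # deadline tightened to 4 -> None
```

In the second call, tightening the deadline to 4 conflicts with the minimum duration and release time, producing the negative cycle that signals the need for constraint retraction.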
from this list and attempts to install them in the agent's schedule. Failure to do so reinvokes the quality propagator with the problematic activity excluded.

The Quality Propagator - The quality propagator performs the following actions on the C TAEMS task structure:
• Computes the quality of all activities in the task structure: the expected quality qual(m) of a method m is computed from the probability distribution of its execution outcomes; the quality qual(t) of a task t is computed by applying its qaf to the assessed quality of its children.
• Generates a list of contributors for each task: methods that, if scheduled, will maximize the quality obtained by the task.
• Generates a list of activators for each task: methods that, if scheduled, are sufficient to qualify the task as scheduled. Methods in the activators list are chosen to minimize demands on the agent's timeline, without regard to quality.

The first time the quality propagator is invoked, the qualities of all tasks and methods are calculated and the initial lists of contributors and activators are determined. Subsequent calls to the propagator occur as the allocator installs methods on the agent's timeline: failure of the allocator to install a method causes the propagator to recompute a new list of contributors and activators.

The Activity Allocator - The activity allocator seeks to install the contributors of the taskgroup identified by the quality propagator onto the agent's timeline. Any currently scheduled methods that do not appear in the contributors list are first unscheduled and removed from the timeline. The contributors are then preprocessed using a quality-centric heuristic to create an agenda sorted in decreasing quality order. In addition, methods associated with an and task (i.e., one whose qaf is min or sum_and) are grouped consecutively within the agenda. Since an and task accumulates quality only if all its children are scheduled, this biases the scheduling process
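The bottom-up quality computation can be sketched as follows. This is a hedged illustration using a hypothetical dict representation (not the agent's data structures): methods carry an outcome distribution, tasks carry a qaf ("min", "max" or "sum") and a list of children.

```python
def expected_quality(node):
    """Bottom-up expected quality: expectation over outcomes at methods,
    the task's qaf applied to child qualities at tasks."""
    if "outcomes" in node:  # method: expectation over execution outcomes
        return sum(p * q for p, q in node["outcomes"])
    child_q = [expected_quality(c) for c in node["children"]]
    qaf = {"min": min, "max": max, "sum": sum}[node["qaf"]]
    return qaf(child_q)

tree = {"qaf": "sum", "children": [
    {"qaf": "min", "children": [                 # an "and" task
        {"outcomes": [(0.8, 10), (0.2, 0)]},     # E[q] = 8
        {"outcomes": [(1.0, 6)]},                # E[q] = 6
    ]},
    {"outcomes": [(0.5, 20), (0.5, 10)]},        # E[q] = 15
]}
print(expected_quality(tree))                    # min(8, 6) + 15 = 21.0
```

The min task illustrates why the propagator groups and-task methods together on the agenda: scheduling only one of its two methods contributes nothing.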
towards failing early (and regenerating contributors) when the methods chosen for the and task cannot together be allocated.

The allocator iteratively pops the first method mnew from the agenda and attempts to install it. This entails first checking that all activities that enable mnew have been scheduled, while attempting to install any enabler that is not. If any of the enabler activities fails to install, the allocation pass fails. When successful, the enables constraints linking the enabler activities to mnew are activated. The STN rejects an infeasible enables constraint by returning a conflict; in this event any enabler activities it has scheduled are uninstalled and the allocator returns failure. Once scheduling of enablers is ensured, a feasible slot on the agent's timeline within mnew's time window is sought, and the allocator attempts to insert mnew between two currently scheduled methods. At the STN level, mnew's insertion breaks the sequencing constraint between the two extant timeline methods and attempts to insert two new sequencing constraints that chain mnew to these methods. If these insertions succeed, the routine returns success; otherwise the two extant timeline methods are relinked and allocation attempts the next possible slot for mnew insertion.

5. THE DYNAMICS OF EXECUTION
Maintaining a flexible-times schedule enables us to use a conflict-driven approach to schedule repair: rather than reacting to every event in the execution that may impact the existing schedule by computing an updated solution, the STN can absorb any change that does not cause a conflict. Consequently, computation (producing a new schedule) and communication costs (informing other agents of changes that affect them) are minimized.

One basic mechanism needed to model execution in the STN is a dynamic model for current time. We employ a model proposed by [7] that establishes a current-time time point and includes a link between it and the calendar-zero time
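The slot search can be sketched in simplified form. This toy version (hypothetical names throughout) is deliberately weaker than the real allocator, which posts and retracts STN sequencing constraints and checks consistency; here the timeline is treated as fixed (name, start, end) intervals and the new method has a single expected duration.

```python
def try_insert(timeline, name, dur, release, deadline):
    """Try each gap on the timeline in order; return the new timeline on
    success, or None if no slot admits the method."""
    fence = [("<begin>", 0, 0)] + timeline + [("<end>", float("inf"), float("inf"))]
    for i in range(len(fence) - 1):
        prev_end, next_start = fence[i][2], fence[i + 1][1]
        start = max(prev_end, release)        # earliest start in this gap
        if start + dur <= min(next_start, deadline):
            return timeline[:i] + [(name, start, start + dur)] + timeline[i:]
    return None

timeline = [("M1", 0, 10), ("M2", 20, 30)]
print(try_insert(timeline, "M3", 5, 0, 40))   # fits in the gap between M1 and M2
print(try_insert(timeline, "M4", 15, 0, 30))  # no feasible slot -> None
```

In the full system, a failed pass over all slots is what reinvokes the quality propagator with the problematic activity excluded.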
point. As each method is scheduled, a simple precedence constraint between the current-time time point and the method is established. When the scheduler receives a current time update, the link between calendar-zero and current-time is modified to reflect this new time, and the constraint propagates to all scheduled methods.

A second issue concerns synchronization between the executor and the scheduler, as producer and consumer of the schedule running on different threads within a given agent. This coordination must be robust despite the fact that the executor needs to start methods for execution in real-time even while the scheduler may be reassessing the schedule to maximize quality, and/or transmitting a revised schedule. If the executor, for example, slates a method for execution based on current time while the scheduler is instantiating a revised schedule in which that method is no longer next to be executed, an inconsistent state may arise within the agent architecture. This is addressed in part by introducing a freeze window: a specified short (and adjustable) time period beyond current time within which any activity slated as eligible to start in the current schedule cannot be rescheduled by the scheduler.

The scheduler is triggered in response to various environmental messages. There are two types of environmental message classes that we discuss here as execution dynamics: 1) feedback as a result of method execution - both the agent's own and that of other agents, and 2) changes in the C TAEMS model corresponding to a set of simulator-directed evolutions of the problem and environment. Such messages are termed updates and are treated by the scheduler as directives to permanently modify parameters in its model. We discuss these update types in turn here and defer until later the discussion of queries to the scheduler, a "what-if" mode initiated by a remote
agent that is pursuing higher global quality. Whether it is invoked via an update or a query, the scheduler's response is an option: essentially a complete schedule of activities the agent can execute, along with associated quality metrics. We define a local option as a valid schedule for an agent's activities which does not require change to any other agent's schedule. The overarching design for handling execution dynamics aims at anytime scheduling behavior, in which a local option maximizing the local view of quality is returned quickly, possibly followed by globally higher quality schedules that entail inter-agent coordination if available scheduler cycles permit. As such, the default scheduling mode for updates is to seek the highest quality local option according to the scheduler's search strategy, instantiate the option as its current schedule, and notify the executor of the revision.

5.1 Responding to Activity Execution
As suggested earlier, a committed schedule consists of a sequence of methods, each with a designated [est, lst] start time window (as provided by the underlying STN representation). The executor is free to execute a method any time within its start time window, once any additional enabling conditions have been confirmed. These scheduled start time windows are established using the expected duration of each scheduled method (derived from associated method duration distributions during schedule construction). Of course, as execution unfolds, actual method durations may deviate from these expectations. In these cases, the flexibility retained in the schedule can be used to absorb some of this unpredictability and modulate invocation of a schedule revision process.

Consider the case of a method completion message, one of the environmental messages that could be communicated to the scheduler as an execution state update. If the completion time is coincident with the expected duration (i.e., the method completes exactly as expected), then the
scheduler's response is to simply mark it as completed, and the agent can proceed to communicate the time at which it has accumulated quality to any remote agents linked to this method. However, if the method completes with a duration shorter than expected, a rescheduling action might be warranted. The posting of the actual duration in the STN introduces no potential for conflict in this case, either with the latest start times (lsts) of local or remote methods that depend on this method as an enabler, or with successively scheduled methods on the agent's timeline. However, it may present a possibility for exploiting the unanticipated scheduling slack. The flexible-times representation afforded by the STN provides a quick means of assessing whether the next method on the timeline can begin immediate execution instead of waiting for its previously established earliest start time (est). If indeed the est of the next scheduled method can spring back to current-time once the actual duration constraint is substituted for the expected duration constraint, then the schedule can be left intact and simply communicated back to the executor. If, alternatively, other problem constraints prevent this relaxation of the est, then there is forced idle time that may be exploited by revising the schedule, and the scheduler is invoked (always respecting the freeze period).

If the method completes later than expected, then there is no need for rescheduling under flexible-times scheduling unless 1) the method finishes later than the lst of the subsequent scheduled activity, or 2) it finishes later than its deadline. Thus we only invoke the scheduler if, upon posting the late finish in the STN, a constraint violation occurs. In the latter case no quality is accrued, and rescheduling is mandated even if there are no conflicts with subsequent scheduled activities.

Other execution status updates the agent may receive include:
• method start - If a method sent for execution is
started within its [est, lst] window, the response is to mark it as executing. A method cannot start earlier than when it is transmitted by the executor, but it is possible for it to start later than requested. If the posted start time causes an inconsistency in the STN (e.g., because the expected method duration can no longer be accommodated), the duration constraint in the STN is shortened based on the known distribution until either consistency is restored or rescheduling is mandated.
• method failure - Any method under execution may fail unexpectedly, garnering no quality for the agent. At this point rescheduling is mandated, as the method may enable other activities or significantly impact quality in the absence of local repair. Again, the executor will proceed with execution of the next method if its start time arrives before the revised schedule is committed, and the scheduler accommodates this by respecting the freeze window.
• current time advances - An update on current time may arrive either alone or as part of any of the previously discussed updates. If, when updating the current-time link in the STN (as described above), a conflict results, the execution state is inconsistent with the schedule. In this case, the scheduler proceeds as if execution were consistent with its expectations, subject to possible later updates.

5.2 Responding to Model Updates
The agent can also dynamically receive changes to its underlying C TAEMS model. Dynamic revisions in the outcome distributions for methods already in an agent's subjective view may impact the assessed quality and/or duration values that shaped the current schedule. Similarly, dynamic revisions in the designated release times and deadlines for methods and tasks already in an agent's subjective view can invalidate an extant schedule or present opportunities to boost quality. It is
also possible during execution to receive updates in which new methods and possibly entire task structures are given to the agent for inclusion in its subjective view. Model changes that involve temporal constraints are handled in much the same fashion as described for method starts and completions, i.e., rescheduling is required only when the posting of the revised constraints leads to an STN conflict. In the case of non-temporal model changes, rescheduling action is currently always initiated.

6. INTER-AGENT COORDINATION
Having responded locally to an unexpected execution result or model change, it is necessary to communicate the consequences to agents with inter-dependent activities so that they can align their decisions accordingly. Responses that look good locally may have a sub-optimal global effect once alignments are made, and hence agents must have the ability to seek mutually beneficial joint schedule changes. In this section we summarize the coordination mechanisms provided in the agent architecture to address these issues.

6.1 Communicating Non-Local Constraints
A basic means of coordination with other agents is provided by the Distributed State Manager (DSM), which is responsible for communicating changes made to the model or schedule of a given agent to other interested agents. More specifically, the DSM of a given agent acts to push any changes made to the time bounds, quality, or status of a local task/method to all the other agents that have that same task/method as a remote node in their subjective views. A recipient agent treats any communicated changes as additional forms of updates, in this case updates that modify the current constraints associated with non-local (but inter-dependent) tasks or methods. These changes are handled identically to updates reflecting schedule execution results, potentially triggering the local scheduler if the need to reschedule is detected.

6.2 Generating Non-Local Options
As mentioned in the
previous section, the agent's first response to any given query or update (either from execution or from another agent) is to generate one or more local options. Such options represent local schedule changes that are consistent with all currently known constraints originating from other agents' schedules, and hence can be implemented without interaction with other agents. In many cases, however, a larger-scoped change to the schedules of two or more agents can produce a higher-quality response. Exploration of opportunities for such coordinated action by two or more agents is the responsibility of the Options Manager. Running in lower priority mode than the Executor and Scheduler, the Options Manager initiates a non-local option generation and evaluation process in response to any local schedule change made by the agent, if computation time constraints permit.

Generally speaking, a non-local option identifies certain relaxations (to one or more constraints imposed by methods that are scheduled by one or more remote agents) that enable the generation of a higher quality local schedule. When found, a non-local option is used by a coordinating agent to formulate queries to any other involved agents in order to determine the impact of such constraint relaxations on their local schedules. If the combined quality change reported back from a set of one or more relevant queries is a net gain, then the issuing agent signals to the other involved agents to commit to this joint set of schedule changes. The Options Manager currently employs two basic search strategies for generating non-local options, each exploiting the local scheduler in hypothetical mode.

Optimistic Synchronization - Optimistic synchronization is a non-local option generation strategy in which search is used to explore the impact on quality if optimistic assumptions are made about currently unscheduled remote enablers. More specifically, the strategy looks for would-be contributor methods that are
currently unscheduled due to the fact that one or more remote enabling (source) tasks or methods are not currently scheduled. For each such local method, the set of remote enablers is hypothetically activated, and the scheduler attempts to construct a new local schedule under these optimistic assumptions. If successful, a non-local option is generated, specifying the value of the new, higher quality local schedule, the temporal constraints on the local target activity, and the set of must-schedule enabler activities that must be scheduled by remote agents in order to achieve this local quality. The needed queries requesting the quality impact of scheduling these activities are then formulated and sent to the relevant remote agents.

To illustrate, consider again the example in Figure 1. The maximum quality that Agent1 can contribute to the task group is 15 (by scheduling M1, M2 and M3); assume that this is Agent1's current schedule. Given this state, the maximum quality that Agent2 can contribute to the task group is 10, and the total task group quality would then be 15 + 10 = 25. Using optimistic synchronization, Agent2 will generate a non-local option indicating that if M5 becomes enabled, both M5 and M6 would be scheduled, and the quality contributed by Agent2 to the task group would become 30. Agent2 sends a must-schedule M4 query to Agent1. Because of the time window constraints, Agent1 must remove M3 from its schedule to get M4 on, resulting in a new lower quality schedule of 5. However, when Agent2 receives this option response from Agent1, it determines that the total quality accumulated for the task group would be 5 + 30 = 35, a net gain of 10. Hence, Agent2 signals to Agent1 to commit to this non-local option.

Conflict-Directed Relaxation - A second strategy for generating non-local options, referred to as conflict-directed relaxation, utilizes analysis of STN conflicts to identify and prioritize external constraints to relax in the event
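The net-gain bookkeeping behind the optimistic-synchronization example of Figure 1 can be written out directly; the quality values below are taken from the text, and the variable names are illustrative only.

```python
# Quality values from the Figure 1 walk-through of optimistic synchronization.
current_joint   = 15 + 10   # Agent1 schedules M1,M2,M3 (15); Agent2 contributes 10
agent1_response = 5         # Agent1 must drop M3 to fit M4 on its timeline
agent2_response = 30        # with M5 enabled, Agent2 schedules M5 and M6
proposed_joint  = agent1_response + agent2_response

print(proposed_joint - current_joint)   # net gain of 10, so the option is committed
```

A negative or zero difference here is exactly the case in which the issuing agent would abandon the non-local option rather than signal commit.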
that a particular method that would increase local quality is found to be unschedulable. Recall that if a method cannot be feasibly inserted into the schedule, an attempt to do so will generate a negative cycle. Given this cycle, the mechanism proceeds in three steps. First, the constraints involved in the cycle are collected. Second, by virtue of the connections in the STN to the domain-level C TAEMS model, this set is filtered to identify the subset associated with remote nodes. Third, constraints in this subset are selectively retracted to determine if STN consistency is restored. If successful, a non-local option is generated indicating which remote constraint(s) must be relaxed, and by how much, to allow installation of the new, higher quality local schedule.

Figure 4: A high quality task is added to the task structure of Agent2.
Figure 5: If M4, M5 and M7 are scheduled, a conflict is detected by the STN.

To illustrate this strategy, consider Figure 5, where Agent1 has M1, M2 and M4 on its timeline, and therefore est(M4) = 21. Agent2 has M5 and M6 on its timeline, with est(M5) = 31 (M6 could be scheduled before or after M5). Suppose that Agent2 receives a new task M7 with deadline 55 (see Figure 4). If Agent2 could schedule M7, the quality contributed by Agent2 to the task group would be 70. However, an attempt to schedule M7 together with M5 and M6 leads to a conflict, since est(M7) = 46, dur(M7) = 10 and lft(M7) = 55 (see Figure 5). Conflict-directed relaxation by Agent2 suggests relaxing lft(M4) by 1 tick to 30, and this query is communicated to Agent1. In fact, by retracting either method M1 or M2 from the schedule, this relaxation can be accommodated with no quality loss to Agent1 (due to the min qaf). Upon communication of this fact, Agent2 signals to commit.

7. EXPERIMENTAL RESULTS
An initial version of the agent described in this paper was
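The three-step retraction loop of conflict-directed relaxation can be sketched as follows. This is a hedged toy, not the agent's code: `cycle` stands for the constraint set pulled from the negative cycle, `is_remote` flags constraints tied to remote nodes, and `consistent` stands in for the STN feasibility re-check; the demo models constraints as cycle-edge weights, with feasibility meaning a non-negative residual cycle weight.

```python
def relax_remote(constraints, cycle, is_remote, consistent):
    """Selectively retract remote constraints on the cycle until the
    network is feasible again; return the retracted set, or None."""
    remaining = list(constraints)
    retracted = []
    for c in (c for c in cycle if is_remote(c)):
        remaining.remove(c)          # step 3: tentatively retract
        retracted.append(c)
        if consistent(remaining):    # feasibility restored -> non-local option
            return retracted
    return None

# Toy negative cycle of weight 4 - 3 - 2 = -1; the negative edges play the
# role of remote constraints, and dropping one restores feasibility.
cycle = [4, -3, -2]
print(relax_remote(cycle, cycle, lambda c: c < 0, lambda cs: sum(cs) >= 0))
```

The returned set is what the coordinating agent would translate into "relax lft(M4) by 1 tick"-style queries to the remote agents involved.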
developed in collaboration with SRI International and subjected to the independently conducted Coordinators programmatic evaluation. This evaluation involved over 2000 problem instances randomly generated by a scenario generator that was configured to produce scenarios of varying durations within six experiment classes. These classes, summarized in Table 1, were designed to evaluate key aspects of a set of Coordinators distributed scheduling agents, such as their ability to handle unexpected execution results, chains of NLEs involving multiple agents, and effective scheduling of new activities that arise unexpectedly at some point during the problem run.

Problem Class   Description                                         Agent Quality
OD              'Only Dynamics'. No NLEs. Actual task duration      97.9% (390 probs)
                and quality vary according to distribution.
INT             'Interdependent'. Frequent and random NLEs          100% (360 probs)
                (esp. facilitates).
CHAINS          Activities chained together via sequences of        99.5% (360 probs)
                enables NLEs (1-4 chains/prob).
TT              'Temporal Tightness'. Release-deadline windows      94.9% (360 probs)
                preclude preferred high quality (longest
                duration) tasks from all being scheduled.
SYNC            Problems contain a range of different sync sum      97.1% (360 probs)
                tasks.
NTA             'New Task Arrival'. The C TAEMS model is            99.0% (360 probs)
                augmented with new tasks dynamically during run.
OVERALL                                                             Avg: 98.1% (2190 probs)
                                                                    Std dev: 6.96

Table 1: Performance of the year 1 agent over the Coordinators evaluation. 'Agent Quality' is % of 'optimal'.

Year 1 evaluation problems were constrained to be small enough (3-10 agents, 50-100 methods) such that comparison against an optimal centralized solver was feasible. The evaluation team employed an MDP-based solver capable of unrolling the entire search space for these problems, choosing for an agent at each execution decision point the activity most likely to produce maximum global quality. This established a challenging benchmark for the distributed agent systems to compare against. The
hardware configuration used by the evaluators instantiated and ran one agent per machine, dedicating a separate machine to the MASS simulator. As reported in Table 1, the year 1 prototype agent clearly compares favorably to the benchmark on all classes, coming within 2% of the MDP optimal averaged over the entire set of 2190 problems. These results are particularly notable given that each agent's STN-based scheduler does very little reasoning over the success probability of the activity sequences it selects to execute. Only simple tactics were adopted to explicitly address such uncertainty, such as the use of expected durations and quality for activities and a policy of excluding from consideration those activities with a failure likelihood greater than 75%. The very respectable agent performance can be at least partially credited to the fact that the flexible-times representation employed by the scheduler affords it an important buffer against the uncertainty of execution and exogenous events.

The agent turns in its lowest performance on the TT (Temporal Tightness) experiment classes, and an examination of the agent trace logs reveals possible reasons. In about half of the TT problems the year 1 agent under-performs on, the specified time windows within which an agent's activities must be scheduled are so tight that any scheduled activity which executes with a longer duration than the expected value causes a deadline failure. This constitutes a case where more sophisticated reasoning over success probability would benefit the agent. The other half of under-performing TT problems involve activities that depend on facilitation relationships in order to fit in their time windows (recall that facilitation increases quality and decreases duration). The limited facilitates reasoning performed by the year 1 scheduler sometimes causes failures to install a heavily facilitated
initial schedule. Even when such activities are successfully installed, they tend to be prone to deadline failures: if a source-side activity either fails or exceeds its expected duration, the resulting longer duration of the target activity can violate its time window deadline.

8. STATUS AND DIRECTIONS
Our current research efforts are aimed at extending the capabilities of the year 1 agent and scaling up to significantly larger problems. Year 2 programmatic evaluation goals call for solving problems on the order of 100 agents and 10,000 methods. This scale places much higher computational demands on all of the agent's components. We have recently completed a re-implementation of the prototype agent designed to address some recognized performance issues. In addition to verifying that the performance on year 1 problems is matched or exceeded, we have recently run some successful tests with the agent on a few 100-agent problems. To fully address various scale-up issues, we are investigating a number of more advanced coordination mechanisms. To provide more global perspective to local scheduling decisions, we are introducing mechanisms for computing, communicating and using estimates of the non-local impact of remote nodes. To better address the problem of establishing inter-agent synchronization points, we are expanding the use of task owners and qaf-specific protocols as a means for directing coordination activity. Finally, we plan to explore the use of more advanced STN-driven coordination mechanisms, including the use of temporal decoupling [7] to insulate the actions of inter-dependent agents and the introduction of probability-sensitive contingency schedules.

9. ACKNOWLEDGEMENTS
The year 1 agent architecture was developed in collaboration with Andrew Agno, Roger Mailler and Regis Vincent of SRI International. This paper is based on work supported by the Department of Defense Advanced Research Projects Agency (DARPA) under Contract #
FA8750-05-C0033. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA.

10. REFERENCES
[1] M. Boddy, B. Horling, J. Phelps, R. Goldman, R. Vincent, A. Long, and B. Kohout. C TAEMS language specification v. 1.06, October 2005.
[2] A. Cesta and A. Oddi. Gaining efficiency and flexibility in the simple temporal problem. In Proc. 3rd Int. Workshop on Temporal Representation and Reasoning, Key West, FL, May 1996.
[3] R. Dechter, I. Meiri, and J. Pearl. Temporal constraint networks. Artificial Intelligence, 49:61-95, May 1991.
[4] K. Decker. TÆMS: A framework for environment centered analysis & design of coordination mechanisms. In G. O'Hare and N. Jennings, editors, Foundations of Distributed Artificial Intelligence, chapter 16, pages 429-448. Wiley Inter-Science, 1996.
[5] K. Decker and V. Lesser. Designing a family of coordination algorithms. In Proc. 1st Int. Conference on Multi-Agent Systems, San Francisco, 1995.
[6] A. J. Garvey. Design-To-Time Real-Time Scheduling. PhD thesis, Univ. of Massachusetts, Feb. 1996.
[7] L. Hunsberger. Algorithms for a temporal decoupling problem in multi-agent planning. In Proc. 18th National Conference on AI, 2002.
[8] S. Lemai and F. Ingrand. Interleaving temporal planning and execution in robotics domains. In Proc. 19th National Conference on AI, 2004.
[9] N. Muscettola, P. P. Nayak, B. Pell, and B. C. Williams. Remote Agent: To boldly go where no AI system has gone before. Artificial Intelligence, 103(1-2):5-47, 1998.
[10] W. Ruml, M. B. Do, and M. Fromherz. On-line planning and scheduling of high-speed manufacturing. In Proc. ICAPS-05, Monterey, 2005.
[11] I. Shu, R. Effinger, and B. Williams. Enabling fast flexible planning through incremental temporal reasoning with conflict extraction. In Proc. ICAPS-05, Monterey, 2005.
[12] S. Smith and C.
Cheng. Slack-based heuristics for constraint satisfaction scheduling. In Proc. 12th National Conference on AI, Washington, DC, July 1993.
[13] T. Wagner, A. Garvey, and V. Lesser. Criteria-directed heuristic task scheduling. International Journal of Approximate Reasoning, 19(1):91-118, 1998.

Distributed Management of Flexible Times Schedules

ABSTRACT
We consider the problem of managing schedules in an uncertain, distributed environment. We assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution. The goal is to maximize the joint quality obtained from the activities executed by all agents, given that, during execution, unexpected events will force changes to some prescribed activities and reduce the utility of executing others. We describe an agent architecture for solving this problem that couples two basic mechanisms: (1) a "flexible times" representation of the agent's schedule (using a Simple Temporal Network) and (2) an incremental rescheduling procedure. The former hedges against temporal uncertainty by allowing execution to proceed from a set of feasible solutions, and the latter acts to revise the agent's schedule when execution is forced outside of this set of solutions or when execution events reduce the expected value of this feasible solution set. Basic coordination with other agents is achieved simply by communicating schedule changes to those agents with inter-dependent activities. Then, as time permits, the core local problem solving infra-structure is used to drive an inter-agent option generation and query process, aimed at identifying opportunities for solution improvement through joint change. Using a simulator to model the environment, we compare the performance of our multi-agent system with that of an
expected optimal (but non-scalable) centralized MDP solver.\n1.\nINTRODUCTION\nThe practical constraints of many application environments require distributed management of executing plans and schedules.\nSuch factors as geographical separation of executing agents, limitations on communication bandwidth, constraints relating to chain of command and the high tempo of execution dynamics may all preclude any single agent from obtaining a complete global view of the problem, and hence necessitate collaborative yet localized planning and scheduling decisions.\nIn this paper, we consider the problem of managing and executing schedules in an uncertain and distributed environment as defined by the DARPA Coordinators program.\nWe assume a team of collaborative agents, each responsible for executing a portion of a globally preestablished schedule, but none possessing a global view of either the problem or solution.\nThe team goal is to maximize the total quality of all activities executed by all agents, given that unexpected events will force changes to pre-scheduled activities and alter the utility of executing others as execution unfolds.\nTo provide a basis for distributed coordination, each agent is aware of dependencies between its scheduled activities and those of other agents.\nEach agent is also given a pre-computed set of local contingency (fall-back) options.\nCentral to our approach to solving this multi-agent problem is an incremental flexible-times scheduling framework.\nIn a flexible-times representation of an agent's schedule, the execution intervals associated with scheduled activities are not fixed, but instead are allowed to float within imposed time and activity sequencing constraints.\nThis representation allows the explicit use of slack as a hedge against simple forms of executional uncertainty (e.g., activity durations), and its underlying implementation as a Simple Temporal Network (STN) model provides efficient updating and consistency enforcement 
mechanisms.\nThe advantages of flexible times frameworks have been demonstrated in various centralized planning and scheduling contexts (e.g., [12, 8, 9, 10, 11]).\nHowever, their use in distributed problem solving settings has been quite sparse ([7] is one exception), and prior approaches to multi-agent scheduling (e.g., [6, 13, 5]) have generally operated with fixed-times representations of agent schedules.\nWe define an agent architecture centered around incremental management of a flexible times schedule.\nThe underlying STN-based representation is used (1) to loosen the coupling between executor and scheduler threads, (2) to retain a basic ability to absorb unexpected executional delays (or speedups), and (3) to provide a basic criterion for detecting the need for schedule change.\nFigure 1: A two agent C TAEMS problem.\nLocal change is accomplished by an incremental scheduler, designed to maximize quality while attempting to minimize schedule change.\nTo this schedule management infra-structure, we add two mechanisms for multi-agent coordination.\nBasic coordination with other agents is achieved by simple communication of local schedule changes to other agents with interdependent activities.\nLayered over this is a non-local option generation and evaluation process (similar in some respects to [5]), aimed at identification of opportunities for global improvement through joint changes to the schedules of multiple agents.\nThis latter process uses analysis of detected conflicts in the STN as a basis for generating options.\nThe remainder of the paper is organized as follows.\nWe begin by briefly summarizing the general distributed scheduling problem of interest in our work.\nNext, we introduce the agent architecture we have developed to solve this problem and sketch its operation.\nIn the following sections, we describe the components of the architecture in more detail, considering in turn issues relating to executing agent schedules, incrementally revising 
agent schedules and coordinating schedule changes among multiple agents.\nWe then give some experimental results to indicate current system performance.\nFinally we conclude with a brief discussion of current research plans.\n2.\nTHE COORDINATORS PROBLEM\n3.\nOVERVIEW OF APPROACH\n4.\nTHE SCHEDULER\n4.1 STN Solution Representation\n4.2 Maintaining High-Quality Schedules\n5.\nTHE DYNAMICS OF EXECUTION\n5.1 Responding to Activity Execution\n5.2 Responding to Model Updates\n6.\nINTER-AGENT COORDINATION\n6.1 Communicating Non-Local Constraints\n6.2 Generating Non-Local Options\n7.\nEXPERIMENTAL RESULTS
8.\nSTATUS AND DIRECTIONS\nOur current research efforts are aimed at extending the capabilities of the Year 1 agent and scaling up to significantly larger problems.\nYear 2 programmatic evaluation goals call for solving problems on the order of 100 agents and 10,000 methods.\nThis scale places much higher computational demands on all of the agent's components.\nWe have recently completed a re-implementation of the prototype agent designed to address some recognized performance issues.\nIn addition to verifying that the performance on Year 1 problems is matched or exceeded, we have recently run some successful tests with the agent on a few 100-agent problems.\nTo fully address various scale-up issues, we are investigating a number of more advanced coordination mechanisms.\nTo provide more global perspective to local scheduling decisions, we are introducing mechanisms for computing, communicating and using estimates of the non-local impact of remote nodes.\nTo better address the problem of establishing inter-agent synchronization points, we are expanding the use of task owners and qaf-specific protocols as a means for directing coordination activity.\nFinally, we plan to explore the use of more advanced STN-driven coordination mechanisms, including the use of temporal decoupling [7] to insulate the actions of inter-dependent agents and the introduction of probability-sensitive contingency schedules.","lvl-4":"Distributed Management of Flexible Times Schedules\nABSTRACT\nWe consider the problem of managing schedules in an uncertain, distributed environment.\nWe assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution.\nThe goal is to maximize the joint quality obtained from the activities executed by all agents, given that, during execution, unexpected events will force 
changes to some prescribed activities and reduce the utility of executing others.\nWe describe an agent architecture for solving this problem that couples two basic mechanisms: (1) a \"flexible times\" representation of the agent's schedule (using a Simple Temporal Network) and (2) an incremental rescheduling procedure.\nThe former hedges against temporal uncertainty by allowing execution to proceed from a set of feasible solutions, and the latter acts to revise the agent's schedule when execution is forced outside of this set of solutions or when execution events reduce the expected value of this feasible solution set.\nBasic coordination with other agents is achieved simply by communicating schedule changes to those agents with inter-dependent activities.\nThen, as time permits, the core local problem solving infra-structure is used to drive an inter-agent option generation and query process, aimed at identifying opportunities for solution improvement through joint change.\nUsing a simulator to model the environment, we compare the performance of our multi-agent system with that of an expected optimal (but non-scalable) centralized MDP solver.\n1.\nINTRODUCTION\nThe practical constraints of many application environments require distributed management of executing plans and schedules.\nIn this paper, we consider the problem of managing and executing schedules in an uncertain and distributed environment as defined by the DARPA Coordinators program.\nWe assume a team of collaborative agents, each responsible for executing a portion of a globally preestablished schedule, but none possessing a global view of either the problem or solution.\nThe team goal is to maximize the total quality of all activities executed by all agents, given that unexpected events will force changes to pre-scheduled activities and alter the utility of executing others as execution unfolds.\nTo provide a basis for distributed coordination, each agent is aware of dependencies between its 
scheduled activities and those of other agents.\nEach agent is also given a pre-computed set of local contingency (fall-back) options.\nCentral to our approach to solving this multi-agent problem is an incremental flexible-times scheduling framework.\nIn a flexible-times representation of an agent's schedule, the execution intervals associated with scheduled activities are not fixed, but instead are allowed to float within imposed time and activity sequencing constraints.\nHowever, their use in distributed problem solving settings has been quite sparse ([7] is one exception), and prior approaches to multi-agent scheduling (e.g., [6, 13, 5]) have generally operated with fixed-times representations of agent schedules.\nWe define an agent architecture centered around incremental management of a flexible times schedule.\nFigure 1: A two agent C TAEMS problem.\nLocal change is accomplished by an incremental scheduler, designed to maximize quality while attempting to minimize schedule change.\nTo this schedule management infra-structure, we add two mechanisms for multi-agent coordination.\nBasic coordination with other agents is achieved by simple communication of local schedule changes to other agents with interdependent activities.\nLayered over this is a non-local option generation and evaluation process (similar in some respects to [5]), aimed at identification of opportunities for global improvement through joint changes to the schedules of multiple agents.\nWe begin by briefly summarizing the general distributed scheduling problem of interest in our work.\nNext, we introduce the agent architecture we have developed to solve this problem and sketch its operation.\nIn the following sections, we describe the components of the architecture in more detail, considering in turn issues relating to executing agent schedules, incrementally revising agent schedules and coordinating schedule changes among multiple agents.\nWe then give some experimental results to indicate 
current system performance.\nFinally, we conclude with a brief discussion of current research plans.\n8.\nSTATUS AND DIRECTIONS\nOur current research efforts are aimed at extending the capabilities of the Year 1 agent and scaling up to significantly larger problems.\nYear 2 programmatic evaluation goals call for solving problems on the order of 100 agents and 10,000 methods.\nThis scale places much higher computational demands on all of the agent's components.\nWe have recently completed a re-implementation of the prototype agent designed to address some recognized performance issues.\nIn addition to verifying that the performance on Year 1 problems is matched or exceeded, we have recently run some successful tests with the agent on a few 100-agent problems.\nTo fully address various scale-up issues, we are investigating a number of more advanced coordination mechanisms.\nTo provide more global perspective to local scheduling decisions, we are introducing mechanisms for computing, communicating and using estimates of the non-local impact of remote nodes.\nTo better address the problem of establishing inter-agent synchronization points, we are expanding the use of task owners and qaf-specific protocols as a means for directing coordination activity.\nFinally, we plan to explore the use of more advanced STN-driven coordination mechanisms, including the use of temporal decoupling [7] to insulate the actions of inter-dependent agents and the introduction of probability-sensitive contingency schedules.","lvl-2":"Distributed Management of Flexible Times Schedules\nABSTRACT\nWe consider the problem of managing schedules in an uncertain, distributed environment.\nWe assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution.\nThe goal is to maximize the joint quality obtained from the activities executed by all agents, given that, during execution, 
unexpected events will force changes to some prescribed activities and reduce the utility of executing others.\nWe describe an agent architecture for solving this problem that couples two basic mechanisms: (1) a \"flexible times\" representation of the agent's schedule (using a Simple Temporal Network) and (2) an incremental rescheduling procedure.\nThe former hedges against temporal uncertainty by allowing execution to proceed from a set of feasible solutions, and the latter acts to revise the agent's schedule when execution is forced outside of this set of solutions or when execution events reduce the expected value of this feasible solution set.\nBasic coordination with other agents is achieved simply by communicating schedule changes to those agents with inter-dependent activities.\nThen, as time permits, the core local problem solving infra-structure is used to drive an inter-agent option generation and query process, aimed at identifying opportunities for solution improvement through joint change.\nUsing a simulator to model the environment, we compare the performance of our multi-agent system with that of an expected optimal (but non-scalable) centralized MDP solver.\n1.\nINTRODUCTION\nThe practical constraints of many application environments require distributed management of executing plans and schedules.\nSuch factors as geographical separation of executing agents, limitations on communication bandwidth, constraints relating to chain of command and the high tempo of execution dynamics may all preclude any single agent from obtaining a complete global view of the problem, and hence necessitate collaborative yet localized planning and scheduling decisions.\nIn this paper, we consider the problem of managing and executing schedules in an uncertain and distributed environment as defined by the DARPA Coordinators program.\nWe assume a team of collaborative agents, each responsible for executing a portion of a globally preestablished schedule, but none 
possessing a global view of either the problem or solution.\nThe team goal is to maximize the total quality of all activities executed by all agents, given that unexpected events will force changes to pre-scheduled activities and alter the utility of executing others as execution unfolds.\nTo provide a basis for distributed coordination, each agent is aware of dependencies between its scheduled activities and those of other agents.\nEach agent is also given a pre-computed set of local contingency (fall-back) options.\nCentral to our approach to solving this multi-agent problem is an incremental flexible-times scheduling framework.\nIn a flexible-times representation of an agent's schedule, the execution intervals associated with scheduled activities are not fixed, but instead are allowed to float within imposed time and activity sequencing constraints.\nThis representation allows the explicit use of slack as a hedge against simple forms of executional uncertainty (e.g., activity durations), and its underlying implementation as a Simple Temporal Network (STN) model provides efficient updating and consistency enforcement mechanisms.\nThe advantages of flexible times frameworks have been demonstrated in various centralized planning and scheduling contexts (e.g., [12, 8, 9, 10, 11]).\nHowever, their use in distributed problem solving settings has been quite sparse ([7] is one exception), and prior approaches to multi-agent scheduling (e.g., [6, 13, 5]) have generally operated with fixed-times representations of agent schedules.\nWe define an agent architecture centered around incremental management of a flexible times schedule.\nThe underlying STN-based representation is used (1) to loosen the coupling between executor and scheduler threads, (2) to retain a basic ability to absorb unexpected executional delays (or speedups), and (3) to provide a basic criterion for detecting the need for schedule change.\nFigure 1: A two agent C TAEMS problem.\nLocal change is accomplished by an incremental scheduler, designed to maximize quality while attempting to minimize schedule change.\nTo this schedule management infra-structure, we add two mechanisms for multi-agent coordination.\nBasic coordination with other agents is achieved by simple communication of local schedule changes to other agents with interdependent activities.\nLayered over this is a non-local option generation and evaluation process (similar in some respects to [5]), aimed at identification of opportunities for global improvement through joint changes to the schedules of multiple agents.\nThis latter process uses analysis of detected conflicts in the STN as a basis for generating options.\nThe remainder of the paper is organized as follows.\nWe begin by briefly summarizing the general distributed scheduling problem of interest in our work.\nNext, we introduce the agent architecture we have developed to solve this problem and sketch its operation.\nIn the following sections, we describe the components of the architecture in more detail, considering in turn issues relating to executing agent schedules, incrementally revising agent schedules and coordinating schedule changes among multiple agents.\nWe then give some experimental results to indicate current system performance.\nFinally, we conclude with a brief discussion of current research plans.\n2.\nTHE COORDINATORS PROBLEM\nAs indicated above, the distributed schedule management problem that we address in this paper is that put forth by the DARPA Coordinators program.\nThe Coordinators problem is concerned generally with the collaborative execution of a joint mission by a team of agents in a highly dynamic environment.\nA mission is formulated as a network of tasks, which are distributed among the agents by the MASS simulator such that no agent has a complete, \"objective\" view of the whole problem.\nInstead, each agent receives only a \"subjective view\" containing just the portion of the task network 
that relates to ground tasks that it is responsible for and any remote tasks that have interdependencies with these local tasks.\nA pre-computed initial schedule is also distributed to the agents, and each agent's schedule indicates which of its local tasks should be executed and when.\nEach task has an associated quality value which accrues if it is successfully executed within its constraints, and the overall goal is to maximize the quality obtained during execution.\nFigure 2: Subjective view for Agent 2.\nAs execution proceeds, agents must react to unexpected results (e.g., task delays, failures) and changes to the mission (e.g., new tasks, deadline changes) generated by the simulator, recognize when scheduled tasks are no longer feasible or desirable, and coordinate with each other to take corrective, quality-maximizing rescheduling actions that keep execution of the overall mission moving forward.\nProblems are formally specified using a version of the TAEMS language (Task Analysis, Environment Modeling and Simulation) [4] called C TAEMS [1].\nWithin C TAEMS, tasks are represented hierarchically, as shown in the example in Figure 1.\nAt the highest, most abstract level, the root of the tree is a special task called the \"task group\".\nOn successive levels, \"tasks\" constitute aggregate activities, which can be decomposed into sets of subtasks and\/or primitive activities, termed \"methods.\"\nMethods appear at the leaf level of C TAEMS task structures and are those that are directly executable in the world.\nEach declared method m can only be executed by a specified agent (denoted by ag: AgentN in Figure 1) and each agent can be executing at most one method at any given time (i.e. 
agents are unit-capacity resources).\nMethod durations and quality are typically specified as discrete probability distributions, and hence known with certainty only after they have been executed.\nIt is also possible for a method to fail unexpectedly in execution, in which case the reported quality is zero.\nFor each task, a quality accumulation function qaf is defined, which specifies when and how a task accumulates quality as its subtasks (methods) are executed.\nFor example, a task with a min qaf will accrue the quality of its child with lowest quality if all its children execute and accumulate positive quality.\nTasks with sum or max qafs acquire quality as soon as one child executes with positive quality; as their qaf names suggest, their respective values ultimately will be the total or maximum quality of all children that executed.\nA sync-sum task will accrue quality only for those children that commence execution concurrently with the first child that executes, while an exactly-one task accrues quality only if precisely one of its children executes.\nInter-dependencies between tasks\/methods in the problem are modeled via non-local effects (nles).\nTwo types of nles can be specified: \"hard\" and \"soft.\"\nHard nles express
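The qaf semantics described above can be sketched in a few lines. This is an illustrative reconstruction for the example only (the function name and data layout are invented here, not taken from the C TAEMS specification):

```python
# Illustrative sketch of quality accumulation functions (qafs) as described
# in the text; names and data layout are invented for this example.

def task_quality(qaf, child_qualities, started_with_first=None):
    """Compute a task's quality from the qualities of its executed children.

    child_qualities: reported quality of each child (0 = failed/unexecuted).
    started_with_first: for sync-sum only, flags marking children that began
    execution concurrently with the first child to execute.
    """
    executed = [q for q in child_qualities if q > 0]
    if qaf == "min":
        # Lowest child quality, but only if *all* children accrued quality.
        if child_qualities and len(executed) == len(child_qualities):
            return min(child_qualities)
        return 0
    if qaf == "sum":
        return sum(executed)                 # total over executed children
    if qaf == "max":
        return max(executed, default=0)      # best single child
    if qaf == "sync_sum":
        # Only children that commenced with the first executing child count.
        return sum(q for q, sync in zip(child_qualities, started_with_first)
                   if q > 0 and sync)
    if qaf == "exactly_one":
        return executed[0] if len(executed) == 1 else 0
    raise ValueError(f"unknown qaf: {qaf}")
```

A parent task's quality is then obtained by applying `task_quality` bottom-up over the hierarchy, which is what the quality propagator described later does over the full task structure.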
causal preconditions: for example, the enables nle in Figure 1 stipulates that the target method M5 cannot be executed until the source M4 accumulates quality.\nSoft nles, which include facilitates and hinders, are not required constraints; however, when they are in play, they amplify (or dampen) the quality and duration of the target task.\nAny given task or method a can also be constrained by an earliest start time and a deadline, specifying the window in which a can be feasibly executed.\na may also inherit these constraints from ancestor tasks at any higher level in the task structure, and its effective execution window will be defined by the tightest of these constraints.\nFigure 1 shows the complete \"objective\" view of a simple 2-agent problem.\nFigure 2 shows the subjective view available to agent 2 for the same problem.\nIn what follows, we will sometimes use the term activity to refer generically to both task and method nodes.\n3.\nOVERVIEW OF APPROACH\nOur solution framework combines two basic principles for coping with the problem of managing multi-agent schedules in an uncertain and time-stressed execution environment.\nFirst is the use of an STN-based flexible times representation of solution constraints, which allows execution to be driven by a \"set\" of schedules rather than a single point solution.\nThis provides a basic hedge against temporal uncertainty and can be used to modulate the need for solution revision.\nThe second principle is to first respond locally to exceptional events, and then, as time permits, explore non-local options (i.e., options involving change by 2 or more agents) for global solution improvement.\nThis provides a means for keeping pace with execution, and for tying the amount of effort spent in more global multi-agent solution improvement to the time available.\nBoth local and non-local problem solving time is further minimized by the use of a 
core incremental scheduling procedure.\nFigure 3: Agent Architecture.\nOur solution framework is made concrete in the agent architecture depicted in Figure 3.\nIn its most basic form, an agent comprises four principal components - an Executor, a Scheduler, a Distributed State Manager (DSM), and an Options Manager - all of which share a common model of the current problem and solution state that couples a domain-level representation of the subjective C TAEMS task structure to an underlying STN.\nAt any point during operation, the currently installed schedule dictates the timing and sequence of domain-level activities that will be initiated by the agent.\nThe Executor, running in its own thread, continually monitors the enabling conditions of various pending activities, and activates the next pending activity as soon as all of its causal and temporal constraints are satisfied.\nWhen execution results are received back from the environment (MASS) and\/or changes to assumed external constraints are received from other agents, the agent's model of current state is updated.\nIn cases where this update leads to inconsistency in the STN or it is otherwise recognized that the current local schedule might now be improved, the Scheduler, running on a separate thread, is invoked to revise the current solution and install a new schedule.\nWhenever local schedule constraints change either in response to a current state update or through manipulation by the Scheduler, the DSM is invoked to communicate these changes to interested agents (i.e., those agents that share dependencies and have overlapping subjective views).\nAfter responding locally to a given state update and communicating consequences, the agent will use any remaining computation time to explore possibilities for improvement through joint change.\nThe Option Manager utilizes the Scheduler (in this case in hypothetical mode) to generate one or more non-local options, i.e., identifying changes to the schedule of one or 
more other agents that will enable the local agent to raise the quality of its schedule.\nThese options are formulated and communicated as queries to the appropriate remote agents, who in turn hypothetically evaluate the impact of proposed changes from their local perspective.\nIn those cases where global improvement is verified, joint changes are committed to.\nIn the following sections we consider the mechanics of these components in more detail.\n4.\nTHE SCHEDULER\nAs indicated above, our agent scheduler operates incrementally.\nIncremental scheduling frameworks are ideally suited for domains requiring tight scheduler-execution coupling: rather than recomputing a new schedule in response to every change, they respond quickly to execution events by localizing changes and making adjustments to the current schedule to accommodate the event.\nThere is an inherent bias toward schedule stability, which provides better support for the continuity in execution.\nThis latter property is also advantageous in multi-agent settings, since solution stability tends to minimize the ripple across different agents' schedules.\nThe coupling of incremental scheduling with flexible times scheduling adds additional leverage in an uncertain, multi-agent execution environment.\nAs mentioned earlier, slack can be used as a hedge against uncertain method execution times.\nIt also provides a basis for softening the impact of inter-dependencies across agents.\nIn this section, we summarize the core scheduler that we have developed to solve the Coordinators problem.\nIn subsequent sections we discuss its use in managing execution and coordinating with other agents.\n4.1 STN Solution Representation\nTo maintain the range of admissible values for the start and end times of various methods in a given agent's schedule, all problem and scheduling constraints impacting these times are encoded in an underlying Simple Temporal Network (STN) [3].\nAn STN represents temporal constraints as a graph G = (N, E), where nodes in N represent the set of time points of interest, and edges in E are distances between pairs of time points in N.\nA special time point, called calendar zero, grounds the network and has the value 0.\nConstraints on activities (e.g. release time, due time, duration) and relationships between activities (e.g. parent-child relation, enables) are uniformly represented as temporal constraints (i.e., edges) between relevant start and finish time points.\nAn agent's schedule is designated as a total ordering of selected methods by posting precedence constraints between the end and start points of each ordered pair.\nAs new methods are inserted into a schedule or external state updates require adjustments to existing constraints (e.g., substitution of an actual duration constraint, tightening of a deadline), the network propagates constraints and maintains lower and upper bounds on all time points in the network.\nThis is accomplished efficiently via the use of a standard all-pairs shortest path algorithm; in our implementation, we take advantage of an incremental procedure based on [2].\nAs bounds are updated, a consistency check is made for the presence of negative cycles, and the absence of any such cycle ensures the continued temporal feasibility of the network (and hence the schedule).\nOtherwise, a conflict has been detected, and some amount of constraint retraction is necessary to restore feasibility.\n4.2 Maintaining High-Quality Schedules\nThe scheduler consists of two basic components: a quality propagator and an activity allocator that work in a tightly integrated loop.\nThe quality propagator analyzes the activity hierarchy and collects a set of methods that (if scheduled) would maximize the 
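The STN machinery just described can be illustrated with a minimal feasibility check: time points are nodes, each constraint t_v - t_u &lt;= w is a weighted edge, and the network is consistent exactly when the graph contains no negative cycle. This sketch uses a plain Bellman-Ford pass rather than the incremental propagation procedure based on [2] that the implementation actually uses; all names are invented for the example:

```python
# Minimal STN consistency check: nodes are time points, an edge (u, v, w)
# encodes the constraint  t_v - t_u <= w,  and the STN is temporally
# feasible iff the constraint graph has no negative cycle.  This is a
# plain Bellman-Ford sketch, not the incremental algorithm used in the
# paper's implementation.

def stn_consistent(num_points, edges):
    """edges: list of (u, v, w) meaning t_v - t_u <= w.
    Node 0 plays the role of the grounding 'calendar zero' time point."""
    dist = [0] * num_points            # virtual source reaching every node
    for _ in range(num_points - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more relaxation pass: any further improvement implies a
    # negative cycle, i.e., an over-constrained (infeasible) network.
    return all(dist[u] + w >= dist[v] for u, v, w in edges)
```

For instance, a method whose duration must lie in [2, 5] between its start point 1 and end point 2 contributes edges `(1, 2, 5)` and `(2, 1, -2)`; tightening the window so the end must come within 1 time unit of the start (edge `(1, 2, 1)`) creates a negative cycle, which is the conflict signal that triggers constraint retraction.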
quality of the agent's local problem.\nThe methods are collected without regard for resource contention; in essence, the quality propagator optimally solves a relaxed problem where agents are capable of performing an infinite number of activities at once.\nThe allocator selects methods from this list and attempts to install them in the agent's schedule.\nFailure to do so reinvokes the quality propagator with the problematic activity excluded.\nThe Quality Propagator - The quality propagator performs the following actions on the C TAEMS task structure:\n\u2022 Computes the quality of all activities in the task structure: The expected quality qual (m) of a method m is computed from the probability distribution of the execution outcomes.\nThe quality qual (t) of a task t is computed by applying its qaf to the assessed quality of its children.\n\u2022 Generates a list of contributors for each task: methods that, if scheduled, will maximize the quality obtained by the task.\n\u2022 Generates a list of activators for each task: methods that, if scheduled, are sufficient to qualify the task as scheduled.\nMethods in the activators list are chosen to minimize demands on the agent's timeline without regard to quality.\nThe first time the quality propagator is invoked, the qualities of all tasks and methods are calculated and the initial lists of contributors and activators are determined.\nSubsequent calls to the propagator occur as the allocator installs methods on the agent's timeline: failure of the allocator to install a method causes the propagator to recompute a new list of contributors and activators.\nThe Activity Allocator - The activity allocator seeks to install the contributors of the taskgroup identified by the quality propagator onto the agent's timeline.\nAny currently scheduled methods that do not appear in the contributors list are first unscheduled and removed from the timeline.\nThe contributors are then preprocessed using a quality-centric heuristic to 
create an agenda sorted in decreasing quality order.\nIn addition, methods associated with an \"and\" task (i.e., min, sum_and) are grouped consecutively within the agenda.\nSince an \"and\" task accumulates quality only if all its children are scheduled, this biases the scheduling process towards failing early (and regenerating contributors) when the methods chosen for the \"and\" cannot together be allocated.\nThe allocator iteratively pops the first method mnew from the agenda and attempts to install it.\nThis entails first checking that all activities that enable mnew have been scheduled, while attempting to install any enabler that is not.\nIf any of the enabler activities fails to install, the allocation pass fails.\nWhen successful, the enables constraints linking the enabler activities to mnew are activated.\nThe STN rejects an infeasible enabler constraint by returning a conflict.\nIn this event, any enabler activities it has scheduled are uninstalled and the allocator returns failure.\nOnce scheduling of enablers is ensured, a feasible slot on the agent's timeline within mnew's time window is sought and the allocator attempts to insert mnew between two currently scheduled methods.\nAt the STN level, mnew's insertion breaks the sequencing constraint between the two extant timeline methods and attempts to insert two new sequencing constraints that chain mnew to these methods.\nIf these insertions succeed, the routine returns success, otherwise the two extant timeline methods are relinked and allocation attempts the next possible slot for mnew insertion.\n5.\nTHE DYNAMICS OF EXECUTION\nMaintaining a flexible-times schedule enables us to use a conflict-driven approach to schedule repair: Rather than reacting to every event in the execution that may impact the existing schedule by computing an updated solution, the STN can absorb any change that does not cause a conflict.\nConsequently, computation (producing a new schedule) and communication costs (informing 
other agents of changes that affect them) are minimized.\nOne basic mechanism needed to model execution in the STN is a dynamic model of current time.\nWe employ a model proposed by [7] that establishes a 'current-time' time point and includes a link between it and the calendar-zero time point.\nAs each method is scheduled, a simple precedence constraint between the current-time time point and the method is established.\nWhen the scheduler receives a current time update, the link between calendar-zero and current-time is modified to reflect this new time, and the constraint propagates to all scheduled methods.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 487\nA second issue concerns synchronization between the executor and the scheduler, as producer and consumer of the schedule running on different threads within a given agent.\nThis coordination must be robust despite the fact that the executor needs to start methods for execution in real-time even while the scheduler may be reassessing the schedule to maximize quality, and\/or transmitting a revised schedule.\nIf the executor, for example, slates a method for execution based on current time while the scheduler is instantiating a revised schedule in which that method is no longer next-to-be-executed, an inconsistent state may arise within the agent architecture.\nThis is addressed in part by introducing a \"freeze window\": a specified short (and adjustable) time period beyond current time within which any activity slated as eligible to start in the current schedule cannot be rescheduled by the scheduler.\nThe scheduler is triggered in response to various environmental messages.\nThere are two classes of environmental messages that we discuss here as \"execution dynamics\": 1) feedback as a result of method execution - both the agent's own and that of other agents, and 2) changes in the C_TAEMS model corresponding to a set of simulator-directed evolutions of the
problem and environment.\nSuch messages are termed updates and are treated by the scheduler as directives to permanently modify parameters in its model.\nWe discuss these update types in turn here and defer until later the discussion of queries to the scheduler, a 'what-if' mode initiated by a remote agent that is pursuing higher global quality.\nWhether it is invoked via an update or a query, the scheduler's response is an option: essentially a complete schedule of activities the agent can execute, along with associated quality metrics.\nWe define a local option as a valid schedule for an agent's activities that does not require change to any other agent's schedule.\nThe overarching design for handling execution dynamics aims at anytime scheduling behavior in which a local option maximizing the local view of quality is returned quickly, possibly followed by globally higher quality schedules that entail inter-agent coordination, if available scheduler cycles permit.\nAs such, the default scheduling mode for updates is to seek the highest quality local option according to the scheduler's search strategy, instantiate the option as its current schedule, and notify the executor of the revision.\n5.1 Responding to Activity Execution\nAs suggested earlier, a committed schedule consists of a sequence of methods, each with a designated [est, lst] start time window (as provided by the underlying STN representation).\nThe executor is free to execute a method at any time within its start time window, once any additional enabling conditions have been confirmed.\nThese scheduled start time windows are established using the expected duration of each scheduled method (derived from associated method duration distributions during schedule construction).\nOf course, as execution unfolds, actual method durations may deviate from these expectations.\nIn these cases, the flexibility retained in the schedule can be used to absorb some of this unpredictability and modulate invocation of a
schedule revision process.\nConsider the case of a method completion message, one of the environmental messages that can be communicated to the scheduler as an execution state update.\nIf the completion time is coincident with the expected duration (i.e., the method completes exactly as expected), then the scheduler's response is simply to mark it as 'completed', and the agent can proceed to communicate the time at which it has accumulated quality to any remote agents linked to this method.\nHowever, if the method completes with a duration shorter than expected, a rescheduling action might be warranted.\nThe posting of the actual duration in the STN introduces no potential for conflict in this case, either with the latest start times (lsts) of local or remote methods that depend on this method as an enabler, or with successively scheduled methods on the agent's timeline.\nHowever, it may present a possibility for exploiting the unanticipated scheduling slack.\nThe flexible-times representation afforded by the STN provides a quick means of assessing whether the next method on the timeline can begin immediate execution instead of waiting for its previously established earliest start time (est).\nIf indeed the est of the next scheduled method can \"spring back\" to current-time once the actual duration constraint is substituted for the expected duration constraint, then the schedule can be left intact and simply communicated back to the executor.\nIf, alternatively, other problem constraints prevent this relaxation of the est, then there is forced idle time that may be exploited by revising the schedule, and the scheduler is invoked (always respecting the freeze period).\nIf the method completes later than expected, then there is no need for rescheduling under flexible-times scheduling unless 1) the method finishes later than the lst of the subsequent scheduled activity, or 2) it finishes later than its deadline.\nThus we only invoke the scheduler if, upon posting the late
finish in the STN, a constraint violation occurs.\nIn the latter case no quality is accrued, and rescheduling is mandated even if there are no conflicts with subsequent scheduled activities.\nOther execution status updates the agent may receive include:\n\u2022 method start - If a method sent for execution is started within its [est, lst] window, the response is to mark it as 'executing'.\nA method cannot start earlier than when it is transmitted by the executor, but it is possible for it to start later than requested.\nIf the posted start time causes an inconsistency in the STN (e.g., because the expected method duration can no longer be accommodated), the duration constraint in the STN is shortened based on the known distribution until either consistency is restored or rescheduling is mandated.\n\u2022 method failure - Any method under execution may fail unexpectedly, garnering no quality for the agent.\nAt this point rescheduling is mandated, as the method may enable other activities or significantly impact quality in the absence of local repair.\nAgain, the executor will proceed with execution of the next method if its start time arrives before the revised schedule is committed, and the scheduler accommodates this by respecting the freeze window.\n\u2022 current time advances - An update on 'current time' may arrive either alone or as part of any of the previously discussed updates.\nIf, when updating the current-time link in the STN (as described above), a conflict results, the execution state is inconsistent with the schedule.\nIn this case, the scheduler proceeds as if execution were consistent with its expectations, subject to possible later updates.
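The completion-handling policy above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the real system posts constraints to the STN and checks for conflicts via propagation, whereas here `needs_reschedule`, the dictionary-based method record, and the `next_est`/`next_lst` stand-ins for STN bounds are all assumed names.

```python
# Illustrative sketch of when an execution update forces schedule revision
# (hypothetical names; the actual system delegates these checks to the STN).
def needs_reschedule(update, method, next_est=None, next_lst=None):
    """Return True if an execution update mandates rescheduling.

    `method` holds the scheduled start, expected duration, and deadline;
    `next_est`/`next_lst` approximate STN bounds on the next timeline method.
    """
    kind = update["type"]
    if kind == "failure":
        return True  # no quality accrued; repair is always mandated
    if kind == "completion":
        t = update["time"]
        expected_finish = method["start"] + method["expected_duration"]
        if t > method["deadline"]:
            return True  # late finish past deadline: reschedule regardless
        if t > expected_finish:
            # Late but feasible: reschedule only if the subsequent
            # activity's latest start time is violated.
            return next_lst is not None and t > next_lst
        if t < expected_finish:
            # Early finish: reschedule only if constraints prevent the next
            # method's est from "springing back" to current time (idle time).
            return next_est is not None and next_est > t
        return False  # completed exactly as expected: just mark 'completed'
    return False  # e.g. a bare current-time advance without an STN conflict
```

For example, a method expected to run from 0 to 10 that finishes at 7 triggers rescheduling only when the next method's earliest start cannot spring back to 7.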
5.2 Responding to Model Updates\nThe agent can also dynamically receive changes to its underlying C_TAEMS model.\nDynamic revisions in the outcome distributions for methods already in an agent's subjective view may impact the assessed quality and\/or duration values that shaped the current schedule.\nSimilarly, dynamic revisions in the designated release times and deadlines for methods and tasks already in an agent's subjective view can invalidate an extant schedule or present opportunities to boost quality.\nIt is also possible during execution to receive updates in which new methods and possibly entire task structures are given to the agent for inclusion in its subjective view.\nModel changes that involve temporal constraints are handled in much the same fashion as described for method starts and completions, i.e., rescheduling is required only when the posting of the revised constraints leads to an STN conflict.\nIn the case of non-temporal model changes, rescheduling is currently always initiated.\n6.\nINTER-AGENT COORDINATION\nHaving responded locally to an unexpected execution result or model change, it is necessary to communicate the consequences to agents with inter-dependent activities so that they can align their decisions accordingly.\nResponses that look good locally may have a sub-optimal global effect once alignments are made, and hence agents must have the ability to seek mutually beneficial joint schedule changes.\nIn this section we summarize the coordination mechanisms provided in the agent architecture to address these issues.\n6.1 Communicating Non-Local Constraints\nA basic means of coordination with other agents is provided by the Distributed State Mechanism (DSM), which is responsible for communicating changes made to the model or schedule of a given agent to other \"interested\" agents.\nMore specifically, the DSM of a given agent acts to push any changes
made to the time bounds, quality, or status of a local task\/method to all the other agents that have that same task\/method as a remote node in their subjective views.\nA recipient agent treats any communicated changes as additional forms of updates, in this case updates that modify the current constraints associated with non-local (but inter-dependent) tasks or methods.\nThese changes are handled identically to updates reflecting schedule execution results, potentially triggering the local scheduler if the need to reschedule is detected.\n6.2 Generating Non-Local Options\nAs mentioned in the previous section, the agent's first response to any given query or update (either from execution or from another agent) is to generate one or more local options.\nSuch options represent local schedule changes that are consistent with all currently known constraints originating from other agents' schedules, and hence can be implemented without interaction with other agents.\nIn many cases, however, a larger-scoped change to the schedules of two or more agents can produce a higher-quality response.\nExploration of opportunities for such coordinated action by two or more agents is the responsibility of the Options Manager.\nRunning at lower priority than the Executor and Scheduler, the Options Manager initiates a non-local option generation and evaluation process in response to any local schedule change made by the agent, if computation time constraints permit.\nGenerally speaking, a non-local option identifies certain relaxations (to one or more constraints imposed by methods that are scheduled by one or more remote agents) that enable the generation of a higher quality local schedule.\nWhen found, a non-local option is used by a coordinating agent to formulate queries to any other involved agents in order to determine the impact of such constraint relaxations on their local schedules.\nIf the combined quality change reported back from a set of one or more relevant
queries is a net gain, then the issuing agent signals to the other involved agents to commit to this joint set of schedule changes.\nThe Options Manager currently employs two basic search strategies for generating non-local options, each exploiting the local scheduler in hypothetical mode.\nOptimistic Synchronization - Optimistic synchronization is a non-local option generation strategy in which search is used to explore the impact on quality if optimistic assumptions are made about currently unscheduled remote enablers.\nMore specifically, the strategy looks for \"would be\" contributor methods that are currently unscheduled because one or more remote enabling (source) tasks or methods are not currently scheduled.\nFor each such local method, the set of remote enablers is hypothetically activated, and the scheduler attempts to construct a new local schedule under these optimistic assumptions.\nIf successful, a non-local option is generated, specifying the value of the new, higher quality local schedule, the temporal constraints on the local target activity, and the set of enabler activities that remote agents must schedule in order to achieve this local quality.\nThe needed queries requesting the quality impact of scheduling these activities are then formulated and sent to the relevant remote agents.\nTo illustrate, consider again the example in Figure 1.\nThe maximum quality that Agent1 can contribute to the task group is 15 (by scheduling M1, M2 and M3).\nAssume that this is Agent1's current schedule.\nGiven this state, the maximum quality that Agent2 can contribute to the task group is 10, and the total task group quality would then be 15 + 10 = 25.\nUsing optimistic synchronization, Agent2 will generate a non-local option indicating that if M5 becomes enabled, both M5 and M6 would be scheduled, and the quality contributed by Agent2 to the task group would become 30.\nAgent2 sends a \"must schedule M4\" query to Agent1.\nBecause
of the time window constraints, Agent1 must remove M3 from its schedule to get M4 on, resulting in a new lower quality schedule of 5.\nHowever, when Agent2 receives this option response from Agent1, it determines that the total quality accumulated for the task group would be 5 + 30 = 35, a net gain of 10.\nHence, Agent2 signals to Agent1 to commit to this non-local option.\nConflict-Directed Relaxation - A second strategy for generating non-local options, referred to as Conflict-Directed Relaxation, utilizes analysis of STN conflicts to identify and prioritize external constraints to relax in the event that a particular method that would increase local quality is found to be unschedulable.\nRecall that if a method cannot be feasibly inserted into the schedule, an attempt to do so will generate a negative cycle.\nGiven this cycle, the mechanism proceeds in three steps.\nFirst, the constraints involved in the cycle are collected.\nSecond, by virtue of the connections in the STN to the domain-level C_TAEMS model, this set is filtered to identify the subset associated with remote nodes.\nThird, constraints in this subset are selectively retracted to determine if STN consistency is restored.\nIf successful, a non-local option is generated indicating which remote constraint(s) must be relaxed, and by how much, to allow installation of the new, higher quality local schedule.\nFigure 4: A high quality task is added to the task structure of Agent2.\nFigure 5: If M4, M5 and M7 are scheduled, a conflict is detected by the STN.\nTo illustrate this strategy, consider Figure 5, where Agent1 has M1, M2 and M4 on its timeline, and therefore est(M4) = 21.\nAgent2 has M5 and M6 on its timeline, with est(M5) = 31 (M6 could be scheduled before or after M5).\nSuppose that Agent2 receives a new task M7 with deadline 55 (see Figure 4).\nIf Agent2 could schedule M7, the quality contributed by Agent2 to the task group would be 70.\nHowever, an attempt to schedule M7 together with M5 and M6 leads to a conflict, since est(M7) = 46, dur(M7) = 10 and lft(M7) = 55 (see Figure 5).\nConflict-directed relaxation by Agent2 suggests relaxing lft(M4) by 1 tick to 30, and this query is communicated to Agent1.\nIn fact, by retracting either method M1 or M2 from the schedule, this relaxation can be accommodated with no quality loss to Agent1 (due to the min qaf).\nUpon communication of this fact, Agent2 signals to commit.\n7.\nEXPERIMENTAL RESULTS\nAn initial version of the agent described in this paper was developed in collaboration with SRI International and subjected to the independently conducted Coordinators programmatic evaluation.\nThis evaluation involved over 2000 problem instances randomly generated by a scenario generator that was configured to produce scenarios of varying durations within six experiment classes.\nTable 1: Performance of year 1 agent over Coordinators evaluation.\n'Agent Quality' is % of 'optimal'.\nThese classes, summarized in Table 1, were designed to evaluate key aspects of a set of Coordinators distributed scheduling
agents, such as their ability to handle unexpected execution results, chains of NLEs involving multiple agents, and effective scheduling of new activities that arise unexpectedly at some point during the problem run.\nYear 1 evaluation problems were constrained to be small enough (3-10 agents, 50-100 methods) that comparison against an optimal centralized solver was feasible.\nThe evaluation team employed an MDP-based solver capable of unrolling the entire search space for these problems, choosing for an agent at each execution decision point the activity most likely to produce maximum global quality.\nThis established a challenging benchmark for the distributed agent systems to compare against.\nThe hardware configuration used by the evaluators instantiated and ran one agent per machine, dedicating a separate machine to the MASS simulator.\nAs reported in Table 1, the year 1 prototype agent clearly compares favorably to the benchmark on all classes, coming within 2% of the MDP optimal averaged over the entire set of 2190 problems.\nThese results are particularly notable given that each agent's STN-based scheduler does very little reasoning over the success probability of the activity sequences it selects to execute.\nOnly simple tactics were adopted to explicitly address such uncertainty, such as the use of expected durations and qualities for activities and a policy of excluding from consideration those activities with a failure likelihood of > 75%.\nThe very respectable agent performance can be at least partially credited to the fact that the flexible-times representation employed by the scheduler affords it an important buffer against the uncertainty of execution and exogenous events.\nThe agent turns in its lowest performance on the TT (Temporal Tightness) experiment classes, and an examination of the agent trace logs reveals possible reasons.\nIn about half of the TT problems on which the year 1 agent under-performs, the specified time windows within which an
agent's activities must be scheduled are so tight that any scheduled activity that executes with a longer duration than expected causes a deadline failure.\nThis constitutes a case where more sophisticated reasoning over success probability would benefit the agent.\nThe other half of the under-performing TT problems involve activities that depend on facilitation relationships in order to fit in their time windows (recall that facilitation increases quality and decreases duration).\nThe limited facilitates reasoning performed by the year 1 scheduler sometimes causes failures to install a heavily facilitated initial schedule.\nEven when such activities are successfully installed, they tend to be prone to deadline failures: if a source-side activity either fails or exceeds its expected duration, the resulting longer duration of the target activity can violate its time window deadline.\n8.\nSTATUS AND DIRECTIONS\nOur current research efforts are aimed at extending the capabilities of the Year 1 agent and scaling up to significantly larger problems.\nYear 2 programmatic evaluation goals call for solving problems on the order of 100 agents and 10,000 methods.\nThis scale places much higher computational demands on all of the agent's components.\nWe have recently completed a re-implementation of the prototype agent designed to address some recognized performance issues.\nIn addition to verifying that performance on Year 1 problems is matched or exceeded, we have recently run some successful tests with the agent on a few 100 agent problems.\nTo fully address various scale-up issues, we are investigating a number of more advanced coordination mechanisms.\nTo provide a more global perspective to local scheduling decisions, we are introducing mechanisms for computing, communicating and using estimates of the non-local impact of remote nodes.\nTo better address the
problem of establishing inter-agent synchronization points, we are expanding the use of task owners and qaf-specific protocols as a means for directing coordination activity.\nFinally, we plan to explore the use of more advanced STN-driven coordination mechanisms, including the use of temporal decoupling [7] to insulate the actions of inter-dependent agents and the introduction of probability-sensitive contingency schedules.","keyphrases":["manag","flexibl time","schedul","manag schedul","distribut environ","agent architectur","inter-depend activ","perform","geograph separ","central plan","schedul-execut","slack","shortest path algorithm","activ alloc","conflict-driven approach","optimist synchron","inter-agent coordin","multi-agent schedul"],"prmu":["P","P","P","P","P","P","P","P","U","M","U","U","U","M","U","U","M","M"]} {"id":"I-14","title":"A Reinforcement Learning based Distributed Search Algorithm For Hierarchical Peer-to-Peer Information Retrieval Systems","abstract":"The dominant existing routing strategies employed in peer-to-peer (P2P) based information retrieval (IR) systems are similarity-based approaches. In these approaches, agents depend on the content similarity between incoming queries and their direct neighboring agents to direct the distributed search sessions. However, such a heuristic is myopic in that the neighboring agents may not be connected to more relevant agents. In this paper, an online reinforcement-learning based approach is developed to take advantage of the dynamic run-time characteristics of P2P IR systems as represented by information about past search sessions. Specifically, agents maintain estimates on the downstream agents' abilities to provide relevant documents for incoming queries. These estimates are updated gradually by learning from the feedback information returned from previous search sessions. Based on this information, the agents derive corresponding routing policies.
Thereafter, these agents route the queries based on the learned policies and update the estimates based on the new routing policies. Experimental results demonstrate that the learning algorithm improves considerably the routing performance on two test collection sets that have been used in a variety of distributed IR studies.","lvl-1":"A Reinforcement Learning based Distributed Search Algorithm For Hierarchical Peer-to-Peer Information Retrieval Systems Haizheng Zhang College of Information Science and Technology Pennsylvania State University University Park, PA 16803 hzhang@ist.psu.edu Victor Lesser Department of Computer Science University Of Massachusetts Amherst, MA 01003 lesser@cs.umass.edu ABSTRACT The dominant existing routing strategies employed in peer-to-peer (P2P) based information retrieval (IR) systems are similarity-based approaches.\nIn these approaches, agents depend on the content similarity between incoming queries and their direct neighboring agents to direct the distributed search sessions.\nHowever, such a heuristic is myopic in that the neighboring agents may not be connected to more relevant agents.\nIn this paper, an online reinforcement-learning based approach is developed to take advantage of the dynamic run-time characteristics of P2P IR systems as represented by information about past search sessions.\nSpecifically, agents maintain estimates on the downstream agents' capabilities of providing relevant documents for specific types of incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on this information, the agents derive corresponding routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates based on the new routing policies.\nExperimental results demonstrate that the learning algorithm improves considerably the routing performance on two test collection sets that have been used in a variety of distributed IR
studies.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems General Terms Algorithms, Performance, Experimentation 1.\nINTRODUCTION Over the last few years there has been increasing interest in studying how to control the search processes in peer-to-peer (P2P) based information retrieval (IR) systems [6, 13, 14, 15].\nIn this line of research, one of the core problems that concerns researchers is to efficiently route user queries in the network to agents that are in possession of appropriate documents.\nIn the absence of global information, the dominant strategies in addressing this problem are content-similarity based approaches [6, 13, 14, 15].\nWhile the content similarity between queries and local nodes appears to be a creditable indicator of the number of relevant documents residing on each node, these approaches are limited by a number of factors.\nFirst of all, similarity-based metrics can be myopic, since locally relevant nodes may not be connected to other relevant nodes.\nSecond, the similarity-based approaches do not take into account the run-time characteristics of the P2P IR systems, including environmental parameters, bandwidth usage, and the historical information of past search sessions, which provide valuable information for the query routing algorithms.\nIn this paper, we develop a reinforcement learning based IR approach for improving the performance of distributed IR search algorithms.\nAgents can acquire better search strategies by collecting and analyzing feedback information from previous search sessions.\nParticularly, agents maintain estimates, namely expected utility, on the downstream agents' capabilities of providing relevant documents for specific types of incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on the updated expected utility information, the agents derive corresponding routing
policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates of the expected utility based on the new routing policies.\nThis process is conducted in an iterative manner.\nThe goal of the learning algorithm, even though it consumes some network bandwidth, is to shorten the routing time so that more queries are processed per time unit while at the same time finding more relevant documents.\nThis contrasts with the content-similarity based approaches, where similar operations are repeated for every incoming query and the processing time remains largely constant over time.\nAnother way of viewing this paper is that our basic approach to distributed IR search is to construct a hierarchical overlay network (agent organization) based on the content-similarity measure among agents' document collections in a bottom-up fashion.\nIn past work, we have shown that this organization improves search performance significantly.\nHowever, this organizational structure does not take into account the arrival patterns of queries, including their frequency, types, and where they enter the system, nor the available communication bandwidth of the network and the processing capabilities of individual agents.\nThe intention of the reinforcement learning is to adapt the agents' routing decisions to the dynamic network situations and learn from past search sessions.\nSpecifically, the contributions of this paper include: (1) a reinforcement learning based approach for agents to acquire satisfactory routing policies based on estimates of the potential contribution of their neighboring agents; (2) two strategies to speed up the learning process.\nTo the best of our knowledge, this is one of the first reinforcement learning applications addressing distributed content sharing problems, and it is indicative of some of the issues in applying reinforcement learning in a complex application.\nThe remainder of this paper is organized as follows: Section 2 reviews the
hierarchical content sharing systems and the two-phase search algorithm based on such topology.\nSection 3 describes a reinforcement learning based approach to direct the routing process; Section 4 details the experimental settings and analyzes the results.\nSection 5 discusses related studies and Section 6 concludes the paper.\n2.\nSEARCH IN HIERARCHICAL P2P IR SYSTEMS This section briefly reviews our basic approaches to hierarchical P2P IR systems.\nIn a hierarchical P2P IR system, illustrated in Fig. 1, agents are connected to each other through three types of links: upward links, downward links, and lateral links.\nIn the following sections, we denote the set of agents that are directly connected to agent Ai as DirectConn(Ai), which is defined as DirectConn(Ai) = NEI(Ai) \u222a PAR(Ai) \u222a CHL(Ai), where NEI(Ai) is the set of neighboring agents connected to Ai through lateral links, PAR(Ai) is the set of agents to whom agent Ai is connected through upward links, and CHL(Ai) is the set of agents that agent Ai connects to through downward links.\nThese links are established through a bottom-up content-similarity based distributed clustering process [15].\nThese links are then used by agents to locate other agents that contain documents relevant to the given queries.\nA typical agent Ai in our system uses two queues: a local search queue, LSi, and a message forwarding queue, MFi.\nThe states of the two queues constitute the internal state of an agent.\nThe local search queue LSi stores search sessions that are scheduled for local processing.\nIt is a priority queue, and agent Ai always selects the most promising queries to process in order to maximize the global utility.\nMFi consists of a set of queries to forward on and is processed in a FIFO (first in first out) fashion.\nFor the first query in MFi, agent Ai determines which subset of its neighboring agents to forward it to based on the agent's routing policy \u03c0i.\nThese routing decisions determine how the
search process is conducted in the network.\nIn this paper, we call Ai Aj's upstream agent and Aj Ai's downstream agent if agent Ai routes a query to agent Aj.\nFigure 1: A fraction of a hierarchical P2P IR system (for the depicted agents, NEI(A2)={A3}, PAR(A2)={A1}, CHL(A2)={A4,A5}).\nThe distributed search protocol of our hierarchical agent organization is composed of two steps.\nIn the first step, upon receipt of a query qk at time tl from a user, agent Ai initiates a search session si by probing its neighboring agents Aj \u2208 NEI(Ai) with the message PROBE for the similarity value Sim(qk, Aj) between qk and Aj.\nHere, Ai is defined as the query initiator of search session si.\nIn the second step, Ai selects a group of the most promising agents to start the actual search process with the message SEARCH.\nThese SEARCH messages contain a TTL (Time To Live) parameter in addition to the query.\nThe TTL value decreases by 1 after each hop.\nIn the search process, agents discard those queries that either have been previously processed or whose TTL drops to 0, which prevents queries from looping in the system forever.\nThe search session ends when all the agents that receive the query drop it or TTL decreases to 0.\nUpon receipt of SEARCH messages for qk, agents schedule local activities including local searching, forwarding qk to their neighbors, and returning search results to the query initiator.\nThis process and related algorithms are detailed in [15, 14].\n3.\nA BASIC REINFORCEMENT LEARNING BASED SEARCH APPROACH In the aforementioned distributed search algorithm, the routing decisions of an agent Ai rely on the similarity comparison between incoming queries and Ai's neighboring agents in order to forward those queries to relevant agents without flooding the network with unnecessary query messages.\nHowever, this heuristic is myopic, because a relevant direct neighbor is not necessarily connected to other relevant agents.\nIn this section, we propose a more
general approach by framing this problem as a reinforcement learning task. In pursuit of greater flexibility, agents can switch between two modes: a learning mode and a non-learning mode. In the non-learning mode, agents operate in the same way as they do in the normal distributed search processes described in [14, 15]. In the learning mode, on the other hand, agents also participate, in parallel with distributed search sessions, in a learning process which is detailed in this section. Note that under the learning protocol, the learning process does not interfere with the distributed search process. Agents can choose to initiate and stop learning processes without affecting system performance. In particular, since the learning process consumes network resources (especially bandwidth), agents can choose to initiate learning only when the network load is relatively low, thus minimizing the extra communication cost incurred by the learning algorithm.

232 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

The section is structured as follows: Section 3.1 describes a reinforcement-learning based model; Section 3.2 describes a protocol to deploy the learning algorithm in the network; Section 3.3 discusses the convergence of the learning algorithm.

3.1 The Model

An agent's routing policy takes the state of a search session as input and outputs the routing actions for that query. In our work, the state of a search session sj is defined as

QSj = (qk, ttlj)

where ttlj is the number of hops that remain for search session sj and qk is the specific query. QL is an attribute of qk that indicates which type of query qk most likely belongs to. The set QL can be generated by running a simple online classification algorithm on all the queries that have been processed by the agents, or an offline algorithm on a pre-designated training set. The assumption here is that the set of query types is learned ahead of time and
belongs to the common knowledge of the agents in the network. Future work includes exploring how learning can be accomplished when this assumption does not hold. Given the set of query types, an incoming query qi can be classified into one query class Q(qi) by the formula:

Q(qi) = argmax_{Qj} P(qi | Qj)    (1)

where P(qi | Qj) indicates the likelihood that the query qi is generated by the query class Qj [8]. The set of atomic routing actions of an agent Ai is denoted {αi}, defined as {αi} = {αi0, αi1, ..., αin}. An element αij represents the action of routing a given query to the neighboring agent Aij ∈ DirectConn(Ai). The routing policy πi of agent Ai is stochastic, and its outcome for a search session with state QSj is defined as:

πi(QSj) = {(αi0, πi(QSj, αi0)), (αi1, πi(QSj, αi1)), ...}    (2)

Note that the operator πi is overloaded to represent either the probabilistic policy for a search session with state QSj, denoted πi(QSj), or the probability of forwarding the query to a specific neighboring agent Aik ∈ DirectConn(Ai) under that policy, denoted πi(QSj, αik). Equation (2) therefore means that the probability of forwarding the search session to agent Ai0 is πi(QSj, αi0), and so on. Under this stochastic policy, the routing action is nondeterministic. The advantage of such a strategy is that the best neighboring agents will not be selected repeatedly, thereby mitigating potential hot-spot situations. The expected utility, U^n_i(QSj), is used to estimate the potential utility gain of routing a session with state QSj to agent Ai under policy π^n_i. The superscript n indicates the value at the n-th iteration of an iterative learning process. The expected utility provides routing guidance for future search sessions. In the search process, each agent Ai maintains partial observations of its neighbors' states, as shown in
Fig. 2. The partial observation includes non-local information such as the potential utility estimate of its neighbor Am for query state QSj, denoted Um(QSj), as well as the load information Lm. These observations are updated periodically by the neighbors. The estimated utility information is used to update Ai's expected utility for its routing policy.

[Figure 2: Agent Ai's partial observation about its neighbors (A0, A1, ...): for each neighboring agent, the expected utilities U^n for the different query states QS0, QS1, ... and the load information L^n.]

The load information of Am is defined as Lm = |MFm| / Cm, where |MFm| is the length of the message-forward queue and Cm is the service rate of agent Am's message-forward queue. Lm therefore characterizes the utilization of an agent's communication channel, and thus provides non-local information that Am's neighbors can use to adjust the parameters of their routing policies so as to avoid inundating their downstream agents. Note that, depending on the characteristics of the queries entering the system and the agents' capabilities, the load on agents may not be uniform. After collecting the utilization-rate information from all its neighbors, agent Ai computes Li as a single measure of the average load condition of its neighborhood:

Li = (Σ_k Lk) / |DirectConn(Ai)|

Agents exploit the Li value in determining the routing probabilities of their routing policies. Note that, as described in Section 3.2, information about neighboring agents is piggybacked on the query messages propagated among the agents whenever possible, to reduce traffic overhead.

3.1.1 Update the Policy

An iterative update process is introduced for agents to learn a satisfactory stochastic routing policy. In this iterative process, agents update their estimates of the potential utility of their current routing policies and then propagate
the updated estimates to their neighbors. Their neighbors then generate new routing policies based on the updated observations, in turn calculate the expected utility under the new policies, and continue this iterative process. In particular, at time n, given a set of expected utilities, an agent Ai whose directly connected agent set is DirectConn(Ai) = {Ai0, ..., Aim} determines its stochastic routing policy for a search session of state QSj in the following steps: (1) Ai first selects a subset of agents from DirectConn(Ai) as the potential downstream agents, denoted PD^n(Ai, QSj). The size of the potential downstream agent set is specified as

|PD^n(Ai, QSj)| = min(|NEI(Ai)|, d^n_i + k)

where k is a constant, set to 3 in this paper, and d^n_i, the forward width, is defined as the expected number of neighboring agents that agent Ai can forward to at time n. This formula specifies that the potential downstream agent set PD^n(Ai, QSj) is either the subset of d^n_i + k neighboring agents with the highest expected utility values for state QSj among all the agents in DirectConn(Ai), or all the neighboring agents. The constant k is introduced in keeping with the idea of a stochastic routing policy: it makes the forwarding probability of each of the d^n_i + k highest-utility agents less than 100%. Note that if we want to limit the expected number of downstream agents for search session sj to 5, the probabilities of forwarding the query to the neighboring agents should add up to 5. Setting the d^n_i value properly can improve the utilization of network bandwidth when much of the network is idle, while mitigating the traffic load when the network is highly loaded. The d^{n+1}_i value is updated based on d^n_i and the previous and current observations of the traffic situation in the neighborhood. Specifically, the update formula is

d^{n+1}_i = d^n_i * (1 + (1 − Li) /
|DirectConn(Ai)|)

In this formula, the forward width is updated based on the traffic condition of agent Ai's neighborhood, i.e., Li, and its previous value. (2) For each agent Aik in PD^n(Ai, QSj), the probability of forwarding the query to Aik is determined as follows, in order to assign higher forwarding probability to the neighboring agents with higher expected utility values:

π^{n+1}_i(QSj, αik) = d^{n+1}_i / |PD^n(Ai, QSj)| + β * ( U_ik(QS'j) − PDU^n(Ai, QSj) / |PD^n(Ai, QSj)| )    (3)

where

PDU^n(Ai, QSj) = Σ_{o ∈ PD^n(Ai, QSj)} U_o(QS'j)

and QS'j is the subsequent state after agent Ai forwards the search session with state QSj to its neighboring agent Aik; if QSj = (qk, ttl0), then QS'j = (qk, ttl0 − 1). In formula (3), the first term on the right-hand side, d^{n+1}_i / |PD^n(Ai, QSj)|, determines the forwarding probability by distributing the forward width d^{n+1}_i equally over the agents in the PD^n(Ai, QSj) set. The second term adjusts the probability of being chosen so that agents with higher expected utility values are favored. β is determined according to:

β = min( (m − d^{n+1}_i) / (m * u_max − PDU^n(Ai, QSj)),  d^{n+1}_i / (PDU^n(Ai, QSj) − m * u_min) )    (4)

where m = |PD^n(Ai, QSj)|, u_max = max_{o ∈ PD^n(Ai, QSj)} U_o(QS'j), and u_min = min_{o ∈ PD^n(Ai, QSj)} U_o(QS'j). This formula guarantees that the final π^{n+1}_i(QSj, αik) values are well defined, i.e.,

0 ≤ π^{n+1}_i(QSj, αik) ≤ 1  and  Σ_k π^{n+1}_i(QSj, αik) = d^{n+1}_i

However, such a solution does not explore all the possibilities. In order to balance exploitation and exploration, a λ-greedy approach is taken: in addition to assigning higher probability to the agents with higher expected utility values, as in equation (3), agents that appear to be not-so-good choices are also sent queries, based on a dynamic exploration
rate. In particular, for the agents in the set PD^n(Ai, QSj), π^{n+1}_i(QSj) is determined in the same way as above, with the only difference being that d^{n+1}_i is replaced by d^{n+1}_i * (1 − λ_n). The remaining search bandwidth is used for learning, by assigning probability evenly to the agents Aik in the set DirectConn(Ai) − PD^n(Ai, QSj):

π^{n+1}_i(QSj, αik) = d^{n+1}_i * λ_n / |DirectConn(Ai) − PD^n(Ai, QSj)|    (5)

where PD^n(Ai, QSj) ⊂ DirectConn(Ai). Note that the exploration rate λ is not a constant; it decreases over time according to the following equation:

λ_{n+1} = λ_0 * e^{−c1 * n}    (6)

where λ_0 is the initial exploration rate, a constant; c1 is a constant that adjusts the rate of decrease of the exploration rate; and n is the current time unit.

3.1.2 Update Expected Utility

Once the routing policy at step n+1, π^{n+1}_i, is determined as above, agent Ai can update its own expected utility, U^{n+1}_i(QSj), based on the routing policy resulting from the formulas above and the updated U values of its neighboring agents. Under the assumption that, after a query is forwarded to Ai's neighbors, the subsequent search sessions are independent, the update formula is similar to the Bellman update formula in Q-learning:

U^{n+1}_i(QSj) = (1 − θ_i) * U^n_i(QSj) + θ_i * ( R^{n+1}_i(QSj) + Σ_k π^{n+1}_i(QSj, αik) * U^n_k(QS'j) )    (7)

where QS'j = (Qj, ttl − 1) is the next state of QSj = (Qj, ttl); R^{n+1}_i(QSj) is the expected local reward for query class Qj at agent Ai under the routing policy π^{n+1}_i; and θ_i is a coefficient deciding how much weight is given to the old value during the update: the smaller the θ_i value, the faster the agent is expected to learn the real value, but the greater the volatility of the algorithm, and vice versa. R^{n+1}_i(QSj) is updated according to the following
equation:

R^{n+1}_i(QSj) = R^n_i(QSj) + γ_i * ( r(QSj) − R^n_i(QSj) ) * P(qj | Qj)    (8)

where r(QSj) is the local reward associated with the search session, P(qj | Qj) indicates how relevant the query qj is to the query type Qj, and γ_i is the learning rate for agent Ai. Depending on the similarity between a specific query qj and its corresponding query type Qj, the local reward associated with the search session has a different impact on the R^n_i(QSj) estimate. In the formula above, this impact is reflected by the coefficient P(qj | Qj).

3.1.3 Reward function

After a search session stops because its TTL value expires, all search results are returned to the user and compared against the relevance judgments. Assuming the set of search results is SR, the reward Rew(SR) is defined as:

Rew(SR) = 1              if |Rel(SR)| > c
          |Rel(SR)| / c   otherwise

where SR is the set of returned search results and Rel(SR) is the set of relevant documents among the search results. This equation specifies that users give a reward of 1.0 if the number of returned relevant documents reaches a predefined number c.
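As a concrete illustration, the piecewise session reward above and the local-reward update of equation (8) can be sketched as follows. This is a minimal sketch; the function and parameter names are illustrative and not taken from the paper's implementation.

```python
def session_reward(search_results, relevant_docs, c=10):
    """Rew(SR): 1.0 once more than c relevant documents are returned,
    otherwise proportional to the number of relevant results |Rel(SR)| / c."""
    n_rel = len(set(search_results) & set(relevant_docs))  # |Rel(SR)|
    return 1.0 if n_rel > c else n_rel / c

def update_local_reward(r_old, r, p_query_given_class, gamma=0.1):
    """Equation (8): R <- R + gamma * (r - R) * P(qj | Qj).
    The query-class likelihood P(qj | Qj) scales how strongly this
    session's reward r influences the estimate for the whole class."""
    return r_old + gamma * (r - r_old) * p_query_given_class
```

Note how a session with few relevant hits nudges the class estimate only slightly, while the P(qj | Qj) factor further discounts feedback from queries that are atypical of their class.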
Otherwise, the reward is proportional to the number of relevant documents returned. The rationale for setting such a cut-off value is that, in the real world, the importance of recall decreases with the abundance of relevant documents; users therefore tend to focus on only a limited number of search results. The details of the actual routing protocol are introduced in Section 3.2, where we describe how the learning algorithm is deployed in real systems.

3.2 Deployment of the Learning algorithm

This section describes how the learning algorithm can be used in either a single-phase or a two-phase search process. In the single-phase search algorithm, search sessions start from the initiators of the queries. In contrast, in the two-phase search algorithm, the query initiator first attempts to seek a more appropriate starting point for the query by introducing an exploratory step, as described in Section 2. Despite the difference in the quality of starting points, the major part of the learning process is largely the same for the two algorithms, as described in the following paragraphs. Before learning starts, each agent initializes the expected utility values for all possible states to 0. Thereafter, upon receipt of a query, in addition to the normal operations described in the previous section, an agent Ai also sets up a timer to wait for the search results returned by its downstream agents. Once the timer expires or Ai has received responses from all its downstream agents, it merges the search results accrued from its downstream agents and forwards them to its upstream agent. Setting up the timer speeds up learning because agents avoid waiting too long for downstream agents to return search results. Note that these detailed results and the corresponding agent information are still stored at Ai until the feedback information is passed down from its upstream agent and the performance of its downstream agents can be evaluated. The duration of the
timer is related to the TTL value. In this paper, we set the timer to t_timer = 2 * ttl_i + t_f, where 2 * ttl_i accounts for the round-trip travel time of the queries in the network and t_f is the time period that users are expected to be willing to wait. The search results are eventually returned to the search session initiator A0. They are compared against the relevance judgments provided by the final users (as described in the experiment section, the relevance judgments for the query set are provided along with the data collections). The reward is then calculated and propagated backward to the agents along the path over which the search results were passed; this is the reverse of the search-result propagation process. While propagating the reward backward, agents update the estimates of their own potential utility values, generate an up-to-date policy, and pass their updated results to the neighboring agents according to the algorithm described in Section 3.1. Upon a change of its expected utility value, agent Ai sends its updated utility estimate to its neighbors so that they can act upon the changed expected utility and the corresponding state. This update message includes the potential reward as well as the corresponding state QSi = (qk, ttl_l) of agent Ai. Each neighboring agent Aj reacts to such an update message by updating its expected utility value for state QSj = (qk, ttl_l + 1) according to the newly announced expected utility value. Once they complete the update, the agents in turn inform the related neighbors to update their values. This process goes on until the TTL value in the update message reaches the TTL limit. To speed up the learning process, while updating the expected utility values of an agent Ai's neighboring agents we require that

U_m(Qk, ttl_0) >= U_m(Qk, ttl_1)  whenever  ttl_0 > ttl_1

Thus, when agent Ai receives an updated expected utility value with ttl_1, it also updates the expected utility values for any ttl_0 > ttl_1 if U_m(Qk,
ttl_0) < U_m(Qk, ttl_1), to speed up convergence. This heuristic is based on the fact that the utility of a search session is a non-decreasing function of the remaining TTL.

3.3 Discussion

In formalizing the content routing system as a learning task, many assumptions are made. In real systems, these assumptions may not hold, and thus the learning algorithm may not converge. Two problems are of particular note. (1) This content routing problem does not have the Markov property. In contrast to IP-level packet routing, the routing decision of each agent for a particular search session sj depends on the routing history of sj. Therefore, the assumption that all subsequent search sessions are independent does not hold in reality. This may lead to a double-counting problem, in which the relevant documents of some agents are counted more than once for states whose TTL value is greater than 1. However, in the context of hierarchical agent organizations, two factors mitigate this problem: first, the agents in each content group form a tree-like structure, and in the absence of cycles the estimates inside the tree will be close to the accurate values; second, the stochastic nature of the routing policy partly remedies this problem. (2) Another challenge for this learning algorithm is that, in a real network environment, observations of neighboring agents may not be updated in time due to communication delays or other conditions. In addition, when neighboring agents update their estimates at the same time, oscillation may arise during the learning process [1]. This paper explores several approaches to speeding up the learning process. Besides the aforementioned strategy of updating the expected utility values, we also employ an active update strategy in which agents notify their neighbors whenever their expected utility is updated. Thus a faster convergence speed can be
achieved. This strategy contrasts with a lazy update strategy, in which agents only echo their expected utility changes to neighboring agents when they exchange information. The trade-off between the two approaches is network load versus learning speed. The advantage of this learning algorithm is that, once a routing policy is learned, agents do not have to repeatedly compare the similarity of queries as long as the network topology remains unchanged. Instead, agents just have to determine the classification of a query properly and follow the learned policies. The disadvantage of this learning-based approach is that the learning process needs to be repeated whenever the network structure changes. There are many potential extensions of this learning model. For example, a single measure is currently used to indicate the traffic load of an agent's neighborhood. A simple extension would be to keep track of the individual load of each of the agent's neighbors.

4. EXPERIMENT SETTINGS AND RESULTS

The experiments are conducted on the TRANO simulation toolkit with two datasets, TREC-VLC-921 and TREC-123-100. The following subsections introduce the TRANO testbed, the datasets, and the experimental results.

4.1 TRANO Testbed

TRANO (Task Routing on Agent Network Organization) is a multi-agent, network-based information retrieval testbed. TRANO is built on top of Farm [4], a time-based distributed simulator that provides a data dissemination framework for large-scale distributed agent network organizations. TRANO supports the import and export of agent organization profiles, including topological connections and other features. Each TRANO agent is composed of an agent view structure and a control unit. In simulation, each agent is pulsed regularly; the agent then checks its incoming message queues, performs local operations, and forwards messages to other agents.

4.2 Experimental Settings

In our experiment, we use two standard datasets,
TREC-VLC-921 and TREC-123-100, to simulate the collections hosted on agents. The TREC-VLC-921 and TREC-123-100 datasets were created by the U.S. National Institute of Standards and Technology (NIST) for its TREC conferences. In the distributed information retrieval domain, the two data collections are split into 921 and 100 sub-collections, respectively. From the statistics of the two data collections listed in [13], it can be observed that TREC-VLC-921 is more heterogeneous than TREC-123-100 in terms of source, document length, and relevant-document distribution. Hence, TREC-VLC-921 is much closer to real document distributions in P2P environments.

[Figure 3: ARSS (average reward per search session) versus the number of search sessions for 1-phase search in TREC-VLC-921 (SSLA-921 vs. SSNA-921).]

[Figure 4: ARSS (average reward per search session) versus the number of search sessions for 2-phase search in TREC-VLC-921 (TSLA-921 vs. TSNA-921).]

Furthermore, TREC-123-100 is split into two sets of sub-collections in two ways: randomly and by source. The two partitions are denoted TREC-123-100-Random and TREC-123-100-Source, respectively. The documents in each sub-collection of TREC-123-100-Source are more coherent than those in TREC-123-100-Random. The two different partitions allow us to observe how the distributed learning algorithm is affected by the homogeneity of the collections. The hierarchical agent organization is generated by the algorithm described in our previous work [15]. During the topology generation process, the degree information of each agent is estimated by the algorithm introduced by Palmer et al.
[9] with parameters α = 0.5 and β = 0.6. In our experiments, we estimate the upward and downward degree limits using linear discount factors of 0.5, 0.8, and 1.0. Once the topology is built, queries randomly selected from query set 301-350 for TREC-VLC-921 and query set 1-50 for TREC-123-100-Random and TREC-123-100-Source are injected into the system according to a Poisson distribution:

P(N(t) = n) = ((λt)^n / n!) * e^{−λt}

[Figure 5: The cumulative utility versus the number of search sessions for TREC-VLC-921 (TSLA-921, SSNA-921, SSLA-921, TSNA-921).]

In addition, we assume that all agents have an equal chance of receiving queries from the environment, i.e., λ is the same for every agent. In our experiments, λ is set to 0.0543 so that the mean number of incoming queries from the environment to the agent network is 50 per time unit. The service times for the communication queue and the local search queue, i.e., t_Qij and t_rs, are set to 0.01 and 0.05 time units, respectively. In our experiments, there are ten types of queries, acquired by clustering the query sets 301-350 and 1-50.

4.3 Results analysis and evaluation

Figure 3 shows the ARSS (average reward per search session) versus the number of incoming queries over time for the Single-Step Non-learning Algorithm (SSNA) and the Single-Step Learning Algorithm (SSLA) on data collection TREC-VLC-921. It shows that the average reward of the SSNA algorithm ranges from 0.02 to 0.06, and its performance changes little over time. The average reward of the SSLA approach starts at the same level as that of the SSNA algorithm, but its performance increases over time, and the average performance gain stabilizes at about 25% after
query range 2000-3000. Figure 4 shows the ARSS versus the number of incoming queries over time for the Two-Step Non-learning Algorithm (TSNA) and the Two-Step Learning Algorithm (TSLA) on data collection TREC-VLC-921. The TSNA approach has a relatively consistent performance, with the average reward ranging from 0.05 to 0.15. The average reward of the TSLA approach, which exploits the learning algorithm, starts at the same level as that of the TSNA algorithm and improves over time until 2000-2500 queries have joined the system. The results show that the average performance gain of the TSLA approach over the TSNA approach is 35% after stabilization. Figure 5 shows the cumulative utility versus the number of incoming queries over time for SSNA, SSLA, TSNA, and TSLA. It illustrates that the cumulative utility of the non-learning algorithms increases largely linearly over time, while the gains of the learning-based algorithms accelerate as more queries enter the system. These experimental results demonstrate that the learning-based approaches consistently perform better than the non-learning routing algorithms. Moreover, the two-phase learning-based algorithm is better than the single-phase learning-based algorithm because the maximal reward an agent can receive from searching its neighborhood within TTL hops is bounded by the total number of relevant documents in that area; even the optimal routing policy can do little beyond reaching those relevant documents faster. By contrast, the two-step-based learning algorithm can relocate the search session to a neighborhood with more relevant documents. The TSLA thus combines the merits of both approaches and outperforms them. Table 1 lists the cumulative utility for datasets TREC-123-100-Random and TREC-123-100-Source with hierarchical organizations. The columns show the results for the four approaches. In particular, column TSNA-Random
shows the results for dataset TREC-123-100-Random with the TSNA approach, and column TSLA-Random shows the results for the same dataset with the TSLA approach. Each cell in the column TSLA-Random contains two numbers: the first is the actual cumulative utility, and the second is the percentage gain in utility over the TSNA approach. Columns TSNA-Source and TSLA-Source show the results for dataset TREC-123-100-Source with the TSNA and TSLA approaches, respectively. Table 1 shows that the performance improvement for TREC-123-100-Random is not as significant as for the other datasets. This is because the documents in the sub-collections of TREC-123-100-Random are selected randomly, which makes the collection model (the signature of a collection) less meaningful. Since both algorithms are designed on the assumption that document collections can be well represented by their collection models, this result is not surprising. Overall, Figures 4 and 5 and Table 1 demonstrate that the reinforcement-learning based approach can considerably enhance system performance on both data collections. However, it remains future work to discover the correlation between the magnitude of the performance gains and the size of the data collection and/or the extent of the heterogeneity between the sub-collections.

5. RELATED WORK

The content routing problem differs from network-level routing in packet-switched communication networks in that content-based routing occurs in application-level networks. In addition, the destination agents in our content-routing algorithms are multiple, and their addresses are not known during the routing process. IP-level routing problems have been attacked from the reinforcement learning perspective [2, 5, 11, 12]. These studies have explored fully distributed algorithms that are able, without central coordination, to disseminate knowledge about the network and to find the shortest paths robustly and
efficiently in the face of changing network topologies and changing link costs. There are two major classes of adaptive, distributed packet routing algorithms in the literature: distance-vector algorithms and link-state algorithms. While this line of studies bears a certain similarity to our work, it has mainly focused on packet-switched communication networks, in which the destination of a packet is deterministic and unique. Each agent maintains estimates, probabilistic or deterministic, of the distance to a certain destination through its neighbors, and a variant of Q-learning techniques is deployed to update these estimates so that they converge to the real distances.

Table 1: Cumulative utility for datasets TREC-123-100-Random and TREC-123-100-Source with hierarchical organization; the percentages in the columns TSLA-Random and TSLA-Source give the performance gain over the corresponding algorithm without learning.

Query number  TSNA-Random  TSLA-Random    TSNA-Source  TSLA-Source
500           25.15        28.45  (13%)   24.00        21.05  (-13%)
1000          104.99       126.74 (20%)   93.95        96.44  (2.6%)
1250          149.79       168.40 (12%)   122.64       134.05 (9.3%)
1500          188.94       211.05 (12%)   155.30       189.60 (22%)
1750          235.49       261.60 (11%)   189.14       243.90 (28%)
2000          275.09       319.25 (16%)   219.00       278.80 (26%)

User modeling studies have found that the locality property is an important feature of information retrieval systems [3]. In P2P-based content sharing systems, this property is exemplified by the phenomenon that users tend to send queries representing only a limited number of topics and, conversely, users in the same neighborhood are likely to share common interests and send similar queries [10]. The learning-based approach is expected to be more beneficial for real distributed information retrieval systems that exhibit the locality property, because users' traffic and query patterns can reduce the state space and speed up the
learning process. Related work taking advantage of this property includes [7], where the authors address the problem with user modeling techniques.

6. CONCLUSIONS

In this paper, a reinforcement-learning based approach is developed to improve the performance of distributed IR search algorithms. In particular, agents maintain estimates, namely expected utilities, of the downstream agents' abilities to provide relevant documents for incoming queries. These estimates are updated gradually by learning from the feedback information returned from previous search sessions. Based on the updated expected utility information, the agents modify their routing policies. Thereafter, the agents route queries based on the learned policies and update the expected utility estimates based on the new routing policies. The experiments on two different distributed IR datasets illustrate that the reinforcement learning approach considerably improves the cumulative utility over time.

7. REFERENCES

[1] S. Abdallah and V. Lesser. Learning the task allocation game. In AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 850-857, New York, NY, USA, 2006. ACM Press.
[2] J. A. Boyan and M. L. Littman. Packet routing in dynamically changing networks: A reinforcement learning approach. In Advances in Neural Information Processing Systems, volume 6, pages 671-678. Morgan Kaufmann Publishers, Inc., 1994.
[3] J. C. French, A. L. Powell, J. P. Callan, C. L. Viles, T. Emmitt, K. J. Prey, and Y. Mou. Comparing the performance of database selection algorithms. In Research and Development in Information Retrieval, pages 238-245, 1999.
[4] B. Horling, R. Mailler, and V. Lesser. Farm: A scalable environment for multi-agent development and evaluation. In Advances in Software Engineering for Multi-Agent Systems, pages 220-237, Berlin, 2004. Springer-Verlag.
[5] M. Littman and J.
Boyan.\nA distributed reinforcement learning scheme for network routing.\nIn Proceedings of the International Workshop on Applications of Neural Networks to Telecommunications, 1993.\n[6] J. Lu and J. Callan.\nFederated search of text-based digital libraries in hierarchical peer-to-peer networks.\nIn ECIR '05, 2005.\n[7] J. Lu and J. Callan.\nUser modeling for full-text federated search in peer-to-peer networks.\nIn ACM SIGIR 2006.\nACM Press, 2006.\n[8] C. D. Manning and H. Schütze.\nFoundations of Statistical Natural Language Processing.\nThe MIT Press, Cambridge, Massachusetts, 1999.\n[9] C. R. Palmer and J. G. Steffan.\nGenerating network topologies that obey power laws.\nIn Proceedings of GLOBECOM 2000, November 2000.\n[10] K. Sripanidkulchai, B. Maggs, and H. Zhang.\nEfficient content location using interest-based locality in peer-to-peer systems.\nIn INFOCOM, 2003.\n[11] D. Subramanian, P. Druschel, and J. Chen.\nAnts and reinforcement learning: A case study in routing in dynamic networks.\nIn Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, pages 832-839, 1997.\n[12] J. N. Tao and L. Weaver.\nA multi-agent, policy gradient approach to network routing.\nIn Proceedings of the Eighteenth International Conference on Machine Learning, 2001.\n[13] H. Zhang, W. B. Croft, B. Levine, and V. Lesser.\nA multi-agent approach for peer-to-peer information retrieval.\nIn Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems, July 2004.\n[14] H. Zhang and V. Lesser.\nMulti-agent based peer-to-peer information retrieval systems with concurrent search sessions.\nIn Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi-Agent Systems, May 2006.\n[15] H. Zhang and V. R. 
Lesser.\nA dynamically formed hierarchical agent organization for a distributed content sharing system.\nIn 2004 IEEE\/WIC\/ACM International Conference on Intelligent Agent Technology (IAT 2004), 20-24 September 2004, Beijing, China, pages 169-175.\nIEEE Computer Society, 2004.","lvl-3":"A Reinforcement Learning based Distributed Search Algorithm For Hierarchical Peer-to-Peer Information Retrieval Systems\nABSTRACT\nThe dominant existing routing strategies employed in peer-to-peer (P2P) based information retrieval (IR) systems are similarity-based approaches.\nIn these approaches, agents depend on the content similarity between incoming queries and their direct neighboring agents to direct the distributed search sessions.\nHowever, such a heuristic is myopic in that the neighboring agents may not be connected to more relevant agents.\nIn this paper, an online reinforcement-learning based approach is developed to take advantage of the dynamic run-time characteristics of P2P IR systems as represented by information about past search sessions.\nSpecifically, agents maintain estimates on the downstream agents' abilities to provide relevant documents for incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on this information, the agents derive corresponding routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates based on the new routing policies.\nExperimental results demonstrate that the learning algorithm considerably improves the routing performance on two test collection sets that have been used in a variety of distributed IR studies.\n1.\nINTRODUCTION\nOver the last few years there has been increasing interest in studying how to control the search processes in peer-to-peer (P2P) based information retrieval (IR) systems 
[6, 13, 14, 15].\nIn this line of research, one of the core problems that concerns researchers is to efficiently route user queries in the network to agents that are in possession of appropriate documents.\nIn the absence of global information, the dominant strategies in addressing this problem are content-similarity based approaches [6, 13, 14, 15].\nWhile the content similarity between queries and local nodes appears to be a credible indicator for the number of relevant documents residing on each node, these approaches are limited by a number of factors.\nFirst of all, similarity-based metrics can be myopic since locally relevant nodes may not be connected to other relevant nodes.\nSecond, the similarity-based approaches do not take into account the run-time characteristics of the P2P IR systems, including environmental parameters, bandwidth usage, and the historical information of the past search sessions, which provide valuable information for the query routing algorithms.\nIn this paper, we develop a reinforcement learning based IR approach for improving the performance of distributed IR search algorithms.\nAgents can acquire better search strategies by collecting and analyzing feedback information from previous search sessions.\nIn particular, agents maintain estimates, namely expected utility, on the downstream agents' capabilities of providing relevant documents for specific types of incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on the updated expected utility information, the agents derive corresponding routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates on the expected utility based on the new routing policies.\nThis process is conducted in an iterative manner.\nThe goal of the learning algorithm, even though it consumes some network bandwidth, is to shorten the routing time so that more 
queries are processed per time unit while at the same time finding more relevant documents.\nThis contrasts with the content-similarity based approaches where similar operations are repeated for every incoming query and the processing time remains largely constant over time.\nAnother way of viewing this paper is that our basic approach to distributed IR search is to construct a hierarchical overlay network (agent organization) based on the content-similarity measure among agents' document collections in a bottom-up fashion.\nIn past work, we have shown that this organization improves search performance significantly.\nHowever, this organizational structure does not take into account the arrival patterns of queries, including their frequency, types, and where they enter the system, nor the available communication bandwidth of the network and processing capabilities of individual agents.\nThe intention of the reinforcement learning approach is to adapt the agents' routing decisions to the dynamic network situations and learn from past search sessions.\nSpecifically, the contributions of this paper include: (1) a reinforcement learning based approach for agents to acquire satisfactory routing policies based on estimates of the potential contribution of their neighboring agents; (2) two strategies to speed up the learning process.\nTo the best of our knowledge, this is one of the first reinforcement learning applications in addressing distributed content sharing problems and it is indicative of some of the issues in applying reinforcement learning in a complex application.\nThe remainder of this paper is organized as follows: Section 2 reviews the hierarchical content sharing systems and the two-phase search algorithm based on this topology.\nSection 3 describes a reinforcement learning based approach to direct the routing process; Section 4 details the experimental settings and analyzes the results.\nSection 5 discusses related studies and Section 6 concludes the paper.\n2.\nSEARCH IN 
HIERARCHICAL P2P IR SYSTEMS\n3.\nA BASIC REINFORCEMENT LEARNING BASED SEARCH APPROACH\n3.1 The Model\n3.1.1 Update the Policy\n3.1.2 Update Expected Utility\n3.1.3 Reward function\n3.2 Deployment of the Learning algorithm\n3.3 Discussion\n4.\nEXPERIMENTS SETTINGS AND RESULTS\n4.1 TRANO Testbed\n4.2 Experimental Settings\n4.3 Results analysis and evaluation\n5.\nRELATED WORK\nThe content routing problem differs from the network-level routing in packet-switched communication networks in that content-based routing occurs in application-level networks.\nIn addition, the destination agents in our content-routing algorithms are multiple and the addresses are not known in the routing process.\nIP-level routing problems have been attacked from the reinforcement learning perspective [2, 5, 11, 12].\nThese studies have explored fully distributed algorithms that are able, without central coordination, to disseminate knowledge about the network and to find the shortest paths robustly and efficiently in the face of changing network topologies and changing link costs.\nThere are two major classes of adaptive, distributed packet routing algorithms in the literature: distance-vector algorithms and link-state algorithms.\nWhile this line of studies carries a certain similarity to our work, it has mainly focused on packet-switched communication networks.\nIn this domain, the destination of a packet is deterministic and unique.\nEach agent maintains estimations, probabilistically or deterministically, on the distance to a certain destination through its neighbors.\nA variant of Q-Learning techniques is deployed to update the estimations to converge to the real distances.\nTable 1: Cumulative Utility for Datasets TREC-123-100-Random and TREC-123-100-Source with Hierarchical Organization; The percentage numbers in the columns \"TSLA-Random\" and \"TSLA-Source\" demonstrate the performance gain over the algorithm without learning\nIt has been discovered that the locality property is an important feature of information retrieval systems in user modeling studies [3].\nIn P2P based content sharing systems, this property is exemplified by the phenomenon that users tend to send queries that represent only a limited number of topics and, conversely, users in the same neighborhood are likely to share common interests and send similar queries [10].\nThe learning based approach is perceived to be more beneficial for real distributed information retrieval systems which exhibit the locality property.\nThis is because the users' traffic and query patterns can reduce the state space and speed up the learning process.\nRelated work in taking advantage of this property includes [7], where the authors attempted to address this problem by user modeling techniques.\n6.\nCONCLUSIONS\nIn this paper, a reinforcement-learning based approach is developed to improve the performance of distributed IR search algorithms.\nIn particular, agents maintain estimates, namely expected utility, on the downstream agents' ability to provide relevant documents for incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on the updated expected utility information, the agents modify their routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates on the expected utility based on the new routing policies.\nThe experiments on two different distributed IR datasets illustrate that the 
reinforcement learning approach considerably improves the cumulative utility over time.","lvl-4":"A Reinforcement Learning based Distributed Search Algorithm For Hierarchical Peer-to-Peer Information Retrieval Systems\nABSTRACT\nThe dominant existing routing strategies employed in peer-to-peer (P2P) based information retrieval (IR) systems are similarity-based approaches.\nIn these approaches, agents depend on the content similarity between incoming queries and their direct neighboring agents to direct the distributed search sessions.\nHowever, such a heuristic is myopic in that the neighboring agents may not be connected to more relevant agents.\nIn this paper, an online reinforcement-learning based approach is developed to take advantage of the dynamic run-time characteristics of P2P IR systems as represented by information about past search sessions.\nSpecifically, agents maintain estimates on the downstream agents' abilities to provide relevant documents for incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on this information, the agents derive corresponding routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates based on the new routing policies.\nExperimental results demonstrate that the learning algorithm considerably improves the routing performance on two test collection sets that have been used in a variety of distributed IR studies.\n1.\nINTRODUCTION\nOver the last few years there has been increasing interest in studying how to control the search processes in peer-to-peer (P2P) based information retrieval (IR) systems [6, 13, 14, 15].\nIn this line of research, one of the core problems that concerns researchers is to efficiently route user queries in the network to agents that are in possession of appropriate documents.\nIn the absence of global information, the dominant strategies in addressing 
this problem are content-similarity based approaches [6, 13, 14, 15].\nWhile the content similarity between queries and local nodes appears to be a credible indicator for the number of relevant documents residing on each node, these approaches are limited by a number of factors.\nSecond, the similarity-based approaches do not take into account the run-time characteristics of the P2P IR systems, including environmental parameters, bandwidth usage, and the historical information of the past search sessions, which provide valuable information for the query routing algorithms.\nIn this paper, we develop a reinforcement learning based IR approach for improving the performance of distributed IR search algorithms.\nAgents can acquire better search strategies by collecting and analyzing feedback information from previous search sessions.\nIn particular, agents maintain estimates, namely expected utility, on the downstream agents' capabilities of providing relevant documents for specific types of incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on the updated expected utility information, the agents derive corresponding routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates on the expected utility based on the new routing policies.\nThis process is conducted in an iterative manner.\nThe goal of the learning algorithm, even though it consumes some network bandwidth, is to shorten the routing time so that more queries are processed per time unit while at the same time finding more relevant documents.\nThis contrasts with the content-similarity based approaches where similar operations are repeated for every incoming query and the processing time remains largely constant over time.\nAnother way of viewing this paper is that our basic approach to distributed IR search is to construct a hierarchical overlay network (agent 
organization) based on the content-similarity measure among agents' document collections in a bottom-up fashion.\nIn past work, we have shown that this organization improves search performance significantly.\nThe intention of the reinforcement learning approach is to adapt the agents' routing decisions to the dynamic network situations and learn from past search sessions.\nSpecifically, the contributions of this paper include: (1) a reinforcement learning based approach for agents to acquire satisfactory routing policies based on estimates of the potential contribution of their neighboring agents; (2) two strategies to speed up the learning process.\nThe remainder of this paper is organized as follows: Section 2 reviews the hierarchical content sharing systems and the two-phase search algorithm based on this topology.\nSection 3 describes a reinforcement learning based approach to direct the routing process; Section 4 details the experimental settings and analyzes the results.\nSection 5 discusses related studies and Section 6 concludes the paper.\n5.\nRELATED WORK\nThe content routing problem differs from the network-level routing in packet-switched communication networks in that content-based routing occurs in application-level networks.\nIn addition, the destination agents in our content-routing algorithms are multiple and the addresses are not known in the routing process.\nIP-level routing problems have been attacked from the reinforcement learning perspective [2, 5, 11, 12].\nThere are two major classes of adaptive, distributed packet routing algorithms in the literature: distance-vector algorithms and link-state algorithms.\nWhile this line of studies carries a certain similarity to our work, it has mainly focused on packet-switched communication networks.\nEach agent maintains estimations, probabilistically or deterministically, on the distance to a certain destination through its neighbors.\nA variant of Q-Learning techniques is deployed to update the estimations to converge to the real distances.\nIt has been discovered that the locality property is an important feature of information retrieval systems in user modeling studies [3].\nThe learning based approach is perceived to be more beneficial for real distributed information retrieval systems which exhibit the locality property.\nThis is because the users' traffic and query patterns can reduce the state space and speed up the learning process.\nRelated work in taking advantage of this property includes [7], where the authors attempted to address this problem by user modeling techniques.\n6.\nCONCLUSIONS\nIn this paper, a reinforcement-learning based approach is developed to improve the performance of distributed IR search algorithms.\nIn particular, agents maintain estimates, namely expected utility, on the downstream agents' ability to provide relevant documents for incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on the updated expected utility information, the agents modify their routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates on the expected utility based on the new routing policies.\nThe experiments on two different distributed IR datasets illustrate that the reinforcement learning approach considerably improves the cumulative utility over time.","lvl-2":"A Reinforcement Learning based Distributed Search Algorithm For Hierarchical Peer-to-Peer Information Retrieval Systems\nABSTRACT\nThe dominant existing routing strategies employed in peer-to-peer (P2P) based information retrieval (IR) systems are similarity-based approaches.\nIn these approaches, agents depend on the content similarity between incoming queries and their direct neighboring agents to direct the distributed search sessions.\nHowever, such a heuristic is myopic in that the neighboring agents may not be 
connected to more relevant agents.\nIn this paper, an online reinforcement-learning based approach is developed to take advantage of the dynamic run-time characteristics of P2P IR systems as represented by information about past search sessions.\nSpecifically, agents maintain estimates on the downstream agents' abilities to provide relevant documents for incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on this information, the agents derive corresponding routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates based on the new routing policies.\nExperimental results demonstrate that the learning algorithm considerably improves the routing performance on two test collection sets that have been used in a variety of distributed IR studies.\n1.\nINTRODUCTION\nOver the last few years there has been increasing interest in studying how to control the search processes in peer-to-peer (P2P) based information retrieval (IR) systems [6, 13, 14, 15].\nIn this line of research, one of the core problems that concerns researchers is to efficiently route user queries in the network to agents that are in possession of appropriate documents.\nIn the absence of global information, the dominant strategies in addressing this problem are content-similarity based approaches [6, 13, 14, 15].\nWhile the content similarity between queries and local nodes appears to be a credible indicator for the number of relevant documents residing on each node, these approaches are limited by a number of factors.\nFirst of all, similarity-based metrics can be myopic since locally relevant nodes may not be connected to other relevant nodes.\nSecond, the similarity-based approaches do not take into account the run-time characteristics of the P2P IR systems, including environmental parameters, bandwidth usage, and the historical information of the 
past search sessions, which provide valuable information for the query routing algorithms.\nIn this paper, we develop a reinforcement learning based IR approach for improving the performance of distributed IR search algorithms.\nAgents can acquire better search strategies by collecting and analyzing feedback information from previous search sessions.\nIn particular, agents maintain estimates, namely expected utility, on the downstream agents' capabilities of providing relevant documents for specific types of incoming queries.\nThese estimates are updated gradually by learning from the feedback information returned from previous search sessions.\nBased on the updated expected utility information, the agents derive corresponding routing policies.\nThereafter, these agents route the queries based on the learned policies and update the estimates on the expected utility based on the new routing policies.\nThis process is conducted in an iterative manner.\nThe goal of the learning algorithm, even though it consumes some network bandwidth, is to shorten the routing time so that more queries are processed per time unit while at the same time finding more relevant documents.\nThis contrasts with the content-similarity based approaches where similar operations are repeated for every incoming query and the processing time remains largely constant over time.\nAnother way of viewing this paper is that our basic approach to distributed IR search is to construct a hierarchical overlay network (agent organization) based on the content-similarity measure among agents' document collections in a bottom-up fashion.\nIn past work, we have shown that this organization improves search performance significantly.\nHowever, this organizational structure does not take into account the arrival patterns of queries, including their frequency, types, and where they enter the system, nor the available communication bandwidth of the network and processing capabilities of individual agents.\nThe 
intention of the reinforcement learning approach is to adapt the agents' routing decisions to the dynamic network situations and learn from past search sessions.\nSpecifically, the contributions of this paper include: (1) a reinforcement learning based approach for agents to acquire satisfactory routing policies based on estimates of the potential contribution of their neighboring agents; (2) two strategies to speed up the learning process.\nTo the best of our knowledge, this is one of the first reinforcement learning applications in addressing distributed content sharing problems and it is indicative of some of the issues in applying reinforcement learning in a complex application.\nThe remainder of this paper is organized as follows: Section 2 reviews the hierarchical content sharing systems and the two-phase search algorithm based on this topology.\nSection 3 describes a reinforcement learning based approach to direct the routing process; Section 4 details the experimental settings and analyzes the results.\nSection 5 discusses related studies and Section 6 concludes the paper.\n2.\nSEARCH IN HIERARCHICAL P2P IR SYSTEMS\nThis section briefly reviews our basic approaches to hierarchical P2P IR systems.\nIn a hierarchical P2P IR system illustrated in Fig. 
1, agents are connected to each other through three types of links: upward links, downward links, and lateral links.\nIn the following sections, we denote the set of agents that are directly connected to agent Ai as DirectConn (Ai), which is defined as DirectConn (Ai) = NEI (Ai) ∪ PAR (Ai) ∪ CHL (Ai), where NEI (Ai) is the set of neighboring agents connected to Ai through lateral links; PAR (Ai) is the set of agents whom agent Ai is connected to through upward links; and CHL (Ai) is the set of agents that agent Ai connects to through downward links.\nThese links are established through a bottom-up content-similarity based distributed clustering process [15].\nThese links are then used by agents to locate other agents that contain documents relevant to the given queries.\nA typical agent Ai in our system uses two queues: a local search queue, LSi, and a message forwarding queue, MFi.\nThe states of the two queues constitute the internal states of an agent.\nThe local search queue LSi stores search sessions that are scheduled for local processing.\nIt is a priority queue, and agent Ai always selects the most promising queries to process in order to maximize the global utility.\nMFi consists of a set of queries to forward on and is processed in a FIFO (first in, first out) fashion.\nFor the first query in MFi, agent Ai determines which subset of its neighboring agents to forward it to based on the agent's routing policy πi.\nThese routing decisions determine how the search process is conducted in the network.\nIn this paper, we call Ai Aj's upstream agent and Aj Ai's downstream agent if agent Ai routes a query to agent Aj.\nFigure 1: A fraction of a hierarchical P2P IR system\nThe distributed search protocol of our hierarchical agent organization is composed of two steps.\nIn the first step, upon receipt of a query qk at time tl from a user, agent Ai initiates a search session si by probing its neighboring agents Aj ∈ NEI (Ai) with the message PROBE for the similarity value Sim (qk, Aj) between qk 
and Aj.\nHere, Ai is defined as the query initiator of search session si.\nIn the second step, Ai selects a group of the most promising agents to start the actual search process with the message SEARCH.\nThese SEARCH messages contain a TTL (Time To Live) parameter in addition to the query.\nThe TTL value decreases by 1 after each hop.\nIn the search process, agents discard those queries that either have been previously processed or whose TTL drops to 0, which prevents queries from looping in the system forever.\nThe search session ends when all the agents that receive the query drop it or the TTL decreases to 0.\nUpon receipt of SEARCH messages for qk, agents schedule local activities including local searching, forwarding qk to their neighbors, and returning search results to the query initiator.\nThis process and related algorithms are detailed in [14, 15].\n3.\nA BASIC REINFORCEMENT LEARNING BASED SEARCH APPROACH\nIn the aforementioned distributed search algorithm, the routing decisions of an agent Ai rely on the similarity comparison between incoming queries and Ai's neighboring agents in order to forward those queries to relevant agents without flooding the network with unnecessary query messages.\nHowever, this heuristic is myopic because a relevant direct neighbor is not necessarily connected to other relevant agents.\nIn this section, we propose a more general approach by framing this problem as a reinforcement learning task.\nIn pursuit of greater flexibility, agents can switch between two modes: learning mode and non-learning mode.\nIn the non-learning mode, agents operate in the same way as they do in the normal distributed search processes described in [14, 15].\nOn the other hand, in the learning mode, in parallel with distributed search sessions, agents also participate in a learning process, which will be detailed in this section.\nNote that in the learning protocol, the learning process does not interfere with the distributed search process.\nAgents can 
choose to initiate and stop learning processes without affecting the system performance.\nIn particular, since the learning process consumes network resources (especially bandwidth), agents can choose to initiate learning only when the network load is relatively low, thus minimizing the extra communication costs incurred by the learning algorithm.\nThe section is structured as follows: Section 3.1 describes a reinforcement learning based model.\nSection 3.2 describes a protocol to deploy the learning algorithm in the network.\nSection 3.3 discusses the convergence of the learning algorithm.\n3.1 The Model\nAn agent's routing policy takes the state of a search session as input and outputs the routing actions for that query.\nIn our work, the state of a search session sj is stipulated as QSj = (qk, QL, ttlj), where ttlj is the number of hops that remain for the search session sj and qk is the specific query.\nQL is an attribute of qk that indicates which type of queries qk most likely belongs to.\nThe set of QL can be generated by running a simple online classification algorithm on all the queries that have been processed by the agents, or an offline algorithm on a pre-designated training set.\nThe assumption here is that the set of query types is learned ahead of time and belongs to the common knowledge of the agents in the network.\nFuture work includes exploring how learning can be accomplished when this assumption does not hold.\nGiven the set of query types, an incoming query qi can be classified into one query class Q (qi) by the formula Q (qi) = argmaxQj P (qi | Qj), where P (qi | Qj) indicates the likelihood that the query qi is generated by the query class Qj [8].\nThe set of atomic routing actions of an agent Ai is denoted as {αi}, where {αi} is defined as αi = {αi0, αi1,..., αin}.\nAn element αij represents an action to route a given query to the neighboring agent Aij ∈ DirectConn 
(Ai).\nThe routing policy πi of agent Ai is stochastic and its outcome for a search session with state QSj is defined as πi (QSj) = {(αi0, πi (QSj, αi0)),..., (αin, πi (QSj, αin))} (2).\nNote that the operator πi is overloaded to represent either the probabilistic policy for a search session with state QSj, denoted as πi (QSj); or the probability of forwarding the query to a specific neighboring agent Aik ∈ DirectConn (Ai) under the policy πi (QSj), denoted as πi (QSj, αik).\nTherefore, equation (2) means that the probability of forwarding the search session to agent Ai0 is πi (QSj, αi0) and so on.\nUnder this stochastic policy, the routing action is nondeterministic.\nThe advantage of such a strategy is that the best neighboring agents will not be selected repeatedly, thereby mitigating potential \"hot spot\" situations.\nThe expected utility, Uin (QSj), is used to estimate the potential utility gain of routing a query with state QSj to agent Ai under policy πin.\nThe superscript n indicates the value at the nth iteration in an iterative learning process.\nThe expected utility provides routing guidance for future search sessions.\nIn the search process, each agent Ai maintains partial observations of its neighbors' states, as shown in Fig. 
2. The partial observation includes non-local information such as the potential utility estimation of its neighbor Am for query state QSj, denoted as Um(QSj), as well as the load information, Lm. These observations are updated periodically by the neighbors. The estimated utility information is used to update Ai's expected utility for its routing policy.
The load information is defined as Lm = |MFm| / Cm, where |MFm| is the length of the message-forward queue and Cm is the service rate of agent Am's message-forward queue. Therefore Lm characterizes the utilization of an agent's communication channel, and thus provides non-local information for Am's neighbors to adjust the parameters of their routing policies to avoid inundating their downstream agents. Note that, based on the characteristics of the queries entering the system and the agents' capabilities, the load on the agents may not be uniform.
After collecting the utilization rate information from all its neighbors, agent Ai computes L̃i as a single measure for assessing the average load condition of its neighborhood:
L̃i = (1 / |DirectConn(Ai)|) · Σ_{Am ∈ DirectConn(Ai)} Lm.
Agents exploit the L̃i value in determining the routing probabilities in their routing policies. Note that, as described in Section 3.2, information about neighboring agents is piggybacked on the query messages propagated among the agents whenever possible to reduce the traffic overhead.
3.1.1 Update the Policy
An iterative update process is introduced for agents to learn a satisfactory stochastic routing policy. In this iterative process, agents update their estimates of the potential utility of their current routing policies and then propagate the updated estimates to their neighbors. The neighbors then generate a new routing policy based on the updated observations, in turn calculate the expected utility based on the new policies, and continue this iterative process. In particular, at time n, given a set of expected utilities, an agent Ai, whose directly connected agent set is DirectConn(Ai) = {Ai0, ..., Aim}, determines its
corresponding stochastic routing policy for a search session of state QSj based on the following steps:
(1) Ai first selects a subset of agents as the potential downstream agents from the set DirectConn(Ai), denoted as PD^n(Ai, QSj). The size of the potential downstream agent set is specified as
|PD^n(Ai, QSj)| = min(d_i^n + k, |DirectConn(Ai)|),
where k is a constant and is set to 3 in this paper; d_i^n, the forward width, is defined as the expected number of neighboring agents that agent Ai can forward to at time n. This formula specifies that the potential downstream agent set PD^n(Ai, QSj) is either the subset of neighboring agents with the d_i^n + k highest expected utility values for state QSj among all the agents in DirectConn(Ai), or all the neighboring agents. The constant k is introduced based on the idea of a stochastic routing policy, and it makes the forwarding probability of each of the d_i^n + k highest-ranked agents less than 100%. Note that if we want to limit the number of downstream agents for search session sj to 5, the probabilities of forwarding the query to all neighboring agents should add up to 5. Setting the d_i^n value properly can improve the utilization of the network bandwidth when much of the network is idle, while mitigating the traffic load when the network is highly loaded. The d_i^{n+1} value is updated based on d_i^n and the previous and current observations of the traffic situation in the neighborhood; that is, the forward width is updated based on the traffic conditions of agent Ai's neighborhood, L̃i, and its previous value.
(2) For each agent Aik in PD^n(Ai, QSj), the probability of forwarding the query to Aik is determined in the following way (formula 3) in order to assign higher forwarding probability to the neighboring agents with higher expected utility values, where QS̃j is the subsequent state of agent Aik after agent Ai forwards the search session with state
QSj to its neighboring agent Aik; if QSj = (qk, ttl0), then QS̃j = (qk, ttl0 − 1). In formula 3, the first term on the right of the equation assigns a base probability by equally distributing the forward width, d_i^{n+1}, among the agents in the set PD^n(Ai, QSj). The second term is used to adjust the probability of being chosen so that agents with higher expected utility values will be favored; the coefficient β determines the strength of this adjustment.
However, such a solution does not explore all the possibilities. In order to balance exploitation and exploration, a λ-Greedy approach is taken. In the λ-Greedy approach, in addition to assigning higher probability to those agents with higher expected utility values as in formula (3), agents that appear to be "not-so-good" choices will also be sent queries based on a dynamic exploration rate. In particular, for the agents in the set PD^n(Ai, QSj), π_i^{n+1} is computed as above, the only difference being that d_i^{n+1} is replaced with d_i^{n+1} · (1 − λ^n). The remaining search bandwidth is used for learning by assigning probability λ^n evenly to the agents in the set DirectConn(Ai) \ PD^n(Ai, QSj), where PD^n(Ai, QSj) ⊂ DirectConn(Ai). Note that the exploration rate λ is not a constant: it decreases over time as a function of the current time unit n, starting from an initial exploration rate λ0 (a constant), with a constant c1 adjusting the rate of decrease.
3.1.2 Update Expected Utility
Once the routing policy at step n + 1, π_i^{n+1}, is determined based on the above formula, agent Ai can update its own expected utility, U_i^{n+1}(QSj), based on the updated routing policy resulting from formula 5 and the updated U values of its neighboring agents. Under the assumption that, after a query is forwarded to Ai's neighbors, the subsequent search sessions are independent, the update formula is similar to the Bellman update in Q-learning:
U_i^{n+1}(QSj) = θi · U_i^n(QSj) + (1 − θi) · [ R_i^{n+1}(QSj) + Σ_k π_i^{n+1}(QSj, αik) · Uik^n(QS̃j) ],
where QS̃j =
(Qj, ttl − 1) is the next state of QSj = (Qj, ttl); R_i^{n+1}(QSj) is the expected local reward for query class Qk at agent Ai under the routing policy π_i^{n+1}; θi is the coefficient deciding how much weight is given to the old value during the update process: the smaller the θi value, the faster the agent is expected to learn the real value, but the greater the volatility of the algorithm, and vice versa. R_i^{n+1}(QSj) is updated according to the following equation:
R_i^{n+1}(QSj) = R_i^n(QSj) + γi · P(qj | Qj) · ( r(QSj) − R_i^n(QSj) ),
where r(QSj) is the local reward associated with the search session, P(qj | Qj) indicates how relevant the query qj is to the query type Qj, and γi is the learning rate for agent Ai. Depending on the similarity between a specific query qi and its corresponding query type Qi, the local reward associated with the search session has a different impact on the R_i^n(QSj) estimate. In the above formula, this impact is reflected by the coefficient P(qj | Qj).
3.1.3 Reward Function
After a search session stops, i.e., when its TTL value expires, all search results are returned to the user and compared against the relevance judgment. Assuming the set of search results is SR, the reward Rew(SR) is defined as
Rew(SR) = 1.0 if |Rel(SR)| ≥ c, and |Rel(SR)| / c otherwise,
where SR is the set of returned search results and Rel(SR) is the set of relevant documents in the search results. This equation specifies that users give a reward of 1.0 if the number of returned relevant documents reaches a predefined number c.
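This cutoff rule (full reward of 1.0 once c relevant documents are returned, proportional reward below the cutoff) can be sketched as follows; the function and parameter names are illustrative, not from the paper:

```python
def session_reward(search_results, relevant_docs, cutoff=5):
    """Reward for a finished search session: 1.0 once `cutoff` relevant
    documents are returned, otherwise proportional to the relevant count.
    `cutoff` plays the role of the predefined number c above."""
    rel_count = sum(1 for d in search_results if d in relevant_docs)
    if rel_count >= cutoff:
        return 1.0
    return rel_count / cutoff

# e.g. 3 relevant documents out of a cutoff of 5 yields a reward of 0.6
```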
Otherwise, the reward is in proportion to the number of relevant documents returned. The rationale for setting up such a cut-off value is that the importance of the recall ratio decreases once relevant documents are abundant; in the real world, users tend to focus on only a limited number of search results. The details of the actual routing protocol are introduced in Section 3.2, where we describe how the learning algorithm is deployed in real systems.
3.2 Deployment of the Learning Algorithm
This section describes how the learning algorithm can be used in either a single-phase or a two-phase search process. In the single-phase search algorithm, search sessions start from the initiators of the queries. In contrast, in the two-phase search algorithm, the query initiator first attempts to seek a more appropriate starting point for the query by introducing an exploratory step, as described in Section 2. Despite the difference in the quality of starting points, the major part of the learning process for the two algorithms is largely the same, as described in the following paragraphs.
Before learning starts, each agent initializes the expected utility values for all possible states to 0. Thereafter, upon receipt of a query, in addition to the normal operations described in the previous section, an agent Ai also sets up a timer to wait for the search results returned from its downstream agents. Once the timer expires or it has received responses from all its downstream agents, Ai merges the search results accrued from its downstream agents and forwards them to its upstream agent. Setting up the timer speeds up the learning because agents avoid waiting too long for the downstream agents to return search results. Note that these detailed results and the corresponding agent information will still be stored at Ai until the feedback information is passed from its upstream agent and the performance of its downstream agents can be evaluated. The duration of the
timer is related to the TTL value. In this paper, we set the timer based on ttli · 2, the sum of the travel time of the queries in the network, and tf, the expected time period that users would like to wait.
The search results will eventually be returned to the search session initiator A0. They will be compared to the relevance judgment provided by the final users (as described in the experiment section, the relevance judgment for the query set is provided along with the data collections). The reward will be calculated and propagated backward to the agents along the path by which the search results were passed. This is the reverse of the search-result propagation process. In the process of propagating the reward backward, agents update the estimates of their own potential utility values, generate an up-to-date policy, and pass their updated results to the neighboring agents based on the algorithm described in Section 3. Upon a change of its expected utility value, agent Ai sends out its updated utility estimate to its neighbors so that they can act upon the changed expected utility and corresponding state. This update message includes the potential reward as well as the corresponding state QSi = (qk, ttl_l) of agent Ai. Each neighboring agent, Aj, reacts to this kind of update message by updating the expected utility value for state QSj = (qk, ttl_l + 1) according to the newly announced expected utility value. Once they complete the update, the agents in turn inform the related neighbors to update their values. This process goes on until the TTL value in the update message reaches the TTL limit. To speed up the learning process, while updating the expected utility values of an agent Ai's neighboring agents, we specify that when agent Ai receives an updated expected utility value with ttl1, it also updates the expected utility values for any ttl0 > ttl1 whenever Um(Qk, ttl0) < Um(Qk, ttl1).
Figure 2: FSM-like state transition diagram describing
the Δ-relation in a DS specification
Definition 3. A dynamic semantics (DS) is a structure ⟨O, S, s0, Δ⟩ where
- O = {o1, o2, ..., on} is a set of dialogue operators,
- S ⊆ ℘(O) is a set of semantic states specified as subsets of dialogue operators which are valid in this state,
- s0 ∈ S is the initial semantic state,
- and the transition relation Δ ⊆ S × ℘(C) × ℘(Ag × Ag) × S defines the transitions over S triggered by conditions expressed as elements of ℘(C) (C is the set of all possible commitments).
The meaning of a transition (s, c, {(i1, j1), ..., (in, jn)}, s′) ∈ Δ is as follows: Assume a mapping act : Ag × Ag → S which specifies that the semantics of the operators in s holds for messages sent from i to j. Then, if CS ∈ c (i.e. the current CS matches the constraint c given as a collection of possible CSs), this will trigger a transition to state s′ for all pairs of agents in {(i1, j1), ..., (in, jn)} for which the constraint was satisfied, and act will be updated accordingly. In other words, the act mapping tracks which version of the semantics is valid for which pairs of communication partners over time.
2.4.2 Example
To illustrate these concepts, consider the following example: Let O = {RQ, RJ, AC, AC2} and S = {s0, s1}, where s0 = {RQ, RJ, AC} and s1 = {RQ, RJ, AC2}, i.e.
there are two possible states of the semantics which only differ in their definition of accept (we call alternative versions of a single dialogue operator, like AC and AC2, semantic variants). We assume that initially act(i, j) = s0 for all agents i, j ∈ Ag. We describe Δ by the transition diagram shown in Figure 2. In this diagram, edges carry labels c : A, where c is a constraint on the contents of CS followed by a description of the set A of agent pairs for which the transition should be made to the target state. Writing A(s) = act⁻¹(s) for the so-called range of agent pairs for which s is active, we use agent variables like i and j and the wildcard symbol ∗ that can be bound to any agent in A(s), and we assume that this binding carries over to descriptions of A. For example, the edge with label ι, v : Γ i→j ∈ CS : {(i, ∗)} ∪ {(j, i)} can be interpreted as follows: select all pairs (i, j) ∈ A(s0) for which ι, v : Γ i→j ∈ CS applies (i.e. i has violated some commitment toward j) and make s1 valid for the set of agents {(i, k) | k ∈ A(s0)} ∪ {(j, i)}. This means that for all agents i who have lied, s1 will become active for (i, j′) where j′ ∈ A(s0), and s1 will also become active for (j, i). The way the DS of the diagram above works is as follows: initially the semantics says (for every agent i) that they will fulfill any commitment truthfully (the use of AC ensures that expected behaviour is equivalent to compliant behaviour). If an agent i violates a commitment once, then s1 will become active for i towards all other agents, so that they won't expect i to fulfill any future commitments. Moreover, this will also apply to (j, i), so that the culprit i should not expect the deceived agent j to keep its promises towards i in the future either. However, this will not affect expectations regarding their interactions with i by agents other than j (i.e.
they still have no right to violate their own commitments). This reflects the idea that (only) agents that have been fooled are allowed to trespass (only) against those agents who trespassed against them. However, if i ever fulfills any commitment again (after the latest violation; this is ensured by the complex constraint used as a label for the transition from s1 to s0), the semantics in s0 will become valid for i again. In this case, though, s1 will still be valid for the pair (j, i), i.e. agent j will regain trust in i but cannot be expected to be trustworthy toward i ever again. Rather than suggesting that this is a particularly useful communication-inherent mechanism for sanctioning and rewarding specific kinds of behaviour, this example serves to illustrate the expressiveness of our framework and the kind of distinctions it enables us to make.
2.4.3 Formal Semantics
The semantics of a DS can be defined inductively as follows: Let CS(r) denote the contents of the commitment store after run r as before. We use the notation A(δ, CS) = {(i, j) | CS|i,j ∈ c} ∩ A(s) ∩ A to denote the set of agents that are to be moved from s to s′ due to transition rule δ = (s, c, A, s′) ∈ Δ given CS, where CS|i,j is the set of commitments that mention i and/or j (in their sender/receiver/content slots). In other words, A(δ, CS) contains those pairs of agents who are (i) mentioned in the commitments covered by the constraint c, (ii) contained in the range of s, and (iii) explicitly listed in A as belonging to those pairs of agents that should be affected by the transition δ.
Definition 4. The state of a dynamic semantics ⟨O, S, s0, Δ⟩ after run r with immediate predecessor r′ is defined as a mapping actr as follows:
1. r = ε: actε(i, j) = s0 for all i, j ∈ Ag
2. r ≠ ε: actr(i, j) = s′ if ∃δ = (s,
c, A, s′) ∈ Δ. (i, j) ∈ A(δ, CS(r)), and actr(i, j) = actr′(i, j) otherwise. This maintains the property act⁻¹r(s) = act⁻¹r′(s) − A(δ, CS(r)), which specifies that the agent pairs to be moved from s to s′ are removed from the range of s and added to the range of s′. What is not ensured by this definition is consistency of the state transition system, i.e. making sure that the semantic successor state is uniquely identified for any state of the commitment store and previous state, so that every agent pair is only assigned one active state in each step, i.e. actr is actually a function for any r.7
2.4.4 Integration
Once the DS itself has been specified, we need to integrate the different components of our framework to monitor the dynamics of our ACL semantics and its implications for expected agent behaviour. Starting with an initially empty commitment store CS and initial semantic state s0 such that actε(i, j) = s0 for any two agents i and j, the agent (or external observer) observes (a partial subset of) everything that is communicated in the system in each step. By applying the commitment transition rules (D, A, S, F and V) we can update CS accordingly, ignoring any observed message sent from i to j that does not syntactically match the dialogue operator set defined in actr(i, j) for the current run r. After this update has been performed for all observed messages and actions in this cycle, which should not depend on the ordering of messages,8 we can compute for any message sent from i to j the new value of actr′(i, j) depending on the semantic transition rules of the DS, where r′ is the successor run of r.
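The tracking procedure above can be illustrated with a minimal sketch of the two-state example DS from the earlier example (s0/s1, with s1 activated by a violation). The function names and the dictionary representation of the act mapping are assumptions made for illustration, not part of the formal framework:

```python
def make_act(agents, initial="s0"):
    """Initially, every ordered pair of distinct agents uses the initial
    semantic state s0 (act_epsilon(i, j) = s0)."""
    return {(i, j): initial for i in agents for j in agents if i != j}

def apply_violation(act, culprit, victim, agents):
    """Transition s0 -> s1: once `culprit` violates a commitment toward
    `victim`, s1 becomes active for (culprit, *) and for (victim, culprit)."""
    for k in agents:
        if k != culprit and act.get((culprit, k)) == "s0":
            act[(culprit, k)] = "s1"
    if act.get((victim, culprit)) == "s0":
        act[(victim, culprit)] = "s1"

def apply_fulfilment(act, agent, agents):
    """Transition s1 -> s0: fulfilling a commitment restores s0 for
    (agent, *); pairs (j, agent) moved earlier stay in s1, i.e. trust in
    `agent` is regained but trustworthiness toward `agent` is not."""
    for k in agents:
        if k != agent:
            act[(agent, k)] = "s0"
```

Running the example: after a violation by i toward j, s1 is active for all pairs (i, ∗) and for (j, i); a later fulfilment by i restores s0 for (i, ∗) while (j, i) remains in s1, matching the diagram's behaviour.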
With this, we can then determine what the compliant and expected behaviour of agents will be under these new conditions. Thus, an agent can use information about expected behaviour in its own planning processes by assuming that all agents involved will exhibit their expected (rather than just compliant) behaviours. This prediction will not always be more accurate than under normal (static) ACL semantics, but since it is common knowledge that agents assume expected behaviour to occur (and, by virtue of the DS-ACL specification, have the right to do so), most reasonable dynamic ACL specifications will make provisions to ensure that it is safer to assume expected rather than fully compliant behaviour if they want to promote their use by agents.
7 One way of ensuring this is to require that ∀s ∈ S. (∩{c | (s, c, A, s′) ∈ Δ} = ∅), so that no two constraints pertaining to outgoing edges of s can be fulfilled by CS at a time. In some cases this may be too coarse-grained; it would be sufficient for constraints to be mutually exclusive for the same pair of agents at any point in time, but this would have to be verified for an individual DS on a case-by-case basis.
8 This is the case for our operators, because their pre- and post-conditions never concern or affect any commitments other than those that involve both i and j; avoiding any connection to third parties helps us keep the CS update independent of the order in which observations are processed.
2.4.5 Complexity Issues
The main disadvantage of our approach is the space complexity of the dynamic ACL specification: If d is the number of dialogue operators in a language and b is the maximum number of semantic variants of a single dialogue operator within this language, the DS specification would have to specify O(b^d) states. In many cases, however, most of the speech acts will not have different variants (like RQ and RJ in our example), and this may significantly reduce the number
of DS states that need to be specified. As for the run-time behaviour of our semantics processing mechanism, we can assume that n messages/actions are sent/performed in each processing step in a system with n agents. Every commitment processing rule (D, S, etc.) has to perform a pass over the contents of CS. In the worst case, every originally created commitment (of which there may be nt after t steps) may have immediately become pending, active and violated (which doesn't require any further physical actions, so that every agent can create a new commitment in each step). Thus, if any agent creates a new commitment in each step without ever fulfilling it, this will result in the total size of CS being in O(nt).9 Regarding semantic state transitions, as many as n different pairs of agents could be affected in a single iteration by n messages. Assuming that the verification of CS-constraints for these transitions takes O(nt), this yields a total update time of O(n²t) for tracking DS evolution. This bound can be reduced to O(n²) if a quasi-stationarity assumption is made by limiting the window of earlier commitments that are being considered when verifying transition constraints to a constant size (and thus obtaining a finite set of possible commitment stores).10
3. ANALYSIS AND DISCUSSION
The main strength of our framework is that it allows us to exploit the three main elements of reciprocity:
• Reputation-based adaptation: The DS adapts the expectations toward agent i according to i's previous behaviour by modifying the semantic state to better reflect this behaviour (based on the assumption that it will repeat itself in the future).
• Mutuality of expectations: The DS adapts the expectations toward j's behaviour according to i's previous behaviour toward j to better reflect j's response to i's observed behaviour (in particular, allowing j to behave toward i as i behaved toward j earlier).
• Recovery mechanisms: The DS allows i
to revert to an earlier semantic state after having undone a change in expectations by a further, later change of behaviour (e.g. by means of redemption). In open systems in which we cannot enforce certain behaviours, these are effectively the only available means for indirect sanctions and rewards.
9 This is actually only a lower bound on the complexity of commitment processing, which could become even worse if dominated by the complexity of verifying entailment |=; however, this would also hold for a static ACL semantics.
10 For example, this could be useful if we want to discard commitments whose status was last modified more than k time steps ago (this is problematic, as it might force us to discard certain unset/pending commitments before they become pending/active).
There are two further dimensions that affect DS-based sanctioning and reward mechanisms and are orthogonal to the above: one concerns the character of the semantic state changes (i.e.
whether it is a reward or a punishment), the other the degree of adaptation (reputation-based mechanisms, for example, need not realistically reflect the behaviour of the culprit, but may instead utilise immediate (exaggerated) stigmatisation of agents as a deterrent). Albeit simple, our example DS described above makes use of all these aspects, and apart from consistency and completeness, it also satisfies some other useful properties:
1. Non-redundancy: No two dialogue operators in O should have identical pre- and post-conditions, and any two semantic variants of an operator must differ in terms of pre- and/or post-conditions:
∀o, o′ ∈ O. (pre(o) = pre(o′) ∧ post(o) = post(o′) ⇒ o = o′)
∀o, o′ ∈ O. (o ≠ o′ ∧ action(o) = action(o′) ⇒ pre(o) ≠ pre(o′) ∨ post(o) ≠ post(o′))
2. Reachability of all semantic states: Any constraint causing a transition must be satisfiable in principle when using the dialogue operators and physical actions that are provided:
∀(s, c, A, s′) ∈ Δ ∃r ∈ R(Env, A). CS(r) ∩ c ≠ ∅
3. Distinction between expected and compliant behaviour: The content of expectations must differ from that of normative commitments at least for some semantic variants (giving rise to non-compliant expectations for some runs):
∃r ∈ R(Env, A). expected(CS(r)) ≠ compliant(CS(r))
4. Compliance/deviance realisability: It must be possible for agents in principle to comply with normative commitments or deviate from them:
∃r ∈ R(Env, A). expected(CS(r)) ≠ ∅ ∧ compliant(CS(r)) ≠ ∅
While not absolutely essential, these constitute desiderata for the design of DS-ACLs, as they add to the simplicity and clarity of a given semantics specification. Our framework raises interesting questions regarding further potential properties of a DS, such as:
1. Respect for commitment autonomy: The semantics must not allow an agent to create a pending commitment for another
agent or to violate a commitment on behalf of another agent. While in some cases some agents should be able to enforce commitments upon others, this should generally be avoided to ensure agent autonomy.
2. Avoiding commitment inconsistency: The ACL must either disallow commitment to contradictory actions or beliefs, or at least provide operators for rectifying such contradictory claims. Under contradictory commitments, no possible behaviour can be compliant; it is up to the designer to decide to which extent this should be permitted.
3. Unprejudiced judgement: Expected behaviour prediction must not deviate from compliant behaviour prediction if deviant behaviour has not been observed so far (in particular, this must hold for the initial semantic state). This might not always be desirable, as initial distrust is necessary in some systems, but it increases the chances that agents will agree to participate in communication.
4. Convergence: The semantic state of each of the dialogue operators will remain stable after a finite number of transitions, regardless of any further agent behaviour.11 If this property holds, agents can stop tracking semantic state transitions after some amount of initial interaction. The advantage of this is reduced complexity, which of course comes at the price of giving up adaptiveness.
5. Forgiveness: After initial deviance, further compliant behaviour of an agent should lead to a semantic state that predicts compliant behaviour for that agent again. Here, we have to trade off cautiousness against the provision of incentives to resume cooperative behaviour. Trusting an agent makes others vulnerable to exploitation; blacklisting an agent forever, though, might lead that agent to keep up its unpredictable and potentially malicious behaviour.
6. Equality: Unless this is required by domain-specific constraints, the same dynamics of semantics should apply to all parties involved.
Our simple example semantics
satisfies all these properties apart from convergence. Many of the above properties are debatable, as we have to trade off cautiousness against the provision of incentives for cooperative behaviour. While we cannot make any general statements here regarding optimal DS-ACL design, our framework provides the tools to test and evaluate the performance of different such communication-inherent sanctioning and rewarding mechanisms (i.e. social rules that do not presuppose the ability to direct punishment or reward through physical actions) in real-world applications.
4. RELATED WORK
Expectation-based reasoning about interaction was first proposed in [2], considering the evolution of expectations described as probabilistic expectations over communication and action sequences. The same authors suggested a more general framework for expectation-based communication semantics [9], and argue for a consequentialist view of semantics that is based on defining the meaning of utterances in terms of their expected consequences and updating these expectations with new observations [11]. However, their approach does not use an explicit notion of commitments, which in our framework mediates between communication and behaviour-based grounding, and provides a clear distinction between a normative notion of compliance and a more empirical notion of expectation. Grounding for (mentalistic) ACL semantics has been investigated in [7], where grounded information is viewed as information that is publicly expressed and accepted as being true by all the agents participating in a conversation. Like [1] (which bases the notion of "publicly expressed" on roles rather than internal states of agents), these authors' main concern is to provide a verifiable basis for determining the semantics of expressed mental states and commitments. Though our framework is only concerned with commitment to the achievement of states of affairs rather than exchanged information, in a sense, the DS provides an alternative
view by specifying what will happen if the assumptions on which what is publicly accepted is based are violated.
11 In a non-trivial sense, i.e. when some initial transitions are possible in principle.
Our framework is also related to deontic methods for the specification of obligations, norms and sanctions. In this area, [16] is the only framework that we are aware of which considers dynamic obligations, norms and sanctions. However, as described above, we solely utilise semantic evolution as a sanctioning and rewarding mechanism, i.e. unlike this work we do not assume that agents can be directly punished or rewarded. Finally, the FSM-like structure of the DS transition systems in combination with agent communication is reminiscent of work on electronic institutions [5], but there the focus is on providing different means of communication in different scenes of the interaction process (e.g.
different protocols for different phases of market-based interaction), whereas we focus on different semantic variants that are to be used in the same interaction context.
5. CONCLUSION
This paper introduces dynamic semantics for ACLs as a method for dealing with some fundamental problems of agent communication in open systems, the simple underlying idea being that different courses of agent behaviour can give rise to different interpretations of the meaning of the messages exchanged among agents. Based on a common framework of commitment-based semantics, we presented a notion of grounding for commitments based on notions of compliant and expected behaviour. We then defined dynamic semantics as state transition systems over different semantic states that can be viewed as different versions of ACL semantics in the traditional sense, and can be easily associated with a planning-based view of reasoning about communication. Thereby, our focus was on simplicity and on providing mechanisms for tracking semantic evolution in a down-to-earth, algorithmic fashion to ensure applicability to many different agent designs. We discussed the properties of our framework, showing how it can be used as a powerful communication-inherent mechanism for rewarding and sanctioning agent behaviour in open systems without compromising agent autonomy, discussed its integration with agents' planning processes and its complexity issues, and presented a list of desiderata for the design of ACLs with such semantics. Currently, we are working on fully-fledged specifications of dynamic semantics for more complex languages and on extending our approach to mentalistic semantics, where we view statements about mental states as commitments regarding the rational implications of these mental states (a simple example is that an agent commits itself to dropping an ostensible intention that it claims to maintain if that intention turns out to be unachievable). In this context, we are particularly
interested in appropriate mechanisms to detect and respond to lying by interrogating suspicious agents and forcing them to commit themselves to (sets of) mental states publicly while sanctioning them when these are inconsistent with their actions.\n6.\nREFERENCES [1] G. Boella, R. Damiano, J. Hulstijn, and L. van der Torre.\nACL Semantics between Social Commitments and Mental Attitudes.\nIn Proceedings of the International Workshop on Agent Communication , 2006.\n[2] W. Brauer, M. Nickles, M. Rovatsos, G. Wei\u00df, and K. F. Lorentzen.\nExpectation-Oriented Analysis and Design.\nIn Proceedings of the 2nd Workshop on Agent-Oriented Software Engineering , LNCS 2222, 2001.\nSpringer-Verlag, Berlin.\n[3] P. R. Cohen and H. J. Levesque.\nCommunicative actions for artificial agents.\nIn Proceedings of the First International Conference on Multi-Agent Systems, pages 65-72, 1995.\n[4] P. R. Cohen and C. R. Perrault.\nElements of a Plan-Based Theory of Speech Acts.\nCognitive Science, 3:177-212, 1979.\n[5] M. Esteva, J. Rodriguez, J. Arcos, C. Sierra, and P. Garcia.\nFormalising Agent Mediated Electronic Institutions.\nIn Catalan Congres on AI, pages 29-38, 2000.\n[6] N. Fornara and M. Colombetti.\nOperational specification of a commitment-based agent communication language.\nIn Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 536-542, Bologna, Italy, 2002.\nACM Press.\n[7] B. Gaudou, A. Herzig, D. Longin, and M. Nickles.\nA New Semantics for the FIPA Agent Communication Language based on Social Attitudes.\nIn Proceedings of the 17th European Conference on Artificial Intelligence, Riva del Garda, Italy, 2006.\nIOS Press.\n[8] F. Guerin and J. Pitt.\nDenotational Semantics for Agent Communication Languages.\nIn Proceedings of the Fifth International Conference on Autonomous Agents, pages 497-504.\nACM Press, 2001.\n[9] M. Nickles, M. Rovatsos, and G. 
Weiss.\nEmpirical-Rational Semantics of Agent Communication.\nIn Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, New York, NY, 2004.\n[10] J. Pitt and A. Mamdani.\nSome Remarks on the Semantics of FIPA's Agent Communication Language.\nAutonomous Agents and Multi-Agent Systems, 2:333-356, 1999.\n[11] M. Rovatsos, M. Nickles, and G. Wei\u00df.\nInteraction is Meaning: A New Model for Communication in Open Systems.\nIn Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, 2003.\n[12] M. D. Sadek.\nDialogue acts are rational plans.\nIn Proceedings of the ESCA\/ETRW Workshop on the Structure of Multimodal Dialogue, pages 1-29, 1991.\n[13] M. Singh.\nAgent communication languages: Rethinking the principles.\nIEEE Computer, 31(12):55-61, 1998.\n[14] M. Singh.\nA social semantics for agent communication languages.\nIn Proceedings of the IJCAI Workshop on Agent Communication Languages, 2000.\n[15] M. P. Singh.\nA semantics for speech acts.\nAnnals of Mathematics and Artificial Intelligence, 8(1-2):47-71, 1993.\n[16] G. Wei\u00df, M. Nickles, M. Rovatsos, and F. Fischer.\nSpecifying the Intertwining of Cooperation and Autonomy in Agent-based Systems.\nJournal of Networks and Computer Applications, 29, 2007.\n[17] M. J. 
Wooldridge.\nVerifiable semantics for agent communication languages.\nIn Proceedings of the Third International Conference on Multi-Agent Systems, pages 349-356, Paris, France, 1998.\nThe Sixth Intl.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 107","lvl-3":"Dynamic Semantics for Agent Communication Languages\nABSTRACT\nThis paper proposes dynamic semantics for agent communication languages (ACLs) as a method for tackling some of the fundamental problems associated with agent communication in open multiagent systems.\nBased on the idea of providing alternative semantic \"variants\" for speech acts and transition rules between them that are contingent on previous agent behaviour, our framework provides an improved notion of grounding semantics in ongoing interaction, a simple mechanism for distinguishing between compliant and expected behaviour, and a way to specify sanction and reward mechanisms as part of the ACL itself.\nWe extend a common framework for commitment-based ACL semantics to obtain these properties, discuss desiderata for the design of concrete dynamic semantics together with examples, and analyse their properties.\n1.\nINTRODUCTION\nThe field of agent communication language (ACL) research has long been plagued by problems of verifiability and grounding [10, 13, 17].\nEarly mentalistic semantics that specify the semantics of speech acts in terms of pre- and post-conditions contingent on mental states of the participants (e.g. 
[3, 4, 12, 15]) lack verifiability regarding compliance of agents with the intended semantics (as the mental states of agents cannot be observed in open multiagent systems (MASs)).\nUnable to safeguard themselves against abuse by malicious, deceptive or malfunctioning agents, mentalistic semantics are inherently unreliable and inappropriate for use in open MAS in which agents with potentially\nconflicting objectives might deliberately exploit their adversaries' conceptions of message semantics to provoke a certain behaviour.\nCommitment-based semantics [6, 8, 14], on the other hand, define the meaning of messages exchanged among agents in terms of publicly observable commitments, i.e. pledges to bring about a state of affairs or to perform certain actions.\nSuch semantics solve the verifiability problem as they allow for tracing the status of existing commitments at any point in time given observed messages and actions so that any observer can, for example, establish whether an agent has performed a promised action.\nHowever, this can only be done a posteriori, and this creates a grounding problem as no expectations regarding what will happen in the future can be formed at the time of uttering or receiving a message purely on the grounds of the ACL semantics.\nFurther, this implies that the semantics specification does not provide an interface to agents' deliberation and planning mechanisms and hence it is unclear how rational agents would be able to decide whether to subscribe to a suggested ACL semantics when it is deployed.\nFinally, none of the existing approaches allows the ACL to specify how to respond to a violation of its semantics by individual agents.\nThis has two implications: Firstly, it is left up to the individual agent to reason about potential violations, i.e. to bear the burden of planning its own reaction to others' non-compliant behaviour (e.g. 
in order to sanction them) and to anticipate others' reactions to own misconduct without any guidance from the ACL specification.\nSecondly, existing approaches fail to exploit the possibilities of sanctioning and rewarding certain behaviours in a communication-inherent way by modifying the future meaning of messages uttered or received by compliant\/deviant agents.\nIn this paper, we propose dynamic semantics (DSs) for ACLs as a solution to these problems.\nOur notion of DS is based on the very simple idea of defining different alternatives for the meaning of individual speech acts (so-called semantic variants) in an ACL semantics specification, and transition rules between semantic states (i.e. collections of variants for different speech acts) that describe the current meaning of the ACL.\nThese elements taken together result in a FSM-like view of ACL specifications where each individual state provides a complete ACL semantics and state transitions are triggered by observed agent behaviour in order to (1) reflect future expectations based on previous interaction experience and (2) sanction or reward certain kinds of behaviour.\nIn defining a DS framework for commitment-based ACLs, this paper makes three contributions:\n1.\nAn extension of commitment-based ACL semantics to provide an improved notion of grounding commitments in agent interaction and to allow ACL specifications to be directly used for planning-based rational decision making.\n2.\nA simple way of distinguishing between compliant and expected behaviour with respect to an ACL specification that enables reasoning about the potential behaviour of agents purely from an ACL semantics perspective.\n3.\nA mechanism for specifying how meaning evolves with agent behaviour and how this can be used to describe communication-inherent sanctioning and rewarding mechanisms essential to the design of open MASs. 
.\nFurthermore, we discuss desiderata for DS design that can be derived from our framework, present examples and analyse their properties.\nThe remainder of this paper is structured as follows: Section 2 introduces a formal framework for dynamic ACL semantics.\nIn section 3 we present an analysis and discussion of this framework and discuss desiderata for the design of ACLs with dynamic semantics.\nSection 4 reviews related approaches, and section 5 concludes.\n2.\nFORMAL FRAMEWORK\n2.1 Commitments\n2.2 Grounding\n2.3 Static ACL Semantics\n2.4 Dynamic Semantics\n2.4.1 Defining Dynamic Semantics\n2.4.2 Example\n2.4.3 Formal Semantics\n2.4.4 Integration\n2.4.5 Complexity Issues\n3.\nANALYSIS AND DISCUSSION
1.\nRespect for commitment autonomy: The semantics\n4.\nRELATED WORK\nExpectation-based reasoning about interaction was first proposed in [2], considering the evolution of expectations described as probabilistic expectations of communication and action sequences.\nThe same authors suggested a more general framework for expectation-based communication semantics [9], and argue for a \"consequentialist\" view of semantics that is based on defining the meaning of utterances in terms of their expected consequences and updating these expectations with new observations [11].\nHowever, their approach does not use an explicit notion of commitments which in our framework mediates between communication and behaviour-based grounding, and provides a clear distinction between a normative notion of compliance and a more empirical notion of expectation.\nGrounding for (mentalistic) ACL semantics has been investigated in [7] where grounded information is viewed as \"information that is publicly expressed and accepted as being true by all the agents participating in a conversation\".\nLike [1] (which bases the notion of \"publicly expressed\" on roles rather than internal states of agents) these authors' main concern is to provide a verifiable basis for determining the semantics of expressed mental states and commitments.\nThough our framework is only concerned with commitment to the achievement of states of affairs rather than exchanged information, in a sense, DS provides an alternative view by specifying what will happen if the assumptions on which \"what is publicly accepted\" is based are violated.\n11 In a non-trivial sense, i.e. when some initial transitions are possible in principle
Our framework is also related to deontic methods for the specification of obligations, norms and sanctions.\nIn this area, [16] is the only framework that we are aware of which considers dynamic obligations, norms and sanctions.\nHowever, as described above, we solely utilise semantic evolution as a sanctioning and rewarding mechanism, i.e. unlike this work we do not assume that agents can be directly punished or rewarded.\nFinally, the FSM-like structure of the DS transition systems in combination with agent communication is reminiscent of work on electronic institutions [5], but there the focus is on providing different means of communication in different \"scenes\" of the interaction process (e.g. different protocols for different phases of market-based interaction) whereas we focus on different semantic variants that are to be used in the same interaction context.\n5.\nCONCLUSION\nThis paper introduces dynamic semantics for ACLs as a method for dealing with some fundamental problems of agent communication in open systems, the simple underlying idea being that different courses of agent behaviour can give rise to different interpretations of the meaning of the messages exchanged among agents.\nBased on a common framework of commitment-based semantics, we presented a notion of grounding for commitments based on notions of compliant and expected behaviour.\nWe then defined dynamic semantics as state transition systems over different semantic states that can be viewed as different \"versions\" of ACL semantics in the traditional sense, and can be easily associated with a planning-based view of reasoning about communication.\nIn doing so, our focus was on simplicity and on providing mechanisms for tracking semantic evolution in a \"down-to-earth\", algorithmic fashion to ensure applicability to many different agent designs.\nWe discussed the properties of our framework, showing how it can be used 
as a powerful communication-inherent mechanism for rewarding and sanctioning agent behaviour in open systems without compromising agent autonomy, discussed its integration with agents' planning processes, complexity issues, and presented a list of desiderata for the design of ACLs with such semantics.\nCurrently, we are working on fully-fledged specifications of dynamic semantics for more complex languages and on extending our approach to mentalistic semantics where we view statements about mental states as commitments regarding the rational implications of these mental states (a simple example for this is that an agent commits itself to dropping an ostensible intention that it is claiming to maintain if that intention turns out to be unachievable).\nIn this context, we are particularly interested in appropriate mechanisms to detect and respond to lying by \"interrogating\" suspicious agents and forcing them to commit themselves to (sets of) mental states publicly while sanctioning them when these are inconsistent with their actions.","lvl-4":"Dynamic Semantics for Agent Communication Languages\nABSTRACT\nThis paper proposes dynamic semantics for agent communication languages (ACLs) as a method for tackling some of the fundamental problems associated with agent communication in open multiagent systems.\nBased on the idea of providing alternative semantic \"variants\" for speech acts and transition rules between them that are contingent on previous agent behaviour, our framework provides an improved notion of grounding semantics in ongoing interaction, a simple mechanism for distinguishing between compliant and expected behaviour, and a way to specify sanction and reward mechanisms as part of the ACL itself.\nWe extend a common framework for commitment-based ACL semantics to obtain these properties, discuss desiderata for the design of concrete dynamic semantics together with examples, and analyse their properties.\n1.\nINTRODUCTION\nThe field of agent communication 
language (ACL) research has long been plagued by problems of verifiability and grounding [10, 13, 17].\nUnable to safeguard themselves against abuse by malicious, deceptive or malfunctioning agents, mentalistic semantics are inherently unreliable and inappropriate for use in open MAS in which agents with potentially\nconflicting objectives might deliberately exploit their adversaries' conceptions of message semantics to provoke a certain behaviour.\nCommitment-based semantics [6, 8, 14], on the other hand, define the meaning of messages exchanged among agents in terms of publicly observable commitments, i.e. pledges to bring about a state of affairs or to perform certain actions.\nSuch semantics solve the verifiability problem as they allow for tracing the status of existing commitments at any point in time given observed messages and actions so that any observer can, for example, establish whether an agent has performed a promised action.\nFurther, this implies that the semantics specification does not provide an interface to agents' deliberation and planning mechanisms and hence it is unclear how rational agents would be able to decide whether to subscribe to a suggested ACL semantics when it is deployed.\nFinally, none of the existing approaches allows the ACL to specify how to respond to a violation of its semantics by individual agents.\nSecondly, existing approaches fail to exploit the possibilities of sanctioning and rewarding certain behaviours in a communication-inherent way by modifying the future meaning of messages uttered or received by compliant\/deviant agents.\nIn this paper, we propose dynamic semantics (DSs) for ACLs as a solution to these problems.\nOur notion of DS is based on the very simple idea of defining different alternatives for the meaning of individual speech acts (so-called semantic variants) in an ACL semantics specification, and transition rules between semantic states (i.e. 
collections of variants for different speech acts) that describe the current meaning of the ACL.\nThese elements taken together result in a FSM-like view of ACL specifications where each individual state provides a complete ACL semantics and state transitions are triggered by observed agent behaviour in order to (1) reflect future expectations based on previous interaction experience and (2) sanction or reward certain kinds of behaviour.\nIn defining a DS framework for commitment-based ACLs, this paper makes three contributions:\n1.\nAn extension of commitment-based ACL semantics to provide an improved notion of grounding commitments in agent interaction and to allow ACL specifications to be directly used for planning-based rational decision making.\n2.\nA simple way of distinguishing between compliant and expected behaviour with respect to an ACL specification that enables reasoning about the potential behaviour of agents purely from an ACL semantics perspective.\n3.\nA mechanism for specifying how meaning evolves with agent behaviour and how this can be used to describe communication-inherent sanctioning and rewarding mechanisms essential to the design of open MASs. 
.\nFurthermore, we discuss desiderata for DS design that can be derived from our framework, present examples and analyse their properties.\nThe remainder of this paper is structured as follows: Section 2 introduces a formal framework for dynamic ACL semantics.\nIn section 3 we present an analysis and discussion of this framework and discuss desiderata for the design of ACLs with dynamic semantics.\nSection 4 reviews related approaches, and section 5 concludes.\n4.\nRELATED WORK\nExpectation-based reasoning about interaction was first proposed in [2], considering the evolution of expectations described as probabilistic expectations of communication and action sequences.\nThe same authors suggested a more general framework for expectation-based communication semantics [9], and argue for a \"consequentialist\" view of semantics that is based on defining the meaning of utterances in terms of their expected consequences and updating these expectations with new observations [11].\nHowever, their approach does not use an explicit notion of commitments which in our framework mediates between communication and behaviour-based grounding, and provides a clear distinction between a normative notion of compliance and a more empirical notion of expectation.\nGrounding for (mentalistic) ACL semantics has been investigated in [7] where grounded information is viewed as \"information that is publicly expressed and accepted as being true by all the agents participating in a conversation\".\nLike [1] (which bases the notion of \"publicly expressed\" on roles rather than internal states of agents) these authors' main concern is to provide a verifiable basis for determining the semantics of expressed mental states and commitments.\n11 In a non-trivial sense, i.e. when some initial transitions are possible in principle
Our framework is also related to deontic methods for the specification of obligations, norms and sanctions.\nIn this area, [16] is the only framework that we are aware of which considers dynamic obligations, norms and sanctions.\nHowever, as described above, we solely utilise semantic evolution as a sanctioning and rewarding mechanism, i.e. unlike this work we do not assume that agents can be directly punished or rewarded.\n5.\nCONCLUSION\nThis paper introduces dynamic semantics for ACLs as a method for dealing with some fundamental problems of agent communication in open systems, the simple underlying idea being that different courses of agent behaviour can give rise to different interpretations of the meaning of the messages exchanged among agents.\nBased on a common framework of commitment-based semantics, we presented a notion of grounding for commitments based on notions of compliant and expected behaviour.\nWe then defined dynamic semantics as state transition systems over different semantic states that can be viewed as different \"versions\" of ACL semantics in the traditional sense, and can be easily associated with a planning-based view of reasoning about communication.\nIn doing so, our focus was on simplicity and on providing mechanisms for tracking semantic evolution in a \"down-to-earth\", algorithmic fashion to ensure applicability to many different agent designs.\nWe discussed the properties of our framework, showing how it can be used as a powerful communication-inherent mechanism for rewarding and sanctioning agent behaviour in open systems without compromising agent autonomy, discussed its integration with agents' planning processes, complexity issues, and presented a list of desiderata for the design of ACLs with such semantics.","lvl-2":"Dynamic Semantics for Agent Communication Languages\nABSTRACT\nThis paper proposes dynamic semantics for agent communication languages (ACLs) as a method for tackling some of the fundamental 
problems associated with agent communication in open multiagent systems.\nBased on the idea of providing alternative semantic \"variants\" for speech acts and transition rules between them that are contingent on previous agent behaviour, our framework provides an improved notion of grounding semantics in ongoing interaction, a simple mechanism for distinguishing between compliant and expected behaviour, and a way to specify sanction and reward mechanisms as part of the ACL itself.\nWe extend a common framework for commitment-based ACL semantics to obtain these properties, discuss desiderata for the design of concrete dynamic semantics together with examples, and analyse their properties.\n1.\nINTRODUCTION\nThe field of agent communication language (ACL) research has long been plagued by problems of verifiability and grounding [10, 13, 17].\nEarly mentalistic semantics that specify the semantics of speech acts in terms of pre- and post-conditions contingent on mental states of the participants (e.g. [3, 4, 12, 15]) lack verifiability regarding compliance of agents with the intended semantics (as the mental states of agents cannot be observed in open multiagent systems (MASs)).\nUnable to safeguard themselves against abuse by malicious, deceptive or malfunctioning agents, mentalistic semantics are inherently unreliable and inappropriate for use in open MAS in which agents with potentially\nconflicting objectives might deliberately exploit their adversaries' conceptions of message semantics to provoke a certain behaviour.\nCommitment-based semantics [6, 8, 14], on the other hand, define the meaning of messages exchanged among agents in terms of publicly observable commitments, i.e. 
pledges to bring about a state of affairs or to perform certain actions.\nSuch semantics solve the verifiability problem as they allow for tracing the status of existing commitments at any point in time given observed messages and actions so that any observer can, for example, establish whether an agent has performed a promised action.\nHowever, this can only be done a posteriori, and this creates a grounding problem as no expectations regarding what will happen in the future can be formed at the time of uttering or receiving a message purely on the grounds of the ACL semantics.\nFurther, this implies that the semantics specification does not provide an interface to agents' deliberation and planning mechanisms and hence it is unclear how rational agents would be able to decide whether to subscribe to a suggested ACL semantics when it is deployed.\nFinally, none of the existing approaches allows the ACL to specify how to respond to a violation of its semantics by individual agents.\nThis has two implications: Firstly, it is left up to the individual agent to reason about potential violations, i.e. to bear the burden of planning its own reaction to others' non-compliant behaviour (e.g. in order to sanction them) and to anticipate others' reactions to its own misconduct without any guidance from the ACL specification.\nSecondly, existing approaches fail to exploit the possibilities of sanctioning and rewarding certain behaviours in a communication-inherent way by modifying the future meaning of messages uttered or received by compliant\/deviant agents.\nIn this paper, we propose dynamic semantics (DSs) for ACLs as a solution to these problems.\nOur notion of DS is based on the very simple idea of defining different alternatives for the meaning of individual speech acts (so-called semantic variants) in an ACL semantics specification, and transition rules between semantic states (i.e. 
collections of variants for different speech acts) that describe the current meaning of the ACL.\nThese elements taken together result in a FSM-like view of ACL specifications where each individual state provides a complete ACL semantics and state transitions are triggered by observed agent behaviour in order to (1) reflect future expectations based on previous interaction experience and (2) sanction or reward certain kinds of behaviour.\nIn defining a DS framework for commitment-based ACLs, this paper makes three contributions:\n1.\nAn extension of commitment-based ACL semantics to provide an improved notion of grounding commitments in agent interaction and to allow ACL specifications to be directly used for planning-based rational decision making.\n2.\nA simple way of distinguishing between compliant and expected behaviour with respect to an ACL specification that enables reasoning about the potential behaviour of agents purely from an ACL semantics perspective.\n3.\nA mechanism for specifying how meaning evolves with agent behaviour and how this can be used to describe communication-inherent sanctioning and rewarding mechanisms essential to the design of open MASs. 
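The FSM-like view described above (semantic states that each assign one variant to every speech act, with transitions triggered by observed agent behaviour) can be sketched in code. This is a toy illustration, not the paper's formal definition: the state names, speech acts, variants, and events ("complied"/"violated") are all invented for the example.

```python
# Illustrative sketch of a dynamic ACL semantics as a state machine.
# Each semantic state maps every speech act to a variant; observed
# behaviour triggers transitions that reward or sanction agents by
# changing the future meaning of their messages.
from dataclasses import dataclass

@dataclass
class DynamicSemantics:
    states: dict        # state -> {speech_act: variant}
    transitions: dict   # (state, observed_event) -> next state
    current: str

    def meaning(self, speech_act: str) -> str:
        """Current interpretation of a speech act."""
        return self.states[self.current][speech_act]

    def observe(self, event: str) -> None:
        """Advance the semantic state on observed agent behaviour."""
        self.current = self.transitions.get((self.current, event), self.current)

# Two hypothetical states: in "trusted", a promise is binding; after a
# violation, "distrusted" weakens the meaning of future messages.
ds = DynamicSemantics(
    states={
        "trusted": {"request": "hearer-committed", "promise": "binding"},
        "distrusted": {"request": "no-commitment", "promise": "non-binding"},
    },
    transitions={
        ("trusted", "violated"): "distrusted",
        ("distrusted", "complied"): "trusted",
    },
    current="trusted",
)

assert ds.meaning("promise") == "binding"
ds.observe("violated")   # sanction: meaning of future messages weakens
assert ds.meaning("promise") == "non-binding"
ds.observe("complied")   # reward: compliant behaviour restores meaning
assert ds.meaning("promise") == "binding"
```

Each state is a complete ACL semantics in the traditional sense; only the transition rules, driven by observed behaviour, are new.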
.\nFurthermore, we discuss desiderata for DS design that can be derived from our framework, present examples and analyse their properties.\nThe remainder of this paper is structured as follows: Section 2 introduces a formal framework for dynamic ACL semantics.\nIn section 3 we present an analysis and discussion of this framework and discuss desiderata for the design of ACLs with dynamic semantics.\nSection 4 reviews related approaches, and section 5 concludes.\n2.\nFORMAL FRAMEWORK\nOur general framework for describing the kind of MASs we are interested in is fairly simple.\nLet Ag = {1, ..., n} be a finite set of agents, {Ac_i}_{i \u2208 Ag} a collection of action sets (where Ac_i are the actions of agent i), A = Ac_1 \u00d7 ... \u00d7 Ac_n the joint action space, and Env a set of environment states.\nA run is a sequence r = e_1 \u2192 e_2 \u2192 ... \u2192 e_t whose transitions are labelled with joint actions ~a_1, ..., ~a_{t-1}, where ~a_i \u2208 A (~a_i[j] denotes the action of agent j in this tuple) and e_i \u2208 Env.\nWe define |r| = t, last(r) = e_t, r[1:j] is short for the j-long initial sub-sequence of r, and we write r' \u2291 r for any run r' iff \u2203j \u2208 N. r' = r[1:j].\nWriting R(Env, A) for the set of all possible runs, we can view each agent i as a function g_i: R(Env, A) \u2192 Ac_i describing the agent's action choices depending on the history of previous environment states and joint actions.\nThe set of all agent functions for i given A and Env is denoted by G_i(Env, A).\nThe (finite, discrete, stationary, fully accessible, deterministic) environment is defined by a state transformer function f: Env \u00d7 A \u2192 Env, so that the system's operation for an initial state e_1 is defined by e_{t+1} = f(e_t, (g_1(r[1:t]), ..., g_n(r[1:t]))) (where ~g denotes the vector of functions g_i).\nThis definition implies that execution of actions is synchronised among agents, so that the system evolves through an execution of \"rounds\" where all agents perform their actions simultaneously.\nWe denote the set of all runs given a particular configuration of agent functions ~g by R(Env, A, ~g).\nWe write g_i \u223c r where g_i is an agent function and r a run iff b' 1 si.\nIn our experiments, we 
took a maximal number of thresholds k = 8.\nFor instance, in the iono problem case, there were 34 numeric attributes, and an instance is described with 506 atoms.\nBelow are given the accuracy results of our system along with previous results.\nThe column Nb ex. refers to the number of examples used for learning (see footnote 5).\n(Footnote 4: The probability for the value of a to be in any interval is constant.)\nColumn (1) represents minimal and maximal accuracy values for the thirty-three classifiers tested in [8].\nColumn (2) represents the results of [13], where various learning methods are compared to ensemble learning methods using weighted classifiers sets.\nColumns S-1 and S-10 give the accuracy of SMILE with respectively 1 and 10 agents.\nPb | Nb ex. | (1) | (2) | S-1 | S-10\nttt | 862\/574 | \/\/ | 76.2-99.7 | 99.7 | 99.9\nkr-vs-kp | 2876\/958 | \/\/ | 91.4-99.4 | 96.8 | 97.3\niono | 315 | \/\/ | 88.0-91.8 | 87.2 | 88.1\nbupa | 310 | 57-72 | 58-69.3 | 62.5 | 63.3\nbreastw | 614 | 91-97 | 94.3-97.3 | 94.7 | 94.7\nvote | 391 | 94-96 | 95.3-96 | 91.9 | 92.6\npima | 691 | \/\/ | 71.5-73.4 | 65.0 | 65.0\nheart | 243 | 66-86 | 77.1-84.1 | 69.5 | 70.7\nThis table shows that the incremental algorithm corresponding to the single-agent case gives honorable results relative to non-incremental classical methods using larger and more complex hypotheses.\nIn some cases, there is an accuracy improvement with a 10-agent MAS.\nHowever, with such benchmark data, which are often noisy, the difficulty does not really come from the way in which the search space is explored, and therefore the improvement observed is not always significant.\nThe same kind of phenomenon has been observed with methods dedicated to hard Boolean problems [4].\n4.2 MAS synchronization Here we consider that n single agents learn without interactions and at a given time start interacting, thus forming a MAS.\nThe purpose is to observe how the agents take advantage of collaboration when they start from different states of beliefs 
and memories.\nWe compare in this section a 1-MAS, a 10-MAS (ref) and a 10-MAS (100sync) whose agents did not communicate during the arrival of the first 100 examples (10 per agent).\nThe three accuracy curves are shown in figure 5.\nBy comparing the single agent curve and the synchronized 10-MAS, we can observe that after the beginning of the synchronization, that is at 125 examples, accuracies are identical.\nThis was expected since as soon as an example e received by the MAS contradicts the current hypothesis of the agent ra receiving it, this agent makes an update and its new hypothesis is proposed to the other agents for criticism.\nTherefore, this first contradictory example brings the MAS to reach consistency relative to the whole set of examples present in agents' memories.\nA higher accuracy, corresponding to a 10-MAS, is obtained later, from the 175th example.\nIn other words, the benefit of a better exploration of the search space is obtained slightly later in the learning process.\nNote that this synchronization happens naturally in all situations where agents have, for some reason, a divergence between their hypothesis and the system memory.\nThis includes the fusion of two MAS into a single one or the arrival of new agents in an existing MAS.\n4.3 Experiments on asynchronous learning: the effect of a large data stream\n(Footnote 5: For ttt and kr-vs-kp, our protocol did not use more than respectively 574 and 958 learning examples, so we put another number in the column.)\nFigure 5: Accuracies of a 1-MAS, a 10-MAS, and a 10-MAS synchronized after 100 examples.\nIn this experiment we relax our slow learning mode: the examples are sent at a given rate to the MAS.\nThe resulting example stream is measured in ms\u22121, and represents the number of examples sent to the MAS each ms. 
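The revise-and-criticise loop described above (an agent that receives a contradictory example updates its hypothesis and proposes it to the other agents, which answer with counterexamples from their own memories until no agent objects) can be illustrated with a toy model. Everything here is a hypothetical simplification, not the actual SMILE algorithm: hypotheses are modelled as plain sets of positive instances, and noise-free data is assumed so that the loop terminates.

```python
# Toy sketch of MAS-consistency maintenance (NOT the SMILE algorithm):
# a hypothesis is simply the set of instances believed positive, and
# critics reply with stored counterexamples until none remain.

def consistent(hyp, example):
    """An example (x, label) is consistent with hyp iff hyp predicts its label."""
    x, label = example
    return (x in hyp) == label

class Agent:
    def __init__(self, memory=None):
        self.memory = list(memory or [])

    def criticise(self, hyp):
        """Return a stored counterexample to hyp, or None if there is none."""
        for e in self.memory:
            if not consistent(hyp, e):
                return e
        return None

def receive(learner, critics, example):
    """Process one incoming example and restore consistency across the MAS."""
    learner.memory.append(example)
    hyp = {x for x, label in learner.memory if label}   # revised hypothesis
    changed = True
    while changed:                  # iterate until no critic objects
        changed = False
        for critic in critics:
            ce = critic.criticise(hyp)
            if ce is not None:      # store the counterexample (limited redundancy)
                learner.memory.append(ce)
                hyp = {x for x, label in learner.memory if label}
                changed = True
    return hyp

learner = Agent()
critics = [Agent([(5, True), (2, False)]), Agent([(7, True)])]
hyp = receive(learner, critics, (3, True))
assert hyp == {3, 5, 7}   # consistent with every example stored in the MAS
```

The point mirrored here is that the learner never sees most examples directly: it only stores the counterexamples critics send back, yet the final hypothesis is consistent with every example held anywhere in the MAS.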
Whenever the stream is too large, the MAS cannot reach MAS consistency on reception of an example from the environment before a new example arrives. This means that the update process, started by agent r0 upon receiving an example, may be unfinished when a new example is received by r0 or by another agent r1. As a result, a critic agent may, at a given instant t, have to send counterexamples to hypotheses sent by various agents. However, since the agents in our setting memorize all the examples they receive, whenever the stream ends the MAS necessarily reaches MAS consistency with respect to all the examples received so far. In our experiments, though its learning curve is slowed down during the intense learning phase (corresponding to low accuracy of the current hypotheses), the MAS still reaches a satisfying hypothesis later on, as there are fewer and fewer counterexamples in the example stream. In Figure 6 we compare the accuracies of two 11-MAS respectively submitted to example streams of different rates when learning the M11 formula. The learning curve of the MAS receiving an example at a 1/33 ms^-1 rate is almost unaltered (see Figure 4), whereas the 1/16 ms^-1 MAS is first severely slowed down before catching up with the first one.

5. RELATED WORKS

Since 1996 [15], various works have been performed on learning in MAS, but rather few on concept learning. In [11] the MAS performs a form of ensemble learning in which the agents are lazy learners (no explicit representation is maintained) and sell useless examples to other agents. In [10] each agent observes all the examples but only perceives a part of their representation. In mutual online concept learning [14] the agents converge to a unique hypothesis, but each agent produces examples from its own concept representation, thus resulting in a kind of synchronization rather than in pure concept learning.

Figure 6: Accuracies of two asynchronous 11-MAS (1/33 ms^-1 and 1/16 ms^-1 example rates).

6. CONCLUSION

We have presented and experimented here with a protocol for MAS online concept learning. The main feature of this collaborative learning mechanism is that it maintains a consistency property: though during the learning process each agent only receives and stores, with some limited redundancy, part of the examples received by the MAS, at any moment the current hypothesis is consistent with the whole set of examples. The hypotheses of our experiments do not address the issues of distributed MAS such as faults (for instance, messages could be lost or corrupted) or other failures in general (crashes, byzantine faults, etc.). Nevertheless, our framework is open, i.e., the agents can leave the system or enter it while the consistency mechanism is preserved. For instance, if we introduce a timeout mechanism, even when a critic agent crashes or omits to answer, the consistency with the other critics (within the remaining agents) is entailed. In [1], a similar approach has been applied to MAS abduction problems: the hypotheses to maintain, given incomplete information, are then facts or statements. Further work concerns, first, coupling induction and abduction in order to perform collaborative concept learning when examples are only partially observed by each agent, and second, investigating partial-memory learning: how learning is preserved whenever one agent or the whole MAS forgets some selected examples.

Acknowledgments

We are very grateful to Dominique Bouthinon for implementing late modifications in SMILE, so much easing our experiments. Part of this work has been performed during the first author's visit to the Atelier De BioInformatique of Paris VI university, France.

7. REFERENCES

[1] G. Bourgne, N. Maudet, and S. Pinson. When agents communicate hypotheses in critical situations. In DALT-2006, May 2006.
[2] W. W. Cohen. Fast effective rule induction. In ICML, pages 115-123, 1995.
[3] D. J. Newman, S. Hettich, C. L. Blake, and C. J. Merz. UCI repository of machine learning databases, 1998.
[4] S. Esmeir and S. Markovitch. Lookahead-based algorithms for anytime induction of decision trees. In ICML '04, pages 257-264. Morgan Kaufmann, 2004.
[5] J. Fürnkranz. A pathology of bottom-up hill-climbing in inductive rule learning. In ALT, volume 2533 of LNCS, pages 263-277. Springer, 2002.
[6] A. Guerra-Hernández, A. ElFallah-Seghrouchni, and H. Soldano. Learning in BDI multi-agent systems. In CLIMA IV, volume 3259, pages 218-233. Springer Verlag, 2004.
[7] M. Henniche. MGI: an incremental bottom-up algorithm. In IEEE Aust. and New Zealand Conference on Intelligent Information Systems, pages 347-351, 1994.
[8] T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 40(3):203-228, 2000.
[9] M. A. Maloof and R. S. Michalski. Incremental learning with partial instance memory. Artif. Intell., 154(1-2):95-126, 2004.
[10] P. J. Modi and W.-M. Shen. Collaborative multiagent learning for classification tasks. In AGENTS '01, pages 37-38. ACM Press, 2001.
[11] S. Ontañón and E. Plaza. Recycling data for multi-agent learning. In ICML '05, pages 633-640. ACM Press, 2005.
[12] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[13] U. Rückert and S. Kramer. Towards tight bounds for rule learning. In ICML '04, page 90, New York, NY, USA, 2004. ACM Press.
[14] J. Wang and L. Gasser. Mutual online concept learning for multiple agents. In AAMAS, pages 362-369. ACM Press, 2002.
[15] G. Weiß and S. Sen, editors. Adaption and Learning in Multi-Agent Systems, volume 1042 of Lecture Notes in Computer Science. Springer, 1996.
[16] I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, October 1999.

SMILE: Sound Multi-agent Incremental LEarning ;-) *

(* The primary author of this paper is a student.)

ABSTRACT

This article deals with the problem of collaborative learning in a multi-agent system. Here each agent can incrementally update its beliefs B (the concept representation) so that they are kept consistent with the whole set of information K (the examples) that he has received from the environment or from other agents. We extend this notion of consistency (or soundness) to the whole MAS and discuss how to obtain that, at any moment, the same consistent concept representation is present in each agent. The corresponding protocol is applied to supervised concept learning. The resulting method, SMILE (standing for Sound Multi-agent Incremental LEarning), is described and experimented here. Surprisingly, some difficult boolean formulas are better learned, given the same learning set, by a multi-agent system than by a single agent.

1. INTRODUCTION

This article deals with the problem of collaborative concept learning in a multi-agent system. [6] introduces a characterisation of learning in multi-agent systems according to the level of awareness of the agents. At level 1, agents learn in the system without taking into account the presence of other agents, except through the modification brought upon the environment by their actions. Level 2 implies direct interaction between the agents, as they can exchange messages to improve their learning. Level 3 would require agents to take into account the competencies of other agents, and to be able to learn from observation of the other agents' behaviour (while considering them as independent entities and not as an undetermined part of the environment, as in level 1). We focus in this
paper on level 2, studying direct interaction between agents involved in a learning process. Each agent is assumed to be able to learn incrementally from the data he receives, meaning that each agent can update his belief set B to keep it consistent with the whole set of information K that he has received from the environment or from other agents. In such a case, we will say that he is a-consistent. Here, the belief set B represents hypothetical knowledge that can therefore be revised, whereas the set of information K represents certain knowledge, consisting of non-revisable observations and facts. Moreover, we suppose that at least a part BC of the beliefs of each agent is common to all agents and must stay that way. Therefore, an update of this common set BC by an agent r must provoke an update of BC for the whole community of agents. This leads us to define the mas-consistency of an agent with respect to the community. The update process of the community beliefs when one of its members gets new information can then be defined as the consistency maintenance process ensuring that every agent in the community stays mas-consistent. This mas-consistency maintenance process gives the agent getting new information the role of a learner, and implies communication with other agents acting as critics. However, agents are not specialised and can in turn be learners or critics, none of them being confined to a specific role. Pieces of information are distributed among the agents, but can be redundant. There is no central memory. The work described here has its origin in former work concerning learning in an intentional multi-agent system using a BDI formalism [6]. In that work, agents had plans, each of them being associated with a context defining the conditions in which it can be triggered. Plans (each of them having its own context) were common to the whole set of agents in the community. Agents had to adapt their plan contexts depending on the
failure or success of executed plans, using a learning mechanism and asking other agents for examples (plan successes or failures). However, this work lacked a collective learning protocol enabling a real autonomy of the multi-agent system. The study of such a protocol is the object of the present paper.

In section 2 we formally define the mas-consistency of an update mechanism for the whole MAS and we propose a generic update mechanism proved to be mas-consistent. In section 3 we describe SMILE, an incremental multi-agent concept learner applying our mas-consistent update mechanism to collaborative concept learning. Section 4 describes various experiments on SMILE and discusses various issues, including how the accuracy and the simplicity of the current hypothesis vary when comparing single-agent learning and MAS learning. In section 5 we briefly present some related works, and we conclude in section 6 by discussing further investigations on mas-consistent learning.

2. FORMAL MODEL

2.1 Definitions and framework

In this section, we present a general formulation of collective incremental learning in a cognitive multi-agent system. We represent a MAS as a set of agents r1, ..., rn. Each agent ri has a belief set Bi consisting of all the revisable knowledge he has. Part of this knowledge must be shared with other agents. The part of Bi that is common to all agents is denoted as BC. This common part provokes a dependency between the agents. If an agent ri updates his belief set Bi to B'i, changing in the process BC into B'C, all other agents rk must then update their belief set Bk to B'k so that B'C ⊆ B'k.
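A minimal sketch of this belief structure, assuming plain Python sets for beliefs and memories (the class and function names are ours, not SMILE's implementation):

```python
# Sketch: each agent holds a revisable belief set B_i whose common part B_C
# must stay identical across the MAS, plus certain information K_i.

class Agent:
    def __init__(self, name):
        self.name = name
        self.common = set()    # B_C: beliefs shared by every agent
        self.private = set()   # B_i \ B_C: agent-specific beliefs
        self.memory = set()    # K_i: non-revisable observations and facts

    def beliefs(self):
        # B_i always contains the common part: B_C subset of B_i
        return self.common | self.private

def propagate_common(mas, new_common):
    """When one agent changes B_C into B'_C, every agent r_k must update
    its belief set so that B'_C is a subset of B'_k."""
    for agent in mas:
        agent.common = set(new_common)

mas = [Agent(f"r{i}") for i in range(3)]
propagate_common(mas, {"h1", "h2"})
```

After propagation, every agent's belief set contains the new common part, which is the dependency described above.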
Moreover, each agent ri has stored some certain information Ki. We suppose that some consistency property Cons(Bi, Ki) can be verified by the agent itself between its beliefs Bi and its information Ki. As said before, Bi represents knowledge that might be revised, whereas Ki represents observed facts, taken as being true, which can possibly contradict Bi.

Definition 1. a-consistency of an agent
An agent ri is a-consistent iff Cons(Bi, Ki) is true.

Example 1. Agent r1 has a set of plans which are in the common part BC of B1. Each plan P has a triggering context d(P) (which acts as a pre-condition) and a body. Some piece of information k could be "plan P, triggered in situation s, has failed in spite of s being an instance of d(P)". If this piece of information is added to K1, then agent r1 is not a-consistent anymore: Cons(B1, K1 ∪ {k}) is false.

We also want to define some notion of consistency for the whole MAS, depending on the belief and information sets of its constituting elements. We will first define the consistency of an agent ri with respect to its belief set Bi and its own information set Ki together with all the information sets K1, ..., Kn from the other agents of the MAS. We will simply do that by considering what would be the a-consistency of the agent if he had the information of all the other agents. We call this notion the mas-consistency:

Definition 2. mas-consistency of an agent
An agent ri is mas-consistent iff Cons(Bi, Ki ∪ K) is true, where K = ∪_{j ∈ {1,...,n}−{i}} Kj is the set of all information from the other agents of the MAS.

Example 2. Using the previous example, suppose that the piece of information k is included in the information set K2 of agent r2. As long as this piece of information is not transmitted to r1, and so added to K1, r1 remains a-consistent. However, r1 is not mas-consistent, as k is in the set K of all information of the MAS.

The global consistency of the MAS is then simply the mas-consistency of all its
agents.

Definition 3. Consistency of a MAS
A MAS r1, ..., rn is consistent iff all its agents ri are mas-consistent.

We now define the required properties for a revision mechanism M updating an agent ri when it gets a piece of information k. In the following, we will suppose that:

- Update is always possible, that is, an agent can always modify its belief set Bi in order to regain its a-consistency. We will say that each agent is locally efficient.
- If Cons(Bi, K1) and Cons(Bi, K2) hold for two information sets K1 and K2, we also have Cons(Bi, K1 ∪ K2). That is, the a-consistency of the agents is additive.
- If a piece of information k concerning the common set BC is consistent with an agent, it is consistent with all agents: for every pair of agents (ri, rj) such that Cons(Bi, Ki) and Cons(Bj, Kj) are true, we have, for every piece of information k, Cons(Bi, Ki ∪ {k}) iff Cons(Bj, Kj ∪ {k}). In such a case, we will say that the MAS is coherent.

This last condition simply means that the common belief set BC is independent of the possible differences between the belief sets Bi of each agent ri. In the simplest case, B1 = ... = Bn = BC. M will also be viewed as an incremental learning mechanism and represented as a mapping changing Bi into B'i. In the following, we shall write ri(Bi, Ki) for ri when it is useful.

Definition 4. a-consistency of a revision
An update mechanism M is a-consistent iff for any agent ri and any piece of information k reaching ri, the a-consistency of this agent is preserved. In other words, iff:
ri(Bi, Ki) a-consistent ⇒ ri(B'i, K'i) a-consistent,
where B'i = M(Bi) and K'i = Ki ∪ {k}.

In the same way, we define the mas-consistency of a revision mechanism as the a-consistency of this mechanism should the agents dispose of all the information in the MAS. In the following, we shall write, if needed, ri(Bi, Ki, K) for the agent ri in MAS r1, ..., rn.

Definition 5. mas-consistency of a revision
An update mechanism Ms is mas-consistent iff for every agent ri and every piece of information k reaching ri, the mas-consistency of this agent is preserved. In other words, iff:
ri(Bi, Ki, K) mas-consistent ⇒ ri(B'i, K'i, K) mas-consistent,
where B'i = Ms(Bi), K'i = Ki ∪ {k}, and K = ∪ Kj is the set of all information from the MAS.

Finally, when a mas-consistent mechanism is applied by an agent getting a new piece of information, a desirable side-effect of the mechanism should be that all other agents remain mas-consistent after any modification of the common part BC; that is, the MAS itself should become consistent again. This property is defined as follows:

Definition 6. Strong mas-consistency of a revision
An update mechanism Ms is strongly mas-consistent iff:
- Ms is mas-consistent, and
- the application of Ms by an agent preserves the consistency of the MAS.

2.2 A strongly mas-consistent update mechanism

The general idea is that, since information is distributed among all the agents of the MAS, there must be some interaction between the learner agent and the other agents in a strongly mas-consistent update mechanism Ms.
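The definitions above, and this general idea of repeated learner/critic interactions, can be sketched as an executable toy. This is an illustration under assumptions of ours, not the paper's implementation: the hypothesis is a plain predicate over example descriptions, and `learn` is a stand-in for the internal a-consistent mechanism M (here, naive memorization of positives rather than SMILE's actual learner).

```python
# Sketch: Cons(B, K) holds when hypothesis B classifies every stored example
# in K correctly; the update loop re-runs the internal learner until no
# critic can produce a counterexample, restoring mas-consistency.

def cons(hypothesis, examples):
    """Cons(B, K): B agrees with every (description, label) pair in K."""
    return all(hypothesis(desc) == label for desc, label in examples)

def update(learner_memory, critics, learn):
    """Strongly mas-consistent update: the learner revises with M (`learn`);
    each critic answers with a single counterexample (minimal response)."""
    while True:
        hypothesis = learn(learner_memory)           # internal mechanism M
        counter = None
        for critic_memory in critics:                # synchronous interactions
            bad = [ex for ex in critic_memory if not cons(hypothesis, [ex])]
            if bad:
                counter = bad[0]                     # one k' such that
                break                                # Cons(B'_j, k') is false
        if counter is None:                          # every critic accepts
            return hypothesis, learner_memory
        learner_memory = learner_memory | {counter}  # k' triggers a new iteration

# Toy internal learner (stand-in for M): memorize positive descriptions.
def learn(memory):
    positives = {d for d, lab in memory if lab}
    return lambda d: d in positives

h, mem = update({(0, False), (2, True)}, [{(3, True)}, {(1, False)}], learn)
```

The loop terminates because only the learner memorizes new information: each transmitted counterexample strictly grows its memory, mirroring the termination argument given for Ms below.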
In order to ensure its mas-consistency, Ms consists of reiterated applications, by the learner agent ri, of an internal a-consistent mechanism M, followed by some interactions between ri and the other agents, until ri regains its mas-consistency. We describe below such a mechanism, first with a description of an interaction, then of an iteration, and finally a statement of the termination condition of the mechanism. The mechanism is triggered by an agent ri upon receipt of a piece of information k disrupting the mas-consistency. We shall write M(Bi) for the belief set of the learner agent ri after an update, B'C for the common part modified by ri, and B'j for the belief set of another agent rj induced by the modification of its common part BC into B'C.

An interaction I(ri, rj) between the learner agent ri and another agent rj, acting as critic, is constituted of the following steps:

- Agent ri sends the update B'C of the common part of its beliefs. Having applied its update mechanism, ri is a-consistent.
- Agent rj checks the modification B'j of its beliefs induced by the update B'C. If this modification preserves its a-consistency, rj adopts it.
- Agent rj sends either an acceptance of B'C or a denial along with one (or more) piece(s) of information k' such that Cons(B'j, k') is false.

An iteration of Ms is then composed of:

- the reception by the learner agent ri of a piece of information and the update M(Bi) restoring its a-consistency;
- a set of interactions I(ri, rj) (in which several critic agents can possibly participate).

If at least one piece of information k' is transmitted to ri, the addition of k' will necessarily make ri a-inconsistent and a new iteration will then occur. The mechanism Ms ends when no agent can provide such a piece of information k'. When this is the case, the mas-consistency of the learner agent ri is restored.

Proposition 1. Let r1, ..., rn be a consistent MAS in which agent ri
receives a piece of information k breaking its a-consistency, and let M be an a-consistent internal update mechanism.\nThe update mechanism Ms described above is strongly mas-consistent.\nProof.\nThe proof derives directly from the mechanism description.\nThe mechanism ensures that each time an agent receives an event, its mas-consistency is restored.\nAs the other agents all adopt the final update B'C, they are all mas-consistent, and the MAS is consistent.\nTherefore Ms is a strongly consistent update mechanism.\nIn the mechanism Ms described above, the learner agent is the only one that receives and memorizes information during the mechanism execution.\nThis ensures that Ms terminates.\nThe pieces of information transmitted by other agents and memorized by the learner agent are redundant, as they are already present in the MAS, more precisely in the memory of the critic agents that transmitted them.\nNote that the mechanism Ms proposed here explicitly indicates neither the order nor the scope of the interactions.\nWe will consider in the following that the modification proposal B'C is sent sequentially to the different agents (synchronous mechanism).\nMoreover, the response of a critic agent will only contain one piece of information inconsistent with the proposed modification.\nWe will say that the response of the agent is minimal.\nThis mechanism Ms, being synchronous with minimal response, minimizes the amount of information transmitted by the agents.\nWe will now illustrate it in the case of multi-agent concept learning.\n3.\nSOUND MULTI-AGENT INCREMENTAL LEARNING\n3.1 The learning task\nWe experiment with the mechanism proposed above in the case of incremental MAS concept learning.\nWe consider here a hypothesis language in which a hypothesis is a disjunction of terms.\nEach term is a conjunction of atoms from a set A.\nAn example is represented by a tag + or \u2212 and a description2 composed of a subset of atoms e \u2286 A.\nA term covers an example if its constituting
atoms are included in the example.\nA hypothesis covers an example if one of its terms covers it.\nThis representation will be used below for learning boolean formulae.\nNegative literals are here represented by additional atoms, like not\u2212a.\nThe boolean formula f = (a \u2227 b) \u2228 (b \u2227 \u00ac c) will then be written (a \u2227 b) \u2228 (b \u2227 not\u2212c).\nA positive example of f, like {not\u2212a, b, not\u2212c}, represents a model of f.\n3.2 Incremental learning process\nThe learning process is an update mechanism that, given a current hypothesis H, a memory E = E+ \u222a E\u2212 filled with the previously received examples, and a new positive or negative example e, produces a new updated hypothesis.\nBefore this update, the given hypothesis is complete, meaning that it covers all positive examples of E+, and coherent, meaning that it does not cover any negative example of E\u2212.\n2When no confusion is possible, the word example will be used to refer to the pair (tag, description) as well as the description alone.\n166 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nAfter the update, the new hypothesis must be complete and coherent with the new memory state E \u222a {e}.\nWe describe below our single-agent update mechanism, inspired by previous work on incremental learning [7].\nIn the following, a hypothesis H for the target formula f is a list of terms h, each of them being a conjunction of atoms.\nH is coherent if all terms h are coherent, and H is complete if each element of E+ is covered by at least one term h of H. Each term is by construction the lgg (least general generalization) of a subset of positive instances {e1,..., en} [5], that is, the most specific term covering {e1,..., en}.\nThe lgg operator is defined by considering examples as terms, so we denote as lgg(e) the most specific term that covers e, and as lgg(h, e) the most specific term which is more general than h and that covers e.
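Because terms and example descriptions are both just sets of atoms, the coverage and lgg operators above reduce to simple set operations. The following is a minimal sketch (the function names are ours, not the paper's): for conjunctions of atoms, lgg(h, e) is the intersection of the two atom sets.

```python
# Sketch of the coverage and lgg operators for conjunctions of atoms.
# Terms and example descriptions are both represented as sets of atoms.

def covers(term, example):
    """A term covers an example if every atom of the term occurs in it."""
    return frozenset(term) <= frozenset(example)

def lgg_single(example):
    """lgg(e): the most specific term covering e, i.e. all of its atoms."""
    return frozenset(example)

def lgg(term, example):
    """lgg(h, e): the most specific term more general than h covering e.
    For conjunctions of atoms this is the intersection of the atom sets."""
    return frozenset(term) & frozenset(example)

def hypothesis_covers(hypothesis, example):
    """A hypothesis (list of terms) covers an example if one term does."""
    return any(covers(h, example) for h in hypothesis)
```

Generalizing a term can only remove atoms, so a term obtained by lgg covers strictly more descriptions, which is why coherence with the stored negatives must be re-checked after each generalization.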
Restricting the terms to lggs is the basis of many bottom-up learning algorithms (for instance [5]).\nIn the typology proposed by [9], our update mechanism is an incremental learner with full instance memory: learning is made by successive updates and all examples are stored.\nThe update mechanism depends on the current hypothesis H, the current example sets E+ and E\u2212, and the new example e.\nThere are three possible cases:\n\u2022 e is positive and H covers e, or e is negative and H does not cover e. No update is needed; H is already complete and coherent with E \u222a {e}.\n\u2022 e is positive and H does not cover e: e is denoted as a positive counterexample of H.\nThen we seek to generalize in turn the terms h of H.\nAs soon as a correct generalization h' = lgg(h, e) is found, h' replaces h in H.\nIf there is a term that is less general than h', it is discarded.\nIf no generalization is correct (meaning here coherent), H \u222a lgg(e) replaces H.\n\u2022 e is negative and H covers e: e is denoted as a negative counterexample of H. Each term h covering e is then discarded from H and replaced by a set of terms {h'1,..., h'n} that is, as a whole, coherent with E\u2212 \u222a {e} and that covers the examples of E+ uncovered by H \u2212 {h}.\nTerms of the final hypothesis H that are less general than others are discarded from H.\nWe now describe the case where e = e\u2212 is a covered negative example.\nThe following functions are used here:\n\u2022 coveredOnlyBy(h, E+) gives the subset of E+ covered by h and no other term of H.\n\u2022 bestCover(h1, h2) gives h1 if h1 covers more examples from uncoveredPos than h2; otherwise it gives h2.\n\u2022 covered(h) gives the elements of uncoveredPos covered by h.
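As an illustration, the second case above (a positive counterexample) can be sketched as follows. This is our own simplified rendering, not the authors' code: terms are frozensets of atoms, lgg(h, e) is set intersection, and a term is coherent when it covers no stored negative example.

```python
def covers(term, example):
    # A term (conjunction of atoms) covers an example containing its atoms.
    return term <= frozenset(example)

def coherent(term, negatives):
    # Coherent: the term covers no stored negative example.
    return not any(covers(term, e) for e in negatives)

def update_positive(hypothesis, e, negatives):
    """Handle a positive counterexample e (a set of atoms) of the hypothesis.

    Try to generalize each term h in turn to h' = lgg(h, e); the first
    coherent generalization replaces h, and terms less general than h'
    are discarded.  If no generalization is coherent, lgg(e) is added
    to the hypothesis as a new term.
    """
    e = frozenset(e)
    for h in hypothesis:
        h2 = h & e  # lgg(h, e) for conjunctions of atoms
        if coherent(h2, negatives):
            # Keep the other terms, dropping any less general than h2.
            rest = [t for t in hypothesis if t != h and not h2 <= t]
            return [h2] + rest
    return list(hypothesis) + [e]
```

The negative-counterexample case (specializing each covering term while preserving completeness on the uncovered positives) follows the helper functions listed above and is omitted here for brevity.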
\/\/ Specialization of each h covering e\u2212\nNote that this mechanism tends both to make a minimal update of the current hypothesis and to minimize the number of terms in the hypothesis, in particular by discarding terms less general than other ones after updating a hypothesis.\n3.3 Collective learning\nIf H is the current hypothesis, Ei the current example memory of agent ri and E the set of all the examples received by the system, the notation of section 2 becomes Bi = BC = H, Ki = Ei and K = E. Cons(H, Ei) states that H is complete and coherent with Ei.\nIn such a case, ri is a-consistent.\nThe piece of information k received by agent ri is here simply an example e along with its tag.\nIf e is such that the current hypothesis H is not complete or coherent with Ei \u222a {e}, e contradicts H: ri becomes a-inconsistent, and therefore the MAS is no longer consistent.\nThe update of a hypothesis when a new example arrives is an a-consistent mechanism.\nFollowing proposition 1, this mechanism can be used to produce a strongly mas-consistent mechanism: upon reception of a new example in the MAS by an agent r, an update is possibly needed and, after a set of interactions between r and the other agents, results in a new hypothesis shared by all the agents that restores the consistency of the MAS, that is, one which is complete and coherent with the set Es of all the examples present in the MAS.\nIt is clear that by minimizing the number of hypothesis modifications, this synchronous and minimal mechanism minimizes the number of examples received by the learner from other agents, and therefore the total number of examples stored in the system.\n4.\nEXPERIMENTS\nIn the following, we will learn a boolean formula that is a difficult test for the learning method: the 11-multiplexer (see [4]).\nIt concerns 3 address boolean attributes a0, a1, a2 and 8 data boolean attributes d0,..., d7.\nThe formula f11 is satisfied if the number coded by the 3 address attributes is the number of a
data attribute whose value is 1.\nIts formula is the following: f11 = (a0 \u2227 a1 \u2227 a2 \u2227 d7) \u2228 (a0 \u2227 a1 \u2227 \u00ac a2 \u2227 d6) \u2228 (a0 \u2227 \u00ac a1 \u2227 a2 \u2227 d5) \u2228 (a0 \u2227 \u00ac a1 \u2227 \u00ac a2 \u2227 d4) \u2228 (\u00ac a0 \u2227 a1 \u2227 a2 \u2227 d3) \u2228 (\u00ac a0 \u2227 a1 \u2227 \u00ac a2 \u2227 d2) \u2228 (\u00ac a0 \u2227 \u00ac a1 \u2227 a2 \u2227 d1) \u2228 (\u00ac a0 \u2227 \u00ac a1 \u2227 \u00ac a2 \u2227 d0).\nThere are 2048 = 2^11 possible examples, half of which are positive (meaning they satisfy f11) while the other half are negative.\nAn experiment is typically composed of 50 trials.\nEach trial corresponds to a sequence of 600 examples that are incrementally learned by a Multi-Agent System with n agents (n-MAS).\nVariables such as accuracy (i.e. the frequency of correct classification of a set of unseen examples), hypothesis size (i.e. the number of terms in the current formula) and the number of stored examples are recorded each time 25 examples are received by the system during those runs.\nIn the protocol used here, a new example is sent to a random agent when the MAS is consistent.\nThe next example is sent, in turn, to another agent once the MAS consistency has been restored.\nIn this way we simulate a kind of slow learning: the frequency of example arrivals is low compared to the time taken by an update.\n4.1 Efficiency of MAS concept learning\n4.1.1 Execution time\nWe briefly discuss here the execution time of learning in the MAS.\nNote that the whole set of actions and interactions in the MAS is simulated on a single processor.\nFigure 1 shows that time depends linearly on the number of agents.\nAt the end of the most active part of learning (200 examples), a 16-MAS has taken 4 times more learning time than a 4-MAS.\nThis execution time represents the whole set of learning and\nFigure 1: Execution time of a n-MAS (from n = 2 at the bottom to n = 20 at the top).\ncommunication activity and hints at the cost of maintaining a consistent learning hypothesis in a MAS composed of autonomous
agents.\n4.1.2 Redundancy in the MAS memory\nWe now study the distribution of the examples in the MAS memory.\nRedundancy is written Rs = ns\/ne, where ns is the total number of examples stored in the MAS, that is, the sum of the sizes of the agents' example memories Ei, and ne is the total number of examples received from the environment by the MAS.\nIn figure 2, we compare redundancies in MAS of 2 to 20 agents.\nThere is a peak, slowly moving from 80 to 100 examples, that represents the number of examples for which the learning is most active.\nFor 20 agents, maximal redundancy is no more than 6, which is far less than the maximal theoretical value of 20.\nNote that when learning becomes less active, redundancy tends towards its minimal value 1: when there are no more updates, examples are only\nFigure 2: Redundancy of examples stored in a n-MAS (from n = 2 at the bottom to n = 20 at the top).\nstored by the agent that receives them.\n4.1.3 A n-MAS selects a simpler solution than a single agent\nThe proposed mechanism tends to minimize the number of terms in the selected hypothesis.\nDuring learning, the size of the current hypothesis grows beyond the optimum, and then decreases when the MAS converges.\nIn the Multiplexer-11 testbed, the optimal number of terms is 8, but there also exist equivalent formulas with more terms.\nIt is interesting to note that in this case the 10-MAS converges towards an exact solution closer to the optimal number of terms (here 8) (see Figure 3).\nAfter 1450 examples have been presented, both the 1-MAS and the 10-MAS have exactly learned the concept (the respective accuracies are 0.9999 and 1), but the single agent expresses the result on average as an 11.0-term DNF whereas the 10-MAS expresses it as an 8.8-term DNF.\nHowever, for some other boolean functions we found that during learning the 1-MAS always produces larger hypotheses than the 10-MAS, but that both MAS converge to hypotheses of similar size.\n4.1.4 A n-MAS is more accurate than a
single agent\nFigure 4 shows the improvement brought by a MAS with n agents compared to a single agent.\nThis improvement was not especially expected because, whether we have one or n agents, when N examples are given to the MAS it has access to the same amount of information, maintains only one ongoing hypothesis, and uses the same basic revision algorithm whenever an agent has to modify the current hypothesis.\nNote that while the accuracies of the 1, 2, 4 and 10-MAS are significantly different, getting better as the number of agents increases, there is no clear difference beyond this point: the accuracy curve of the 100-agent MAS is very close to that of the 10-agent MAS.\n4.1.4.1 Boolean formulas.\nTo evaluate this accuracy improvement, we have experimented with our protocol on other boolean function learning problems.\nAs in the Multiplexer-11 case, these functions\nFigure 3: Size of the hypothesis built by a 1- and a 10-MAS: the M11 case.\nFigure 4: Accuracy of a n-MAS: the M11 case (from bottom to top, n = 1, 2, 4, 10, 100).\nare learnt in the form of more or less syntactically complex DNF3 (that is, with more or less conjunctive terms in the DNF), but are also more or less difficult to learn, as it can be difficult to find one's way through the hypothesis space to reach them.\nFurthermore, the presence in the description of irrelevant attributes (that is, attributes that do not belong to the target DNF) makes the problem more difficult.\nThe following problems have been selected to experiment with our protocol: (i) the multiplexer-11 with 9 irrelevant attributes: M11 9, (ii) the 20-multiplexer M20 (with 4 address bits and 16 data bits), (iii) a difficult parity problem (see [4]), the Xorp m: there must be an odd number of bits with value 1 among the p first attributes for the instance to be positive, the m other bits being irrelevant, and (iv) a simple DNF formula (a \u2227 b \u2227 c) \u2228 (c \u2227 d \u2227 e) \u2228 (e \u2227 f \u2227 g) \u2228 (g \u2227 h \u2227 i) with 19 irrelevant attributes.\nThe following table sums up some
information about these problems, giving the total number of attributes including irrelevant ones, and the number of irrelevant attributes.\nBelow are given the accuracy results of our learning mechanism with a single agent and a 10-agent MAS, along with the results of two standard algorithms implemented with the learning environment WEKA [16]: JRip (an implementation of RIPPER [2]) and Id3 [12].\nFor the experiments with JRip and Id3, we measured the mean accuracy over 50 trials, each time randomly separating the examples into a learning set and a test set.\nJRip and Id3 parameters are the default parameters, except that JRip is used without pruning.\nThe following table shows the results:\nIt is clear that difficult problems are better solved with more agents (see for instance xor5 15).\nWe think that these benefits, which can be important with an increasing number of agents, are due to the fact that each agent really memorizes only part of the total number of examples, and this part is partly selected by the other agents as counterexamples, which causes a greater number of current hypothesis updates and, therefore, a better exploration of the hypothesis space.\n4.1.4.2 ML database problems.\nWe also did experiments with some non-boolean problems.\nWe considered only two-class (positive\/negative) problems, taken from the UCI learning problems database [3].\nIn all these problems, examples are described as a vector of couples (attribute, value).\nThe value domains can be either boolean, numeric (wholly ordered set), or nominal (non-ordered set).\nAn adequate set of atoms A must be constituted for each problem.\nFor instance, if a is a numeric attribute, we define at most k thresholds si, giving k+1 intervals of uniform density4.\nTherefore, each distinct threshold si gives two atoms, a \u2264 si and a > si.\nIn our experiments, we took a maximal number of thresholds k = 8.\nFor instance, in the iono problem case, there were 34 numeric attributes, and an instance is described with 506 atoms.\nBelow are given
the accuracy results of our system along with previous results.\nThe column \"Nb ex.\" refers to the number of examples used for learning5.\nColumn (1) represents the minimal and maximal accuracy values for the thirty-three classifiers tested in [8].\nColumn (2) represents the results of [13], where various learning methods are compared to ensemble learning methods using weighted classifier sets.\nColumns S-1 and S-10 give the accuracy of SMILE with respectively 1 and 10 agents.\nThis table shows that the incremental algorithm corresponding to the single-agent case gives honorable results relative to non-incremental classical methods using larger and more complex hypotheses.\nIn some cases, there is an accuracy improvement with a 10-agent MAS.\nHowever, with such benchmark data, which are often noisy, the difficulty does not really come from the way in which the search space is explored, and therefore the improvement observed is not always significant.\nThe same kind of phenomenon has been observed with methods dedicated to hard boolean problems [4].\n4.2 MAS synchronization\nHere we consider that n single agents learn without interactions and, at a given time, start interacting, thus forming a MAS.\nThe purpose is to observe how the agents take advantage of collaboration when they start from different states of beliefs and memories.\nWe compare in this section a 1-MAS, a 10-MAS (ref) and a 10-MAS (100sync) whose agents did not communicate during the arrival of the first 100 examples (10 per agent).\nThe three accuracy curves are shown in figure 5.\nBy comparing the single-agent curve and the synchronized 10-MAS, we can observe that after the beginning of the synchronization, that is, at 125 examples, accuracies are identical.\nThis was expected since, as soon as an example e received by the MAS contradicts the current hypothesis of the agent ra receiving it, this agent
makes an update, and its new hypothesis is proposed to the other agents for criticism.\nTherefore, this first contradictory example brings the MAS to reach consistency relative to the whole set of examples present in the agents' memories.\nA higher accuracy, corresponding to a 10-MAS, is obtained later, from the 175th example.\nIn other words, the benefit of a better exploration of the search space is obtained slightly later in the learning process.\nNote that this synchronization happens naturally in all situations where agents have, for some reason, a divergence between their hypothesis and the system memory.\nThis includes the fusion of two MAS into a single one or the arrival of new agents in an existing MAS.\n4.3 Experiments on asynchronous learning: the effect of a large data stream\n5For ttt and kr-vs-kp, our protocol did not use more than respectively 574 and 958 learning examples, so we put another number in the column.\nFigure 5: Accuracies of a 1-MAS, a 10-MAS, and a 10-MAS synchronized after 100 examples.\nIn this experiment we relax our slow learning mode: the examples are sent at a given rate to the MAS.\nThe resulting example stream is measured in ms\u22121, and represents the number of examples sent to the MAS each ms.
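Throughout these experiments, including the stream experiments that follow, the target concept is the 11-multiplexer f11 from Section 4. As a concrete reference, it can be written directly as an evaluator; this is our own sketch of the target function, not part of the learning system:

```python
def f11(a, d):
    """11-multiplexer: a = (a0, a1, a2) address bits, d = (d0, ..., d7) data bits.

    The instance is positive exactly when the data bit whose index is
    coded by the address bits equals 1 (a0 is taken as the most
    significant bit, so a0 \u2227 a1 \u2227 a2 selects d7 as in the DNF of
    Section 4).
    """
    index = 4 * a[0] + 2 * a[1] + a[2]
    return d[index] == 1
```

Enumerating all 2^11 assignments confirms the statement in Section 4 that exactly half of the 2048 instances are positive. The learner, of course, never sees f11 itself, only tagged examples drawn from it.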
Whenever the stream is too large, the MAS cannot reach MAS consistency on reception of an example from the environment before a new example arrives.\nThis means that the update process, started by agent r0 as it received an example, may be unfinished when a new example is received by r0 or another agent r1.\nAs a result, a critic agent may, at instant t, have to send counterexamples of hypotheses sent by various agents.\nHowever, since the agents in our setting memorize all the examples they receive, whenever the stream ends the MAS necessarily reaches MAS consistency with respect to all the examples received so far.\nIn our experiments, though its learning curve is slowed down during the intense learning phase (corresponding to low accuracy of the current hypotheses), the MAS still reaches a satisfying hypothesis later on, as there are fewer and fewer counterexamples in the example stream.\nIn Figure 6 we compare the accuracies of two 11-MAS respectively submitted to example streams of different rates when learning the M11 formula.\nThe learning curve of the MAS receiving an example at a 1\/33 ms\u22121 rate is almost unaltered (see Figure 4), whereas the 1\/16 ms\u22121 MAS is first severely slowed down before catching up with the first one.\n5.\nRELATED WORK\nSince 1996 [15], various works have addressed learning in MAS, but rather few concern concept learning.\nIn [11] the MAS performs a form of ensemble learning in which the agents are lazy learners (no explicit representation is maintained) and sell useless examples to other agents.\nIn [10] each agent observes all the examples but only perceives a part of their representation.\nIn mutual online concept learning [14] the agents converge to a unique hypothesis, but each agent produces examples from its own concept representation, thus resulting in a kind of synchronization rather than in pure concept learning.\nFigure 6: Accuracies of two asynchronous 11-MAS (1\/33 ms\u22121 and 1\/16 ms\u22121 example
rates).\n6.\nCONCLUSION\nWe have presented and experimented with a protocol for MAS online concept learning.\nThe main feature of this collaborative learning mechanism is that it maintains a consistency property: though during the learning process each agent only receives and stores, with some limited redundancy, part of the examples received by the MAS, at any moment the current hypothesis is consistent with the whole set of examples.\nThe hypotheses of our experiments do not address the issues of distributed MAS such as faults (for instance, messages could be lost or corrupted) or other failures in general (crashes, byzantine faults, etc.).\nNevertheless, our framework is open, i.e., the agents can leave the system or enter it while the consistency mechanism is preserved.\nFor instance, if we introduce a timeout mechanism, then even when a critic agent crashes or omits to answer, consistency with the other critics (within the remaining agents) is still ensured.\nIn [1], a similar approach has been applied to MAS abduction problems: the hypotheses to maintain, given incomplete information, are then facts or statements.\nFurther work concerns, first, coupling induction and abduction in order to perform collaborative concept learning when examples are only partially observed by each agent, and second, investigating partial memory learning: how learning is preserved whenever one agent or the whole MAS forgets some selected examples.","keyphrases":["increment learn","agent","multi-agent learn","collabor concept learn","learn process","knowledg","ma-consist","updat mechan","synchron"],"prmu":["P","P","M","R","M","U","U","M","U"]} {"id":"I-11","title":"Real-Time Agent Characterization and Prediction","abstract":"Reasoning about agents that we observe in the world is challenging. Our available information is often limited to observations of the agent's external behavior in the past and present.
To understand these actions, we need to deduce the agent's internal state, which includes not only rational elements (such as intentions and plans), but also emotive ones (such as fear). In addition, we often want to predict the agent's future actions, which are constrained not only by these inward characteristics, but also by the dynamics of the agent's interaction with its environment. BEE (Behavior Evolution and Extrapolation) uses a faster-than-real-time agent-based model of the environment to characterize agents' internal state by evolution against observed behavior, and then predict their future behavior, taking into account the dynamics of their interaction with the environment.","lvl-1":"Real-Time Agent Characterization and Prediction H. Van Dyke Parunak, Sven Brueckner, Robert Matthews, John Sauter, Steve Brophy NewVectors LLC 3520 Green Court, Suite 250 Ann Arbor, MI 48105 USA +1 734\u00a0302\u00a04684 {van.parunak, sven.brueckner, robert.matthews, john.sauter, steve.brophy}@newvectors.net ABSTRACT Reasoning about agents that we observe in the world is challenging.\nOur available information is often limited to observations of the agent's external behavior in the past and present.\nTo understand these actions, we need to deduce the agent's internal state, which includes not only rational elements (such as intentions and plans), but also emotive ones (such as fear).\nIn addition, we often want to predict the agent's future actions, which are constrained not only by these inward characteristics, but also by the dynamics of the agent's interaction with its environment.\nBEE (Behavior Evolution and Extrapolation) uses a faster-than-real-time agent-based model of the environment to characterize agents' internal state by evolution against observed behavior, and then predict their future behavior, taking into account the dynamics of their interaction with the environment.\nCategories and Subject Descriptors I.2.6 [Artificial Intelligence]: Learning -
parameter learning.\nI.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - multiagent systems.\nGeneral Terms Algorithms, Measurement, Experimentation.\n1.\nINTRODUCTION Reasoning about agents that we observe in the world must integrate two disparate levels.\nOur observations are often limited to the agent's external behavior, which can frequently be summarized numerically as a trajectory in space-time (perhaps punctuated by actions from a fairly limited vocabulary).\nHowever, this behavior is driven by the agent's internal state, which (in the case of a human) may involve high-level psychological and cognitive concepts such as intentions and emotions.\nA central challenge in many application domains is reasoning from external observations of agent behavior to an estimate of their internal state.\nSuch reasoning is motivated by a desire to predict the agent's behavior.\nThis problem has traditionally been addressed under the rubric of plan recognition or plan inference.\nWork to date focuses almost entirely on recognizing the rational state (as opposed to the emotional state) of a single agent (as opposed to an interacting community), and frequently takes advantage of explicit communications between agents (as in managing conversational protocols).\nMany realistic problems deviate from these conditions.\nIncreasing the number of agents leads to a combinatorial explosion that can swamp conventional analysis.\nEnvironmental dynamics can frustrate agent intentions.\nThe agents often are trying to hide their intentions (and even their presence), rather than intentionally sharing information.\nAn agent's emotional state may be at least as important as its rational state in determining its behavior.\nDomains that exhibit these constraints can often be characterized as adversarial, and include military combat, competitive business tactics, and multi-player computer games.\nBEE (Behavioral Evolution and Extrapolation) is a novel approach to recognizing
the rational and emotional state of multiple interacting agents based solely on their behavior, without recourse to intentional communications from them.\nIt is inspired by techniques used to predict the behavior of nonlinear dynamical systems, in which a representation of the system is continually fit to its recent past behavior.\nFor nonlinear dynamical systems, the representation is a closed-form mathematical equation.\nIn BEE, it is a set of parameters governing the behavior of software agents representing the individuals being analyzed.\nThe current version of BEE characterizes and predicts the behavior of agents representing soldiers engaged in urban combat [8].\nSection 2 reviews relevant previous work.\nSection 3 describes the architecture of BEE.\nSection 4 reports results from experiments with the system.\nSection 5 concludes.\nFurther details that cannot be included here for the sake of space are available in an on-line technical report [16].\n2.\nPREVIOUS WORK BEE bears comparison with previous research in AI (plan recognition), Hidden Markov Models, and nonlinear dynamical systems (trajectory prediction).\n2.1 Plan Recognition in AI Agent theory commonly describes an agent's cognitive state in terms of its beliefs, desires, and intentions (the so-called BDI model [5, 20]).\nAn agent's beliefs are propositions about the state of the world that it considers true, based on its perceptions.\nIts desires are propositions about the world that it would like to be true.\nDesires are not necessarily consistent with one another: an agent might desire both to be rich and not to work at the same time.\nAn agent's intentions, or goals, are a subset of its desires that it has selected, based on its beliefs, to guide its future actions.\nUnlike desires, goals must be consistent with one another (or at least believed to be consistent by the agent).\nAn agent's goals guide its actions.\nThus one ought to be able to learn something about an agent's goals by
observing its past actions, and knowledge of the agent's goals in turn enables conclusions about what the agent may do in the future.\nThis process of reasoning from an agent's actions to its goals is known as plan recognition or plan inference.\nThis body of work (surveyed recently in [3]) is rich and varied.\nIt covers both single-agent and multi-agent (e.g., robot soccer team) plans, intentional vs. non-intentional actions, speech vs. non-speech behavior, adversarial vs. cooperative intent, complete vs. incomplete world knowledge, and correct vs. faulty plans, among other dimensions.\nPlan recognition is seldom pursued for its own sake.\nIt usually supports a higher-level function.\nFor example, in human-computer interfaces, recognizing a user's plan can enable the system to provide more appropriate information and options for user action.\nIn a tutoring system, inferring the student's plan is a first step to identifying buggy plans and providing appropriate remediation.\nIn many cases, the higher-level function is predicting likely future actions by the entity whose plan is being inferred.\nWe focus on plan recognition in support of prediction.\nAn agent's plan is a necessary input to a prediction of its future behavior, but hardly a sufficient one.\nAt least two other influences, one internal and one external, need to be taken into account.\nThe external influence is the dynamics of the environment, which may include other agents.\nThe dynamics of the real world impose significant constraints.\nThe environment may interfere with the desires of the agent [4, 10].\nMost interactions among agents, and between agents and the world, are nonlinear.\nWhen iterated, these can generate chaos (extreme sensitivity to initial conditions).\nA rational analysis of an agent's goals may enable us to predict what it will attempt, but any nontrivial plan with several steps will depend sensitively at each step on the reaction of the environment, and our prediction must take
this reaction into account as well.\nActual simulation of futures is one way (the only one we know now) to deal with the impact of environmental dynamics on an agent's actions.\nHuman agents are also subject to an internal influence.\nThe agent's emotional state can modulate its decision process and its focus of attention (and thus its perception of the environment).\nIn extreme cases, emotion can lead an agent to choose actions that from the standpoint of a logical analysis may appear irrational.\nCurrent work on plan recognition for prediction focuses on the rational plan, and does not take into account either external environmental influences or internal emotional biases.\nBEE integrates all three elements into its predictions.\n2.2 Hidden Markov Models BEE is superficially similar to Hidden Markov Models (HMMs [19]).\nIn both cases, the agent has hidden internal state (the agent's personality) and observable state (its outward behavior), and we wish to learn the hidden state from the observable state (by evolution in BEE, by the Baum-Welch algorithm [1] in HMMs) and then predict the agent's future behavior (by extrapolation via ghosts in BEE, by the forward algorithm in HMMs).\nBEE offers two important benefits over HMMs.
First, a single agent's hidden variables do not satisfy the Markov property. That is, their values at t + 1 depend not only on their values at t, but also on the hidden variables of other agents. One could avoid this limitation by constructing a single HMM over the joint state space of all of the agents, but this approach is combinatorially prohibitive. BEE combines the efficiency of independently modeling individual agents with the reality of taking into account interactions among them.

Second, Markov models assume that transition probabilities are stationary. This assumption is unrealistic in dynamic situations. BEE's evolutionary process continually updates the agents' personalities based on actual observations, and thus automatically accounts for changes in the agents' personalities.

2.3 Real-Time Nonlinear Systems Fitting
Many systems of interest can be described by a vector of real numbers that changes as a function of time. The dimensions of the vector define the system's state space. One typically analyzes such systems as vector differential equations, e.g., dx/dt = f(x). When f is nonlinear, the system can be formally chaotic, and starting points arbitrarily close to one another can lead to trajectories that diverge exponentially rapidly. Long-range prediction of such a system is impossible. However, it is often useful to anticipate the system's behavior a short distance into the future. A common technique is to fit a convenient functional form for f to the system's trajectory in the recent past, then extrapolate this fit into the future (Figure 1, [7]). This process is repeated constantly, providing the user with a limited look-ahead. This approach is robust and widely applied, but requires systems that can efficiently be described with mathematical equations. BEE extends this approach to agent behaviors, which it fits to observed behavior using a genetic algorithm.

3. ARCHITECTURE
BEE predicts the future by observing the emergent
behavior of agents representing the entities of interest in a fine-grained agent simulation. Key elements of the BEE architecture include the model of an individual agent, the pheromone infrastructure through which agents interact, the information sources that guide them, and the overall evolutionary cycle that they execute.

3.1 Agent Model
The agents in BEE are inspired by two bodies of work: our previous work on fine-grained agents that coordinate their actions through digital pheromones in a shared environment [2, 13, 17, 18, 21], and the success of previous agent-based combat modeling. Digital pheromones are scalar variables that agents deposit and sense at their current location in the environment. Agents respond to local concentrations of these variables tropistically, climbing or descending local gradients. Their movements change the deposit patterns. This feedback loop, together with processes of evaporation and propagation in the environment, supports complex patterns of interaction and coordination among the agents [15]. Table 1 shows BEE's current pheromone flavors. For example, a living member of the adversary emits a RedAlive pheromone, while roads emit a Mobility pheromone.

Figure 1: Tracking a nonlinear dynamical system. a = system state space; b = system trajectory over time; c = recent measurements of system state; d = short-range prediction.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Our soldier agents are inspired by EINSTein and MANA. EINSTein [6] represents an agent as a set of six weights, each in [-1, 1], describing the agent's response to six kinds of information. Four of these describe the number of alive friendly, alive enemy, injured friendly, and injured enemy troops within the agent's sensor range. The other two weights relate to the agent's distance to its own flag and that of the adversary, representing objectives that it seeks to protect and attack,
respectively. A positive weight indicates attraction to the entity described by the weight, while a negative weight indicates repulsion.

MANA [9] extends the concepts in EINSTein. Friendly and enemy flags are replaced by the waypoints pursued by each side. MANA includes low, medium, and high threat enemies. In addition, it defines a set of triggers (e.g., reaching a waypoint, being shot at, making contact with the enemy, being injured) that shift the agent from one personality vector to another. A default state defines the personality vector when no trigger state is active.

The personality vectors in MANA and EINSTein reflect both rational and emotive aspects of decision-making. The notion of being attracted or repelled by friendly or adversarial forces in various states of health is an important component of what we informally think of as emotion (e.g., fear, compassion, aggression), and the use of the term personality in both EINSTein and MANA suggests that the system designers are thinking anthropomorphically, though they do not use emotion to describe the effect they are trying to achieve. The notion of waypoints to which an agent is attracted reflects goal-oriented rationality.

BEE uses an integrated rational-emotive personality model. A BEE agent's rationality is a vector of seven desires, which are values in [-1, +1]: ProtectRed (the adversary), ProtectBlue (friendly forces), ProtectGreen (civilians), ProtectKeySites, AvoidCombat, AvoidDetection, and Survive. Negative values reverse the sense suggested by the label. For example, a negative value of ProtectRed indicates a desire to harm Red, and an agent with a high positive desire to ProtectRed will be attracted to RedAlive, RedCasualty, and Mobility pheromone, and will move at maximum speed.

The emotive component of a BEE's personality is based on the Ortony-Clore-Collins (OCC) framework [11], and is described in detail elsewhere [12]. OCC defines emotions as valenced reactions to agents,
states, or events in the environment. This notion of reaction is captured in MANA's trigger states. An important advance in BEE's emotional model is the recognition that agents may differ in how sensitive they are to triggers. For example, threatening situations tend to stimulate the emotion of fear, but a given level of threat will produce more fear in a new recruit than in a seasoned veteran. Thus our model includes not only Emotions, but Dispositions. Each Emotion has a corresponding Disposition. Dispositions are relatively stable, and considered constant over the time horizon of a run of the BEE, while Emotions vary based on the agent's disposition and the stimuli to which it is exposed.

Interviews with military domain experts identified the two most crucial emotions for combat behavior as Anger (with the corresponding disposition Irritability) and Fear (whose disposition is Cowardice). Table 2 shows which pheromones trigger which emotions. For example, RedCasualty pheromone stimulates both Anger and Fear in a Red agent, but not in a Blue agent. Emotions are modeled as agent hormones (internal pheromones) that are augmented in the presence of the triggering environmental condition and evaporate over time. A non-zero emotion modifies the agent's actions. An elevated level of Anger increases movement likelihood, weapon firing likelihood, and tendency toward an exposed posture. Elevated Fear decreases these likelihoods.

Figure 2 summarizes the BEE's personality model. The left side is a straightforward BDI model (we prefer the term goal to intention). The right side is the emotive component, where an appraisal of the agent's beliefs, moderated by the disposition, leads to an emotion that in turn influences the BDI analysis.

Table 1: Pheromone flavors in BEE

  Pheromone Flavor           Description
  RedAlive, RedCasualty,
  BlueAlive, BlueCasualty,
  GreenAlive, GreenCasualty  Emitted by a living or dead entity of the appropriate group (Red = enemy, Blue = friendly, Green = neutral)
  WeaponsFire                Emitted by a firing weapon
  KeySite                    Emitted by a site of particular importance to Red
  Cover                      Emitted by locations that afford cover from fire
  Mobility                   Emitted by roads and other structures that enhance agent mobility
  RedThreat, BlueThreat      Determined by an external process (see Section 3.3)

Table 2: Interactions of pheromones and dispositions/emotions. An X marks the Irritability/Anger or Cowardice/Fear of the given perspective that the pheromone stimulates.

  Pheromone      Red Anger  Red Fear  Blue Anger  Blue Fear  Green Anger  Green Fear
  RedAlive                            X           X
  RedCasualty    X          X
  BlueAlive      X          X                                 X            X
  BlueCasualty                        X           X
  GreenCasualty                       X           X           X            X
  WeaponsFire    X          X         X           X           X            X
  KeySites       X          X

3.2 The BEE Cycle
BEE's major innovation is extending the nonlinear systems technique of Section 2.3 to agent behaviors. This section describes this process at a high level, then details the multi-page pheromone infrastructure that implements it.

3.2.1 Overview
Figure 3 is an overview of Behavior Evolution and Extrapolation. Each active entity in the battlespace has a persistent avatar that continuously generates a stream of ghost agents representing itself. We call the combined modeling entity consisting of avatar and ghosts a polyagent [14]. Ghosts live on a timeline indexed by τ that begins in the past and runs into the future. τ is offset with respect to the current time t. The timeline is divided into discrete pages, each representing a successive value of τ. The avatar inserts the ghosts at the insertion horizon. In our current system, the insertion horizon is at τ - t = -30, meaning that ghosts are inserted into a page representing the state of the world 30 minutes ago. At the insertion horizon, each ghost's behavioral parameters (desires and dispositions) are sampled from distributions to explore alternative personalities of the entity it represents. Each page between the
insertion horizon and τ = t (now) records the historical state of the world at the point in the past to which it corresponds. As ghosts move from page to page, they interact with this past state, based on their behavioral parameters. These interactions mean that their fitness depends not just on their own actions, but also on the behaviors of the rest of the population, which is also evolving. Because τ advances faster than real time, eventually τ = t (actual time). At this point, each ghost is evaluated based on its location compared with the actual location of its corresponding real-world entity.

The fittest ghosts have three functions.
1. The personality of each entity's fittest ghost is reported to the rest of the system as the likely personality of that entity. This information enables us to characterize individual warriors as unusually cowardly or brave.
2. The fittest ghosts breed genetically and their offspring return to the insertion horizon to continue the fitting process.
3. The fittest ghosts for each entity form the basis for a population of ghosts that run past the avatar's present into the future. Each ghost that runs into the future explores a different possible future of the battle, analogous to how some people plan ahead by mentally simulating different ways that a situation might unfold. Analysis of the behaviors of these different possible futures yields predictions.

Thus BEE has three distinct notions of time, all of which may be distinct from real-world time.
1. Domain time t is the current time in the domain being modeled. If BEE is applied to a real-world situation, this time is the same as real-world time. In our experiments, we apply BEE to a simulated battle, and domain time is the time stamp published by the simulator. During actual runs, the simulator is often paused, so domain time runs slower than real time. When we replay logs from simulation runs, we can speed them up so that domain time runs faster than real
time.
2. BEE time τ for a page records the domain time corresponding to the state of the world represented on that page, and is offset from the current domain time.
3. Shift time is incremented every time the ghosts move from one page to the next. The relation between shift time and real time depends on the processing resources available.

3.2.2 Pheromone Infrastructure
BEE must operate very rapidly, to keep pace with the ongoing battle. Thus we use simple agents coordinated using pheromone mechanisms. We have described the basic dynamics of our pheromone infrastructure elsewhere [2]. This infrastructure runs on the nodes of a graph-structured environment (in the case of BEE, a rectangular lattice). Each node maintains a scalar value for each flavor of pheromone, and provides three functions:
- It aggregates deposits from individual agents, fusing information across multiple agents and through time.
- It evaporates pheromones over time, providing an innovative alternative to traditional truth maintenance. Traditionally, knowledge bases remember everything they are told unless they have a reason to forget. Pheromone-based systems immediately begin to forget everything they learn, unless it is continually reinforced. Thus inconsistencies automatically remove themselves within a known period.
- It diffuses pheromones to nearby places, disseminating information for access by nearby agents.

The distribution of each pheromone flavor over the environment forms a field that represents some aspect of the state of the world at an instant in time. Each page of the timeline is a complete pheromone field for the world at the BEE time represented by that page. The behavior of the pheromones on each page depends on whether the page represents the past or the future.

Figure 2: BEE's Integrated Rational and Emotive Personality Model.

Figure 3: Behavioral Emulation and Extrapolation. Each avatar generates a stream of ghosts that sample the personality space of its entity. They evolve against the entity's recent observed behavior, and the fittest ghosts run into the future to generate predictions.

In pages representing the future (τ > t), the usual pheromone mechanisms apply. Ghosts deposit pheromone each time they move to a new page, and pheromones evaporate and propagate from one page to the next. In pages representing the past (τ ≤ t), we have an observed state of the real world. This has two consequences for pheromone management. First, we can generate the pheromone fields directly from the observed locations of individual entities, so there is no need for the ghosts to make deposits. Second, we can adjust the pheromone intensities based on the changed locations of entities from page to page, so we do not need to evaporate or propagate the pheromones. Both of these simplifications reflect the fact that in our current system, we have complete knowledge of the past. When we introduce noise and uncertainty, we will probably need to introduce dynamic pheromones in the past as well as the future.

Execution of the pheromone infrastructure proceeds on two time scales, running in separate threads. The first thread updates the book of pages each time the domain time advances past the next page boundary. At each step:
- The former now + 1 page is replaced with a new current page, whose pheromones correspond to the locations and strengths of observed units;
- An empty page is added at the prediction horizon;
- The oldest
page is discarded, since it has passed the insertion horizon.

The second thread moves the ghosts from one page to the next, as fast as the processor allows. At each step:
- Ghosts reaching the τ = t page are evaluated for fitness and removed or evolved;
- New ghosts from the avatars and from the evolutionary process are inserted at the insertion horizon;
- A population of ghosts based on the fittest ghosts is inserted at τ = t to run into the future;
- Ghosts that have moved beyond the prediction horizon are removed;
- All ghosts plan their next actions based on the pheromone field in the pages they currently occupy;
- The system computes the next state of each page, including executing the actions elected by the ghosts, and (in future pages) evaporating pheromones and recording new deposits from the recently arrived ghosts.

Ghost movement based on pheromone gradients is a simple process, so this system can support realistic agent populations without excessive computer load. In our current system, each avatar generates eight ghosts per shift. Since there are about 50 entities in the battlespace (about 20 units each of Red and Blue and about 5 of Green), we must support about 400 ghosts per page, or about 24,000 over the entire book.

How fast a processor do we need? Let p be the real-time duration of a page in seconds. If each page represents 60 seconds of domain time, and we are replaying a simulation at 2x domain time, p = 30. Let n be the number of pages between the insertion horizon and τ = t. In our current system, n = 30. Then a shift rate of n/p shifts per second will permit ghosts to run from the insertion horizon to the current time at least once before a new page is generated. Empirically, this level is a lower bound for reasonable performance, and easily achievable on stock WinTel platforms.

3.3 Information Sources
The flexibility of the BEE's pheromone infrastructure permits the integration of numerous information sources as input to our characterizations
of entity personalities and predictions of their future behavior. Our current system draws on three sources of information, but others can readily be added.

Real-world observations. Observations from the real world are encoded into the pheromone field each increment of BEE time, as a new current page is generated. Table 1 identifies the entities that generate each flavor of pheromone.

Statistical estimates of threat regions. Statistical techniques¹ estimate the level of threat to each force (Red or Blue), based on the topology of the battlefield and the known disposition of forces. For example, a broad open area with no cover is threatening, especially if the opposite force occupies its margins. The results of this process are posted to the pheromone pages as RedThreat pheromone (representing a threat to Red) and BlueThreat pheromone (representing a threat to Blue).

AI-based plan recognition. While plan recognition is not sufficient for effective prediction, it is a valuable input. We dynamically configure a Bayes net based on heuristics to identify the likely goals that each entity may hold.² The destinations of these goals function as virtual pheromones. Ghosts include their distance to such points in their action decisions, achieving the result of gradient following without the computational expense of maintaining a pheromone field.

4. EXPERIMENTAL RESULTS
We have tested BEE in a series of experiments in which human wargamers make decisions that are played out in a battlefield simulator. The commander for each side (Red and Blue) has at his disposal a team of pucksters, human operators who set waypoints for individual units in the simulator. Each puckster is responsible for four to six units. The simulator moves the units, determines firing actions, and resolves the outcome of conflicts. It is important to emphasize that this simulator is simply a surrogate for a sensor feed from a real-world battlefield.

4.1 Fitting Dispositions
To test our
ability to fit personalities based on behavior, one Red puckster responsible for four units is designated the emotional puckster. He selects two of his units to be cowardly (chickens) and two to be irritable (Rambos). He does not disclose this assignment during the run. He moves each unit according to the commander's orders until the unit encounters circumstances that would trigger the emotion associated with the unit's disposition. Then he manipulates chickens as though they are fearful (avoiding combat and moving away from Blue), and moves Rambos into combat as quickly as possible. Our software receives position reports on all units every twenty seconds.

¹ This process, known as SAD (Statistical Anomaly Detection), is developed by our colleagues Rafael Alonso, Hua Li, and John Asmuth at Sarnoff Corporation. Alonso and Li are now at SET Corporation.
² This process, known as KIP (Knowledge-based Intention Projection), is developed by our colleagues Paul Nielsen, Jacob Crossman, and Rich Frederiksen at Soar Technology.

The difference between the two disposition values (Irritability - Cowardice) of the fittest ghosts proves a better indicator of the emotional state of the corresponding entity than either value by itself. Figure 4 shows the Delta Disposition for each of the eight fittest ghosts at each time step, plotted against the time in seconds, for a unit played as a chicken. The values clearly trend negative. Figure 5 shows a similar plot for a Rambo. Rambos tend to die early, and often do not give their ghosts enough time to evolve a clear picture of their personality, but in this case the positive Delta Disposition is evident before the unit's demise.

To characterize a unit's personality, we maintain an 800-second exponentially weighted moving average of the Delta Disposition, and declare the unit to be a chicken or Rambo if this value passes a
negative or positive threshold, respectively. Currently, this threshold is set at 0.25. We are exploring additional filters. For example, a rapid rate of increase enhances the likelihood of calling a Rambo; units that seek to avoid detection and avoid combat are more readily called chicken.

Table 3 shows the detection results for emotional units in a recent series of experiments. We never called a Rambo a chicken. In the one case where we called a chicken a Rambo, logs show that in fact the unit was being played aggressively, rushing toward oncoming Blue forces. The brave die young, so we almost never detect units played intentionally as Rambos.

Table 3: Experimental results on fitting disposition (16 runs)

            Called Correctly  Called Incorrectly  Not Called
  Chickens  68%               5%                  27%
  Rambos    5%                0%                  95%

Figure 6 shows a comparison, on a separate series of experiments, of our emotion detector with humans. Two cowards were played in each of eleven games. Human observers across these games were able to detect a total of 13 of the cowards. BEE was able to detect cowards (= chickens) much earlier than the humans, while missing only one chicken that the humans detected. In addition to these results on units intentionally played as emotional, BEE sometimes detects other units as cowardly or brave. Analysis of these units shows that these characterizations were appropriate: units that flee in the face of enemy forces or weapons fire are detected as chickens, while those that stand their ground or rush the adversary are denominated as Rambos.

Figure 4: Delta Disposition for a Chicken's ghosts.
Figure 5: Delta Disposition for a Rambo.
Figure 6: BEE vs. Human. Cowards found (out of 22) vs. percent of run time (wall clock).

4.2 Integrated Predictions
Each ghost that runs into the future generates a possible path that its unit might follow. The paths in the resulting set over all ghosts vary in how likely they are, the risk they pose to their own or the opposite side, and so forth. In the experiments reported here, we select the future whose ghost receives the most guidance from pheromones in the environment at each step along the way. In this sense, it is the most likely future. In these experiments, we receive position reports only on units that have actually come within visual range of Blue units, or on average fewer than half of the live Red units at any time.

We evaluate predictions spatially, comparing an entity's actual location with the location predicted for it 15 minutes earlier. We compare BEE with two baselines: a game-theoretic predictor based on linguistic geometry [22], and estimates by military officers. In both cases, we use a CEP (circular error probable) measure of accuracy, the radius of the circle that one would have to draw around each prediction to capture 50% of the actual unit locations. The higher the CEP measure, the worse the accuracy.

Figure 7 compares our accuracy with that of the game-theoretic predictor. Each point gives the median CEP measure over all predictions in a single run. Points above the diagonal favor BEE, while points below the line favor the game-theoretic predictor. In all but two missions, BEE is more accurate. In one mission, the two systems are comparable, while in one, the game-theoretic predictor is more accurate.

In 18 RAID runs, BEE generated 1405 predictions at each of two time horizons (0 and 15 minutes), while in 18 non-RAID runs, staff generated 102 predictions. Figure 8 shows a box-and-whisker plot of the CEP measures, in meters, of these predictions. The box covers the inter-quartile range with a line at the median, whiskers extend to the most distant data points within 1.5 interquartile ranges from the edge of the box, squares show outliers within 3 interquartile ranges, and stars show more distant outliers. BEE's median score even at 15 minutes is lower than either Staff median. The Wilcoxon test shows that the difference between the H15 scores is significant at the 99.76% level, while that between the H0 scores is significant at more than 99.999%.

5. CONCLUSIONS
In many domains, it is important to reason from an entity's observed behavior to an estimate of its internal state, and then to extrapolate that estimate to predict the entity's future behavior. BEE performs this task using a faster-than-real-time simulation of swarming agents, coordinated through digital pheromones. This simulation integrates knowledge of threat regions, a cognitive analysis of the agent's beliefs, desires, and intentions, a model of the agent's emotional disposition and state, and the dynamics of interactions with the environment. By evolving agents in this rich environment, we can fit their internal state to their observed behavior. In realistic wargames, the system successfully detects deliberately played emotions and makes reasonable predictions about the entities' future behaviors.

BEE can only model internal state variables that impact the agent's external behavior. It cannot fit variables that the agent does not manifest externally, since the basis for the evolutionary cycle is a comparison of the outward behavior of
the simulated agent with that of the real entity. This limitation is serious if our purpose is to understand the entity's internal state for its own sake. If our purpose in fitting agents is to predict their subsequent behavior, the limitation is much less serious. State variables that do not impact behavior, while invisible to a behavior-based analysis, are irrelevant to a behavioral prediction.

The BEE architecture lends itself to extension in several promising directions. The various inputs being integrated by the BEE are only an example of the kinds of information that can be handled. The basic principle of using a dynamical simulation to integrate a wide range of influences can be extended to other inputs as well, requiring much less additional engineering than other, more traditional ways of reasoning about how different knowledge sources come together in impacting an agent's behavior. With such a change in inputs, BEE could be applied more widely than its current domain of adversarial reasoning in urban warfare. Potential applications of interest include computer games, business strategy, and sensor fusion.

Our initial limited repertoire of emotions is a small subset of those that have been distinguished by psychologists, and that might be useful for understanding and projecting behavior. We expect to extend the set of emotions and supporting dispositions that BEE can detect.

The mapping between an agent's psychological (cognitive and emotional) state and its outward behavior is not one-to-one. Several different internal states might be consistent with a given observed behavior under one set of environmental conditions, but might yield distinct behaviors under other conditions. If the environment in the recent past is one that confounds such distinct internal states, we will be unable to distinguish them. As long as the environment stays in this state, our predictions will be accurate, whichever of the internal states we assign to the
agent. If the environment then shifts to one under which the different internal states lead to different behaviors, using the previously chosen internal state will yield inaccurate predictions. One way to address these concerns is to probe the real world, perturbing it in ways that would stimulate distinct behaviors from entities whose psychological state is otherwise indistinguishable. Such probing is an important intelligence technique. BEE's faster-than-real-time simulation may enable us to identify appropriate probing actions, greatly increasing the effectiveness of intelligence efforts.

6. ACKNOWLEDGEMENTS
This material is based in part upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHC040153. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA or the Department of Interior-National Business Center (DOI-NBC). Distribution Statement A (Approved for Public Release, Distribution Unlimited).

Figure 7: Median errors for BEE vs. Linguistic Geometry on each run. Squares are Defend missions, triangles are Move missions, diamonds are Attack missions.

Figure 8: Box-and-whisker plots of RAID and Staff predictions at 0- and 15-minute horizons. Y-axis is CEP radius in meters; lower values indicate greater accuracy.

7. REFERENCES
[1] Baum, L. E., Petrie, T., Soules, G., and Weiss, N. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Statist., 41(1):164-171, 1970.
[2] Brueckner, S.
Return from the Ant: Synthetic Ecosystems for Manufacturing Control. Thesis at Humboldt University Berlin, Department of Computer Science, 2000.
[3] Carberry, S. Techniques for Plan Recognition. User Modeling and User-Adapted Interaction, 11(1-2):31-48, 2001.
[4] Ferber, J. and Müller, J.-P. Influences and Reactions: a Model of Situated Multiagent Systems. In Proceedings of the Second International Conference on Multi-Agent Systems (ICMAS-96), AAAI, 1996, 72-79.
[5] Haddadi, A. and Sundermeyer, K. Belief-Desire-Intention Agent Architectures. In G. M. P. O'Hare and N. R. Jennings, editors, Foundations of Distributed Artificial Intelligence, John Wiley, New York, NY, 1996, 169-185.
[6] Ilachinski, A. Artificial War: Multiagent-Based Simulation of Combat. Singapore, World Scientific, 2004.
[7] Kantz, H. and Schreiber, T. Nonlinear Time Series Analysis. Cambridge, UK, Cambridge University Press, 1997.
[8] Kott, A. Real-Time Adversarial Intelligence & Decision Making (RAID). DARPA, Arlington, VA, 2004. Web site.
[9] Lauren, M. K. and Stephen, R. T. Map-Aware Non-uniform Automata (MANA): A New Zealand Approach to Scenario Modelling. Journal of Battlefield Technology, 5(1):27ff, March 2002.
[10] Michel, F. Formalisme, méthodologie et outils pour la modélisation et la simulation de systèmes multi-agents. Thesis at Université des Sciences et Techniques du Languedoc, Department of Informatics, 2004.
[11] Ortony, A., Clore, G. L., and Collins, A. The Cognitive Structure of Emotions. Cambridge, UK, Cambridge University Press, 1988.
[12] Parunak, H. V. D., Bisson, R., Brueckner, S., Matthews, R., and Sauter, J. Representing Dispositions and Emotions in Simulated Combat. In Proceedings of the Workshop on Defence Applications of Multi-Agent Systems (DAMAS05, at AAMAS05), Springer, 2005, 51-65.
[13] Parunak, H. V. D. and Brueckner, S.
Ant-Like Missionaries and Cannibals: Synthetic Pheromones for Distributed Motion Control.\nIn Proceedings of Fourth International Conference on Autonomous Agents (Agents 2000), 2000, 467-474.\n[14] Parunak, H. V. D. and Brueckner, S. Modeling Uncertain Domains with Polyagents.\nIn Proceedings of International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'06), ACM, 2006.\n[15] Parunak, H. V. D., Brueckner, S., Fleischer, M., and Odell, J.\nA Design Taxonomy of Multi-Agent Interactions.\nIn Proceedings of Agent-Oriented Software Engineering IV, Springer, 2003, 123-137.\n[16] Parunak, H. V. D., Brueckner, S., Matthews, R., Sauter, J., and Brophy, S. Characterizing and Predicting Agents via Multi-Agent Evolution.\nAltarum Institute, Ann Arbor, MI, 2005.\nhttp:\/\/www.newvectors.net\/staff\/parunakv\/BEE.pdf.\n[17] Parunak, H. V. D., Brueckner, S., and Sauter, J. Digital Pheromones for Coordination of Unmanned Vehicles.\nIn Proceedings of Workshop on Environments for Multi-Agent Systems (E4MAS 2004), Springer, 2004, 246-263.\n[18] Parunak, H. V. D., Brueckner, S. A., and Sauter, J. Digital Pheromone Mechanisms for Coordination of Unmanned Vehicles.\nIn Proceedings of First International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), ACM, 2002, 449-450.\n[19] Rabiner, L. R.\nA Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition.\nProceedings of the IEEE, 77, 2: 1989, 257-286.\n[20] Rao, A. S. and Georgeff, M. P. Modeling Rational Agents within a BDI Architecture.\nIn Proceedings of International Conference on Principles of Knowledge Representation and Reasoning (KR-91), Morgan Kaufman, 1991, 473-484.\n[21] Sauter, J. A., Matthews, R., Parunak, H. V. D., and Brueckner, S. Evolving Adaptive Pheromone Path Planning Mechanisms.\nIn Proceedings of Autonomous Agents and MultiAgent Systems (AAMAS02), ACM, 2002, 434-440.\n[22] Stilman, B. 
Linguistic Geometry: From Search to Construction.\nBoston, Kluwer, 2000.","lvl-3":"Real-Time Agent Characterization and Prediction\nABSTRACT\nReasoning about agents that we observe in the world is challenging.\nOur available information is often limited to observations of the agent's external behavior in the past and present.\nTo understand these actions, we need to deduce the agent's internal state, which includes not only rational elements (such as intentions and plans), but also emotive ones (such as fear).\nIn addition, we often want to predict the agent's future actions, which are constrained not only by these inward characteristics, but also by the dynamics of the agent's interaction with its environment.\nBEE (Behavior Evolution and Extrapolation) uses a faster-than-real-time agent-based model of the environment to characterize agents' internal state by evolution against observed behavior, and then predict their future behavior, taking into account the dynamics of their interaction with the environment.\n1.\nINTRODUCTION\nReasoning about agents that we observe in the world must integrate two disparate levels.\nOur observations are often limited to the agent's external behavior, which can frequently be summarized numerically as a trajectory in space-time (perhaps punctuated by actions from a fairly limited vocabulary).\nHowever, this behavior is driven by the agent's internal state, which (in the case of a human) may involve high-level psychological and cognitive concepts such as intentions and emotions.\nA central challenge in many application domains is reasoning from external observations of agent behavior to an estimate of their internal state.\nSuch reasoning is motivated by a desire to predict the agent's behavior.\nThis problem has traditionally been addressed under the rubric of \"plan recognition\" or \"plan inference.\"\nWork to date focuses almost entirely
on recognizing the rational state (as opposed to the emotional state) of a single agent (as opposed to an interacting community), and frequently takes advantage of explicit communications between agents (as in managing conversational protocols).\nMany realistic problems deviate from these conditions.\n\u2022 Increasing the number of agents leads to a combinatorial explosion that can swamp conventional analysis.\n\u2022 Environmental dynamics can frustrate agent intentions.\n\u2022 The agents often are trying to hide their intentions (and even their presence), rather than intentionally sharing information.\n\u2022 An agent's emotional state may be at least as important as its rational state in determining its behavior.\nDomains that exhibit these constraints can often be characterized as adversarial, and include military combat, competitive business tactics, and multi-player computer games.\nBEE (Behavioral Evolution and Extrapolation) is a novel approach to recognizing the rational and emotional state of multiple interacting agents based solely on their behavior, without recourse to intentional communications from them.\nIt is inspired by techniques used to predict the behavior of nonlinear dynamical systems, in which a representation of the system is continually fit to its recent past behavior.\nFor nonlinear dynamical systems, the representation is a closed-form mathematical equation.\nIn BEE, it is a set of parameters governing the behavior of software agents representing the individuals being analyzed.\nThe current version of BEE characterizes and predicts the behavior of agents representing soldiers engaged in urban combat [8].\nSection 2 reviews relevant previous work.\nSection 3 describes the architecture of BEE.\nSection 4 reports results from experiments with the system.\nSection 5 concludes.\nFurther details that cannot be included here for the sake of space are available in an on-line technical report [16].\n2.\nPREVIOUS WORK\nBEE bears comparison with 
previous research in AI (plan recognition), Hidden Markov Models, and nonlinear dynamics systems (trajectory prediction).\n2.1 Plan Recognition in AI\nAgent theory commonly describes an agent's cognitive state in terms of its beliefs, desires, and intentions (the so-called \"BDI\" model [5, 20]).\nAn agent's beliefs are propositions about the state of the world that it considers true, based on its perceptions.\nIts\ndesires are propositions about the world that it would like to be true.\nDesires are not necessarily consistent with one another: an agent might desire both to be rich and not to work at the same time.\nAn agent's intentions, or goals, are a subset of its desires that it has selected, based on its beliefs, to guide its future actions.\nUnlike desires, goals must be consistent with one another (or at least believed to be consistent by the agent).\nAn agent's goals guide its actions.\nThus one ought to be able to learn something about an agent's goals by observing its past actions, and knowledge of the agent's goals in turn enables conclusions about what the agent may do in the future.\nThis process of reasoning from an agent's actions to its goals is known as \"plan recognition\" or \"plan inference.\"\nThis body of work (surveyed recently at [3]) is rich and varied.\nIt covers both single-agent and multi-agent (e.g., robot soccer team) plans, intentional vs. non-intentional actions, speech vs. non-speech behavior, adversarial vs. cooperative intent, complete vs. incomplete world knowledge, and correct vs. 
faulty plans, among other dimensions.\nPlan recognition is seldom pursued for its own sake.\nIt usually supports a higher-level function.\nFor example, in human-computer interfaces, recognizing a user's plan can enable the system to provide more appropriate information and options for user action.\nIn a tutoring system, inferring the student's plan is a first step to identifying buggy plans and providing appropriate remediation.\nIn many cases, the higher-level function is predicting likely future actions by the entity whose plan is being inferred.\nWe focus on plan recognition in support of prediction.\nAn agent's plan is a necessary input to a prediction of its future behavior, but hardly a sufficient one.\nAt least two other influences, one internal and one external, need to be taken into account.\nThe external influence is the dynamics of the environment, which may include other agents.\nThe dynamics of the real world impose significant constraints.\n\u2022 The environment may interfere with the desires of the agent [4, 10].\n\u2022 Most interactions among agents, and between agents and the world, are nonlinear.\nWhen iterated, these can generate chaos (extreme sensitivity to initial conditions).\nA rational analysis of an agent's goals may enable us to predict what it will attempt, but any nontrivial plan with several steps will depend sensitively at each step on the reaction of the environment, and our prediction must take this reaction into account as well.\nActual simulation of futures is one way (the only one we know now) to deal with the impact of environmental dynamics on an agent's actions.\nHuman agents are also subject to an internal influence.\nThe agent's emotional state can modulate its decision process and its focus of attention (and thus its perception of the environment).\nIn extreme cases, emotion can lead an agent to choose actions that, from the standpoint of a logical analysis, may appear irrational.\nCurrent work on plan recognition for
prediction focuses on the rational plan, and does not take into account either external environmental influences or internal emotional biases.\nBEE integrates all three elements into its predictions.\n2.2 Hidden Markov Models\nBEE is superficially similar to Hidden Markov Models (HMM's [19]).\nIn both cases, the agent has hidden internal state (the agent's personality) and observable state (its outward behavior), and we wish to learn the hidden state from the observable state (by evolution in BEE, by the Baum-Welch algorithm [1] in HMM's) and then predict the agent's future behavior (by extrapolation via ghosts in BEE, by the forward algorithm in HMM's).\nBEE offers two important benefits over HMM's.\nFirst, a single agent's hidden variables do not satisfy the Markov property.\nThat is, their values at t + 1 depend not only on their values at t, but also on the hidden variables of other agents.\nOne could avoid this limitation by constructing a single HMM over the joint state space of all of the agents, but this approach is combinatorially prohibitive.\nBEE combines the efficiency of independently modeling individual agents with the reality of taking into account interactions among them.\nSecond, Markov models assume that transition probabilities are stationary.\nThis assumption is unrealistic in dynamic situations.\nBEE's evolutionary process continually updates the agents' personalities based on actual observations, and thus automatically accounts for changes in the agents' personalities.\n2.3 Real-Time Nonlinear Systems Fitting\nMany systems of interest can be described by a vector of real numbers that changes as a function of time.\nThe dimensions of the vector define the system's state space.\nOne typically analyzes such systems as vector differential equations, e.g., dx\/dt = f(x).\nWhen f is nonlinear, the system can be formally chaotic, and starting points arbitrarily close to one another can lead to trajectories that diverge exponentially rapidly.\nLong-range prediction
of such a system is impossible.\nHowever, it is often useful to anticipate the system's behavior a short distance into the future.\nA common technique is to fit a convenient functional form for f to the system's trajectory in the recent past, then extrapolate this fit into the future (Figure 1, [7]).\nThis process is repeated constantly, providing the user with a limited look-ahead.\nThis approach is robust and widely applied, but requires systems that can efficiently be described with mathematical equations.\nBEE extends this approach to agent behaviors, which it fits to observed behavior using a genetic algorithm.\n3.\nARCHITECTURE\n3.1 Agent Model\n3.2 The BEE Cycle\n3.2.1 Overview\n3.2.2 Pheromone Infrastructure\n3.3 Information sources\n4.\nEXPERIMENTAL RESULTS\n4.1 Fitting Dispositions\n4.2 Integrated Predictions\n5.\nCONCLUSIONS\nIn many domains, it is important to reason from an entity's observed behavior to an estimate of its internal state, and then to extrapolate that estimate to predict the entity's future behavior.\nBEE performs this task using a faster-than-real-time simulation of swarming agents, coordinated through digital pheromones.\nThis simulation integrates knowledge of threat regions, a cognitive analysis of the agent's beliefs, desires, and intentions, a model of the agent's emotional disposition and state, and the dynamics of interactions with the environment.\nBy evolving agents in this rich environment, we can fit their internal state to their observed behavior.\nIn realistic wargames, the system successfully detects deliberately played emotions and makes reasonable predictions about the entities' future behaviors.\nBEE can only model internal state variables that impact the agent's external behavior.\nIt cannot fit variables that the agent does not manifest externally, since the basis for the evolutionary cycle is a comparison of the outward behavior of the simulated agent with that of the real entity.\nThis limitation is serious if our purpose is to understand the entity's internal state for its own sake.\nIf our purpose of fitting agents is to predict their subsequent behavior, the limitation is much less serious.\nState variables that do not impact behavior, while invisible to a behavior-based analysis, are irrelevant to a behavioral prediction.\nThe BEE architecture lends itself to extension in several promising directions.\n\u2022 The various inputs being integrated by the BEE are only an example of the kinds of information that can be handled.\nThe basic principle of using a dynamical simulation to integrate a wide range of influences can be extended to other inputs as well, requiring much less additional engineering than other more traditional ways of reasoning about how
different knowledge sources come together in impacting an agent's behavior.\n\u2022 With such a change in inputs, BEE could be applied more widely than its current domain of adversarial reasoning in urban warfare.\nPotential applications of interest include computer games, business strategy, and sensor fusion.\n\u2022 Our initial limited repertoire of emotions is a small subset of\nthose that have been distinguished by psychologists, and that might be useful for understanding and projecting behavior.\nWe expect to extend the set of emotions and supporting dispositions that BEE can detect.\n\u2022 The mapping between an agent's psychological (cognitive and emotional) state and its outward behavior is not one-to-one.\nSeveral different internal states might be consistent with a given observed behavior under one set of environmental conditions, but might yield distinct behaviors under other conditions.\nIf the environment in the recent past is one that confounds such distinct internal states, we will be unable to distinguish them.\nAs long as the environment stays in this state, our predictions will be accurate, whichever of the internal states we assign to the agent.\nIf the environment then shifts to one under which the different internal states lead to different behaviors, using the previously chosen internal state will yield inaccurate predictions.\nOne way to address these concerns is to probe the real world, perturbing it in ways that would stimulate distinct behaviors from entities whose psychological state is otherwise indistinguishable.\nSuch probing is an important intelligence technique.\nBEE's faster-than-real-time simulation may enable us to identify appropriate probing actions, greatly increasing the effectiveness of intelligence efforts.","lvl-4":"Real-Time Agent Characterization and Prediction\nABSTRACT\nReasoning about agents that we observe in the world is challenging.\nOur available information is often limited to observations of the agent's 
external behavior in the past and present.\nTo understand these actions, we need to deduce the agent's internal state, which includes not only rational elements (such as intentions and plans), but also emotive ones (such as fear).\nIn addition, we often want to predict the agent's future actions, which are constrained not only by these inward characteristics, but also by the dynamics of the agent's interaction with its environment.\nBEE (Behavior Evolution and Extrapolation) uses a faster-than-real-time agentbased model of the environment to characterize agents' internal state by evolution against observed behavior, and then predict their future behavior, taking into account the dynamics of their interaction with the environment.\n1.\nINTRODUCTION\nReasoning about agents that we observe in the world must integrate two disparate levels.\nOur observations are often limited to the agent's external behavior, which can frequently be summarized numerically as a trajectory in space-time (perhaps punctuated by actions from a fairly limited vocabulary).\nHowever, this behavior is driven by the agent's internal state, which (in the case of a human) may involve high-level psychological and cognitive concepts such as intentions and emotions.\nA central challenge in\nmany application domains is reasoning from external observations of agent behavior to an estimate of their internal state.\nSuch reasoning is motivated by a desire to predict the agent's behavior.\nThis problem has traditionally been addressed under the rubric of \"plan recognition\" or \"plan inference.\"\nMany realistic problems deviate from these conditions.\n\u2022 Increasing the number of agents leads to a combinatorial explosion that can swamp conventional analysis.\n\u2022 Environmental dynamics can frustrate agent intentions.\n\u2022 The agents often are trying to hide their intentions (and even their presence), rather than intentionally sharing information.\n\u2022 An agent's emotional state may be at 
least as important as its rational state in determining its behavior.\nBEE (Behavioral Evolution and Extrapolation) is a novel approach to recognizing the rational and emotional state of multiple interacting agents based solely on their behavior, without recourse to intentional communications from them.\nIt is inspired by techniques used to predict the behavior of nonlinear dynamical systems, in which a representation of the system is continually fit to its recent past behavior.\nFor nonlinear dynamical systems, the representation is a closed-form mathematical equation.\nIn BEE, it is a set of parameters governing the behavior of software agents representing the individuals being analyzed.\nThe current version of BEE characterizes and predicts the behavior of agents representing soldiers engaged in urban combat [8].\nSection 2 reviews relevant previous work.\nSection 3 describes the architecture of BEE.\nSection 4 reports results from experiments with the system.\nSection 5 concludes.\n2.\nPREVIOUS WORK\nBEE bears comparison with previous research in AI (plan recognition), Hidden Markov Models, and nonlinear dynamics systems (trajectory prediction).\n2.1 Plan Recognition in AI\nAgent theory commonly describes an agent's cognitive state in terms of its beliefs, desires, and intentions (the so-called \"BDI\" model [5, 20]).\nAn agent's beliefs are propositions about the state of the world that it considers true, based on its perceptions.\nIts\ndesires are propositions about the world that it would like to be true.\nDesires are not necessarily consistent with one another: an agent might desire both to be rich and not to work at the same time.\nAn agent's intentions, or goals, are a subset of its desires that it has selected, based on its beliefs, to guide its future actions.\nUnlike desires, goals must be consistent with one another (or at least believed to be consistent by the agent).\nAn agent's goals guide its actions.\nThus one ought to be able to learn something 
about an agent's goals by observing its past actions, and knowledge of the agent's goals in turn enables conclusions about what the agent may do in the future.\nThis process of reasoning from an agent's actions to its goals is known as \"plan recognition\" or \"plan inference.\"\nPlan recognition is seldom pursued for its own sake.\nIt usually supports a higher-level function.\nFor example, in humancomputer interfaces, recognizing a user's plan can enable the system to provide more appropriate information and options for user action.\nIn a tutoring system, inferring the student's plan is a first step to identifying buggy plans and providing appropriate remediation.\nIn many cases, the higher-level function is predicting likely future actions by the entity whose plan is being inferred.\nWe focus on plan recognition in support of prediction.\nAn agent's plan is a necessary input to a prediction of its future behavior, but hardly a sufficient one.\nAt least two other influences, one internal and one external, need to be taken into account.\nThe external influence is the dynamics of the environment, which may include other agents.\nThe dynamics of the real world impose significant constraints.\n\u2022 The environment may interfere with the desires of the agent [4, 10].\n\u2022 Most interactions among agents, and between agents and the world, are nonlinear.\nA rational analysis of an agent's goals may enable us to predict what it will attempt, but any nontrivial plan with several steps will depend sensitively at each step to the reaction of the environment, and our prediction must take this reaction into account as well.\nActual simulation of futures is one way (the only one we know now) to deal with the impact of environmental dynamics on an agent's actions.\nHuman agents are also subject to an internal influence.\nThe agent's emotional state can modulate its decision process and its focus of attention (and thus its perception of the environment).\nIn extreme cases, 
emotion can lead an agent to choose actions that from the standpoint of a logical analysis may appear irrational.\nCurrent work on plan recognition for prediction focuses on the rational plan, and does not take into account either external environmental influences or internal emotional biases.\nBEE integrates all three elements into its predictions.\n2.2 Hidden Markov Models\nBEE is superficially similar to Hidden Markov Models (HMM's [19]).\nBEE offers two important benefits over HMM's.\nFirst, a single agent's hidden variables do not satisfy the Markov property.\nThat is, their values at t + 1 depend not only on their values at t, but also on the hidden variables of other agents.\nOne could avoid this limitation by constructing a single HMM over the joint state space of all of the agents, but this approach is combinatorially prohibitive.\nBEE combines the efficiency of independently modeling individual agents with the reality of taking into account interactions among them.\nSecond, Markov models assume that transition probabilities are stationary.\nBEE's evolutionary process continually updates the agents' personalities based on actual observations, and thus automatically accounts for changes in the agents' personalities.\n2.3 Real-Time Nonlinear Systems Fitting\nMany systems of interest can be described by a vector of real numbers that changes as a function of time.\nThe dimensions of the vector define the system's state space.\nLong-range prediction of such a system is impossible.\nHowever, it is often useful to anticipate the system's behavior a short distance into the future.\nThis process is repeated constantly, providing the user with a limited look-ahead.\nThis approach is robust and widely applied, but requires systems that can efficiently be described with mathematical equations.\nBEE extends this approach to agent behaviors, which it fits to observed behavior using a genetic algorithm.\n5.\nCONCLUSIONS\nIn many domains, it is important to reason from an 
entity's observed behavior to an estimate of its internal state, and then to extrapolate that estimate to predict the entity's future behavior.\nBEE performs this task using a faster-than-real-time simulation of swarming agents, coordinated through digital pheromones.\nThis simulation integrates knowledge of threat regions, a cognitive analysis of the agent's beliefs, desires, and intentions, a model of the agent's emotional disposition and state, and the dynamics of interactions with the environment.\nBy evolving agents in this rich environment, we can fit their internal state to their observed behavior.\nIn realistic wargames, the system successfully detects deliberately played emotions and makes reasonable predictions about the entities' future behaviors.\nBEE can only model internal state variables that impact the agent's external behavior.\nIt cannot fit variables that the agent does not manifest externally, since the basis for the evolutionary cycle is a comparison of the outward behavior of the simulated agent with that of the real entity.\nThis limitation is serious if our purpose is to understand the entity's internal state for its own sake.\nIf our purpose of fitting agents is to predict their subsequent behavior, the limitation is much less serious.\nState variables that do not impact behavior, while invisible to a behavior-based analysis, are irrelevant to a behavioral prediction.\n\u2022 Our initial limited repertoire of emotions is a small subset of\nthose that have been distinguished by psychologists, and that might be useful for understanding and projecting behavior.\nWe expect to extend the set of emotions and supporting dispositions that BEE can detect.\n\u2022 The mapping between an agent's psychological (cognitive and emotional) state and its outward behavior is not one-to-one.\nSeveral different internal states might be consistent with a given observed behavior under one set of environmental conditions, but might yield distinct behaviors under 
other conditions.\nIf the environment in the recent past is one that confounds such distinct internal states, we will be unable to distinguish them.\nAs long as the environment stays in this state, our predictions will be accurate, whichever of the internal states we assign to the agent.\nIf the environment then shifts to one under which the different internal states lead to different behaviors, using the previously chosen internal state will yield inaccurate predictions.\nOne way to address these concerns is to probe the real world, perturbing it in ways that would stimulate distinct behaviors from entities whose psychological state is otherwise indistinguishable.\nSuch probing is an important intelligence technique.\nBEE's faster-than-real-time simulation may enable us to identify appropriate probing actions, greatly increasing the effectiveness of intelligence efforts.","lvl-2":"Real-Time Agent Characterization and Prediction\nABSTRACT\nReasoning about agents that we observe in the world is challenging.\nOur available information is often limited to observations of the agent's external behavior in the past and present.\nTo understand these actions, we need to deduce the agent's internal state, which includes not only rational elements (such as intentions and plans), but also emotive ones (such as fear).\nIn addition, we often want to predict the agent's future actions, which are constrained not only by these inward characteristics, but also by the dynamics of the agent's interaction with its environment.\nBEE (Behavior Evolution and Extrapolation) uses a faster-than-real-time agentbased model of the environment to characterize agents' internal state by evolution against observed behavior, and then predict their future behavior, taking into account the dynamics of their interaction with the environment.\n1.\nINTRODUCTION\nReasoning about agents that we observe in the world must integrate two disparate levels.\nOur observations are often limited to the agent's 
external behavior, which can frequently be summarized numerically as a trajectory in space-time (perhaps punctuated by actions from a fairly limited vocabulary).\nHowever, this behavior is driven by the agent's internal state, which (in the case of a human) may involve high-level psychological and cognitive concepts such as intentions and emotions.\nA central challenge in\nmany application domains is reasoning from external observations of agent behavior to an estimate of their internal state.\nSuch reasoning is motivated by a desire to predict the agent's behavior.\nThis problem has traditionally been addressed under the rubric of \"plan recognition\" or \"plan inference.\"\nWork to date focuses almost entirely on recognizing the rational state (as opposed to the emotional state) of a single agent (as opposed to an interacting community), and frequently takes advantage of explicit communications between agents (as in managing conversational protocols).\nMany realistic problems deviate from these conditions.\n\u2022 Increasing the number of agents leads to a combinatorial explosion that can swamp conventional analysis.\n\u2022 Environmental dynamics can frustrate agent intentions.\n\u2022 The agents often are trying to hide their intentions (and even their presence), rather than intentionally sharing information.\n\u2022 An agent's emotional state may be at least as important as its rational state in determining its behavior.\nDomains that exhibit these constraints can often be characterized as adversarial, and include military combat, competitive business tactics, and multi-player computer games.\nBEE (Behavioral Evolution and Extrapolation) is a novel approach to recognizing the rational and emotional state of multiple interacting agents based solely on their behavior, without recourse to intentional communications from them.\nIt is inspired by techniques used to predict the behavior of nonlinear dynamical systems, in which a representation of the system is 
continually fit to its recent past behavior.\nFor nonlinear dynamical systems, the representation is a closed-form mathematical equation.\nIn BEE, it is a set of parameters governing the behavior of software agents representing the individuals being analyzed.\nThe current version of BEE characterizes and predicts the behavior of agents representing soldiers engaged in urban combat [8].\nSection 2 reviews relevant previous work.\nSection 3 describes the architecture of BEE.\nSection 4 reports results from experiments with the system.\nSection 5 concludes.\nFurther details that cannot be included here for the sake of space are available in an on-line technical report [16].\n2.\nPREVIOUS WORK\nBEE bears comparison with previous research in AI (plan recognition), Hidden Markov Models, and nonlinear dynamics systems (trajectory prediction).\n2.1 Plan Recognition in AI\nAgent theory commonly describes an agent's cognitive state in terms of its beliefs, desires, and intentions (the so-called \"BDI\" model [5, 20]).\nAn agent's beliefs are propositions about the state of the world that it considers true, based on its perceptions.\nIts\ndesires are propositions about the world that it would like to be true.\nDesires are not necessarily consistent with one another: an agent might desire both to be rich and not to work at the same time.\nAn agent's intentions, or goals, are a subset of its desires that it has selected, based on its beliefs, to guide its future actions.\nUnlike desires, goals must be consistent with one another (or at least believed to be consistent by the agent).\nAn agent's goals guide its actions.\nThus one ought to be able to learn something about an agent's goals by observing its past actions, and knowledge of the agent's goals in turn enables conclusions about what the agent may do in the future.\nThis process of reasoning from an agent's actions to its goals is known as \"plan recognition\" or \"plan inference.\"\nThis body of work (surveyed recently at 
[3]) is rich and varied.\nIt covers both single-agent and multi-agent (e.g., robot soccer team) plans, intentional vs. non-intentional actions, speech vs. non-speech behavior, adversarial vs. cooperative intent, complete vs. incomplete world knowledge, and correct vs. faulty plans, among other dimensions.\nPlan recognition is seldom pursued for its own sake.\nIt usually supports a higher-level function.\nFor example, in human-computer interfaces, recognizing a user's plan can enable the system to provide more appropriate information and options for user action.\nIn a tutoring system, inferring the student's plan is a first step to identifying buggy plans and providing appropriate remediation.\nIn many cases, the higher-level function is predicting likely future actions by the entity whose plan is being inferred.\nWe focus on plan recognition in support of prediction.\nAn agent's plan is a necessary input to a prediction of its future behavior, but hardly a sufficient one.\nAt least two other influences, one internal and one external, need to be taken into account.\nThe external influence is the dynamics of the environment, which may include other agents.\nThe dynamics of the real world impose significant constraints.\n• The environment may interfere with the desires of the agent [4, 10].\n• Most interactions among agents, and between agents and the world, are nonlinear.\nWhen iterated, these can generate chaos (extreme sensitivity to initial conditions).\nA rational analysis of an agent's goals may enable us to predict what it will attempt, but any nontrivial plan with several steps will depend sensitively at each step on the reaction of the environment, and our prediction must take this reaction into account as well.\nActual simulation of futures is one way (the only one we know now) to deal with the impact of environmental dynamics on an agent's actions.\nHuman agents are also subject to an internal influence.\nThe agent's emotional state can modulate
its decision process and its focus of attention (and thus its perception of the environment).\nIn extreme cases, emotion can lead an agent to choose actions that from the standpoint of a logical analysis may appear irrational.\nCurrent work on plan recognition for prediction focuses on the rational plan, and does not take into account either external environmental influences or internal emotional biases.\nBEE integrates all three elements into its predictions.\n2.2 Hidden Markov Models\nBEE is superficially similar to Hidden Markov Models (HMM's [19]).\nIn both cases, the agent has hidden internal state (the agent's personality) and observable state (its outward behavior), and we wish to learn the hidden state from the observable state (by evolution in BEE, by the Baum-Welch algorithm [1] in HMM's) and then predict the agent's future behavior (by extrapolation via ghosts in BEE, by the forward algorithm in HMM's).\nBEE offers two important benefits over HMM's.\nFirst, a single agent's hidden variables do not satisfy the Markov property.\nThat is, their values at t + 1 depend not only on their values at t, but also on the hidden variables of other agents.\nOne could avoid this limitation by constructing a single HMM over the joint state space of all of the agents, but this approach is combinatorially prohibitive.\nBEE combines the efficiency of independently modeling individual agents with the reality of taking into account interactions among them.\nSecond, Markov models assume that transition probabilities are stationary.\nThis assumption is unrealistic in dynamic situations.\nBEE's evolutionary process continually updates the agents' personalities based on actual observations, and thus automatically accounts for changes in the agents' personalities.\n2.3 Real-Time Nonlinear Systems Fitting\nMany systems of interest can be described by a vector of real numbers that changes as a function of time.\nThe dimensions of the vector define the system's state space.\nOne 
typically analyzes such systems as vector differential equations, e.g., dx\/dt = f(x).\nWhen f is nonlinear, the system can be formally chaotic, and starting points arbitrarily close to one another can lead to trajectories that diverge exponentially rapidly.\nLong-range prediction of such a system is impossible.\nHowever, it is often useful to anticipate the system's behavior a short distance into the future.\nA common technique is to fit a convenient functional form for f to the system's trajectory in the recent past, then extrapolate this fit into the future (Figure 1, [7]).\nThis process is repeated constantly, providing the user with a limited look-ahead.\nFigure 1: Tracking a nonlinear dynamical system.\na = system state space; b = system trajectory over time; c = recent measurements of system state; d = short-range prediction.\nThis approach is robust and widely applied, but requires systems that can efficiently be described with mathematical equations.\nBEE extends this approach to agent behaviors, which it fits to observed behavior using a genetic algorithm.\n3.\nARCHITECTURE\nBEE predicts the future by observing the emergent behavior of agents representing the entities of interest in a fine-grained agent simulation.\nKey elements of the BEE architecture include the model of an individual agent, the pheromone infrastructure through which agents interact, the information sources that guide them, and the overall evolutionary cycle that they execute.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.1 Agent Model\nThe agents in BEE are inspired by two bodies of work: our previous work on fine-grained agents that coordinate their actions through digital pheromones in a shared environment [2, 13, 17, 18, 21], and the success of previous agent-based combat modeling.\nDigital pheromones are scalar variables that agents deposit and sense at their current location in the environment.\nAgents respond to local concentrations of these variables tropistically, climbing or descending local gradients.\nTheir movements change the deposit patterns.\nThis feedback loop, together with processes of evaporation and propagation in the environment, supports complex patterns of interaction and coordination among the agents [15].\nTable 1 shows BEE's current pheromone flavors.\nFor example, a living member of the adversary emits a RED-ALIVE pheromone, while roads emit a MOBILITY pheromone.\nOur soldier agents are inspired by EINSTein and MANA.\nEINSTein [6] represents an agent as a set of six weights, each in [-1, 1], describing the agent's response to six kinds of information.\nFour of these describe the number of alive friendly, alive enemy, injured friendly, and injured enemy troops within the agent's sensor range.\nThe other two weights relate to the agent's distance to its own flag and that of the adversary, representing objectives that it seeks to protect and attack, respectively.\nA positive weight indicates attraction to the entity described by the weight, while a negative weight indicates repulsion.\nMANA [9] extends the concepts in EINSTein.\nFriendly and enemy flags are replaced by the waypoints pursued by each side.\nMANA includes low, medium, and high threat enemies.\nIn addition, it defines a set of triggers (e.g., reaching a waypoint, being shot at, making contact with the enemy, being injured) that shift the agent from one personality vector to another.\nA default state defines the personality vector when no trigger state is active.\nThe personality vectors in MANA and EINSTein reflect both rational and emotive aspects of decision-making.\nThe notion of being attracted or repelled by friendly or adversarial forces in various states of health is an important component of what we informally think of as emotion (e.g., fear, compassion, aggression),
and the use of the term "personality" in both EINSTein and MANA suggests that the system designers are thinking anthropomorphically, though they do not use "emotion" to describe the effect they are trying to achieve.\nThe notion of waypoints to which an agent is attracted reflects goal-oriented rationality.\nBEE uses an integrated rational-emotive personality model.\nA BEE agent's rationality is a vector of seven desires, which are values in [-1, +1]: ProtectRed (the adversary), ProtectBlue (friendly forces), ProtectGreen (civilians), ProtectKeySites, AvoidCombat, AvoidDetection, and Survive.\nNegative values reverse the sense suggested by the label.\nFor example, a negative value of ProtectRed indicates a desire to harm Red, and an agent with a high positive desire to ProtectRed will be attracted to RED-ALIVE, RED-CASUALTY, and MOBILITY pheromone, and will move at maximum speed.\nTable 1.\nPheromone flavors in BEE\nRedAlive, RedCasualty, BlueAlive, BlueCasualty, GreenAlive, GreenCasualty: Emitted by a living or dead entity of the appropriate group (Red = enemy, Blue = friendly, Green = neutral)\nWeaponsFire: Emitted by a firing weapon\nKeySite: Emitted by a site of particular importance to Red\nCover: Emitted by locations that afford cover from fire\nMobility: Emitted by roads and other structures that enhance agent mobility\nRedThreat, BlueThreat: Determined by external process (see Section 3.3)\nThe emotive component of a BEE's personality is based on the Ortony-Clore-Collins (OCC) framework [11], and is described in detail elsewhere [12].\nOCC define emotions as "valenced reactions to agents, states, or events in the environment."\nThis notion of reaction is captured in MANA's trigger states.\nAn important advance in BEE's emotional model is the recognition that agents may differ in how sensitive they are to triggers.\nFor example, threatening situations tend to stimulate the emotion of fear, but a given level of threat will produce more fear
in a new recruit than in a seasoned veteran.\nThus our model includes not only Emotions, but Dispositions.\nEach Emotion has a corresponding Disposition.\nDispositions are relatively stable, and considered constant over the time horizon of a run of the BEE, while Emotions vary based on the agent's disposition and the stimuli to which it is exposed.\nInterviews with military domain experts identified the two most crucial emotions for combat behavior as Anger (with the corresponding disposition Irritability) and Fear (whose disposition is Cowardice).\nTable 2 shows which pheromones trigger which emotions.\nFor example, RED-CASUALTY pheromone stimulates both Anger and Fear in a Red agent, but not in a Blue agent.\nEmotions are modeled as agent hormones (internal pheromones) that are augmented in the presence of the triggering environmental condition and evaporate over time.\nA non-zero emotion modifies the agent's actions.\nAn elevated level of Anger increases movement likelihood, weapon firing likelihood, and tendency toward an exposed posture.\nElevated Fear decreases these likelihoods.\nFigure 2 summarizes the BEE's personality model.\nThe left side is a straightforward BDI model (we prefer the term "goal" to "intention").\nThe right side is the emotive component, where an appraisal of the agent's beliefs, moderated by the disposition, leads to an emotion that in turn influences the BDI analysis.\nTable 2: Interactions of pheromones and dispositions\/emotions\nFigure 2: BEE's Integrated Rational and Emotive Personality Model\n3.2 The BEE Cycle\nBEE's major innovation is extending the nonlinear systems technique of Section 2.3 to agent behaviors.\nThis section describes this process at a high level, then details the multi-page pheromone infrastructure that implements it.\n3.2.1 Overview\nFigure 3 is an overview of Behavior Evolution and Extrapolation.\nEach active entity in the battlespace has a persistent avatar that continuously generates a stream of ghost agents representing itself.\nWe call the combined modeling entity consisting of avatar and ghosts a polyagent [14].\nGhosts live on a timeline indexed by ti that begins in the past and runs into the future.\nti is offset with respect to the current time t.\nThe timeline is divided into discrete "pages," each representing a successive value of ti.\nThe avatar inserts the ghosts at the insertion horizon.\nIn our current system, the insertion horizon is at ti - t = -30, meaning that ghosts are inserted into a page representing the state of the world 30 minutes ago.\nAt the insertion horizon, each ghost's behavioral parameters (desires and dispositions) are sampled from distributions to explore alternative personalities of the entity it represents.\nEach page between the insertion horizon and ti = t ("now") records the historical state of the world at the point in the past to which it corresponds.\nAs ghosts move from page to page, they interact with this past state, based on their behavioral parameters.\nThese interactions mean that their fitness depends not just on their own actions, but also on the behaviors of the rest of the population, which is also evolving.\nBecause ti advances faster than real time, eventually ti = t (actual time).\nAt this point, each ghost is evaluated based on its location compared with the actual location of its corresponding real-world entity.\nThe fittest ghosts
have three functions.\n1.\nThe personality of each entity's fittest ghost is reported to the rest of the system as the likely personality of that entity.\nThis information enables us to characterize individual warriors as unusually cowardly or brave.\n2.\nThe fittest ghosts breed genetically and their offspring return to the insertion horizon to continue the fitting process.\n3.\nThe fittest ghosts for each entity form the basis for a population of ghosts that run past the avatar's present into the future.\nEach ghost that runs into the future explores a different possible future of the battle, analogous to how some people plan ahead by mentally simulating different ways that a situation might unfold.\nAnalysis of the behaviors of these different possible futures yields predictions.\nThus BEE has three distinct notions of time, all of which may be distinct from real-world time.\n1.\nDomain time t is the current time in the domain being modeled.\nIf BEE is applied to a real-world situation, this time is the same as real-world time.\nIn our experiments, we apply BEE to a simulated battle, and domain time is the time stamp published by the simulator.\nDuring actual runs, the simulator is often paused, so domain time runs slower than real time.\nWhen we replay logs from simulation runs, we can speed them up so that domain time runs faster than real time.\n2.\nBEE time ti for a page records the domain time corresponding to the state of the world represented on that page, and is offset from the current domain time.\n3.\nShift time is incremented every time the\nghosts move from one page to the next.\nThe relation between shift time and real time depends on the processing resources available.\n3.2.2 Pheromone Infrastructure\nBEE must operate very rapidly, to keep pace with the ongoing battle.\nThus we use simple agents coordinated using pheromone mechanisms.\nWe have described the basic dynamics of our pheromone infrastructure elsewhere [2].\nThis infrastructure runs on 
the nodes of a graph-structured environment (in the case of BEE, a rectangular lattice).\nEach node maintains a scalar value for each flavor of pheromone, and provides three functions:\n• It aggregates deposits from individual agents, fusing information across multiple agents and through time.\n• It evaporates pheromones over time, providing an innovative alternative to traditional truth maintenance.\nTraditionally, knowledge bases remember everything they are told unless they have a reason to forget.\nPheromone-based systems immediately begin to forget everything they learn, unless it is continually reinforced.\nThus inconsistencies automatically remove themselves within a known period.\n• It diffuses pheromones to nearby places, disseminating information for access by nearby agents.\nThe distribution of each pheromone flavor over the environment forms a field that represents some aspect of the state of the world at an instant in time.\nEach page of the timeline is a complete pheromone field for the world at the BEE time ti represented by that page.\nThe behavior of the pheromones on each page depends on whether the page represents the past or the future.\nFigure 3: Behavioral Emulation and Extrapolation.\nEach avatar generates a stream of ghosts that sample the personality space of its entity.\nThey evolve against the entity's recent observed behavior, and the fittest ghosts run into the future to generate predictions.\nIn pages representing the future (ti > t), the usual pheromone mechanisms apply.\nGhosts deposit pheromone each time they move to a new page, and pheromones evaporate and propagate from one page to the next.\nIn pages representing the past (ti
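The three node functions above (aggregation, evaporation, diffusion) can be sketched in a few lines. This is our toy illustration, not BEE's actual code: the grid size, the flavor names, and the EVAPORATION and DIFFUSION rate constants are assumptions made for the example.

```python
# Toy sketch of a pheromone lattice node (illustrative constants, not BEE's).
EVAPORATION = 0.1   # fraction of pheromone lost per step
DIFFUSION = 0.05    # fraction propagated to each of the 4 neighbors

class PheromoneGrid:
    def __init__(self, width, height, flavors):
        self.width, self.height = width, height
        # one scalar field per pheromone flavor
        self.fields = {f: [[0.0] * width for _ in range(height)] for f in flavors}

    def deposit(self, flavor, x, y, amount):
        # aggregation: deposits from many agents simply sum at the node
        self.fields[flavor][y][x] += amount

    def step(self):
        # evaporation + diffusion, applied synchronously to every node
        for flavor, field in self.fields.items():
            new = [[0.0] * self.width for _ in range(self.height)]
            for y in range(self.height):
                for x in range(self.width):
                    level = field[y][x] * (1.0 - EVAPORATION)
                    outflow = level * DIFFUSION
                    neighbors = [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
                    inside = [(nx, ny) for nx, ny in neighbors
                              if 0 <= nx < self.width and 0 <= ny < self.height]
                    new[y][x] += level - outflow * len(inside)
                    for nx, ny in inside:
                        new[ny][nx] += outflow
            self.fields[flavor] = new

grid = PheromoneGrid(5, 5, ["RedAlive", "Mobility"])
grid.deposit("RedAlive", 2, 2, 1.0)
grid.step()
# after one step the deposit has decayed and spread to the 4 neighbors
```

Note how a deposit that is not reinforced decays toward zero, which is exactly the "forgetting" behavior the text contrasts with traditional truth maintenance.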

p).\nWe distinguish between divisible and indivisible orders.\nAgents submitting divisible orders will accept any quantity αq where 0 < α < 1.\nAgents submitting indivisible orders will accept only exactly q units, or none at all.\nWe believe that, given the nature of what is being traded (state-contingent dollars), most agents will be content to trade using divisible orders.\nEvery order o can be translated into a payoff vector Υ across all states ω ∈ Ω.\nThe payoff Υ⟨ω⟩ in state ω is q · 1_{ω∈ψ} · (1_{ω∈φ} − p), where 1_{ω∈E} equals 1 iff ω ∈ E and zero otherwise.\nRecall that the 2^n states correspond to the 2^n possible combinations of event outcomes.\nWe index multiple orders with subscripts (e.g., oi and Υi).\nLet the set of all orders be O and the set of all corresponding payoff vectors be P.\n3.2.3 The matching problem\nThe auctioneer's task, called the matching problem, is to determine which orders to accept among all orders o ∈ O.
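The payoff-vector construction just defined, and the resulting matching check, can be sketched directly. The three orders below are illustrative, and encoding a sell order as a negative quantity and the particular state ordering are assumptions of this sketch, not part of the formalism:

```python
from itertools import product

# States for |A| = 2 base events, as truth assignments (A1, A2).
STATES = list(product([True, False], repeat=2))

def payoff_vector(q, phi, psi, p):
    """Payoff in state w is q * 1[w in psi] * (1[w in phi] - p); q < 0 = sell."""
    return [q * (1 if psi(s) else 0) * ((1 if phi(s) else 0) - p) for s in STATES]

orders = [
    payoff_vector(-1, lambda s: s[0], lambda s: True, 0.5),     # sell 1 of <A1> at $0.5
    payoff_vector(1, lambda s: s[0] and s[1],
                  lambda s: s[0] or s[1], 0.5),                 # buy 1 of <A1A2 | A1 or A2> at $0.5
    payoff_vector(1, lambda s: s[0], lambda s: not s[1], 0.4),  # buy 1 of <A1 | not A2> at $0.4
]

def auctioneer_payoff(alphas):
    # auctioneer's payoff = - sum_i alpha_i * Y_i, componentwise over states
    return [-sum(a * v[w] for a, v in zip(alphas, orders))
            for w in range(len(STATES))]

def is_match(alphas):
    # a match: some alpha_i > 0 and the auctioneer never loses in any state
    return any(a > 0 for a in alphas) and all(x >= -1e-9 for x in auctioneer_payoff(alphas))

# Indivisible case: try every 0/1 subset.
indivisible = [c for c in product([0, 1], repeat=3) if is_match(c)]
# Divisible case: fractional acceptance is allowed.
divisible_example = is_match((0.6, 0.6, 1.0))
```

For these three orders no all-or-nothing subset matches, but the fractional acceptance (0.6, 0.6, 1.0) does, which previews the divisible/indivisible distinction examined later.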
Let αi be the fraction of order oi accepted by the auctioneer (in the indivisible case, αi must be either 0 or 1; in the divisible case, αi can range from 0 to 1).\nIf αi = 0, then order oi is considered rejected and no transactions take place concerning this order.\nFor accepted orders (αi > 0), the auctioneer receives the money lost by bidders and pays out the money won by bidders, so the auctioneer's payoff vector is:\nΥauc = − Σi αi Υi.\nWe also call the auctioneer's payoff vector the surplus vector, since it is the (possibly state-contingent) money left over after all accepted orders are filled.\nAssume that the auctioneer wants to choose a set of orders so that it is guaranteed not to lose any money in any future state, but that the auctioneer does not necessarily insist on obtaining a positive benefit from the transaction (i.e., the auctioneer is content to break even).\nIf Υ⟨ω⟩auc = c in every state ω, where c is nonnegative, then the surplus leftover after processing this match is c dollars.\nLet m = minω [Υ⟨ω⟩auc].\nIn general, processing a match leaves m dollars in cash and Υ⟨ω⟩auc − m in state-contingent dollars, which can then be translated into securities.\nThe auctioneer's payoff vector (the negative of the componentwise sum of the above two vectors) is:\nSince all components are nonnegative, the two orders match.\nThe auctioneer can process both orders, leaving a surplus of $0.1 in cash and one unit of ⟨A1¯A2⟩ in securities.\n❑ Now consider the divisible case, where orders can be partially filled.\nDEFINITION 2.\n(Matching problem, divisible case) Given a set of orders O, does there exist αi ∈ [0, 1] with at least one αi > 0 such that\nΣi αi Υ⟨ω⟩i ≤ 0 for all states ω?\nEXAMPLE 3.\n(Divisible order matching) Suppose | A | = 2.\nConsider an order to sell one unit of ⟨A1⟩ at price $0.5, an order to buy one unit of ⟨A1A2 | A1 ∨ A2⟩ at price $0.5, and an order to buy one unit of ⟨A1 | ¯A2⟩ at price $0.4.\nThe corresponding
payoff vectors are:\nIt is clear by inspection that no non-empty subset of whole orders constitutes a match: in all cases where αi ∈ {0, 1} (other than all αi = 0), at least one state sums to a positive amount (negative for the auctioneer).\nHowever, if α1 = α2 = 3\/5 and α3 = 1, then the auctioneer's payoff vector is: Υauc = ⟨0, 0, 0, 0.1⟩, constituting a match.\nThe auctioneer can process 3\/5 of the first and second orders, and all of the third order, leaving a surplus of 0.1 units of ⟨¯A1¯A2⟩.\nIn this example, a divisible match exists even though an indivisible match is not possible; we examine the distinction in detail in Section 5, where we separate the two matching problems into distinct complexity classes.\n❑ The matching problems defined above are decision problems: the task is only to show the existence or nonexistence of a match.\nHowever, there may be multiple matches from which the auctioneer can choose.\nSometimes the choices are equivalent from the auctioneer's perspective; alternatively, an objective function can be used to find an optimal match according to that objective.\nEXAMPLE 4.\n(Auctioneer alternatives I) Suppose | A | = 2.\nConsider an order to sell one unit of ⟨A1⟩ at price $0.7, an order to sell one unit of ⟨A2⟩ at price $0.7, an order to buy one unit of ⟨A1A2⟩ at price $0.4, an order to buy one unit of ⟨A1¯A2⟩ at price $0.4, and an order to buy one unit\nConsider the indivisible case.\nThe auctioneer could choose to accept bids 1, 3, and 4 together, or the auctioneer could choose to accept bids 2, 3, and 5 together.\nBoth constitute matches, and in fact both yield identical payoffs (Υauc = ⟨0.1, 0.1, 0.1, 0.1⟩, or $0.1 in cash) for the auctioneer.\n❑ EXAMPLE 5.\n(Auctioneer alternatives II) Suppose | A | = 2.\nConsider an order to sell two units of ⟨A1⟩ at price $0.6, an order to buy one unit of ⟨A1A2⟩ at price $0.3, and an order to buy one
unit of ⟨A1¯A2⟩ at price $0.5.\nThe corresponding payoff vectors are:\nConsider the divisible case.\nThe auctioneer could choose to accept one unit each of all three bids, yielding a payoff to the auctioneer of $0.2 in cash (Υauc = ⟨0.2, 0.2, 0.2, 0.2⟩).\nAlternatively, the auctioneer could choose to accept 4\/3 units of bid 1, and one unit each of bids 2 and 3, yielding a payoff to the auctioneer of 1\/3 units of security ⟨A1⟩.\nBoth choices constitute matches (in fact, accepting any number of units of bid 1 between 1 and 4\/3 can be part of a match), though depending on the auctioneer's objective, one choice might be preferred over another.\nFor example, if the auctioneer believes that A1 is very likely to occur, it may prefer to accept 4\/3 units of bid 1.\n❑ There are many possible criteria for the auctioneer to decide among matches, all of which seem reasonable in some circumstances.\nOne natural quantity to maximize is the volume of trade among bidders; another is the auctioneer's utility, either with or without the arbitrage constraint.\nDEFINITION 3.\n(Trade maximization problem) Given a set of indivisible (divisible) orders O, choose αi ∈ {0, 1} (αi ∈ [0, 1]) to maximize the volume of trade.\nAnother reasonable variation is to maximize the total percent of orders filled.\nDEFINITION 4.\n(Auctioneer utility problem) Let the auctioneer's subjective probability for each state ω be Pr (ω), and let the auctioneer's utility for y dollars be u (y).\nGiven a set of indivisible (divisible) orders O, choose αi ∈ {0, 1} (αi ∈ [0, 1]) to maximize expected utility.\nThis last objective function drops the risk-free (arbitrage) constraint.\nIn this case, the auctioneer is a market maker with beliefs about the likelihood of outcomes, and the auctioneer may actually lose money in some outcomes.\nStill other variations and other optimization criteria seem reasonable, including social welfare, etc.
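The trade-off between alternative matches in Example 5 can be checked numerically. The sketch below is our illustration; the state ordering and the encoding of a sell order as a sign flip are assumptions of the example:

```python
from itertools import product

# States are truth assignments (A1, A2) over |A| = 2 base events.
STATES = list(product([True, False], repeat=2))

def unit_payoff(phi, p, sell=False):
    # per-unit trader payoff for an unconditional security <phi> at price p
    sign = -1 if sell else 1
    return [sign * ((1 if phi(s) else 0) - p) for s in STATES]

bid1 = unit_payoff(lambda s: s[0], 0.6, sell=True)    # sell <A1> at $0.6 (per unit)
bid2 = unit_payoff(lambda s: s[0] and s[1], 0.3)      # buy <A1A2> at $0.3
bid3 = unit_payoff(lambda s: s[0] and not s[1], 0.5)  # buy <A1 not-A2> at $0.5

def auctioneer(units):
    # auctioneer's payoff = negative componentwise sum of accepted payoffs
    return [-sum(u * b[w] for u, b in zip(units, (bid1, bid2, bid3)))
            for w in range(len(STATES))]

cash_match = auctioneer((1, 1, 1))        # one unit of each bid
security_match = auctioneer((4/3, 1, 1))  # 4/3 units of bid 1
```

Running this reproduces the two alternatives described in Example 5: the first acceptance leaves $0.2 of cash in every state, while the second leaves a surplus only in the states where A1 holds, i.e., 1/3 unit of the security ⟨A1⟩.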
It also seems reasonable to suppose that the surplus be shared among bidders and the auctioneer, rather than retained solely by the auctioneer.\nThis is analogous to choosing a common transaction price in a double auction (e.g., the midpoint between the bid and ask prices), rather than the buyer paying the bid price and the seller receiving the ask price, with the difference going to the auctioneer.\nThe problem becomes more complicated when dividing surplus securities, in part because they are valued differently by different agents.\nFormulating reasonable sharing rules and examining the resulting incentive properties seems a rich and promising avenue for further investigation.\n4.\nMATCHING ALGORITHMS\nThe straightforward algorithm for solving the divisible matching problem is linear programming; we set up an appropriate linear program in Section 5.1.\nThe straightforward algorithm for solving the indivisible matching problem is integer programming.\nWith n events, to set up the appropriate linear or integer programs, simply writing out the payoff vectors in the straightforward way requires O (2^n) space.\nThere is some hope that specialized algorithms that exploit structure among bids can perform better in terms of average-case time and space complexity.\nFor example, in some cases matches can be identified using logical reduction techniques, without writing down the full payoff vectors.\nSo a match between the following bids:\n• sell 1 of (A1A2) at $0.2\n• sell 1 of (A1¯A2) at $0.1\n• buy 1 of (A1) at $0.4\ncan be identified by reducing the first two bids to an equivalent offer to sell (A1) at $0.3 that clearly matches with the third bid.\nFormalizing a logical-reduction algorithm for matching, or other algorithms that can exploit special structure among the bids, is a promising avenue for future work.\n5.\nTHE COMPUTATIONAL COMPLEXITY OF MATCHING\nIn this section we examine the computational complexity of the auctioneer's matching problem.\nHere n
refers to the problem's input size that includes descriptions of all of the buy and sell orders.\nWe also assume that n bounds the number of base securities.\nWe consider four cases based on two parameters:\n1.\nWhether to allow divisible or indivisible orders.\n2.\nThe number of securities.\nWe consider two possibilities: (a) O (log n) base securities yielding a polynomial number of states.\n(b) An unlimited number of base securities yielding an exponential number of states.\nWe show the following results.\nTHEOREM 1.\nThe matching problem is\n1.\ncomputable in polynomial-time for O (log n) base securities with divisible orders.\n2.\nco-NP-complete for unlimited securities with divisible orders.\n3.\nNP-complete for O (log n) base securities with indivisible orders.\n4.\nΣp2-complete for unlimited securities with indivisible orders.\n5.1 Small number of securities with divisible orders\nWe can build a linear program based on Definition 2.\nWe have variables αi.\nFor each i, we have 0 ≤ αi ≤ 1, and for each state ω in Ω we have the constraint Σi αi Υ⟨ω⟩i ≤ 0.\nA set of orders has a matching exactly when Σi αi > 0 is achievable.\nWith O (log n) base securities, we have |Ω| bounded by a polynomial so we can solve this linear program in polynomial time.\nNote that one might argue that one should maximize some linear combination of the Υ⟨ω⟩auc values to maximize the surplus.\nHowever this approach will not find matchings that have zero surplus.\n5.2 Large number of securities with divisible orders\nWith unlimited base securities, the linear program given in Section 5.1 has an exponential number of constraint equations.\nEach constraint is short to describe and easily computable given ω.\nLet m 0.\nSince |Ω| and |S| are bounded by a polynomial in n, the verification can be done in polynomial time.\nTo show that matching is NP-complete we reduce the NP-complete problem EXACT COVER BY 3-SETS (X3C) to a matching of securities.\nThe input to X3C consists of a set X and a
collection C of 3-element subsets of X.\nThe input (X, C) is in X3C if C contains an exact cover of X, i.e., there is a subcollection C′ of C such that every element of X occurs in exactly one member of C′.\nKarp showed that X3C is NP-complete.\nSuppose we have an instance (X, C) with the set X = {x1,..., x3q} and C = {c1,..., cm}.\nWe set Ω = {e1,..., e3q, r, s} and define securities labelled (φ1),..., (φm), (ψ1),..., (ψq) and (τ), as follows:\n• Security (φi) is true in state r, and is true in state ek if k is not in ci.\n• Security (ψj) is true only in state s.\n• Security (τ) is true in each state ek but not r or s.\nWe have buy orders on each (φi) and (ψj) security for 0.5 - 1 and a buy order on (τ) for 0.5.\nWe claim that a matching exists if and only if (X, C) is in X3C.\nIf (X, C) is in X3C, let C′ be the subcollection that covers each element of X exactly once.\nNote that |C′| = q.\nWe claim the collection consisting of (φi) for each ci in C′, every (ψj) and (τ) has a matching.\nIn each state ek we have an auctioneer's payoff of\nIn states r and s the auctioneer's payoffs are\nSuppose now that (X, C) is not in X3C but there is a matching.\nConsider the number q′ of the (φi) in that matching and q′′ the number of (ψj) in the matching.\nSince a matching requires a nonempty subset of the orders and (τ) by itself is not a matching we have q′ + q′′ > 0.\nWe have three cases.\n5.4 Large Number of Securities with Indivisible Orders\nThe class Σp2 is the second level of the polynomial-time hierarchy.\nA language L is in Σp2 if there exists a polynomial p and a set A in P such that x is in L if and only if there is a y with |y| = p (|x|) such that for all z, with |z| = p (|x|), (x, y, z) is in A.\nThe class Σp2 contains both NP and co-NP.\nUnless the polynomial-time hierarchy collapses (which is considered unlikely), a problem that is
complete for Σp2 is not contained in NP or co-NP.\nWe will show that computing a matching is Σp2-complete, and remains so even for quite restricted types of securities, and hence is (likely) neither in NP nor co-NP.\nWhile it may seem that being NP-complete or co-NP-complete is "hard enough," there are further practical consequences of being outside of NP and co-NP.\nIf the matching problem were in NP, one could use heuristics to search for and verify a match if it exists; even if such heuristics fail in the worst case, they may succeed for most examples in practice.\nSimilarly, if the matching problem were in co-NP, one might hope to at least heuristically rule out the possibility of matching.\nBut for problems outside of NP or co-NP, there is no framework for verifying that a heuristically derived answer is correct.\nLess formally, for NP (or co-NP) - complete problems, you have to be lucky; for Σp2-complete problems, you can't even tell if you've been lucky.\nWe note that the existence of a matching is in Σp2: We use y to choose a subset of the orders and z to represent a state ω, with (x, y, z) in A if the chosen set of orders yields a nonnegative auctioneer's payoff in state ω.\nWe prove a stronger theorem which implies that matching is in Σp2.\nLet S1,..., Sn be a set of securities, where each security Si has cost ci and pays off pi whenever formula Ci is satisfied.\nThe 0 − 1-matching problem asks whether one can, by accepting either 0 or 1 of each security, guarantee a worst-case payoff strictly larger than the total cost.\nTHEOREM 2.\nThe 0 − 1-matching problem is Σp2-complete.\nFurthermore, the problem remains Σp2-complete under the following two special cases:\n1.\nFor all i, Ci is a conjunction of 3 base events (or their negations), pi = 1, and ci = cj for all i and j.
2.\nFor all i, Ci is a conjunction of at most 2 base securities (or their negations).\nThese hardness results hold even if there is a promise that no subset of the securities guarantees a worst-case payoff identical to their cost.\nTo prove Theorem 2, we reduce from the "standard" Σp2 problem that we call T∃∀BF.\nGiven a boolean formula φ with variables x1,..., xn and y1,..., yn, is the following fully-quantified formula true: ∃x1,..., xn ∀y1,..., yn φ?\nHere φ is restricted to being a disjunction of conjunctions of at most 3 variables (or their negations), e.g.,\nThis form, without the bound on the conjunction size, is known as disjunctive normal form (DNF); the restriction to conjunctions of 3 variables is 3-DNF.\nWe reduce T∃∀BF to finding a matching.\nFor the simplest reduction, we consider the matching problem where one has a set of Arrow-Debreu securities whose payoff events are conjunctions of the base securities, or their negations.\nThe auctioneer has the option of accepting either 0 or 1 of each of the given securities.\nWe first reduce to the case where the payoff events are conjunctions of arbitrarily many base events (or their negations).\nBy a standard trick we can reduce the number of base events in each conjunction to 3, and with a slight twist we can even ensure that all securities have the same price as well as the same payoff.\nFinally, we show that the problem remains hard even if only conjunctions of 2 variables are allowed, though with securities that deviate slightly from Arrow-Debreu securities in that they may have varying, non-unit payoffs.\n5.4.1 The basic reduction\nBefore describing the securities, we give some intuition.\nThe T∃∀BF problem may be viewed as a game between a selector and an adversary.\nThe selector sets the xi variables, and then the adversary sets the yi variables so as to falsify the formula φ.\nWe can view the 0 − 1-matching problem as one in which the auctioneer is a buyer who buys securities
corresponding to disjunctions of the base events, and then the adversary sets the values of the base events to minimize the payoff from the securities. We construct our securities so that the optimal buying strategy is to buy n "expensive" securities along with a set of "cheap" securities of negligible cost (for some cases we can modify the construction so that all securities have the same cost). The total cost of the securities will be just under 1, and each security pays off 1, so the adversary must ensure that none of the securities pays off. Each expensive security forces the adversary to set some variable xi to a particular value to prevent the security from paying off; this corresponds to setting the xi variables in the original game. The cheap securities are such that preventing every one of these securities from paying off is equivalent to falsifying φ in the original game.

Among the technical difficulties we face is how to prevent the buyer from buying conflicting securities, e.g., one that forces xi = 0 and another that forces xi = 1, allowing for a trivial arbitrage. Secondly, for our analysis we need to ensure that a trader cannot spend more to get more, say by spending 1.5 for a set of securities with the property that at least 2 securities pay off under all possible events.

For each of the variables {xi}, {yi} in φ, we add a corresponding base security (with the same labels). For each existential variable xi we add additional base securities, ni and zi. We also include a base security Q. In our basic construction, each expensive security costs C and each cheap security costs ε; all securities pay off 1. We require that Cn + ε·(number of cheap securities) < 1 and C(n + 1) > 1. That is, one can buy n expensive securities and all of the cheap securities for less than 1, but one cannot buy n + 1 expensive securities for less than 1. We at times refer to a security by its payoff clause.

Remark: We may loosely think of ε as
0. However, this would allow one to buy a security for nothing that pays (in the worst case) nothing. By making ε > 0, we can show it hard to distinguish portfolios that guarantee a positive profit from those that risk a positive loss. Setting ε > 0 will also allow us to show hardness results for the case where all securities have the same cost.

For 1 ≤ i ≤ n, we have two expensive securities with payoff clauses (x̄i ∧ Q) and (n̄i ∧ Q), and two cheap securities with payoff clauses (xi ∧ z̄i) and (ni ∧ z̄i). For each clause C ∈ φ, we convert every negated variable x̄i into ni and add the conjunction z1 ∧ ··· ∧ zn. Thus, for a clause C = (x2 ∧ x̄7 ∧ ȳ5) we construct a cheap security SC with payoff clause (x2 ∧ n7 ∧ ȳ5 ∧ z1 ∧ ··· ∧ zn). Finally, we have a cheap security with payoff clause (Q̄).

We now argue that a matching exists iff the T∃∀BF formula is true. We do this by successively constraining the buyer and the adversary, eliminating behaviors that would cause the other player to win. The resulting "reasonable" strategies correspond exactly to the game version of T∃∀BF. First, observe that if the adversary sets all of the base securities to false (0), then only the (Q̄) security will pay off. Thus, no buyer can buy more than n expensive securities and guarantee a profit. The problem is thus whether one can buy n expensive securities and all the cheap securities, so that for any setting of the base events at least one security will pay off. Clearly, the adversary must make Q hold, or the (Q̄) security will pay off. Next, we claim that for each i, 1 ≤ i ≤ n,
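The 0-1 matching decision just described (a buyer chooses a subset of securities; an adversary then chooses a state) can be checked by brute force on small instances. The following sketch assumes a toy encoding of our own devising, not taken from the construction above: each security is a (cost, payoff, clause) triple, where a clause is a list of (event index, polarity) literals that must all hold for the security to pay off.

```python
from itertools import product

def pays_off(clause, state):
    # A conjunction of literals pays off when every literal matches the state.
    return all(state[i] == pol for i, pol in clause)

def worst_case_profit(chosen, n_events):
    # The adversary picks the state minimizing the auctioneer's total payoff.
    worst = min(
        sum(p for _, p, clause in chosen if pays_off(clause, state))
        for state in product([False, True], repeat=n_events)
    )
    return worst - sum(c for c, _, _ in chosen)

def has_matching(securities, n_events):
    # 0-1 matching: accept each security 0 or 1 times; a matching exists iff
    # some nonempty subset guarantees worst-case payoff strictly above its cost.
    # The doubly exponential enumeration (over subsets, then over states)
    # mirrors the exists/forall structure that places the problem in Sigma^p_2.
    return any(
        worst_case_profit([s for s, b in zip(securities, take) if b],
                          n_events) > 0
        for take in product([0, 1], repeat=len(securities)) if any(take)
    )
```

For example, two complementary securities on a single event, each costing 0.4 and paying 1, form a matching: together they cost 0.8 and exactly one of them always pays 1.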
Evaluating whether this condition is satisfied by a subset of bids is quite straightforward. Although this example is contrived, its application is not entirely implausible. For example, the disjunctions may correspond to insurance customers, who want an insurance contract to cover all the potential causes of their asset loss. The atomic securities are sold by insurers, each of whom specializes in a different form of disaster cause.

7. CONCLUSIONS AND FUTURE DIRECTIONS

We have analyzed the computational complexity of matching for securities based on logical formulas. Many possible avenues for future work exist, including

1. Analyzing the agents' optimization problem:
• How to choose quantities and bid/ask prices for a collection of securities to maximize one's expected utility, both for linear and nonlinear utility functions.
• How to choose securities; that is, deciding on what collection of boolean formulas to offer to trade, subject to constraints or penalties on the number or complexity of bids.
• How to make the above choices in a game-theoretically sound way, taking into account the choices of other traders, their reasoning about other traders, etc.

J-31: Computing the Optimal Strategy to Commit to
Computing the Optimal Strategy to Commit to∗
Vincent Conitzer, Carnegie Mellon University, Computer Science Department, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA, conitzer@cs.cmu.edu
Tuomas Sandholm, Carnegie Mellon University, Computer Science Department, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA, sandholm@cs.cmu.edu

ABSTRACT

In multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously. However, this model is not always realistic. In many settings, one player is able to commit to a strategy before the other player makes a decision. Such models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously. The recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position). In this paper, we study how to compute optimal strategies to commit
to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games. We give both positive results (efficient algorithms) and negative results (NP-hardness results).

Categories and Subject Descriptors: J.4 [Computer Applications]: Social and Behavioral Sciences - Economics; I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems; F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity

General Terms: Algorithms, Economics, Theory

1. INTRODUCTION

In multiagent systems with self-interested agents (including most economic settings), the optimal action for one agent to take depends on the actions that the other agents take. To analyze how an agent should behave in such settings, the tools of game theory need to be applied. Typically, when a strategic setting is modeled in the framework of game theory, it is assumed that players choose their strategies simultaneously. This is especially true when the setting is modeled as a normal-form game, which only specifies each agent's utility as a function of the vector of strategies that the agents choose, and does not provide any information on the order in which agents make their decisions and what the agents observe about earlier decisions by other agents. Given that the game is modeled in normal form, it is typically analyzed using the concept of Nash equilibrium. A Nash equilibrium specifies a strategy for each player, such that no player has an incentive to individually deviate from this profile of strategies. (Typically, the strategies are allowed to be mixed, that is, probability distributions over the original (pure) strategies.) A (mixed-strategy) Nash equilibrium is guaranteed to exist in finite games [18], but one problem is that there may be multiple Nash equilibria. This leads to the equilibrium selection problem of how an agent can know which strategy to play if it does not know which equilibrium is to be played. When the
setting is modeled as an extensive-form game, it is possible to specify that some players receive some information about actions taken by others earlier in the game before deciding on their action. Nevertheless, in general, the players do not know everything that happened earlier in the game. Because of this, these games are typically still analyzed using an equilibrium concept, where one specifies a mixed strategy for each player, and requires that each player's strategy is a best response to the others' strategies. (Typically an additional constraint on the strategies is now imposed to ensure that players do not play in a way that is irrational with respect to the information that they have received so far. This leads to refinements of Nash equilibrium such as subgame-perfect and sequential equilibrium.)

However, in many real-world settings, strategies are not selected in such a simultaneous manner. Oftentimes, one player (the leader) is able to commit to a strategy before another player (the follower). This can be due to a variety of reasons. For example, one of the players may arrive at the site at which the game is to be played before another agent (e.g., in economic settings, one player may enter a market earlier and commit to a way of doing business). Such commitment power has a profound impact on how the game should be played. For example, the leader may be best off playing a strategy that is dominated in the normal-form representation of the game. Perhaps the earliest and best-known example of the effect of commitment is that by von Stackelberg [25], who showed that, in Cournot's duopoly model [5], if one firm is able to commit to a production quantity first, that firm will do much better than in the simultaneous-move (Nash) solution. In general, if commitment to mixed strategies is possible, then (under minor assumptions) it never hurts, and often helps, to commit to a strategy [26]. Being forced to commit to a pure strategy sometimes
helps, and sometimes hurts (for example, committing to a pure strategy in rock-paper-scissors before the other player's decision will naturally result in a loss). In this paper, we will assume commitment is always forced; if it is not, the player who has the choice of whether to commit can simply compare the commitment outcome to the non-commitment (simultaneous-move) outcome.

Models of leadership are especially important in settings with multiple self-interested software agents. Once the code for an agent (or for a team of agents) is finalized and the agent is deployed, the agent is committed to playing the (possibly randomized) strategy that the code prescribes. Thus, as long as one can credibly show that one cannot change the code later, the code serves as a commitment device. This holds true for recreational tournaments among agents (e.g., poker tournaments, RoboSoccer), and for industrial applications such as sensor webs. Finally, there is also an implicit leadership situation in the field of mechanism design, in which one player (the designer) gets to choose the rules of the game that the remaining players then play. Mechanism design is an extremely important topic to the EC community: the papers published on mechanism design in recent EC conferences are too numerous to cite. Indeed, the mechanism designer may benefit from committing to a choice that, if the (remaining) agents' actions were fixed, would be suboptimal. For example, in a (first-price) auction, the seller may wish to set a positive (artificial) reserve price for the item, below which the item will not be sold, even if the seller values the item at 0. In hindsight (after the bids have come in), this (naïvely) appears suboptimal: if a bid exceeding the reserve price came in, the reserve price had no effect, and if no such bid came in, the seller would have been better off accepting a lower bid. Of course, the reason for setting the reserve price is that it incentivizes the
bidders to bid higher, and because of this, setting artificial reserve prices can actually increase expected revenue to the seller.

A significant amount of research has recently been devoted to the computation of solutions according to various solution concepts for settings in which the agents choose their strategies simultaneously, such as dominance [7, 11, 3] and (especially) Nash equilibrium [8, 21, 16, 15, 2, 22, 23, 4]. However, the computation of the optimal strategy to commit to in a leadership situation has gone ignored. Theoretically, leadership situations can simply be thought of as an extensive-form game in which one player chooses a strategy (for the original game) first. The number of strategies in this extensive-form game, however, can be exceedingly large. For example, if the leader is able to commit to a mixed strategy in the original game, then every one of the (continuum of) mixed strategies constitutes a pure strategy in the extensive-form representation of the leadership situation. (We note that a commitment to a distribution is not the same as a distribution over commitments.) Moreover, if the original game is itself an extensive-form game, the number of strategies in the extensive-form representation of the leadership situation (which is a different extensive-form game) becomes even larger. Because of this, it is usually not computationally feasible to simply transform the original game into the extensive-form representation of the leadership situation; instead, we have to analyze the game in its original representation. In this paper, we study how to compute the optimal strategy to commit to, both in normal-form games (Section 2) and in Bayesian games, which are a special case of extensive-form games (Section 3).

2. NORMAL-FORM GAMES

In this section, we study how to compute the optimal strategy to commit to for games represented in normal form.

2.1 Definitions

In a normal-form game, every player i ∈ {1, ...
, n} has a set of pure strategies (or actions) Si, and a utility function ui : S1 × S2 × ... × Sn → R that maps every outcome (a vector consisting of a pure strategy for every player, also known as a profile of pure strategies) to a real number. To ease notation, in the case of two players, we will refer to player 1's pure strategy set as S, and player 2's pure strategy set as T. Such games can be represented in (bi-)matrix form, in which the rows correspond to player 1's pure strategies, the columns correspond to player 2's pure strategies, and the entries of the matrix give the row and column player's utilities (in that order) for the corresponding outcome of the game. In the case of three players, we will use R, S, and T for player 1, 2, and 3's pure strategies, respectively. A mixed strategy for a player is a probability distribution over that player's pure strategies. In the case of two-player games, we will refer to player 1 as the leader and player 2 as the follower.

Before defining optimal leadership strategies, consider the following game, which illustrates the effect of the leader's ability to commit.

2, 1   4, 0
1, 0   3, 1

In this normal-form representation, the bottom strategy for the row player is strictly dominated by the top strategy. Nevertheless, if the row player has the ability to commit to a pure strategy before the column player chooses his strategy, the row player should commit to the bottom strategy: doing so will make the column player prefer to play the right strategy, leading to a utility of 3 for the row player. By contrast, if the row player were to commit to the top strategy, the column player would prefer to play the left strategy, leading to a utility of only 2 for the row player. If the row player is able to commit to a mixed strategy, then she can get an even greater (expected) utility: if the row player commits to placing probability p > 1/2 on the bottom strategy, then the column player will still
prefer to play the right strategy, and the row player's expected utility will be 3p + 4(1 − p) = 4 − p ≥ 3. If the row player plays each strategy with probability exactly 1/2, the column player is indifferent between the strategies. In such cases, we will assume that the column player will choose the strategy that maximizes the row player's utility (in this case, the right strategy). Hence, the optimal mixed strategy to commit to for the row player is p = 1/2. There are a few good reasons for this assumption. If we were to assume the opposite, then there would not exist an optimal strategy for the row player in the example game: the row player would play the bottom strategy with probability p = 1/2 + ε with ε > 0, and the smaller ε, the better the utility for the row player. By contrast, if we assume that the follower always breaks ties in the leader's favor, then an optimal mixed strategy for the leader always exists, and this corresponds to a subgame-perfect equilibrium of the extensive-form representation of the leadership situation. In any case, this is a standard assumption for such models (e.g.
[20]), although some work has investigated what can happen in the other subgame-perfect equilibria [26]. (For generic two-player games, the leader's subgame-perfect equilibrium payoff is unique.) Also, the same assumption is typically used in mechanism design, in that it is assumed that if an agent is indifferent between revealing his preferences truthfully and revealing them falsely, he will report them truthfully. Given this assumption, we can safely refer to optimal leadership strategies rather than having to use some equilibrium notion. Hence, for the purposes of this paper, an optimal strategy to commit to in a 2-player game is a strategy s ∈ S′ that maximizes max_{t ∈ BR(s)} ul(s, t), where BR(s) = arg max_{t ∈ T} uf(s, t). (Here ul and uf are the leader's and follower's utility functions, respectively.) We can have S′ = S for the case of commitment to pure strategies, or S′ = Δ(S), the set of probability distributions over S, for the case of commitment to mixed strategies. (We note that replacing T by Δ(T) makes no difference in this definition.)

For games with more than two players, in which the players commit to their strategies in sequence, we define optimal strategies to commit to recursively. After the leader commits to a strategy, the game to be played by the remaining agents is itself a (smaller) leadership game. Thus, we define an optimal strategy to commit to as a strategy that maximizes the leader's utility, assuming that the play of the remaining agents is itself optimal under this definition and maximizes the leader's utility among all optimal ways to play the remaining game. Again, commitment to mixed strategies may or may not be a possibility for every player (although for the last player it does not matter if we allow for commitment to mixed strategies).

2.2 Commitment to pure strategies

We first study how to compute the optimal pure strategy to commit to. This is relatively simple, because the number of strategies to
commit to is not very large. (In the following, #outcomes is the number of complete strategy profiles.)

Theorem 1. Under commitment to pure strategies, the set of all optimal strategy profiles in a normal-form game can be found in O(#players · #outcomes) time.

Proof. Each pure strategy that the first player may commit to will induce a subgame for the remaining players. We can solve each such subgame recursively to find all of its optimal strategy profiles; each of these will give the original leader some utility. Those that give the leader maximal utility correspond exactly to the optimal strategy profiles of the original game. We now present the algorithm formally. Let Su(G, s1) be the subgame that results after the first (remaining) player in G plays s1 ∈ S1^G. A game with 0 players is simply an outcome of the game. The function Append(s, O) appends the strategy s to each of the vectors of strategies in the set O. Let e be the empty vector with no elements. In a slight abuse of notation, we will write u1^G(C) when all strategy profiles in the set C give player 1 the same utility in the game G.
(Here, player 1 is the first remaining player in the subgame G, not necessarily player 1 in the original game.) We note that arg max is set-valued. Then, the following algorithm computes all optimal strategy profiles:

Algorithm Solve(G)
  if G has 0 players return {e}
  C ← ∅
  for all s1 ∈ S1^G {
    O ← Solve(Su(G, s1))
    O′ ← arg max_{o ∈ O} u1^G(s1, o)
    if C = ∅ or u1^G(s1, O′) = u1^G(C)
      C ← C ∪ Append(s1, O′)
    if u1^G(s1, O′) > u1^G(C)
      C ← Append(s1, O′)
  }
  return C

Every outcome is (potentially) examined by every player, which leads to the given runtime bound.

As an example of how the algorithm works, consider the following 3-player game, in which the first player chooses the left or right matrix, the second player chooses a row, and the third player chooses a column.

0,1,1  1,1,0  1,0,1      3,3,0  0,2,0  3,0,1
2,1,1  3,0,1  1,1,1      4,4,2  0,0,2  0,0,0
0,0,1  0,0,0  3,3,0      0,5,1  0,0,0  3,0,0

First we eliminate the outcomes that do not correspond to best responses for the third player (removing them from the matrix):

0,1,1    -    1,0,1        -      -    3,0,1
2,1,1  3,0,1  1,1,1      4,4,2  0,0,2    -
0,0,1    -      -        0,5,1    -      -

Next, we remove the entries in which the third player does not break ties in favor of the second player, as well as entries that do not correspond to best responses for the second player:

0,1,1    -      -          -      -      -
2,1,1    -    1,1,1        -      -      -
  -      -      -        0,5,1    -      -

Finally, we remove the entries in which the second and third players do not break ties in favor of the first player, as well as entries that do not correspond to best responses for the first player:

  -      -      -          -      -      -
2,1,1    -      -          -      -      -
  -      -      -          -      -      -

Hence, in optimal play, the first player chooses the left matrix, the second player chooses the middle row, and the third player chooses the left column. (We note that this outcome is Pareto-dominated by (Right, Middle, Left).) For general normal-form games, each player's utility for each of the outcomes has to be explicitly represented in the input, so that the input size is itself Ω(#players · #outcomes). Therefore, the algorithm is in fact a
linear-time algorithm.

2.3 Commitment to mixed strategies

In the special case of two-player zero-sum games, computing an optimal mixed strategy for the leader to commit to is equivalent to computing a minimax strategy, which minimizes the maximum expected utility that the opponent can obtain. Minimax strategies constitute the only natural solution concept for two-player zero-sum games: von Neumann's Minimax Theorem [24] states that in two-player zero-sum games, it does not matter (in terms of the players' utilities) which player gets to commit to a mixed strategy first, and a profile of mixed strategies is a Nash equilibrium if and only if both strategies are minimax strategies. It is well-known that a minimax strategy can be found in polynomial time, using linear programming [17]. Our first result in this section generalizes this result, showing that an optimal mixed strategy for the leader to commit to can be efficiently computed in general-sum two-player games, again using linear programming.

Theorem 2. In 2-player normal-form games, an optimal mixed strategy to commit to can be found in polynomial time using linear programming.

Proof. For every pure follower strategy t, we compute a mixed strategy for the leader such that 1) playing t is a best response for the follower, and 2) under this constraint, the mixed strategy maximizes the leader's utility. Such a mixed strategy can be computed using the following simple linear program:

maximize Σ_{s ∈ S} ps ul(s, t)
subject to
  for all t′ ∈ T: Σ_{s ∈ S} ps uf(s, t) ≥ Σ_{s ∈ S} ps uf(s, t′)
  Σ_{s ∈ S} ps = 1

We note that this program may be infeasible for some follower strategies t, for example, if t is a strictly dominated strategy. Nevertheless, the program must be feasible for at least some follower strategies; among these follower strategies, choose a strategy t∗ that maximizes the linear program's solution value. Then, if the leader chooses as her mixed strategy the optimal settings of the
variables ps for the linear program for t∗, and the follower plays t∗, this constitutes an optimal strategy profile.

In the following result, we show that we cannot expect to solve the problem more efficiently than linear programming, because we can reduce any linear program with a probability constraint on its variables to a problem of computing the optimal mixed strategy to commit to in a 2-player normal-form game.

Theorem 3. Any linear program whose variables xi (with xi ∈ R≥0) must satisfy Σi xi = 1 can be modeled as a problem of computing the optimal mixed strategy to commit to in a 2-player normal-form game.

Proof. Let the leader have a pure strategy i for every variable xi. Let the column player have one pure strategy j for every constraint in the linear program (other than Σi xi = 1), and a single additional pure strategy 0. Let the utility functions be as follows. Writing the objective of the linear program as maximize Σi ci xi, for any i, let ul(i, 0) = ci and uf(i, 0) = 0. Writing the jth constraint of the linear program (not including Σi xi = 1) as Σi aij xi ≤ bj, for any i and j > 0, let ul(i, j) = (min_i ci) − 1 and uf(i, j) = aij − bj. For example, consider the following linear program:

maximize 2x1 + x2
subject to
  x1 + x2 = 1
  5x1 + 2x2 ≤ 3
  7x1 − 2x2 ≤ 2

The optimal solution to this program is x1 = 1/3, x2 = 2/3. Our reduction transforms this program into the following leader-follower game (where the leader is the row player).

2, 0   0, 2    0, 5
1, 0   0, -1   0, -4

Indeed, the optimal strategy for the leader is to play the top strategy with probability 1/3 and the bottom strategy with probability 2/3. We now show that the reduction works in general. Clearly, the leader wants to incentivize the follower to play 0, because the utility that the leader gets when the follower plays 0 is always greater than when the follower does not play 0. In order for the follower not to prefer playing j > 0 rather
than 0, it must be the case that Σi pl(i)(aij − bj) ≤ 0, or equivalently Σi pl(i) aij ≤ bj. Hence the leader will get a utility of at least min_i ci if and only if there is a feasible solution to the constraints. Given that the pl(i) incentivize the follower to play 0, the leader attempts to maximize Σi pl(i) ci. Thus the leader must solve the original linear program.

As an alternative proof of Theorem 3, one may observe that it is known that finding a minimax strategy in a zero-sum game is as hard as the linear programming problem [6], and as we pointed out at the beginning of this section, computing a minimax strategy in a zero-sum game is a special case of the problem of computing an optimal mixed strategy to commit to. This polynomial-time solvability of the problem of computing an optimal mixed strategy to commit to in two-player normal-form games contrasts with the unknown complexity of computing a Nash equilibrium in such games [21], as well as with the NP-hardness of finding a Nash equilibrium with maximum utility for a given player in such games [8, 2]. Unfortunately, this result does not generalize to more than two players; here, the problem becomes NP-hard. To show this, we reduce from the VERTEX-COVER problem.

Definition 1. In VERTEX-COVER, we are given a graph G = (V, E) and an integer K. We are asked whether there exists a subset of the vertices S ⊆ V, with |S| = K, such that every edge e ∈ E has at least one of its endpoints in S.
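The covering condition in Definition 1 is easy to verify for a given subset; deciding whether a size-K cover exists at all is the hard part. A minimal brute-force sketch (function names are ours, not from the paper):

```python
from itertools import combinations

def is_vertex_cover(cover, edges):
    # Definition 1's condition: every edge has at least one endpoint in the cover.
    return all(u in cover or v in cover for u, v in edges)

def has_vertex_cover(vertices, edges, k):
    # Decide the VERTEX-COVER instance (G = (V, E), K) by enumerating every
    # size-k subset of V -- exponential time, as expected for an NP-complete
    # problem.
    return any(is_vertex_cover(set(subset), edges)
               for subset in combinations(vertices, k))
```

On a triangle, for instance, any two vertices form a cover but no single vertex does, so the K = 2 instance is a yes-instance while the K = 1 instance is not.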
BALANCED-VERTEX-COVER is the special case of VERTEX-COVER in which K = |V|/2. VERTEX-COVER is NP-complete [9]. The following lemma shows that the hardness remains if we require K = |V|/2. (Similar results have been shown for other NP-complete problems.)

Lemma 1. BALANCED-VERTEX-COVER is NP-complete.

Proof. Membership in NP follows from the fact that the problem is a special case of VERTEX-COVER, which is in NP. To show NP-hardness, we reduce an arbitrary VERTEX-COVER instance to a BALANCED-VERTEX-COVER instance, as follows. If, for the VERTEX-COVER instance, K > |V|/2, then we simply add isolated vertices that are disjoint from the rest of the graph, until K = |V|/2. If K < |V|/2, we add isolated triangles (that is, the complete graph on three vertices) to the graph, increasing K by 2 every time, until K = |V|/2.

Theorem 4. In 3-player normal-form games, finding an optimal mixed strategy to commit to is NP-hard.

Proof. We reduce an arbitrary BALANCED-VERTEX-COVER instance to the following 3-player normal-form game. For every vertex v, each of the three players has a pure strategy corresponding to that vertex (rv, sv, tv, respectively). In addition, for every edge e, the third player has a pure strategy te; and finally, the third player has one additional pure strategy t0. The utilities are as follows:

• for all r ∈ R, s ∈ S, u1(r, s, t0) = u2(r, s, t0) = 1;
• for all r ∈ R, s ∈ S, t ∈ T − {t0}, u1(r, s, t) = u2(r, s, t) = 0;
• for all v ∈ V, s ∈ S, u3(rv, s, tv) = 0;
• for all v ∈ V, r ∈ R, u3(r, sv, tv) = 0;
• for all v ∈ V, for all r ∈ R − {rv}, s ∈ S − {sv}, u3(r, s, tv) = |V|/(|V| − 2);
• for all e ∈ E, s ∈ S, for both v ∈ e, u3(rv, s, te) = 0;
• for all e ∈ E, s ∈ S, for all v ∉ e, u3(rv, s, te) = |V|/(|V| − 2);
• for all r ∈ R, s ∈ S, u3(r, s, t0) = 1.

We note that
players 1 and 2 have the same utility function. We claim that there is an optimal strategy profile in which players 1 and 2 both obtain 1 (their maximum utility) if and only if there is a solution to the BALANCED-VERTEX-COVER problem. (Otherwise, these players will both obtain 0.)

First, suppose there exists a solution to the BALANCED-VERTEX-COVER problem. Then, let player 1 play every rv such that v is in the cover with probability 2/|V|, and let player 2 play every sv such that v is not in the cover with probability 2/|V|. Then, for player 3, the expected utility of playing tv (for any v) is (1 − 2/|V|) · |V|/(|V| − 2) = 1, because there is a chance of 2/|V| that rv or sv is played. Additionally, the expected utility of playing te (for any e) is at most (1 − 2/|V|) · |V|/(|V| − 2) = 1, because there is a chance of at least 2/|V| that some rv with v ∈ e is played (because player 1 is randomizing over the pure strategies corresponding to the cover). It follows that playing t0 is a best response for player 3, giving players 1 and 2 a utility of 1.

Now, suppose that players 1 and 2 obtain 1 in optimal play. Then, it must be the case that player 3 plays t0. Hence, for every v ∈ V, there must be a probability of at least 2/|V| that either rv or sv is played, for otherwise player 3 would be better off playing tv. Because players 1 and 2 have only a total probability of 2 to distribute, it must be the case that for each v, either rv or sv is played with probability 2/|V|, and the other is played with probability 0. (It is not possible for both to have nonzero probability, because then there would be some probability that both are played simultaneously (correlation is not possible), hence the total probability of at least one being played could not be high enough for all vertices.) Thus, for exactly half the v ∈ V, player 1 places probability 2/|V| on rv. Moreover, for every e ∈ E, there must be a probability of
at least 2/|V| that some r_v with v ∈ e is played, for otherwise player 3 would be better off playing t_e. Thus, the v ∈ V such that player 1 places probability 2/|V| on r_v constitute a balanced vertex cover.

3. BAYESIAN GAMES

So far, we have restricted our attention to normal-form games. In a normal-form game, it is assumed that every agent knows every other agent's preferences over the outcomes of the game. In general, however, agents may have some private information about their preferences that is not known to the other agents. Moreover, at the time of commitment to a strategy, the agents may not even know their own (final) preferences over the outcomes of the game yet, because these preferences may depend on a context that has yet to materialize. For example, when the code for a trading agent is written, it may not yet be clear how that agent will value the resources it will negotiate over later, because this depends on information that is not yet available at the time at which the code is written (such as orders that will have been placed with the agent before the negotiation). In this section, we will study commitment in Bayesian games, which can model such uncertainty over preferences.

3.1 Definitions

In a Bayesian game, every player i has a set of actions S_i, a set of types Θ_i with an associated probability distribution π_i : Θ_i → [0, 1], and, for each type θ_i, a utility function u_i^{θ_i} : S_1 × S_2 × ...
× S_n → R. A pure strategy in a Bayesian game is a mapping from the player's types to actions, σ_i : Θ_i → S_i. (Bayesian games can be rewritten in normal form by enumerating every pure strategy σ_i, but this causes an exponential blowup in the size of the representation of the game and therefore cannot lead to efficient algorithms.) The strategy that the leader should commit to depends on whether, at the time of commitment, the leader knows her own type. If the leader does know her own type, the other types that the leader might have had become irrelevant, and the leader should simply commit to the strategy that is optimal for that type. However, as argued above, the leader does not necessarily know her own type at the time of commitment (e.g., the time at which the code is submitted). In this case, the leader must commit to a strategy that depends on the leader's eventual type. We will study this latter model, although we will pay specific attention to the case where the leader has only a single type, which is effectively the same as the former model.

3.2 Commitment to pure strategies

It turns out that computing an optimal pure strategy to commit to is hard in Bayesian games, even with two players.

Theorem 5. Finding an optimal pure strategy to commit to in 2-player Bayesian games is NP-hard, even when the follower has only a single type.

Proof. We reduce an arbitrary VERTEX-COVER instance to the following Bayesian game between the leader and the follower. The leader has K types θ_1, θ_2, ...
, θ_K, each occurring with probability 1/K, and for every vertex v ∈ V, the leader has an action s_v. The follower has only a single type; for each edge e ∈ E, the follower has an action t_e, and the follower has a single additional action t_0. The utility function for the leader is given by, for all θ_l ∈ Θ_l and all s ∈ S, u_l^{θ_l}(s, t_0) = 1, and for all e ∈ E, u_l^{θ_l}(s, t_e) = 0. The follower's utility is given by:
• For all v ∈ V, for all e ∈ E with v ∉ e, u_f(s_v, t_e) = 1;
• For all v ∈ V, for all e ∈ E with v ∈ e, u_f(s_v, t_e) = −K;
• For all v ∈ V, u_f(s_v, t_0) = 0.
We claim that the leader can get a utility of 1 if and only if there is a solution to the VERTEX-COVER instance. First, suppose that there is a solution to the VERTEX-COVER instance. Then, the leader can commit to a pure strategy such that for each vertex v in the cover, the leader plays s_v for some type. Then, the follower's utility for playing t_e (for any e ∈ E) is at most (K − 1)/K · 1 + (1/K) · (−K) = −1/K, so that the follower will prefer to play t_0, which gives the leader a utility of 1, as required. Now, suppose that there is a pure strategy for the leader that gives the leader a utility of 1. Then, the follower must play t_0. In order for the follower not to prefer playing t_e (for any e ∈ E) instead, for at least one v ∈ e the leader must play s_v for some type θ_l. Hence, the set of vertices v that the leader plays for some type must constitute a vertex cover; and this set can have size at most K, because the leader has only K types. So there is a solution to the VERTEX-COVER instance.

However, if the leader has only a single type, then the problem becomes easy again (here, #types is the number of types for the follower):

Theorem 6. In 2-player Bayesian games in which the leader has only a single type, an optimal pure strategy to commit to can be found
in O(#outcomes · #types) time.

Proof. For every leader action s, we can compute, for every follower type θ_f ∈ Θ_f, which actions t maximize the follower's utility; call this set of actions BR_{θ_f}(s). Then, the utility that the leader receives for committing to action s can be computed as Σ_{θ_f ∈ Θ_f} π(θ_f) · max_{t ∈ BR_{θ_f}(s)} u_l(s, t), and the leader can choose the best action to commit to.

3.3 Commitment to mixed strategies

In two-player zero-sum imperfect-information games with perfect recall (no player ever forgets something that it once knew), a minimax strategy can be constructed in polynomial time [12, 13]. Unfortunately, this result does not extend to computing optimal mixed strategies to commit to in the general-sum case, not even in Bayesian games. We will exhibit NP-hardness by reducing from the INDEPENDENT-SET problem.

Definition 2. In INDEPENDENT-SET, we are given a graph G = (V, E) and an integer K. We are asked whether there exists a subset of the vertices S ⊆ V, with |S| = K, such that no edge e ∈ E has both of its endpoints in S.
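Before turning to the hardness reduction, note that the counting argument in the proof of Theorem 6 translates directly into a short procedure. The sketch below is illustrative only: the payoff arrays u_leader and u_follower and the type distribution prior are hypothetical stand-ins for a concrete game, and ties are broken in the leader's favor, as in the proof.

```python
def optimal_pure_commitment(u_leader, u_follower, prior):
    """Theorem 6 sketch: the leader has a single type.

    u_leader[s][t]      leader's utility for leader action s, follower action t
    u_follower[th][s][t] follower's utility when its type is th
    prior[th]           probability of follower type th
    Returns the best leader action and its expected utility.
    """
    best_action, best_value = None, float("-inf")
    for s in range(len(u_leader)):
        expected = 0.0
        for th, p in enumerate(prior):
            # BR_th(s): the follower actions maximizing this type's utility
            top = max(u_follower[th][s])
            br = [t for t, u in enumerate(u_follower[th][s]) if u == top]
            # the follower breaks ties in the leader's favor
            expected += p * max(u_leader[s][t] for t in br)
        if expected > best_value:
            best_action, best_value = s, expected
    return best_action, best_value
```

Each (s, θ_f, t) combination is examined a constant number of times, which yields the O(#outcomes · #types) bound, with #outcomes = |S| · |T|.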
Again, this problem is NP-complete [9].

Theorem 7. Finding an optimal mixed strategy to commit to in 2-player Bayesian games is NP-hard, even when the leader has only a single type and the follower has only two actions.

Proof. We reduce an arbitrary INDEPENDENT-SET instance to the following Bayesian game between the leader and the follower. The leader has only a single type, and for every vertex v ∈ V, the leader has an action s_v. The follower has a type θ_v for every v ∈ V, occurring with probability 1/((|E| + 1)|V|), and a type θ_e for every e ∈ E, occurring with probability 1/(|E| + 1). The follower has two actions: t_0 and t_1. The leader's utility is given by, for all s ∈ S, u_l(s, t_0) = 1 and u_l(s, t_1) = 0. The follower's utility is given by:
• For all v ∈ V, u_f^{θ_v}(s_v, t_1) = 0;
• For all v ∈ V and s ∈ S − {s_v}, u_f^{θ_v}(s, t_1) = K/(K − 1);
• For all v ∈ V and s ∈ S, u_f^{θ_v}(s, t_0) = 1;
• For all e ∈ E and s ∈ S, u_f^{θ_e}(s, t_0) = 1;
• For all e ∈ E, for both v ∈ e, u_f^{θ_e}(s_v, t_1) = 2K/3;
• For all e ∈ E, for all v ∉ e, u_f^{θ_e}(s_v, t_1) = 0.
We claim that an optimal strategy to commit to gives the leader an expected utility of at least |E|/(|E| + 1) + K/((|E| + 1)|V|) if and only if there is a solution to the INDEPENDENT-SET instance. First, suppose that there is a solution to the INDEPENDENT-SET instance. Then, the leader could commit to the following strategy: for every vertex v in the independent set, play the corresponding s_v with probability 1/K. If the follower has type θ_e for some e ∈ E, the expected utility for the follower of playing t_1 is at most (1/K) · 2K/3 = 2/3, because there is at most one vertex v ∈ e such that s_v is played with nonzero probability. Hence, the follower will play t_0 and obtain a utility of 1. If the follower has type θ_v for some vertex v in the independent set, the
expected utility for the follower of playing t_1 is ((K − 1)/K) · (K/(K − 1)) = 1, because the leader plays s_v with probability 1/K. It follows that the follower (who breaks ties to maximize the leader's utility) will play t_0, which also gives a utility of 1 and gives the leader a higher utility. Hence the leader's expected utility for this strategy is at least |E|/(|E| + 1) + K/((|E| + 1)|V|), as required.

Now, suppose that there is a strategy that gives the leader an expected utility of at least |E|/(|E| + 1) + K/((|E| + 1)|V|). Then, this strategy must induce the follower to play t_0 whenever it has a type of the form θ_e (because otherwise, the utility could be at most (|E| − 1)/(|E| + 1) + |V|/((|E| + 1)|V|) = |E|/(|E| + 1) < |E|/(|E| + 1) + K/((|E| + 1)|V|)). Thus, it cannot be the case that for some edge e = (v_1, v_2) ∈ E, the probability that the leader plays one of s_{v_1} and s_{v_2} is at least 2/K, because then the expected utility for the follower of playing t_1 when it has type θ_e would be at least (2/K) · 2K/3 = 4/3 > 1. Moreover, the strategy must induce the follower to play t_0 for at least K types of the form θ_v. Inducing the follower to play t_0 when it has type θ_v can be done only by playing s_v with probability at least 1/K, which gives the follower a utility of at most ((K − 1)/K) · (K/(K − 1)) = 1 for playing t_1. But then, the set of vertices v such that s_v is played with probability at least 1/K must constitute an independent set of size K (because if there were an edge e between two such vertices, it would induce the follower to play t_1 for type θ_e by the above).

By contrast, if the follower has only a single type, then we can generalize the linear programming approach for normal-form games:

Theorem 8. In 2-player Bayesian games in which the follower has only a single type, an optimal mixed strategy to commit to can be found in polynomial time using linear programming.

Proof. We generalize the approach in Theorem 2 as follows. For every
pure follower strategy t, we compute a mixed strategy for the leader for every one of the leader's types such that 1) playing t is a best response for the follower, and 2) under this constraint, the mixed strategy maximizes the leader's ex ante expected utility. To do so, we generalize the linear program as follows:

maximize  Σ_{θ_l ∈ Θ_l} π(θ_l) Σ_{s ∈ S} p_s^{θ_l} u_l^{θ_l}(s, t)
subject to
  for all t' ∈ T:  Σ_{θ_l ∈ Θ_l} π(θ_l) Σ_{s ∈ S} p_s^{θ_l} u_f(s, t)  ≥  Σ_{θ_l ∈ Θ_l} π(θ_l) Σ_{s ∈ S} p_s^{θ_l} u_f(s, t')
  for all θ_l ∈ Θ_l:  Σ_{s ∈ S} p_s^{θ_l} = 1

As in Theorem 2, among these linear programs (one per pure follower strategy t), the one whose solution has the greatest value yields an optimal strategy to commit to.

This shows an interesting contrast between commitment to pure strategies and commitment to mixed strategies in Bayesian games: for pure strategies, the problem becomes easy if the leader has only a single type (but not if the follower has only a single type), whereas for mixed strategies, the problem becomes easy if the follower has only a single type (but not if the leader has only a single type).

4. CONCLUSIONS AND FUTURE RESEARCH

In multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously. This requires some equilibrium notion (Nash equilibrium and its refinements), and often leads to the equilibrium selection problem: it is unclear to each individual player according to which equilibrium she should play. However, this model is not always realistic. In many settings, one player is able to commit to a strategy before the other player makes a decision. For example, one agent may arrive at the (real or virtual) site of the game before the other, or, in the specific case of software agents, the code for one agent may be completed and committed before that of another agent. Such models are synonymously referred to as leadership,
commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously. Specifically, if commitment to mixed strategies is possible, then (optimal) commitment never hurts the leader, and often helps. The recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position). In this paper, we studied how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games. For normal-form games, we showed that the optimal pure strategy to commit to can be found efficiently for any number of players. An optimal mixed strategy to commit to in a normal-form game can be found efficiently for two players using linear programming (and no more efficiently than that, in the sense that any linear program with a probability constraint can be encoded as such a problem). (This is a generalization of the polynomial-time computability of minimax strategies in normal-form games.) The problem becomes NP-hard for three (or more) players. In Bayesian games, the problem of finding an optimal pure strategy to commit to is NP-hard even in two-player games in which the follower has only a single type, although two-player games in which the leader has only a single type can be solved efficiently. The problem of finding an optimal mixed strategy to commit to in a Bayesian game is NP-hard even in two-player games in which the leader has only a single type, although two-player games in which the follower has only a single type can be solved efficiently using a generalization of the linear programming approach for normal-form games. The following two tables summarize these results.

Results for commitment to pure strategies (with more than 2 players, the follower is the last player to commit, the leader is the first):

                             2 players                2 players / ≥ 3 players
  normal-form                O(#outcomes)             O(#outcomes · #players)
  Bayesian, 1-type leader    O(#outcomes · #types)    NP-hard
  Bayesian, 1-type follower  NP-hard                  NP-hard
  Bayesian (general)         NP-hard                  NP-hard

Results for commitment to mixed strategies (same convention with more than 2 players):

                             2 players                        ≥ 3 players
  normal-form                one LP-solve per follower action NP-hard
  Bayesian, 1-type leader    NP-hard                          NP-hard
  Bayesian, 1-type follower  one LP-solve per follower action NP-hard
  Bayesian (general)         NP-hard                          NP-hard

Future research can take a number of directions. First, we can empirically evaluate the techniques presented here on test suites such as GAMUT [19]. We can also study the computation of optimal strategies to commit to in other concise representations of normal-form games (Bayesian games are one potentially concise representation of normal-form games), for example in graphical games [10] or local-effect/action graph games [14, 1]. For the cases where computing an optimal strategy to commit to is NP-hard, we can also study the computation of approximately optimal strategies to commit to. While the correct definition of an approximately optimal strategy in this setting may appear simple at first (it should be a strategy that, if the following players play optimally, performs almost as well as the optimal strategy in expectation), this definition becomes problematic when we consider that the other players may also be playing only approximately optimally. One may also study models in which multiple (but not all) players commit at the same time. Another interesting direction to pursue is to see whether computing optimal mixed strategies to commit to can help us in, or otherwise shed light on, computing Nash equilibria. Often, optimal mixed strategies to commit to are also Nash equilibrium strategies (for example, in two-player zero-sum games this is always true), although this is not
always the case (for example, as we already pointed out, sometimes the optimal strategy to commit to is a strictly dominated strategy, which can never be a Nash equilibrium strategy).

5. REFERENCES
[1] N. A. R. Bhat and K. Leyton-Brown. Computing Nash equilibria of action-graph games. In Proceedings of the 20th Annual Conference on Uncertainty in Artificial Intelligence (UAI), Banff, Canada, 2004.
[2] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), pages 765-771, Acapulco, Mexico, 2003.
[3] V. Conitzer and T. Sandholm. Complexity of (iterated) dominance. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 88-97, Vancouver, Canada, 2005.
[4] V. Conitzer and T. Sandholm. A generalized strategy eliminability criterion and computational methods for applying it. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 483-488, Pittsburgh, PA, USA, 2005.
[5] A. A. Cournot. Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). Hachette, Paris, 1838.
[6] G. Dantzig. A proof of the equivalence of the programming problem and the game problem. In T. Koopmans, editor, Activity Analysis of Production and Allocation, pages 330-335. John Wiley & Sons, 1951.
[7] I. Gilboa, E. Kalai, and E. Zemel. The complexity of eliminating dominated strategies. Mathematics of Operations Research, 18:553-565, 1993.
[8] I. Gilboa and E. Zemel. Nash and correlated equilibria: Some complexity considerations. Games and Economic Behavior, 1:80-93, 1989.
[9] R. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, pages 85-103. Plenum Press, NY, 1972.
[10] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2001.
[11] D. E. Knuth, C. H. Papadimitriou, and J. N. Tsitsiklis. A note on strategy elimination in bimatrix games. Operations Research Letters, 7(3):103-107, 1988.
[12] D. Koller and N. Megiddo. The complexity of two-person zero-sum games in extensive form. Games and Economic Behavior, 4(4):528-552, Oct. 1992.
[13] D. Koller, N. Megiddo, and B. von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14(2):247-259, 1996.
[14] K. Leyton-Brown and M. Tennenholtz. Local-effect games. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, 2003.
[15] R. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 36-41, San Diego, CA, 2003.
[16] M. Littman and P. Stone. A polynomial-time Nash equilibrium algorithm for repeated games. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 48-54, San Diego, CA, 2003.
[17] R. D. Luce and H. Raiffa. Games and Decisions. John Wiley and Sons, New York, 1957. Dover republication 1989.
[18] J. Nash. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36:48-49, 1950.
[19] E. Nudelman, J. Wortman, K. Leyton-Brown, and Y. Shoham. Run the GAMUT: A comprehensive approach to evaluating game-theoretic algorithms. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), New York, NY, USA, 2004.
[20] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.
[21] C.
Papadimitriou. Algorithms, games and the Internet. In Proceedings of the Annual Symposium on Theory of Computing (STOC), pages 749-753, 2001.
[22] R. Porter, E. Nudelman, and Y. Shoham. Simple search methods for finding a Nash equilibrium. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 664-669, San Jose, CA, USA, 2004.
[23] T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer programming methods for finding Nash equilibria. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 495-501, Pittsburgh, PA, USA, 2005.
[24] J. von Neumann. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100:295-320, 1927.
[25] H. von Stackelberg. Marktform und Gleichgewicht. Springer, Vienna, 1934.
[26] B. von Stengel and S. Zamir. Leadership with commitment to mixed strategies. CDAM Research Report LSE-CDAM-2004-01, London School of Economics, Feb. 2004.
specifies a strategy for each player, such that no player has an incentive to individually deviate from this profile of strategies.\n(Typically, the strategies are allowed to be mixed, that is, probability distributions over the original (pure) strategies.)\nA (mixed-strategy) Nash equilibrium is guaranteed to exist in finite games [18], but one problem is that there may be multiple Nash equilibria.\nThis leads to the equilibrium selection problem of how an agent can know which strategy to play if it does not know which equilibrium is to be played.\nWhen the setting is modeled as an extensive-form game, it is possible to specify that some players receive some information about actions taken by others earlier in the game before deciding on their action.\nNevertheless, in general, the players do not know everything that happened earlier in the game.\nBecause of this, these games are typically still analyzed using an equilibrium concept, where one specifies a mixed strategy for each player, and requires that each player's strategy is a best response to the others' strategies.\n(Typically an additional constraint on the strategies is now imposed to ensure that players do not play in a way that is irrational with respect to the information that they have received so far.\nThis leads to refinements of Nash equilibrium such as subgame perfect and sequential equilibrium.)\nHowever, in many real-world settings, strategies are not selected in such a simultaneous manner.\nOftentimes, one player (the leader) is able to commit to a strategy before another player (the follower).\nThis can be due to a variety of reasons.\nFor example, one of the players may arrive at the site at which the game is to be played before another agent (e.g., in economic settings, one player may enter a market earlier and commit to a way of doing business).\nSuch commitment power has a profound impact on how the game should be played.\nFor example, the leader may be best off playing a strategy that
is dominated in the normal-form representation of the game.\nIn general, if commitment to mixed strategies is possible, then (under minor assumptions) it never hurts, and often helps, to commit to a strategy [26].\nBeing forced to commit to a pure strategy sometimes helps, and sometimes hurts (for example, committing to a pure strategy in rock-paper-scissors before the other player's decision will naturally result in a loss).\nIn this paper, we will assume commitment is always forced; if it is not, the player who has the choice of whether to commit can simply compare the commitment outcome to the non-commitment (simultaneous-move) outcome.\nModels of leadership are especially important in settings with multiple self-interested software agents.\nOnce the code for an agent (or for a team of agents) is finalized and the agent is deployed, the agent is committed to playing the (possibly randomized) strategy that the code prescribes.\nFinally, there is also an implicit leadership situation in the field of mechanism design, in which one player (the designer) gets to choose the rules of the game that the remaining players then play.\nIndeed, the mechanism designer may benefit from committing to a choice that, if the (remaining) agents' actions were fixed, would be suboptimal.\nHowever, the computation of the optimal strategy to commit to in a leadership situation has gone ignored.\nTheoretically, leadership situations can simply be thought of as an extensive-form game in which one player chooses a strategy (for the original game) first.\nThe number of strategies in this extensive-form game, however, can be exceedingly large.\nFor example, if the leader is able to commit to a mixed strategy in the original game, then every one of the (continuum of) mixed strategies constitutes a pure strategy in the extensive-form representation of the leadership situation.\n(We note that a commitment to a distribution is not the same as a distribution over commitments.)\nMoreover, if the 
original game is itself an extensive-form game, the number of strategies in the extensive-form representation of the leadership situation (which is a different extensive-form game) becomes even larger.\nBecause of this, it is usually not computationally feasible to simply transform the original game into the extensive-form representation of the leadership situation; instead, we have to analyze the game in its original representation.\nIn this paper, we study how to compute the optimal strategy to commit to, both in normal-form games (Section 2) and in Bayesian games, which are a special case of extensive-form games (Section 3).\n4.\nCONCLUSIONS AND FUTURE RESEARCH\nIn multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously.\nThis requires some equilibrium notion (Nash equilibrium and its refinements), and often leads to the equilibrium selection problem: it is unclear to each individual player according to which equilibrium she should play.\nHowever, this model is not always realistic.\nIn many settings, one player is able to commit to a strategy before the other player makes a decision.\nFor example, one agent may arrive at the (real or virtual) site of the game before the other, or, in the specific case of software agents, the code for one agent may be completed and committed before that of another agent.\nSuch models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously.\nSpecifically, if commitment to mixed strategies is possible, then (optimal) commitment never hurts the leader, and often helps.\nThe recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position).\nIn
this paper, we studied how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games.\nFor normal-form games, we showed that the optimal pure strategy to commit to can be found efficiently for any number of players.\nAn optimal mixed strategy to commit to in a normal-form game can be found efficiently for two players using linear programming (and no more efficiently than that, in the sense that any linear program with a probability constraint can be encoded as such a problem).\n(This is a generalization of the polynomial-time computability of minimax strategies in normal-form games.)\nThe problem becomes NP-hard for three (or more) players.\nIn Bayesian games, the problem of finding an optimal pure strategy to commit to is NP-hard even in two-player games in which the follower has only a single type, although two-player games in which the leader has only a single type can be solved efficiently.\nThe problem of finding an optimal mixed strategy to commit to in a Bayesian game is NP-hard even in two-player games in which the leader has only a single type, although two-player games in which the follower has only a single type can be solved efficiently using a generalization of the linear programming approach for normal-form games.\nThe following two tables summarize these results.\nResults for commitment to pure strategies:\nNormal-form games (any number of players): efficient.\nBayesian games, leader with a single type: efficient.\nBayesian games, follower with a single type: NP-hard.\nResults for commitment to mixed strategies:\nNormal-form games, two players: efficient (linear programming).\nNormal-form games, three or more players: NP-hard.\nBayesian games, follower with a single type: efficient.\nBayesian games, leader with a single type: NP-hard.\n(With more than 2 players, the \"follower\" is the last player to commit, the \"leader\" is the first.)\nFuture research can take a number of directions.\nWe can also study the computation of optimal strategies to commit to in other concise representations of normal-form games--for example, in graphical games [10] or local-effect\/action graph games [14, 1].\nFor the cases where computing an optimal strategy to commit to is NP-hard, we can also study the computation of approximately optimal strategies to commit to.\nOne may also study models in
which multiple (but not all) players commit at the same time.\nAnother interesting direction to pursue is to see if computing optimal mixed strategies to commit to can help us in, or otherwise shed light on, computing Nash equilibria.\nOften, optimal mixed strategies to commit to are also Nash equilibrium strategies (for example, in two-player zero-sum games this is always true), although this is not always the case (for example, as we already pointed out, sometimes the optimal strategy to commit to is a strictly dominated strategy, which can never be a Nash equilibrium strategy).","lvl-2":"Computing the Optimal Strategy to Commit to \u2217\nABSTRACT\nIn multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously.\nHowever, this model is not always realistic.\nIn many settings, one player is able to commit to a strategy before the other player makes a decision.\nSuch models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously.\nThe recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position).\nIn this paper, we study how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games.\nWe give both positive results (efficient algorithms) and negative results (NP-hardness results).\n1.\nINTRODUCTION\nIn multiagent systems with self-interested agents (including most economic settings), the optimal action for one agent to take depends on the actions that the other agents take.\nTo analyze how an agent should behave in such settings, the tools of game theory need to be 
applied.\nTypically, when a strategic setting is modeled in the framework of game theory, it is assumed that players choose their strategies simultaneously.\nThis is especially true when the setting is modeled as a normal-form game, which only specifies each agent's utility as a function of the vector of strategies that the agents choose, and does not provide any information on the order in which agents make their decisions and what the agents observe about earlier decisions by other agents.\nGiven that the game is modeled in normal form, it is typically analyzed using the concept of Nash equilibrium.\nA Nash equilibrium specifies a strategy for each player, such that no player has an incentive to individually deviate from this profile of strategies.\n(Typically, the strategies are allowed to be mixed, that is, probability distributions over the original (pure) strategies.)\nA (mixed-strategy) Nash equilibrium is guaranteed to exist in finite games [18], but one problem is that there may be multiple Nash equilibria.\nThis leads to the equilibrium selection problem of how an agent can know which strategy to play if it does not know which equilibrium is to be played.\nWhen the setting is modeled as an extensive-form game, it is possible to specify that some players receive some information about actions taken by others earlier in the game before deciding on their action.\nNevertheless, in general, the players do not know everything that happened earlier in the game.\nBecause of this, these games are typically still analyzed using an equilibrium concept, where one specifies a mixed strategy for each player, and requires that each player's strategy is a best response to the others' strategies.\n(Typically an additional constraint on the strategies is now imposed to ensure that players do not play in a way that is irrational with respect to the information that they have received so far.\nThis leads to refinements of Nash equilibrium such as subgame perfect and 
sequential equilibrium.)\nHowever, in many real-world settings, strategies are not selected in such a simultaneous manner.\nOftentimes, one player (the leader) is able to commit to a strategy before another player (the follower).\nThis can be due to a variety of reasons.\nFor example, one of the players may arrive at the site at which the game is to be played before another agent (e.g., in economic settings, one player may enter a market earlier and commit to a way of doing business).\nSuch commitment power has a profound impact on how the game should be played.\nFor example, the leader may be best off playing a strategy that is dominated in the normal-form representation of the game.\nPerhaps the earliest and best-known example of the effect of commitment is that by von Stackelberg [25], who showed that, in Cournot's duopoly model [5], if one firm is able to commit to a production quantity first, that firm will do much better than in the simultaneous-move (Nash) solution.\nIn general, if commitment to mixed strategies is possible, then (under minor assumptions) it never hurts, and often helps, to commit to a strategy [26].\nBeing forced to commit to a pure strategy sometimes helps, and sometimes hurts (for example, committing to a pure strategy in rock-paper-scissors before the other player's decision will naturally result in a loss).\nIn this paper, we will assume commitment is always forced; if it is not, the player who has the choice of whether to commit can simply compare the commitment outcome to the non-commitment (simultaneous-move) outcome.\nModels of leadership are especially important in settings with multiple self-interested software agents.\nOnce the code for an agent (or for a team of agents) is finalized and the agent is deployed, the agent is committed to playing the (possibly randomized) strategy that the code prescribes.\nThus, as long as one can credibly show that one cannot change the code later, the code serves as a commitment device.\nThis
holds true for recreational tournaments among agents (e.g., poker tournaments, RoboSoccer), and for industrial applications such as sensor webs.\nFinally, there is also an implicit leadership situation in the field of mechanism design, in which one player (the designer) gets to choose the rules of the game that the remaining players then play.\nMechanism design is an extremely important topic to the EC community: the papers published on mechanism design in recent EC conferences are too numerous to cite.\nIndeed, the mechanism designer may benefit from committing to a choice that, if the (remaining) agents' actions were fixed, would be suboptimal.\nFor example, in a (first-price) auction, the seller may wish to set a positive (artificial) reserve price for the item, below which the item will not be sold--even if the seller values the item at 0.\nIn hindsight (after the bids have come in), this (na\u00efvely) appears suboptimal: if a bid exceeding the reserve price came in, the reserve price had no effect, and if no such bid came in, the seller would have been better off accepting a lower bid.\nOf course, the reason for setting the reserve price is that it incentivizes the bidders to bid higher, and because of this, setting artificial reserve prices can actually increase expected revenue to the seller.\nA significant amount of research has recently been devoted to the computation of solutions according to various solution concepts for settings in which the agents choose their strategies simultaneously, such as dominance [7, 11, 3] and (especially) Nash equilibrium [8, 21, 16, 15, 2, 22, 23, 4].\nHowever, the computation of the optimal strategy to commit to in a leadership situation has gone ignored.\nTheoretically, leadership situations can simply be thought of as an extensive-form game in which one player chooses a strategy (for the original game) first.\nThe number of strategies in this extensive-form game, however, can be exceedingly large.\nFor example,
if the leader is able to commit to a mixed strategy in the original game, then every one of the (continuum of) mixed strategies constitutes a pure strategy in the extensive-form representation of the leadership situation.\n(We note that a commitment to a distribution is not the same as a distribution over commitments.)\nMoreover, if the original game is itself an extensive-form game, the number of strategies in the extensive-form representation of the leadership situation (which is a different extensive-form game) becomes even larger.\nBecause of this, it is usually not computationally feasible to simply transform the original game into the extensive-form representation of the leadership situation; instead, we have to analyze the game in its original representation.\nIn this paper, we study how to compute the optimal strategy to commit to, both in normal-form games (Section 2) and in Bayesian games, which are a special case of extensive-form games (Section 3).\n2.\nNORMAL-FORM GAMES\nIn this section, we study how to compute the optimal strategy to commit to for games represented in normal form.\n2.1 Definitions\nIn a normal-form game, every player i \u2208 {1,..., n} has a set of pure strategies (or actions) Si, and a utility function ui: S1 \u00d7 S2 \u00d7 ... \u00d7 Sn \u2192 R that maps every outcome (a vector consisting of a pure strategy for every player, also known as a profile of pure strategies) to a real number.\nTo ease notation, in the case of two players, we will refer to player 1's pure strategy set as S, and player 2's pure strategy set as T.\nSuch games can be represented in (bi-)matrix form, in which the rows correspond to player 1's pure strategies, the columns correspond to player 2's pure strategies, and the entries of the matrix give the row and column player's utilities (in that order) for the corresponding outcome of the game.\nIn the case of three players, we will use R, S, and T for player 1, 2, and 3's pure strategies, respectively.\nA
mixed strategy for a player is a probability distribution over that player's pure strategies.\nIn the case of two-player games, we will refer to player 1 as the leader and player 2 as the follower.\nBefore defining optimal leadership strategies, consider the following game, which illustrates the effect of the leader's ability to commit.\n         Left   Right\nTop      2, 1   4, 0\nBottom   1, 0   3, 1\nIn this normal-form representation, the bottom strategy for the row player is strictly dominated by the top strategy.\nNevertheless, if the row player has the ability to commit to a pure strategy before the column player chooses his strategy, the row player should commit to the bottom strategy: doing so will make the column player prefer to play the right strategy, leading to a utility of 3 for the row player.\nBy contrast, if the row player were to commit to the top strategy, the column player would prefer to play the left strategy, leading to a utility of only 2 for the row player.\nIf the row player is able to commit to a mixed strategy, then she can get an even greater (expected) utility: if the row player commits to placing probability p > 1\/2 on the bottom strategy, then the column player will still prefer to play the right strategy, and the row player's expected utility will be 3p + 4 (1 \u2212 p) = 4 \u2212 p \u2265 3.\nIf the row player plays each strategy with probability exactly 1\/2, the column player is indifferent between the strategies.\nIn such cases, we will assume that the column player will choose the strategy that maximizes the row player's utility (in this case, the right strategy).\nHence, the optimal mixed strategy to commit to for the row player is p = 1\/2.\nThere are a few good reasons for this assumption.\nIf we were to assume the opposite, then there would not exist an optimal strategy for the row player in the example game: the row player would play the bottom strategy with probability p = 1\/2 + \u03b5 with \u03b5 > 0, and the smaller \u03b5, the better the utility for the row player.\nBy contrast,
if we assume that the follower always breaks ties in the leader's favor, then an optimal mixed strategy for the leader always exists, and this corresponds to a subgame perfect equilibrium of the extensive-form representation of the leadership situation.\nIn any case, this is a standard assumption for such models (e.g. [20]), although some work has investigated what can happen in the other subgame perfect equilibria [26].\n(For generic two-player games, the leader's subgame-perfect equilibrium payoff is unique.)\nAlso, the same assumption is typically used in mechanism design, in that it is assumed that if an agent is indifferent between revealing his preferences truthfully and revealing them falsely, he will report them truthfully.\nGiven this assumption, we can safely refer to \"optimal leadership strategies\" rather than having to use some equilibrium notion.\nHence, for the purposes of this paper, an optimal strategy to commit to in a 2-player game is a strategy s \u2208 S' that maximizes max_{t \u2208 BR(s)} ul(s, t), where BR(s) = argmax_{t \u2208 T} uf(s, t).\n(ul and uf are the leader and follower's utility functions, respectively.)\nWe can have S' = S for the case of commitment to pure strategies, or S' = \u2206(S), the set of probability distributions over S, for the case of commitment to mixed strategies.\n(We note that replacing T by \u2206(T) makes no difference in this definition.)\nFor games with more than two players, in which the players commit to their strategies in sequence, we define optimal strategies to commit to recursively.\nAfter the leader commits to a strategy, the game to be played by the remaining agents is itself a (smaller) leadership game.\nThus, we define an optimal strategy to commit to as a strategy that maximizes the leader's utility, assuming that the play of the remaining agents is itself optimal under this definition, and maximizes the leader's utility among all optimal ways to play the remaining game.\nAgain, commitment to mixed strategies may
or may not be a possibility for every player (although for the last player it does not matter if we allow for commitment to mixed strategies).\n2.2 Commitment to pure strategies\nWe first study how to compute the optimal pure strategy to commit to.\nThis is relatively simple, because the number of strategies to commit to is not very large.\n(In the following, #outcomes is the number of complete strategy profiles.)\nAll optimal strategy profiles under commitment to pure strategies can be found in O(#players \u00b7 #outcomes) time.\nPROOF.\nEach pure strategy that the first player may commit to will induce a subgame for the remaining players.\nWe can solve each such subgame recursively to find all of its optimal strategy profiles; each of these will give the original leader some utility.\nThose that give the leader maximal utility correspond exactly to the optimal strategy profiles of the original game.\nWe now present the algorithm formally.\nLet Su(G, s1) be the subgame that results after the first (remaining) player in G plays s1 \u2208 SG1.\nA game with 0 players is simply an outcome of the game.\nThe function Append(s, O) appends the strategy s to each of the vectors of strategies in the set O. Let e be the empty vector with no elements.\nIn a slight abuse of notation, we will write uG1(C) when all strategy profiles in the set C give player 1 the same utility in the game G.
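As a concrete illustration, the recursive procedure just described (solve the subgame induced by each possible commitment, then keep the subgame-optimal profiles that maximize the current leader's utility) can be sketched in Python. The dictionary-based game representation and the function name are assumptions made for this sketch, not the paper's notation:

```python
def optimal_profiles(payoffs, sizes, prefix=()):
    """Return all optimal strategy profiles of the subgame reached after
    the players in `prefix` have committed, assuming the remaining players
    play optimally and break ties in favor of earlier players.

    payoffs: dict mapping each complete profile (a tuple with one strategy
             index per player, in commitment order) to a tuple of utilities.
    sizes:   tuple with the number of pure strategies of each player.
    """
    leader = len(prefix)              # index of the first remaining player
    if leader == len(sizes):          # no players left: `prefix` is an outcome
        return [prefix]
    # Recursively solve the subgame induced by each possible commitment.
    candidates = []
    for s in range(sizes[leader]):
        candidates += optimal_profiles(payoffs, sizes, prefix + (s,))
    # Among the subgame-optimal profiles, keep those that maximize the
    # current leader's utility; all ties are kept, so the result is the
    # full, set-valued arg max.
    best = max(payoffs[p][leader] for p in candidates)
    return [p for p in candidates if payoffs[p][leader] == best]


# The 2 x 2 example from Section 2.1: strategies (Top, Bottom) x (Left, Right).
game = {(0, 0): (2, 1), (0, 1): (4, 0),
        (1, 0): (1, 0), (1, 1): (3, 1)}
print(optimal_profiles(game, (2, 2)))  # [(1, 1)]: commit to Bottom; follower plays Right
```

Each outcome is examined once per level of the recursion, so the running time is linear in #players \u00b7 #outcomes, consistent with the bound discussed in the text.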
(Here, player 1 is the first remaining player in the subgame G, not necessarily player 1 in the original game.)\nWe note that arg max is set-valued.\nThen, all optimal strategy profiles can be computed by recursively solving every subgame Su(G, s1) and keeping, at each level, the profiles that maximize the current leader's utility.\nEvery outcome is (potentially) examined by every player, which leads to the O(#players \u00b7 #outcomes) runtime bound.\nAs an example of how the algorithm works, consider the following 3-player game, in which the first player chooses the left or right matrix, the second player chooses a row, and the third player chooses a column.\n[payoff matrices omitted]\nFirst we eliminate the outcomes that do not correspond to best responses for the third player (removing them from the matrix).\nNext, we remove the entries in which the third player does not break ties in favor of the second player, as well as entries that do not correspond to best responses for the second player.\nFinally, we remove the entries in which the second and third players do not break ties in favor of the first player, as well as entries that do not correspond to best responses for the first player.\nHence, in optimal play, the first player chooses the left matrix, the second player chooses the middle row, and the third player chooses the left column.\n(We note that this outcome is Pareto-dominated by (Right, Middle, Left).)\nFor general normal-form games, each player's utility for each of the outcomes has to be explicitly represented in the input, so that the input size is itself \u2126(#players \u00b7 #outcomes).\nTherefore, the algorithm is in fact a linear-time algorithm.\n2.3 Commitment to mixed strategies\nIn the special case of two-player zero-sum games, computing an optimal mixed strategy for the leader to commit to is equivalent to computing a minimax strategy, which minimizes the maximum expected utility that the opponent can obtain.\nMinimax strategies constitute the only natural solution concept for two-player zero-sum games: von
Neumann's Minimax Theorem [24] states that in two-player zero-sum games, it does not matter (in terms of the players' utilities) which player gets to commit to a mixed strategy first, and a profile of mixed strategies is a Nash equilibrium if and only if both strategies are minimax strategies.\nIt is well-known that a minimax strategy can be found in polynomial time, using linear programming [17].\nOur first result in this section generalizes this result, showing that an optimal mixed strategy for the leader to commit to can be efficiently computed in general-sum two-player games, again using linear programming.\nPROOF.\nFor every pure follower strategy t, we compute a mixed strategy for the leader such that 1) playing t is a best response for the follower, and 2) under this constraint, the mixed strategy maximizes the leader's utility.\nSuch a mixed strategy can be computed using the following simple linear program: maximize \u2211s ps ul(s, t) subject to, for all t' \u2208 T, \u2211s ps uf(s, t) \u2265 \u2211s ps uf(s, t'), and \u2211s ps = 1 (with ps \u2265 0 for all s \u2208 S), where ps is the probability that the leader places on pure strategy s.\nWe note that this program may be infeasible for some follower strategies t, for example, if t is a strictly dominated strategy.\nNevertheless, the program must be feasible for at least some follower strategies; among these follower strategies, choose a strategy t* that maximizes the linear program's solution value.\nThen, if the leader chooses as her mixed strategy the optimal settings of the variables ps for the linear program for t*, and the follower plays t*, this constitutes an optimal strategy profile.\nIn the following result, we show that we cannot expect to solve the problem more efficiently than linear programming, because we can reduce any linear program with a probability constraint on its variables to a problem of computing the optimal mixed strategy to commit to in a 2-player normal-form game.\nPROOF.\nLet the leader have a pure strategy i for every variable xi.\nLet the column player have one pure strategy j for every constraint in the linear program (other than \u2211i xi = 1), and a single additional pure strategy 0.\nLet the utility
functions be as follows.\nWriting the objective of the linear program as maximize \u2211i ci xi and constraint j as \u2211i aij xi \u2264 bj (where we may assume, without loss of generality, that all ci > 0), for any i, let ul(i, 0) = ci and uf(i, 0) = 0, and, for any i and any j > 0, let ul(i, j) = 0 and uf(i, j) = aij \u2212 bj.\nFor example, consider the linear program: maximize 2x1 + x2 subject to 2x1 \u2212 x2 \u2264 0, 5x1 \u2212 4x2 \u2264 0, and x1 + x2 = 1.\nThe optimal solution to this program is x1 = 1\/3, x2 = 2\/3.\nOur reduction transforms this program into the following leader-follower game (where the leader is the row player).\n      0      1      2\nx1    2, 0   0, 2   0, 5\nx2    1, 0   0, -1  0, -4\nIndeed, the optimal strategy for the leader is to play the top strategy with probability 1\/3 and the bottom strategy with probability 2\/3.\nWe now show that the reduction works in general.\nClearly, the leader wants to incentivize the follower to play 0, because the utility that the leader gets when the follower plays 0 is always greater than when the follower does not play 0.\nIn order for the follower not to prefer playing j > 0 rather than 0, it must be the case that \u2211i pl(i)(aij \u2212 bj) \u2264 0, that is, \u2211i pl(i) aij \u2264 bj.\nHence, the leader can get a utility of at least min_i ci if and only if there is a feasible solution to the constraints.\nGiven that the pl(i) incentivize the follower to play 0, the leader attempts to maximize \u2211i pl(i) ci.\nThus the leader must solve the original linear program.\nAs an alternative proof of Theorem 3, one may observe that it is known that finding a minimax strategy in a zero-sum game is as hard as the linear programming problem [6], and as we pointed out at the beginning of this section, computing a minimax strategy in a zero-sum game is a special case of the problem of computing an optimal mixed strategy to commit to.\nThis polynomial-time solvability of the problem of computing an optimal mixed strategy to commit to in two-player normal-form games contrasts with the unknown complexity of computing a Nash equilibrium in such games [21], as well as with the NP-hardness of finding a Nash equilibrium with maximum utility for a given player in such games [8, 2].\nUnfortunately, this result does not generalize to more than two players--here, the problem becomes NP-hard.\nTo show this, we reduce from the VERTEX-COVER problem, in which we are given a graph (V, E) and an integer K, and are asked whether there exists a subset of the vertices S \u2286
V, with |S| = K, such that every edge e \u2208 E has at least one of its endpoints in S. BALANCED-VERTEX-COVER is the special case of VERTEX-COVER in which K = |V|\/2.\nVERTEX-COVER is NP-complete [9].\nThe following lemma shows that the hardness remains if we require K = |V|\/2.\n(Similar results have been shown for other NP-complete problems.)\nPROOF.\nMembership in NP follows from the fact that the problem is a special case of VERTEX-COVER, which is in NP.\nTo show NP-hardness, we reduce an arbitrary VERTEX-COVER instance to a BALANCED-VERTEX-COVER instance, as follows.\nIf, for the VERTEX-COVER instance, K > |V|\/2, then we simply add isolated vertices that are disjoint from the rest of the graph, until K = |V|\/2.\nIf K < |V|\/2, we instead add disjoint triangles, each of which adds three vertices to the graph and two to the required cover, until K = |V|\/2.\nMoreover, the strategy must induce the follower to play t0 for at least K types of the form \u03b8v.\nInducing the follower to play t0 when it has type \u03b8v can be done only by playing sv with probability at least 1\/K, which will give the follower a utility of at most ((K \u2212 1)\/K)(K\/(K \u2212 1)) = 1 for playing t1.\nBut then, the set of vertices v such that sv is played with probability at least 1\/K must constitute an independent set of size K (because if there were an edge e between two such vertices, it would induce the follower to play t1 for type \u03b8e by the above).\nBy contrast, if the follower has only a single type, then we can generalize the linear programming approach for normal-form games:\nPROOF.\nWe generalize the approach in Theorem 2 as follows.\nFor every pure follower strategy t, we compute a mixed strategy for the leader for every one of the leader's types such that 1) playing t is a best response for the follower, and 2) under this constraint, the mixed strategy maximizes the leader's ex ante expected utility.\nTo do so, we generalize the linear program as follows: writing P(\u03b8) for the probability of leader type \u03b8 and p\u03b8,s for the probability that the leader of type \u03b8 plays s, we maximize \u2211\u03b8 P(\u03b8) \u2211s p\u03b8,s ul(\u03b8, s, t) subject to, for all t' \u2208 T, \u2211\u03b8 P(\u03b8) \u2211s p\u03b8,s uf(\u03b8, s, t) \u2265 \u2211\u03b8 P(\u03b8) \u2211s p\u03b8,s uf(\u03b8, s, t'), and, for every type \u03b8, \u2211s p\u03b8,s = 1 (with all p\u03b8,s \u2265 0).\nAs in Theorem 2, the solution for the linear program that maximizes the solution value is an optimal strategy to commit to.\nThis shows an interesting contrast
between commitment to pure strategies and commitment to mixed strategies in Bayesian games: for pure strategies, the problem becomes easy if the leader has only a single type (but not if the follower has only a single type), whereas for mixed strategies, the problem becomes easy if the follower has only a single type (but not if the leader has only a single type).\n4.\nCONCLUSIONS AND FUTURE RESEARCH\nIn multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously.\nThis requires some equilibrium notion (Nash equilibrium and its refinements), and often leads to the equilibrium selection problem: it is unclear to each individual player according to which equilibrium she should play.\nHowever, this model is not always realistic.\nIn many settings, one player is able to commit to a strategy before the other player makes a decision.\nFor example, one agent may arrive at the (real or virtual) site of the game before the other, or, in the specific case of software agents, the code for one agent may be completed and committed before that of another agent.\nSuch models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously.\nSpecifically, if commitment to mixed strategies is possible, then (optimal) commitment never hurts the leader, and often helps.\nThe recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position).\nIn this paper, we studied how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games.\nFor normal-form games, we showed that the optimal pure strategy to commit to can be 
found efficiently for any number of players.\nAn optimal mixed strategy to commit to in a normal-form game can be found efficiently for two players using linear programming (and no more efficiently than that, in the sense that any linear program with a probability constraint can be encoded as such a problem).\n(This is a generalization of the polynomial-time computability of minimax strategies in normal-form games.)\nThe problem becomes NP-hard for three (or more) players.\nIn Bayesian games, the problem of finding an optimal pure strategy to commit to is NP-hard even in two-player games in which the follower has only a single type, although two-player games in which the leader has only a single type can be solved efficiently.\nThe problem of finding an optimal mixed strategy to commit to in a Bayesian game is NP-hard even in two-player games in which the leader has only a single type, although two-player games in which the follower has only a single type can be solved efficiently using a generalization of the linear programming approach for normal-form games.\nThe following two tables summarize these results.\nResults for commitment to pure strategies.\nNormal-form games (any number of players) | polynomial time\nBayesian games, leader has a single type | polynomial time\nBayesian games, follower has a single type | NP-hard\nResults for commitment to mixed strategies.\n(With more than 2 players, the \"follower\" is the last player to commit, the \"leader\" is the first.)\nNormal-form games, 2 players | polynomial time (linear programming)\nNormal-form games, 3 or more players | NP-hard\nBayesian games, follower has a single type | polynomial time\nBayesian games, leader has a single type | NP-hard\nFuture research can take a number of directions.\nFirst, we can empirically evaluate the techniques presented here on test suites such as GAMUT [19].\nWe can also study the computation of optimal strategies to commit to in other concise representations of normal-form games--for example, in graphical games [10] or local-effect\/action graph games [14, 1].\nFor the cases where computing an optimal strategy to commit to is NP-hard, we can also study the computation of approximately optimal strategies to commit to.\nWhile the correct definition of an approximately optimal strategy in this setting may appear simple at first--it should be a strategy that, if the following players play optimally, performs almost as
well as the optimal strategy in expectation--this definition becomes problematic when we consider that the other players may also be playing only approximately optimally.\nOne may also study models in which multiple (but not all) players commit at the same time.\nAnother interesting direction to pursue is to see if computing optimal mixed strategies to commit to can help us in, or otherwise shed light on, computing Nash equilibria.\nOften, optimal mixed strategies to commit to are also Nash equilibrium strategies (for example, in two-player zero-sum games this is always true), although this is not always the case (for example, as we already pointed out, sometimes the optimal strategy to commit to is a strictly dominated strategy, which can never be a Nash equilibrium strategy).","keyphrases":["optim strategi","commit","multiag system","leadership","stackelberg model","stackelberg","leadership model","pure strategi","mix strategi","bayesian game","np-hard","simultan manner","normal-form game","nash equilibrium","game theori"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","R","U","M"]} {"id":"C-29","title":"Implementation and Performance Evaluation of CONFLEX-G: Grid-enabled Molecular Conformational Space Search Program with OmniRPC","abstract":"CONFLEX-G is the grid-enabled version of a molecular conformational space search program called CONFLEX. We have implemented CONFLEX-G using a grid RPC system called OmniRPC. In this paper, we report the performance of CONFLEX-G in a grid testbed of several geographically distributed PC clusters. In order to explore many conformations of large bio-molecules, CONFLEX-G generates trial structures of the molecules and allocates jobs to optimize a trial structure with a reliable molecular mechanics method in the grid. OmniRPC provides a restricted persistence model to support parametric search applications.
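The restricted persistence model just mentioned can be illustrated with a small sketch; the class and method names below are hypothetical stand-ins, not part of the OmniRPC API. A worker initialized once keeps its data set across calls with different parameters, while a non-persistent worker pays the initialization (data transfer) cost on every call.

```python
# Hypothetical sketch of a restricted persistence model: the expensive
# initialization runs once, and later calls reuse the stored data set.
class Worker:
    def __init__(self):
        self.data = None
        self.init_count = 0  # counts how often the data set was (re)sent

    def initialize(self, data):
        # Stands in for transferring the large common data set to the worker.
        self.data = data
        self.init_count += 1

    def call(self, param):
        # Cheap per-call computation on the stored data set.
        return sum(self.data) * param

def run(params, persistent):
    w = Worker()
    out = []
    for p in params:
        if w.data is None or not persistent:
            w.initialize([1, 2, 3])  # re-initialized on every call if not persistent
        out.append(w.call(p))
    return out, w.init_count

res_aim, inits_aim = run([1, 2, 3], persistent=True)
res_plain, inits_plain = run([1, 2, 3], persistent=False)
assert res_aim == res_plain == [6, 12, 18]
assert (inits_aim, inits_plain) == (1, 3)
```

Both variants compute the same results; the persistent worker transfers the data set once instead of once per call, which is the saving that the automatic initializable module is designed to provide.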
In this model, when the initialization procedure is defined in the RPC module, the module is automatically initialized at the time of invocation by calling the initialization procedure. This can eliminate unnecessary communication and initialization at each call in CONFLEX-G. CONFLEX-G can achieve performance comparable to CONFLEX MPI and can exploit more computing resources by allowing the use of a cluster of multiple clusters in the grid. The experimental result shows that CONFLEX-G achieved a speedup of 56.5 times in the case of the 1BL1 molecule, where the molecule consists of a large number of atoms, and each trial structure optimization requires significant time. The load imbalance of the optimization time of the trial structure may also cause performance degradation.","lvl-1":"Implementation and Performance Evaluation of CONFLEX-G: Grid-enabled Molecular Conformational Space Search Program with OmniRPC Yoshihiro Nakajima Graduate School of Systems & Information Engineering, University of Tsukuba Tsukuba, 305-8577, Japan yoshihiro@hpcs.is.tsukuba.ac.jp Mitsuhisa Sato Institute of Information Sciences and Electronics, University of Tsukuba Tsukuba, 305-8577, Japan msato@is.tsukuba.ac.jp Hitoshi Goto Knowledge-based Information Engineering, Toyohashi University of Technology Toyohashi, 441-8580, Japan gotoh@cochem2.tutkie.tut.ac.jp Taisuke Boku, Daisuke Takahashi Institute of Information Sciences and Electronics, University of Tsukuba Tsukuba, 305-8577, Japan {taisuke,daisuke}@hpcs.is.tsukuba.ac.jp ABSTRACT CONFLEX-G is the grid-enabled version of a molecular conformational space search program called CONFLEX.\nWe have implemented CONFLEX-G using a grid RPC system called OmniRPC.\nIn this paper, we report the performance of CONFLEX-G in a grid testbed of several geographically distributed PC clusters.\nIn order to explore many conformations of large bio-molecules, CONFLEX-G generates trial structures of the molecules and allocates jobs to optimize a trial
structure with a reliable molecular mechanics method in the grid.\nOmniRPC provides a restricted persistence model to support parametric search applications.\nIn this model, when the initialization procedure is defined in the RPC module, the module is automatically initialized at the time of invocation by calling the initialization procedure.\nThis can eliminate unnecessary communication and initialization at each call in CONFLEX-G.\nCONFLEX-G can achieve performance comparable to CONFLEX MPI and can exploit more computing resources by allowing the use of a cluster of multiple clusters in the grid.\nThe experimental result shows that CONFLEX-G achieved a speedup of 56.5 times in the case of the 1BL1 molecule, where the molecule consists of a large number of atoms, and each trial structure optimization requires significant time.\nThe load imbalance of the optimization time of the trial structure may also cause performance degradation.\nCategories and Subject Descriptors C.2.4 [Computer Systems Organization]: COMPUTER-COMMUNICATION NETWORKS - Distributed Systems; J.2.4 [Computer Applications]: PHYSICAL SCIENCES AND ENGINEERING General Terms Design, Performance 1.\nINTRODUCTION Elucidation of the stable conformations and the folding process of proteins is one of the most fundamental and challenging goals in life science.\nWhile some of the most common secondary structures (e.g., certain types of helix, the beta-strand, and the coil) are well known, precise analysis of the thousands of chemically important conformers and pico-second-order analysis of their conformational interconversions via the transition states on the potential energy surface are required for the microsecond-order investigation of the folding process toward the tertiary structure formations.\nRecently, the concept of the computational grid has begun to attract significant interest in the field of high-performance network computing.\nRapid advances in wide-area networking technology and infrastructure
have made it possible to construct large-scale, high-performance distributed computing environments, or computational grids, that provide dependable, consistent and pervasive access to enormous computational resources.\nCONFLEX is one of the most efficient and reliable conformational space search programs[1].\nWe have applied this program to parallelization using global computing.\nThe performance of the parallelized CONFLEX enables exploration of the lower-energy region of the conformational space of small peptides within an available elapsed time using a local PC cluster.\nSince trial structure optimization in CONFLEX is calculated via molecular mechanics, conformational space search can be performed quickly compared to that using molecular orbital calculation.\nAlthough the parallelized version of CONFLEX was used to calculate in parallel the structure optimization, which takes up over 90% of the processing in the molecular conformation search, sufficient improvement in the speedup could not be achieved by this method alone.\nTherefore, for high polymers from live organisms, such as HIV protease, the use of one PC cluster is insufficient due to the requirement for optimization of a huge number of trial structures.\nThis requires the vast computer resources of a grid computing environment.\nIn this paper, we describe CONFLEX-G, a grid-enabled molecular conformational search program, using OmniRPC and report its performance in a grid of several PC clusters which are geographically distributed.\nThe prototype CONFLEX-G allocates the calculation of trial structure optimization, which is a very time-consuming task, to worker nodes in the grid environment in order to obtain high throughput.\nIn addition, we compare the performance of CONFLEX-G in a local PC cluster to that in a grid testbed.\nOmniRPC[2, 3, 4] is a thread-safe implementation of Ninf RPC[5, 6] which is a Grid RPC facility for grid environment computing.\nSeveral systems adopt the concept of the RPC as the
basic model for grid environment computing, including Ninf-G[7], NetSolve[8] and CORBA[9].\nThe RPC-style system provides an easy-to-use, intuitive programming interface, allowing users of the grid system to easily create grid-enabled applications.\nIn order to support parallel programming, an RPC client can issue asynchronous call requests to different remote computers to exploit network-wide parallelism via OmniRPC.\nIn this paper, we propose the OmniRPC persistence model for a Grid RPC system and demonstrate its effectiveness.\nA typical application for a grid environment, such as a parametric search application, executes the same function with different input parameters on the same data set.\nIn the current GridRPC system[10], the data set by the previous call cannot be used by subsequent calls.\nIn the OmniRPC system, once a remote executable is invoked, the client attempts to use the invoked remote executable and its initialized state for subsequent RPC calls to the same remote functions in order to eliminate the invocation cost of each call.\nThis paper demonstrates that CONFLEX-G is able to exploit the huge computer resources of a grid environment and search large-scale molecular conformers.\nWe demonstrate CONFLEX-G on our grid testbed using an actual protein as a sample molecule.\nThe OmniRPC facility of the automatic initializable module (AIM) allows the system to efficiently calculate numerous conformers.\nFurthermore, by using OmniRPC, the user can grid-parallelize an existing application and move from the cluster to the grid environment without modifying the program code or recompiling the program.\nIn addition, the user can easily build a private grid environment.\nThe rest of this paper is organized as follows.\nAn overview of the CONFLEX system is presented in Section 2, and the implementation and design of CONFLEX-G are described in Section 3.\nWe report experimental results obtained using CONFLEX-G and discuss its performance in Section 4.\nIn Section 6, we present conclusions and discuss subjects for future study.\n2.\nCONFLEX CONFLEX [1] is an efficient conformational space search program, which can predominantly and exhaustively search the conformers in the lower-energy region.\nApplications of CONFLEX include the elucidation of the reactivity and selectivity of drugs and possible drug materials with regard to their conformational flexibility.\nFigure 1: Algorithm of conformational space search in the original CONFLEX (selection of initial structure, local perturbation, geometry optimization, comparison and registration against the conformations database).\n2.1 Algorithm of Conformational Space Search The basic strategy of CONFLEX is an exhaustive search of only the low-energy regions.\nThe original CONFLEX performs the following four major steps: 1.\nSelection of an initial structure among the previously discovered unique conformers sorted in a conformational database.\n(An input structure is used as the first initial structure at the beginning of a search execution only.)\n2.\nGeneration of trial structures by local perturbations to the selected initial structure.\n3.\nGeometry optimization for the newly generated trial structures.\n4.\nComparison of the successfully optimized (trial) structures with the other conformers stored in a conformation database, and preservation of newly discovered unique conformers in the database.\nFigure 1 shows the outline of CONFLEX, the original conformational space search algorithm.\nThese procedures incorporate two unique strategies.\nFigure 2: Strategies used to generate the local perturbations (stepwise rotation, corner flap, edge flip).\nFigure 2 shows the strategies for generating local perturbations in CONFLEX.\nThe first strategy involves both corner flapping and edge flipping for the ring atoms and stepwise rotation for side-chains or backbone chains.\nThese methods provide a highly efficient way to produce several good trial structures.\nThese perturbations can be considered to mimic a barrier-crossing step in the elementary process of the thermal conformational inter-conversion.\nActually, the perturbations of an initial structure correspond to a precise exploration of the space around that structure, because the perturbations are localized and weak.\nThe selection rule for the initial structure, the Lowest-Conformer-First rule, is the second strategy for directing the conformation search expanded to the low-energy regions.\nThe initial structure is selected as the set of lowest energy conformers stored in the conformation database.\nThis rule is effective in moving down the search space toward lower energy regions, like water from a stream running into an empty reservoir, while filling local depressions along the way.\nTherefore, these tactical procedures of the CONFLEX search are referred to as the Reservoir Filling Algorithm.\nIn order to remain in the low-energy region and perform an exhaustive search, the search limit (SEL), which determines the maximum energy of the initial structures, is pre-defined.\nGradually increasing SEL allows only the low-energy conformers to be searched and avoids straying into unnecessarily high-energy regions.\n2.2 Parallelization of CONFLEX for Cluster For application to over 100 atoms, CONFLEX was improved using high-performance parallel computing techniques.\nIn the CONFLEX search algorithm, the geometry optimization procedures always take 95% of the elapsed time of the search execution.\nTherefore, we parallelized this optimization using the Master\/Worker parallelization technique.\nWe modified the search procedures as follows.\nAfter trial structures are generated (step 2), they are temporarily stored in a task pool on the master node.\nThen, each worker node is dynamically supplied with one trial structure from the master node.\nAfter an optimization on a worker node is finished, the worker is immediately supplied with another trial
structure.\nWhen all of the trial structures related to a given initial structure are optimized, only the master procedure is used in the comparison.\nBy parallelizing CONFLEX, the speedup of searching molecular conformers obtained is as reported in [11].\n3.\nCONFLEX-G Originally, CONFLEX was intended for use in exploring the conformers of large bio-molecules, such as HIV protease.\nFor such molecules, the number of trial structures increases and the time required for optimization of the trial structures becomes immense.\nFigure 3: Procedure of CONFLEX-G.\nFigure 4: Overview of the OmniRPC system for the remote cluster having a private IP address.\nThe parallelized version of CONFLEX that we implemented cannot treat such molecules using only a local PC cluster.\nIn order to exploit the vast computing resources of a grid environment, we designed and implemented CONFLEX-G, which is a grid-enabled version of CONFLEX, with the OmniRPC system.\nCONFLEX-G allocates jobs to optimize a trial structure to the computational nodes of each cluster in the grid environment.\nFigure 3 shows the process of CONFLEX-G.\nThe worker programs are initialized by the initialize method, which is provided by the OmniRPC AIM facility at worker invocation.\nAt each RPC call, the initialized state is reused on the remote host.\nIn other words, the client program can eliminate the initialization for each RPC call, and can therefore optimize trial structures efficiently.\n3.1 The OmniRPC system OmniRPC is a Grid RPC system which allows seamless parallel programming from a PC cluster to a grid environment.\nOmniRPC
inherits its API and basic architecture from Ninf.\nA client and the remote computational hosts which execute the remote procedures may be connected via a network.\nThe remote libraries are implemented as an executable program which contains a network stub routine as its main routine.\nWe call this executable program a remote executable program (rex).\nWhen the OmniRPC client program starts, the initialization function of the OmniRPC system invokes the OmniRPC agent program omrpc-agent in the remote hosts listed in the host file.\nTo invoke the agent, the user can use the remote shell command rsh in a local-area network, the GRAM (Globus Resource Allocation Manager) API of the Globus toolkit[12] in a grid environment, or the secure remote shell command ssh.\nThe user can switch the configurations only by changing the host file.\nOmniRpcCall is a simple client programming interface for calling remote functions.\nWhen OmniRpcCall makes a remote procedure call, the call is allocated to an appropriate remote host.\nWhen the client issues the RPC request, it requests that the agent in the selected host submit the job of the remote executable with the local job scheduler specified in the host file.\nIf the job scheduler is not specified, the agent executes the remote executable in the same node by the fork system call.\nThe client sends the data of the input arguments to the invoked remote executable, and receives the results upon return of the remote function.\nOnce a remote executable is invoked, the client attempts to use the invoked remote executable for subsequent RPC calls in order to eliminate the cost of invoking the same remote executable again.\nWhen the agent and the remote executables are invoked, the remote programs obtain the client address and port from the argument list and connect back to the client by direct TCP\/IP or Globus-IO for data transmission.\nBecause the OmniRPC system does not use any fixed service ports, the client program allocates unused
ports dynamically to wait for connection from the remote executables.\nThis avoids possible security problems, and allows the user to install the OmniRPC system without requiring a privileged account.\nHerein, a typical grid resource is regarded as a cluster of geographically distributed PC clusters.\nFor PC clusters on a private network, an OmniRPC agent process on the server host functions as a proxy to relay communications between the client and the remote executables by multiplexing the communications using a single connection.\nThis feature, called multiplex IO (MXIO), allows a single client to use up to 1,000 remote computing hosts.\nWhen the PC cluster is inside a firewall, the port forwarding of SSH enables the node to communicate to the outside with MXIO.\nFigure 4 shows the overview of the OmniRPC system for a remote cluster with a private IP address.\nFor parallel programming, the programmer can use asynchronous remote procedure calls, allowing the client to issue several requests while continuing with other computations.\nThe requests are dispatched to different remote hosts to be executed in parallel, and the client waits for or polls the completed requests.\nIn such a programming model with asynchronous remote procedure calls, the programmer should handle outstanding requests explicitly.\nBecause OmniRPC is a thread-safe system, a number of remote procedure calls may be outstanding at any time for multi-threaded programs written in OpenMP.\n3.2 OmniRPC persistence model: automatic initializable module OmniRPC efficiently supports typical Master\/Worker parallel applications such as parametric execution programs.\nFor parametric search applications, which often require a large amount of identical data for each call, OmniRPC supports a limited persistence model, which is implemented by the automatic initializable module.\nThe user can define an initialization procedure in the remote executable in order to send and store data automatically in advance of actual
remote procedure calls.\nSince the remote executable may accept requests for subsequent calls, the data set which has been set by the initialization procedure can be re-used.\nAs a result, the worker program can execute efficiently and reduce the amount of data transmitted for initialization.\nOnce a remote executable is invoked, the client attempts to use the invoked remote executable for subsequent RPC calls.\nHowever, OmniRPC does not guarantee persistence of the remote executable, so that the data set by the previous call cannot be used by subsequent calls.\nThis is because a remote call by OmniRpcCall may be scheduled to any remote host dynamically, and remote executables may be terminated accidentally due to dynamic re-scheduling or host faults.\nHowever, persistence of the remote executable can be exploited in certain applications.\nAn example is a parametric search application: in such an application, it would be efficient if a large set of data could be pre-loaded by the first call, and subsequent calls could be performed on the same data, but with different parameters.\nThis is the case for CONFLEX.\nOmniRPC provides a restricted persistence model through the automatic initializable module (AIM) in order to support this type of application.\nIf the initialization procedure is defined in the module, the module is automatically initialized at invocation by calling the initialization procedure.\nWhen the remote executable is re-scheduled in different hosts, the initialization is called to initialize the newly allocated remote module.\nThis can eliminate unnecessary communications when RPC calls use the same data.\nTo reveal more about the difference in progress between the cases with OmniRPC AIM and without OmniRPC AIM, we present two figures.\nFigure 5 illustrates the time chart of the progress of a typical OmniRPC application using the OmniRPC AIM facility, and Figure 6 illustrates the time chart of the same application without the OmniRPC AIM 
facility.\nIn both figures, the lines between diamonds represent the processes of initialization, and the lines between points represent the calculation.\nThe bold line indicates the time when the client program sends the data to a worker program.\nIt is necessary for the application without the OmniRPC AIM facility to initialize at each RPC.\nThe application using the OmniRPC AIM facility can re-use the initialized data once the data set is initialized.\nThis can reduce the initialization at each RPC.\nThe workers of the application with the AIM can calculate efficiently compared to the application without the OmniRPC AIM facility.\n3.3 Implementation of CONFLEX-G using OmniRPC Figure 3 shows an overview of the process used in CONFLEX-G.\nUsing RPCs, CONFLEX-G allocates the processes of trial structure optimization, which are performed by the computation nodes of a PC cluster in the MPI version of CONFLEX, to the computational nodes of each cluster in a grid environment.\nThere are two computations which are performed by the worker programs in CONFLEX-G.\nOne is the initialization of a worker program, and the other is the calculation of trial structure optimization.\nFirst, the OmniRPC facility of the AIM is adapted for initialization of a worker program.\nThis facility automatically calls the initialization function, which is contained in the worker program, once the client program invokes the worker program in a remote node.\nA common RPC system, including GridRPC, must initialize a program at every RPC call, since data persistence of worker programs is not supported.\nFigure 5: Time chart of applications using the OmniRPC facility of the automatic initializable module.\nFigure 6: Time chart of applications without the OmniRPC facility of the automatic initializable module.\nTable 1: Machine configurations in the grid testbed.\nSite | Cluster Name | Machine | Network | Authentication | # of Nodes | # of CPUs\nUniv. of Tsukuba | Dennis | Dual Xeon 2.4GHz | 1Gb Ethernet | Globus, SSH | 14 | 28\nUniv. of Tsukuba | Alice | Dual Athlon 1800+ | 100Mb Ethernet | Globus, SSH | 18 | 36\nTUT | Toyo | Dual Athlon 2600+ | 100Mb Ethernet | SSH | 8 | 16\nAIST | Ume | Dual Pentium3 1.4GHz | 1Gb Ethernet | Globus, SSH | 32 | 64\nIn OmniRPC, however, when the Initialize remote function is defined in the worker program and a new worker program, corresponding to another RPC, is assigned to execute, the Initialize function is called automatically.\nTherefore, after the Initialize function call to set up common initialization data, a worker program can re-use this data and increase the efficiency of its processes.\nThus, the higher the set-up cost, the greater the potential benefit.\nWe implemented the worker program of CONFLEX-G to receive data, such as evaluation parameters of energy, from a client program and to be initialized by the Initialize function.\nWe arranged the client program of CONFLEX-G to transfer the parameter file at the time of worker initialization.\nThis enables execution to be performed by modifying only the client settings if the user wants to run CONFLEX-G with a different data set.\nSecond, in order to calculate trial structure optimization in a worker program, the worker program must receive the data, such as the atom arrangement of the trial structure and the internal energy state.\nThe result is returned to the client program after the worker has optimized the trial structure.\nSince the calculation portion of the structure optimization in this worker program can be calculated independently using different parameters, we parallelized this portion using asynchronous RPCs
on the client side.\nTo call the structure optimization function in a worker program from the client program, we use the OmniRpcCallAsync API, which is intended for asynchronous RPC.\nIn addition, the OmniRpcCallWaitAll API, which waits until all outstanding asynchronous RPCs are completed, is used to synchronize the asynchronous RPCs that optimize the trial structures.\nThe client program which assigns trial structure optimization to the calculation node of a PC cluster using RPC is outlined as follows.\nOmniRpcInit()\nOmniRpcModuleInit(``conflex_search'',...);\n...\nwhile( ) {\n  foreach( )\n    OmniRpcCallAsync(``conflex_search_worker'', ...);\n  OmniRpcWaitAll();\n  ...\n}\nNote that the OmniRpcModuleInit API only registers the arguments needed for initialization and does not actually execute the Initialize function.\nAs described above, the actual initialization is performed at the first remote call.\nSince the OmniRPC system has a simple round-robin scheduler, we do not have to explicitly write code for load balancing.\nTherefore, RPCs are allocated automatically to idle workers.\nTable 2: Network performance between the master node of the Dennis cluster and the master node of each PC cluster.\nCluster | Round-Trip Time (ms) | Throughput (Mbps)\nDennis | 0.23 | 879.31\nAlice | 0.18 | 94.12\nToyo | 11.27 | 1.53\nUme | 1.07 | 373.33\n4.\nPRELIMINARY RESULTS 4.1 Grid Testbed The grid testbed was constructed by computing resources at the University of Tsukuba, the Toyohashi University of Technology (TUT) and the National Institute of Advanced Industrial Science and Technology (AIST).\nTable 1 shows the computing resources used for the grid of the present study.\nThe University of Tsukuba and AIST are connected by a 1-Gbps Tsukuba WAN, and the other PC clusters are connected by SINET, which is a wide-area network dedicated to academic research in Japan.\nTable 2 shows the performance of the measured network between the master node of the Dennis cluster and the master node of each PC cluster in
4.2 Performance of CONFLEX-G
In all of the CONFLEX-G experiments, the client program was executed on the master node of the Dennis cluster at the University of Tsukuba. The built-in round-robin scheduler of OmniRPC was used as the job scheduler. SSH was used as the authentication system; OmniRPC's MXIO, which relays the I/O communication between the client program and the worker programs by SSH port forwarding, was not used. Note that each worker program is assigned to and executed on one CPU of a calculation node in a PC cluster; that is, the number of workers is equal to the number of CPUs. These programs were compiled with the Intel Fortran Compiler 7.0 and gcc 2.95. MPICH version 1.2.5 was used when comparing the performance of CONFLEX MPI and CONFLEX-G. In order to demonstrate the usefulness of the OmniRPC AIM facility, we also implemented a version of CONFLEX-G which does not utilize it. The worker program in this version of CONFLEX-G must be initialized at each RPC because the worker does not hold the previous data set.

In order to examine the performance of CONFLEX-G, we selected two peptides and two small proteins as test molecules:

• N-acetyl tetra-alanine methylester (AlaX04)
• N-acetyl hexadeca-alanine methylester (AlaX16)
• TRP-cage miniprotein construct TC5B (1L2Y)
• PTH receptor N-terminus fragment (1BL1)

Table 3 lists the characteristics of these sample molecules. The column "trial structures / loop" in this table shows the number of trial structures generated in each iteration, indicating the degree of parallelism. Table 3 also summarizes the amount of data transmission required for initialization of a worker program and for optimization of each trial structure. Note that the amount of data transmission required to initialize a worker program and to optimize a trial structure in the MPI version of CONFLEX is equal to that of CONFLEX-G. We used an improved version of the MM2 force field to assign a potential energy function to various geometric properties of a group of atoms.

Figure 7: Performances of CONFLEX-G, CONFLEX MPI and the original CONFLEX in the Dennis cluster.
Figure 8: Speedup ratio, which is based on the elapsed time of CONFLEX-G using one worker in the Dennis cluster.
Figure 9: Performance of CONFLEX-G with and without the OmniRPC facility of the automatic initializable module for AlaX16.

Table 3: Characteristics of molecules and data transmission for optimizing trial molecular structures in each molecular code.

    Molecular   # of    # of total trial   Trial structures   Data transfer to      Data transfer /
    code        atoms   structures         / loop             initialize a worker   trial structure
    AlaX04      181     360                45                 2033 KB               17.00 KB
    AlaX16      191     480                160                2063 KB               18.14 KB
    1L2Y        315     331                331                2099 KB               29.58 KB
    1BL1        519     519                519                2150 KB               48.67 KB

Table 4: Elapsed search time for the molecular conformation of AlaX04.

    Cluster                    Total # of   Structures   Total optimization   Optimization time   Elapsed    Speedup
    (# of workers)             workers      / worker     time (s)             / structure (s)     time (s)
    Dennis (sequential)        1            320.0        1786.21              4.96                1786.21    1.00
    Toyo (16)                  16           20.0         1497.08              4.16                196.32     9.10
    Dennis (28)                28           11.4         1905.51              5.29                97.00      18.41
    Alice (36)                 36           8.9          2055.25              5.71                87.09      20.51
    Ume (56)                   56           5.7          2196.77              6.10                120.69     14.80
    Dennis (28) + Toyo (16)    44           7.3          1630.09              4.53                162.35     11.00
    Alice (36) + Toyo (16)     52           6.2          1774.53              4.93                178.24     10.02
    Dennis (28) + Alice (36)   64           5.0          1999.02              5.55                81.52      21.91
    Dennis (28) + Ume (56)     84           3.8          2085.84              5.79                92.22      19.37
    Alice (36) + Ume (56)      92           3.5          2120.87              5.89                101.25     17.64

Table 5: Elapsed search time for the molecular conformation of AlaX16.

    Cluster                              Total # of   Structures   Total optimization   Optimization time   Elapsed    Speedup
    (# of workers)                       workers      / worker     time (s)             / structure (s)     time (s)
    Dennis (1)                           1            480.0        74027.80             154.22              74027.80   1.00
    Toyo (16)                            16           30.0         70414.21             146.70              4699.15    15.75
    Dennis (28)                          28           17.1         74027.80             154.22              3375.60    21.93
    Alice (36)                           36           13.3         90047.27             187.60              3260.41    22.71
    Ume (56)                             56           8.6          123399.38            257.08              2913.63    25.41
    Dennis (28) + Toyo (16)              44           10.9         76747.74             159.89              2762.10    26.80
    Alice (36) + Toyo (16)               52           9.2          82700.44             172.29              2246.73    32.95
    Dennis (28) + Alice (36)             64           7.5          87571.30             182.44              2051.50    36.08
    Toyo (16) + Ume (56)                 72           6.7          109671.32            228.48              2617.85    28.28
    Dennis (28) + Ume (56)               84           5.7          102817.90            214.20              2478.93    29.86
    Dennis (28) + Ume (56) + Toyo (16)   100          4.8          98238.07             204.66              2478.93    29.86

Table 6: Elapsed time of the search for the trial structure of 1L2Y.

    Cluster                  Total # of   Structures   Optimization time   Elapsed    Elapsed    Speedup
    (# of workers)           workers      / worker     / structure (s)     time (s)   time (h)
    Toyo MPI (1)             1            331.0        867                 286,967    79.71      1.00
    Toyo MPI (16)            16           20.7         867                 18,696     5.19       15.34
    Dennis (28)              28           11.8         803                 14,101     3.91       20.35
    Dennis (28) + Ume (56)   84           3.9          1,064               8,316      2.31       34.50

Table 7: Elapsed time of the search for the trial structure of 1BL1.

    Cluster                  Total # of   Structures   Optimization time   Elapsed      Elapsed    Speedup
    (# of workers)           workers      / worker     / structure (s)     time (s)     time (h)
    Toyo MPI (1)             1            519.0        3,646               1,892,210    525.61     1.00
    Toyo MPI (16)            16           32.4         3,646               120,028      33.34      15.76
    Dennis (28)              28           18.5         3,154               61,803       17.16      30.61
    Dennis (28) + Ume (56)   84           6.1          4,497               33,502       9.30       56.48

4.2.1 Performance in a Local Cluster
We first compared the performance of CONFLEX-G, the MPI version of CONFLEX, and the original sequential version of CONFLEX using a local cluster. We investigated performance by varying the number of workers in the Dennis cluster, with AlaX04 as the test molecule. Figure 7 compares the results for CONFLEX MPI and CONFLEX-G in a local PC cluster. The results show that CONFLEX-G can reduce the execution time as the number of workers increases, as in the MPI version of CONFLEX. We found that CONFLEX-G achieved efficiencies comparable to the MPI version.
With 28 workers, CONFLEX-G achieved an 18.00 times speedup compared to the sequential version of CONFLEX. The performance of CONFLEX-G without the OmniRPC AIM facility is worse than that of CONFLEX-G with the facility, and the gap grows as the number of workers increases. This indicates that the OmniRPC AIM enables the workers to compute efficiently, without extraneous work such as the initialization or invocation of worker programs. As the number of workers increases, the performance of CONFLEX-G falls slightly below that of the MPI version. This degradation is caused by a difference in the worker initialization processes of CONFLEX-G and CONFLEX MPI: in CONFLEX MPI, all workers are initialized in advance of the optimization phase, whereas in OmniRPC a worker is invoked on demand when an RPC call is actually issued, so the initialization incurs this overhead. Since the objective of CONFLEX-G is to explore the conformations of large bio-molecules, the number of trial structures and the time needed to optimize each trial structure can both be large. In such cases, the overhead of invoking and initializing the worker programs is small compared to the entire elapsed time.

4.2.2 Performance for Peptides in the Grid Testbed
First, the sample peptides (AlaX04 and AlaX16) were used to examine the performance of CONFLEX-G in a grid environment. Figure 8 shows the speedup achieved by using multiple clusters relative to using one worker in the Dennis cluster. Detailed results are given in Table 4 and Table 5. In both cases, the best performance was obtained using the 64 workers of the combined Dennis and Alice clusters: CONFLEX-G achieved a maximum speedup of 21.91 times for AlaX04 and 36.08 times for AlaX16.

In the case of AlaX04, performance improved only when the network performance between clusters was high. Even when two or more clusters in a wide-area network environment were used, the improvement was slight, because the optimization time of one trial structure generated from AlaX04, a small molecule, is short. In addition, the overhead required for the invocation of a worker program and for network data transmission consumes a large portion of the remaining processing time. In particular, the data transmission required for the initialization of a worker program is about 2 MB. In the case of the Toyo cluster, where the network performance between the client program and the worker programs is poor, this data transmission to a worker program took approximately 6.7 seconds. Since this transmission time is longer than the processing time of one structure optimization in CONFLEX-G, most of the time was spent on data transmission. Therefore, even if CONFLEX-G uses a large number of calculation nodes in a wide-area network environment, the benefit of using grid resources is not obtained for such small molecules.

In the case of AlaX16, CONFLEX-G achieved a speedup by using two or more PC clusters in our grid testbed. This is because the calculation time in the worker program was long, so overheads such as network latency and the invocation of worker programs became relatively small and could be hidden. The best performance, a speedup of 36.08 times, was obtained using 64 workers in the Dennis and Alice clusters. Figure 9 shows the effect of the OmniRPC AIM facility on CONFLEX-G performance. In most cases, CONFLEX-G with the OmniRPC AIM facility achieved better performance than CONFLEX-G without it. In particular, the facility was advantageous when using two clusters connected by a low-performance network. These results indicate that the OmniRPC AIM facility can improve performance in the grid environment.
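A back-of-the-envelope estimate shows why the slow Toyo link dominates for small molecules. Assuming the link is fully utilized and ignoring protocol overhead, converting the roughly 2 MB initialization payload (Table 3) and the 1.53 Mbps measured throughput (Table 2) into a transfer time gives about ten seconds, the same order as the 6.7 seconds reported above and far longer than the few seconds needed to optimize one AlaX04 trial structure:

```python
# Figures taken from Table 2 (Toyo throughput) and Table 3 (AlaX04 payload).
throughput_mbps = 1.53    # measured with netperf
init_payload_kb = 2033    # data sent to initialize one worker

payload_mbit = init_payload_kb * 8 / 1000.0  # KB -> megabits (decimal units)
transfer_s = payload_mbit / throughput_mbps
print(f"estimated initialization transfer: {transfer_s:.1f} s")
```

The estimate is crude (decimal units, no TCP effects), but it makes the point: per-worker initialization over a megabit-class link costs seconds, not milliseconds, so the OmniRPC AIM facility, which avoids repeating this transfer on every call, matters most on slow links.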
4.2.3 Performance for Small Proteins in the Grid Testbed
Finally, we explored the molecular conformations of two small proteins, 1L2Y and 1BL1, using CONFLEX-G. In the grid environment, this experiment was conducted using the Dennis and Ume clusters. Table 6 and Table 7 show the performance of CONFLEX-G in the grid environment and that of CONFLEX MPI in the Toyo cluster. The speedups in these tables were computed relative to the performance of CONFLEX MPI with one worker in the Toyo cluster. CONFLEX-G with 84 workers in the Dennis and Ume clusters obtained maximum speedups of 34.5 times for 1L2Y and 56.5 times for 1BL1. Since each structure optimization requires a great deal of calculation time, the relative overhead of tasks such as the invocation of a worker program and data transmission for initialization became very small, and the performance of CONFLEX-G improved.

We found that load imbalance in the optimization times of the trial structures caused performance degradation. When we obtained the best performance for 1L2Y using the Dennis and Ume clusters, the time for each structure optimization varied from 190 to 27,887 seconds. For 1BL1, the ratio of the maximum optimization time to the minimum was 190. In addition, because all worker programs must wait until the optimization of every trial structure in a set has completed, some workers were idle for approximately 6 hours. This caused the performance degradation of CONFLEX-G.

4.3 Discussion
In this subsection, we discuss how the performance reflected in our experiments could be improved.

Exploiting parallelism - In order to exploit more computational resources, it is necessary to increase the degree of parallelism, which was not very large for the sample molecules in this experiment. When using a set of over 500 computing nodes for 1BL1, the number of trial structures assigned to each worker would be only one or
two. If over 100 trial structures were assigned to each worker program, the calculation could be performed more efficiently, because the OmniRPC AIM facility reduces the overhead of worker invocation and initialization. One idea for increasing parallelism is to overlap the execution of two or more sets of trial structures. In the current algorithm, a set of trial structures is generated from one initial structure and processed until the optimizations of all structures in the set have been calculated. Overlapping sets would also help with load imbalance: even if some optimizations require a long time, the optimization of structures in other sets can be executed to keep otherwise idle workers busy. It is, however, unclear how such a modification of the algorithm might affect the quality of the final results of the conformation search.

Improvement of load imbalance when optimizing each trial structure - Table 8 lists the statistics for the optimization times of the trial structures generated for each sample molecule, measured using 28 workers in the Dennis cluster. When two or more PC clusters are used, the speedup is hampered by the load imbalance of the optimization of the trial structures. The longest time for optimizing a trial structure was nearly 24 times longer than the shortest time. The other workers must wait until the longest job has finished, so the entire execution time cannot be reduced. When CONFLEX-G searched the conformers of 1BL1 using the Dennis cluster, the longest trial structure optimization accounted for approximately 80% of the elapsed time. There are several possible approaches to this load imbalance:

Table 8: Statistics of the elapsed time of trial structure optimization using 28 workers in the Dennis cluster.

    Molecular   Min     Max       Average   Variance
    code        (s)     (s)       (s)
    AlaX04      2.0     11.3      5.3       3
    AlaX16      47.6    920.0     154.2     5404
    1L2Y        114.2   13331.4   803.2     636782
    1BL1        121.0   29641.8   3153.5    2734811

• Refine the algorithm used to generate the trial structures so as to suppress the variation in the time needed to optimize a trial structure in CONFLEX. This would enable CONFLEX-G to achieve high throughput using many computing resources.

• Overlap the executions of two or more sets of trial structures. In the current algorithm, a set of trial structures is generated from one initial structure and calculation continues until all structures in the set are processed. With other sets available, even if one structure optimization takes a long time, other jobs can be executed to compensate for the load imbalance. However, it is not clear how such a modification of the algorithm would affect its efficiency.

• Replace the simple built-in round-robin scheduler of OmniRPC used in this experiment with a scheduler that allocates structures with long optimization times to high-performance nodes and structures with short optimization times to low-performance nodes. In general, however, it might be difficult to predict the time required to optimize a trial structure.

Parallelization of the worker program to speed up the optimization of a trial structure - In the current implementation, we do not parallelize the worker program. Hybrid programming using OmniRPC and OpenMP on an SMP (symmetric multiprocessor) machine may be an alternative way to improve overall performance.
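The scheduling idea above can be illustrated with a toy greedy list-scheduling simulation. The job durations below are invented (only their skew loosely mirrors Table 8), and LPT (longest processing time first) is a standard scheduling heuristic, not part of CONFLEX or OmniRPC:

```python
import heapq

def makespan(job_times, n_workers):
    """Greedy list scheduling: each job goes to the worker that frees up first."""
    workers = [0.0] * n_workers
    heapq.heapify(workers)
    for t in job_times:
        heapq.heappush(workers, heapq.heappop(workers) + t)
    return max(workers)

# Hypothetical optimization times (s): one long job among short ones.
jobs = [3.0, 3.0, 3.0, 10.0]

arrival_order = makespan(jobs, n_workers=2)                    # duration-blind
lpt_order = makespan(sorted(jobs, reverse=True), n_workers=2)  # longest first
print(arrival_order, lpt_order)
```

In this toy case the arrival-order schedule finishes in 13 s while LPT finishes in 10 s: starting the long job first keeps it from landing late and stretching the tail. A duration-aware scheduler, of course, needs time predictions, which, as noted above, are hard to obtain for trial structure optimization.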
5. RELATED WORK
Several projects have addressed the problems of parallelization and communication among poorly connected processors used for simulation. The Folding@home project [13] simulates timescales thousands to millions of times longer than previously achieved, which has allowed folding to be simulated for the first time and folding-related diseases to be examined directly. SETI@home [14] searches for extraterrestrial life by analyzing radio telescope signals, applying Fourier transforms to data collected from telescopes at different sites. SETI@home tackles embarrassingly parallel problems in which the calculation can easily be divided among several computers: chunks of radio telescope data can simply be assigned to different machines. Most of these efforts explicitly develop a docking application as a parallel application using special-purpose parallel programming languages and middleware, such as MPI, which requires development skills and effort. With OmniRPC, much of this skill and effort is not required to develop a grid application.

Nimrod/G [15] is a tool for distributed parametric modeling that implements a parallel task farm for simulations requiring several varying input parameters. Nimrod incorporates a distributed scheduling component that can manage the scheduling of individual experiments to idle computers in a local area network, and it has been applied to applications including bio-informatics, operations research, and molecular modeling for drug design. NetSolve [8] is an RPC facility similar to OmniRPC and Ninf, providing a similar programming interface and an automatic load balancing mechanism. Ninf-G [7] is a grid-enabled implementation of Ninf and provides a GridRPC [10] system that uses LDAP to manage the database of remote executables, but it does not support clusters with private IP addresses or nodes behind a firewall. Matsuoka et al. [16] have also discussed several design issues related to grid RPC systems.

6. CONCLUSIONS AND FUTURE WORK
We have designed and implemented CONFLEX-G using OmniRPC and reported its performance in a grid testbed of several geographically distributed PC clusters. In order to explore the conformations of large bio-molecules, CONFLEX-G was used to generate trial structures of the molecules and to allocate jobs to optimize them by
molecular mechanics in the grid. OmniRPC provides a restricted persistence model in which a module is automatically initialized at invocation by calling its initialization procedure; this eliminates unnecessary communication and repeated initialization at each call in CONFLEX-G. CONFLEX-G achieves performance comparable to CONFLEX MPI and exploits more computing resources by allowing the use of multiple PC clusters in the grid. The experimental results show that CONFLEX-G achieved a speedup of 56.5 times for the 1BL1 molecule, which consists of a large number of atoms, so that each trial structure optimization requires a great deal of time. Load imbalance among the trial structure optimizations may cause performance degradation; we need to refine the algorithm used to generate the trial structures in order to improve the load balance of their optimization in CONFLEX.

Future studies will include the development of deployment tools and an examination of fault tolerance. In the current OmniRPC, the registration of an executable on remote hosts and the deployment of worker programs must be set up manually; deployment tools will be required as the number of remote hosts increases. In grid environments, where conditions change dynamically, it is also necessary to support fault tolerance. This feature is especially important for large-scale applications which require lengthy calculation in a grid environment. We plan to refine the conformational optimization algorithm in CONFLEX to explore the conformational space of larger bio-molecules, such as HIV protease, using up to 1000 workers in a grid environment.

7. ACKNOWLEDGMENTS
This research was supported in part by a Grant-in-Aid from the Ministry of Education, Culture, Sports, Science and Technology in Japan, No.
14019011, 2002, and as part of the Program of Research and Development for Applying Advanced Computational Science and Technology of the Japan Science and Technology Corporation (Research on the grid computing platform for drug design). We would like to thank the Grid Technology Research Center, AIST, Japan, for providing computing resources for our experiment.

8. REFERENCES
[1] H. Goto and E. Osawa. An efficient algorithm for searching low-energy conformers of cyclic and acyclic molecules. J. Chem. Soc., Perkin Trans. 2:187-198, 1993.
[2] M. Sato, T. Boku, and D. Takahashi. OmniRPC: a Grid RPC System for Parallel Programming in Cluster and Grid Environment. In Proc. of CCGrid2003, pages 219-229, 2003.
[3] M. Sato, M. Hirano, Y. Tanaka, and S. Sekiguchi. OmniRPC: a Grid RPC Facility for Cluster and Global Computing in OpenMP. In Proc. of the Workshop on OpenMP Applications and Tools 2001 (LNCS 2104), pages 130-135, 2001.
[4] OmniRPC Project. http://www.omni.hpcc.jp/omnirpc/.
[5] M. Sato, H. Nakada, S. Sekiguchi, S. Matsuoka, U. Nagashima, and H. Takagi. Ninf: A Network Based Information Library for Global World-Wide Computing Infrastructure. In HPCN Europe, pages 491-502, 1997.
[6] Ninf Project. http://ninf.apgrid.org/.
[7] Y. Tanaka, H. Nakada, S. Sekiguchi, T. Suzumura, and S. Matsuoka. Ninf-G: A Reference Implementation of RPC-based Programming Middleware for Grid Computing. Journal of Grid Computing, 1(1):41-51, 2003.
[8] D. Arnold, S. Agrawal, S. Blackford, J. Dongarra, M. Miller, K. Seymour, K. Sagi, Z. Shi, and S. Vadhiyar. Users' Guide to NetSolve V1.4.1. Innovative Computing Dept. Technical Report ICL-UT-02-05, University of Tennessee, Knoxville, TN, June 2002.
[9] Object Management Group. http://www.omg.org/.
[10] K. Seymour, H. Nakada, S. Matsuoka, J. Dongarra, C. Lee, and H. Casanova. GridRPC: A Remote Procedure Call API for Grid Computing.
[11] H. Goto, T. Takahashi, Y. Takata, K.
Ohta, and U Nagashima.\nConflex: Conformational behaviors of polypeptides as predicted by a conformational space search.\nIn Nanotech2003, volume 1, pages 32-35, 2003.\n[12] I. Foster and C. Kesselman.\nGlobus: A metacomputing infrastructure toolkit.\nThe International Journal of Supercomputer Applications and High Performanc e Computing, 11(2):115-128, Summer 1997.\n[13] Stefan M. Larson, Christopher D. Snow, Michael Shirts, and Vijay S. Pande.\nFolding@home and genome@home: Using distributed computing to tackle prev iously intractable problems in computational biology.\nComputational Genomics, 2002.\n[14] seti@home project.\nhttp:\/\/setiathome.ssl.berkeley.edu\/.\n[15] R. Buyya, K. Branson, J. Giddy, and D. Abramson.\nThe virtual laboratory: a toolset to enable distributed molecular modelling for drug design on the world-wide grid.\nConcurrency and Computation: Practice and Experience, 15(1):1-25, January 2003.\n[16] S. Matsuoka, H. Nakada, M. Sato, and S. Sekiguchi.\nDesign issues of Network Enabled Server Systems for the Grid.\nIn Proc.\nof GRID 2000 (LNCS 1971), pages 4-17, 2000.\n163","lvl-3":"Implementation and Performance Evaluation of CONFLEX-G: Grid-enabled Molecular Conformational Space Search Program with OmniRPC\nABSTRACT\nCONFLEX-G is the grid-enabled version of a molecular conformational space search program called CONFLEX.\nWe have implemented CONFLEX-G using a grid RPC system called OmniRPC.\nIn this paper, we report the performance of CONFLEX-G in a grid testbed of several geographically distributed PC clusters.\nIn order to explore many conformation of large bio-molecules, CONFLEX-G generates trial structures of the molecules and allocates jobs to optimize a trial structure with a reliable molecular mechanics method in the grid.\nOmniRPC provides a restricted persistence model to support the parametric search applications.\nIn this model, when the initialization procedure is defined in the RPC module, the module is automatically initialized at 
the time of invocation by calling the initialization procedure.\nThis can eliminate unnecessary communication and initialization at each call in CONFLEX-G.\nCONFLEXG can achieve performance comparable to CONFLEX MPI and can exploit more computing resources by allowing the use of a cluster of multiple clusters in the grid.\nThe experimental result shows that CONFLEX-G achieved a speedup of 56.5 times in the case of the 1BL1 molecule, where the molecule consists of a large number of atoms, and each trial structure optimization requires significant time.\nThe load imbalance of the optimization time of the trial structure may also cause performance degradation.\n1.\nINTRODUCTION\nElucidation of the stable conformations and the folding process of proteins is one of the most fundamental and challenging goals in life science.\nWhile some of the most common secondary structures (e.g., certain types of helix, the beta-strand, and the coil) are well known, precise analysis of the thousands of chemically important conformers and pico-second-order analysis of their conformational interconversions via the transition states on the potential energy surface are required for the microsecond-order investigation of the folding process toward the tertiary structure formations.\nRecently, the concept of the computational grid has begun to attract significant interest in the field of high-performance network computing.\nRapid advances in wide-area networking technology and infrastructure have made it possible to construct large-scale, high-performance distributed computing environments, or computational grids, that provide dependable, consistent and pervasive access to enormous computational resources.\nCONFLEX is one of the most efficient and reliable conformational space search programs [1].\nWe have applied this\nprogram to parallelization using global computing.\nThe performance of the parallelized CONFLEX enables exploration of the lower-energy region of the conformational space of 
small peptides within an available elapsed time using a local PC cluster.\nSince trial structure optimization in CONFLEX is calculated via molecular mechanics, conformational space search can be performed quickly compared to that using molecular orbital calculation.\nAlthough the parallelized version of CONFLEX was used to calculate in parallel the structure optimization, which takes up over 90% of the processing in the molecular conformation search, sufficient improvement in the speedup could not be achieved by this method alone.\nTherefore, for high polymers from live organisms, such as HIV protease, the use one PC cluster is insufficient due to the requirement for optimization of a huge number of trial structures.\nThis requires the vast computer resources of a grid computing environment.\nIn this paper, we describe CONFLEX-G, a grid-enabled molecular conformational search program, using OmniRPC and report its performance in a grid of several PC clusters which are geographically distributed.\nThe prototype CONFLEX-G allocates calculation trial structures optimization, which is a very time-consuming task, to worker nodes in the grid environment in order to obtain high throughput.\nIn addition, we compare the performance of CONFLEX-G in a local PC cluster to that in a grid testbed.\nOmniRPC [2, 3, 4] is a thread-safe implementation of Ninf RPC [5, 6] which is a Grid RPC facility for grid environment computing.\nSeveral systems adopt the concept of the RPC as the basic model for grid environment computing, including Ninf-G [7], NetSolve [8] and CORBA [9].\nThe RPCstyle system provides an easy-to-use, intuitive programming interface, allowing users of the grid system to easily create grid-enabled applications.\nIn order to support parallel programming, an RPC client can issue asynchronous call requests to a different remote computer to exploit networkwide parallelism via OmniRPC.\nIn this paper, we propose the OmniRPC persistence model to a Grid RPC system and 
demonstrate its effectiveness.\nIn order to support a typical application for a grid environment, such as a parametric search application, in which the same function is executed with different input parameters on the same data set.\nIn the current GridRPC system [10], the data set by the previous call cannot be used by subsequent calls.\nIn the OmniRPC system, once a remote executable is invoked, the client attempts to use the invoked remote executable and its initialized state for subsequent RPC calls to the same remote functions in order to eliminate the invocation cost of each call.\nThis paper demonstrates that CONFLEX-G is able to exploit the huge computer resources of a grid environment and search large-scale molecular conformers.\nWe demonstrate CONFLEX-G on our grid testbed using the actual protein as a sample molecule.\nThe OmniRPC facility of the automatic initializable module (AIM) allows the system to efficiently calculate numerous conformers.\nFurthermore, by using OmniRPC, the user can grid-parallelize the existing application, and move from the cluster to the grid environment without modifying program code and compiling the program.\nIn addition, the user can easily build a private grid environment.\nThe rest of this paper is organized as follows.\nAn overview\nFigure 1: Algorithm of conformational space search in the original CONFLEX.\nof the CONFLEX system is presented in Section2, and the implementation and design of CONFLEX-G are described in Section 3.\nWe report experimental results obtained using CONFLEX-G and discuss its performance in Section 4.\nIn Section 6, we present conclusions and discuss subjects for future study.\n2.\nCONFLEX\n2.1 Algorithm of Conformational Space Search\n2.2 Parallelization of CONFLEX for Cluster\n3.\nCONFLEX-G\n3.1 The OmniRPC system\n3.2 OmniRPC persistence model: automatic initializable module\n3.3 Implementation of CONFLEX-G using OmniRPC\n4.\nPRELIMINARY RESULTS\n4.1 Grid Testbed\n4.2 Performance of 
CONFLEX-G\n4.2.1 Performance in a Local Cluster\n4.2.2 Performance for Peptides in The Grid Testbed\n4.2.3 PerformanceforSmallProtein in The Grid Testbed\n4.3 Discussion\n5.\nRELATED WORK\nRecently, an algorithm has been developed that solves the problems of parallelization and communication in poorly connected processors to be used for simulation.\nThe Folding@home project [13] simulates timescales thousands to millions of times longer than previously achieved.\nThis has allowed us to simulate folding for the first time and to directly examine folding related diseases.\nSETI@home[14] is a program to search for alien life by analyzing radio telescope signals using Fourier transform radio telescope data from telescopes from different sites.\nSETI@home tackles immensely parallel problems, in which calculation can easily be divided among several computers.\nRadio telescope data chunks can easily be assigned to different computers.\nMost of these efforts explicitly develop a docking application as a parallel application using a special purpose parallel programming language and middleware, such as MPI, which requires development skills and effort.\nHowever, the skills and effort required to develop a grid application may not be required for OmniRPC.\nNimrod\/G [15] is a tool for distributed parametric modeling and implements a parallel task farm for simulations that require several varying input parameters.\nNimrod incorporates a distributed scheduling component that can manage the scheduling of individual experiments to idle computers in a local area network.\nNimrod has been applied to applications including bio-informatics, operations research, and molecular modeling for drug design.\nNetSolve [8] is an RPC facility similar to OmniRPC and Ninf, providing a similar programming interface and automatic load balancing mechanism.\nNinf-G [7] is a grid-enabled implementation of Ninf and provides a GridRPC [10] system that uses LDAP to manage the database of remote 
executables, but does not support clusters involving private IP addresses or addresses inside a firewall.\nMatsuoka et al. [16] has also discussed several design issues related to grid RPC systems.\n6.\nCONCLUSIONS AND FUTURE WORK\nWe have designed and implemented CONFLEX-G using OmniRPC.\nWe reported its performance in a grid testbed of several geographically distributed PC clusters.\nIn order to explore the conformation of large bio-molecules, CONFLEXG was used to generate trial structures of the molecules, and allocate jobs to optimize them by molecular mechanics in the grid.\nOmniRPC provides a restricted persistence model so that the module is automatically initialized at invocation by calling the initialization procedure.\nThis can eliminate unnecessary communication and the initialization at each call in CONFLEX-G.\nCONFLEX-G can achieves performance comparable to CONFLEX MPI and exploits more computing resources by allowing the use of multiple PC clusters in the grid.\nThe experimental result shows that CONFLEX-G achieved a speedup of 56.5 times for the 1BL1 molecule, where the molecule consists of a large number of atoms and each trial structure optimization requires a great deal of time.\nThe load imbalance of the trial structure optimizations may cause performance degradation.\nWe need to refine the algorithm used to generate the trial structure in order to improve the load balance optimization for trial structures in CONFLEX.\nFuture studies will include development of deployment tools and an examination of fault tolerance.\nIn the current OmniRPC, the registration of an execution program to remote hosts and deployments of worker programs are manually set.\nDeployment tools will be required as the number of remote hosts is increased.\nIn grid environments in which the environment changes dynamically, it is also necessary to support fault tolerance.\nThis feature is especially important in large-scale applications which require lengthy calculation in a 
grid environment.\nWe plan to refine the conformational optimization algorithm in CONFLEX to explore the conformational space of larger bio-molecules such as HIV protease using up to 1,000 workers in a grid environment.
Implementation and Performance Evaluation of CONFLEX-G: Grid-enabled Molecular Conformational Space Search Program with OmniRPC\nABSTRACT\nCONFLEX-G is the grid-enabled version of a molecular conformational space search program called CONFLEX.\nWe have implemented CONFLEX-G using a grid RPC system called OmniRPC.\nIn this paper, we report the performance of CONFLEX-G in a grid testbed of several geographically distributed PC clusters.\nIn order to explore the many conformations of large bio-molecules, CONFLEX-G generates trial structures of the molecules and allocates jobs that optimize a trial structure with a reliable molecular mechanics method in the grid.\nOmniRPC provides a restricted persistence model to support parametric search applications.\nIn this model, when an initialization procedure is defined in the RPC module, the module is automatically initialized at the time of invocation by calling that procedure.\nThis can eliminate unnecessary communication and initialization at each call in CONFLEX-G.\nCONFLEX-G can achieve performance comparable to that of CONFLEX MPI and can exploit more computing resources by allowing the use of multiple PC clusters in the grid.\nThe experimental result shows that CONFLEX-G achieved a 
speedup of 56.5 times in the case of the 1BL1 molecule, where the molecule consists of a large number of atoms, and each trial structure optimization requires significant time.\nLoad imbalance in the optimization times of the trial structures may also cause performance degradation.\n1.\nINTRODUCTION\nElucidation of the stable conformations and the folding process of proteins is one of the most fundamental and challenging goals in life science.\nWhile some of the most common secondary structures (e.g., certain types of helix, the beta-strand, and the coil) are well known, precise analysis of the thousands of chemically important conformers and pico-second-order analysis of their conformational interconversions via the transition states on the potential energy surface are required for the microsecond-order investigation of the folding process toward the tertiary structure formation.\nRecently, the concept of the computational grid has begun to attract significant interest in the field of high-performance network computing.\nRapid advances in wide-area networking technology and infrastructure have made it possible to construct large-scale, high-performance distributed computing environments, or computational grids, that provide dependable, consistent and pervasive access to enormous computational resources.\nCONFLEX is one of the most efficient and reliable conformational space search programs [1].\nWe have parallelized this program using global computing.\nThe performance of the parallelized CONFLEX enables exploration of the lower-energy region of the conformational space of small peptides within an available elapsed time using a local PC cluster.\nSince trial structure optimization in CONFLEX is calculated via molecular mechanics, conformational space search can be performed quickly compared to that using molecular orbital calculation.\nAlthough the parallelized version of CONFLEX was used to calculate in parallel the structure optimization, which 
takes up over 90% of the processing in the molecular conformation search, sufficient improvement in the speedup could not be achieved by this method alone.\nTherefore, for large biological polymers such as HIV protease, the use of one PC cluster is insufficient due to the requirement for optimization of a huge number of trial structures.\nThis requires the vast computer resources of a grid computing environment.\nIn this paper, we describe CONFLEX-G, a grid-enabled molecular conformational search program built using OmniRPC, and report its performance in a grid of several geographically distributed PC clusters.\nThe prototype CONFLEX-G allocates the optimization of trial structures, a very time-consuming task, to worker nodes in the grid environment in order to obtain high throughput.\nIn addition, we compare the performance of CONFLEX-G in a local PC cluster to that in a grid testbed.\nOmniRPC [2, 3, 4] is a thread-safe implementation of Ninf RPC [5, 6], a Grid RPC facility for grid environment computing.\nSeveral systems adopt the concept of the RPC as the basic model for grid environment computing, including Ninf-G [7], NetSolve [8] and CORBA [9].\nThe RPC-style system provides an easy-to-use, intuitive programming interface, allowing users of the grid system to easily create grid-enabled applications.\nIn order to support parallel programming, an RPC client can issue asynchronous call requests to different remote computers to exploit network-wide parallelism via OmniRPC.\nIn this paper, we propose the OmniRPC persistence model for a Grid RPC system and demonstrate its effectiveness.\nThis model supports a typical grid application pattern, the parametric search, in which the same function is executed with different input parameters on the same data set.\nIn the current GridRPC system [10], the data set up by a previous call cannot be used by subsequent calls.\nIn the OmniRPC system, once a remote 
executable is invoked, the client attempts to use the invoked remote executable and its initialized state for subsequent RPC calls to the same remote functions in order to eliminate the invocation cost of each call.\nThis paper demonstrates that CONFLEX-G is able to exploit the huge computer resources of a grid environment and search large-scale molecular conformers.\nWe demonstrate CONFLEX-G on our grid testbed using an actual protein as a sample molecule.\nThe OmniRPC facility of the automatic initializable module (AIM) allows the system to efficiently calculate numerous conformers.\nFurthermore, by using OmniRPC, the user can grid-parallelize an existing application and move from the cluster to the grid environment without modifying or recompiling the program code.\nIn addition, the user can easily build a private grid environment.\nThe rest of this paper is organized as follows.\nAn overview of the CONFLEX system is presented in Section 2, and the implementation and design of CONFLEX-G are described in Section 3.\nWe report experimental results obtained using CONFLEX-G and discuss its performance in Section 4.\nIn Section 6, we present conclusions and discuss subjects for future study.\nFigure 1: Algorithm of conformational space search in the original CONFLEX.\n2.\nCONFLEX\nCONFLEX [1] is an efficient conformational space search program, which can predominantly and exhaustively search the conformers in the lower-energy region.\nApplications of CONFLEX include the elucidation of the reactivity and selectivity of drugs and possible drug materials with regard to their conformational flexibility.\n2.1 Algorithm of Conformational Space Search\nThe basic strategy of CONFLEX is an exhaustive search of only the low-energy regions.\nThe original CONFLEX performs the following four major steps:\n1.\nSelection of an initial structure among the previously discovered unique conformers sorted in a conformational database.\n(An input structure is used as the first 
initial structure at the beginning of a search execution only.)\n2.\nGeneration of trial structures by local perturbations to the selected initial structure.\n3.\nGeometry optimization for the newly generated trial structures.\n4.\nComparison of the successfully optimized (trial) structures with the other conformers stored in a conformation database, and preservation of newly discovered unique conformers in the database.\nFigure 1 shows the outline of CONFLEX, the original conformational space search algorithm.\nThese procedures incorporate two unique strategies.\nFigure 2 shows the strategies for generating local perturbations in CONFLEX.\nThe first strategy involves both corner flapping and edge flipping for the ring atoms and stepwise rotation for side-chains or backbone chains.\nThese methods provide a highly efficient way to produce several good trial structures.\nThese perturbations can be considered to mimic a barrier-crossing step in the elementary process of the thermal conformational inter-conversion.\nBecause the perturbations are localized and weak, they correspond to a precise exploration of the space around the initial structure.\nFigure 2: Strategies used to generate the local perturbations.\nFigure 3: Procedure of CONFLEX-G.\nThe selection rule for the initial structure, the Lowest-Conformer-First rule, is the second strategy for directing the conformation search toward the low-energy regions.\nThe initial structure is selected from the set of lowest-energy conformers stored in the conformation database.\nThis rule is effective in moving down the search space toward lower energy regions, like water from a stream running into an empty reservoir, filling local depressions along the way.\nTherefore, these tactical procedures of the CONFLEX search are referred to as the Reservoir Filling Algorithm.\nIn order to remain in the low-energy region and perform an exhaustive search, the search 
limit (SEL), which determines the maximum energy of the initial structures, is pre-defined.\nGradually increasing the SEL allows only the low-energy conformers to be searched and avoids straying into unnecessarily high-energy regions.\n2.2 Parallelization of CONFLEX for a Cluster\nFor application to molecules of over 100 atoms, CONFLEX was improved using high-performance parallel computing techniques.\nIn the CONFLEX search algorithm, the geometry optimization procedures take approximately 95% of the elapsed time of the search execution.\nTherefore, we parallelized this optimization using the Master\/Worker parallelization technique.\nWe modified the search procedures as follows.\nAfter trial structures are generated (step 2), they are temporarily stored in a task pool on the master node.\nThen, each worker node is dynamically supplied with one trial structure from the master node.\nAfter an optimization on a worker node is finished, the worker is immediately supplied with another trial structure.\nWhen all of the trial structures related to a given initial structure are optimized, the master alone performs the comparison step.\nThe speedup in searching molecular conformers obtained by parallelizing CONFLEX is reported in [11].\n3.\nCONFLEX-G\nOriginally, CONFLEX was intended for use in exploring the conformers of large bio-molecules, such as HIV protease.\nIn such molecules, the number of trial structures increases and the time required for optimization of the trial structures becomes immense.\nThe parallelized version of CONFLEX cannot treat such molecules using only a local PC cluster.\nIn order to exploit the vast computing resources of a grid environment, we designed and implemented CONFLEX-G, a grid-enabled version of CONFLEX, with the OmniRPC system.\nFigure 4: Overview of the OmniRPC system for a remote cluster having a private IP address.\nCONFLEX-G allocates jobs that optimize a trial structure to the computational nodes of each cluster in the 
grid environment.\nFigure 3 shows the process of CONFLEX-G.\nThe worker programs are initialized by the initialize method, which is provided by the OmniRPC AIM facility at worker invocation.\nAt each RPC call, the initialized state is reused on the remote host.\nIn other words, the client program can eliminate the initialization for each RPC call, and can therefore optimize trial structures efficiently.\n3.1 The OmniRPC system\nOmniRPC is a Grid RPC system which allows seamless parallel programming from a PC cluster to a grid environment.\nOmniRPC inherits its API and basic architecture from Ninf.\nA client and the remote computational hosts which execute the remote procedures are connected via a network.\nThe remote libraries are implemented as an executable program which contains a network stub routine as its main routine.\nWe call this executable program a remote executable program (rex).\nWhen the OmniRPC client program starts, the initialization function of the OmniRPC system invokes the OmniRPC agent program omrpc-agent on the remote hosts listed in the host file.\nTo invoke the agent, the user can use the remote shell command rsh in a local-area network, the GRAM (Globus Resource Allocation Manager) API of the Globus toolkit [12] in a grid environment, or the secure remote shell command ssh.\nThe user can switch configurations simply by changing the host file.\nOmniRpcCall is a simple client programming interface for calling remote functions.\nWhen OmniRpcCall makes a remote procedure call, the call is allocated to an appropriate remote host.\nWhen the client issues the RPC request, it requests that the agent in the selected host submit the job of the remote executable with the local job scheduler specified in the host file.\nIf the job scheduler is not specified, the agent executes the remote executable on the same node by the fork system call.\nThe client sends the data of the input arguments to the invoked remote executable, and receives the results 
upon return of the remote function.\nOnce a remote executable is invoked, the client attempts to use the invoked remote executable for subsequent RPC calls in order to eliminate the cost of invoking the same remote executable again.\nWhen the agent and the remote executables are invoked, the remote programs obtain the client address and port from the argument list and connect back to the client by direct TCP\/IP or Globus-IO for data transmission.\nBecause the OmniRPC system does not use any fixed service ports, the client program allocates unused ports dynamically to wait for connections from the remote executables.\nThis avoids possible security problems, and allows the user to install the OmniRPC system without requiring a privileged account.\nHerein, a typical grid resource is regarded as a cluster of geographically distributed PC clusters.\nFor PC clusters on a private network, an OmniRPC agent process on the server host functions as a proxy to relay communications between the client and the remote executables by multiplexing the communications using a single connection.\nThis feature, called multiplex IO (MXIO), allows a single client to use up to 1,000 remote computing hosts.\nWhen the PC cluster is inside a firewall, the port forwarding of SSH enables the node to communicate with the outside through MXIO.\nFigure 4 shows the overview of the OmniRPC system for a remote cluster with a private IP address.\nFor parallel programming, the programmer can use asynchronous remote procedure calls, allowing the client to issue several requests while continuing with other computations.\nThe requests are dispatched to different remote hosts to be executed in parallel, and the client waits for or polls the completed requests.\nIn such a programming model with asynchronous remote procedure calls, the programmer must handle outstanding requests explicitly.\nBecause OmniRPC is a thread-safe system, a number of remote procedure calls may be outstanding at any time for multi-threaded 
programs written in OpenMP.\n3.2 OmniRPC persistence model: automatic initializable module\nOmniRPC efficiently supports typical Master\/Worker parallel applications such as parametric execution programs.\nFor parametric search applications, which often require a large amount of identical data for each call, OmniRPC supports a limited persistence model, which is implemented by the automatic initializable module.\nThe user can define an initialization procedure in the remote executable in order to send and store data automatically in advance of actual remote procedure calls.\nSince the remote executable may accept requests for subsequent calls, the data set which has been set by the initialization procedure can be re-used.\nAs a result, the worker program can execute efficiently and reduce the amount of data transmitted for initialization.\nOnce a remote executable is invoked, the client attempts to use the invoked remote executable for subsequent RPC calls.\nHowever, OmniRPC does not guarantee persistence of the remote executable, so the data set up by a previous call cannot, in general, be used by subsequent calls.\nThis is because a remote call by OmniRpcCall may be scheduled to any remote host dynamically, and remote executables may be terminated accidentally due to dynamic re-scheduling or host faults.\nHowever, persistence of the remote executable can be exploited in certain applications.\nAn example is a parametric search application: in such an application, it would be efficient if a large set of data could be pre-loaded by the first call, and subsequent calls could be performed on the same data, but with different parameters.\nThis is the case for CONFLEX.\nOmniRPC provides a restricted persistence model through the automatic initializable module (AIM) in order to support this type of application.\nIf the initialization procedure is defined in the module, the module is automatically initialized at invocation by calling the initialization procedure.\nWhen the remote 
executable is re-scheduled to a different host, the initialization procedure is called to initialize the newly allocated remote module.\nThis can eliminate unnecessary communications when RPC calls use the same data.\nTo illustrate the difference in progress between the cases with and without the OmniRPC AIM, we present two figures.\nFigure 5 illustrates the time chart of the progress of a typical OmniRPC application using the OmniRPC AIM facility, and Figure 6 illustrates the time chart of the same application without the OmniRPC AIM facility.\nIn both figures, the lines between diamonds represent the processes of initialization, and the lines between points represent the calculation.\nThe bold line indicates the time when the client program sends the data to a worker program.\nThe application without the OmniRPC AIM facility must be initialized at each RPC.\nThe application using the OmniRPC AIM facility can re-use the initialized data once the data set is initialized, which reduces the initialization cost at each RPC.\nThe workers of the application with the AIM can therefore calculate more efficiently than those of the application without it.\n3.3 Implementation of CONFLEX-G using OmniRPC\nFigure 3 shows an overview of the process used in CONFLEX-G.\nUsing RPCs, CONFLEX-G allocates the processes of trial structure optimization, which are performed by the computation nodes of a PC cluster in the MPI version of CONFLEX, to the computational nodes of each cluster in a grid environment.\nThere are two computations which are performed by the worker programs in CONFLEX-G.\nOne is the initialization of a worker program, and the other is the calculation of trial structure optimization.\nFirst, the OmniRPC facility of the AIM is adapted for initialization of a worker program.\nThis facility automatically calls the initialization function, which is contained in the worker program, once the client program invokes the worker program on a remote node.\nIt 
is necessary for a common RPC system, including GridRPC, to initialize a program at every RPC call, since data persistence of worker programs is not supported.\nFigure 5: Time chart of applications using the OmniRPC facility of the automatic initializable module.\nFigure 6: Time chart of applications without the OmniRPC facility of the automatic initializable module.\nTable 1: Machine configurations in the grid testbed.\nIn OmniRPC, however, when the \"Initialize\" remote function is defined in the worker program, it is called automatically whenever a new worker program corresponding to another RPC is assigned for execution.\nTherefore, after the Initialize function call sets up common initialization data, a worker program can re-use this data and increase the efficiency of its processing.\nThus, the higher the set-up cost, the greater the potential benefit.\nWe implemented the worker program of CONFLEX-G to receive data, such as the evaluation parameters of energy, from the client program and to be initialized by the \"Initialize\" function.\nWe arranged the client program of CONFLEX-G to transfer the parameter file at the time of worker initialization.\nThis enables the user to run CONFLEX-G with a different data set by modifying only the client settings.\nSecond, in order to calculate trial structure optimization in a worker program, the worker program must receive data such as the atom arrangement of the trial structure and the internal energy state.\nThe result is returned to the client program after the worker has optimized the trial structure.\nSince the calculation portion of the structure optimization in this worker program can be calculated independently using different parameters, we parallelized this portion using asynchronous RPCs on the client side.\nTo call the structure optimization function in a worker program from the client program, we use the OmniRpcCallAsync API, which is intended for 
asynchronous RPC.\nIn addition, the OmniRpcCallWaitAll API, which waits until all outstanding asynchronous RPCs have completed, is used to synchronize the trial structure optimizations.\nThe client program which assigns trial structure optimization to the calculation nodes of a PC cluster using RPCs is outlined as follows.\nNote that the OmniRpcModuleInit API only specifies the arguments needed for initialization and does not actually execute the Initialize function.\nAs described above, the actual initialization is performed at the first remote call.\nSince the OmniRPC system has a simple round-robin scheduler, we do not have to explicitly write code for load balancing.\nRPCs are therefore allocated automatically to idle workers.\nTable 2: Network performance between the master node of the Dennis cluster and the master node of each PC cluster in the grid testbed.\n4.\nPRELIMINARY RESULTS\n4.1 Grid Testbed\nThe grid testbed was constructed from computing resources at the University of Tsukuba, the Toyohashi University of Technology (TUT) and the National Institute of Advanced Industrial Science and Technology (AIST).\nTable 1 shows the computing resources used for the grid of the present study.\nThe University of Tsukuba and AIST are connected by a 1-Gbps Tsukuba WAN, and the other PC clusters are connected by SINET, which is a wide-area network dedicated to academic research in Japan.\nTable 2 shows the measured network performance between the master node of the Dennis cluster and the master node of each PC cluster in the grid testbed.\nThe communication throughput was measured using netperf, and the round-trip time was measured by ping.\n4.2 Performance of CONFLEX-G\nIn all of the CONFLEX-G experiments, the client program was executed on the master node of the Dennis cluster at the University of Tsukuba.\nThe built-in Round-Robin scheduler of OmniRPC was used as the job scheduler.\nSSH was used as the authentication system; OmniRPC's MXIO, which relays the 
I\/O communication between the client program and the worker programs by port forwarding of SSH, was not used.\nNote that one worker program is assigned to and executed on one CPU of a calculation node in a PC cluster.\nThat is, the number of workers is equal to the number of CPUs.\nThese programs were compiled by the Intel Fortran Compiler 7.0 and gcc 2.95.\nMPICH, Version 1.2.5, was used to compare the performance between CONFLEX MPI and CONFLEX-G.\nIn order to demonstrate the usability of the OmniRPC facility of the AIM, we implemented another version of CONFLEX-G which did not utilize this facility.\nThe worker program in this version of CONFLEX-G must be initialized at each RPC because the worker does not hold the previous data set.\nIn order to examine the performance of CONFLEX-G, we selected two peptides and two small proteins as test molecules:\n• N-acetyl tetra-alanine methylester (AlaX04)\n• N-acetyl hexadeca-alanine methylester (AlaX16)\n• TRP-cage miniprotein construct TC5B (1L2Y)\n• PTH receptor N-terminus fragment (1BL1)\nTable 3 lists the characteristics of these sample molecules.\nFigure 7: Performances of CONFLEX-G, CONFLEX MPI and the original CONFLEX in the Dennis cluster.\nFigure 8: Speedup ratio, which is based on the elapsed time of CONFLEX-G using one worker in the Dennis cluster.\nFigure 9: Performance of CONFLEX-G with and without the OmniRPC facility of the automatic initializable module for AlaX16.\nTable 3: Characteristics of molecules and data transmission for optimizing trial molecular structures in each molecular code.\nTable 4: Elapsed search time for the molecular conformation of AlaX04.\nTable 5: Elapsed search time for the molecular conformation of AlaX16.\nTable 6: Elapsed time of the search for the trial structure of 1L2Y.\nTable 7: Elapsed time of the search for the trial structure of 1BL1.\nThe column trial structure \/ loops in this table shows the number of trial structures generated in each 
iteration, indicating the degree of parallelism.\nTable 3 also summarizes the amount of data transmission required for initialization of a worker program and for optimization of each trial structure.\nNote that the amount of data transmission required to initialize a worker program and optimize a trial structure in the MPI version of CONFLEX is equal to that of CONFLEX-G.\nWe used an improved version of the MM2 force field to assign a potential energy function to various geometric properties of a group of atoms.\n4.2.1 Performance in a Local Cluster\nWe first compared the performance of CONFLEX-G, the MPI version of CONFLEX, and the original sequential version of CONFLEX using a local cluster.\nWe investigated performance by varying the number of workers in the Dennis cluster.\nWe chose AlaX04 as the test molecule for this experiment.\nFigure 7 compares the results for CONFLEX MPI and CONFLEX-G in a local PC cluster.\nThe result of this experiment shows that CONFLEX-G can reduce the execution time as the number of workers increases, as in the MPI version of CONFLEX.\nWe found that CONFLEX-G achieved efficiencies comparable to the MPI version.\nWith 28 workers, CONFLEX-G achieved an 18.00 times speedup compared to the sequential version of CONFLEX.\nThe performance of CONFLEX-G without the OmniRPC AIM facility is worse than that of CONFLEX-G using the facility, and the gap widens as the number of workers increases.\nThis indicates that the OmniRPC AIM enables the worker to calculate efficiently, free of extraneous work such as initialization or invocation of worker programs.\nAs the number of workers is increased, the performance of CONFLEX-G is slightly lower than that of the MPI version.\nThis performance degradation is caused by differences in the worker initialization processes of CONFLEX-G and CONFLEX MPI.\nIn the case of CONFLEX MPI, all workers are initialized in advance of the optimization phase.\nIn the case of OmniRPC, the worker is 
invoked on demand when the RPC call is actually issued.\nTherefore, the first call incurs this initialization overhead.\nSince the objective of CONFLEX-G is to explore the conformations of large bio-molecules, the number of trial structures and the time to optimize each trial structure may be large.\nIn such cases, the overhead to invoke and initialize the worker program is small compared to the entire elapsed time.\n4.2.2 Performance for Peptides in the Grid Testbed\nFirst, the sample molecules (AlaX04 and AlaX16) were used to examine the CONFLEX-G performance in a grid environment.\nFigure 8 shows the speedup achieved by using multiple clusters compared to using one worker in the Dennis cluster.\nDetailed results are shown in Table 4 and Table 5.\nIn both cases, the best performance was obtained using 64 workers in the combination of the Dennis and Alice clusters.\nCONFLEX-G achieved a maximum speedup of 36.08 times for AlaX04 and a maximum speedup of 21.91 times for AlaX16.\nIn the case of AlaX04, the performance improved only when the network performance between clusters was high.\nEven when two or more clusters were used in a wide-area network environment, the performance improvement was slight because the optimization time of one trial structure generated from AlaX04, a small molecule, is short.\nIn addition, the overhead required for invocation of a worker program and network data transmission consumes a large portion of the remaining processing time.\nIn particular, the data transmission required for the initialization of a worker program is 2 MB.\nIn the case of the Toyo cluster, where the network performance between the client program and the worker programs is poor, transmitting this data to a worker program took approximately 6.7 seconds.\nSince this transmission time was longer than the processing time of one structure optimization, most of the time was spent on data transmission.\nTherefore, even if CONFLEX-G uses a large 
number of calculation nodes in a wide-area network environment, the benefit of using grid resources is not obtained.\nIn the case of AlaX16, CONFLEX-G achieved a speedup by using two or more PC clusters in our grid testbed.\nThis was because the calculation time on the worker program was long, so the overhead, such as network latency and the invocation of worker programs, became relatively small and could be hidden.\nThe best performance was obtained using 64 workers in the Dennis and Alice clusters.\nIn the case of AlaX16, the achieved performance was a speedup of 21.91 times.\nFigure 9 reveals the effect of using the OmniRPC AIM facility on CONFLEX-G performance.\nIn most cases, CONFLEX-G with the OmniRPC AIM facility achieved better performance than CONFLEX-G without the facility.\nIn particular, the OmniRPC AIM facility was advantageous when using two clusters connected by a low-performance network.\nThe results indicate that the OmniRPC AIM facility can improve performance in the grid environment.\n4.2.3 Performance for Small Proteins in the Grid Testbed\nFinally, we explored the molecular conformation using CONFLEX-G for large molecules.\nIn a grid environment, this experiment was conducted using the Dennis and Ume clusters.\nIn this experiment, we used two proteins, 1L2Y and 1BL1.\nTable 6 and Table 7 show the performance of CONFLEX-G in the grid environment and that of CONFLEX MPI in the Toyo cluster, respectively.\nThe speedups in these tables were computed based on the performance of one worker and of 16 workers of the Toyo cluster using CONFLEX MPI, respectively.\nCONFLEX-G with 84 workers in the Dennis and Ume clusters obtained maximum speedups of 56.5 times for 1BL1 and 34.5 times for 1L2Y.\nSince the calculation time for structure optimization required a great deal of time, the ratio of overhead, including tasks such as the invocation of a worker program and data transmission for initialization, became very small, so that the performance of CONFLEX-G was 
improved.\nWe found that load imbalance in the optimization time of each trial structure caused performance degradation.\nWhen we obtained the best performance for 1L2Y using the Dennis and Ume clusters, the time for each structure optimization varied from 190 to 27,887 seconds, and the ratio between the longest and shortest times was 13.4.\nFor 1BL1, the ratio between the longest and shortest times was 190.\nIn addition, because the worker programs must wait until the optimization of all trial structures is complete, all worker programs were found to wait in an idle state for approximately 6 hours.\nThis caused the performance degradation of CONFLEX-G.\n4.3 Discussion\nIn this subsection, we discuss the improvement of the performance reflected in our experiments.\nExploiting parallelism--In order to exploit more computational resources, it is necessary to increase the degree of parallelism.\nIn this experiment, the degree of parallelism was not so large for the sample molecules.\nWhen using a set of over 500 computing nodes for 1BL1, the number of trial structures assigned to each worker would be only one or two.\nIf over 100 trial structures were assigned to each worker program, the calculation could be performed more efficiently because the facility of the OmniRPC AIM reduces the overhead of worker invocation and initialization.\nOne idea for increasing parallelism is to overlap the execution of two or more sets of trial structures.\nIn the current algorithm, a set of trial structures is generated from one initial structure and processed until the optimizations of all structures in this set are complete.\nOverlapping would also help to improve load imbalance.\nBy overlapping other sets of trial structures, even if some optimizations require a long time, the optimization of the structures in other sets can be executed to occupy the idle workers.\nIt is however unclear how such modification 
of the algorithm might affect the quality of the final results in terms of a conformation search.\nImprovement in load imbalance when optimizing each trial structure--Table 8 lists the statistics for optimization times of trial structures generated for each sample molecule, measured using 28 workers in the Dennis cluster.\nWhen two or more PC clusters are used, the speedup in performance is hampered by the load imbalance of the optimization of the trial structures.\nThe longest time for optimizing a trial structure was nearly 24 times longer than the shortest time.\nFurthermore, other workers must wait until the longest job has finished, so that the entire execution time cannot be reduced.\nWhen CONFLEX-G searched the conformers of 1BL1 on the Dennis cluster, the longest calculation time of the trial structure optimization made up approximately 80% of the elapsed time.\nThere are three possible solutions for the load imbalance.\n\u2022 It is necessary to refine the algorithm used to generate the trial structures, which would suppress the time variation for optimizing a trial structure in CONFLEX.\nThis would enable CONFLEX-G to achieve high throughput by using many computer resources.\n\u2022 Another solution is to overlap the executions of two or more sets of trial structures.\nIn the current algorithm, a set of trial structures is generated from one initial structure and calculation continues until all structures in this set are calculated.\nBy having other sets of trial structures available, even if a structure search takes a long time, a job can be executed to compensate for the load imbalance caused by other jobs.\nHowever, how such modification of the algorithm might affect the efficiency is not clear.\n\u2022 In this experiment, we used the simple built-in round-robin scheduler of OmniRPC; it would instead be necessary to apply a scheduler that allocates structures with long optimization times to high-performance nodes and structures with short optimization times to low-performance nodes.\nIn general, however, it might be difficult to predict the time required for trial structure optimization.\nTable 8: Statistics of elapsed time of trial structure optimization using 28 workers in the Dennis cluster.\nParallelization of the worker program for speedup in optimizing a trial structure--In the current implementation, we do not parallelize the worker program.\nIn order to speed up the optimization of trial structures, hybrid programming using OmniRPC and OpenMP on an SMP (Symmetric Multiple Processor) machine may be one alternative method by which to improve overall performance.\n5.\nRELATED WORK\nRecently, algorithms have been developed that solve the problems of parallelization and communication among poorly connected processors used for simulation.\nThe Folding@home project [13] simulates timescales thousands to millions of times longer than previously achieved.\nThis has allowed the project to simulate folding for the first time and to directly examine folding-related diseases.\nSETI@home [14] is a program that searches for extraterrestrial life by analyzing radio telescope signals, using Fourier transforms of radio telescope data gathered from different sites.\nSETI@home tackles embarrassingly parallel problems, in which the calculation can easily be divided among several computers.\nRadio telescope data chunks can easily be assigned to different computers.\nMost of these efforts explicitly develop the application as a parallel application using a special-purpose parallel programming language or middleware, such as MPI, which requires development skills and effort.\nHowever, such skills and effort may not be required when developing a grid application with OmniRPC.\nNimrod\/G [15] is a tool for distributed parametric modeling and implements a parallel task farm for simulations that require several varying input parameters.\nNimrod incorporates a distributed scheduling component that can manage the scheduling of individual experiments to idle computers in a 
local area network.\nNimrod has been applied to applications including bio-informatics, operations research, and molecular modeling for drug design.\nNetSolve [8] is an RPC facility similar to OmniRPC and Ninf, providing a similar programming interface and automatic load balancing mechanism.\nNinf-G [7] is a grid-enabled implementation of Ninf and provides a GridRPC [10] system that uses LDAP to manage the database of remote executables, but does not support clusters involving private IP addresses or addresses inside a firewall.\nMatsuoka et al. [16] have also discussed several design issues related to grid RPC systems.\n6.\nCONCLUSIONS AND FUTURE WORK\nWe have designed and implemented CONFLEX-G using OmniRPC.\nWe reported its performance in a grid testbed of several geographically distributed PC clusters.\nIn order to explore the conformations of large bio-molecules, CONFLEX-G was used to generate trial structures of the molecules and to allocate jobs to optimize them by molecular mechanics in the grid.\nOmniRPC provides a restricted persistence model so that the module is automatically initialized at invocation by calling the initialization procedure.\nThis can eliminate unnecessary communication and initialization at each call in CONFLEX-G.\nCONFLEX-G can achieve performance comparable to CONFLEX MPI and can exploit more computing resources by allowing the use of multiple PC clusters in the grid.\nThe experimental results show that CONFLEX-G achieved a speedup of 56.5 times for the 1BL1 molecule, which consists of a large number of atoms and whose trial structure optimizations each require a great deal of time.\nThe load imbalance of the trial structure optimizations may cause performance degradation.\nWe need to refine the algorithm used to generate the trial structures in order to improve the load balance of the optimizations in CONFLEX.\nFuture studies will include the development of deployment tools and an examination of fault tolerance.\nIn the 
current OmniRPC, the registration of an execution program to remote hosts and the deployment of worker programs are set manually.\nDeployment tools will be required as the number of remote hosts increases.\nIn grid environments in which the environment changes dynamically, it is also necessary to support fault tolerance.\nThis feature is especially important in large-scale applications which require lengthy calculation in a grid environment.\nWe plan to refine the conformational optimization algorithm in CONFLEX to explore the conformation space of larger bio-molecules such as HIV protease using up to 1000 workers in a grid environment.","keyphrases":["conflex-g","conform space search","omnirpc","grid rpc system","pc cluster","bio-molecul","molecular mechan","initi procedur","rpc modul","mpu","grid comput","automat initializ modul","comput chemistri"],"prmu":["P","P","P","P","P","P","P","P","P","U","R","M","M"]} {"id":"H-20","title":"New Event Detection Based on Indexing-tree and Named Entity","abstract":"New Event Detection (NED) aims at detecting, from one or multiple streams of news stories, which stories report a new event (i.e. one not reported previously). With the overwhelming volume of news available today, there is an increasing need for a NED system which is able to detect new events more efficiently and accurately. In this paper we propose a new NED model to speed up the NED task by using a news indexing-tree dynamically. Moreover, based on the observation that terms of different types have different effects for the NED task, two term reweighting approaches are proposed to improve NED accuracy. In the first approach, we propose to adjust term weights dynamically based on previous story clusters and in the second approach, we propose to employ statistics on training data to learn the named entity reweighting model for each class of stories. 
Experimental results on two Linguistic Data Consortium (LDC) datasets TDT2 and TDT3 show that the proposed model can improve both efficiency and accuracy of the NED task significantly, compared to the baseline system and other existing systems.","lvl-1":"New Event Detection Based on Indexing-tree and Named Entity Zhang Kuo Tsinghua University Beijing, 100084, China 86-10-62771736 zkuo99@mails.tsinghua.edu.cn Li Juan Zi Tsinghua University Beijing, 100084, China 86-10-62781461 ljz@keg.cs.tsinghua.edu.cn Wu Gang Tsinghua University Beijing, 100084, China 86-10-62789831 wug03@keg.cs.tsinghua.edu.cn ABSTRACT New Event Detection (NED) aims at detecting, from one or multiple streams of news stories, which stories report a new event (i.e. one not reported previously).\nWith the overwhelming volume of news available today, there is an increasing need for a NED system which is able to detect new events more efficiently and accurately.\nIn this paper we propose a new NED model to speed up the NED task by using a news indexing-tree dynamically.\nMoreover, based on the observation that terms of different types have different effects for the NED task, two term reweighting approaches are proposed to improve NED accuracy.\nIn the first approach, we propose to adjust term weights dynamically based on previous story clusters and in the second approach, we propose to employ statistics on training data to learn the named entity reweighting model for each class of stories.\nExperimental results on two Linguistic Data Consortium (LDC) datasets TDT2 and TDT3 show that the proposed model can improve both efficiency and accuracy of the NED task significantly, compared to the baseline system and other existing systems.\nCategories and Subject Descriptors H.3.3 [Information Systems]: Information Search and Retrieval; H.4.2 [Information Systems Applications]: Types of Systems - decision support.\nGeneral Terms Algorithms, Performance, Experimentation 1.\nINTRODUCTION Topic Detection and Tracking (TDT) 
program aims to develop techniques which can effectively organize, search and structure news text materials from a variety of newswire and broadcast media [1].\nNew Event Detection (NED) is one of the five tasks in TDT.\nIt is the task of online identification of the earliest report for each topic as soon as that report arrives in the sequence of documents.\nA Topic is defined as a seminal event or activity, along with directly related events and activities [2].\nAn Event is defined as something (non-trivial) happening in a certain place at a certain time [3].\nFor instance, when a bomb explodes in a building, the explosion is the seminal event that triggers the topic, and other stories on the same topic would be those discussing salvage efforts, the search for perpetrators, arrests and trials and so on.\nUseful news information is usually buried in a mass of data generated every day.\nTherefore, NED systems are very useful for people who need to detect novel information from real-time news streams.\nThese real-life needs often occur in domains like financial markets, news analysis, and intelligence gathering.\nIn most current state-of-the-art NED systems, each news story on hand is compared to all the previously received stories.\nIf none of the similarities between them exceeds a threshold, then the story triggers a new event.\nThe similarities are usually computed with the cosine or Hellinger similarity metric.\nThe core problem of NED is to identify whether two stories are on the same topic.\nObviously, these systems cannot take advantage of topic information.\nFurthermore, they are not acceptable in real applications because of the large amount of computation required in the NED process.\nOther systems organize previous stories into clusters (each cluster corresponds to a topic), and a new story is compared to the previous clusters instead of stories.\nThis manner can reduce the number of comparisons significantly.\nNevertheless, it has been proved that this manner is 
less accurate [4, 5].\nThis is because stories within a topic sometimes drift far away from each other, which can lead to low similarity between a story and its topic.\nOn the other hand, some proposed NED systems have tried to improve accuracy by making better use of named entities [10, 11, 12, 13].\nHowever, none of these systems has considered that terms of different types (e.g. Noun, Verb or Person name) have different effects for different classes of stories in determining whether two stories are on the same topic.\nFor example, the names of election candidates (Person name) are very important for stories of the election class; the locations (Location name) where accidents happened are important for stories of the accidents class.\nSo, in NED, there still exist the following three problems to be investigated: (1) How to speed up the detection procedure without decreasing detection accuracy?\n(2) How to make good use of cluster (topic) information to improve accuracy?\n(3) How to obtain better news story representation by better understanding of named entities?\nDriven by these problems, we propose three approaches in this paper.\n(1) To make the detection procedure faster, we propose a new NED procedure based on a news indexing-tree created dynamically.\nThe story indexing-tree is created by assembling similar stories together to form news clusters in different hierarchies according to their similarity values.\nComparisons between the current story and previous clusters help find the most similar story in fewer comparisons.\nThe new procedure can reduce the number of comparisons without hurting accuracy.\n(2) We use the clusters of the first level in the indexing-tree as news topics, in which term weights are adjusted dynamically according to the term distribution in the clusters.\nIn this approach, cluster (topic) information is used properly, so the problem of content decentralization is avoided.\n(3) Based on observations on the statistics obtained from training 
data, we found that terms of different types (e.g. Noun and Verb) have different effects for different classes of stories in determining whether two stories are on the same topic.\nWe therefore propose to use statistics to optimize the weights of the terms of different types in a story according to the news class that the story belongs to.\nOn the TDT3 dataset, the new NED model uses just 14.9% of the comparisons of the basic model, while its minimum normalized cost is 0.5012, which is 0.0797 better than the basic model, and also better than any other results previously reported for this dataset [8, 13].\nThe rest of the paper is organized as follows.\nWe start off by summarizing the previous work on NED in Section 2.\nSection 3 presents the basic model for NED that most current systems use.\nSection 4 describes our new detection procedure based on the news indexing-tree.\nIn Section 5, two term reweighting methods are proposed to improve NED accuracy.\nSection 6 gives our experimental data and evaluation metrics.\nWe finally wrap up with the experimental results in Section 7, and the conclusions and future work in Section 8.\n2.\nRELATED WORK Papka et al. proposed Single-Pass clustering for NED [6].\nWhen a new story was encountered, it was processed immediately to extract term features, and a query representation of the story's content was built up.\nThen it was compared with all the previous queries.\nIf the document did not trigger any queries by exceeding a threshold, it was marked as a new event.\nLam et al. build up previous query representations of story clusters, each of which corresponds to a topic [7].\nIn this manner comparisons happen between stories and clusters.\nIn recent years, most work has focused on proposing better methods for the comparison of stories and document representation.\nBrants et al. 
[8] extended a basic incremental TF-IDF model to include source-specific models, similarity score normalization based on document-specific averages, similarity score normalization based on source-pair specific averages, term reweighting based on inverse event frequencies, and segmentation of documents.\nGood improvements on TDT benchmarks were shown.\nStokes et al. [9] utilized a combination of evidence from two distinct representations of a document's content.\nOne of the representations was the usual free-text vector; the other made use of lexical chains (created using WordNet) to build another term vector.\nThe two representations were then combined in a linear fashion.\nA marginal increase in effectiveness was achieved when the combined representation was used.\nSome efforts have been made on how to utilize named entities to improve NED.\nYang et al. gave location named entities four times the weight of other terms and named entities [10].\nThe DOREMI research group combined semantic similarities of person names, location names and time together with textual similarity [11][12].\nThe UMass research group [13] split the document representation into two parts: named entities and non-named entities.\nIt was found that some classes of news could achieve better performance using the named entity representation, while some other classes of news could achieve better performance using the non-named entity representation.\nBoth [10] and [13] used text categorization techniques to classify news stories in advance.\nIn [13] news stories are classified automatically at first, and then the sensitivities of named and non-named terms for NED are tested for each class.\nIn [10] frequent terms for each class are removed from the document representation.\nFor example, the word election does not help identify different elections.\nIn their work, the effectiveness of different kinds of names (or terms with different POS) for NED in different news classes is not investigated.\nWe use statistical analysis to reveal the 
fact and use it to improve NED performance.\n3.\nBASIC MODEL In this section, we present the basic New Event Detection model, which is similar to what most current systems apply.\nThen, we propose our new model by extending the basic model.\nNew Event Detection systems use a news story stream as input, in which stories are strictly time-ordered.\nOnly previously received stories are available when dealing with the current story.\nThe output is a decision on whether the current story is about a new event or not, and the confidence of the decision.\nUsually, a NED model consists of three parts: story representation, similarity calculation and detection procedure.\n3.1 Story Representation Preprocessing is needed before generating the story representation.\nFor preprocessing, we tokenize words, recognize abbreviations, normalize abbreviations, add part-of-speech tags, remove stop-words included in the stop list used in InQuery [14], replace words with their stems using the K-stem algorithm [15], and then generate a word vector for each news story.\nWe use the incremental TF-IDF model for term weight calculation [4].\nIn a TF-IDF model, term frequency in a news document is weighted by the inverse document frequency, which is generated from a training corpus.\nWhen a new term occurs in the testing process, there are two solutions: simply ignore the new term, or set the df of the term to a small constant (e.g. 
df = 1).\nThe new term receives too low a weight in the first solution (0) and too high a weight in the second solution.\nIn the incremental TF-IDF model, document frequencies are updated dynamically in each time step t: df_t(w) = df_{t-1}(w) + df_{D_t}(w) (1) where D_t represents the news story set received in time step t, df_{D_t}(w) means the number of documents in D_t that term w occurs in, and df_t(w) means the total number of documents that term w occurs in before time t.\nIn this work, each time window includes 50 news stories.\nThus, each story d received in time step t is represented as follows: d \u2192 {weight(d,t,w_1), weight(d,t,w_2), ..., weight(d,t,w_n)} where n means the number of distinct terms in story d, and weight(d,t,w) means the weight of term w in story d at time t: weight(d,t,w) = [log(tf(d,w)+1) * log((N_t+1) \/ (df_t(w)+0.5))] \/ \u2211_{w'\u2208d} [log(tf(d,w')+1) * log((N_t+1) \/ (df_t(w')+0.5))] (2) where N_t means the total number of news stories before time t, and tf(d,w) means how many times term w occurs in news story d. 
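The incremental TF-IDF weighting of equations (1) and (2) can be sketched in a few lines of Python. This is a hypothetical illustration rather than the authors' code; the class and method names are our own, and batching by time step stands in for the 50-story time window described above:

```python
import math
from collections import Counter

class IncrementalTfIdf:
    """Incremental TF-IDF in the style of equations (1) and (2):
    document frequencies grow with each time step's batch of stories."""

    def __init__(self):
        self.df = Counter()  # df_t(w): documents containing w so far
        self.n_docs = 0      # N_t: number of stories seen so far

    def update(self, stories):
        # Equation (1): df_t(w) = df_{t-1}(w) + df_{D_t}(w)
        for tokens in stories:
            self.df.update(set(tokens))
        self.n_docs += len(stories)

    def weights(self, tokens):
        # Equation (2): log-tf times smoothed idf, normalized over the story
        tf = Counter(tokens)
        raw = {w: math.log(tf[w] + 1)
                  * math.log((self.n_docs + 1) / (self.df[w] + 0.5))
               for w in tf}
        total = sum(raw.values())
        return {w: v / total for w, v in raw.items()}
```

Because of the denominator in equation (2), the weights of each story sum to 1, so a story behaves like a probability distribution over its terms, which is the form a Hellinger-style similarity between stories expects.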
3.2 Similarity Calculation We use the Hellinger distance for the calculation of similarity between two stories; for two stories d and d' at time t, their similarity is defined as follows: sim(d,d',t) = \u2211_{w\u2208d\u2229d'} sqrt( weight(d,t,w) * weight(d',t,w) ) (3) 3.3 Detection Procedure For each story d received in time step t, the value n(d) = max_{d': time(d') < time(d)} sim(d,d',t) (4) is a score used to determine whether d is a story about a new topic and at the same time is an indication of the confidence in our decision [8].\ntime(d) means the publication time of story d.\nIf the score exceeds the threshold \u03b8_new, then there exists a sufficiently similar document, thus d is an old story; otherwise, there is no sufficiently similar previous document, thus d is a new story.\n4.\nNEW NED PROCEDURE Traditional NED systems can be classified into two main types with respect to the detection procedure: (1) S-S type, in which the story on hand is compared to each story received previously, and the highest similarity is used to determine whether the current story is about a new event; (2) S-C type, in which the story on hand is compared to all previous clusters, each of which represents a topic, and the highest similarity is used for the final decision on the current story.\nIf the highest similarity exceeds the threshold \u03b8_new, then it is an old story and is put into the most similar cluster; otherwise it is a new story and a new cluster is created.\nPrevious work shows that the first manner is more accurate than the second one [4][5].\nSince stories within a topic sometimes drift far away from each other, a story may have very low similarity with its topic.\nSo using similarities between stories to determine novelty is better than using similarities between stories and clusters.\nNevertheless, the first manner needs many more comparisons, which makes it inefficient.\nWe propose a new detection procedure which uses comparisons with 
previous clusters to help find the most similar story in fewer comparisons, and the final new-event decision is made according to the most similar story.\nTherefore, we get both the accuracy of S-S type methods and the efficiency of S-C type methods.\nThe new procedure creates a news indexing-tree dynamically, in which similar stories are put together to form a hierarchy of clusters.\nWe index similar stories together by their common ancestor (a cluster node).\nDissimilar stories are indexed in different clusters.\nWhen a story arrives, we use comparisons between the current story and previous hierarchical clusters to help find the most similar story, which is used for the new-event decision.\nAfter the new-event decision is made, the current story is inserted into the indexing-tree for the following detection.\nThe news indexing-tree is defined formally as follows: S-Tree = {r, N_C, N_S, E} where r is the root of the S-Tree, N_C is the set of all cluster nodes, N_S is the set of all story nodes, and E is the set of all edges in the S-Tree.\nWe define a set of constraints for an S-Tree: \u2200i \u2208 N_C, i is a non-terminal node in the tree; \u2200i \u2208 N_S, i is a terminal node in the tree; \u2200i \u2208 N_C, the out-degree of i is at least 2; \u2200i \u2208 N_C, i is represented as the centroid of its descendants.\nFor a news story d_i, the comparison procedure and inserting procedure based on the indexing-tree are defined as follows.\nAn example is shown by Figure 1 and Figure 2.\nFigure 1.\nComparison procedure Figure 2.\nInserting procedure Comparison procedure: Step 1: compare d_i to all the direct child nodes of r and select \u03bb nodes with highest similarities, e.g., C1 2 and C1 3 in Figure 1.\nStep 2: for each node selected in the last step, e.g. C1 2, compare d_i to all its direct child nodes, and select \u03bb nodes with highest similarities, e.g. 
C2 2 and d8.\nRepeat step 2 for all non-terminal nodes.\nStep 3: record the terminal node with the highest similarity to d_i, e.g. s5, and the similarity value (0.20).\nInserting d_i into the S-Tree with r as root: Find the node n which is a direct child of r on the path from r to the terminal node with highest similarity s, e.g. C1 2.\nIf s is smaller than \u03b8_init + (h-1)\u03b4, then add d_i to the tree as a direct child of r. Otherwise, if n is a terminal node, then create a cluster node in place of n, and add both n and d_i as its direct children; if n is a non-terminal node, then repeat this procedure and insert d_i into the sub-tree with n as root recursively.\nHere h is the length of the path between n and the root of the S-Tree.\nThe more the stories in a cluster are similar to each other, the better the cluster represents the stories in it.\nHence we place no constraints on the maximum height of the tree or the degree of a node.\nTherefore, we cannot give the complexity of this indexing-tree based procedure, but we will give the number of comparisons needed by the new procedure in our experiments in Section 7.\n5.\nTERM REWEIGHTING METHODS In this section, two term reweighting methods are proposed to improve NED accuracy.\nIn the first method, a new way is explored to make better use of cluster (topic) information.\nThe second one finds a better way to make use of named entities based on news classification.\n5.1 Term Reweighting Based on Distribution Distance TF-IDF is the most prevalent model used in information retrieval systems.\nThe basic idea is that the fewer documents a term appears in, the more important the term is in the discrimination of documents (relevant or not relevant to a query containing the term).\nNevertheless, in the TDT domain, we need to discriminate documents with regard to topics rather than queries.\nIntuitively, using cluster (topic) vectors to compare with subsequent news stories should outperform using story vectors.\nUnfortunately, the experimental results do not 
support this intuition [4][5]. From observations on the data, we find the reason is that a news topic usually contains many directly or indirectly related events, each with its own sub-subject, and these sub-subjects usually differ from one another. Taking the topic described in Section 1 as an example, events like the explosion and the salvage have very low similarity to events about the criminal trial, so stories about the trial would have low similarity to a topic vector built from the earlier events. This section focuses on how to make effective use of topic information while avoiding this problem of content decentralization.

First, we classify terms into five classes to help analyze the requirements of the modified model:

Term class A: terms that occur frequently in the whole corpus, e.g., year and people. Terms of this class should be given low weights because they do not help much in topic discrimination.

Term class B: terms that occur frequently within a news category, e.g., election and storm. They are useful for distinguishing two stories in different news categories, but they provide no information for determining whether two stories are on the same topic. In other words, the terms election and storm do not help differentiate two election campaigns, or two storm disasters. Terms of this class should therefore be assigned lower weights.

Term class C: terms that occur frequently in one topic and infrequently in other topics, e.g., the name of a crashed plane or the name of a specific hurricane. News stories belonging to different topics rarely share terms of this class. The more frequently such a term appears in a topic, the more important it is for a story belonging to that topic, so it should be given a higher weight.

Term class D: terms that appear in one topic exclusively, but not frequently, for example the name of a fireman who distinguished himself in a salvage action, which may appear in only two
or three stories but never appears in other topics. Terms of this type should receive higher weights than the TF-IDF model assigns; however, since they are not frequent within the topic, it is not appropriate to give them too high a weight.

Term class E: terms with low document frequency that appear in different topics. Terms of this class should receive lower weights.

We now analyze whether the TF-IDF model gives proper weights to the five classes of terms. Terms of class A are clearly given low weights by TF-IDF, which conforms to the requirement above. Under TF-IDF, the weights of class B terms depend strongly on the number of stories in the news class: TF-IDF cannot assign low weights when the story containing the term belongs to a relatively small news class. For a term of class C, the more frequently it appears in a topic, the lower the weight TF-IDF gives it, which strongly conflicts with the requirement for class C. TF-IDF correctly gives class D terms high weights, but it also gives high weights to class E terms, which should be weighted low. To sum up, terms of classes B, C, and E are not weighted properly by the TF-IDF model, so we propose a modified model to resolve this problem.

When θ_init and θ_new are set close to each other, we assume that most of the stories in a first-level cluster (a direct child node of the root node) are on the same topic. We therefore use a first-level cluster to capture the term distribution within the topic dynamically (df for all terms within the cluster). The KL divergence between a term's distribution in a first-level cluster and in the whole story set is used to adjust term weights:

weight'(d, t, w) = weight(d, t, w) · (1 + γ · KL(P_cw || P_tw)) / Σ_{w'∈d} weight(d, t, w') · (1 + γ · KL(P_cw' || P_tw'))   (5)

where

p_cw(y) = df_c(w) / N_c,   p_cw(ȳ) = 1 − df_c(w) / N_c   (6)
p_tw(y) = df_t(w) / N_t,   p_tw(ȳ) = 1 − df_t(w) / N_t   (7)

Here df_c(w) is the number of documents containing term w within cluster c, N_c is the number of documents in cluster c, and N_t is the total number of documents that arrive before time step t; γ is a constant parameter, currently set to 3 manually. KL divergence is defined as follows [17]:

KL(P || Q) = Σ_x p(x) log (p(x) / q(x))   (8)

The basic idea is that, for a story in a topic, the more a term occurs within the topic and the less it occurs in other topics, the higher the weight it should be assigned. The modified model meets all the requirements of the five term classes listed above.

5.2 Term Reweighting Based on Term Type and Story Class
Previous work found that some classes of news stories achieve good improvements when extra weight is given to named entities. We find, further, that terms of different types should be given different amounts of extra weight for different classes of news stories. We use OpenNLP^1 to recognize named-entity types and part-of-speech tags for the terms appearing in news stories. The named-entity types include person name, organization name, location name, date, time, money, and percentage, and five parts of speech are selected: noun (NN), verb (VB), adjective (JJ), adverb (RB), and cardinal number (CD). Statistical analysis reveals which term types are discriminative at topic level for different classes of stories. For convenience, named-entity types and part-of-speech tags are uniformly called term types in the subsequent sections.

Determining whether two stories are about the same topic is a basic component of the NED task, so we first use the χ² statistic to compute correlations between terms and topics. For a term t and a topic T, a contingency table is derived:

Table 1. A 2×2 contingency table
                    belongs to topic T   does not belong to topic T
Docs including t            A                       B
Docs excluding t            C                       D

The χ² statistic for a specific term t with respect to topic T is defined as [16]:

χ²(w, T) = (A + B + C + D) · (AD − CB)² / ((A + C) · (B + D) · (A + B) · (C + D))   (9)

News topics for the TDT task are further classified into 11 rules of interpretation (ROIs)^2, which can be seen as a higher-level classification of stories. The average correlation between a term type and a topic ROI is computed as:

χ²_avg(P_k, R_m) = (1 / |R_m|) Σ_{T∈R_m} (1 / |P_k|) Σ_{w∈P_k} p(w, T) · χ²(w, T),   k = 1…K, m = 1…M   (10)

where K is the number of term types (set to 12 in this paper), M is the number of news classes (ROIs; set to 11 in this paper), P_k is the set of all terms of type k, R_m is the set of all topics of class m, and p(w, T) is the probability that w occurs in topic T. Because of space limitations, only part of the term types (9 of them) and part of the news classes (8 of them) are listed in Table 2, together with the average correlation values between them. The statistics are derived from the labeled data in the TDT2 corpus. (The results in Table 2 are normalized for ease of comparison.)

The statistics in Table 2 indicate the usefulness of different term types for topic discrimination with respect to different news classes. Location name is the most useful term type for three news classes: Natural Disasters, Violence or War, and Finances. For three other categories, Elections, Legal/Criminal Cases, and Science and Discovery, person name is the most discriminative term type. For Scandals/Hearings, date is the most important information for topic discrimination. In addition, Legal/Criminal Cases and Finance topics have higher correlation with money terms, while Science and Discovery topics have higher correlation with percentage terms. Non-name terms are more stable across the different classes.

^1 http://opennlp.sourceforge.net/
^2 http://projects.ldc.upenn.edu/TDT3/Guide/label.html

From the analysis of Table 2, it is reasonable to adjust term weights according to their term type and
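The distribution-distance reweighting of Section 5.1 (formulas (5)-(8)) can be sketched in Python as follows. The function names are our own, and the eps-smoothing is an addition of this sketch, needed to keep the KL term finite when a document frequency is zero.

```python
import math

GAMMA = 3.0  # the gamma of formula (5); set to 3 manually, as in the paper

def kl_bernoulli(p, q, eps=1e-10):
    """Formula (8) specialized to the two-outcome distributions of
    formulas (6)-(7); eps-smoothing (our addition) avoids log(0)."""
    p = min(max(p, eps), 1.0 - eps)
    q = min(max(q, eps), 1.0 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def reweight_by_distribution(story, df_cluster, n_cluster, df_all, n_all):
    """Formula (5): boost each term weight by 1 + gamma * KL(P_cw || P_tw),
    then renormalize over the story."""
    scaled = {}
    for term, w in story.items():
        p_c = df_cluster.get(term, 0) / n_cluster   # formula (6)
        p_t = df_all.get(term, 0) / n_all           # formula (7)
        scaled[term] = w * (1.0 + GAMMA * kl_bernoulli(p_c, p_t))
    z = sum(scaled.values())
    return {t: v / z for t, v in scaled.items()}
```

A term frequent inside its first-level cluster but rare in the whole story set (a class C term) gets a large KL boost, while a term whose cluster and corpus distributions match (class A or B) keeps a factor near 1.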
the news class the story belongs to. New term weights are computed as follows:

weight'(d, t, w) = α_{type(w)}^{class(d)} · weight(d, t, w) / Σ_{w'∈d} α_{type(w')}^{class(d)} · weight(d, t, w')   (11)

where type(w) is the type of term w, class(d) is the class of story d, and α_k^c is the reweighting parameter for news class c and term type k. In this work we simply use the statistics in Table 2 as the reweighting parameters. Even though using the statistics directly may not be the best choice, we do not discuss how to obtain the best parameters automatically; we plan to learn them with machine-learning techniques in future work.

We use BoosTexter [20] to classify all stories into one of the 11 ROIs. BoosTexter is a boosting-based machine-learning program that creates a series of simple rules to build a classifier for text or attribute-value data. We use the term weights generated by the TF-IDF model as features for story classification, train the model on the 12,000 judged English stories in TDT2, and classify the remaining stories in TDT2 and all stories in TDT3. The classification results are used for term reweighting in formula (11). Since the TDT datasets do not provide class labels for off-topic stories, we cannot report classification accuracy here, and therefore do not discuss the effect of classification accuracy on NED performance.

6. EXPERIMENTAL SETUP
6.1 Datasets
We used two LDC [18] datasets, TDT2 and TDT3, for our experiments. TDT2 contains news stories from January to June 1998: around 54,000 stories from sources such as ABC, the Associated Press, CNN, the New York Times, Public Radio International, and Voice of America. Only the English stories in the collection were considered. TDT3 contains approximately 31,000 English stories collected from October to December 1998. In addition to the sources used in TDT2, it also contains
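The χ² statistic of formula (9) and the type/class reweighting of formula (11) can be sketched together in Python. The function names are our own; the ALPHA entries copy a few values from Table 2 for illustration, with NN standing in for the non-name terms.

```python
def chi_square(a, b, c, d):
    """Formula (9): chi-square for the 2x2 contingency table of Table 1
    (a/c: docs in topic T with/without term t; b/d: docs outside T)."""
    n = a + b + c + d
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    return n * (a * d - c * b) ** 2 / denom if denom else 0.0

# Reweighting parameters alpha[news_class][term_type]; a few values are
# copied from Table 2 for illustration.
ALPHA = {
    "Natural Disasters": {"Location": 1.00, "Person": 0.27, "NN": 0.25},
    "Elections":         {"Location": 0.37, "Person": 1.00, "NN": 0.32},
}

def reweight_by_type(story, term_types, news_class):
    """Formula (11): scale each term weight by alpha for (class(d), type(w)),
    then renormalize over the story."""
    scaled = {t: w * ALPHA[news_class].get(term_types.get(t, "NN"), 1.0)
              for t, w in story.items()}
    z = sum(scaled.values())
    return {t: v / z for t, v in scaled.items()}
```

With these values, a Location term dominates a Person term for a Natural Disasters story, and the order reverses for an Elections story, matching the Table 2 analysis.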
stories from NBC and MSNBC TV broadcasts. We used transcribed versions of the TV and radio broadcasts in addition to the textual news. The TDT2 dataset is labeled with about 100 topics, and approximately 12,000 English stories belong to at least one of these topics. The TDT3 dataset is labeled with about 120 topics, and approximately 8,000 English stories belong to at least one of these topics. All the topics are classified into 11 Rules of Interpretation: (1) Elections, (2) Scandals/Hearings, (3) Legal/Criminal Cases, (4) Natural Disasters, (5) Accidents, (6) Ongoing Violence or War, (7) Science and Discovery News, (8) Finance, (9) New Law, (10) Sports News, (11) Miscellaneous News.

6.2 Evaluation Metric
TDT uses a cost function C_Det that combines the probabilities of missing a new story and of a false alarm [19]:

C_Det = C_Miss · P_Miss · P_Target + C_FA · P_FA · P_Nontarget   (12)

where C_Miss is the cost of missing a new story, P_Miss the probability of missing a new story, and P_Target the probability of seeing a new story in the data; C_FA is the cost of a false alarm, P_FA the probability of a false alarm, and P_Nontarget the probability of seeing an old story. The cost C_Det is normalized such that a perfect system scores 0 and a trivial system, the better of marking all stories as new or marking all as old, scores 1:

Norm(C_Det) = C_Det / min(C_Miss · P_Target, C_FA · P_Nontarget)   (13)

A new event detection system gives two outputs for each story. The first is yes or no, indicating whether the story triggers a new event or not. The second is a score indicating the confidence of the first decision. Confidence scores can be used to plot DET curves, i.e., curves that plot false alarm vs.
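Formulas (12)-(13) can be sketched directly. The constants C_Miss = 1, C_FA = 0.1, P_Target = 0.02 are the standard TDT evaluation settings [19], stated here as an assumption since the paper does not restate them.

```python
def detection_cost(p_miss, p_fa, p_target=0.02, c_miss=1.0, c_fa=0.1):
    """Formulas (12)-(13): raw and normalized TDT detection cost.
    Defaults are the standard TDT evaluation constants (assumed)."""
    p_nontarget = 1.0 - p_target
    c_det = c_miss * p_miss * p_target + c_fa * p_fa * p_nontarget  # (12)
    norm = c_det / min(c_miss * p_target, c_fa * p_nontarget)       # (13)
    return c_det, norm
```

Under these constants, marking every story as old gives a normalized cost of exactly 1, illustrating the trivial-system baseline described above; a system's DET operating points trade P_Miss against P_FA around that baseline.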
miss probabilities. The minimum normalized cost can be determined if the optimal threshold on the score is chosen.

7. EXPERIMENTAL RESULTS
7.1 Main Results
To test the approaches proposed in the model, we implemented and tested five systems:

System-1: the baseline, implemented according to the basic model described in Section 3, i.e., using the incremental TF-IDF model to generate term weights and Hellinger distance to compute document similarity. Similarity score normalization is also employed [8]. The S-S detection procedure is used.

System-2: the same as System-1 except that the S-C detection procedure is used.

System-3: the same as System-1 except that it uses the new detection procedure based on the indexing-tree.

System-4: implements the approach presented in Section 5.1, i.e., terms are reweighted according to the distance between term distributions in a cluster and in the whole story set. The new detection procedure is used.

System-5: implements the approach presented in Section 5.2, i.e., terms of different types are reweighted according to news class using trained parameters. The new detection procedure is used.

For comparison, we also list some other NED systems:

System-6 [21]: for each pair of stories, computes three similarity values, for named entities, non-named entities, and all terms respectively, and employs a Support Vector Machine to predict new or old using the similarity values as features.

System-7 [8]: extends a basic incremental TF-IDF model to include source-specific models, similarity score normalization based on document-specific averages, similarity score normalization based on source-pair specific averages, etc.

System-8 [13]: splits the document representation into two parts, named entities and non-named entities, and chooses the more effective part for each news class.

Tables 3 and 4 show the topic-weighted normalized costs and numbers of comparisons on the TDT2 and TDT3 datasets
respectively. Since no held-out data set for fine-tuning the threshold θ_new was available for the experiments on TDT2, we report only minimum normalized costs for our systems in Table 3. System-5 outperforms all the other systems, including System-6, and it performs only 2.78e+8 comparisons in the detection procedure, just 13.4% of System-1.

Table 3. NED results on TDT2
System        Min Norm(C_Det)   Comparisons
System-1          0.5749          2.08e+9
System-2 (a)      0.6673          3.77e+8
System-3 (b)      0.5765          2.81e+8
System-4 (b)      0.5431          2.99e+8
System-5 (b)      0.5089          2.78e+8
System-6          0.5300             -
(a) θ_new = 0.13.  (b) θ_init = 0.13, λ = 3, δ = 0.15.

When evaluating the normalized costs on TDT3, we use the optimal thresholds obtained from the TDT2 data set for all systems. System-2 reduces the number of comparisons to 1.29e+8, just 18.3% of System-1, but at the same time its minimum normalized cost deteriorates, ending up 0.0499 higher than System-1's. System-3 uses the new detection procedure based on the news indexing-tree and requires even fewer comparisons than System-2. This is because story-story comparisons usually yield greater similarities than story-cluster ones, so stories tend to be combined together in System-3. System-3 is essentially equivalent to System-1 in accuracy.

Table 2. Average correlation between term types and news classes
Class                  Location  Person  Date  Organization  Money  Percentage   NN    JJ    CD
Elections                0.37     1      0.04      0.58       0.08     0.03     0.32  0.13  0.10
Scandals/Hearings        0.66     0.62   0.28      1          0.11     0.02     0.27  0.13  0.05
Legal/Criminal Cases     0.48     1      0.02      0.62       0.15     0        0.22  0.24  0.09
Natural Disasters        1        0.27   0         0.04       0.04     0        0.25  0.04  0.02
Violence or War          1        0.36   0.02      0.14       0.02     0.04     0.21  0.11  0.02
Science and Discovery    0.11     1      0.01      0.22       0.08     0.12     0.19  0.08  0.03
Finances                 1        0.45   0.04      0.98       0.13     0.02     0.29  0.06  0.05
Sports                   0.16     0.27   0.01      1          0.02     0        0.11  0.03  0.01

System-4 adjusts term weights based on the distance between term distributions in the whole corpus and in a cluster story set, yielding a good improvement of 0.0468 over
system-1. The best system, System-5, has a minimum normalized cost of 0.5012, which is 0.0797 better than System-1 and also better than any other result previously reported for this dataset [8, 13]. Furthermore, System-5 needs only 1.05e+8 comparisons, 14.9% of System-1.

Table 4. NED results on TDT3
System        Norm(C_Det)   Min Norm(C_Det)   Comparisons
System-1        0.6159          0.5809          7.04e+8
System-2 (a)    0.6493          0.6308          1.29e+8
System-3 (b)    0.6197          0.5868          1.03e+8
System-4 (b)    0.5601          0.5341          1.03e+8
System-5 (b)    0.5413          0.5012          1.05e+8
System-7          -             0.5783             -
System-8          -             0.5229             -
(a) θ_new = 0.13.  (b) θ_init = 0.13, λ = 3, δ = 0.15.

Figure 5 shows the DET curves for our five systems on TDT3. System-5 achieves its minimum cost at a false alarm rate of 0.0157 and a miss rate of 0.4310. We observe that System-4 and System-5 obtain lower miss probability in the regions of low false alarm probability. Our hypothesis is that weight is transferred from non-key terms to the key terms of topics, so the similarity score between two stories belonging to different topics becomes lower than before, because their overlapping terms are usually not key terms of their topics.

7.2 Parameter Selection for Indexing-tree Detection
Figure 3 shows the minimum normalized costs obtained by System-3 on TDT3 using different parameters. The θ_init parameter is tested on six values spanning from 0.03 to 0.18, and the λ parameter on four values: 1, 2, 3, and 4. When θ_init is set to 0.12, the value closest to θ_new, the costs are lower than in the other settings. This is easy to explain: when the stories belonging to the same topic are put into one cluster, it is more reasonable for the cluster to represent the stories in it. When λ is set to 3 or 4, the costs are better than in the other cases, but there is not much difference between 3 and 4.

Figure 3. Minimum normalized cost of System-3 on TDT3 for different θ_init and λ (δ = 0.15).

Figure 4. Number of comparisons of System-3 on TDT3 for different θ_init and λ (δ = 0.15).

Figure 4 gives the numbers of comparisons used by System-3 on TDT3 with the same parameters as Figure 3. The number of comparisons depends strongly on θ_init: the greater θ_init is, the fewer stories are combined together, and the more comparisons are needed for the new-event decision. We therefore use θ_init = 0.13, λ = 3, δ = 0.15 for Systems 3, 4, and 5; with this parameter setting we obtain both low minimum normalized costs and few comparisons.

8. CONCLUSION
We have proposed a news indexing-tree based detection procedure for our model. It reduces the number of comparisons to about one seventh of the traditional method without hurting NED accuracy. We have also presented two extensions to the basic TF-IDF model. The first adjusts term weights based on the difference between term distributions in the whole corpus and in a cluster story set. The second makes better use of term types (named-entity types and parts of speech) according to news categories. Our experimental results on the TDT2 and TDT3 datasets show that both extensions contribute significantly to improvements in accuracy.

We did not consider news time information as a clue for the NED task, since most topics last for a long time and the TDT data sets span only a relatively short period (no more than 6 months). For future work, we want to collect a news set spanning a longer period from the Internet and integrate time information into the NED task. Since a topic is a relatively coarse-grained news cluster, we also want to refine cluster granularity to the event level and identify the different events and their relations within a topic.

Acknowledgments
This work is supported by the National Natural Science Foundation of China
under Grant No. 90604025. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the sponsor.

Figure 5. DET curves on TDT3 (topic-weighted curves and minimum normalized costs for Systems 1-5, together with random performance).

9. REFERENCES
[1] http://www.nist.gov/speech/tests/tdt/index.htm
[2] J. Allan, editor. Topic Detection and Tracking: Event-based Information Organization. Kluwer Academic Publishers, 2002.
[3] Y. Yang, J. Carbonell, R. Brown, T. Pierce, B.T. Archibald, and X. Liu. Learning Approaches for Detecting and Tracking News Events. In IEEE Intelligent Systems Special Issue on Applications of Intelligent Information Retrieval, volume 14(4), 1999, 32-43.
[4] Y. Yang, T. Pierce, and J. Carbonell. A Study on Retrospective and On-line Event Detection. In Proceedings of SIGIR-98, Melbourne, Australia, 1998, 28-36.
[5] J. Allan, V. Lavrenko, D. Malin, and R. Swan. Detections, Bounds, and Timelines: UMass and TDT-3. In Proceedings of the Topic Detection and Tracking Workshop (TDT-3), Vienna, VA, 2000, 167-174.
[6] R. Papka and J. Allan. On-line New Event Detection Using Single Pass Clustering. Technical Report UM-CS-1998-021, 1998.
[7] W. Lam, H. Meng, K. Wong, and J. Yen. Using Contextual Analysis for News Event Detection. International Journal on Intelligent Systems, 2001, 525-546.
[8] B. Thorsten, C. Francine, and F. Ayman. A System for New Event Detection. In Proceedings of the 26th Annual International ACM SIGIR Conference, New York, NY, USA. ACM Press. 2003, 330-337.
[9] S. Nicola and C.
Joe. Combining Semantic and Syntactic Document Classifiers to Improve First Story Detection. In Proceedings of the 24th Annual International ACM SIGIR Conference, New York, NY, USA. ACM Press. 2001, 424-425.
[10] Y. Yang, J. Zhang, J. Carbonell, and C. Jin. Topic-conditioned Novelty Detection. In Proceedings of the 8th ACM SIGKDD International Conference, ACM Press. 2002, 688-693.
[11] M. Juha, A.M. Helena, and S. Marko. Applying Semantic Classes in Event Detection and Tracking. In Proceedings of the International Conference on Natural Language Processing (ICON 2002), 2002, 175-183.
[12] M. Juha, A.M. Helena, and S. Marko. Simple Semantics in Topic Detection and Tracking. Information Retrieval, 7(3-4): 2004, 347-368.
[13] K. Giridhar and J. Allan. Text Classification and Named Entities for New Event Detection. In Proceedings of the 27th Annual International ACM SIGIR Conference, New York, NY, USA. ACM Press. 2004, 297-304.
[14] J. P. Callan, W. B. Croft, and S. M. Harding. The INQUERY Retrieval System. In Proceedings of DEXA-92, 3rd International Conference on Database and Expert Systems Applications, 1992, 78-83.
[15] R. Krovetz. Viewing Morphology as An Inference Process. In Proceedings of ACM SIGIR-93, 1993, 61-81.
[16] Y. Yang and J. Pedersen. A Comparative Study on Feature Selection in Text Categorization. In The Fourteenth International Conference on Machine Learning (ICML'97), Morgan Kaufmann, 1997, 412-420.
[17] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley. 1991.
[18] The Linguistic Data Consortium, http://www.ldc.upenn.edu/.
[19] The 2001 TDT task definition and evaluation plan, http://www.nist.gov/speech/tests/tdt/tdt2001/evalplan.htm.
[20] R. E. Schapire and Y. Singer. BoosTexter: A Boosting-based System for Text Categorization. Machine Learning, 39(2/3), 2000, 135-168.
[21] K. Giridhar and J.
Allan. Using Names and Topics for New Event Detection. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Vancouver, 2005, 121-128.

New Event Detection Based on Indexing-tree and Named Entity

ABSTRACT
New Event Detection (NED) aims at detecting, from one or multiple streams of news stories, which story reports a new event (i.e., one not reported previously). With the overwhelming volume of news available today, there is an increasing need for a NED system that can detect new events more efficiently and accurately. In this paper we propose a new NED model that speeds up the NED task by using a news indexing-tree dynamically. Moreover, based on the observation that terms of different types have different effects on the NED task, two term reweighting approaches are proposed to improve NED accuracy. In the first approach, we adjust term weights dynamically based on previous story clusters; in the second, we employ statistics on training data to learn a named-entity reweighting model for each class of stories. Experimental results on two Linguistic Data Consortium (LDC) datasets, TDT2 and TDT3, show that the proposed model can significantly improve both the efficiency and the accuracy of the NED task, compared to the baseline system and other existing systems.

1. INTRODUCTION
The Topic Detection and Tracking (TDT) program aims to develop techniques that can effectively organize, search, and structure news text materials from a variety of newswire and broadcast media [1]. New Event Detection (NED) is one of the five TDT tasks: the online identification of the earliest report for each topic as soon as that report arrives in the sequence of documents. A Topic is defined as "a seminal event or activity, along with directly related events and activities" [2]. An Event is defined as "something (non-trivial) happening in a certain place at
time\" [3].\nFor instance, when a bomb explodes in a building, the exploding is the seminal event that triggers the topic, and other stories on the same topic would be those discussing salvaging efforts, the search for perpetrators, arrests and trial and so on.\nUseful news information is usually buried in a mass of data generated everyday.\nTherefore, NED systems are very useful for people who need to detect novel information from real-time news stream.\nThese real-life needs often occur in domains like financial markets, news analysis, and intelligence gathering.\nIn most of state-of-the-art (currently) NED systems, each news story on hand is compared to all the previous received stories.\nIf all the similarities between them do not exceed a threshold, then the story triggers a new event.\nThey are usually in the form of cosine similarity or Hellinger similarity metric.\nThe core problem of NED is to identify whether two stories are on the same topic.\nObviously, these systems cannot take advantage of topic information.\nFurther more, it is not acceptable in real applications because of the large amount of computation required in the NED process.\nOther systems organize previous stories into clusters (each cluster corresponds to a topic), and new story is compared to the previous clusters instead of stories.\nThis manner can reduce comparing times significantly.\nNevertheless, it has been proved that this manner is less accurate [4, 5].\nThis is because sometimes stories within a topic drift far away from each other, which could lead low similarity between a story and its topic.\nOn the other hand, some proposed NED systems tried to improve accuracy by making better use of named entities [10, 11, 12, 13].\nHowever, none of the systems have considered that terms of different types (e.g. 
Noun, Verb, or Person name) have different effects for different classes of stories when determining whether two stories are on the same topic. For example, the names of election candidates (Person names) are very important for stories of the election class, while the locations (Location names) where accidents happened are important for stories of the accident class. Thus, three problems in NED remain to be investigated: (1) How can the detection procedure be sped up without decreasing detection accuracy? (2) How can cluster (topic) information be used effectively to improve accuracy? (3) How can a better news story representation be obtained through a better understanding of named entities?

Driven by these problems, we propose three approaches in this paper. (1) To make the detection procedure faster, we propose a new NED procedure based on a news indexing-tree created dynamically. The story indexing-tree is created by assembling similar stories into news clusters at different hierarchy levels according to their similarity values. Comparisons between the current story and previous clusters help find the most similar story with fewer comparisons. The new procedure reduces the number of comparisons without hurting accuracy. (2) We use the clusters at the first level of the indexing-tree as news topics, in which term weights are adjusted dynamically according to the term distribution in the clusters. In this approach, cluster (topic) information is used properly, so the problem of theme decentralization is avoided. (3) Based on statistics obtained from training data, we find that terms of different types (e.g.,
Noun and Verb) have different effects for different classes of stories when determining whether two stories are on the same topic, and we propose to use these statistics to optimize the weights of the terms of different types in a story according to the news class the story belongs to. On the TDT3 dataset, the new NED model uses just 14.9% of the comparisons of the basic model, while its minimum normalized cost is 0.5012, which is 0.0797 better than the basic model and also better than any other result previously reported for this dataset [8, 13].

The rest of the paper is organized as follows. Section 2 summarizes previous work on NED. Section 3 presents the basic model that most current systems use. Section 4 describes our new detection procedure based on the news indexing-tree. Section 5 proposes two term reweighting methods to improve NED accuracy. Section 6 gives our experimental data and evaluation metrics. We present the experimental results in Section 7, and the conclusions and future work in Section 8.

2. RELATED WORK
Papka et al. proposed single-pass clustering for NED [6]. When a new story is encountered, it is processed immediately to extract term features, and a query representation of the story's content is built up. The story is then compared with all previous queries; if it does not trigger any query by exceeding a threshold, it is marked as a new event. Lam et al. build query representations of story clusters, each of which corresponds to a topic [7]; in this manner, comparisons happen between stories and clusters.

In recent years, most work has focused on better methods for story comparison and document representation. Brants et al.
[8] extended a basic incremental TF-IDF model to include source-specific models, similarity score normalization based on document-specific averages, similarity score normalization based on source-pair specific averages, term reweighting based on inverse event frequencies, and segmentation of documents; good improvements on TDT benchmarks were shown. Stokes et al. [9] utilized a combination of evidence from two distinct representations of a document's content: one representation was the usual free-text vector, while the other used lexical chains (created with WordNet) to build another term vector. The two representations were then combined linearly, and a marginal increase in effectiveness was achieved with the combined representation.

Some efforts have addressed how to utilize named entities to improve NED. Yang et al. gave location named entities four times the weight of other terms and named entities [10]. The DOREMI research group combined the semantic similarities of person names, location names, and times with textual similarity [11][12]. The UMass research group [13] split the document representation into two parts, named entities and non-named entities, and found that some classes of news achieve better performance with the named-entity representation, while other classes do better with the non-named-entity representation. Both [10] and [13] used text categorization techniques to classify news stories in advance: in [13], news stories are first classified automatically, and the sensitivities of name and non-name terms for NED are then tested for each class; in [10], frequent terms for each class are removed from the document representation (for example, the word "election" does not help identify different elections). In their work, the effectiveness of different kinds of names (or terms with different POS tags) for NED in different news classes was not investigated. We use statistical analysis to reveal the
fact and use it to improve NED performance.\n3.\nBASIC MODEL\n3.1 Story Representation\n3.2 Similarity Calculation\n3.3 Detection Procedure\n4.\nNew NED Procedure\n5.\nTerm Reweighting Methods\n5.1 Term Reweighting Based on Distribution Distance\n5.2 Term Reweighting Based on Term Type and Story Class\n6.\nEXPERIMENTAL SETUP 6.1 Datasets\n6.2 Evaluation Metric\n7.\nEXPERIMENTAL RESULTS\n7.1 Main Results\n7.2 Parameter selection for indexing-tree detection\n8.\nCONCLUSION\nWe have proposed a news indexing-tree based detection procedure in our model.\nIt reduces comparing times to about one seventh of traditional method without hurting NED accuracy.\nWe also have presented two extensions to the basic TF-IDF model.\nThe first extension is made by adjust term weights based on term distributions between the whole corpus and a cluster story set.\nAnd the second extension to basic TF-IDF model is better use of term types (named entities types and part-of-speed) according to news categories.\nOur experimental results on TDT2 and TDT3 datasets show that both of the two extensions contribute significantly to improvement in accuracy.\nWe did not consider news time information as a clue for NED task, since most of the topics last for a long time and TDT data sets only span for a relative short period (no more than 6 months).\nFor the future work, we want to collect news set which span for a longer period from internet, and integrate time information in NED task.\nSince topic is a relative coarse-grained news cluster, we also want to refine cluster granularity to event-level, and identify different events and their relations within a topic.","lvl-4":"New Event Detection Based on Indexing-tree and Named Entity\nABSTRACT\nNew Event Detection (NED) aims at detecting from one or multiple streams of news stories that which one is reported on a new event (i.e. 
not reported previously).\nWith the overwhelming volume of news available today, there is an increasing need for a NED system which is able to detect new events more efficiently and accurately.\nIn this paper we propose a new NED model to speed up the NED task by using news indexing-tree dynamically.\nMoreover, based on the observation that terms of different types have different effects for NED task, two term reweighting approaches are proposed to improve NED accuracy.\nIn the first approach, we propose to adjust term weights dynamically based on previous story clusters and in the second approach, we propose to employ statistics on training data to learn the named entity reweighting model for each class of stories.\nExperimental results on two Linguistic Data Consortium (LDC) datasets TDT2 and TDT3 show that the proposed model can improve both efficiency and accuracy of NED task significantly, compared to the baseline system and other existing systems.\n1.\nINTRODUCTION\nNew Event Detection (NED) is one of the five tasks in TDT.\nA Topic is defined as \"a seminal event or activity, along with directly related events and activities\" [2].\nAn Event is defined as \"something (non-trivial) happening in a certain place at\na certain time\" [3].\nUseful news information is usually buried in a mass of data generated everyday.\nTherefore, NED systems are very useful for people who need to detect novel information from real-time news stream.\nThese real-life needs often occur in domains like financial markets, news analysis, and intelligence gathering.\nIn most of state-of-the-art (currently) NED systems, each news story on hand is compared to all the previous received stories.\nIf all the similarities between them do not exceed a threshold, then the story triggers a new event.\nThe core problem of NED is to identify whether two stories are on the same topic.\nObviously, these systems cannot take advantage of topic information.\nOther systems organize previous stories 
into clusters (each cluster corresponds to a topic), and new story is compared to the previous clusters instead of stories.\nThis manner can reduce comparing times significantly.\nThis is because sometimes stories within a topic drift far away from each other, which could lead low similarity between a story and its topic.\nOn the other hand, some proposed NED systems tried to improve accuracy by making better use of named entities [10, 11, 12, 13].\nHowever, none of the systems have considered that terms of different types (e.g. Noun, Verb or Person name) have different effects for different classes of stories in determining whether two stories are on the same topic.\nFor example, the names of election candidates (Person name) are very important for stories of election class; the locations (Location name) where accidents happened are important for stories of accidents class.\n(2) How to make good use of cluster (topic) information to improve accuracy?\n(3) How to obtain better news story representation by better understanding of named entities.\nDriven by these problems, we have proposed three approaches in this paper.\n(1) To make the detection procedure faster, we propose a new NED procedure based on news indexing-tree created dynamically.\nStory indexing-tree is created by assembling similar stories together to form news clusters in different hierarchies according to their values of similarity.\nComparisons between current story and previous clusters could help find the most similar story in less comparing times.\nThe new procedure can\nreduce the amount of comparing times without hurting accuracy.\n(2) We use the clusters of the first floor in the indexing-tree as news topics, in which term weights are adjusted dynamically according to term distribution in the clusters.\nIn this approach, cluster (topic) information is used properly, so the problem of theme decentralization is avoided.\n(3) Based on observations on the statistics obtained from training data, we 
found that terms of different types (e.g. Noun and Verb) have different effects for different classes of stories in determining whether two stories are on the same topic.\nAnd we propose to use statistics to optimize the weights of the terms of different types in a story according to the news class that the story belongs to.\nThe rest of the paper is organized as follows.\nWe start off this paper by summarizing the previous work in NED in section 2.\nSection 3 presents the basic model for NED that most current systems use.\nSection 4 describes our new detection procedure based on news indexing-tree.\nIn section 5, two term reweighting methods are proposed to improve NED accuracy.\nSection 6 gives our experimental data and evaluation metrics.\nWe finally wrap up with the experimental results in Section 7, and the conclusions and future work in Section 8.\n2.\nRELATED WORK\nPapka et al. proposed Single-Pass clustering on NED [6].\nWhen a new story was encountered, it was processed immediately to extract term features and a query representation of the story's content is built up.\nThen it was compared with all the previous queries.\nIf the document did not trigger any queries by exceeding a threshold, it was marked as a new event.\nLam et al build up previous query representations of story clusters, each of which corresponds to a topic [7].\nIn this manner comparisons happen between stories and clusters.\nRecent years, most work focus on proposing better methods on comparison of stories and document representation.\nGood improvements on TDT bench-marks were shown.\nStokes et al. 
[9] utilized a combination of evidence from two distinct representations of a document's content.\nOne of the representations was the usual free text vector, the other made use of lexical chains (created using WordNet) to build another term vector.\nThen the two representations are combined in a linear fashion.\nA marginal increase in effectiveness was achieved when the combined representation was used.\nSome efforts have been done on how to utilize named entities to improve NED.\nYang et al. gave location named entities four times weight than other terms and named entities [10].\nDOREMI research group combined semantic similarities of person names, location names and time together with textual similarity [11] [12].\nUMass [13] research group split document representation into two parts: named entities and non-named entities.\nAnd it was found that some classes of news could achieve better performance using named entity representation, while some other classes of news could achieve better performance using non-named entity representation.\nBoth [10] and [13] used text categorization technique to classify news stories in advance.\nIn [13] news stories are classified automatically at first, and then test sensitivities of names and non-name terms for NED for each class.\nIn [10] frequent terms for each class are removed from document representation.\nIn their work, effectiveness of different kinds of names (or terms with different POS) for NED in different news classes are not investigated.\n8.\nCONCLUSION\nWe have proposed a news indexing-tree based detection procedure in our model.\nIt reduces comparing times to about one seventh of traditional method without hurting NED accuracy.\nWe also have presented two extensions to the basic TF-IDF model.\nThe first extension is made by adjust term weights based on term distributions between the whole corpus and a cluster story set.\nAnd the second extension to basic TF-IDF model is better use of term types (named entities 
types and part-of-speed) according to news categories.\nOur experimental results on TDT2 and TDT3 datasets show that both of the two extensions contribute significantly to improvement in accuracy.\nFor the future work, we want to collect news set which span for a longer period from internet, and integrate time information in NED task.\nSince topic is a relative coarse-grained news cluster, we also want to refine cluster granularity to event-level, and identify different events and their relations within a topic.","lvl-2":"New Event Detection Based on Indexing-tree and Named Entity\nABSTRACT\nNew Event Detection (NED) aims at detecting from one or multiple streams of news stories that which one is reported on a new event (i.e. not reported previously).\nWith the overwhelming volume of news available today, there is an increasing need for a NED system which is able to detect new events more efficiently and accurately.\nIn this paper we propose a new NED model to speed up the NED task by using news indexing-tree dynamically.\nMoreover, based on the observation that terms of different types have different effects for NED task, two term reweighting approaches are proposed to improve NED accuracy.\nIn the first approach, we propose to adjust term weights dynamically based on previous story clusters and in the second approach, we propose to employ statistics on training data to learn the named entity reweighting model for each class of stories.\nExperimental results on two Linguistic Data Consortium (LDC) datasets TDT2 and TDT3 show that the proposed model can improve both efficiency and accuracy of NED task significantly, compared to the baseline system and other existing systems.\n1.\nINTRODUCTION\nTopic Detection and Tracking (TDT) program aims to develop techniques which can effectively organize, search and structure news text materials from a variety of newswire and broadcast media [1].\nNew Event Detection (NED) is one of the five tasks in TDT.\nIt is the task of 
online identification of the earliest report for each topic as soon as that report arrives in the sequence of documents. A Topic is defined as "a seminal event or activity, along with directly related events and activities" [2]. An Event is defined as "something (non-trivial) happening in a certain place at a certain time" [3]. For instance, when a bomb explodes in a building, the explosion is the seminal event that triggers the topic, and other stories on the same topic would be those discussing the salvage efforts, the search for perpetrators, arrests, the trial and so on. Useful news information is usually buried in the mass of data generated every day. Therefore, NED systems are very useful for people who need to detect novel information from real-time news streams. These real-life needs often occur in domains like financial markets, news analysis, and intelligence gathering. In most current state-of-the-art NED systems, each news story on hand is compared to all previously received stories. If none of the similarities between them exceeds a threshold, then the story triggers a new event. The similarities are usually computed with the cosine or Hellinger similarity metric. The core problem of NED is to identify whether two stories are on the same topic. Obviously, these systems cannot take advantage of topic information. Furthermore, they are not acceptable in real applications because of the large amount of computation required in the NED process. Other systems organize the previous stories into clusters (each cluster corresponds to a topic), and each new story is compared to the previous clusters instead of to individual stories. This manner reduces the number of comparisons significantly. Nevertheless, it has been shown that this manner is less accurate [4, 5]. This is because stories within a topic sometimes drift far away from each other, which can lead to low similarity between a story and its topic. On the other hand, some proposed NED systems have tried to improve
accuracy by making better use of named entities [10, 11, 12, 13]. However, none of these systems has considered that terms of different types (e.g., Noun, Verb or Person name) have different effects for different classes of stories in determining whether two stories are on the same topic. For example, the names of election candidates (Person name) are very important for stories of the election class; the locations (Location name) where accidents happened are important for stories of the accidents class. So, in NED, the following three problems remain to be investigated: (1) How to speed up the detection procedure without decreasing detection accuracy? (2) How to make good use of cluster (topic) information to improve accuracy? (3) How to obtain a better news story representation through a better understanding of named entities? Driven by these problems, we propose three approaches in this paper. (1) To make the detection procedure faster, we propose a new NED procedure based on a news indexing-tree created dynamically. The story indexing-tree is created by assembling similar stories together to form news clusters in different hierarchies according to their similarity values. Comparisons between the current story and previous clusters help find the most similar story in fewer comparisons. The new procedure can reduce the number of comparisons without hurting accuracy. (2) We use the clusters of the first level of the indexing-tree as news topics, in which term weights are adjusted dynamically according to the term distribution in the clusters. In this approach, cluster (topic) information is used properly, so the problem of theme decentralization is avoided. (3) Based on statistics obtained from training data, we found that terms of different types (e.g.
Noun and Verb) have different effects for different classes of stories in determining whether two stories are on the same topic. We therefore propose to use statistics to optimize the weights of terms of different types in a story according to the news class that the story belongs to. On the TDT3 dataset, the new NED model uses only 14.9% of the comparisons of the basic model, while its minimum normalized cost is 0.5012, which is 0.0797 better than the basic model, and also better than any other result previously reported for this dataset [8, 13]. The rest of the paper is organized as follows. We start off by summarizing previous work on NED in Section 2. Section 3 presents the basic model for NED that most current systems use. Section 4 describes our new detection procedure based on the news indexing-tree. In Section 5, two term reweighting methods are proposed to improve NED accuracy. Section 6 gives our experimental data and evaluation metrics. We finally wrap up with the experimental results in Section 7, and the conclusions and future work in Section 8.

2. RELATED WORK

Papka et al. proposed Single-Pass clustering for NED [6]. When a new story is encountered, it is processed immediately to extract term features, and a query representation of the story's content is built up. The story is then compared with all previous queries. If the document does not trigger any query by exceeding a threshold, it is marked as a new event. Lam et al. build query representations of story clusters, each of which corresponds to a topic [7]. In this manner comparisons happen between stories and clusters. In recent years, most work has focused on proposing better methods for story comparison and document representation. Brants et al.
[8] extended a basic incremental TF-IDF model to include source-specific models, similarity score normalization based on document-specific averages, similarity score normalization based on source-pair-specific averages, term reweighting based on inverse event frequencies, and segmentation of documents. Good improvements on TDT benchmarks were shown. Stokes et al. [9] utilized a combination of evidence from two distinct representations of a document's content. One representation was the usual free-text vector; the other used lexical chains (created using WordNet) to build another term vector. The two representations were then combined in a linear fashion. A marginal increase in effectiveness was achieved when the combined representation was used. Some efforts have been made on how to utilize named entities to improve NED. Yang et al. gave location named entities four times the weight of other terms and named entities [10]. The DOREMI research group combined semantic similarities of person names, location names and time together with textual similarity [11, 12]. The UMass research group [13] split the document representation into two parts: named entities and non-named entities. It was found that some classes of news achieve better performance using the named-entity representation, while other classes achieve better performance using the non-named-entity representation. Both [10] and [13] used text categorization techniques to classify news stories in advance. In [13] news stories are classified automatically first, and then the sensitivities of name and non-name terms for NED are tested for each class. In [10] frequent terms for each class are removed from the document representation; for example, the word "election" does not help identify different elections. In their work, the effectiveness of different kinds of names (or terms with different POS) for NED in different news classes was not investigated. We use statistical analysis to reveal the
fact and use it to improve NED performance.

3. BASIC MODEL

In this section, we present the basic New Event Detection model, which is similar to what most current systems apply. We then propose our new model by extending the basic model. New Event Detection systems use a news story stream as input, in which stories are strictly time-ordered. Only previously received stories are available when dealing with the current story. The output is a decision on whether the current story is about a new event or not, together with the confidence of the decision. Usually, a NED model consists of three parts: story representation, similarity calculation and detection procedure.

3.1 Story Representation

Preprocessing is needed before generating the story representation. For preprocessing, we tokenize words, recognize and normalize abbreviations, add part-of-speech tags, remove stop-words included in the stop list used in InQuery [14], replace words with their stems using the K-stem algorithm [15], and then generate a word vector for each news story. We use the incremental TF-IDF model for term weight calculation [4]. In a TF-IDF model, term frequency in a news document is weighted by the inverse document frequency, which is generated from a training corpus. When a new term occurs in the testing process, there are two solutions: simply ignore the new term, or set the df of the term to a small constant (e.g.
df = 1). The new term receives too low a weight in the first solution (0) and too high a weight in the second. In the incremental TF-IDF model, document frequencies are updated dynamically at each time step t:

df_t(w) = df_{t-1}(w) + df_{D_t}(w)

where D_t represents the news story set received at time t, df_{D_t}(w) is the number of documents in D_t that term w occurs in, and df_t(w) is the total number of documents that term w occurs in before time t. In this work, each time window includes 50 news stories. Thus, each story d received at time t is represented as follows:

d → {weight(d, t, w_1), weight(d, t, w_2), ..., weight(d, t, w_n)}

where n is the number of distinct terms in story d, and weight(d, t, w) is the weight of term w in story d at time t:

weight(d, t, w) = tf(d, w) · log(N_t / df_t(w))

where N_t is the total number of news stories before time t, and tf(d, w) is the number of times term w occurs in news story d.

3.2 Similarity Calculation

We use the Hellinger distance for the calculation of similarity between two stories; for two stories d and d' at time t, their similarity is defined as follows:

sim_t(d, d') = Σ_w sqrt(weight(d, t, w) · weight(d', t, w))

3.3 Detection Procedure

For each story d received at time step t, the value

score(d) = 1 − max_{d' : time(d') < time(d)} sim_t(d, d')

is a score used to determine whether d is a story about a new topic and, at the same time, is an indication of the confidence in our decision [8]. time(d) denotes the publication time of story d. If the score exceeds the threshold θ_new, then there is no sufficiently similar previous document and d is a new story; otherwise, there exists a sufficiently similar document and d is an old story.

4. NEW NED PROCEDURE

Traditional NED systems can be classified into two main types with respect to the detection procedure: (1) S-S type, in which the story on hand is compared to each story received previously, and the highest similarity is used to determine whether the current story is about a new event; (2) S-C type, in which the story on hand is compared to all previous clusters, each of which represents a topic, and the highest similarity is used for the final decision for the
current story. If the highest similarity exceeds the threshold θ_new, the story is an old story and is put into the most similar cluster; otherwise it is a new story and a new cluster is created. Previous work shows that the first manner is more accurate than the second [4, 5]. Since stories within a topic sometimes drift far away from each other, a story may have very low similarity with its topic, so using similarities between stories to detect new stories is better than using similarities between stories and clusters. Nevertheless, the first manner needs many more comparisons, which makes it inefficient. We propose a new detection procedure which uses comparisons with previous clusters to help find the most similar story in fewer comparisons; the final new-event decision is then made according to the most similar story. Therefore, we can get both the accuracy of S-S type methods and the efficiency of S-C type methods. The new procedure creates a news indexing-tree dynamically, in which similar stories are put together to form a hierarchy of clusters. We index similar stories together under their common ancestor (a cluster node). Dissimilar stories are indexed in different clusters. When a story arrives, we use comparisons between the current story and the previous hierarchical clusters to help find the most similar story, which is used for the new-event decision. After the new-event decision is made, the current story is inserted into the indexing-tree for the subsequent detection. The news indexing-tree is defined formally as follows:

S-Tree = {r, N_C, N_S, E}

where r is the root of the S-Tree, N_C is the set of all cluster nodes, N_S is the set of all story nodes, and E is the set of all edges in the S-Tree. We define a set of constraints for an S-Tree:

iv. ∀i, i ∈ N_C, i is represented as the centroid of its descendants.

For a news story d_i, the comparison procedure and the inserting procedure based on the indexing-tree are defined as follows. An example is
shown in Figure 1 and Figure 2.

Figure 1. Comparison procedure

Figure 2. Inserting procedure

Comparison procedure: Step 1: compare d_i to all the direct child nodes of r and select the λ nodes with the highest similarities, e.g., C12 and C13 in Figure 1. Step 2: for each node selected in the previous step, e.g., C12, compare d_i to all of its direct child nodes, and select the λ nodes with the highest similarities, e.g., C22 and d8. Repeat step 2 for all non-terminal nodes. Step 3: record the terminal node with the highest similarity to d_i, e.g., s5, together with the similarity value (0.20).

Inserting d_i into the S-tree with root r: find the node n which is a direct child of r on the path from r to the terminal node with the highest similarity s, e.g., C12. If s is smaller than θ_init + (h − 1)δ, then add d_i to the tree as a direct child of r. Otherwise, if n is a terminal node, create a cluster node in its place and add both n and d_i as its direct children; if n is a non-terminal node, repeat this procedure and insert d_i into the sub-tree rooted at n recursively. Here h is the length of the path between n and the root of the S-tree. The more similar the stories in a cluster are to each other, the better the cluster represents them. Hence we place no constraints on the maximum height of the tree or on the degree of a node. Therefore, we cannot give the complexity of this indexing-tree based procedure, but we report the number of comparisons needed by the new procedure in our experiments in Section 7.

5. TERM REWEIGHTING METHODS

In this section, two term reweighting methods are proposed to improve NED accuracy. The first method explores a new way of making better use of cluster (topic) information. The second finds a better way to make use of named entities, based on news classification.

5.1 Term Reweighting Based on Distribution Distance

TF-IDF is the most prevalent model used in information retrieval systems. The basic idea is that the fewer documents a term
appears in, the more important the term is for discriminating documents (relevant or not relevant to a query containing the term). Nevertheless, in the TDT domain, we need to discriminate documents with regard to topics rather than queries. Intuitively, using cluster (topic) vectors for comparison with subsequent news stories should outperform using story vectors. Unfortunately, the experimental results do not support this intuition [4, 5]. Based on observation of the data, we find the reason is that a news topic usually contains many directly or indirectly related events, each with its own sub-subject, and these sub-subjects are usually different from each other. Take the topic described in Section 1 as an example: events like the explosion and the salvage have very low similarities with events about the criminal trial, so stories about the trial would have low similarity with a topic vector built on the preceding events. This section focuses on how to effectively make use of topic information while avoiding the problem of content decentralization. First, we classify terms into five classes to help analyze the requirements of the modified model:

Term class A: terms that occur frequently in the whole corpus, e.g., "year" and "people". Terms of this class should be given low weights because they do not help much in topic discrimination.

Term class B: terms that occur frequently within a news category, e.g., "election", "storm". They are useful for distinguishing two stories in different news categories. However, they cannot provide information for determining whether two stories are on the same or different topics. In other words, the terms "election" and "storm" are not helpful in differentiating two election campaigns or two storm disasters. Therefore, terms of this class should be assigned lower weights.

Term class C: terms that occur frequently in a topic and infrequently in other topics, e.g., the name of a crashed plane or the name of a specific
hurricane. News stories that belong to different topics rarely have overlapping terms in this class. The more frequently a term appears in a topic, the more important the term is for a story belonging to that topic, so the term should be given a higher weight.

Term class D: terms that appear exclusively in a topic, but not frequently. For example, the name of a fireman who did very well in a salvage action may appear in only two or three stories, but never in other topics. Terms of this type should receive more weight than in the TF-IDF model. However, since they are not common in the topic, it is not appropriate to give them too high a weight.

Term class E: terms with low document frequency that appear in different topics. Terms of this class should receive lower weights.

Now we analyze whether the TF-IDF model gives proper weights to the five classes of terms. Obviously, terms of class A are weighted low in the TF-IDF model, which conforms with the requirement described above. In the TF-IDF model, the weights of class-B terms are highly dependent on the number of stories in a news class; the model cannot provide low weights if the story containing the term belongs to a relatively small news class. For a term of class C, the more frequently it appears in a topic, the less weight the TF-IDF model gives to it. This strongly conflicts with the requirement for terms of class C.
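The conflict for class-C terms can be illustrated numerically with the idf factor log(N/df); the corpus size and document frequencies below are invented for illustration only:

```python
import math

# Illustrative sketch (hypothetical counts): how the idf factor log(N/df)
# treats the term classes above. N is the total number of stories.
N = 10000

def idf(df, n=N):
    # Inverse document frequency factor used in the TF-IDF weight.
    return math.log(n / df)

# Class A: "people" occurs in nearly every story -> weight near zero,
# as required.
idf_people = idf(9000)

# Class C: a specific hurricane name. As more stories in its topic
# mention it (df grows from 20 to 200), its idf *drops*, conflicting
# with the requirement that class-C terms gain importance.
idf_rare_topic_term = idf(20)
idf_popular_topic_term = idf(200)
```

This makes the conflict concrete: the class-A term is correctly suppressed, but the more prominent a class-C term becomes within its topic, the smaller its idf gets.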
For terms of class D, the TF-IDF model correctly gives high weights. But it also gives high weights to terms of class E, which does not conform with the requirement of low weights. To sum up, terms of classes B, C and E cannot be properly weighted in the TF-IDF model, so we propose a modified model to resolve this problem. When θ_init and θ_new are set to close values, we can assume that most of the stories in a first-level cluster (a direct child node of the root node) are on the same topic. Therefore, we use a first-level cluster to dynamically capture the term distribution (df for all terms within the cluster) inside the topic. The KL divergence between the term distribution in a first-level cluster C and that in the whole story set is used to adjust term weights, boosting each term weight in proportion to the term's contribution

KL_C(w) = (df_C(w) / N_C) · log[(df_C(w) / N_C) / (df_t(w) / N_t)]

scaled by the constant parameter γ, where df_C(w) is the number of documents containing term w within cluster C, N_C is the number of documents in cluster C, and N_t is the total number of documents that arrive before time step t. γ is currently set to 3 manually. The basic idea is: for a story in a topic, the more a term occurs within the topic and the less it occurs in other topics, the higher the weight it should be assigned. The modified model can meet all the requirements of the five term classes listed above.

5.2 Term Reweighting Based on Term Type and Story Class

Previous work found that some classes of news stories achieve good improvements when extra weight is given to named entities. But we find that terms of different types should be given different amounts of extra weight for different classes of news stories. We use open-NLP1 to recognize named-entity types and part-of-speech tags for the terms that appear in news stories. Named-entity types include person name, organization name, location name, date, time, money and percentage, and five POS tags are selected: noun (NN), verb (VB), adjective (JJ), adverb (RB) and cardinal number (CD). Statistical analysis reveals topic-level discriminative term types for
different classes of stories. For convenience, named-entity types and part-of-speech tags are uniformly called term types in the subsequent sections. Determining whether two stories are about the same topic is a basic component of the NED task, so we first use the χ² statistic to compute correlations between terms and topics. For a term t and a topic T, a 2 × 2 contingency table is derived (Table 1), with A the number of stories in T that contain t, B the number of stories outside T that contain t, C the number of stories in T that do not contain t, and D the number of stories outside T that do not contain t.

Table 1. A 2 × 2 Contingency Table

The χ² statistic for a specific term t with respect to topic T is defined as [16]:

χ²(t, T) = [(A + B + C + D) · (AD − CB)²] / [(A + C) · (B + D) · (A + B) · (C + D)]

News topics in the TDT task are further classified into 11 "rules of interpretation" (ROIs)2, and an ROI can be seen as a higher-level class of stories. The average correlation between a term type and a topic ROI is computed by averaging p(t, T) · χ²(t, T) over all pairs of a term t ∈ P_k and a topic T ∈ R_m, where K is the number of term types (set to 12 in this paper), M is the number of news classes (ROIs, set to 11 in this paper), P_k represents the set of all terms of type k, R_m represents the set of all topics of class m, and p(t, T) is the probability that t occurs in topic T.
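The χ² computation above can be sketched directly from the four contingency counts; the function name and the A/B/C/D layout (term presence crossed with topic membership, as in the standard 2 × 2 table) are assumptions of this sketch:

```python
def chi_square(a, b, c, d):
    """chi^2 statistic for a term t and topic T from 2x2 contingency
    counts. Assumed layout: a = stories in T containing t, b = stories
    outside T containing t, c = stories in T without t, d = stories
    outside T without t."""
    n = a + b + c + d
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    # A term distributed independently of the topic has a*d == c*b,
    # so the statistic is 0; strong association gives a large value.
    return n * (a * d - c * b) ** 2 / denom if denom else 0.0
```

For example, a term that occurs in every on-topic story and in no off-topic story yields a large value, while a term spread evenly inside and outside the topic yields 0.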
Because of space limitations, only part of the term types (9 of them) and part of the news classes (8 of them) are listed in table 2, together with the average correlation values between them.\nThe statistics are derived from labeled data in the TDT2 corpus.\n(Results in table 2 are normalized for ease of comparison.)\nThe statistics in table 2 indicate the usefulness of different term types for topic discrimination with respect to different news classes.\nWe can see that location name is the most useful term type for three news classes: Natural Disasters, Violence or War, and Finances.\nFor three other categories, Elections, Legal\/Criminal Cases, and Science and Discovery, person name is the most discriminative term type.\nFor Scandals\/Hearings, date is the most important information for topic discrimination.\nIn addition, Legal\/Criminal Cases and Finance topics have higher correlation with money terms, while Science and Discovery topics have higher correlation with percentage terms.\nNon-named-entity terms are more stable across the different classes.\nFrom the analysis of table 2, it is reasonable to adjust term weights according to their term type and the news class the story belongs to.\nNew term weights are computed as follows:\nwhere type (w) represents the type of term w, class (d) represents the class of story d, and \u03b1 c k is the reweighting parameter for news class c and term type k.\nIn this work, we simply use the statistics in table 2 as the reweighting parameters.\nEven though using the statistics directly may not be the best choice, we do not discuss how to obtain the best parameters automatically; we plan to use machine learning techniques to obtain them in future work.\nIn this work, we use BoosTexter [20] to classify all stories into one of the 11 ROIs.\nBoosTexter is a boosting-based machine learning program which creates a series of simple rules to build a classifier for text or attribute-value data.\nWe use term weights generated using the 
TF-IDF model as features for story classification.\nWe trained the model on the 12,000 judged English stories in TDT2, and classified the rest of the stories in TDT2 and all stories in TDT3.\nClassification results are used for term reweighting in formula (11).\nSince the class labels of off-topic stories are not given in the TDT datasets, we cannot report classification accuracy here.\nThus we do not discuss the effect of classification accuracy on NED performance in this paper.\n6.\nEXPERIMENTAL SETUP 6.1 Datasets\nWe used two LDC [18] datasets, TDT2 and TDT3, for our experiments.\nTDT2 contains news stories from January to June 1998.\nIt contains around 54,000 stories from sources such as ABC, Associated Press, CNN, New York Times, Public Radio International, and Voice of America.\nOnly English stories in the collection were considered.\nTDT3 contains approximately 31,000 English stories collected from October to December 1998.\nIn addition to the sources used in TDT2, it also contains stories from NBC and MSNBC TV broadcasts.\nWe used transcribed versions of the TV and radio broadcasts in addition to textual news.\nThe TDT2 dataset is labeled with about 100 topics, and approximately 12,000 English stories belong to at least one of them.\nThe TDT3 dataset is labeled with about 120 topics, and approximately 8,000 English stories belong to at least one of them.\nAll the topics are classified into 11 \"Rules of Interpretation\": (1) Elections, (2) Scandals\/Hearings, (3) Legal\/Criminal Cases, (4) Natural Disasters, (5) Accidents, (6) Ongoing Violence or War, (7) Science and Discovery News, (8) Finance, (9) New Law, (10) Sports News, (11) Miscellaneous News.\n6.2 Evaluation Metric\nTDT uses a cost function CDet that combines the probabilities of missing a new story and of a false alarm [19]:\nCDet = CMiss * PMiss * PTarget + CFA * PFA * PNontarget\nTable 2.\nAverage correlation between term types and news classes\nwhere CMiss is the cost of missing a new story, PMiss is the probability of missing a new story, and PTarget is the 
probability of seeing a new story in the data; CFA is the cost of a false alarm, PFA is the probability of a false alarm, and PNontarget is the probability of seeing an old story.\nThe cost CDet is normalized such that a perfect system scores 0 and a trivial system (the better of marking all stories as new or marking all as old) scores 1:\n(CDet)Norm = CDet \/ min (CMiss * PTarget, CFA * PNontarget)\nA new event detection system gives two outputs for each story.\nThe first is \"yes\" or \"no\", indicating whether the story triggers a new event or not.\nThe second is a score indicating the confidence of the first decision.\nConfidence scores can be used to plot DET curves, i.e., curves that plot false alarm vs. miss probabilities.\nThe minimum normalized cost is obtained when the optimal threshold on the score is chosen.\n7.\nEXPERIMENTAL RESULTS\n7.1 Main Results\nTo test the approaches proposed in the model, we implemented and tested five systems: System-1: this system is used as the baseline.\nIt is implemented based on the basic model described in section 3, i.e., using the incremental TF-IDF model to generate term weights and Hellinger distance to compute document similarity.\nSimilarity score normalization is also employed [8].\nThe S-S detection procedure is used.\nSystem-2: the same as system-1 except that the S-C detection procedure is used.\nSystem-3: the same as system-1 except that it uses the new detection procedure based on the indexing-tree.\nSystem-4: implemented based on the approach presented in section 5.1, i.e., terms are reweighted according to the distance between term distributions in a cluster and in the whole story set.\nThe new detection procedure is used.\nSystem-5: implemented based on the approach presented in section 5.2, i.e., terms of different types are reweighted according to news class using trained parameters.\nThe new detection procedure is used.\nThe following are some other NED systems: System-6: [21] for each pair of stories, it computes three similarity values for 
named entities, non-named entities, and all terms, respectively.\nIt then employs a Support Vector Machine to predict \"new\" or \"old\" using the similarity values as features.\nSystem-7: [8] extends a basic incremental TF-IDF model to include source-specific models, similarity score normalization based on document-specific averages, similarity score normalization based on source-pair-specific averages, etc.\nSystem-8: [13] splits the document representation into two parts, named entities and non-named entities, and chooses the more effective part for each news class.\nTables 3 and 4 show topic-weighted normalized costs and numbers of story comparisons on the TDT2 and TDT3 datasets respectively.\nSince no heldout data set was available for fine-tuning the threshold \u03b8 new in the experiments on TDT2, we only report minimum normalized costs for our systems in table 3.\nSystem-5 outperforms all other systems including system-6, and it performs only 2.78e+8 comparisons in the detection procedure, which is only 13.4% of system-1.\nTable 3.\nNED results on TDT2\nWhen evaluating the normalized costs on TDT3, we use the optimal thresholds obtained from the TDT2 data set for all systems.\nSystem-2 reduces the number of comparisons to 1.29e+9, just 18.3% of system-1, but at the same time its minimum normalized cost deteriorates, being 0.0499 higher than system-1's.\nSystem-3 uses the new detection procedure based on the news indexing-tree.\nIt requires even fewer comparisons than system-2.\nThis is because story-story comparisons usually yield greater similarities than story-cluster ones, so stories tend to be combined together in system-3.\nSystem-3 is essentially equivalent to system-1 in accuracy.\nSystem-4 adjusts term weights based on the distance between term distributions in the whole corpus and in a cluster's story set, yielding a good improvement of 0.0468 over system-1.\nThe best system (system-5) has a minimum normalized cost of 0.5012, which is 0.0797 better than 
system-1, and also better than any result previously reported for this dataset [8, 13].\nFurthermore, system-5 needs only 1.05e+8 comparisons, which is 14.9% of system-1.\nTable 4.\nNED results on TDT3\nFigure 5 shows the five DET curves for our systems on the TDT3 data set.\nSystem-5 achieves the minimum cost at a false alarm rate of 0.0157 and a miss rate of 0.4310.\nWe can observe that system-4 and system-5 obtain lower miss probabilities in regions of low false alarm probability.\nOur hypothesis is that weight is transferred from non-key terms to the key terms of topics.\nSimilarity scores between two stories belonging to different topics are then lower than before, because their overlapping terms are usually not key terms of their topics.\n7.2 Parameter selection for indexing-tree detection\nFigure 3 shows the minimum normalized costs obtained by system-3 on TDT3 using different parameters.\nThe \u03b8 init parameter is tested on six values spanning from 0.03 to 0.18, and the \u03bb parameter on four values: 1, 2, 3 and 4.\nWe can see that when \u03b8 init is set to 0.12, the value closest to \u03b8 new, the costs are lower than for the other settings.\nThis is easy to explain: when stories belonging to the same topic are put in one cluster, it is more reasonable for the cluster to represent the stories in it.\nWhen parameter \u03bb is set to 3 or 4, the costs are better than in the other cases, but there is little difference between 3 and 4.\nFigure 4.\nComparing times on TDT3 (\u03b4 = 0.15)\nFigure 4 gives the number of comparisons used by system-3 on TDT3 with the same parameters as figure 3.\nThe number of comparisons depends strongly on \u03b8 init: the greater \u03b8 init is, the fewer stories are combined together, and the more comparisons are needed for each new event decision.\nSo we use \u03b8 init = 0.13, \u03bb = 3, \u03b4 = 0.15 for systems 3, 4, and 5.\nWith this parameter setting, we obtain both low minimum normalized costs and fewer comparing 
times.\n8.\nCONCLUSION\nWe have proposed a news indexing-tree based detection procedure in our model.\nIt reduces the number of comparisons to about one seventh of that of the traditional method without hurting NED accuracy.\nWe have also presented two extensions to the basic TF-IDF model.\nThe first adjusts term weights based on the difference between term distributions in the whole corpus and in a cluster's story set.\nThe second makes better use of term types (named entity types and part-of-speech) according to news categories.\nOur experimental results on the TDT2 and TDT3 datasets show that both extensions contribute significantly to improvements in accuracy.\nWe did not consider news time information as a clue for the NED task, since most topics last for a long time and the TDT data sets span only a relatively short period (no more than 6 months).\nIn future work, we want to collect a news collection spanning a longer period from the Internet, and integrate time information into the NED task.\nSince a topic is a relatively coarse-grained news cluster, we also want to refine cluster granularity to the event level, and identify different events and their relations within a topic.","keyphrases":["new event detect","name entiti","term reweight approach","ned accuraci","term weight","statist","train data","linguist data consortium","baselin system","exist system","new stori stream","new volum","new index-tree","name entiti reweight mode","stori class","topic detect and track","real-time index"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","M","M","R","M","U"]} {"id":"J-4","title":"Revenue Analysis of a Family of Ranking Rules for Keyword Auctions","abstract":"Keyword auctions lie at the core of the business models of today's leading search engines. Advertisers bid for placement alongside search results, and are charged for clicks on their ads. 
Advertisers are typically ranked according to a score that takes into account their bids and potential clickthrough rates. We consider a family of ranking rules that contains those typically used to model Yahoo! and Google's auction designs as special cases. We find that in general neither of these is necessarily revenue-optimal in equilibrium, and that the choice of ranking rule can be guided by considering the correlation between bidders' values and click-through rates. We propose a simple approach to determine a revenue-optimal ranking rule within our family, taking into account effects on advertiser satisfaction and user experience. We illustrate the approach using Monte-Carlo simulations based on distributions fitted to Yahoo! bid and click-through rate data for a high-volume keyword.","lvl-1":"Revenue Analysis of a Family of Ranking Rules for Keyword Auctions S\u00e9bastien Lahaie \u2217 School of Engineering and Applied Sciences Harvard University, Cambridge, MA 02138 slahaie@eecs.harvard.edu David M. Pennock Yahoo! Research New York, NY 10011 pennockd@yahoo-inc.com ABSTRACT Keyword auctions lie at the core of the business models of today's leading search engines.\nAdvertisers bid for placement alongside search results, and are charged for clicks on their ads.\nAdvertisers are typically ranked according to a score that takes into account their bids and potential clickthrough rates.\nWe consider a family of ranking rules that contains those typically used to model Yahoo! 
and Google's auction designs as special cases.\nWe find that in general neither of these is necessarily revenue-optimal in equilibrium, and that the choice of ranking rule can be guided by considering the correlation between bidders' values and click-through rates.\nWe propose a simple approach to determine a revenue-optimal ranking rule within our family, taking into account effects on advertiser satisfaction and user experience.\nWe illustrate the approach using Monte-Carlo simulations based on distributions fitted to Yahoo! bid and click-through rate data for a high-volume keyword.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Economics, Theory 1.\nINTRODUCTION Major search engines like Google, Yahoo!, and MSN sell advertisements by auctioning off space on keyword search results pages.\nFor example, when a user searches the web for iPod, the highest paying advertisers (for example, Apple or Best Buy) for that keyword may appear in a separate sponsored section of the page above or to the right of the algorithmic results.\nThe sponsored results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a web page.\nGenerally, advertisements that appear in a higher position on the page garner more attention and more clicks from users.\nThus, all else being equal, advertisers prefer higher positions to lower positions.\nAdvertisers bid for placement on the page in an auction-style format where the larger their bid, the more likely their listing will appear above other ads on the page.\nBy convention, sponsored search advertisers generally bid and pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked.\nOverture Services, formerly GoTo.com and now owned by Yahoo! 
Inc., is credited with pioneering sponsored search advertising.\nOverture's success prompted a number of companies to adopt similar business models, most prominently Google, the leading web search engine today.\nMicrosoft's MSN, previously an affiliate of Overture, now operates its own keyword auction marketplace.\nSponsored search is one of the fastest growing, most effective, and most profitable forms of advertising, generating roughly $7 billion in revenue in 2005 after nearly doubling every year for the previous five years.\nThe search engine evaluates the advertisers' bids and allocates the positions on the page accordingly.\nNotice that, although bids are expressed as payments per click, the search engine cannot directly allocate clicks, but rather allocates impressions, or placements on the screen.\nClicks relate only stochastically to impressions.\nUntil recently, Yahoo! ranked bidders in decreasing order of advertisers' stated values per click, while Google ranks in decreasing order of advertisers' stated values per impression.\nIn Google's case, value per impression is computed by multiplying the advertiser's (per-click) bid by the advertisement's expected click-through rate, where this expectation may consider a number of unspecified factors including historical click-through rate, position on the page, advertiser identity, user identity, and the context of other items on the page.\nWe refer to these rules as rank-by-bid and rank-by-revenue, respectively.1 We analyze a family of ranking rules that contains the Yahoo! 
and Google models as special cases.\nWe consider ranking rules where bidders are ranked in decreasing order of score e^q b, where e denotes an advertiser's click-through rate (normalized for position) and b his bid.\n1 These are industry terms.\nWe will see, however, that rank-by-revenue is not necessarily revenue-optimal.\nNotice that q = 0 corresponds to Yahoo!'s rank-by-bid rule and q = 1 corresponds to Google's rank-by-revenue rule.\nOur premise is that bidders are playing a symmetric equilibrium, as defined by Edelman, Ostrovsky, and Schwarz [3] and Varian [11].\nWe show through simulation that although q = 1 yields the efficient allocation, settings of q considerably less than 1 can yield superior revenue in equilibrium under certain conditions.\nThe key parameter is the correlation between advertiser value and click-through rate.\nIf this correlation is strongly positive, then smaller q are revenue-optimal.\nOur simulations are based on distributions fitted to data from Yahoo! 
keyword auctions.\nWe propose that search engines set thresholds of acceptable loss in advertiser satisfaction and user experience, then choose the revenue-optimal q consistent with these constraints.\nWe also compare the potential gains from tuning q with the gains from setting reserve prices, and find that the former may be much more significant.\nIn Section 2 we give a formal model of keyword auctions, and establish its equilibrium properties in Section 3.\nIn Section 4 we note that giving agents bidding credits can have the same effect as tuning the ranking rule explicitly.\nIn Section 5 we give a general formulation of the optimal keyword auction design problem as an optimization problem, in a manner analogous to the single-item auction setting.\nWe then provide some theoretical insight into how tuning q can improve revenue, and why the correlation between bidders' values and click-through rates is relevant.\nIn Section 6 we consider the effect of q on advertiser satisfaction and user experience.\nIn Section 7 we describe our simulations and interpret their results.\nRelated work.\nAs mentioned, the papers of Edelman et al. 
[3] and Varian [11] lay the groundwork for our study.\nBoth papers independently define an appealing refinement of Nash equilibrium for keyword auctions and analyze its equilibrium properties.\nThey called this refinement locally envy-free equilibrium and symmetric equilibrium, respectively.\nVarian also provides some empirical analysis.\nThe general model of keyword auctions used here, where bidders are ranked according to a weight times their bid, was introduced by Aggarwal, Goel, and Motwani [1].\nThat paper also makes a connection between the revenue of keyword auctions in incomplete information settings and the revenue in symmetric equilibrium.\nIyengar and Kumar [5] study the optimal keyword auction design problem in a setting of incomplete information, and also make the connection to symmetric equilibrium.\nWe make use of this connection when formulating the optimal auction design problem in our setting.\nThe work most closely related to ours is that of Feng, Bhargava, and Pennock [4].\nThey were the first to realize that the correlation between bidder values and click-through rates should be a key parameter affecting the revenue performance of various ranking mechanisms.\nFor simplicity, they assume bidders bid their true values, so their model is very different from ours and consequently so are their findings.\nAccording to their simulations, rank-by-revenue always (weakly) dominates rank-by-bid in terms of revenue, whereas our results suggest that rank-by-bid may do much better for positive correlations.\nLahaie [8] gives an example that suggests rank-by-bid should yield more revenue when values and click-through rates are positively correlated, whereas rank-by-revenue should do better when the correlation is negative.\nIn this work we make a deeper study of this conjecture.\n2.\nMODEL There are K positions to be allocated among N bidders, where N > K.\nWe assume that the (expected) click-through rate of bidder s in position t is of the form e_s x_t, i.e. 
separable into an advertiser effect e_s \u2208 [0, 1] and a position effect x_t \u2208 [0, 1].\nWe assume that x_1 > x_2 > ... > x_K > 0 and let x_t = 0 for t > K.\nWe also refer to e_s as the relevance of bidder s.\nIt is useful to interpret x_t as the probability that an ad in position t will be noticed, and e_s as the probability that it will be clicked on if noticed.\nBidder s has value v_s for each click.\nBidders have quasilinear utility, so that the utility to bidder s of obtaining position t at a price of p per click is e_s x_t (v_s \u2212 p).\nA weight w_s is associated with agent s, and agents bid for position.\nIf agent s bids b_s, his corresponding score is w_s b_s.\nAgents are ranked by score, so that the agent with the highest score is ranked first, and so on.\nWe assume throughout that agents are numbered such that agent s obtains position s.\nAn agent pays per click the lowest bid necessary to retain his position, so that the agent in slot s pays (w_{s+1} \/ w_s) b_{s+1}.\nThe auctioneer may introduce a reserve score of r, so that an agent's ad appears only if his score is at least r. 
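The ranking and pricing rules just described can be captured in a short sketch. This is an illustration of the mechanism, not code from the paper; ties are broken arbitrarily:

```python
def allocate(bidders, K, r=0.0):
    """Rank bidders by score w*b and charge next-score prices.

    bidders: list of (weight w, bid b) pairs; K: number of slots;
    r: reserve score (an ad is shown only if its score is at least r).
    Returns a list of (bidder_index, price_per_click) per filled slot.
    """
    # Keep only bidders whose score clears the reserve, highest score first.
    eligible = sorted(
        ((w * b, i, w) for i, (w, b) in enumerate(bidders) if w * b >= r),
        reverse=True,
    )
    slots = []
    for s in range(min(K, len(eligible))):
        score, i, w = eligible[s]
        # The agent in slot s pays (w_{s+1}/w_s) * b_{s+1} per click, i.e. the
        # next score divided by his own weight; the last shown ad faces the
        # reserve-implied minimum.
        next_score = eligible[s + 1][0] if s + 1 < len(eligible) else r
        slots.append((i, next_score / w))
    return slots
```

Setting w_s = 1 for all bidders gives the rank-by-bid rule; setting w_s to the advertiser effect e_s gives rank-by-revenue.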
For agent s, this translates into a reserve price (minimum bid) of r\/w_s.\n3.\nEQUILIBRIUM We consider the pure-strategy Nash equilibria of the auction game.\nThis is a full-information concept.\nThe motivation for this choice is that in a keyword auction, bidders are allowed to continuously adjust their bids over time, and hence obtain estimates of their profits in various positions.\nAs a result it is reasonable to assume that if bids stabilize, bidders should be playing best-responses to each other's bids [2, 3, 11].\nFormally, in a Nash equilibrium of this game the following inequalities hold:\ne_s x_s (v_s \u2212 (w_{s+1} \/ w_s) b_{s+1}) \u2265 e_s x_t (v_s \u2212 (w_{t+1} \/ w_s) b_{t+1}) for all t > s (1)\ne_s x_s (v_s \u2212 (w_{s+1} \/ w_s) b_{s+1}) \u2265 e_s x_t (v_s \u2212 (w_t \/ w_s) b_t) for all t < s (2)\nInequalities (1) and (2) state that bidder s does not prefer a lower or a higher position to his own, respectively.\nIt can be hard to derive any theoretical insight into the properties of these Nash equilibria: multiple allocations of positions to bidders can potentially arise in equilibrium [2].\nEdelman, Ostrovsky, and Schwarz [3] introduced a refinement of Nash equilibrium called locally envy-free equilibrium that is more tractable to analyze; Varian [11] independently proposed this solution concept and called it symmetric equilibrium.\nIn a symmetric equilibrium, inequality (1) holds for all s, t rather than just for t > s.\nSo for all s and all t \u2260 s, we have\ne_s x_s (v_s \u2212 (w_{s+1} \/ w_s) b_{s+1}) \u2265 e_s x_t (v_s \u2212 (w_{t+1} \/ w_s) b_{t+1}),\nor equivalently\nx_s (w_s v_s \u2212 w_{s+1} b_{s+1}) \u2265 x_t (w_s v_s \u2212 w_{t+1} b_{t+1}).\n(3) Edelman et al. 
[3] note that this equilibrium arises if agents are raising their bids to increase the payments of those above them, a practice which is believed to be common in actual keyword auctions.\nVarian [11] provides some empirical evidence that Google bid data agrees well with the hypothesis that bidders are playing a symmetric equilibrium.\nVarian does a thorough analysis of the properties of symmetric equilibrium, assuming w_s = e_s = 1 for all bidders.\nIt is straightforward to adapt his analysis to the case where bidders are assigned arbitrary weights and have separable click-through rates.2 As a result we find that in symmetric equilibrium, bidders are ranked in order of decreasing w_s v_s.\nTo be clear, although the auctioneer only has access to the bids b_s and not the values v_s, in symmetric equilibrium the bids are such that ranking according to w_s b_s is equivalent to ranking according to w_s v_s.\nThe smallest possible bid profile that can arise in symmetric equilibrium is given by the recursion\nx_s w_{s+1} b_{s+1} = (x_s \u2212 x_{s+1}) w_{s+1} v_{s+1} + x_{s+1} w_{s+2} b_{s+2}.\nIn this work we assume that bidders are playing the smallest symmetric equilibrium.\nThis is an appropriate selection for our purposes: by optimizing revenue in this equilibrium, we are optimizing a lower bound on the revenue in any symmetric equilibrium.\nUnraveling the recursion yields\nx_s w_{s+1} b_{s+1} = \u03a3_{t=s}^{K} (x_t \u2212 x_{t+1}) w_{t+1} v_{t+1}.\n(4) Agent s's total expected payment is e_s \/ w_s times the quantity on the left-hand side of (4).\nThe base case of the recursion occurs for s = K, where we find that the first excluded bidder bids his true value, as in the original analysis.\nMultiplying each of the equalities (4) by the corresponding e_s \/ w_s to obtain total payments, and summing over all positions, we obtain a total equilibrium revenue of\n\u03a3_{s=1}^{K} \u03a3_{t=s}^{K} (w_{t+1} \/ w_s) e_s (x_t \u2212 x_{t+1}) v_{t+1}.\n(5) To summarize, the minimum possible revenue in symmetric equilibrium can be computed as follows, given the agents' relevance-value pairs (e_s, 
v_s): first rank the agents in decreasing order of w_s v_s, and then evaluate (5).\nWith a reserve score of r, it follows from inequality (3) that no bidder with w_s v_s < r would want to participate in the auction.\nLet K(r) be the number of bidders with w_s v_s \u2265 r, and assume it is at most K.\nWe can impose a reserve score of r by introducing a bidder with value r and weight 1, and making him the first excluded bidder (who in symmetric equilibrium bids truthfully).\nIn this case the recursion yields\nx_s w_{s+1} b_{s+1} = \u03a3_{t=s}^{K(r)\u22121} (x_t \u2212 x_{t+1}) w_{t+1} v_{t+1} + x_{K(r)} r\nand the revenue formula is adapted similarly.\n2 If we redefine w_s v_s to be v_s and w_s b_s to be b_s, we recover Varian's setup and his original analysis goes through unchanged.\n4.\nBIDDING CREDITS An indirect way to influence the allocation is to introduce bidding credits.3 Suppose bidder s is only required to pay a fraction c_s \u2208 [0, 1] of the price he faces, or equivalently a (1 \u2212 c_s) fraction of his clicks are received for free.\nThen in a symmetric equilibrium, we have\ne_s x_s (v_s \u2212 (w_{s+1} \/ w_s) c_s b_{s+1}) \u2265 e_s x_t (v_s \u2212 (w_{t+1} \/ w_s) c_s b_{t+1}),\nor equivalently\nx_s ((w_s \/ c_s) v_s \u2212 w_{s+1} b_{s+1}) \u2265 x_t ((w_s \/ c_s) v_s \u2212 w_{t+1} b_{t+1}).\nIf we define w'_s = w_s \/ c_s and b'_s = c_s b_s, we recover inequality (3).\nHence the equilibrium revenue will be as if we had used weights w' rather than w.\nThe bids will be scaled versions of the bids that arise with weights w' (and no credits), where each bid is scaled by the corresponding factor 1\/c_s.\nThis technique allows one to use credits instead of explicit changes in the weights to affect revenue.\nFor instance, rank-by-revenue will yield the same revenue as rank-by-bid if we set credits to c_s = e_s.\n5.\nREVENUE We are interested in setting the weights w to achieve optimal expected revenue.\nThe setup is as follows.\nThe auctioneer chooses a function g so that the weighting scheme is w_s \u2261 g(e_s).\nWe do not consider weights that also depend on 
the agents'' bids because this would invalidate the equilibrium analysis of the previous section.4 A pool of N bidders is then obtained by i.i.d. draws of value-relevance pairs from a common probability density f(es, vs).\nWe assume the density is continuous and has full support on [0, 1]\u00d7[0, \u221e).\nThe revenue to the auctioneer is then the revenue generated in symmetric equilibrium under weighting scheme w.\nThis assumes the auctioneer is patient enough not to care about revenue until bids have stabilized.\nThe problem of finding an optimal weighting scheme can be formulated as an optimization problem very similar to the one derived by Myerson [9] for the single-item auction case (with incomplete information).\nLet Qsk(e, v; w) = 1 if agent s obtains slot k in equilibrium under weighting scheme w, where e = (e1, ... , eN ) and v = (v1, ... , vN ), and let it be 0 otherwise.\nNote that the total payment of agent s in equilibrium is esxs ws+1 ws bs+1 = KX t=s es(xt \u2212 xt+1) wt+1 ws vt+1 = esxsvs \u2212 Z vs 0 KX k=1 esxkQsk(es, e\u2212s, y, v\u2212s; w) dy.\nThe derivation then continues just as in the case of a singleitem auction [7, 9].\nWe take the expectation of this payment, 3 Hal Varian suggested to us that bidding credits could be used to affect revenue in keyword auctions, which prompted us to look into this connection.\n4 The analysis does not generalize to weights that depend on bids.\nIt is unclear whether an equilibrium would exist at all with such weights.\n52 and sum over all agents to obtain the objective Z \u221e 0 Z \u221e 0 '' NX s=1 KX k=1 esxk\u03c8(es, vs)Qsk(e, v; w) # f(e, v) dv de, where \u03c8 is the virtual valuation \u03c8(es, vs) = vs \u2212 1 \u2212 F(vs|es) f(vs|es) .\nAccording to this analysis, we should rank bidders by virtual score es\u03c8(es, vs) to optimize revenue (and exclude any bidders with negative virtual score).\nHowever, unlike in the incomplete information setting, here we are constrained to ranking rules 
that correspond to a certain weighting scheme w_s \u2261 g(e_s).\nWe remark that the virtual score cannot be reproduced exactly via a weighting scheme.\nLemma 1.\nThere is no weighting scheme g such that the virtual score equals the score, for any density f. Proof.\nAssume there is a g such that e\u03c8(e, v) = g(e)v. (The subscript s is suppressed for clarity.)\nThis is equivalent to\nd\/dv log(1 \u2212 F(v|e)) = h(e)\/v, (6)\nwhere h(e) = (g(e)\/e \u2212 1)^{\u22121}.\nLet \u00afv be such that F(\u00afv|e) < 1; under the assumption of full support, there is always such a \u00afv. Integrating (6) with respect to v from 0 to \u00afv, we find that the left-hand side converges whereas the right-hand side diverges, a contradiction.\nOf course, to rank bidders by virtual score, we only need g(e_s)v_s = h(e_s \u03c8(e_s, v_s)) for some monotonically increasing transformation h. (A necessary condition for this is that \u03c8(e_s, v_s) be increasing in v_s for all e_s.)\nAbsent this regularity condition, the optimization problem seems quite difficult because it is so general: we need to maximize expected revenue over the space of all functions g. 
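Once attention is restricted to a one-parameter family of weighting schemes, the revenue curve can be traced numerically: draw pools of (relevance, value) pairs, rank each pool by e^q v, and evaluate the equilibrium revenue formula (5) with w_s = e_s^q. The sketch below uses hypothetical position effects and independent uniform draws purely for illustration; the paper instead fits distributions to Yahoo! data:

```python
import random

def equilibrium_revenue(bidders, x, q):
    """Minimum symmetric-equilibrium revenue, formula (5), with w_s = e_s**q.

    bidders: (relevance e, value v) pairs, more than len(x) of them;
    x: position effects x_1 > ... > x_K > 0.
    """
    K = len(x)
    ranked = sorted(bidders, key=lambda ev: ev[0] ** q * ev[1], reverse=True)
    xs = list(x) + [0.0]                          # x_t = 0 for t > K
    total = 0.0
    for s in range(min(K, len(ranked) - 1)):      # slot s (0-indexed)
        e_s = ranked[s][0]
        for t in range(s, min(K, len(ranked) - 1)):
            e_next, v_next = ranked[t + 1]        # bidder in slot t+1
            # term: (w_{t+1} / w_s) * e_s * (x_t - x_{t+1}) * v_{t+1}
            total += (e_next / e_s) ** q * e_s * (xs[t] - xs[t + 1]) * v_next
    return total

# Monte-Carlo revenue curve over q (independent uniform draws: illustrative only)
random.seed(0)
pools = [[(random.random(), random.random()) for _ in range(6)] for _ in range(500)]
curve = {q: sum(equilibrium_revenue(p, [1.0, 0.6, 0.3], q) for p in pools) / len(pools)
         for q in (0.0, 0.5, 1.0)}
```

Locating the revenue-optimal q is then a one-dimensional search over such a curve; the shape of the curve will depend on the joint density of relevance and value.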
To simplify matters, we now restrict our attention to the family of weights w_s = e_s^q for q ∈ (−∞, +∞). It should be much simpler to find the optimum within this family, since it is just one-dimensional. Note that it covers rank-by-bid (q = 0) and rank-by-revenue (q = 1) as special cases. To see how tuning q can improve matters, consider again the equilibrium revenue:

R(q) = Σ_{s=1}^{K} Σ_{t=s}^{K} (e_{t+1}/e_s)^q e_s (x_t − x_{t+1}) v_{t+1}.    (7)

If the bidders are ranked in decreasing order of relevance, then e_t/e_s ≤ 1 for t > s, and decreasing q slightly without affecting the allocation will increase revenue. Similarly, if bidders are ranked in increasing order of relevance, increasing q slightly will yield an improvement. Now suppose there is perfect positive correlation between value and relevance. In this case, rank-by-bid will always lead to the same allocation as rank-by-revenue, and bidders will always be ranked in decreasing order of relevance. It then follows from (7) that q = 0 will yield more revenue in equilibrium than q = 1.⁵ If a good estimate of f is available, Monte-Carlo simulations can be used to estimate the revenue curve as a function of q, and the optimum can be located. Simulations can also be used to quantify the effect of correlation on the location of the optimum. We do this in Section 7.

⁵ It may appear that this contradicts the revenue-equivalence theorem [7, 9], because mechanisms that always lead to the same allocation in equilibrium should yield the same revenue. Note though that with perfect correlation, there are

6. EFFICIENCY AND RELEVANCE

In principle the revenue-optimal parameter q may lie anywhere in (−∞, ∞). However, tuning the ranking rule also has consequences for advertiser satisfaction and user experience, and taking these into account reduces the range of allowable q. The total relevance of the equilibrium allocation is

L(q) = Σ_{s=1}^{K} e_s x_s,

i.e. the aggregate click-through rate. Presumably users find the ad display more interesting and less of a nuisance if they are more inclined to click on the ads, so we adopt total relevance as a measure of user experience. Let p_s = (w_{s+1}/w_s) b_{s+1} be the price per click faced by bidder s. The total value (efficiency) generated by the auction in equilibrium is

V(q) = Σ_{s=1}^{K} e_s x_s v_s = Σ_{s=1}^{K} e_s x_s (v_s − p_s) + Σ_{s=1}^{K} e_s x_s p_s.

As we see, total value can be reinterpreted as total profits to the bidders and auctioneer combined. Since we only consider deviations from maximum efficiency that increase the auctioneer's profits, any decrease in efficiency in our setting corresponds to a decrease in bidder profits. We therefore adopt efficiency as a measure of advertiser satisfaction. We would expect total relevance to increase with q, since more weight is placed on each bidder's individual relevance. We would expect efficiency to be maximized at q = 1, since in this case a bidder's weight is exactly his relevance.

Proposition 1. Total relevance is non-decreasing in q.
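Propositions 1 and 2 can also be checked numerically. The sketch below ranks bidders by the score e^q v (the symmetric-equilibrium order) and evaluates L(q) and V(q) over a grid of q; the position effects and the bidder draws are illustrative assumptions, not the paper's data, though the marginal parameters are the fitted values reported in Section 7.

```python
import numpy as np

def ranked(v, e, q):
    """Order bidders by decreasing score e^q * v (symmetric-equilibrium ranking)."""
    order = np.argsort(-(e ** q) * v)
    return v[order], e[order]

def total_relevance(v, e, x, q):
    """L(q) = sum over the K slots of e_s x_s."""
    _, e_r = ranked(v, e, q)
    return float(np.sum(e_r[: len(x)] * x))

def total_value(v, e, x, q):
    """V(q) = sum over the K slots of e_s x_s v_s (efficiency)."""
    v_r, e_r = ranked(v, e, q)
    return float(np.sum(e_r[: len(x)] * x * v_r[: len(x)]))

rng = np.random.default_rng(0)
v = rng.lognormal(0.35, 0.71, 8)          # values: fitted lognormal marginal (Section 7)
e = rng.beta(2.71, 25.43, 8)              # relevances: fitted beta marginal (Section 7)
x = np.array([1.0, 0.8, 0.6, 0.4, 0.2])   # hypothetical position effects, 5 slots

qs = np.linspace(-2.0, 2.0, 41)
L = [total_relevance(v, e, x, q) for q in qs]
V = [total_value(v, e, x, q) for q in qs]
# Proposition 1: L is non-decreasing in q.
# Proposition 2: V is maximized at q = 1 (ranking by e*v is efficiency-optimal).
```

Because both propositions hold pointwise (for any fixed set of bidders), the monotonicity shows up for every random draw, not just on average.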
Proof. Recall that in symmetric equilibrium, bidders are ranked in order of decreasing w_s v_s. Let ε > 0. Perform an exchange sort to obtain the ranking that arises with q + ε starting from the ranking that arises with q (for a description of exchange sort and its properties, see Knuth [6], pp. 106–110). Assume that ε is large enough to make the rankings distinct. Agents s and t, where s is initially ranked lower than t, are swapped in the process if and only if the following conditions hold:

e_s^q v_s ≤ e_t^q v_t
e_s^{q+ε} v_s > e_t^{q+ε} v_t,

which together imply that e_s^ε > e_t^ε and hence e_s > e_t, as ε > 0. At some point in the sort, agent s occupies some slot k while agent t occupies slot k − 1. After the swap, total relevance will have changed by the amount

e_s x_{k−1} + e_t x_k − e_t x_{k−1} − e_s x_k = (e_s − e_t)(x_{k−1} − x_k) > 0.

As relevance strictly increases with each swap in the sort, total relevance is strictly greater when using q + ε rather than q.

⁵ (continued) α, β such that v_s = α e_s + β. So the assumption of full support is violated, which is necessary for revenue equivalence. Recall that a density has full support over a given domain if every point in the domain has positive density.

Proposition 2. Total value is non-decreasing in q for q ≤ 1 and non-increasing in q for q ≥ 1.

Proof. Let q ≥ 1 and let ε > 0. Perform an exchange sort to obtain the second ranking from the first as in the previous proof. If agents s and t are swapped, where s was initially ranked lower than t, then e_s > e_t. This follows by the same reasoning as in the previous proof. Now e_s^{1−q} ≤ e_t^{1−q}, as 1 − q ≤ 0. This together with e_s^q v_s ≤ e_t^q v_t implies that e_s v_s ≤ e_t v_t. Hence after swapping agents s and t, total value has not increased. The case for q ≤ 1 is similar.

Since the trends described in Propositions 1 and 2 hold pointwise (i.e. for any set of bidders), they also hold in expectation. Proposition 2 confirms that efficiency is indeed maximized at q = 1. These results motivate the following approach. Although tuning q can optimize current revenue, this may come at the price of future revenue because advertisers and users may be lost, seeing as their satisfaction decreases. To guarantee future revenue will not be hurt too much, the auctioneer can impose bounds on the percent efficiency and relevance loss he is willing to tolerate, with q = 1 being a natural baseline. By Proposition 2, a lower bound on efficiency will yield upper and lower bounds on the search space for q. By Proposition 1, a lower bound on relevance will yield another lower bound on q. The revenue curve can then be plotted within the allowable range of q to find the revenue-optimal setting.

7. SIMULATIONS

To add a measure of reality to our simulations, we fit distributions for value and relevance to Yahoo! bid and click-through rate data for a certain keyword that draws over a million searches per month. (We do not reveal the identity of the keyword to respect the privacy of the advertisers.) We obtained click and impression data for the advertisers bidding on the keyword. From this we estimated advertiser and position effects using a maximum-likelihood criterion. We found that, indeed, position effects are monotonically decreasing with lower rank. We then fit a beta distribution to the advertiser effects, resulting in parameters a = 2.71 and b = 25.43. We obtained bids of advertisers for the keyword. Using Varian's [11] technique, we derived bounds on the bidders' actual values given these bids. By this technique, upper and lower bounds are obtained on bidder values given the bids according to inequality (3). If the interval for a given value is empty, i.e.
its upper bound lies below its lower bound, then we compute the smallest perturbation to the bids necessary to make the interval non-empty, which involves solving a quadratic program. We found that the mean absolute deviation required to fit bids to symmetric equilibrium was always at most 0.08, and usually significantly less, over different days in a period of two weeks.⁶ We fit a lognormal distribution to the lower bounds on the bidders' values, resulting in parameters μ = 0.35 and σ = 0.71. The empirical distributions of value and relevance together with the fitted lognormal and beta curves are given in Figure 1.

Figure 1: Empirical marginal distributions of value and relevance.

It appears that mixtures of beta and lognormal distributions might be better fits, but since these distributions are used mainly for illustration purposes, we err on the side of simplicity. We used a Gaussian copula to create dependence between value and relevance.⁷ Given the marginal distributions for value and relevance together with this copula, we simulated the revenue effect of varying q for different levels of Spearman correlation, with 12 slots and 13 bidders. The results are shown in Figure 2.⁸ It is apparent from the figure that the optimal choice of q moves to the right as correlation decreases; this agrees with our intuition from Section 5. The choice is very sensitive to the level of correlation. If choosing only between rank-by-bid and rank-by-revenue, rank-by-bid is best for positive correlation whereas rank-by-revenue is best for negative correlation. At zero correlation, they give about the same expected revenue in this instance. Figure 2 also shows that in principle, the optimal q may be negative. It may also occur beyond 1 for different distributions, but we do not know if these would be realistic. The trends in
efficiency and relevance are as described in the results from Section 6. (Any small deviation from these trends is due to the randomness inherent in the simulations.) The curves level off as q → +∞ because eventually agents are ranked purely according to relevance, and similarly as q → −∞. A typical Spearman correlation between value and relevance for the keyword was about 0.4: for different days in a week the correlation lay within [0.36, 0.55]. Simulation results with this correlation are in Figure 3. In this instance rank-by-bid is in fact optimal, yielding 25% more revenue than rank-by-revenue. However, at q = 0 efficiency and relevance are 9% and 17% lower than at q = 1, respectively. Imposing a bound of, say, 5% on efficiency and relevance loss from the baseline at q = 1, the optimal setting is q = 0.6, yielding 11% more revenue than the baseline.

⁶ See Varian [11] for a definition of mean absolute deviation.
⁷ A copula is a function that takes marginal distributions and gives a joint distribution with these marginals. It can be designed so that the variables are correlated. See for example Nelsen [10].
⁸ The y-axes in Figures 2–4 have been normalized because the simulations are based on proprietary data. Only relative values are meaningful.

Figure 2: Revenue, efficiency, and relevance for different parameters q under varying Spearman correlation (key at right). Estimated standard errors are less than 1% of the values shown.

Figure 3: Revenue, efficiency, and
relevance for different parameters q with Spearman correlation of 0.4. Estimated standard errors are less than 1% of the values shown.

We also looked into the effect of introducing a reserve score. Results are shown in Figure 4. Naturally, both efficiency and relevance suffer with an increasing reserve score. The optimal setting is r = 0.2, which gives only an 8% increase in revenue from r = 0. However, it results in a 13% efficiency loss and a 26% relevance loss. Tuning weights seems to be a much more desirable approach than introducing a reserve score in this instance. The reason why efficiency and relevance suffer more with a reserve score is that this approach will often exclude bidders entirely, whereas this never occurs when tuning weights. The two approaches are not mutually exclusive, however, and some combination of the two might prove better than either alone, although we did not investigate this possibility.

8. CONCLUSIONS

In this work we looked into the revenue properties of a family of ranking rules that contains the Yahoo! and Google models as special cases. In practice, it should be very simple to move between rules within the family: this simply involves changing the exponent q applied to advertiser effects. We also showed that, in principle, the same effect could be obtained by using bidding credits. Despite the simplicity of the rule change, simulations revealed that properly tuning q can significantly improve revenue. In the simulations, the revenue improvements were greater than what could be obtained using reserve prices. On the other hand, we showed that advertiser satisfaction and user experience could suffer if q is made too small. We proposed that the auctioneer set bounds on the decrease in advertiser and user satisfaction he is willing to tolerate, which would imply bounds on the range of allowable q.
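The procedure used in Sections 5–7 — sample correlated (value, relevance) pairs through a Gaussian copula, evaluate the equilibrium revenue (7) at each q, and search the allowable range of q — can be sketched as follows. This is only an illustrative sketch: the position effects, number of draws, and the q grid are hypothetical choices, and the Gaussian correlation parameter only approximates the target Spearman correlation; the marginal parameters (lognormal μ = 0.35, σ = 0.71; beta a = 2.71, b = 25.43) and the 13-bidder, 12-slot setup are the ones reported in Section 7.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Empirical inverse CDF for the fitted Beta(2.71, 25.43) relevance marginal
# (a table lookup over sorted draws, avoiding a SciPy dependency).
_beta_tab = np.sort(rng.beta(2.71, 25.43, 200_000))

def beta_ppf(u):
    idx = np.minimum((u * len(_beta_tab)).astype(int), len(_beta_tab) - 1)
    return _beta_tab[idx]

def sample_bidders(n, rho):
    """Gaussian copula with correlation rho: lognormal(0.35, 0.71) values,
    Beta(2.71, 25.43) relevances (the fitted marginals from Section 7)."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    v = np.exp(0.35 + 0.71 * z[:, 0])                 # lognormal ppf applied to Phi(z1)
    u = np.array([0.5 * (1 + erf(t / sqrt(2))) for t in z[:, 1]])  # Phi(z2)
    return v, beta_ppf(u)

def equilibrium_revenue(v, e, x, q):
    """Symmetric-equilibrium revenue, eq. (7), after ranking by score e^q * v."""
    order = np.argsort(-(e ** q) * v)
    v, e = v[order], e[order]
    K = len(x)
    x = np.append(x, 0.0)                             # x_{K+1} = 0
    return sum((e[t + 1] / e[s]) ** q * e[s] * (x[t] - x[t + 1]) * v[t + 1]
               for s in range(K) for t in range(s, K))

# Monte-Carlo revenue curve: 13 bidders, 12 slots, correlation near 0.4.
x = 0.9 ** np.arange(12)                              # hypothetical position effects
qs = np.linspace(-1.0, 2.0, 13)
R = np.zeros_like(qs)
draws = 200
for _ in range(draws):
    v, e = sample_bidders(13, rho=0.4)
    R += np.array([equilibrium_revenue(v, e, x, q) for q in qs])
R /= draws
q_opt = qs[np.argmax(R)]                              # revenue-optimal q on this grid
```

With the efficiency and relevance bounds of Section 6 in hand, one would restrict `qs` to the allowable interval before taking the argmax.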
With appropriate estimates for the distributions of value and relevance, and knowledge of their correlation, the revenue curve can then be plotted within this range to locate the optimum.

There are several ways to push this research further. It would be interesting to do this analysis for a variety of keywords, to see if the optimal setting of q is always so sensitive to the level of correlation. If it is, then simply using rank-by-bid where there is positive correlation, and rank-by-revenue where there is negative correlation, could be fine to a first approximation and already improve revenue. It would also be interesting to compare the effects of tuning q versus reserve pricing for keywords that have few bidders. In this instance reserve pricing should be more competitive, but this is still an open question.

In principle the minimum revenue in Nash equilibrium can be found by linear programming. However, many allocations can arise in Nash equilibrium, and a linear program needs to be solved for each of these. There is as yet no efficient way to enumerate all possible Nash allocations, so finding the minimum revenue is currently infeasible. If this problem could be solved, we could run simulations for Nash equilibrium instead of symmetric equilibrium, to see if our insights are robust to the choice of solution concept.

Larger classes of ranking rules could be relevant. For instance, it is possible to introduce discounts d_s and rank according to w_s b_s − d_s; the equilibrium analysis generalizes to this case as well. With this larger class the virtual score can equal the score, e.g.
in the case of a uniform marginal distribution over values. It is unclear, though, whether such extensions help with more realistic distributions.

Figure 4: Revenue, efficiency, and relevance for different reserve scores r, with Spearman correlation of 0.4 and q = 1. Estimates are averaged over 1000 samples.

Acknowledgements
We thank Pavel Berkhin, Chad Carson, Yiling Chen, Ashvin Kannan, Darshan Kantak, Chris LuVogt, Jan Pedersen, Michael Schwarz, Tong Zhang, and other members of Yahoo! Research and Yahoo! Search Marketing.

9. REFERENCES
[1] G. Aggarwal, A. Goel, and R. Motwani. Truthful auctions for pricing search keywords. In Proceedings of the 7th ACM Conference on Electronic Commerce, Ann Arbor, MI, 2006.
[2] T. Börgers, I. Cox, M. Pesendorfer, and V. Petricek. Equilibrium bids in auctions of sponsored links: Theory and evidence. Working paper, November 2006.
[3] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the Generalized Second Price auction: Selling billions of dollars worth of keywords. American Economic Review, forthcoming.
[4] J. Feng, H. K. Bhargava, and D. M. Pennock. Implementing sponsored search in Web search engines: Computational evaluation of alternative mechanisms. INFORMS Journal on Computing, forthcoming.
[5] G. Iyengar and A. Kumar. Characterizing optimal keyword auctions. In Proceedings of the 2nd Workshop on Sponsored Search Auctions, Ann Arbor, MI, 2006.
[6] D. Knuth. The Art of Computer Programming, volume 3. Addison-Wesley, 1997.
[7] V. Krishna. Auction Theory. Academic Press, 2002.
[8] S. Lahaie. An analysis of alternative slot auction designs for sponsored search. In Proceedings of the 7th ACM Conference on Electronic Commerce, Ann Arbor, MI, 2006.
[9] R. B.
Myerson. Optimal auction design. Mathematics of Operations Research, 6(1), February 1981.
[10] R. B. Nelsen. An Introduction to Copulas. Springer, 2006.
[11] H. R. Varian. Position auctions. International Journal of Industrial Organization, forthcoming.","lvl-3":"Revenue Analysis of a Family of Ranking Rules for Keyword Auctions

ABSTRACT

Keyword auctions lie at the core of the business models of today's leading search engines. Advertisers bid for placement alongside search results, and are charged for clicks on their ads. Advertisers are typically ranked according to a score that takes into account their bids and potential click-through rates. We consider a family of ranking rules that contains those typically used to model Yahoo! and Google's auction designs as special cases. We find that in general neither of these is necessarily revenue-optimal in equilibrium, and that the choice of ranking rule can be guided by considering the correlation between bidders' values and click-through rates. We propose a simple approach to determine a revenue-optimal ranking rule within our family, taking into account effects on advertiser satisfaction and user experience. We illustrate the approach using Monte-Carlo simulations based on distributions fitted to Yahoo! bid and click-through rate data for a high-volume keyword.

1. INTRODUCTION

Major search engines like Google, Yahoo!, and MSN sell advertisements by auctioning off space on keyword search results pages. For example, when a user searches the web for "iPod", the highest paying advertisers (for example, Apple or Best Buy) for that keyword may appear in a separate "sponsored" section of the page above or to the right of the algorithmic results. The sponsored results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a web page. Generally, advertisements that appear in a higher position on the page garner more attention and more clicks from users. Thus, all else being equal, advertisers prefer higher positions to lower positions.

* This work was done while the author was at Yahoo! Research.

Advertisers bid for placement on the page in an auction-style format where the larger their bid the more likely their listing will appear above other ads on the page. By convention, sponsored search advertisers generally bid and pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked. Overture Services, formerly GoTo.com and now owned by Yahoo! Inc., is credited with pioneering sponsored search advertising. Overture's success prompted a number of companies to adopt similar business models, most prominently Google, the leading web search engine today. Microsoft's MSN, previously an affiliate of Overture, now operates its own keyword auction marketplace. Sponsored search is one of the fastest growing, most effective, and most profitable forms of advertising, generating roughly $7 billion in revenue in 2005 after nearly doubling every year for the previous five years. The search engine evaluates the advertisers' bids and allocates the positions on the page accordingly. Notice that, although bids are expressed as payments per click, the search engine cannot directly allocate clicks, but rather allocates impressions, or placements on the screen. Clicks relate only stochastically to impressions. Until recently, Yahoo!
ranked bidders in decreasing order of advertisers' stated values per click, while Google ranks in decreasing order of advertisers' stated values per impression. In Google's case, value per impression is computed by multiplying the advertiser's (per-click) bid by the advertisement's expected click-through rate, where this expectation may consider a number of unspecified factors including historical click-through rate, position on the page, advertiser identity, user identity, and the context of other items on the page. We refer to these rules as "rank-by-bid" and "rank-by-revenue", respectively.¹

We analyze a family of ranking rules that contains the Yahoo! and Google models as special cases. We consider ranking rules where bidders are ranked in decreasing order of score e^q b, where e denotes an advertiser's click-through rate (normalized for position) and b his bid. Notice that q = 0 corresponds to Yahoo!'s rank-by-bid rule and q = 1 corresponds to Google's rank-by-revenue rule. Our premise is that bidders are playing a symmetric equilibrium, as defined by Edelman, Ostrovsky, and Schwarz [3] and Varian [11]. We show through simulation that although q = 1 yields the efficient allocation, settings of q considerably less than 1 can yield superior revenue in equilibrium under certain conditions. The key parameter is the correlation between advertiser value and click-through rate. If this correlation is strongly positive, then smaller q are revenue-optimal. Our simulations are based on distributions fitted to data from Yahoo! keyword auctions.

¹ These are industry terms. We will see, however, that rank-by-revenue is not necessarily revenue-optimal.

We propose that search engines set thresholds of acceptable loss in advertiser satisfaction and user experience, then choose the revenue-optimal q consistent with these constraints. We also compare the potential gains from tuning q with the gains from setting reserve prices, and find that the former may be much more significant.

In Section 2 we give a formal model of keyword auctions, and establish its equilibrium properties in Section 3. In Section 4 we note that giving agents bidding credits can have the same effect as tuning the ranking rule explicitly. In Section 5 we give a general formulation of the optimal keyword auction design problem as an optimization problem, in a manner analogous to the single-item auction setting. We then provide some theoretical insight into how tuning q can improve revenue, and why the correlation between bidders' values and click-through rates is relevant. In Section 6 we consider the effect of q on advertiser satisfaction and user experience. In Section 7 we describe our simulations and interpret their results.

Related work. As mentioned, the papers of Edelman et al.
[3] and Varian [11] lay the groundwork for our study. Both papers independently define an appealing refinement of Nash equilibrium for keyword auctions and analyze its equilibrium properties. They called this refinement "locally envy-free equilibrium" and "symmetric equilibrium", respectively. Varian also provides some empirical analysis. The general model of keyword auctions used here, where bidders are ranked according to a weight times their bid, was introduced by Aggarwal, Goel, and Motwani [1]. That paper also makes a connection between the revenue of keyword auctions in incomplete information settings with the revenue in symmetric equilibrium. Iyengar and Kumar [5] study the optimal keyword auction design problem in a setting of incomplete information, and also make the connection to symmetric equilibrium. We make use of this connection when formulating the optimal auction design problem in our setting. The work most closely related to ours is that of Feng, Bhargava, and Pennock [4]. They were the first to realize that the correlation between bidder values and click-through rates should be a key parameter affecting the revenue performance of various ranking mechanisms. For simplicity, they assume bidders bid their true values, so their model is very different from ours and consequently so are their findings. According to their simulations, rank-by-revenue always (weakly) dominates rank-by-bid in terms of revenue, whereas our results suggest that rank-by-bid may do much better for negative correlations. Lahaie [8] gives an example that suggests rank-by-bid should yield more revenue when values and click-through rates are positively correlated, whereas rank-by-revenue should do better when the correlation is negative. In this work we make a deeper study of this conjecture.

2. MODEL
3. EQUILIBRIUM
4. BIDDING CREDITS
5. REVENUE
6. EFFICIENCY AND RELEVANCE

PROPOSITION 1. Total relevance is non-decreasing in
q.

7. SIMULATIONS

8. CONCLUSIONS

In this work we looked into the revenue properties of a family of ranking rules that contains the Yahoo! and Google models as special cases. In practice, it should be very simple to move between rules within the family: this simply involves changing the exponent q applied to advertiser effects. We also showed that, in principle, the same effect could be obtained by using bidding credits. Despite the simplicity of the rule change, simulations revealed that properly tuning q can significantly improve revenue. In the simulations, the revenue improvements were greater than what could be obtained using reserve prices. On the other hand, we showed that advertiser satisfaction and user experience could suffer if q is made too small. We proposed that the auctioneer set bounds on the decrease in advertiser and user satisfaction he is willing to tolerate, which would imply bounds on the range of allowable q. With appropriate estimates for the distributions of value and relevance, and knowledge of their correlation, the revenue curve can then be plotted within this range to locate the optimum.

There are several ways to push this research further. It would be interesting to do this analysis for a variety of keywords, to see if the optimal setting of q is always so sensitive to the level of correlation. If it is, then simply using rank-by-bid where there is positive correlation, and rank-by-revenue where there is negative correlation, could be fine to a first approximation and already improve revenue. It would also be interesting to compare the effects of tuning q versus reserve pricing for keywords that have few bidders. In this instance reserve pricing should be more competitive, but this is still an open question.

In principle the minimum revenue in Nash equilibrium can be found by linear programming. However, many allocations can arise in Nash equilibrium, and a linear program needs to be solved for each of these. There is
as yet no efficient way to enumerate all possible Nash allocations, so finding the minimum revenue is currently infeasible.\nIf this problem could be solved, we could run simulations for Nash equilibrium instead of symmetric equilibrium, to see if our insights are robust to the choice of solution concept.\nLarger classes of ranking rules could be relevant.\nFor instance, it is possible to introduce discounts ds and rank according to wsbs \u2212 ds; the equilibrium analysis generalizes to this case as well.\nWith this larger class the virtual score can equal the score, e.g. in the case of a uniform marginal distribution over values.\nIt is unclear, though, whether such extensions help with more realistic distributions.\nFigure 4: Revenue, efficiency, and relevance for different reserve scores r, with Spearman correlation of 0.4 and q = 1.\nEstimates are averaged over 1000 samples.","lvl-4":"Revenue Analysis of a Family of Ranking Rules for Keyword Auctions\nABSTRACT\nKeyword auctions lie at the core of the business models of today's leading search engines.\nAdvertisers bid for placement alongside search results, and are charged for clicks on their ads.\nAdvertisers are typically ranked according to a score that takes into account their bids and potential clickthrough rates.\nWe consider a family of ranking rules that contains those typically used to model Yahoo! and Google's auction designs as special cases.\nWe find that in general neither of these is necessarily revenue-optimal in equilibrium, and that the choice of ranking rule can be guided by considering the correlation between bidders' values and click-through rates.\nWe propose a simple approach to determine a revenue-optimal ranking rule within our family, taking into account effects on advertiser satisfaction and user experience.\nWe illustrate the approach using Monte-Carlo simulations based on distributions fitted to Yahoo! 
bid and click-through rate data for a high-volume keyword.\n1.\nINTRODUCTION\nMajor search engines like Google, Yahoo!, and MSN sell advertisements by auctioning off space on keyword search results pages.\nFor example, when a user searches the web for * This work was done while the author was at Yahoo! Research.\n\"iPod\", the highest paying advertisers (for example, Apple or Best Buy) for that keyword may appear in a separate \"sponsored\" section of the page above or to the right of the algorithmic results.\nGenerally, advertisements that appear in a higher position on the page garner more attention and more clicks from users.\nThus, all else being equal, advertisers prefer higher positions to lower positions.\nAdvertisers bid for placement on the page in an auctionstyle format where the larger their bid the more likely their listing will appear above other ads on the page.\nBy convention, sponsored search advertisers generally bid and pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked.\nOverture Services, formerly GoTo.com and now owned by Yahoo! Inc., is credited with pioneering sponsored search advertising.\nOverture's success prompted a number of companies to adopt similar business models, most prominently Google, the leading web search engine today.\nMicrosoft's MSN, previously an affiliate of Overture, now operates its own keyword auction marketplace.\nThe search engine evaluates the advertisers' bids and allocates the positions on the page accordingly.\nNotice that, although bids are expressed as payments per click, the search engine cannot directly allocate clicks, but rather allocates impressions, or placements on the screen.\nClicks relate only stochastically to impressions.\nUntil recently, Yahoo! 
ranked bidders in decreasing order of advertisers' stated values per click, while Google ranks in decreasing order of advertisers' stated values per impression.\nWe refer to these rules as \"rank-by-bid\" and \"rank-by-revenue\", respectively . '\nWe analyze a family of ranking rules that contains the Yahoo! and Google models as special cases.\nWe consider rank ` These are industry terms.\nWe will see, however, that rankby-revenue is not necessarily revenue-optimal.\ning rules where bidders are ranked in decreasing order of score eqb, where e denotes an advertiser's click-through rate (normalized for position) and b his bid.\nNotice that q = 0 corresponds to Yahoo!'s rank-by-bid rule and q = 1 corresponds to Google's rank-by-revenue rule.\nOur premise is that bidders are playing a symmetric equilibrium, as defined by Edelman, Ostrovsky, and Schwarz [3] and Varian [11].\nWe show through simulation that although q = 1 yields the efficient allocation, settings of q considerably less than 1 can yield superior revenue in equilibrium under certain conditions.\nThe key parameter is the correlation between advertiser value and click-through rate.\nIf this correlation is strongly positive, then smaller q are revenue-optimal.\nOur simulations are based on distributions fitted to data from Yahoo! 
keyword auctions.\nWe propose that search engines set thresholds of acceptable loss in advertiser satisfaction and user experience, then choose the revenue-optimal q consistent with these constraints.\nIn Section 2 we give a formal model of keyword auctions, and establish its equilibrium properties in Section 3.\nIn Section 4 we note that giving agents bidding credits can have the same effect as tuning the ranking rule explicitly.\nIn Section 5 we give a general formulation of the optimal keyword auction design problem as an optimization problem, in a manner analogous to the single-item auction setting.\nWe then provide some theoretical insight into how tuning q can improve revenue, and why the correlation between bidders' values and click-through rates is relevant.\nIn Section 6 we consider the effect of q on advertiser satisfaction and user experience.\nIn Section 7 we describe our simulations and interpret their results.\nRelated work.\nBoth papers independently define an appealing refinement of Nash equilibrium for keyword auctions and analyze its equilibrium properties.\nThey called this refinement \"locally envy-free equilibrium\" and \"symmetric equilibrium\", respectively.\nVarian also provides some empirical analysis.\nThe general model of keyword auctions used here, where bidders are ranked according to a weight times their bid, was introduced by Aggarwal, Goel, and Motwani [1].\nThat paper also makes a connection between the revenue of keyword auctions in incomplete information settings with the revenue in symmetric equilibrium.\nIyengar and Kumar [5] study the optimal keyword auction design problem in a setting of incomplete information, and also make the connection to symmetric equilibrium.\nWe make use of this connection when formulating the optimal auction design problem in our setting.\nThey were the first to realize that the correlation between bidder values and click-through rates should be a key parameter affecting the revenue performance of 
various ranking mechanisms.\nFor simplicity, they assume bidders bid their true values, so their model is very different from ours and consequently so are their findings.\nAccording to their simulations, rank-by-revenue always (weakly) dominates rank-by-bid in terms of revenue, whereas our results suggest that rank-by-bid may do much better for negative correlations.\nLahaie [8] gives an example that suggests rank-by-bid should yield more revenue when values and click-through rates are positively correlated, whereas rank-by-revenue should do better when the correlation is negative.\nIn this work we make a deeper study of this conjecture.\n8.\nCONCLUSIONS\nIn this work we looked into the revenue properties of a family of ranking rules that contains the Yahoo! and Google models as special cases.\nIn practice, it should be very simple to move between rules within the family: this simply involves changing the exponent q applied to advertiser effects.\nWe also showed that, in principle, the same effect could be obtained by using bidding credits.\nDespite the simplicity of the rule change, simulations revealed that properly tuning q can significantly improve revenue.\nIn the simulations, the revenue improvements were greater than what could be obtained using reserve prices.\nOn the other hand, we showed that advertiser satisfaction and user experience could suffer if q is made too small.\nIt would be interesting to do this analysis for a variety of keywords, to see if the optimal setting of q is always so sensitive to the level of correlation.\nIf it is, then simply using rank-bybid where there is positive correlation, and rank-by-revenue where there is negative correlation, could be fine to a first approximation and already improve revenue.\nIt would also be interesting to compare the effects of tuning q versus reserve pricing for keywords that have few bidders.\nIn principle the minimum revenue in Nash equilibrium can be found by linear programming.\nHowever, many 
allocations can arise in Nash equilibrium, and a linear program needs to be solved for each of these. There is as yet no efficient way to enumerate all possible Nash allocations, so finding the minimum revenue is currently infeasible. If this problem could be solved, we could run simulations for Nash equilibrium instead of symmetric equilibrium, to see if our insights are robust to the choice of solution concept. Larger classes of ranking rules could be relevant. For instance, it is possible to introduce discounts d_s and rank according to w_s b_s − d_s; the equilibrium analysis generalizes to this case as well. With this larger class the virtual score can equal the score, e.g. in the case of a uniform marginal distribution over values.

Figure 4: Revenue, efficiency, and relevance for different reserve scores r, with Spearman correlation of 0.4 and q = 1.

Revenue Analysis of a Family of Ranking Rules for Keyword Auctions

ABSTRACT
Keyword auctions lie at the core of the business models of today's leading search engines. Advertisers bid for placement alongside search results, and are charged for clicks on their ads. Advertisers are typically ranked according to a score that takes into account their bids and potential click-through rates. We consider a family of ranking rules that contains those typically used to model Yahoo! and Google's auction designs as special cases. We find that in general neither of these is necessarily revenue-optimal in equilibrium, and that the choice of ranking rule can be guided by considering the correlation between bidders' values and click-through rates. We propose a simple approach to determine a revenue-optimal ranking rule within our family, taking into account effects on advertiser satisfaction and user experience. We illustrate the approach using Monte-Carlo simulations based on distributions fitted to Yahoo!
bid and click-through rate data for a high-volume keyword.

1. INTRODUCTION
Major search engines like Google, Yahoo!, and MSN sell advertisements by auctioning off space on keyword search results pages. For example, when a user searches the web for "iPod", the highest paying advertisers (for example, Apple or Best Buy) for that keyword may appear in a separate "sponsored" section of the page above or to the right of the algorithmic results. The sponsored results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a web page. Generally, advertisements that appear in a higher position on the page garner more attention and more clicks from users. Thus, all else being equal, advertisers prefer higher positions to lower positions.

* This work was done while the author was at Yahoo! Research.

Advertisers bid for placement on the page in an auction-style format where the larger their bid the more likely their listing will appear above other ads on the page. By convention, sponsored search advertisers generally bid and pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked. Overture Services, formerly GoTo.com and now owned by Yahoo!
Inc., is credited with pioneering sponsored search advertising. Overture's success prompted a number of companies to adopt similar business models, most prominently Google, the leading web search engine today. Microsoft's MSN, previously an affiliate of Overture, now operates its own keyword auction marketplace. Sponsored search is one of the fastest growing, most effective, and most profitable forms of advertising, generating roughly $7 billion in revenue in 2005 after nearly doubling every year for the previous five years. The search engine evaluates the advertisers' bids and allocates the positions on the page accordingly. Notice that, although bids are expressed as payments per click, the search engine cannot directly allocate clicks, but rather allocates impressions, or placements on the screen. Clicks relate only stochastically to impressions. Until recently, Yahoo! ranked bidders in decreasing order of advertisers' stated values per click, while Google ranks in decreasing order of advertisers' stated values per impression. In Google's case, value per impression is computed by multiplying the advertiser's (per-click) bid by the advertisement's expected click-through rate, where this expectation may consider a number of unspecified factors including historical click-through rate, position on the page, advertiser identity, user identity, and the context of other items on the page. We refer to these rules as "rank-by-bid" and "rank-by-revenue", respectively. We analyze a family of ranking rules that contains the Yahoo!
and Google models as special cases. (These are industry terms. We will see, however, that rank-by-revenue is not necessarily revenue-optimal.) We consider ranking rules where bidders are ranked in decreasing order of score e^q b, where e denotes an advertiser's click-through rate (normalized for position) and b his bid. Notice that q = 0 corresponds to Yahoo!'s rank-by-bid rule and q = 1 corresponds to Google's rank-by-revenue rule. Our premise is that bidders are playing a symmetric equilibrium, as defined by Edelman, Ostrovsky, and Schwarz [3] and Varian [11]. We show through simulation that although q = 1 yields the efficient allocation, settings of q considerably less than 1 can yield superior revenue in equilibrium under certain conditions. The key parameter is the correlation between advertiser value and click-through rate. If this correlation is strongly positive, then smaller q are revenue-optimal. Our simulations are based on distributions fitted to data from Yahoo! keyword auctions. We propose that search engines set thresholds of acceptable loss in advertiser satisfaction and user experience, then choose the revenue-optimal q consistent with these constraints. We also compare the potential gains from tuning q with the gains from setting reserve prices, and find that the former may be much more significant. In Section 2 we give a formal model of keyword auctions, and establish its equilibrium properties in Section 3. In Section 4 we note that giving agents bidding credits can have the same effect as tuning the ranking rule explicitly. In Section 5 we give a general formulation of the optimal keyword auction design problem as an optimization problem, in a manner analogous to the single-item auction setting. We then provide some theoretical insight into how tuning q can improve revenue, and why the correlation between bidders' values and click-through rates is relevant. In Section 6 we consider the effect of q on advertiser satisfaction and
user experience. In Section 7 we describe our simulations and interpret their results.

Related work. As mentioned, the papers of Edelman et al. [3] and Varian [11] lay the groundwork for our study. Both papers independently define an appealing refinement of Nash equilibrium for keyword auctions and analyze its equilibrium properties. They called this refinement "locally envy-free equilibrium" and "symmetric equilibrium", respectively. Varian also provides some empirical analysis. The general model of keyword auctions used here, where bidders are ranked according to a weight times their bid, was introduced by Aggarwal, Goel, and Motwani [1]. That paper also makes a connection between the revenue of keyword auctions in incomplete information settings and the revenue in symmetric equilibrium. Iyengar and Kumar [5] study the optimal keyword auction design problem in a setting of incomplete information, and also make the connection to symmetric equilibrium. We make use of this connection when formulating the optimal auction design problem in our setting. The work most closely related to ours is that of Feng, Bhargava, and Pennock [4]. They were the first to realize that the correlation between bidder values and click-through rates should be a key parameter affecting the revenue performance of various ranking mechanisms. For simplicity, they assume bidders bid their true values, so their model is very different from ours and consequently so are their findings. According to their simulations, rank-by-revenue always (weakly) dominates rank-by-bid in terms of revenue, whereas our results suggest that rank-by-bid may do much better for negative correlations. Lahaie [8] gives an example that suggests rank-by-bid should yield more revenue when values and click-through rates are positively correlated, whereas rank-by-revenue should do better when the correlation is negative. In this work we make a deeper study of this conjecture.

2. MODEL
There are K
positions to be allocated among N bidders, where N > K. We assume that the (expected) click-through rate of bidder s in position t is of the form e_s x_t, i.e., separable into an advertiser effect e_s ∈ [0, 1] and a position effect x_t ∈ [0, 1]. We assume that x_1 > x_2 > ... > x_K > 0 and let x_t = 0 for t > K. We also refer to e_s as the relevance of bidder s. It is useful to interpret x_t as the probability that an ad in position t will be noticed, and e_s as the probability that it will be clicked on if noticed. Bidder s has value v_s for each click. Bidders have quasi-linear utility, so that the utility to bidder s of obtaining position t at a price of p per click is

e_s x_t (v_s − p).

A weight w_s is associated with agent s, and agents bid for position. If agent s bids b_s, his corresponding score is w_s b_s. Agents are ranked by score, so that the agent with highest score is ranked first, and so on. We assume throughout that agents are numbered such that agent s obtains position s. An agent pays per click the lowest bid necessary to retain his position, so that the agent in slot s pays (w_{s+1}/w_s) b_{s+1}. The auctioneer may introduce a reserve score of r, so that an agent's ad appears only if his score is at least r.
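The ranking and next-score pricing rule just described can be sketched in code. This is an illustrative sketch, not code from the paper: the bids, weights, and reserve value are hypothetical examples, ties are ignored, and the number of positions K is assumed large enough to seat all bidders who clear the reserve.

```python
def allocate(bids, weights, reserve=0.0):
    """Rank agents by score w_s * b_s, keep those whose score meets the
    reserve, and charge each the lowest per-click price that retains his
    position: the agent in slot s pays (w_{s+1} / w_s) * b_{s+1}."""
    scored = sorted(
        ((w * b, b, w, i) for i, (b, w) in enumerate(zip(bids, weights))),
        reverse=True,
    )
    scored = [t for t in scored if t[0] >= reserve]  # reserve-score filter
    slots = []
    for s, (score, b, w, i) in enumerate(scored):
        if s + 1 < len(scored):
            price = scored[s + 1][0] / w   # next agent's score over own weight
        else:
            price = reserve / w            # last slot: the reserve sets the price
        slots.append((i, price))
    return slots

# Hypothetical example: three bidders; the weights play the role of e_s^q.
print(allocate([2.0, 1.5, 1.0], [0.5, 1.0, 0.8]))
```

Note that each winner's per-click price is below his own bid, since the next score down determines the payment.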
For agent s, this translates into a reserve price (minimum bid) of r/w_s.

3. EQUILIBRIUM
We consider the pure-strategy Nash equilibria of the auction game. This is a full-information concept. The motivation for this choice is that in a keyword auction, bidders are allowed to continuously adjust their bids over time, and hence obtain estimates of their profits in various positions. As a result it is reasonable to assume that if bids stabilize, bidders should be playing best-responses to each other's bids [2, 3, 11]. Formally, in a Nash equilibrium of this game the following inequalities hold:

x_s (v_s − (w_{s+1}/w_s) b_{s+1}) ≥ x_t (v_s − (w_{t+1}/w_s) b_{t+1})   for t > s,   (1)
x_s (v_s − (w_{s+1}/w_s) b_{s+1}) ≥ x_t (v_s − (w_t/w_s) b_t)   for t < s.   (2)

Inequalities (1) and (2) state that bidder s does not prefer a lower or higher position to his own, respectively. It can be hard to derive any theoretical insight into the properties of these Nash equilibria: multiple allocations of positions to bidders can potentially arise in equilibrium [2]. Edelman, Ostrovsky, and Schwarz [3] introduced a refinement of Nash equilibrium called "locally envy-free equilibrium" that is more tractable to analyze; Varian [11] independently proposed this solution concept and called it "symmetric equilibrium". In a symmetric equilibrium, inequality (1) holds for all s, t rather than just for t > s. So for all s and all t ≠ s, we have

x_s (v_s − (w_{s+1}/w_s) b_{s+1}) ≥ x_t (v_s − (w_{t+1}/w_s) b_{t+1}).   (3)

Edelman et al.
[3] note that this equilibrium arises if agents are raising their bids to increase the payments of those above them, a practice which is believed to be common in actual keyword auctions. Varian [11] provides some empirical evidence that Google bid data agrees well with the hypothesis that bidders are playing a symmetric equilibrium. Varian does a thorough analysis of the properties of symmetric equilibrium, assuming w_s = e_s = 1 for all bidders. It is straightforward to adapt his analysis to the case where bidders are assigned arbitrary weights and have separable click-through rates.² As a result we find that in symmetric equilibrium, bidders are ranked in order of decreasing w_s v_s. To be clear, although the auctioneer only has access to the bids b_s and not the values v_s, in symmetric equilibrium the bids are such that ranking according to w_s b_s is equivalent to ranking according to w_s v_s. The smallest possible bid profile that can arise in symmetric equilibrium is given by the recursion

w_{s+1} b_{s+1} x_s = w_{s+1} v_{s+1} (x_s − x_{s+1}) + w_{s+2} b_{s+2} x_{s+1}.   (4)

In this work we assume that bidders are playing the smallest symmetric equilibrium. This is an appropriate selection for our purposes: by optimizing revenue in this equilibrium, we are optimizing a lower bound on the revenue in any symmetric equilibrium. Unraveling the recursion yields

w_{s+1} b_{s+1} x_s = Σ_{t=s+1}^{K+1} w_t v_t (x_{t−1} − x_t).

Agent s's total expected payment is e_s/w_s times the quantity on the left-hand side of (4). The base case of the recursion occurs for s = K, where we find that the first excluded bidder bids his true value, as in the original analysis. Multiplying each of the equations (4) by the corresponding e_s/w_s to obtain total payments, and summing over all positions, we obtain a total equilibrium revenue of

Σ_{s=1}^{K} (e_s/w_s) Σ_{t=s+1}^{K+1} w_t v_t (x_{t−1} − x_t).   (5)

To summarize, the minimum possible revenue in symmetric equilibrium can be computed as follows, given the agents' relevance-value pairs (e_s, v_s): first rank the agents in decreasing order of w_s v_s, and then evaluate (5). With a reserve score of r, it follows from inequality (3) that no bidder
with w_s v_s < r will want his ad displayed. Let the number of bidders with w_s v_s ≥ r be K′, and assume it is at most K. We can impose a reserve score of r by introducing a bidder with value r and weight 1, and making him the first excluded bidder (who in symmetric equilibrium bids truthfully). In this case the recursion yields

w_{s+1} b_{s+1} x_s = Σ_{t=s+1}^{K′} w_t v_t (x_{t−1} − x_t) + r x_{K′},

and the revenue formula is adapted similarly.

²If we redefine w_s v_s to be v_s and w_s b_s to be b_s, we recover Varian's setup and his original analysis goes through unchanged.

4. BIDDING CREDITS
An indirect way to influence the allocation is to introduce bidding credits.³ Suppose bidder s is only required to pay a fraction c_s ∈ [0, 1] of the price he faces, or equivalently a (1 − c_s) fraction of his clicks are received for free. Then in a symmetric equilibrium, we have

x_s (v_s − c_s (w_{s+1}/w_s) b_{s+1}) ≥ x_t (v_s − c_s (w_{t+1}/w_s) b_{t+1})   for all t ≠ s.

If we define w′_s = w_s/c_s and b′_s = c_s b_s, we recover inequality (3). Hence the equilibrium revenue will be as if we had used weights w′ rather than w. The bids will be scaled versions of the bids that arise with weights w′ (and no credits), where each bid is scaled by the corresponding factor 1/c_s. This technique allows one to use credits instead of explicit changes in the weights to affect revenue. For instance, rank-by-revenue will yield the same revenue as rank-by-bid if we set credits to c_s = e_s.

5. REVENUE
We are interested in setting the weights w to achieve optimal expected revenue. The setup is as follows. The auctioneer chooses a function g so that the weighting scheme is w_s = g(e_s). We do not consider weights that also depend on the agents' bids because this would invalidate the equilibrium analysis of the previous section.⁴ A pool of N bidders is then obtained by i.i.d.
draws of value-relevance pairs from a common probability density f(e_s, v_s). We assume the density is continuous and has full support on [0, 1] × [0, ∞). The revenue to the auctioneer is then the revenue generated in symmetric equilibrium under weighting scheme w. This assumes the auctioneer is patient enough not to care about revenue until bids have stabilized. The problem of finding an optimal weighting scheme can be formulated as an optimization problem very similar to the one derived by Myerson [9] for the single-item auction case (with incomplete information). Let Q_{sk}(e, v; w) = 1 if agent s obtains slot k in equilibrium under weighting scheme w, where e = (e_1, ..., e_N) and v = (v_1, ..., v_N), and let it be 0 otherwise. Note that the total payment of agent s in equilibrium is e_s/w_s times the left-hand side of (4), evaluated at the slot he obtains. The derivation then continues just as in the case of a single-item auction [7, 9]. We take the expectation of this payment and sum over all agents to obtain the objective

E_{e,v} [ Σ_{s=1}^{N} Σ_{k=1}^{K} Q_{sk}(e, v; w) e_s x_k ψ(e_s, v_s) ],

where ψ(e_s, v_s) = v_s − (1 − F(v_s|e_s)) / f(v_s|e_s), and F(·|e_s) is the conditional distribution of values given relevance, with density f(v_s|e_s). According to this analysis, we should rank bidders by "virtual score" e_s ψ(e_s, v_s) to optimize revenue (and exclude any bidders with negative virtual score). However, unlike in the incomplete information setting, here we are constrained to ranking rules that correspond to a certain weighting scheme w_s = g(e_s). We remark that the virtual score cannot be reproduced exactly via a weighting scheme.

³Hal Varian suggested to us that bidding credits could be used to affect revenue in keyword auctions, which prompted us to look into this connection.
⁴The analysis does not generalize to weights that depend on bids. It is unclear whether an equilibrium would exist at all with such weights.

LEMMA 1. There is no weighting scheme g such that the virtual score equals the score, for any density f.
PROOF. Assume there is a g such that eψ(e, v) = g(e)v.
(The subscript s is suppressed for clarity.) This is equivalent to

f(v|e) / (1 − F(v|e)) = h(e)/v,   (6)

where h(e) = (1 − g(e)/e)^{−1}. Let v̄ be such that F(v̄|e) < 1; under the assumption of full support, there is always such a v̄. Integrating (6) with respect to v from 0 to v̄, we find that the left-hand side converges whereas the right-hand side diverges, a contradiction. ✷

If a good estimate of f is available, Monte-Carlo simulations can be used to estimate the revenue curve as a function of q, and the optimum can be located. Simulations can also be used to quantify the effect of correlation on the location of the optimum. We do this in Section 7.

6. EFFICIENCY AND RELEVANCE
In principle the revenue-optimal parameter q may lie anywhere in (−∞, ∞). However, tuning the ranking rule also has consequences for advertiser satisfaction and user experience, and taking these into account reduces the range of allowable q. The total relevance of the equilibrium allocation is

Σ_{s=1}^{K} e_s x_s,

i.e., the aggregate click-through rate. Presumably users find the ad display more interesting and less of a nuisance if they are more inclined to click on the ads, so we adopt total relevance as a measure of user experience. Let p_s = (w_{s+1}/w_s) b_{s+1} be the price per click faced by bidder s. The total value (efficiency) generated by the auction in equilibrium is

V(q) = Σ_{s=1}^{K} e_s x_s v_s = Σ_{s=1}^{K} e_s x_s (v_s − p_s) + Σ_{s=1}^{K} e_s x_s p_s.

Of course, to rank bidders by virtual score, we only need g(e_s) v_s = h(e_s ψ(e_s, v_s)) for some monotonically increasing transformation h. (A necessary condition for this is that ψ(e_s, v_s) be increasing in v_s for all e_s.) Absent this regularity condition, the optimization problem seems quite difficult because it is so general: we need to maximize expected revenue over the space of all functions g.
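The Monte-Carlo estimation of the revenue curve mentioned above can be sketched in code. This is an illustrative sketch, not the authors' code: the revenue evaluation follows the Section 3 recipe (rank agents by w_s v_s with w_s = e_s^q, then sum the minimum symmetric-equilibrium payments), while the correlated lognormal value-relevance draws and the geometric position-effect curve are hypothetical stand-ins for distributions fitted to real keyword data.

```python
import math
import random

def min_equilibrium_revenue(agents, q, positions):
    """agents: list of (relevance e, value v) pairs. Rank by w*v with w = e^q
    and evaluate sum_s (e_s/w_s) * sum_{t>s} w_t v_t (x_{t-1} - x_t), the
    minimum symmetric-equilibrium revenue under these weights."""
    K = len(positions)
    x = list(positions) + [0.0]                  # x_t = 0 beyond slot K
    ranked = sorted(agents, key=lambda ev: (ev[0] ** q) * ev[1], reverse=True)
    revenue = 0.0
    for s in range(min(K, len(ranked) - 1)):     # slots with a bidder below them
        e_s = ranked[s][0]
        pay = sum(
            (ranked[t][0] ** q) * ranked[t][1] * (x[t - 1] - x[t])
            for t in range(s + 1, min(K + 1, len(ranked)))
        )
        revenue += (e_s / (e_s ** q)) * pay
    return revenue

def revenue_curve(qs, n_agents=8, n_draws=200, rho=0.5, seed=0):
    """Estimate expected revenue for each q by Monte-Carlo over correlated
    lognormal (e, v) draws; rho controls the value-relevance correlation."""
    rng = random.Random(seed)
    positions = [0.8 ** k for k in range(5)]     # decaying position effects
    curve = {}
    for q in qs:
        total = 0.0
        for _ in range(n_draws):
            agents = []
            for _ in range(n_agents):
                z = rng.gauss(0.0, 1.0)
                e = min(1.0, math.exp(0.3 * z - 1.0))
                v = math.exp(rho * z + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0))
                agents.append((e, v))
            total += min_equilibrium_revenue(agents, q, positions)
        curve[q] = total / n_draws
    return curve

print(revenue_curve([0.0, 0.5, 1.0]))
```

Scanning the resulting curve over a grid of q values, and repeating for different rho, is one way to locate the revenue-optimal exponent and to see how it shifts with correlation.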
To simplify matters, we now restrict our attention to the family of weights w_s = e_s^q for q ∈ (−∞, +∞). It should be much simpler to find the optimum within this family, since it is just one-dimensional. Note that it covers rank-by-bid (q = 0) and rank-by-revenue (q = 1) as special cases. To see how tuning q can improve matters, consider again the equilibrium revenue:

Σ_{s=1}^{K} e_s^{1−q} Σ_{t=s+1}^{K+1} e_t^q v_t (x_{t−1} − x_t).   (7)

If bidders are ranked in decreasing order of relevance, then decreasing q slightly without affecting the allocation will increase revenue. Similarly, if bidders are ranked in increasing order of relevance, increasing q slightly will yield an improvement. Now suppose there is perfect positive correlation between value and relevance. In this case, rank-by-bid will always lead to the same allocation as rank-by-revenue, and bidders will always be ranked in decreasing order of relevance. It then follows from (7) that q = 0 will yield more revenue in equilibrium than q = 1.⁵

⁵It may appear that this contradicts the revenue-equivalence theorem [7, 9], because mechanisms that always lead to the same allocation in equilibrium should yield the same revenue. Note though that with perfect correlation there are α, β such that v_s = α e_s + β, so the assumption of full support is violated, which is necessary for revenue equivalence. (Recall that a density has full support over a given domain if every point in the domain has positive density.)

As we see, total value can be reinterpreted as total profits to the bidders and auctioneer combined. Since we only consider deviations from maximum efficiency that increase the auctioneer's profits, any decrease in efficiency in our setting corresponds to a decrease in bidder profits. We therefore adopt efficiency as a measure of advertiser satisfaction. We would expect total relevance to increase with q, since more weight is placed on each bidder's individual relevance. We would expect efficiency to be maximized at q = 1, since in this case a bidder's weight is exactly his relevance.

PROPOSITION 1. Total relevance is non-decreasing in q.
PROOF. Recall that in symmetric equilibrium, bidders are ranked in order of decreasing w_s v_s. Let ε > 0. Perform an exchange sort to obtain the ranking that arises with q + ε starting from the ranking that arises with q (for a description of exchange sort and its properties, see Knuth [6], pp. 106-110). Assume that ε is large enough to make the rankings distinct. Agents s and t, where s is initially ranked lower than t, are swapped in the process if and only if the following conditions hold:

e_s^q v_s ≤ e_t^q v_t and e_s^{q+ε} v_s > e_t^{q+ε} v_t,

which together imply that e_s^ε > e_t^ε and hence e_s > e_t, as ε > 0. At some point in the sort, agent s occupies some slot k while agent t occupies slot k − 1. After the swap, total relevance will have changed by the amount

e_s x_{k−1} + e_t x_k − e_t x_{k−1} − e_s x_k = (e_s − e_t)(x_{k−1} − x_k) > 0.

As relevance strictly increases with each swap in the sort, total relevance is strictly greater when using q + ε rather than q. ✷

PROPOSITION 2. Total value is non-decreasing in q for q < 1 and non-increasing in q for q > 1.
PROOF. Let q > 1 and let ε > 0. Perform an exchange sort to obtain the second ranking from the first as in the previous proof. If agents s and t are swapped, where s was initially ranked lower than t, then e_s > e_t. This follows by the same reasoning as in the previous proof.

Let S = {(i, j) : a_ij < 0}, E = {(i, j) : a_ij = 0}, and T = {(i, j) : a_ij > 0}, and let G = ([n], S ∪ E) be a digraph with weights w_ij = −1 if (i, j) ∈ S and w_ij = 0 otherwise. D(A) has no negative cycles, hence G contains no negative-weight cycle, and breadth-first search can assign potentials φ_i such that φ_j ≤ φ_i + w_ij for (i, j) ∈ S ∪ E. We relabel the vertices so that φ_1 ≥ φ_2 ≥ ...
\u2265 \u03c6n.\nLet \u03b4i = (n \u2212 1) max(i,j)\u2208S(\u2212aij) min(i,j)\u2208T aij if \u03c6i < \u03c6i\u22121 and \u03b4i = 1 otherwise, and define si = iY j=2 \u03b4j = \u03b4i \u00b7 si\u22121 .\nWe show that for this choice of s, D(A, s) contains no negative weight cycle.\nSuppose C = (i1, ... , ik) is a cycle in D(A, s).\nIf \u03c6 is constant on C then aij ij+1 = 0 for j = 1, ... , k and we are done.\nOtherwise let iv \u2208 C be the vertex with smallest potential satisfying w.l.o.g. \u03c6(iv) < \u03c6(iv+1).\nFor any cycle C in the digraph D(A, s), let (v, u) be an edge in C such that (i) v has the smallest potential among all vertices in C, and (ii) \u03c6u > \u03c6v.\nSuch an edge exists, otherwise \u03c6i is identical for all vertices i in C.\nIn this case, all edges in C have non-negative edge weight in D(A, s).\nIf (iv, iv+1) \u2208 S \u222a E, then we have \u03c6(iv+1) \u2264 \u03c6(iv) + wiv,iv+1 \u2264 \u03c6(iv) a contradiction.\nHence (iv, iv+1) \u2208 T. Now, note that all vertices q in C with the same potential as iv must be incident to an edge (q, t) in C such that \u03c6(t) \u2265 \u03c6(q).\nHence the edge (q, t) must have non-negative weight.\ni.e., aq,t \u2265 0.\nLet p denote a vertex in C with the second smallest potential.\nNow, C has weight svavu+ X (k,l)\u2208C\\(v,u) skak,l \u2265 svav,u+sp(n\u22121) max (i,j)\u2208S {aij } \u2265 0, i.e., C has non-negative weight \u2737 Algorithm 1 returns in polynomial time a hypothesis that is a piecewise linear function and agrees with the labeling of the observation namely sample error zero.\nTo use this function to forecast demand for unobserved prices we need algorithm 2 which maximizes the function on a given budget set.\nSince u(x) = mini{yi + sipi(x \u2212 xi)} this is a linear program and can be solved in time polynomial in d, n as well as the size of the largest number in the input.\n38 Algorithm 1 Utility Algorithm Input (x1, p1), ... 
, (x_n, p_n)
  S ← {(i, j) : a_ij < 0}
  E ← {(i, j) : a_ij = 0}
  for all (i, j) ∈ S do w_ij ← −1 end for
  for all (i, j) ∈ E do w_ij ← 0 end for
  while there exist unvisited vertices do
    visit new vertex j
    assign potential φ_j
  end while
  reorder indices so that φ_1 ≤ φ_2 ≤ ... ≤ φ_n
  for all 1 ≤ i ≤ n do
    δ_i ← (n − 1) max_{(i,j)∈S}(−a_ij) / min_{(i,j)∈T} a_ij
    s_i ← ∏_{j=2}^{i} δ_j
  end for
  SHORTEST-PATH(y_j − y_i ≤ s_i a_ij)
  Return y_1, ..., y_n ∈ R and s_1, ..., s_n ∈ R_+

Algorithm 2 Evaluation
  Input: y_1, ..., y_n ∈ R and s_1, ..., s_n ∈ R_+
  max z subject to
    z ≤ y_i + s_i p_i (x − x_i) for i = 1, ..., n
    p x ≤ 1
  Return x for which z is maximized

4. SUPERVISED LEARNING
In a supervised learning problem, a learning algorithm is given a finite sample of labeled observations as input and is required to return a model of the functional relationship underlying the labeling. This model, referred to as a hypothesis, is usually a computable function that is used to forecast the labels of future observations. The labels are usually binary values indicating the membership of the observed points in the set that is being learned. However, we are not limited to binary values and, indeed, in the demand functions we are studying the labels are real vectors. The learning problem has three major components: estimation, approximation and complexity. The estimation problem is concerned with the tradeoff between the size of the sample given to the algorithm and the degree of confidence we have in the forecast it produces. The approximation problem is concerned with the ability of hypotheses from a certain class to approximate target functions from a possibly different class. The complexity problem is concerned with the computational complexity of finding a hypothesis that approximates the target function. A parametric paradigm assumes that the underlying functional relationship comes
from a well-defined family, such as the Cobb-Douglas production functions; the system must learn the parameters characterizing this family. Suppose that a learning algorithm observes a finite set of production data which it assumes comes from a Cobb-Douglas production function and returns a hypothesis that is a polynomial of bounded degree. The estimation problem in this case would be to assess the sample size needed to obtain a good estimate of the coefficients. The approximation problem would be to assess the error sustained from approximating a rational function by a polynomial. The complexity problem would be the assessment of the time required to compute the polynomial coefficients. In the probably approximately correct (PAC) paradigm, the learning of a target function is done by a class of hypothesis functions, which may or may not include the target function itself; no parametric assumptions on this class are required. It is also assumed that the observations are generated independently by some distribution on the domain of the relation and that this distribution is fixed. If the class of target functions has finite "dimensionality" then a function in the class is characterized by its values on a finite number of points. The basic idea is to observe the labeling of a finite number of points and find a function from a class of hypotheses which tends to agree with this labeling. The theory tells us that if the sample is large enough, then any function that tends to agree with the labeling will, with high probability, be a good approximation of the target function for future observations. The prime objective of PAC theory is to develop the relevant notion of dimensionality and to formalize the tradeoff between dimensionality, sample size and the level of confidence in the forecast. In the revealed preference setting, our objective is to use a set of observations of prices and demand to forecast demand for unobserved prices. Thus the
target function is a mapping from prices to bundles, namely f : R^d_+ → R^d_+. The theory of PAC learning for real-valued functions is concerned predominantly with functions from R^d to R. In this section we introduce modifications of the classical notions of PAC learning to vector-valued functions and use them to prove a lower bound on sample complexity. An upper bound on the sample complexity can also be proved for our definition of fat shattering, but we do not present it here, as the proof is more tedious and analogous to the proof of Theorem 4. Before we can proceed with the formal definition, we must clarify what we mean by forecast and tend to agree. In the case of discrete learning, we would like to obtain a function h that with high probability agrees with f. We would then take the probability P_σ(f(x) = h(x)) as the measure of the quality of the estimation. Demand functions are real vector functions and we therefore do not expect f and h to agree with high probability. Rather, we are content with having small mean square errors on all coordinates. Thus, our measure of estimation error is given by

er_σ(f, h) = ∫ (‖f − h‖_∞)² dσ.

For given observations S = {(p_1, x_1), ...
, (p_n, x_n)} we measure the agreement by the sample error

er_S(S, h) = Σ_j (‖x_j − h(p_j)‖_∞)².

A sample error minimization (SEM) algorithm is an algorithm that finds a hypothesis minimizing er_S(S, h). In the case of revealed preference, there is a function that takes the sample error to zero. Nevertheless, the upper bound theorem we use does not require the sample error to be zero.

Definition 1. A set of demand functions C is probably approximately correct (PAC) learnable by a hypothesis set H if for any ε, δ > 0, f ∈ C and distribution σ on the prices there exists an algorithm L that for a set of observations of length m_L = m_L(ε, δ) = Poly(1/δ, 1/ε) finds a function h from H such that er_σ(f, h) < ε with probability 1 − δ.

There may be several learning algorithms for C with different sample complexities. The minimal m_L is called the sample complexity of C. Note that in the definition there is no mention of the time complexity of finding h in H and of evaluating h(p). A set C is efficiently PAC-learnable if there is a Poly(1/δ, 1/ε) time algorithm for choosing h and evaluating h(p). For discrete function sets, sample complexity bounds may be derived from the VC-dimension of the set (see [19, 8]). An analog of this notion of dimension for real functions is the fat shattering dimension. We use an adaptation of this notion to real vector-valued function sets. Let Γ ⊂ R^d_+ and let C be a set of real functions from Γ to R^d_+.

Definition 2. For γ > 0, a set of points p_1, ..., p_n ∈ Γ is γ-shattered by a class of real functions C if there exist x_1, ..., x_n ∈ R^d_+ and parallel affine hyperplanes H_0, H_1 ⊂ R^d such that 0 ∈ H_0^− ∩ H_1^+, dist(H_0, H_1) > γ, and for each b = (b_1, ...
, b_n) ∈ {0, 1}^n there exists a function f_b ∈ C such that f_b(p_i) ∈ x_i + H_0^+ if b_i = 0 and f_b(p_i) ∈ x_i + H_1^− if b_i = 1. We define the γ-fat-shattering dimension of C, denoted fat_C(γ), as the maximal size of a γ-shattered set in Γ. If this size is unbounded then the dimension is infinite.

To demonstrate the usefulness of this notion we use it to derive a lower bound on the sample complexity.

Lemma 2. Suppose the functions {f_b : b ∈ {0, 1}^n} witness the shattering of {p_1, ..., p_n}. Then, for any x ∈ R^d_+ and labels b, b′ ∈ {0, 1}^n such that b_i ≠ b′_i, either ‖f_b(p_i) − x‖_∞ > γ/2d or ‖f_{b′}(p_i) − x‖_∞ > γ/2d.

Proof: Since the max exceeds the mean, it follows that if f_b and f_{b′} correspond to labels such that b_i ≠ b′_i, then ‖f_b(p_i) − f_{b′}(p_i)‖_∞ ≥ (1/d) ‖f_b(p_i) − f_{b′}(p_i)‖_2 > γ/d. This implies that for any x ∈ R^d_+, either ‖f_b(p_i) − x‖_∞ > γ/2d or ‖f_{b′}(p_i) − x‖_∞ > γ/2d. ✷

Theorem 3. Suppose that C is a class of functions mapping from Γ to R^d_+. Then any learning algorithm L for C has sample complexity satisfying

m_L(ε, δ) ≥ (1/2) fat_C(4dε).

An analog of this theorem for real-valued functions with a tighter bound can be found in [2]; this version will suffice for our needs.

Proof: Suppose n = (1/2) fat_C(4dε); then there exists a set Γ_S = {p_1, ..., p_2n} that is shattered by C. It suffices to show that at least one distribution requires a large sample. We construct such a distribution. Let σ be the uniform distribution on Γ_S and let C_S = {f_b : b ∈ {0, 1}^{2n}} be the set of functions that witness the shattering of {p_1, ...
, p_2n}. Let f_b be a function chosen uniformly at random from C_S. It follows from Lemma 2 (with γ = 4dε) that for any fixed function h the probability that ||f_b(p) − h(p)||_∞ > 2ε for p ∈ Γ_S is at least as high as getting heads on a fair coin toss. Therefore E_b(||f_b(p) − h(p)||_∞) > 2ε. Suppose that for a sequence of observations z = ((p_i1, x_1), ..., (p_in, x_n)) a learning algorithm L finds a function h. The observation above and Fubini's theorem imply E_b(er_σ(h, f_b)) > ε. Randomizing over the sample space we get E_{b,z}(er_σ(h, f_b)) > ε. This shows E_z(er_σ(h, f_b0)) > ε for some f_b0. W.l.o.g. we may assume the error is bounded (since we are looking at what is essentially a finite set); therefore the probability that er_σ(h, f_b0) > ε cannot be too small, hence f_b0 is not PAC-learnable with a sample of size n. ∎
The following theorem gives an upper bound on the sample complexity required for learning a set of functions with finite fat shattering dimension. The theorem is proved in [2] for real valued functions; the proof for the real vector case is analogous and so omitted.
Theorem 4. Let C be a set of real-valued functions from X to [0, 1] with fat_C(γ) < ∞. Let A be an approximate-SEM algorithm for C and define L(z) = A(z, ε_0/6) for z ∈ Z^m and ε_0 = 16/√m. Then L is a learning algorithm for C with sample complexity m_L(ε, δ) = O((1/ε²)(ln²(1/ε) fat_C(ε) + ln(1/δ))) for any ε, δ > 0.
5. LEARNING FROM REVEALED PREFERENCE
Algorithm 1 is an efficient learning algorithm in the sense that it finds a hypothesis with sample error zero in time polynomial in the number of observations. As we have seen in section 4, the number of observations required to PAC learn the demand depends on the fat shattering dimension of the class of demand functions, which in turn depends on the class of utility functions they are
derived from. We compute the fat shattering dimension for two classes of demands. The first is the class of all demand functions; we show that this class has infinite fat shattering dimension (we give two proofs) and is therefore not PAC learnable. The second class we consider is the class of demand functions derived from utilities with bounded support that are income-Lipschitz. We show that this class has a finite fat shattering dimension that depends on the support and the income-Lipschitz constant.
Theorem 5. Let C be the set of all demand functions from R^d_+ to R^d_+. Then fat_C(γ) = ∞.
Proof 1: For ε > 0 let p_i = 2^{−i} p for i = 1, ..., n be a set of price vectors inducing parallel budget sets B_i, and let x_1, ..., x_n be the intersections of these hyperplanes with an orthogonal line passing through the center. Let H_0 and H_1 be hyperplanes that are not parallel to p, and let x′_i ∈ B_i ∩ (x_i + H^+_0) and x′′_i ∈ B_i ∩ (x_i + H^−_1) for i = 1, ..., n (see Figure 1). For any labeling b = (b_1, ..., b_n) ∈ {0, 1}^n let y = y(b) = (y_1, ..., y_n) be a set of demands such that y_i = x′_i if b_i = 0 and y_i = x′′_i if b_i = 1 (we omit the additional index b in y for notational convenience). To show that p_1, ..., p_n is shattered, it suffices to find for every b a demand function f_b, supported by a concave utility, such that f_b(p_i) = y_i. To show that such a function exists it suffices to show that Afriat's conditions are satisfied. Since the y_i are in the budget sets, y_i · 2^{−i} p = 1, and therefore p_i · (y_j − y_i) = 2^{j−i} − 1. This shows that p_i · (y_j − y_i) ≤ 0 iff j ≤ i, hence there can be no negative cycles and the condition is met. ∎
Proof 2: The utility functions satisfying Afriat's condition in the first proof could be trivial, assigning the same utility to x′_i as to x′′_i; in fact, one can pick a utility function whose level sets are parallel to the budget constraint. Therefore the shattering of the prices p_1, ...
, p_n is the result of indifference rather than genuine preference. To avoid this problem we reprove the theorem by constructing utility functions u such that u(x′_i) ≠ u(x′′_i) for all i, so that a distinct utility function is associated with each labeling. For i = 1, ..., n let p_i1, ..., p_id be price vectors satisfying the following conditions:
1. the budget sets B_is are supporting hyperplanes of a convex polytope Λ_i;
2. y_i is a vertex of Λ_i;
3. ||y_j||_1 · ||p_is − p_i||_∞ = o(1) for s = 1, ..., d and j = 1, ..., n.
Finally, let y_i1, ..., y_id be points on the facets of Λ_i that intersect y_i, such that ||p_jr||_1 · ||y_i − y_is||_∞ = o(1) for all j, s and r. We call the set of points y_i, y_i1, ..., y_id the level i demands and p_i, p_i1, ..., p_id the level i prices. Applying Hölder's inequality we get |p_ir · y_js − p_i · y_j| ≤ |(p_ir − p_i) · y_j| + |p_ir · (y_js − y_j)| ≤ ||p_ir − p_i||_∞ · ||y_j||_1 + ||y_js − y_j||_∞ · ||p_ir||_1 = o(1). This shows that p_ir · (y_js − y_ir) = p_i · (y_j − y_i) + o(1) = 2^{j−i} − 1 + o(1), therefore p_ir · (y_js − y_ir) ≤ 0 iff j < i or i = j. This implies that if there is a negative cycle then all the points in the cycle must belong to the same level. The points of any one level lie on the facets of a polytope Λ_i and the prices p_is are supporting hyperplanes of the polytope; thus the polytope defines a utility function for which these demands are utility maximizing. The other direction of Afriat's theorem therefore implies there can be no negative cycles within points of the same level. It follows that there are no negative cycles for the union of observations from all levels, hence the sequence of observations (y_1, p_1), (y_11, p_11), (y_12, p_12), ...
, (y_nd, p_nd) is consistent with monotone concave utility maximization, and again by Afriat's theorem there exists a utility u supporting a demand function f_b. ∎
The proof above relies on the fact that an agent has high utility, and high marginal utility, for very large bundles. In many cases it is reasonable to assume that the marginal utility for very large bundles is very small, or even that the utility or the marginal utility has compact support. Unfortunately, rescaling the previous example shows that even a compact set may contain a large shattered set. We notice, however, that in this case we obtain utility functions that yield demand functions that are very sensitive to small price changes. We show that the class of utility functions that have marginal utilities with compact support, and for which the relevant demand functions are income-Lipschitzian, has finite fat shattering dimension.
Figure 1: Utility function shattering x_1 and x_2.
Theorem 6. Let C be a set of L-income-Lipschitz demand functions from Δ^d to R^d_+ for some global constant L ∈ R. Then fat_C(γ) ≤ (L/γ)^d.
Proof: Let p_1, ..., p_n ∈ Δ^d be a shattered set with witnesses x_1, ..., x_n ∈ R^d_+. W.l.o.g. (x_i + H^+_0) ∩ (x_j + H^−_0) = ∅, implying (x_i + H^−_1) ∩ (x_j + H^+_1) = ∅; for a labeling b = (b_1, ...
, b_n) ∈ {0, 1}^n such that b_i = 0 and b_j = 1, ||f_b(p_i) − f_b(p_j)||_∞ > γ, hence ||p_i − p_j||_∞ > γ/L. A standard packing argument implies n ≤ (L/γ)^d. ∎
6. ACKNOWLEDGMENTS
The authors would like to thank Eli Shamir, Ehud Kalai, Julio González Díaz, Rosa Matzkin, Gad Allon and Adam Galambos for helpful discussions and suggestions.
7. REFERENCES
[1] Afriat, S. N. (1967) The Construction of a Utility Function from Expenditure Data. International Economic Review 8, 67-77.
[2] Anthony, M. and Bartlett, P. L. (1999) Neural Network Learning: Theoretical Foundations. Cambridge University Press.
[3] Blundell, R., Browning, M. and Crawford, I. (2003) Nonparametric Engel Curves and Revealed Preference. Econometrica 71(1), 205-240.
[4] Blundell, R. (2005) How Revealing is Revealed Preference? European Economic Journal 3, 211-235.
[5] Diewert, E. (1973) Afriat and Revealed Preference Theory. Review of Economic Studies 40, 419-426.
[6] Farkas, J. (1902) Über die Theorie der einfachen Ungleichungen. Journal für die Reine und Angewandte Mathematik 124, 1-27.
[7] Houthakker, H. (1950) Revealed Preference and the Utility Function. Economica 17, 159-174.
[8] Kearns, M. and Vazirani, U. (1994) An Introduction to Computational Learning Theory. The MIT Press, Cambridge, MA.
[9] Knoblauch, V. (1992) A Tight Upper Bound on the Money Metric Utility Function. The American Economic Review 82(3), 660-663.
[10] Mas-Colell, A. (1977) The Recoverability of Consumers' Preferences from Market Demand. Econometrica 45(6), 1409-1430.
[11] Mas-Colell, A. (1978) On Revealed Preference Analysis. The Review of Economic Studies 45(1), 121-131.
[12] Mas-Colell, A., Whinston, M. and Green, J. R. (1995) Microeconomic Theory. Oxford University Press.
[13] Matzkin, R. and Richter, M. (1991) Testing Strictly Concave Rationality. Journal of Economic Theory 53, 287-303.
[14] Papadimitriou, C. H. and Steiglitz, K.
(1982) Combinatorial Optimization. Dover Publications Inc.
[15] Richter, M. (1966) Revealed Preference Theory. Econometrica 34(3), 635-645.
[16] Uzawa, H. (1960) Preference and Rational Choice in the Theory of Consumption. In K. J. Arrow, S. Karlin, and P. Suppes, editors, Mathematical Models in Social Science. Stanford University Press, Stanford, CA.
[17] Teo, C. P. and Vohra, R. V. (2003) Afriat's Theorem and Negative Cycles. Working Paper.
[18] Samuelson, P. A. (1948) Consumption Theory in Terms of Revealed Preference. Economica 15, 243-253.
[19] Vapnik, V. N. (1998) Statistical Learning Theory. John Wiley & Sons Inc.
[20] Varian, H. R. (1982) The Non-Parametric Approach to Demand Analysis. Econometrica 50, 945-974.
[21] Varian, H. R. (2005) Revealed Preference. In Michael Szenberg, editor, Samuelson Economics and the 21st Century.
[22] Ziegler, G. M. (1994) Lectures on Polytopes. Springer.

Learning From Revealed Preference
ABSTRACT
A sequence of prices and demands is rationalizable if there exists a concave, continuous and monotone utility function such that the demands are the maximizers of the utility function over the budget sets corresponding to the prices. Afriat [1] presented necessary and sufficient conditions for a finite sequence to be rationalizable. Varian [20] and later Blundell et al.
[3, 4] continued this line of work, studying nonparametric methods to forecast demand. Their results essentially characterize learnability of degenerate classes of demand functions and therefore fall short of giving a general degree of confidence in the forecast. The present paper complements this line of research by introducing a statistical model and a measure of complexity through which we are able to study the learnability of classes of demand functions and derive a degree of confidence in the forecasts. Our results show that the class of all demand functions has unbounded complexity and therefore is not learnable, but that there exist interesting and potentially useful classes that are learnable from finite samples. We also present a learning algorithm that is an adaptation of a new proof of Afriat's theorem due to Teo and Vohra [17].
1. INTRODUCTION
A market is an institution by which economic agents meet and make transactions. Classical economic theory explains the incentives of the agents to engage in this behavior through the agents' preference over the set of available bundles, indicating that agents attempt to replace their current bundle with bundles that are both more preferred and attainable, if such bundles exist. The preference relation is therefore the key factor in understanding consumer behavior. One of the common assumptions in this theory is that the preference relation is represented by a utility function and that agents strive to maximize their utility given a budget constraint. This pattern of behavior is the essence of supply and demand, general equilibria and other aspects of consumer theory. Furthermore, as we elaborate in section 2, basic observations on market demand behavior suggest that utility functions are monotone and concave. This brings us to the question, first raised by Samuelson [18]: to what degree is this theory refutable? Given observations of price and demand, under what circumstances can we conclude that the
data is consistent with the behavior of a utility maximizing agent equipped with a monotone concave utility function and subject to a budget constraint? Samuelson gave a necessary but insufficient condition on the underlying preference known as the weak axiom of revealed preference. Uzawa [16] and Mas-Colell [10, 11] introduced a notion of income-Lipschitz and showed that demand functions with this property are rationalizable. These properties do not require any parametric assumptions and are technically refutable, but they do assume knowledge of the entire demand function and rely heavily on the differential properties of demand functions. Hence, an infinite amount of information is needed to refute the theory.
It is often the case that apart from the demand observations there is additional information on the system, and it is sensible to make parametric assumptions, namely, to stipulate some functional form of utility. Consistency with utility maximization would then depend on fixing the parameters of the utility function to be consistent with the observations and with a set of equations called the Slutsky equations. If such parameters exist, we conclude that the stipulated utility form is consistent with the observations. This approach is useful when there is reason to make these stipulations; it gives an explicit utility function which can be used to make precise forecasts on demand for unobserved prices. The downside of this approach is that real life data is often inconsistent with convenient functional forms. Moreover, if the observations are inconsistent, it is unclear whether this is a refutation of the stipulated functional form or of utility maximization.
Addressing these issues, Houthakker [7] noted that an observer can see only finite quantities of data. He asks when it can be determined that a finite set of observations is consistent with utility maximization without making parametric assumptions. He shows that rationalizability of a
finite set of observations is equivalent to the strong axiom of revealed preference. Richter [15] shows that the strong axiom of revealed preference is equivalent to rationalizability by a strictly concave monotone utility function. Afriat [1] gives another set of rationalizability conditions the observations must satisfy. Varian [20] introduces the generalized axiom of revealed preference (GARP), an equivalent form of Afriat's consistency condition that is easier to verify computationally. It is interesting to note that these necessary and sufficient conditions for rationalizability are essentially versions of the well known Farkas lemma [6] (see also [22]).
Afriat [1] proved his theorem by an explicit construction of a utility function witnessing consistency. Varian [20] took this one step further, progressing from consistency to forecasting. Varian's forecasting algorithm basically rules out bundles that are revealed inferior to observed bundles and finds a bundle from the remaining set that, together with the observations, is consistent with GARP. Furthermore, he introduces Samuelson's "money metric" as a canonical utility function and gives upper and lower envelope utility functions for the money metric. Knoblauch [9] shows these envelopes can be computed efficiently. Varian [21] provides an up-to-date survey of this line of research.
A different approach is presented by Blundell et al. [3, 4]. These papers introduce a model where an agent observes prices and Engel curves for these prices. This gives an improvement on Varian's original bounds, though the basic idea is still to rule out demands that are revealed inferior. This model is in a sense a hybrid between Mas-Colell's and Afriat's approaches. The former requires full information for all prices, the latter for a finite number of prices. On the other hand, the approach taken by Blundell et al.
requires full information only on a finite number of price trajectories. The motivation for this crossover is to utilize income segmentation in the population to restructure econometric information. Different segments of the population face the same prices with different budgets and, insofar as aggregate data can testify to individual preferences, show how demand varies with the budget. Applying nonparametric statistical methods, they reconstruct a trajectory from the observed demands of different segments and use it to obtain tighter bounds.
Both these methods would most likely give a good forecast for a fixed demand function after sufficiently many observations, assuming they were spread out in a reasonable manner. However, these methods do not consider the complexity of the demand functions and do not use any probabilistic model of the observations. Therefore, they are unable to provide any estimate of the number of observations that would be sufficient for a good forecast, or of the degree of confidence in such a forecast.
In this paper we examine the feasibility of demand forecasting with a high degree of confidence using Afriat's conditions. We formulate the question in terms of whether the class of demand functions derived from monotone concave utilities is efficiently PAC-learnable. Our first result is negative. We show, by computing the fat shattering dimension, that without any prior assumptions, the set of all demand functions induced by monotone concave utility functions is too rich to be efficiently PAC-learnable. However, under some prior assumptions on the set of demand functions, we show that the fat shattering dimension is finite and therefore the corresponding sets are PAC-learnable. In these cases, assuming the probability distribution by which the observed price-demand pairs are generated is fixed, we are in a position to offer a forecast and a probabilistic estimate of its accuracy.
In section 2 we briefly discuss the basic assumptions
of demand theory and their implications. In section 3 we present a new proof of Afriat's theorem, incorporating an algorithm for efficiently generating a forecasting function due to Teo and Vohra [17]. We show that this algorithm is computationally efficient and can be used as a learning algorithm. In section 4 we give a brief introduction to PAC learning, including several modifications for learning real vector valued functions. We introduce the notion of fat shattering dimension and use it to devise a lower bound on the sample complexity. We also sketch results on upper bounds. In section 5 we study the learnability of demand functions and directly compute the fat shattering dimension of the class of all demand functions and of a class of income-Lipschitzian demand functions with a bounded global income-Lipschitz constant.
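Varian's GARP, mentioned above as the computationally convenient form of Afriat's consistency condition, is easy to check directly. The sketch below is a minimal illustration only, not the paper's Algorithm 1, and the function name is ours: it builds the directly-revealed-preference relation from a finite set of (price, bundle) observations, closes it transitively, and searches for a violation.

```python
# Sketch: testing a finite set of (price, bundle) observations for
# consistency with GARP. Observations are equal-length tuples of floats,
# with each bundle assumed to exhaust its budget.

def satisfies_garp(prices, bundles):
    n = len(prices)
    dot = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))
    # x_i is directly revealed preferred to x_j if x_j was affordable
    # when x_i was chosen: p_i . x_i >= p_i . x_j.
    R = [[dot(prices[i], bundles[i]) >= dot(prices[i], bundles[j])
          for j in range(n)] for i in range(n)]
    # Transitive closure (Floyd-Warshall style) gives the indirect
    # revealed preference relation.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # GARP: if x_i is (indirectly) revealed preferred to x_j, then x_j
    # must not be strictly directly revealed preferred to x_i.
    for i in range(n):
        for j in range(n):
            if R[i][j] and dot(prices[j], bundles[j]) > dot(prices[j], bundles[i]):
                return False
    return True
```

On data generated by a well-behaved utility the check passes, while a classic cyclic pair of observations (each bundle strictly cheaper at the other's prices) fails it.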
[3, 4] continued this line of work studying nonparametric methods to forecasts demand.\nTheir results essentially characterize learnability of degenerate classes of demand functions and therefore fall short of giving a general degree of confidence in the forecast.\nThe present paper complements this line of research by introducing a statistical model and a measure of complexity through which we are able to study the learnability of classes of demand functions and derive a degree of confidence in the forecasts.\nOur results show that the class of all demand functions has unbounded complexity and therefore is not learnable, but that there exist interesting and potentially useful classes that are learnable from finite samples.\nWe also present a learning algorithm that is an adaptation of a new proof of Afriat's theorem due to Teo and Vohra [17].\n1.\nINTRODUCTION\nThe preference relation is therefore the key factor in understanding consumer behavior.\nOne of the common assumptions in this theory is that the preference relation is represented by a utility function and that agents strive to maximize their utility given a budget constraint.\nThis pattern of behavior is the essence of supply and demand, general equilibria and other aspects of consumer theory.\nFurthermore, as we elaborate in section 2, basic observations on market demand behavior suggest that utility functions are monotone and concave.\nThis brings us to the question, first raised by Samuelson [18], to what degree is this theory refutable?\nGiven observations of price and demand, under what circumstances can we conclude that the data is consistent with the behavior of a utility maximizing agent equipped with a monotone concave utility function and subject to a budget constraint?\nSamuelson gave a necessary but insufficient condition on the underlying preference known as the weak axiom of revealed preference.\nUzawa [16] and Mas-Colell [10, 11] introduced a notion of income-Lipschitz and showed that 
demand functions with this property are rationalizable.\nThese properties do not require any parametric assumptions and are technically refutable, but they do assume knowledge of the entire demand function and rely heavily on the differential properties of demand functions.\nHence, an infinite amount of information is needed to refute the theory.\nIt is often the case that apart form the demand observations there is additional information on the system and it is sensible to make parametric assumptions, namely, to stipulate some functional form of utility.\nConsistency with utility maximization would then depend on fixing the parameters of the utility function to be consistent with the observations and with a set of equations called the Slutski equations.\nIf such parameters exist, we conclude that the stipulated utility form is consistent with the observations.\nThis approach is useful when there is reason to make these stipulations, it gives an explicit utility function which can be used to make precise forecasts on demand for unob\nserved prices.\nThe downside of this approach is that real life data is often inconsistent with convenient functional forms.\nMoreover, if the observations are inconsistent it is unclear whether this is a refutation of the stipulated functional form or of utility maximization.\nHe askes when can it be determined that a finite set of observations is consistent with utility maximization without making parametric assumptions?\nHe showes that rationalizability of a finite set of observations is equivalent to the strong axiom of revealed preference.\nRichter [15] showes that strong axiom of revealed preference is equivalent to rationalizability by a strictly concave monotone utility function.\nAfriat [1] gives another set of rationalizability conditions the observations must satisfy.\nVarian [20] introduces the generalized axiom of revealed preference (GARP), an equivalent form of Afriat's consistency condition that is easier to verify 
computationally.\nAfriat [1] proved his theorem by an explicit construction of a utility function witnessing consistency.\nVarian [20] took this one step further progressing from consistency to forecasting.\nVarian's forecasting algorithm basically rules out bundles that are revealed inferior to observed bundles and finds a bundle from the remaining set that together with the observations is consistent with GARP.\nFurthermore, he introduces Samuelson's\" money metric\" as a canonical utility function and gives upper and lower envelope utility functions for the money metric.\nKnoblauch [9] shows these envelopes can be computed efficiently.\nA different approach is presented by Blundell et al. [3, 4].\nThese papers introduce a model where an agent observes prices and Engel curves for these prices.\nThis gives an improvement on Varian's original bounds, though the basic idea is still to rule out demands that are revealed inferior.\nThis model is in a sense a hybrid between Mas-Colell and Afriat's aproaches.\nThe former requires full information for all prices, the latter for a finite number of prices.\nOn the other hand the approach taken by Blundell et al. 
requires full information only on a finite number of price trajectories.\nDifferent segments of the population face the same prices with different budgets, and as much as aggregate data can testify on individual preferences, show how demand varies with the budget.\nApplying non parametric statistical methods, they reconstruct a trajectory from the observed demands of different segments and use it to obtain tighter bounds.\nBoth these methods would most likely give a good forecast for a fixed demand function after sufficiently many observations assuming they were spread out in a reasonable manner.\nHowever, these methods do not consider the complexity of the demand functions and do not use any probabilistic model of the observations.\nTherefore, they are unable to provide any estimate of the number of observations that would be sufficient for a good forecast or the degree of confidence in such a forecast.\nIn this paper we examine the feasibility of demand forecasting with a high degree of confidence using Afriat's conditions.\nWe formulate the question in terms of whether the class of demand functions derived from monotone concave utilities is efficiently PAC-learnable.\nOur first result is negative.\nWe show, by computing the fat shattering dimension, that without any prior assumptions, the set of all demand functions induced by monotone concave utility functions is too rich to be efficiently PAC-learnable.\nHowever, under some prior assumptions on the set of demand functions we show that the fat shattering dimension is finite and therefore the corresponding sets are PAC-learnable.\nIn section 2 we briefly discuss the basic assumptions of demand theory and their implications.\nIn section 3 we present a new proof to Afriat's theorem incorporating an algorithm for efficiently generating a forecasting function due to Teo and Vohra [17].\nWe show that this algorithm is computationally efficient and can be used as a learning algorithm.\nIn section 4 we give a brief 
introduction to PAC learning including several modifications to learning real vector valued functions.\nWe also sketch results on upper bounds.\nIn section 5 we study the learnability of demand functions and directly compute the fat shattering dimension of the class of all demand functions and a class of income-Lipschitzian demand functions with a bounded global income-Lipschitz constant.","lvl-2":"Learning From Revealed Preference\nABSTRACT\nA sequence of prices and demands are rationalizable if there exists a concave, continuous and monotone utility function such that the demands are the maximizers of the utility function over the budget set corresponding to the price.\nAfriat [1] presented necessary and sufficient conditions for a finite sequence to be rationalizable.\nVarian [20] and later Blundell et al. [3, 4] continued this line of work studying nonparametric methods to forecasts demand.\nTheir results essentially characterize learnability of degenerate classes of demand functions and therefore fall short of giving a general degree of confidence in the forecast.\nThe present paper complements this line of research by introducing a statistical model and a measure of complexity through which we are able to study the learnability of classes of demand functions and derive a degree of confidence in the forecasts.\nOur results show that the class of all demand functions has unbounded complexity and therefore is not learnable, but that there exist interesting and potentially useful classes that are learnable from finite samples.\nWe also present a learning algorithm that is an adaptation of a new proof of Afriat's theorem due to Teo and Vohra [17].\n1.\nINTRODUCTION\nA market is an institution by which economic agents meet and make transactions.\nClassical economic theory explains the incentives of the agents to engage in this behavior through the agents' preference over the set of available bundles indicating that agents attempt to replace their current bundle 
with bundles that are both more preferred and attainable if such bundles exist.\nThe preference relation is therefore the key factor in understanding consumer behavior.\nOne of the common assumptions in this theory is that the preference relation is represented by a utility function and that agents strive to maximize their utility given a budget constraint.\nThis pattern of behavior is the essence of supply and demand, general equilibria and other aspects of consumer theory.\nFurthermore, as we elaborate in section 2, basic observations on market demand behavior suggest that utility functions are monotone and concave.\nThis brings us to the question, first raised by Samuelson [18], to what degree is this theory refutable?\nGiven observations of price and demand, under what circumstances can we conclude that the data is consistent with the behavior of a utility maximizing agent equipped with a monotone concave utility function and subject to a budget constraint?\nSamuelson gave a necessary but insufficient condition on the underlying preference known as the weak axiom of revealed preference.\nUzawa [16] and Mas-Colell [10, 11] introduced a notion of income-Lipschitz and showed that demand functions with this property are rationalizable.\nThese properties do not require any parametric assumptions and are technically refutable, but they do assume knowledge of the entire demand function and rely heavily on the differential properties of demand functions.\nHence, an infinite amount of information is needed to refute the theory.\nIt is often the case that apart form the demand observations there is additional information on the system and it is sensible to make parametric assumptions, namely, to stipulate some functional form of utility.\nConsistency with utility maximization would then depend on fixing the parameters of the utility function to be consistent with the observations and with a set of equations called the Slutski equations.\nIf such parameters exist, we 
conclude that the stipulated utility form is consistent with the observations.\nThis approach is useful when there is reason to make these stipulations, it gives an explicit utility function which can be used to make precise forecasts on demand for unob\nserved prices.\nThe downside of this approach is that real life data is often inconsistent with convenient functional forms.\nMoreover, if the observations are inconsistent it is unclear whether this is a refutation of the stipulated functional form or of utility maximization.\nAddressing these issues Houthakker [7] noted that an observer can see only finite quantities of data.\nHe askes when can it be determined that a finite set of observations is consistent with utility maximization without making parametric assumptions?\nHe showes that rationalizability of a finite set of observations is equivalent to the strong axiom of revealed preference.\nRichter [15] showes that strong axiom of revealed preference is equivalent to rationalizability by a strictly concave monotone utility function.\nAfriat [1] gives another set of rationalizability conditions the observations must satisfy.\nVarian [20] introduces the generalized axiom of revealed preference (GARP), an equivalent form of Afriat's consistency condition that is easier to verify computationally.\nIt is interesting to note that these necessary and sufficient conditions for rationalizability are essentially versions of the well known Farkas lemma [6] (see also [22]).\nAfriat [1] proved his theorem by an explicit construction of a utility function witnessing consistency.\nVarian [20] took this one step further progressing from consistency to forecasting.\nVarian's forecasting algorithm basically rules out bundles that are revealed inferior to observed bundles and finds a bundle from the remaining set that together with the observations is consistent with GARP.\nFurthermore, he introduces Samuelson's\" money metric\" as a canonical utility function and gives upper 
and lower envelope utility functions for the money metric.\nKnoblauch [9] shows these envelopes can be computed efficiently.\nVarian [21] provides an up-to-date survey of this line of research.\nA different approach is presented by Blundell et al. [3, 4].\nThese papers introduce a model where an agent observes prices and Engel curves for these prices.\nThis gives an improvement on Varian's original bounds, though the basic idea is still to rule out demands that are revealed inferior.\nThis model is in a sense a hybrid between Mas-Colell's and Afriat's approaches.\nThe former requires full information for all prices, the latter for a finite number of prices.\nThe approach taken by Blundell et al., on the other hand, requires full information only on a finite number of price trajectories.\nThe motivation for this crossover is to utilize income segmentation in the population to restructure econometric information.\nDifferent segments of the population face the same prices with different budgets and, to the extent that aggregate data can testify about individual preferences, show how demand varies with the budget.\nApplying non-parametric statistical methods, they reconstruct a trajectory from the observed demands of different segments and use it to obtain tighter bounds.\nBoth these methods would most likely give a good forecast for a fixed demand function after sufficiently many observations, assuming these were spread out in a reasonable manner.\nHowever, these methods do not consider the complexity of the demand functions and do not use any probabilistic model of the observations.\nTherefore, they are unable to provide any estimate of the number of observations that would be sufficient for a good forecast, or of the degree of confidence in such a forecast.\nIn this paper we examine the feasibility of demand forecasting with a high degree of confidence using Afriat's conditions.\nWe formulate the question in terms of whether the class of demand functions derived from monotone concave
utilities is efficiently PAC-learnable.\nOur first result is negative.\nWe show, by computing the fat shattering dimension, that without any prior assumptions, the set of all demand functions induced by monotone concave utility functions is too rich to be efficiently PAC-learnable.\nHowever, under some prior assumptions on the set of demand functions we show that the fat shattering dimension is finite and therefore the corresponding sets are PAC-learnable.\nIn these cases, assuming the probability distribution by which the observed price-demand pairs are generated is fixed, we are in a position to offer a forecast and a probabilistic estimate of its accuracy.\nIn section 2 we briefly discuss the basic assumptions of demand theory and their implications.\nIn section 3 we present a new proof of Afriat's theorem incorporating an algorithm for efficiently generating a forecasting function due to Teo and Vohra [17].\nWe show that this algorithm is computationally efficient and can be used as a learning algorithm.\nIn section 4 we give a brief introduction to PAC learning including several modifications for learning real vector-valued functions.\nWe introduce the notion of fat shattering dimension and use it to derive a lower bound on the sample complexity.\nWe also sketch results on upper bounds.\nIn section 5 we study the learnability of demand functions and directly compute the fat shattering dimension of the class of all demand functions and of a class of income-Lipschitzian demand functions with a bounded global income-Lipschitz constant.\n2.\nUTILITY AND DEMAND\nA utility function u: Rn + \u2192 R is a function relating bundles of goods to cardinal values in a manner reflecting the preferences over the bundles.\nA rational agent with a budget that w.l.o.g. equals 1, facing a price vector p \u2208 Rn +, will choose from her budget set B (p) = {x \u2208 Rn +: p \u00b7 x \u2264 1} a bundle x \u2208 Rn + that maximizes her private utility.\nThe first assumption we make is that the function is monotone
increasing, namely, if x > y, in the sense that the inequality holds coordinatewise, then u (x) > u (y).\nThis reflects the assumption that agents will always prefer more of any one good.\nThis, of course, does not necessarily hold in practice, as in many cases excess supply may lead to storage expenses or other externalities.\nHowever, in such cases the demand will be an interior point of the budget set and the less preferred bundles won't be observed.\nThe second assumption we make on the utility is that all the marginals (partial derivatives) are monotone decreasing.\nThis is the law of diminishing marginal utility, which assumes that the larger the excess of one good over another, the less we value each additional unit of it.\nThese assumptions imply that the utility function is concave and monotone on the observations.\nThe demand function of the agent is the correspondence fu: Rn + \u2192 Rn + satisfying fu (p) = argmax {u (x): p \u00b7 x \u2264 1}.\nLet G = ([n], S \u222a E) be a digraph with weights wij = \u2212 1 if (i, j) \u2208 S and wij = 0 otherwise.\nD (A) has no negative cycles, hence G is acyclic and breadth-first search can assign potentials \u03c6i such that \u03c6j \u2264 \u03c6i + wij for (i, j) \u2208 S \u222a E.\nWe relabel the vertices so that \u03c61 \u2265 \u03c62 \u2265...\u2265 \u03c6n.\nLet\nWe show that for this choice of s, D (A, s) contains no negative weight cycle.\nSuppose C = (i1,..., ik) is a cycle in D (A, s).\nIf \u03c6 is constant on C then aij ij +1 = 0 for j = 1,..., k and we are done.\nOtherwise let iv \u2208 C be the vertex with smallest potential satisfying w.l.o.g.
\u03c6 (iv) <\u03c6 (iv +1).\nFor any cycle C in the digraph D (A, s), let (v, u) be an edge in C such that (i) v has the smallest potential among all vertices in C, and (ii) \u03c6u > \u03c6v.\nSuch an edge exists, for otherwise \u03c6i would be identical for all vertices i in C; in that case, all edges in C have non-negative edge weight in D (A, s).\nIf (iv, iv +1) \u2208 S \u222a E, then we have\na contradiction.\nHence (iv, iv +1) \u2208 T. Now, note that all vertices q in C with the same potential as iv must be incident to an edge (q, t) in C such that \u03c6 (t) \u2265 \u03c6 (q).\nHence the edge (q, t) must have non-negative weight, i.e., aq,t \u2265 0.\nLet p denote a vertex in C with the second smallest potential.\nNow, C has weight\nAlgorithm 1 returns, in polynomial time, a hypothesis that is a piecewise-linear function and agrees with the labeling of the observations, i.e., it has sample error zero.\nTo use this function to forecast demand for unobserved prices we need Algorithm 2, which maximizes the function over a given budget set.\nSince u (x) = mini {yi + si pi \u00b7 (x \u2212 xi)}, this is a linear program and can be solved in time polynomial in d and n as well as the size of the largest number in the input.\n[Algorithm 1, sketch: for each unvisited vertex j assign a potential \u03c6j by graph search; reorder indices so that \u03c61 \u2264 \u03c62 \u2264...\u2264 \u03c6n; then set the values yi, si for all 1 \u2264 i \u2264 n.]\n4.\nSUPERVISED LEARNING\nIn a supervised learning problem, a learning algorithm is given a finite sample of labeled observations as input and is required to return a model of the functional relationship underlying the labeling.\nThis model, referred to as a hypothesis, is usually a computable function that is used to forecast the labels of future observations.\nThe labels are usually binary values indicating the membership of the observed points in the set that is being learned.\nHowever, we are not limited to binary values and, indeed, in the demand functions we are studying
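The forecasting step just described can be sketched in code. Assuming the numbers yi, si from Afriat's construction are given, the hypothesis u(x) = min_i {yi + si pi · (x − xi)} can be evaluated directly; the exact maximization over a budget set is a linear program (Algorithm 2), for which the dense search below is only a dependency-free stand-in for d = 2:

```python
def afriat_utility(x, P, X, y, s):
    # u(x) = min_i { y_i + s_i * <p_i, x - x_i> }  (Afriat's piecewise-linear form)
    return min(
        yi + si * sum(pk * (xk - xik) for pk, xk, xik in zip(p, x, xi))
        for p, xi, yi, si in zip(P, X, y, s)
    )

def forecast_demand(p, P, X, y, s, steps=2000):
    """Approximate argmax of u over the budget line {x >= 0 : p.x = 1}, d = 2.

    Since u is monotone, the maximum lies on the budget line; we walk it
    densely instead of solving the LP exactly.
    """
    best_x, best_u = None, float("-inf")
    for k in range(steps + 1):
        x1 = k / steps / p[0]                 # x1 ranges over [0, 1/p1]
        x = (x1, (1 - p[0] * x1) / p[1])      # completes p.x = 1
        u = afriat_utility(x, P, X, y, s)
        if u > best_u:
            best_x, best_u = x, u
    return best_x
```

With a single observation P = [(1, 1)], X = [(0.5, 0.5)], y = [1], s = [1], the hypothesis reduces to u(x) = x1 + x2, and the forecast at prices (2, 1) is the corner (0, 1) of the budget line.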
the labels are real vectors.\nThe learning problem has three major components: estimation, approximation and complexity.\nThe estimation problem is concerned with the tradeoff between the size of the sample given to the algorithm and the degree of confidence we have in the forecast it produces.\nThe approximation problem is concerned with the ability of hypotheses from a certain class to approximate target functions from a possibly different class.\nThe complexity problem is concerned with the computational complexity of finding a hypothesis that approximates the target function.\nA parametric paradigm assumes that the underlying functional relationship comes from a well-defined family, such as the Cobb-Douglas production functions; the system must learn the parameters characterizing this family.\nSuppose that a learning algorithm observes a finite set of production data which it assumes comes from a Cobb-Douglas production function and returns a hypothesis that is a polynomial of bounded degree.\nThe estimation problem in this case would be to assess the sample size needed to obtain a good estimate of the coefficients.\nThe approximation problem would be to assess the error sustained from approximating a rational function by a polynomial.\nThe complexity problem would be the assessment of the time required to compute the polynomial coefficients.\nIn the probably approximately correct (PAC) paradigm, the learning of a target function is done by a class of hypothesis functions, which may or may not include the target function itself; it does not necessitate any parametric assumptions on this class.\nIt is also assumed that the observations are generated independently by some distribution on the domain of the relation and that this distribution is fixed.\nIf the class of target functions has finite 'dimensionality' then a function in the class is characterized by its values on a finite number of points.\nThe basic idea is to observe the labeling of a finite number
of points and find a function from a class of hypotheses which \"tends to agree\" with this labeling.\nThe theory tells us that if the sample is large enough then any function that \"tends to agree\" with the labeling will, with high probability, be a good approximation of the target function for future observations.\nThe prime objective of PAC theory is to develop the relevant notion of dimensionality and to formalize the tradeoff between dimensionality, sample size and the level of confidence in the forecast.\nIn the revealed preference setting, our objective is to use a set of observations of prices and demand to forecast demand for unobserved prices.\nThus the target function is a mapping from prices to bundles, namely f: Rd + \u2192 Rd +.\nThe theory of PAC learning for real-valued functions is concerned predominantly with functions from Rd to R.\nIn this section we introduce modifications of the classical notions of PAC learning to vector-valued functions and use them to prove a lower bound on the sample complexity.\nAn upper bound on the sample complexity can also be proved for our definition of fat shattering, but we do not present it here, as the proof is tedious and analogous to the proof of theorem 4.\nBefore we can proceed with the formal definition, we must clarify what we mean by forecast and tend to agree.\nIn the case of discrete learning, we would like to obtain a function h that with high probability agrees with f.\nWe would then take the probability Pr\u03c3 (f (x) = h (x)) as the measure of the quality of the estimation.\nDemand functions are real vector functions and we therefore do not expect f and h to agree with high probability.\nRather, we are content with having small mean square errors on all coordinates.\nThus, our measure of estimation error is given by:\nFor given observations S = {(p1, x1),..., (pn, xn)} we measure the agreement by the sample error\nA sample error minimization (SEM) algorithm is an algorithm that finds a
hypothesis minimizing the sample error erS (h).\nIn the case of revealed preference, there is a function that takes the sample error to zero.\nNevertheless, the upper bound theorem we use does not require the sample error to be zero.\nA set C is PAC-learnable by a hypothesis class H if there exists an algorithm L that, for a set of observations of length mL = mL (\u03b5, \u03b4) = Poly (1\/\u03b4, 1\/\u03b5), finds a function h from H such that er\u03c3 (f, h) <\u03b5 with probability 1 \u2212 \u03b4.\nThere may be several learning algorithms for C with different sample complexities.\nThe minimal mL is called the sample complexity of C. Note that in the definition there is no mention of the time complexity of finding h in H and of evaluating h (p).\nA set C is efficiently PAC-learnable if there is a Poly (1\/\u03b4, 1\/\u03b5) time algorithm for choosing h and evaluating h (p).\nFor discrete function sets, sample complexity bounds may be derived from the VC-dimension of the set (see [19, 8]).\nAn analog of this notion of dimension for real functions is the fat shattering dimension.\nWe use an adaptation of this notion to real vector-valued function sets.\nLet \u0393 \u2282 Rd + and let C be a set of real functions from \u0393 to Rd +.\nWe define the \u03b3-fat shattering dimension of C, denoted fatC (\u03b3), as the maximal size of a \u03b3-shattered set in \u0393.\nIf this size is unbounded then the dimension is infinite.\nTo demonstrate the usefulness of this notion we use it to derive a lower bound on the sample complexity.\nLEMMA 2.\nSuppose the functions {fb: b \u2208 {0, 1} n} witness the shattering of {p1,..., pn}.\nThen, for any x \u2208 Rd + and labels b, b' \u2208 {0, 1} n such that bi \u2260 b'i, either ||fb (pi) \u2212 x||\u221e > \u03b3\/2d or ||fb' (pi) \u2212 x||\u221e > \u03b3\/2d.\nProof: Since the max exceeds the mean, it follows that if fb and fb' correspond to labels such that bi \u2260 b'i then\nThis implies that for any x \u2208 Rd +, either ||fb (pi) \u2212 x||\u221e > \u03b3\/2d or ||fb' (pi) \u2212 x||
\u221e > \u03b3\/2d \u2737 THEOREM 3.\nSuppose that C is a class of functions mapping from \u0393 to Rd +.\nThen any learning algorithm L for C has sample complexity satisfying mL (\u03b5, \u03b4) \u2265 (1\/2) fatC (4d\u03b5).\nAn analog of this theorem for real-valued functions with a tighter bound can be found in [2]; this version will suffice for our needs.\nProof: Suppose n = (1\/2) fatC (4d\u03b5); then there exists a set \u0393S = {p1,..., p2n} that is shattered by C.\nIt suffices to show that at least one distribution requires a large sample.\nWe construct such a distribution.\nLet \u03c3 be the uniform distribution on \u0393S and CS = {fb: b \u2208 {0, 1} 2n} be the set of functions that witness the shattering of {p1,..., p2n}.\nLet fb be a function chosen uniformly at random from CS.\nIt follows from lemma 2 (with \u03b3 = 4d\u03b5) that for any fixed function h the probability that ||fb (p) \u2212 h (p)||\u221e > 2\u03b5 for p \u2208 \u0393S is at least as high as that of getting heads on a fair coin toss.\nTherefore Eb (||fb (p) \u2212 h (p)||\u221e) > \u03b5.\nSuppose for a sequence of observations z = ((pi1, x1),..., (pin, xn)) a learning algorithm L finds a function h.\nThe observation above and Fubini imply Eb (er\u03c3 (h, fb)) > \u03b5.\nRandomizing over the sample space we get Eb, z (er\u03c3 (h, fb)) > \u03b5.\nThis shows Ez (er\u03c3 (h, fb*)) > \u03b5 for some fb*.\nW.l.o.g. we may assume the error is bounded (since we are looking at what is essentially a finite set), therefore the probability that er\u03c3 (h, fb*) > \u03b5 cannot be too small, hence fb* is not PAC-learnable with a sample of size n \u2737 The following theorem gives an upper bound on the sample complexity required for learning a set of functions with finite fat shattering dimension.\nThe theorem is proved in [2] for real-valued functions; the proof for the real vector case is analogous and so omitted.\n5.\nLEARNING FROM REVEALED PREFERENCE\nAlgorithm 1 is an efficient learning algorithm in the sense
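The constants in this lower-bound argument fit together as follows (our paraphrase, under the assumption that the shattering margin \u03b3 is measured in the 1-norm across the d coordinates):

```latex
% Lemma 2: if b_i \neq b'_i, then since the max over d coordinates
% exceeds their mean,
\|f_b(p_i) - f_{b'}(p_i)\|_\infty
  \;\ge\; \frac{1}{d}\sum_{k=1}^{d}\bigl|f_b(p_i)_k - f_{b'}(p_i)_k\bigr|
  \;\ge\; \frac{\gamma}{d},
% so by the triangle inequality, for every x,
\max\bigl(\|f_b(p_i)-x\|_\infty,\;\|f_{b'}(p_i)-x\|_\infty\bigr)
  \;\ge\; \frac{\gamma}{2d}.
% Taking \gamma = 4d\varepsilon makes this separation 2\varepsilon,
% which drives the bound
m_L(\varepsilon,\delta) \;\ge\; \tfrac{1}{2}\,\mathrm{fat}_C(4d\varepsilon).
```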
that it finds a hypothesis with sample error zero in time polynomial in the number of observations.\nAs we have seen in section 4, the number of observations required to PAC-learn the demand depends on the fat shattering dimension of the class of demand functions, which in turn depends on the class of utility functions they are derived from.\nWe compute the fat shattering dimension for two classes of demands.\nThe first is the class of all demand functions; we show that this class has infinite fat shattering dimension (we give two proofs) and is therefore not PAC-learnable.\nThe second class we consider is the class of demand functions derived from utilities that have bounded support and are income-Lipschitz.\nWe show that this class has a finite fat shattering dimension that depends on the support and the income-Lipschitz constant.\nTHEOREM 5.\nLet C be the class of all demand functions from Rd + to Rd +; then fatC (\u03b3) = \u221e for all \u03b3 > 0.\nProof 1: For \u03b5 > 0 let pi = 2^(-i) p for i = 1,..., n be a set of price vectors inducing parallel budget sets Bi and let x1,..., xn be the intersections of these hyperplanes with an orthogonal line passing through the center.\nLet H0 and H1 be hyperplanes that are not parallel to p and let x'i \u2208 Bi \u2229 (xi + H0+) and x''i \u2208 Bi \u2229 (xi + H1\u2212) for i = 1...n (see figure 1).\nFor any labeling b = (b1,..., bn) \u2208 {0, 1} n let y = y (b) = (y1,..., yn) be a set of demands such that yi = x'i if bi = 0 and yi = x''i if bi = 1 (we omit an additional index b in y for notational convenience).\nTo show that p1,..., pn is shattered it suffices to find for every b a demand function fb supported by a concave utility such that fb (pi) = yi.\nTo show that such a function exists it suffices to show that Afriat's conditions are satisfied.\nSince yi is in the budget set, yi \u00b7 2^(-i) p = 1, therefore pi \u00b7 (yj \u2212 yi) = 2^(j-i) \u2212 1.\nThis shows that pi \u00b7 (yj \u2212 yi) \u2264 0 iff j \u2264 i [...] > \u03b3, hence ||pi \u2212 pj||\u221e >
\u03b3\/L.\nA standard packing argument implies n \u2264 (L\/\u03b3)^d \u2737","keyphrases":["learn from reveal prefer","reveal prefer","rationaliz","forecast","demand function","complex problem","probabl approxim correct","monoton concav util function","observ finit set","incom-lipschitz","fat shatter dimens","machin learn","fat shatter"],"prmu":["P","P","P","P","P","M","U","R","M","U","U","M","U"]} {"id":"C-17","title":"Deployment Issues of a VoIP Conferencing System in a Virtual Conferencing Environment","abstract":"Real-time services have been supported by and large on circuit-switched networks. Recent trends favour services ported on packet-switched networks. For audio conferencing, we need to consider many issues -- scalability, quality of the conference application, floor control and load on the clients\/servers -- to name a few. In this paper, we describe an audio service framework designed to provide a Virtual Conferencing Environment (VCE). The system is designed to accommodate a large number of end users speaking at the same time and spread across the Internet. The framework is based on Conference Servers [14], which facilitate the audio handling, while we exploit the SIP capabilities for signaling purposes. Client selection is based on a recent quantifier called \"Loudness Number\" that helps mimic a physical face-to-face conference. We deal with deployment issues of the proposed solution both in terms of scalability and interactivity, while explaining the techniques we use to reduce the traffic. We have implemented a Conference Server (CS) application on a campus-wide network at our Institute.","lvl-1":"Deployment Issues of a VoIP Conferencing System in a Virtual Conferencing Environment R. Venkatesha Prasad i Richard Hurni ii H.S. Jamadagni iii H.N.
Shankar iv i, iii Centre for Electronics Design and Technology Indian Institute of Science, Bangalore, India Telephone: +91\u00a080\u00a0360\u00a00810 i, iii {vprasad, hsjam}@cedt.\niisc.ernet.in ii hurni@ieee.org iv hn_shankar@yahoo.com ABSTRACT Real-time services have been supported by and large on circuit-switched networks.\nRecent trends favour services ported on packet-switched networks.\nFor audio conferencing, we need to consider many issues - scalability, quality of the conference application, floor control and load on the clients\/servers - to name a few.\nIn this paper, we describe an audio service framework designed to provide a Virtual Conferencing Environment (VCE).\nThe system is designed to accommodate a large number of end users speaking at the same time and spread across the Internet.\nThe framework is based on Conference Servers [14], which facilitate the audio handling, while we exploit the SIP capabilities for signaling purposes.\nClient selection is based on a recent quantifier called ``Loudness Number'' that helps mimic a physical face-to-face conference.\nWe deal with deployment issues of the proposed solution both in terms of scalability and interactivity, while explaining the techniques we use to reduce the traffic.\nWe have implemented a Conference Server (CS) application on a campus-wide network at our Institute.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - Client \/ Server, distributed applications.\nGeneral Terms Algorithms, Performance, Design, Theory.\n1.\nINTRODUCTION Today's Internet uses the IP protocol suite that was primarily designed for the transport of data and provides best-effort data delivery.\nDelay constraints and traffic characteristics separate traditional data applications on the one hand from voice & video applications on the other.\nHence, as increasingly time-sensitive voice and video applications are deployed on the Internet, its inadequacy is exposed.\nFurther, we
seek to port telephone services on the Internet.\nAmong them, the virtual conference (teleconference) facility is at the cutting edge.\nAudio and video conferencing on the Internet are popular [25] for the several advantages they offer [3, 6].\nClearly, the bandwidth required for a teleconference over the Internet increases rapidly with the number of participants; reducing bandwidth without compromising audio quality is a challenge in Internet Telephony.\nAdditional critical issues are: (a) packet delay, (b) echo, (c) mixing of audio from selected clients, (d) automatic selection of clients to participate in the conference, (e) playout of mixed audio for every client, (f) handling clients not capable of mixing audio streams (such clients are known as dumb clients), and (g) deciding the number of simultaneously active clients in the conference without compromising voice quality.\nWhile all the above requirements are from the technology point of view, the user's perspective and interactions are also essential factors.\nThere is plenty of discussion amongst the HCI and CSCW communities on the use of ethnomethodology for the design of CSCW applications.\nThe basic approach is to provide larger bandwidth, more facilities and more advanced control mechanisms, in the hope of better interaction quality.\nThis approach ignores the functional utility of the environment that is used for collaboration.\nEckehard Doerry [4] criticizes this approach by saying ``it is keeping form before function''.\nThus, the need is to take an approach that considers both aspects - the technical and the functional.\nRegarding the functional aspect, we refer to [15] where it has been dealt with in some detail.\nIn this work, we do not discuss video conferencing; its inclusion does not significantly benefit conference quality [4].\nOur focus is on virtual audio environments.\nWe first outline the challenges encountered in virtual audio conferences.\nThen we look into the motivations, followed by relevant
literature.\nIn Section 5, we explain the architecture of our system.\nSection 6 comprises a description of the various algorithms used in our setup.\nWe then address deployment issues.\nA discussion on performance follows.\nWe conclude with some implementation issues.\n(ii Swiss Federal Institute of Technology, Lausanne; former visitor at CEDT.\niv PESIT and NIAS, Bangalore, India.)\n2.\nCHALLENGES IN VoIP CONFERENCING Many challenges arise in building a VoIP application.\nThe following are of particular concern in the process: \u2022 Ease of use: Conferencing must be simple; users need no domain expertise.\nManagement (addition\/removal) of clients and servers must be uncomplicated.\nApplication development should not presuppose specific characteristics of the underlying system or of network layers.\nEase of use may include leveraging readily available, technically feasible and economically viable technologies.\n\u2022 Scalability: Conferencing must seem uninterrupted under heavy loads, i.e., when many additional users are added on.\nTraffic on the WAN should not grow appreciably with the total number of clients; else it will lead to congestion.\nSo a means to regulate traffic to a minimum is needed for this kind of real-time application.\n\u2022 Interactivity: In Virtual Conferencing Environments (VCEs), we intend a face-to-face-like conferencing application that mimics a ``real'' conference, where more vocal participants invite attention.\nTurn-taking in floor occupation by participants must be adapted gracefully to give a feel of natural transition.\n\u2022 Standardization: The solution must conform to established standards so as to gain interoperability and peer acceptance.\nThe above requirements are placed in the perspective of observations made in earlier works (vide Sections 3 and 4) and will steer the VCE design.\n3.\nTHE MOTIVATION Ramanathan and Rangan [20] have studied in detail the architectural configurations, comparing many conferencing
architecture schemes, taking into consideration network delay and the computation requirements for mixing.\nFunctional division and object-oriented architecture design that aid in implementation are presented in [1].\nAn overview of many issues involved in supporting a large conference is given in [8].\nH. P. Dommel [5] and many others highlight floor control as another pivotal aspect to be taken into account in designing a conferencing tool.\nTightly coupled conference control protocols on the Internet belong to the ITU-T H.323 family [9]; however, they are mainly for small conferences.\nThe latest IETF draft by Rosenberg and Schulzrinne [23] discusses conferencing models with SIP [22] in the background.\nAspects of implementation for centralized SIP conferencing are reported in [26].\nA new approach called partial mixing by Radenkovic [18] allows for mixed and non-mixed streams to coexist.\nIn all the above proposals, while there are some very useful suggestions, they share one or more of the following limitations: \u2022 In an audio conference, streams from all the clients need not be mixed.\nActually, mixing many arbitrary streams [24] from clients degrades the quality of the conference due to the reduction in the volume (spatial aspect of speech).\nThe number of streams mixed varies dynamically depending on the number of active participants.\nThis would lead to fluctuations in the volume of every individual participant, causing severe degradation in quality.\nCustomized mixing of streams is not possible when many clients are active.\nThere is a threshold on the number of simultaneous speakers above which increasing the number of speakers becomes counterproductive to conference quality.\nFixing the maximum number of simultaneous speakers is dealt with in a recent work [15] using ethnomethodology; the maximum is conjectured to be three.\nThus it is advisable to honour that constraint.\n\u2022 There cannot be many intermediate mixers (similarly, Conference Servers as in [10]) in
stages as in [20], because that brings in inordinate delay by increasing the number of hops and is not scalable with interactivity in focus.\n\u2022 Floor control for an audio conference (even a video conference) with explicit turn-taking instructions to participants renders the conference essentially a one-speaker-at-a-time affair, not a live and free-to-interrupt one.\nThis way, the conference becomes markedly artificial and its quality degrades.\nSchulzrinne et al. [24] assume only one participant is speaking at a time.\nIn this case, if applications are implemented with some control [5], the service becomes ``gagging'' for the users.\n\u2022 Partial mixing [18] has a problem similar to that of full mixing when more streams are mixed.\nMoreover, in [18], to allow impromptu speech, mixing is not done when the network can afford the high bandwidth requirements for sending\/receiving all the streams, but this is unnecessary [15].\n\u2022 For large conferences [23, 10], a centralized conference cannot scale up.\nWith multicasting, clients will have to parse many streams, and traffic on a client's network increases unnecessarily.\nEvidently, each of these works tackles particular issues, all of which are a subset of the requirements (defined in [14] and [16]) for VoIP conferencing support.\nThus there is a need to address conferencing as a whole, with all its requirements considered concurrently.\nTowards this goal, the VoIP conferencing system we propose is intended to be scalable and interactive.\nWe make use of the ``Loudness Number'' for implementing floor control.\nThis permits a participant to freely get into the speaking mode to interrupt the current speaker, as in a natural face-to-face meeting.\nAn upper limit on the number of floors (i.e., the number of speakers allowed to speak at the same time) is fixed using a conjecture proposed in [15].\nThe work presented here is in continuation of our studies into conferencing based on the Session Initiation Protocol in [14] and [16].\nSIP,
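The floor-control policy sketched in this section (a cap of three simultaneous speakers, ranked by Loudness Number) could be realized along the following lines; the data layout, field names and saturating mix are our own illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Stream:
    client_id: str
    loudness_number: float   # long-term activity measure, assumed precomputed
    samples: list            # one packet of 16-bit PCM samples

MAX_FLOORS = 3  # conjectured cap on simultaneous speakers [15]

def select_and_mix(streams, for_client):
    """Pick the MAX_FLOORS streams with highest Loudness Number and mix them.

    The destination client's own stream is excluded from its mix (no echo).
    """
    candidates = [s for s in streams if s.client_id != for_client]
    chosen = sorted(candidates, key=lambda s: s.loudness_number, reverse=True)[:MAX_FLOORS]
    if not chosen:
        return []
    mixed = [0] * len(chosen[0].samples)
    for s in chosen:
        for i, v in enumerate(s.samples):
            mixed[i] += v
    # clamp to the 16-bit range instead of wrapping around
    return [max(-32768, min(32767, v)) for v in mixed]
```

Capping the selection keeps the per-client downlink traffic and the mixing load independent of the total number of participants, which is the scalability point made above.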
defined in [22], is now the most popular standard for VoIP deployment and has been chosen for its strength, ease of use, extensibility and compatibility.\nThis is why it will be in the background of all the controlling messages that implicitly arise between the entities in our architecture.\nThe actual messages are described in [16] and, as such, we do not present a complete description of them here.\n4.\nRELATED WORK The SIP standard defined in RFC 3261 [22] and in later extensions such as [21] does not offer conference control services such as floor control or voting and does not prescribe how a conference is to be managed.\nHowever, SIP can be used to initiate a session that uses some other conference control protocol.\nThe core SIP specification supports many models for conferencing [26, 23].\nIn the server-based models, a server mixes media streams, whereas in a server-less conference, mixing is done at the end systems.\nSDP [7] can be used to define media capabilities and provide other information about the conference.\nWe shall now consider a few conference models in SIP that have been proposed recently [23].\nFirst, let us look into server-less models.\nIn End-System Mixing, only one client (SIP UA) handles the signaling and media mixing for all the others, which is clearly not scalable and causes problems when that particular client leaves the conference.\nIn the Users Joining model, a tree grows, as each invited party constitutes a new branch in the distribution path.\nThis leads to an increasing number of hops for the remote leaves and is not scalable.\nAnother option would be to use multicast for conferencing, but multicast is not enabled over the Internet and is presently only possible on a LAN.\nAmong server-based models, in a Dial-In Conference, UAs connect to a central server that handles all the mixing.\nThis model is not scalable as it is limited by the processing power of the server and the bandwidth of the network.\nAdhoc Centralized Conferences and
Dial-Out Conference Servers have similar mechanisms and problems.\nHybrid models involving centralized signaling and distributed media, with the latter using unicast or multicast, raise scalability problems as before.\nHowever, an advantage is that the conference control can be a third-party solution.\nDistributed Partial Mixing, presented in [18], proposes that in case of bandwidth limitation, some streams are mixed and some are not, leaving interactivity intact.\nLoss of spatialism when they mix and the bandwidth increase when they do not are open problems.\nA related study [19] by the same author proposes a conferencing architecture for Collaborative Virtual Environments (CVEs) but does not address scalability in the absence of multicasting.\nWith the limitations of proposed conferencing systems in mind, we will now detail our proposal.\n5.\nSYSTEM ARCHITECTURE This section is dedicated to the description of the proposed system architecture.\nHowever, as this paper constitutes the continuation of our work started in [14] and furthered in [16], we will not present here all the details about the proposed entities, and we invite the readers to consult the papers mentioned above for a full and thorough description.\nFirst, we do not restrict our conferencing system to work on small conferences only, but rather on large audio VCEs that have hundreds (or even thousands) of users across a Wide Area Network (WAN) such as the Internet.\nThis view stems from an appraisal that VCEs will gain in importance in the future, as interactive audio conferences will be more popular because of the spread of the media culture around the world.\nTwo issues must be taken care of when building a VoIP conferencing system: (i) the front-end, consisting of the application program running on the end-users'' computers, and (ii) the back-end, comprising the application programs that facilitate conferencing.\nThe participating users are grouped into several
domains. These domains are Local Area Networks (LANs), such as corporate or educational networks. This distributed setting calls for distributed control and media-handling solutions, as centralized systems would not scale for such very large conferences (see Section 4). More explicitly, in each domain we can identify several relevant logical components of a conferencing facility (Fig. 1):

An arbitrary number of end users (clients) that can take part in at most one audio conference at a time. Every user is included in one and only one domain at a given instant, but can move from domain to domain (nomadism). In our conferencing environment, these clients are regular SIP User Agents (SIP UAs), as defined in [22], so as to gain interoperability with other existing SIP-compatible systems. These clients are thus not aware of the complex setting that supports the conference, as highlighted below.

Fig. 1. Conference example: 3 domains containing the necessary entities so that the conference can take place.

One SIP Server (SIPS) per domain, taking care of all the signaling aspects of the conference (clients joining, leaving, etc.)
[16]. In particular, it is a physical implementation encompassing different logical roles, namely a SIP Proxy Server, a SIP Registrar Server, a SIP Redirect Server and a SIP B2BUA (Back-to-Back User Agent) [22]. This physical implementation enables incoming/outgoing SIP messages to be handled by one or another logical entity according to the needs. The SIPS is entrusted with maintaining the overall service and has many advantages: (a) it works as a centralized entity that can keep track of the activities of the UAs in a conference; (b) it can do all the switching for providing PBX features; (c) it can locate the UAs and invite them to a conference; (d) it can do the billing as well. SIPSs in different domains communicate with each other using SIP messages as described in [16]. If the load on a particular SIPS is too heavy, it can create another SIPS in the same domain so that the load is shared.

One Master Conference Server (M-CS) (or simply a Conference Server (CS)) for each conference, created by the local SIPS when the conference starts. This server handles the media packets for the clients of the domain. Its mechanism is described in the next section. The M-CS is able to create a hierarchy of CSs inside a domain by adding one or more Slave CSs (S-CSs), so as to accommodate all the active clients while preventing its own flooding. We will look at this mechanism in some detail in the sequel.

The entities described here are exhaustive and conform to the SIP philosophy. Thus, the use of SIP makes this architecture more useful and interoperable with any other SIP clients or servers.

6. ALGORITHMIC ISSUES
6.1 Selecting the Streams
Similar to SipConf in [27], a Conference Server (CS) [17] has the function of supporting the conference; it is responsible for handling audio streams using RTP. It can also convert audio stream formats for a given client if necessary, and can act as the Translator/Mixer of the RTP specification behind firewalls. We have based the design of our CS on the H.323 Multipoint Processor (MP) [9]. In short, the MP receives audio streams from the endpoints involved in a centralized or hybrid multipoint conference, processes them and returns them to the endpoints. An MP that processes audio prepares NMax audio outputs from M input streams after selection, mixing, or both. Audio mixing requires decoding the input audio to linear signals (PCM or analog), performing a linear combination of the signals and re-encoding the result in an appropriate audio format. The MP may eliminate or attenuate some of the input signals in order to reduce noise and unwanted components. The limitation of H.323 is that it does not address the scalability of a conference: the architecture proposes a cascaded or daisy-chain topology [10], which can be shown not to scale up for a large conference.

Fig. 2. Schematic diagram of a CS.

A CS serves many clients in the same conference, and thus handles only one conference at a time. Multiple CSs may coexist in a domain, as when several conferences are under way. Signaling-related messages of CSs are dealt with in [11]. The working of a CS is illustrated in Fig. 2: for each mixing interval, CS 1 chooses the best NMax audio packets out of the M1 it may possibly receive (using a criterion termed "Loudness Number", described in the next subsection), and sends these to CSs 2 to P. The set of packets sent is denoted ToOtherCSs. In the same mixing interval, it also receives the best NMax audio packets (out of possibly M2) from CS 2, and similarly the best NMax (out of possibly MP) from CS P.
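The exchange just described can be sketched in a few lines. The following is a minimal illustration (our own, not the authors' implementation): packets are modelled as (client_id, score) pairs, where "score" stands in for the selection criterion (the Loudness Number introduced in Section 6.2), and NMax = 3.

```python
# Minimal sketch of the per-interval packet exchange between Conference
# Servers. Each CS sends its best N_MAX local packets to every other CS,
# then selects the final set from the union of sent and received packets.

N_MAX = 3

def top_n(packets, n=N_MAX):
    """Return the best n packets, ranked by score in decreasing order."""
    return sorted(packets, key=lambda p: p[1], reverse=True)[:n]

def exchange_and_select(domains):
    """domains: one packet list per CS. Returns the selected set F per CS."""
    # Each CS first picks its ToOtherCSs set and sends it to every other CS.
    to_others = [top_n(pkts) for pkts in domains]
    selected = []
    for i in range(len(domains)):
        from_others = [p for j, sent in enumerate(to_others)
                       if j != i for p in sent]
        # ToOtherCSs ∪ FromOtherCSs is identical at every CS ...
        union = to_others[i] + from_others
        # ... so each CS independently computes the same final set F.
        selected.append(top_n(union))
    return selected
```

Since every CS forms the same union, each domain independently arrives at the same set F, which is the common-view property argued for below.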
For simplicity, we ignore the propagation delay between CSs; it can indeed be taken into account, but that is beyond the scope of this presentation. The set of packets received is denoted FromOtherCSs. Finally, the CS selects the best NMax packets from the set {ToOtherCSs ∪ FromOtherCSs} and passes these packets to its own group. It can be seen that the set {ToOtherCSs ∪ FromOtherCSs} is the same at all CSs. This ensures that every client in the conference finally receives the same set of packets for mixing; hence all clients obtain a common view of the conference. Similarly, for each time slot (packet time), a subset F of all clients is selected (using the same criterion) from the pool of packets from all other CSs plus the NMax clients selected locally. Their packets are mixed and played out at the clients. According to [15], the cardinality of F, |F|, equals NMax and is fixed at three. In our conferencing setup, selection is performed by the Master Conference Server (M-CS), which comes into the picture exclusively for media handling. Note that even though the SIP specification enables direct UA-to-UA media communication in a one-to-one call, it is also possible to use the Conference Server for two-party calls, especially because it is then straightforward to grow the call into a real conference by adding a third and subsequently more participants. There are cases wherein the processing capacity of an M-CS is exceeded, as it may have too many packets (from the local domain and from remote domains) to process. In that case, the M-CS will create one or more S-CSs (Fig.
6) and transfer its own clients as well as the new clients to them. In this configuration, the algorithm outlined above is slightly modified: the audio packets go from the clients to their dedicated S-CS, which selects NMax packets to send to the local M-CS, which in turn selects NMax packets from all its S-CSs in the domain before sending them to the remote domains. The incoming packets from other domains are received by the M-CS, which selects NMax of them and sends them directly to the domain clients, bypassing the S-CSs. This change implies that at most three intermediate entities exist for each audio packet, instead of two in the conventional setup. As the extra hop happens inside the LAN, which is assumed to have high-speed connectivity, it should not prevent us from using this hierarchy of CSs when there is a need to do so.

6.2 Loudness Number (LN)
A basic question to be answered by the CS is the following: in a mixing interval, how should it choose NMax packets out of the M it might possibly receive? One way is to rank the M packets received according to their energies and choose the top NMax. However, this is usually found to be inadequate, because random fluctuations in packet energies can lead to poor audio quality. This indicates the need for a metric different from mere individual packet energies. The metric should have the following characteristics [12]:
• A speaker (floor occupant) should not be cut off by a spike in the packet energy of another speaker. This implies that a speaker's speech history should be given some weight. This is often referred to as Persistence or Hangover.
• A participant who wants to interrupt a speaker will have to (i) speak loudly and (ii) keep trying for a little while. In a face-to-face conference, body language often indicates the intent to interrupt; in the blind (audio-only) conference under discussion, a participant's intention to interrupt can be conveyed effectively through LN.
A
floor control mechanism empowered to cut off a speaker forcefully must be ensured. These requirements are met by the Loudness Number [12], which changes smoothly with time so that the selection (addition and deletion) of clients for the conference is graceful. LN (= λ) is a function of the amplitude of the current audio stream plus the activity and amplitude over a specific window in the past.

Fig. 3. The different windows used for LN computation.

The Loudness Number is updated on a packet-by-packet basis. The basic parameter used here is the packet amplitude, calculated as the root mean square (rms) of the energies of the audio samples of a packet and denoted X_K. Three windows are defined, as shown in Fig. 3. The present amplitude level of the speaker is found by calculating the moving average of the packet amplitude X_K within a window called the Recent Past Window, stretching from the present instant to some past time. The past activity of the speaker is found by calculating the moving average of X_K within a window called the Distant Past Window, which starts where the Recent Past Window ends and stretches back for a pre-defined interval. The activity of the speaker further in the past is found with a window called the Activity Horizon, which spans the Recent Past and Distant Past windows and beyond if necessary. Though the contribution of the Activity Horizon looks similar to those of the Recent Past and Distant Past windows, past activity is computed from the Activity Horizon window differently. Define the quantities computed over these three intervals as L1, L2 and L3. L1 quantifies the Recent Past speech activity, L2 the Distant Past speech activity, and L3 gives a number corresponding to the speech activity in the Activity Horizon window, quantifying how active the speaker was in the past few intervals. L3 yields a quantity that is proportional to the fraction of packets having energies above a pre-defined
threshold (Eq. 3). The threshold is invariant across clients.

L_1 = \frac{1}{W_{RP}} \sum_{K = t_P - W_{RP} + 1}^{t_P} X_K    (1)

L_2 = \frac{1}{W_{DP}} \sum_{K = t_P - W_{RP} - W_{DP} + 1}^{t_P - W_{RP}} X_K    (2)

L_3 = \frac{\theta}{W_{AH}} \sum_{K = t_P - W_{AH} + 1}^{t_P} I_{\{X_K \ge \theta\}}    (3)

where I_{\{X_K \ge \theta\}} = 1 if X_K \ge \theta, and 0 otherwise; W_{RP}, W_{DP} and W_{AH} are the widths of the Recent Past, Distant Past and Activity Horizon windows, respectively, and t_P is the present packet instant. The threshold θ is a constant; it is set at 10-20 percent of the amplitude of the voice samples of a packet in our implementation. The Loudness Number λ for the present time instant (or the present packet) is calculated as

λ = α_1 L_1 + α_2 L_2 + α_3 L_3    (4)

where α_1, α_2 and α_3 are chosen such that 0 < α_1, α_2 < 1 and α_3 = 1 - (α_1 + α_2). Here, α_1 is the weight given to the recent past speech, α_2 the weight given to distant past speech, and α_3 the weight given to the speech activity in the Activity Horizon window.

6.3 Safety, Liveness and Fairness
The λ parameter has some memory, depending on the spread of the windows. After one conferee becomes silent, another can take the floor. Also, as there is more than one channel, interruption is enabled. A loud conferee is more likely to be heard because of an elevated λ. This ensures fairness to all conferees; after all, even in a face-to-face conference, a more vocal speaker grabs special attention. All these desirable characteristics are embedded in the LN. A comprehensive discussion of the selection of the various parameters and the dynamics of LN is beyond the scope of this paper.

6.4 Selection Algorithm using the LN
Following the developments in subsections 6.1 and 6.2, we present the simple algorithm that runs at each Master Conference Server (Algorithm 1). This algorithm is based on the discussions in Section 6.1. The globally unique set F is found using this procedure.

Repeat for each time slot at each M-CS {
1. Get all the packets from the Clients that belong to it.
2. Find at most NMax Clients that have maximum λ out of the M Clients in its domain.
3. Store a copy of packets
from those NMax Clients in database DB1.
4. Send these NMax packets to the other M-CSs (over Unicast or Multicast, depending on the configuration).
5. Similarly, receive packets from all other M-CSs and store them in database DB2.
6. Now compare the packets in DB1 and DB2 on the basis of λ and select a maximum of NMax amongst them (to form set F) that should be played out at each Client.
7. Send the NMax packets in set F to the Clients in its domain.
8. Mix these NMax audio packets in set F after linearising and send the mix to dumb Clients in the domain.
}
Algorithm 1. Selection algorithm.

The mechanism proposed here is also depicted in Fig. 6, where a single conference takes place between three domains. The shaded clients are the ones selected in their local domains; their audio streams are sent to the other CSs.

7. DEPLOYMENT ISSUES
We now analyze deployment issues associated with conference management. How are domains to be organized so as to maximize the number of participants able to join? To address this, we define some useful parameters. Let d be the number of different domains in which there are active clients in a given conference. Let M_i be the number of active clients present in domain i (1 ≤ i ≤ d) in a given conference. The total number of active clients in the conference is thus M = \sum_{i=1}^{d} M_i. Let C be the maximum number of audio streams a Conference Server can handle in a packet time, also called its capacity. C is set according to the processing power of the weakest CS in the conference; as this cannot be assumed to be known a priori, it can be set according to some minimum system requirement a machine must meet in order to take part in a conference. Let NMax be the number of output streams a CS has to send to the CSs in remote domains (see Section 6.1). We set NMax = 3 (= |F|), according to [15]. The optimization problem is now to find the value of d that maximizes the total number of clients M_i served by one CS in a
domain with capacity C. We first dispose of the case where the capacity is not exceeded (the existing CS is not overloaded), and then proceed to the case where more CSs must be created because a single CS is overloaded. We assume that clients are equally distributed amongst the domains, as we may not have information to assume any other a priori distribution of the clients. We can specify no more than an upper bound on the number of clients acceptable, given the number of active domains d.

7.1 Conferencing with only One Level of CSs
In this subsection, we consider that we have only one CS, i.e., a unique M-CS, in each domain; thus it cannot be overloaded. The system works as outlined in Section 6.1: the Clients send their audio packets to their local CS, which selects NMax streams before sending them to the other CSs. In parallel, it receives NMax streams from every other CS before deciding which NMax streams will be selected, sent and played out at each individual client. For system stability, any CS in the conference should be able to handle its local clients in addition to the audio packets from the other domains. Clearly then, the following inequality must hold for every domain:

C \ge \frac{M}{d} + N_{Max} (d - 1)    (5)

The limiting case of (5) (taking the equality) takes the form

M = (C + N_{Max}) d - N_{Max} d^2    (6)

To optimize d with respect to M, we set

\frac{\partial M}{\partial d} = -2 N_{Max} d + (C + N_{Max}) = 0    (7)

yielding

d = \left[ \frac{C + N_{Max}}{2 N_{Max}} \right]^*    (8)

([·]* = rounding to the nearest integer), and hence M from (6).

Table 1. Values of d and M computed for some values of C with NMax = 3.
  C     d      M
  50    9     234
 100   17     884
 150   26    1950
 200   34    3434
 250   42    5334
 300   51    7650
 350   59   10384
 400   67   13534
 450   76   17100
 500   84   21084

In Table 1, we give the values of d and M computed using (8) and (6) with NMax = 3. The values of d and M, being dependent on C, are therefore based on the weakest CS. We see that there
is a trade-off between M and d: we could admit more domains into the conference, but at the expense of restricting the total number of clients M in the conference. While implementing and testing the Conference Servers on a Pentium III 1.4 GHz machine running Windows NT, we were able to set C = 300; with the advent of faster computers (> 3 GHz), one can easily set C to higher values and determine d and M accordingly. Fig. 4 shows a contour plot, and Fig. 5 a 3D mesh, of optimized solutions for CSs of different capacities. These lead us to maximize the number of domains and, hence, the total number of clients, based on the capacity of the various CSs. In Fig. 4, the individual curves represent the total number of clients targeted; for a capacity C and a targeted M, we select the lower value of d to reduce traffic on the WAN. Fig. 5 presents a different perspective of the same data in 3D.

Fig. 4. Contour plot of capacity versus optimum number of domains for various conference sizes.

7.2 Conferencing with Two Levels of CSs
Now consider the case where the number of clients in a particular domain is too large, i.e.,

M_i \ge \frac{M}{d}    (9)

so that one has to avoid denial of service to new clients due to overloading of the Conference Server. This problem can be solved by introducing a second level of CSs inside the given domain, as in Fig.
6. The existing M-CS creates a Slave CS (S-CS) that can handle up to C end-users and to which it transfers all its active clients. Here, the system works differently from the outline in Section 6.1: the Clients send their audio packets to their local S-CS, which selects NMax streams before sending them to the local M-CS, which proceeds in the same way before sending NMax streams to the other domains. Each newly created S-CS must run on a separate machine. The M-CS has to create more S-CSs if the number of active clients exceeds C in the course of the conference after the transfer. With this mechanism, the M-CS is able to create at most

U = \left\lfloor \frac{C - N_{Max} (d - 1)}{N_{Max}} \right\rfloor S-CSs,    (10)

as it must handle NMax (= 3) packets from each local S-CS and NMax (= 3) packets from each remote domain. We can then calculate the maximum theoretical number of active clients M_i = U \cdot C in each domain, as well as M = d \cdot U \cdot C for the whole conference.

Fig. 5. 3D plot of capacity versus optimum number of domains for various conference sizes.

Of course, one could further create a third level in the hierarchy, giving the possibility of accommodating even more clients. This may be unnecessary, as the number of possible clients is already large enough with two levels.

8. PERFORMANCE DISCUSSION
We now analyze the performance of the algorithm presented in subsection 6.4, i.e., the one taking care of the exchange of audio packets between the different domains. Note that packets transiting within a LAN take advantage of its higher capacity (generally coupled with multicast capabilities) and therefore do not require a performance analysis. Thus we have to look only at the RTP packets over the WAN, i.e., between participating M-CSs. As each M-CS sends only NMax out of the M/d packets of its domain to the other CSs (M/d >> NMax), the bandwidth used by the application over the WAN is upper-bounded by the following expression. The total number
of audio packets transiting over the WAN in each time slot is

\sum_{i=1}^{d} \sum_{j=1, j \ne i}^{d} N_{Max} = d (d - 1) N_{Max},

which is quadratic in the number of domains (i.e., O(d^2)). However, it is independent of the total number of active clients. This would not have been the case had all packets been sent over the network in each time slot; the saving is tremendous. Yet, one may contend that sending three packets to and from all domains is a waste of resources, as most of these streams will not be selected. If just one client is active, selecting a subset of clients in that domain is unnecessary. The pessimistic and optimistic algorithms presented in the sequel aim at reducing the traffic further by harnessing the slowly varying nature of the LN.

8.1 Pessimistic Algorithm
Consider a scenario wherein the lowest LN (called LNt) of the three globally selected streams (the set F of Section 6.1) exceeds the LN of the most dominant stream of a domain. Evidently, the chances of the next two dominant streams of that domain being selected into F in the next packet period are low. Here, we send this most dominant stream and withhold the other two. There may be an error in the unique selection across all domains, but for one packet period only; as LN varies slowly, the error is automatically rectified in a subsequent packet period (slot). In this algorithm, at least one stream is sent from each domain in each period. The net network traffic in a packet period is, in the best case, d (d - 1), i.e., O(d^2) using unicast, instead of d (d - 1) N_{Max}. Considerable valuable bandwidth can be saved using this heuristic. In multicast-enabled networks, the resulting traffic complexity reduces from O(d^2) to O(d).

Initialize LNt = 0 at each M-CS/S-CS.
A. In the first time slot (packet time), each CS sends the top NMax streams (based on their LN) to all other CSs.
At each M-CS/S-CS and for each packet time:
B.
Find the value of the lowest LN of the NMax globally selected streams (set F) from the previous time slot. Set LNt to this value.
C. At each CS domain, select the NMax local streams that have the maximum value of LN (the ToOtherCSs set).
D. Select streams that have LN > LNt:
IF there are >= NMax streams with LN > LNt, then send the top NMax to other CSs.
ELSE IF there are (NMax - 1) streams with LN > LNt, then send the top (NMax - 1) plus the one below LNt (i.e., the top NMax) to other CSs.
ELSE IF there are (NMax - 2) streams with LN > LNt, then send the top (NMax - 2) plus the one below LNt (i.e., the top (NMax - 1)) to other CSs.
......
ELSE IF there are NO streams with LN > LNt, then send the top 1 stream to other CSs.
E. Packets sent in step D form DB1. Packets received from other CSs form DB2.
F. For this time slot, find the global NMax streams based on LN from DB1 ∪ DB2 (set F).
G. Send set F to the clients in its domain. Update LNt for the next period.
Algorithm 2. Pessimistic algorithm to reduce the number of packets sent over the Internet.

Fig. 6. Example of a 2-level hierarchy of Conference Servers; the shaded Clients are the ones selected by the M-CS and will be sent to other domains' CSs.

In this algorithm, the saving in traffic comes at the cost of relaxing the condition of formation of the globally unique set F.
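The case analysis in step D collapses to a one-line rule: with k the number of local candidate streams whose LN exceeds LNt, a CS sends its top min(NMax, k + 1) streams, so at least one stream always goes out. A minimal sketch of this send decision (our illustration, not the authors' code):

```python
# Sketch of step D of the pessimistic algorithm: decide which of the local
# top-N_MAX streams to forward, given LN_t (the lowest LN of the previously
# selected global set F).

N_MAX = 3

def streams_to_send(local_lns, ln_t):
    """local_lns: LN values of the local clients; returns the LNs to forward."""
    ranked = sorted(local_lns, reverse=True)[:N_MAX]   # ToOtherCSs candidates
    k = sum(1 for ln in ranked if ln > ln_t)           # streams beating LN_t
    return ranked[:min(N_MAX, k + 1)]                  # always at least one
```

For example, if only the single most dominant local stream exceeds LNt (k = 1), the top two are sent; if none exceeds it (k = 0), only the most dominant one is.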
However, the discrepancies in the selected streams at different domains remain only for a short period of time, depending on the transportation delay between any two domains. Even for a total delay of 400 ms, uniqueness is lost for only 10 packet time slots; in a real-time interactive conversation this duration is not perceivable by the listener. In the case that there is a joke and everyone laughs, there would be a sudden rise in the number of packets, upper-bounded by O(d^2) \cdot N_{Max} for a short period.

8.2 Optimistic Algorithm
The traffic can be reduced further. The scheme in the following algorithm (Algorithm 3) withholds all the streams that have a lower value of LN than the least of the three in the set F. The correct and unique three streams are found after a few time slots, depending on the transportation delay between the domains. As the packet period is of the order of 40 ms, the error in the selection is unnoticeable. The number of streams on the network in this case is always restricted to NMax (= 3). Even without Voice Activity Detection (VAD), there will be no more than three streams in the network in the best case; thus the total traffic is constant. A sudden burst of traffic, as described in 8.1, is a particular case. These advantages are due to the exploitation of the characteristics of LN.

Initialize LNt = 0 at each M-CS/S-CS.
A. In the first time slot (packet time), each CS sends the top NMax streams (based on their LN) to all other CSs.
At each M-CS/S-CS and for each packet time:
B. Find the value of the lowest LN of the NMax globally selected streams (set F) from the previous time slot. Set LNt to this value.
C. At each CS domain, select the NMax local streams that have the maximum value of LN (the ToOtherCSs set).
D. Select streams that have LN > LNt:
IF there are >= NMax streams with LN > LNt, then send the top NMax to other CSs.
ELSE IF there are (NMax - 1) streams with LN > LNt, then send the top (NMax - 1) and see E.
ELSE IF there are (NMax - 2) streams with LN > LNt, then send the top (NMax - 2) and see E.
......
ELSE IF there are NO streams with LN > LNt, then don't send any stream.
E. Exceptions: IF the stream that was in F in the last interval belongs to this CS, then select and send that stream even if its LN is now < LNt. (Note this occurs only at the CS which had the stream that was the last of the three in the previous packet period.)
F. Packets sent in steps D and E form DB1. Packets received from other CSs form DB2.
G. For this time slot, find the global NMax streams based on LN from DB1 ∪ DB2 (set F).
H. Send set F to the clients in its domain. Update LNt for the next period.
Algorithm 3. Optimistic algorithm to reduce the number of packets sent over the Internet.

Furthermore, when VAD is used [13], traffic is reduced further: for withheld streams, only the header part of the RTP packet is sent rather than the whole packet, so that the LN is kept updated across domains. The traffic in this case is O(NMax) for multicast and O(d) for unicast. We see that the above algorithms save bandwidth and computation at each CS, and lead to a scalable architecture with multiple CSs, mainly because clients are grouped into domains. The necessary bandwidth does not depend on the total number of active clients. As each CS always chooses the best three clients out of all the clients assigned to it in its domain, the addition of new clients to an existing conference does not cause any scalability problem.

8.3 Availability of Multicasting
In the proposed architecture, no assumption was made about the availability of multicasting support from the network. The traffic will be further reduced if multicasting is available over the WAN; it is simple to show that the order of traffic would then tend from O(d^2) to O(d). This is an approximation, as the saving from multicasting also depends on the topology. The analysis was done for the case wherein multicast is not available (a realistic assumption in
today's Internet). The advantage of this setup is that we can use it even if multicasting is only partially available: we can instruct CSs during the set-up phase to send unicast packets to those CSs that cannot receive multicast packets, whereas CSs on multicast-enabled routers can exchange packets on a multicast address. The data structures and conference objects inside a CS are given in [14].

Fig. 7. User interface for setting the weights for the NMax audio streams (setting the Self bar to zero avoids echo).

8.4 Quality Improvement
The observed improvement in the perceived quality of the conference service is due to: (1) Limiting the number of concurrent speakers to a low number such as three. Generally, in a conference, if more than two participants speak simultaneously, intelligibility is lost; conversational analysis demonstrates that a repair mechanism sets in in such a case [15]. (2) Delay: the audio stream between any two clients passes through at most two CSs, reducing the end-to-end delay. For a large conference there might be three CSs; however, one hop is within the domain, incurring negligible delay. (3) As the streams are mixed only at the clients, each client can have a customized mix of the streams; with individual tuning of the mixing weights, spatialism is preserved. Fig.
7 shows the user interface for the same. The echo that arises when the self-stream is selected can be avoided by reducing its weight; nonetheless, this feedback helps in reassuring the speaker that he/she is heard by all.

9. CONCLUSION
In this paper, we have presented a discussion of a voice-only virtual conferencing environment. We have argued that the distributed nature of its deployment makes it scalable. Interactivity is achieved by adapting a recent stream selection scheme based on the Loudness Number. Additionally, we incorporate a result from a more recent work [15] in which the sufficiency of three simultaneous speakers has been demonstrated. Thus, bandwidth is utilized significantly more effectively. A mixed stream is played out at each client; each client may choose a customized mix, since mixing is done at the local terminal of each client. These render impromptu speech in a virtual teleconference over VoIP a reality, as in a real face-to-face conference. Compatibility is assured thanks to the use of SIP, the most sought-after signaling protocol. To ensure satisfying performance, we do not demand the availability of multicast, but use it if and when available. The traffic in the WAN (Internet) is upper-bounded by the square of the number of domains (and is further reduced by the heuristic algorithms), which is far below the total number of clients in the conference. This is due to the use of a Conference Server local to each domain. VAD techniques help reduce traffic further. Using the SIP standard for signaling makes this solution highly interoperable. We have implemented a CS application on a campus-wide network. We believe this new generation of virtual conferencing environments will gain popularity in the future, as their ease of deployment is assured thanks to readily available technologies and scalable frameworks.

10. REFERENCES
[1] L. Aguilar et al., "Architecture for a Multimedia Teleconferencing System", in Proceedings of the ACM SIGCOMM, Aug
1986, pp. 126-136.
[2] C. Bormann, J. Ott et al., "Simple Conference Control Protocol", Internet Draft, Dec. 1996.
[3] M. Decina and V. Trecordi, "Voice over Internet Protocol and Human Assisted E-Commerce", IEEE Communications Magazine, Sept. 1999, pp. 64-67.
[4] E. Doerry, "An Empirical Comparison of Copresent and Technologically-Mediated Interaction Based on Communicative Breakdown", PhD thesis, Graduate School of the University of Oregon, 1995.
[5] H. P. Dommel and J. J. Garcia-Luna-Aceves, "Floor Control for Multimedia Conferencing and Collaboration", J. Multimedia Systems, Vol. 5, No. 1, January 1997, pp. 23-38.
[6] A. Dutta-Roy, "Virtual Meetings with Desktop Conferencing", IEEE Spectrum, July 1998, pp. 47-56.
[7] M. Handley and V. Jacobson, "SDP: Session Description Protocol", RFC 2327, IETF, April 1998.
[8] M. Handley, J. Crowcroft et al., "Very Large Conferences on the Internet: the Internet Multimedia Conferencing Architecture", Journal of Computer Networks, Vol. 31, No. 3, Feb. 1999, pp. 191-204.
[9] ITU-T Rec. H.323, "Packet-Based Multimedia Communications Systems", Vol. 2, 1998.
[10] P. Koskelainen, H. Schulzrinne and X. Wu, "A SIP-based Conference Control Framework", NOSSDAV'02, May 2002, pp. 53-61.
[11] R. Venkatesha Prasad et al., "Control Protocol for VoIP Audio Conferencing Support", International Conference on Advanced Communication Technology, Mu-Ju, South Korea, Feb. 2001, pp. 419-424.
[12] R. Venkatesha Prasad et al., "Automatic Addition and Deletion of Clients in VoIP Conferencing", 6th IEEE Symposium on Computers and Communications, July 2001, Hammamet, Tunisia, pp. 386-390.
[13] R. Venkatesha Prasad, H. S. Jamadagni, Abjijeet et al., "Comparison of Voice Activity Detection Algorithms", 7th IEEE Symposium on Computers and Communications, July 2002, Sicily, Italy, pp. 530-535.
[14] R.
Venkatesha Prasad, Richard Hurni, H S Jamadagni, A Scalable Distributed VoIP Conferencing using SIP, Proc.\nof the 8th IEEE Symposium on Computers and Communications, Antalya, Turkey, June 2003.\n[15] R Venkatesha Prasad, H S Jamadagni and H N Shankar, ``On Problem of Specifying Number of Floors in a Voice Only Conference'', To appear in IEEE ITRE 2003.\n[16] R. Venkatesha Prasad, Richard Hurni, H S Jamadagni, ``A Proposal for Distributed Conferencing on SIP using Conference Servers'', To appear in the Proc.\nof MMNS 2003, Belfast, UK, September 2003.\n[17] R. Venkatesha Prasad, H.S. Jamadagni, J. Kuri, R.S. Varchas, A Distributed VoIP Conferencing Support Using Loudness Number, Tech.\nRep. TR-CEDT-TE-03-01 [18] M. Radenkovic et al, ``Scaleable and Adaptable Audio Service for Supporting Collaborative Work and Entertainment over the Internet'', SSGRR 2002, L'Aquila, Italy, Jan. 2002.\n[19] M. Radenkovic, C. Greenhalgh, S. Benford, Deployment Issues for Multi-User Audio Support in CVEs, ACM VRST 2002, Nov. 2002, pp. 179-185.\n[20] Srinivas Ramanathan, P. Venkata Rangan, Harrick M. Vin, Designing Communication Architectures for Interorganizational Multimedia Collaboration, Journal of Organizational Computing, 2 (3&4), pp.277-302, 1992.\n[21] A. B. Roach, '' Session Initiation Protocol (SIP)-Specific Event Notification'', RFC 3265, IETF, June 2002.\n[22] J. Rosenberg, H. Schulzrinne et al., ``SIP: Session Initiation Protocol'', RFC 3261, IETF, June 2002.\n[23] J. Rosenberg, H. Schulzrinne, Models for Multy Party Conferencing in SIP, Internet Draft, IETF, July 2002.\n[24] H. Schulzrinne et al., ``RTP: a transport protocol for realtime applications'', RFC 1889, IETF, Jan 1996.\n[25] Lisa R. 
Silverman, ``Coming of Age: Conferencing Solutions Cut Corporate Costs'', White Paper, www.imcca.org\/wpcomingofage.asp.\n[26] K. Singh, G. Nair and H. Schulzrinne, ``Centralized Conferencing using SIP'', Proceedings of the 2nd IP-Telephony Workshop (IPTel), April 2001.\n[27] D. Thaler, M. Handley and D. Estrin, ``The Internet Multicast Address Allocation Architecture'', RFC 2908, IETF, Sept. 2000.","lvl-3":"Deployment Issues of a VoIP Conferencing System in a Virtual Conferencing Environment\nABSTRACT\nReal-time services have by and large been supported on circuit-switched networks.\nRecent trends favour services ported onto packet-switched networks.\nFor audio conferencing, we need to consider many issues--scalability, quality of the conference application, floor control and load on the clients\/servers--to name a few.\nIn this paper, we describe an audio service framework designed to provide a Virtual Conferencing Environment (VCE).\nThe system is designed to accommodate a large number of end users, spread across the Internet, speaking at the same time.\nThe framework is based on Conference Servers [14], which facilitate the audio handling, while we exploit SIP capabilities for signaling purposes.\nClient selection is based on a recent quantifier called \"Loudness Number\" that helps mimic a physical face-to-face conference.\nWe deal with deployment issues of the proposed solution in terms of both scalability and interactivity, while explaining the techniques we use to reduce traffic.\nWe have implemented a Conference Server (CS) application on a campus-wide network at our Institute.\n1.\nINTRODUCTION\nToday's Internet uses the IP protocol suite, which was primarily designed for the transport of data and provides best-effort delivery.\nDelay constraints and traffic characteristics separate traditional data applications on the one hand from voice and video applications on the other.\nHence, as progressively time-sensitive voice and video applications are deployed
on the Internet, the inadequacy of the Internet is exposed.\nFurther, we seek to port telephone services onto the Internet.\nAmong them, the virtual conference (teleconference) facility is at the cutting edge.\nAudio and video conferencing on the Internet are popular [25] for the several advantages they offer [3,6].\nClearly, the bandwidth required for a teleconference over the Internet increases rapidly with the number of participants; reducing bandwidth without compromising audio quality is a challenge in Internet Telephony.\nAdditional critical issues are: (a) packet delay, (b) echo, (c) mixing of audio from selected clients, (d) automatic selection of clients to participate in the conference, (e) playout of mixed audio for every client, (f) handling clients not capable of mixing audio streams (such clients are known as \"dumb clients\"), and (g) deciding the number of simultaneously active clients in the conference without compromising voice quality.\nWhile all the above requirements are from the technology point of view, the user's perspective and interactions are also essential factors.\nThere is plenty of discussion amongst the HCI and CSCW communities on the use of Ethnomethodology for the design of CSCW applications.\nThe basic approach is to provide larger bandwidth, more facilities and more advanced control mechanisms, in the hope of better quality of interaction.\nThis approach ignores the functional utility of the environment that is used for collaboration.\nEckehard Doerry [4] criticizes this approach by saying \"it is keeping form before function\".\nThus, the need is to take an approach that considers both aspects--the technical and the functional.\nRegarding the functional aspect, we refer to [15], where it has been dealt with in some detail.\nIn this work, we do not discuss video conferencing; its inclusion does not significantly benefit conference quality [4].\nOur focus is on virtual audio environments.\nWe first outline the challenges encountered in virtual
audio conferences.\nThen we look into the motivations, followed by the relevant literature.\nIn Section 5, we explain the architecture of our system.\nSection 6 describes the various algorithms used in our setup.\nWe then address deployment issues.\nA discussion on performance follows.\nWe conclude with some implementation issues.\n2.\nCHALLENGES IN VoIP CONFERENCING\n3.\nTHE MOTIVATION\n4.\nRELATED WORK\nThe SIP standard defined in RFC 3261 [22] and in later extensions such as [21] does not offer conference control services such as floor control or voting and does not prescribe how a conference is to be managed.\nFig. 1.\nConference example--3 domains containing the necessary entities so that the conference can take place.\nHowever, SIP can be used to initiate a session that uses some other conference control protocol.\nThe core SIP specification supports many models for conferencing [26, 23].\nIn the server-based models, a server mixes media streams, whereas in a server-less conference, mixing is done at the end systems.\nSDP [7] can be used to define media capabilities and provide other information about the conference.\nWe shall now consider a few conference models in SIP that have been proposed recently [23].\nFirst, let us look into server-less models.\nIn End-System Mixing, only one client (SIP UA) handles the signaling and media mixing for all the others, which is clearly not scalable and causes problems when that particular client leaves the conference.\nIn the Users Joining model, a tree grows, as each invited party constitutes a new branch in the distribution path.\nThis leads to an increasing number of hops for the remote leaves and is not scalable.\nAnother option would be to use multicast for conferencing, but multicast is not enabled over the Internet and is presently only possible on a LAN.\nAmong server-based models, in a Dial-In Conference, UAs connect to a central server that handles all the mixing.\nThis model is not scalable as it is
limited by the processing power of the server and the bandwidth of the network.\nAdhoc Centralized Conferences and Dial-Out Conference Servers have similar mechanisms and problems.\nHybrid models involving centralized signaling and distributed media, with the latter using unicast or multicast, raise scalability problems as before.\nHowever, an advantage is that the conference control can be a third-party solution.\nDistributed Partial Mixing, presented in [18], proposes that in case of bandwidth limitation, some streams are mixed and some are not, leaving interactivity intact.\nLoss of spatialism when they mix, and the bandwidth increase when they do not, are open problems.\nA related study [19] by the same author proposes a conferencing architecture for Collaborative Virtual Environments (CVEs) but does not address scalability in the absence of multicasting.\nWith the limitations of proposed conferencing systems in mind, we will now detail our proposal.\n5.\nSYSTEM ARCHITECTURE\n6.\nALGORITHMIC ISSUES\n6.1 Selecting the Streams\n6.2 Loudness Number (LN)\n6.3 Safety, Liveness and Fairness\n6.4 Selection Algorithm using the LN\n7.\nDEPLOYMENT ISSUES\n7.1 Conferencing with only One Level of CSs\n7.2 Conferencing with Two Levels of CSs\n8.\nPERFORMANCE DISCUSSION\n8.1 Pessimistic Algorithm\n8.2 Optimistic Algorithm\n8.3 Availability of Multicasting\n8.4 Quality Improvement\n9.\nCONCLUSION\nIn this paper, we have presented a voice-only virtual conferencing environment.\nWe have argued that the distributed nature of its deployment makes it scalable.\nInteractivity is achieved by adapting a recent stream selection scheme based on the Loudness Number.\nAdditionally, we incorporate a result from a more recent work [15], where the sufficiency of three simultaneous speakers has been demonstrated.\nThus, bandwidth is utilized significantly more effectively.\nA mixed stream is played out at each client; each client may choose to have a customized
mix, since mixing is done at the local terminal of each client.\nTogether, these render impromptu speech in a virtual teleconference over VoIP a reality, as in a real face-to-face conference.\nCompatibility is assured thanks to the use of SIP, the most sought-after signaling protocol.\nTo ensure satisfactory performance, we do not demand the availability of multicast, but use it if and when available.\nThe traffic in the WAN (Internet) is upper-bounded by the square of the number of domains (further reduced by using heuristic algorithms), which is far below the total number of clients in the conference.\nThis is due to the use of a Conference Server local to each domain.\nVAD techniques help reduce traffic further.\nUsing the SIP standard for signaling makes this solution highly interoperable.\nWe have implemented a CS application on a campus-wide network.\nWe believe this new generation of virtual conferencing environments will gain popularity in the future, as their ease of deployment is assured by readily available technologies and scalable frameworks.","lvl-4":"Deployment Issues of a VoIP Conferencing System in a Virtual Conferencing Environment\nABSTRACT\nReal-time services have by and large been supported on circuit-switched networks.\nRecent trends favour services ported onto packet-switched networks.\nFor audio conferencing, we need to consider many issues--scalability, quality of the conference application, floor control and load on the clients\/servers--to name a few.\nIn this paper, we describe an audio service framework designed to provide a Virtual Conferencing Environment (VCE).\nThe system is designed to accommodate a large number of end users, spread across the Internet, speaking at the same time.\nThe framework is based on Conference Servers [14], which facilitate the audio handling, while we exploit SIP capabilities for signaling purposes.\nClient selection is based on a recent quantifier called \"Loudness Number\" that helps mimic a physical
face-to-face conference.\nWe deal with deployment issues of the proposed solution in terms of both scalability and interactivity, while explaining the techniques we use to reduce traffic.\nWe have implemented a Conference Server (CS) application on a campus-wide network at our Institute.\n1.\nINTRODUCTION\nToday's Internet uses the IP protocol suite, which was primarily designed for the transport of data and provides best-effort delivery.\nDelay constraints and traffic characteristics separate traditional data applications on the one hand from voice and video applications on the other.\nHence, as progressively time-sensitive voice and video applications are deployed on the Internet, the inadequacy of the Internet is exposed.\nFurther, we seek to port telephone services onto the Internet.\nAmong them, the virtual conference (teleconference) facility is at the cutting edge.\nAudio and video conferencing on the Internet are popular [25] for the several advantages they offer [3,6].\nClearly, the bandwidth required for a teleconference over the Internet increases rapidly with the number of participants; reducing bandwidth without compromising audio quality is a challenge in Internet Telephony.\nThere is plenty of discussion amongst the HCI and CSCW communities on the use of Ethnomethodology for the design of CSCW applications.\nThe basic approach is to provide larger bandwidth, more facilities and more advanced control mechanisms, in the hope of better quality of interaction.\nThis approach ignores the functional utility of the environment that is used for collaboration.\nThus, the need is to take an approach that considers both aspects--the technical and the functional.\nIn this work, we do not discuss video conferencing; its inclusion does not significantly benefit conference quality [4].\nOur focus is on virtual audio environments.\nWe first outline the challenges encountered in virtual audio conferences.\nThen we look into the motivations, followed by the relevant literature.\nIn Section 5, we explain
the architecture of our system.\nSection 6 describes the various algorithms used in our setup.\nWe then address deployment issues.\nA discussion on performance follows.\nWe conclude with some implementation issues.\n4.\nRELATED WORK\nThe SIP standard defined in RFC 3261 [22] and in later extensions such as [21] does not offer conference control services such as floor control or voting and does not prescribe how a conference is to be managed.\nFig. 1.\nConference example--3 domains containing the necessary entities so that the conference can take place.\nHowever, SIP can be used to initiate a session that uses some other conference control protocol.\nThe core SIP specification supports many models for conferencing [26, 23].\nIn the server-based models, a server mixes media streams, whereas in a server-less conference, mixing is done at the end systems.\nSDP [7] can be used to define media capabilities and provide other information about the conference.\nWe shall now consider a few conference models in SIP that have been proposed recently [23].\nFirst, let us look into server-less models.\nIn End-System Mixing, only one client (SIP UA) handles the signaling and media mixing for all the others, which is clearly not scalable and causes problems when that particular client leaves the conference.\nThis leads to an increasing number of hops for the remote leaves and is not scalable.\nAnother option would be to use multicast for conferencing, but multicast is not enabled over the Internet and is presently only possible on a LAN.\nAmong server-based models, in a Dial-In Conference, UAs connect to a central server that handles all the mixing.\nThis model is not scalable as it is limited by the processing power of the server and the bandwidth of the network.\nAdhoc Centralized Conferences and Dial-Out Conference Servers have similar mechanisms and problems.\nHybrid models involving centralized signaling and distributed media, with the latter using unicast or
multicast, raise scalability problems as before.\nHowever, an advantage is that the conference control can be a third-party solution.\nLoss of spatialism when they mix, and the bandwidth increase when they do not, are open problems.\nA related study [19] by the same author proposes a conferencing architecture for Collaborative Virtual Environments (CVEs) but does not address scalability in the absence of multicasting.\nWith the limitations of proposed conferencing systems in mind, we will now detail our proposal.\n9.\nCONCLUSION\nIn this paper, we have presented a voice-only virtual conferencing environment.\nWe have argued that the distributed nature of its deployment makes it scalable.\nInteractivity is achieved by adapting a recent stream selection scheme based on the Loudness Number.\nThus, bandwidth is utilized significantly more effectively.\nTogether, these render impromptu speech in a virtual teleconference over VoIP a reality, as in a real face-to-face conference.\nThe traffic in the WAN (Internet) is upper-bounded by the square of the number of domains (further reduced by using heuristic algorithms), which is far below the total number of clients in the conference.\nThis is due to the use of a Conference Server local to each domain.\nVAD techniques help reduce traffic further.\nUsing the SIP standard for signaling makes this solution highly interoperable.\nWe have implemented a CS application on a campus-wide network.\nWe believe this new generation of virtual conferencing environments will gain popularity in the future, as their ease of deployment is assured by readily available technologies and scalable frameworks.","lvl-2":"Deployment Issues of a VoIP Conferencing System in a Virtual Conferencing Environment\nABSTRACT\nReal-time services have by and large been supported on circuit-switched networks.\nRecent trends favour services ported onto packet-switched networks.\nFor audio conferencing, we need to consider many
issues--scalability, quality of the conference application, floor control and load on the clients\/servers--to name a few.\nIn this paper, we describe an audio service framework designed to provide a Virtual Conferencing Environment (VCE).\nThe system is designed to accommodate a large number of end users, spread across the Internet, speaking at the same time.\nThe framework is based on Conference Servers [14], which facilitate the audio handling, while we exploit SIP capabilities for signaling purposes.\nClient selection is based on a recent quantifier called \"Loudness Number\" that helps mimic a physical face-to-face conference.\nWe deal with deployment issues of the proposed solution in terms of both scalability and interactivity, while explaining the techniques we use to reduce traffic.\nWe have implemented a Conference Server (CS) application on a campus-wide network at our Institute.\n1.\nINTRODUCTION\nToday's Internet uses the IP protocol suite, which was primarily designed for the transport of data and provides best-effort delivery.\nDelay constraints and traffic characteristics separate traditional data applications on the one hand from voice and video applications on the other.\nHence, as progressively time-sensitive voice and video applications are deployed on the Internet, the inadequacy of the Internet is exposed.\nFurther, we seek to port telephone services onto the Internet.\nAmong them, the virtual conference (teleconference) facility is at the cutting edge.\nAudio and video conferencing on the Internet are popular [25] for the several advantages they offer [3,6].\nClearly, the bandwidth required for a teleconference over the Internet increases rapidly with the number of participants; reducing bandwidth without compromising audio quality is a challenge in Internet Telephony.\nAdditional critical issues are: (a) packet delay, (b) echo, (c) mixing of audio from selected clients, (d) automatic selection of clients to participate in the conference, (e) playout of mixed
audio for every client, (f) handling clients not capable of mixing audio streams (such clients are known as \"dumb clients\"), and (g) deciding the number of simultaneously active clients in the conference without compromising voice quality.\nWhile all the above requirements are from the technology point of view, the user's perspective and interactions are also essential factors.\nThere is plenty of discussion amongst the HCI and CSCW communities on the use of Ethnomethodology for the design of CSCW applications.\nThe basic approach is to provide larger bandwidth, more facilities and more advanced control mechanisms, in the hope of better quality of interaction.\nThis approach ignores the functional utility of the environment that is used for collaboration.\nEckehard Doerry [4] criticizes this approach by saying \"it is keeping form before function\".\nThus, the need is to take an approach that considers both aspects--the technical and the functional.\nRegarding the functional aspect, we refer to [15], where it has been dealt with in some detail.\nIn this work, we do not discuss video conferencing; its inclusion does not significantly benefit conference quality [4].\nOur focus is on virtual audio environments.\nWe first outline the challenges encountered in virtual audio conferences.\nThen we look into the motivations, followed by the relevant literature.\nIn Section 5, we explain the architecture of our system.\nSection 6 describes the various algorithms used in our setup.\nWe then address deployment issues.\nA discussion on performance follows.\nWe conclude with some implementation issues.\n2.\nCHALLENGES IN VoIP CONFERENCING\nMany challenges arise in building a VoIP application.\nThe following are of particular concern:\n\u2022 Ease of use: Conferencing must be simple; users need no domain expertise.\nManagement (addition\/removal) of clients and servers must be uncomplicated.\nApplication development should not presuppose specific
characteristics of the underlying system or of network layers.\nEase of use may include leveraging readily available, technically feasible and economically viable technologies.\n\u2022 Scalability: Conferencing must seem uninterrupted under heavy loads, i.e., when many additional users are added on.\nTraffic on the WAN should not grow appreciably with the total number of clients; else it leads to congestion.\nSo a means to keep traffic to a minimum is needed for this kind of real-time application.\n\u2022 Interactivity: In Virtual Conferencing Environments (VCEs), we intend a face-to-face-like conferencing application that mimics a \"real\" conference, where more vocal participants invite attention.\nTurn-taking in floor occupation by participants must be handled gracefully to give a feel of natural transition.\n\u2022 Standardization: The solution must conform to established standards so as to gain interoperability and peer acceptance.\nThe above requirements are placed in the perspective of observations made in earlier works (vide Sections 3 and 4) and will steer the VCE design.\n3.\nTHE MOTIVATION\nRamanathan and Rangan [20] have studied architectural configurations in detail, comparing many conferencing schemes while taking into consideration the network delay and the computation required for mixing.\nFunctional division and object-oriented architecture design that aid in implementation are presented in [1].\nAn overview of many issues involved in supporting a large conference is given in [8].\nH. P. Dommel [5] and many others highlight floor control as another pivotal aspect to be taken into account in designing a conferencing tool.\nTightly coupled conference control protocols in the Internet belong to the ITU-T H.323 family [9]; however, they are mainly for small conferences.\nThe latest IETF draft by Rosenberg and Schulzrinne [23] discusses conferencing models with SIP [22] in the background.\nAspects of implementation for centralized SIP conferencing are reported in [26].\nA new approach called partial mixing by Radenkovic [18] allows mixed and non-mixed streams to coexist.\nIn all the above proposals, while there are some very useful suggestions, they share one or more of the following limitations:\n\u2022 In an audio conference, streams from all the clients need not be mixed.\nActually, mixing many arbitrary streams [24] from clients degrades the quality of the conference due to the reduction in the volume (the spatial aspect of speech).\nThe number of streams mixed varies dynamically depending on the number of active participants.\nThis would lead to fluctuations in the volume of every individual participant, causing severe degradation in quality.\nCustomized mixing of streams is not possible when many clients are active.\nThere is a threshold on the number of simultaneous speakers above which increasing the number of speakers becomes counterproductive to conference quality.\nFixing the maximum number of simultaneous speakers is dealt with in a recent work [15] using Ethnomethodology, and that maximum is conjectured to be three.\nThus it is advisable to honour that constraint.\n\u2022 There cannot be many intermediate mixers (similarly, Conference Servers as in [10]) in stages as in [20], because they bring in inordinate delay by increasing the number of hops and are not scalable with interactivity in focus.\n\u2022 Floor Control for an audio conference (even a video conference) with explicit turn-taking instructions to participants renders the conference essentially a one-speaker-at-a-time affair, not a live and free-to-interrupt one.\nThis way, the conference becomes markedly artificial and its quality degrades.\nSchulzrinne et al. 
[24], assume only one participant is speaking at a time.\nIn this case, if applications are implemented with some control [5], the service becomes `gagging' for the users.\n\u2022 Partial mixing [18] has a problem similar to that of mixing when more streams are mixed.\nMoreover, in [18], to allow impromptu speech, mixing is not done when the network can afford the high bandwidth required for sending\/receiving all the streams, but this is unnecessary [15].\n\u2022 For large conferences [23, 10], a centralized conference cannot scale up.\nWith multicasting, clients will have to parse many streams, and traffic on a client's network increases unnecessarily.\nEvidently, each of these works tackles particular issues, all of which are a subset of the requirements (defined in [14] and [16]) for VoIP conferencing support.\nThus there is a need to address conferencing as a whole, with all its requirements considered concurrently.\nTowards this goal, the VoIP conferencing system we propose is intended to be scalable and interactive.\nWe make use of the \"Loudness Number\" for implementing floor control.\nThis permits a participant to freely enter the speaking mode and interrupt the current speaker, as in a natural face-to-face meeting.\nAn upper limit on the number of floors (i.e., the number of speakers allowed to speak at the same time) is fixed using a conjecture proposed in [15].\nThe work presented here is a continuation of our studies into conferencing based on the Session Initiation Protocol in [14] and [16].\nSIP, defined in [22], is now the most popular standard for VoIP deployment and has been chosen for its strength, ease of use, extensibility and compatibility.\nThis is the reason it will be in the background of all controlling messages that implicitly arise between the entities in our architecture.\nThe actual messages are described in [16] and, as such, we do not present a complete description of them here.\n4.\nRELATED WORK\nThe SIP standard defined in RFC 3261 [22] and in later extensions such as [21] does not offer conference control services such as floor control or voting and does not prescribe how a conference is to be managed.\nFig. 1.\nConference example--3 domains containing the necessary entities so that the conference can take place.\nHowever, SIP can be used to initiate a session that uses some other conference control protocol.\nThe core SIP specification supports many models for conferencing [26, 23].\nIn the server-based models, a server mixes media streams, whereas in a server-less conference, mixing is done at the end systems.\nSDP [7] can be used to define media capabilities and provide other information about the conference.\nWe shall now consider a few conference models in SIP that have been proposed recently [23].\nFirst, let us look into server-less models.\nIn End-System Mixing, only one client (SIP UA) handles the signaling and media mixing for all the others, which is clearly not scalable and causes problems when that particular client leaves the conference.\nIn the Users Joining model, a tree grows, as each invited party constitutes a new branch in the distribution path.\nThis leads to an increasing number of hops for the remote leaves and is not scalable.\nAnother option would be to use multicast for conferencing, but multicast is not enabled over the Internet and is presently only possible on a LAN.\nAmong server-based models, in a Dial-In Conference, UAs connect to a central server that handles all the mixing.\nThis model is not scalable as it is limited by the processing power of the server and the bandwidth of the network.\nAdhoc Centralized Conferences and Dial-Out Conference Servers have similar mechanisms and problems.\nHybrid models involving centralized signaling and distributed media, with the latter using unicast or multicast, raise scalability problems as before.\nHowever, an advantage is that the conference control can be a third-party solution.\nDistributed Partial Mixing, presented in [18], 
proposes that in case of bandwidth limitation, some streams are mixed and some are not, leaving interactivity intact.\nLoss of spatialism when they mix, and the bandwidth increase when they do not, are open problems.\nA related study [19] by the same author proposes a conferencing architecture for Collaborative Virtual Environments (CVEs) but does not address scalability in the absence of multicasting.\nWith the limitations of proposed conferencing systems in mind, we will now detail our proposal.\n5.\nSYSTEM ARCHITECTURE\nThis section is dedicated to the description of the proposed system architecture.\nHowever, as this paper constitutes the continuation of our work started in [14] and furthered in [16], we will not present all the details about the proposed entities here and invite the readers to consult the papers mentioned above for a full and thorough description.\nFirst, we do not restrict our conferencing system to small conferences only, but rather target large audio VCEs that have hundreds (or even thousands) of users across a Wide Area Network (WAN) such as the Internet.\nThis view stems from an appraisal that VCEs will gain in importance in the future, as interactive audio conferences become more popular with the spread of the media culture around the world.\nTwo issues must be taken care of when building a VoIP conferencing system: (i) the front-end, consisting of the application program running on the end-users' computers, and (ii) the back-end, consisting of the application programs that facilitate the conference.\nThe participating users are grouped into several domains.\nThese domains are Local Area Networks (LANs), such as corporate or educational networks.\nThis distributed setting asks for distributed controlling and media handling solutions, as centralized systems would not scale for such very large conferences (vide Section 4).\nMore explicitly, in each domain, we can identify several relevant logical
components of a conferencing facility (Fig. 1): \u25a0 An arbitrary number of end users (clients) that can take part in at most one audio conference at a time.\nEvery user is included in one and only one domain at a given instant, but can move from domain to domain (nomadism).\nIn our conferencing environment, these clients are regular SIP User Agents (SIP UAs), as defined in [22], so as to gain interoperability with other existing SIP-compatible systems.\nThese clients are thus not aware of the complex setting that supports the conference, as highlighted below.\n\u25a0 One SIP Server (SIPS) per domain, taking care of all the signaling aspects of the conference (clients joining, leaving, etc.) [16].\nIn particular, it is a physical implementation encompassing different logical roles, namely a SIP Proxy Server, a SIP Registrar Server, a SIP Redirect Server and a SIP B2BUA (Back-to-Back User Agent) [22].\nThis physical implementation enables incoming\/outgoing SIP messages to be handled by one or another logical entity according to the needs.\nThe SIPS is entrusted with maintaining total service and has many advantages: (a) it works as a centralized entity that can keep track of the activities of the UAs in a conference; (b) it can do all the switching for providing PBX features; (c) it can locate the UAs and invite them to a conference; (d) it can do the billing as well.\nSIPSs in different domains communicate with each other using SIP messages as described in [16].\nIf the load on a particular SIPS is too heavy, it can create another SIPS in the same domain so that the load is shared.\n\u25a0 One Master Conference Server (M-CS) (simply a Conference Server (CS)) for each conference, created by the local SIPS when the conference starts.\nThis server handles the media packets for the clients of the domain.\nIts mechanism will be described in the next section.\nThe M-CS will be able to create a hierarchy of CSs inside a
domain by adding one or more Slave CSs (S-CSs) to accommodate all the active clients while preventing its own flooding.\nWe describe this mechanism in some detail in the sequel.\nThe entities described here are exhaustive and conform to the SIP philosophy.\nThus, the use of SIP makes this architecture more useful and interoperable with any other SIP clients or servers.\n6.\nALGORITHMIC ISSUES\n6.1 Selecting the Streams\nSimilar to SipConf in [27], a Conference Server (CS) [17] has the function of supporting the conference; it is responsible for handling audio streams using RTP.\nIt can also convert audio stream formats for a given client if necessary, and can act as the Translators\/Mixers of the RTP specification behind firewalls.\nWe have based the design of our CS on the H.323 Multipoint Processor (MP) [9].\nIn short, the MP receives audio streams from the endpoints involved in a centralized or hybrid multipoint conference, processes them, and returns them to the endpoints.\nAn MP that processes audio prepares NMax audio outputs from M input streams after selection, mixing, or both.\nAudio mixing requires decoding the input audio to linear signals (PCM or analog), performing a linear combination of the signals, and re-encoding the result in an appropriate audio format.\nThe MP may eliminate or attenuate some of the input signals in order to reduce noise and unwanted components.\nFig. 2.\nSchematic diagram of a CS\nThe limitation of H.323 is that it does not address the scalability of a conference.\nThe architecture proposes a cascaded or daisy-chain topology [10], which can be shown not to scale up to a large conference.\nA CS serves many clients in the same conference, and thus handles only one conference at a time.\nMultiple CSs may coexist in a domain, as when several conferences are under way.\nSignaling-related messages of CSs are dealt with in [11].\nThe working of a CS is illustrated in Fig.
2: For each mixing interval, CS 1 chooses the "best" NMax audio packets out of the M1 it may possibly receive (using a criterion termed "Loudness Number", described in the next subsection) and sends these to CSs 2 to P.\nThe set of packets sent is denoted "ToOtherCSs".\nIn the same mixing interval, it also receives the best NMax audio packets (out of possibly M2) from CS 2, and similarly the best NMax (out of possibly MP) from CS P. For simplicity, we ignore the propagation delay between CSs, which can indeed be taken into account but is beyond the scope of this presentation.\nThe set of packets received is denoted "FromOtherCSs".\nFinally, it selects the best NMax packets from the set {ToOtherCSs union FromOtherCSs} and passes these packets to its own group.\nIt can be seen that the set {ToOtherCSs union FromOtherCSs} is the same at all CSs.\nThis ensures that every client in the conference finally receives the same set of packets for mixing.\nHence all clients obtain a common view of the conference.\nSimilarly, for each time slot (packet time), a subset F of all clients is selected (using the same criterion) from the pool of packets from all other CSs plus the NMax clients selected locally.\nTheir packets are mixed and played out at the clients.\nAccording to [15], the cardinality of F, |F|, is NMax and is fixed at three.\nIn our conferencing setup, selection is done by the Master Conference Server (M-CS), which comes into the picture exclusively for media handling.\nNote that even if the SIP specification enables direct UA-to-UA media communication in a one-to-one call, it is also possible to use the Conference Server for two-party calls, especially because it is then easier to create a real conference by adding a third and subsequently more participants.\nThere are cases wherein the processing capacity of an M-CS is exceeded, as it may have too many packets--from the local domain and from remote domains--to process.\nIn that case, the M-CS will create
one or more S-CSs (Fig. 6) and transfer its own clients as well as the new clients to them.\nIn this configuration, the algorithm outlined above is slightly modified: the audio packets go from the clients to their dedicated S-CS, which selects NMax packets to send to the local M-CS, which in turn selects NMax packets from all its S-CSs in the domain before sending them to the remote domains.\nThe incoming packets from other domains are received by the M-CS, which selects NMax of them and sends them directly to the domain clients, bypassing the S-CSs.\nThis change implies that at most three intermediate entities exist for each audio packet, instead of two in the conventional setup.\nAs the extra hop happens inside the LAN, which is assumed to have high-speed connectivity, we consider that it should not prevent us from using this hierarchy of CSs when there is a need to do so.\n6.2 Loudness Number (LN)\nA basic question to be answered by the CS is the following: in a mixing interval, how should it choose NMax packets out of the M it might possibly receive?\nOne way is to rank the M packets received according to their energies and choose the top NMax.\nHowever, this is usually found to be inadequate, because random fluctuations in packet energies can lead to poor audio quality.\nThis indicates the need for a metric different from mere individual packet energies.\nThe metric should have the following characteristics [12]:\n\u2022 A speaker (floor occupant) should not be cut off by a spike in the packet energy of another speaker.\nThis implies that a speaker's speech history should be given some weight.\nThis is often referred to as "Persistence" or "Hangover".\n\u2022 A participant who wants to interrupt a speaker will have to (i) speak loudly and (ii) keep trying for a little while.\nIn a face-to-face conference, body language often indicates the intent to interrupt.\nIn the audio-only conference under discussion, a participant's intention to interrupt can be conveyed
effectively through LN.\nA floor control mechanism empowered to cut off a speaker forcefully must also be ensured.\nThese requirements are met by the Loudness Number [12], which changes smoothly with time so that the selection (addition and deletion) of clients for the conference is graceful.\nLN (\u03bb) is a function of the amplitude of the current audio stream plus the activity and amplitude over a specific window in the past.\nThe Loudness Number is updated on a packet-by-packet basis.\nThe basic parameter used here is the packet amplitude, calculated as the root mean square (rms) of the energies of the audio samples of a packet and denoted by XK.\nThree windows are defined, as shown in Fig. 3.\nThe present amplitude level of the speaker is found by calculating the moving average of the packet amplitude (XK) within a window called the "Recent Past Window", starting from the present instant and reaching some way into the past.\nThe past activity of the speaker is found by calculating the moving average of the packet amplitude (XK) within a window called the "Distant Past Window", which starts where the "Recent Past" window ends and stretches further back for a pre-defined interval.\nThe activity of the speaker in the past is found with a window called the "Activity Horizon", which spans the recent past and distant past windows and beyond if necessary.\nThough the contribution of the activity horizon looks similar to that of the recent past and distant past windows, past activity is computed from the activity horizon window in a different way.\nDefine the quantities computed over these three intervals as L1, L2 and L3.\nL1 quantifies the Recent Past speech activity, L2 the Distant Past speech activity, and L3 gives a number corresponding to the speech activity in the Activity Horizon window, quantifying how active the speaker was in the past few intervals.\nL3 yields a quantity that is proportional to the fraction of packets having energies above a pre-defined threshold
(Eq. 3).\nThe threshold is invariant across clients: I{XK \u2265 \u03b8} = 1 if XK \u2265 \u03b8, and 0 otherwise.\nThe threshold \u03b8 is a constant, set at 10-20 percent of the amplitude of the voice samples of a packet in our implementation.\nThe Loudness Number \u03bb for the present time instant (or the present packet) is calculated as \u03bb = \u03b11 L1 + \u03b12 L2 + \u03b13 L3.\nHere, \u03b11 is the weight given to the recent past speech, \u03b12 the weight given to the distant past speech, and \u03b13 the weight given to the speech activity in the activity horizon window.\n6.3 Safety, Liveness and Fairness\nThe \u03bb parameter has some memory, depending on the spread of the windows.\nAfter one conferee becomes silent, another can take the floor.\nAlso, as there is more than one channel, interruption is enabled.\nA loud conferee is more likely to be heard because of an elevated \u03bb.\nThis ensures fairness to all conferees.\nAfter all, even in a face-to-face conference, a more vocal speaker grabs special attention.\nAll these desirable characteristics are embedded in the LN.\nA comprehensive discussion of the selection of the various parameters and of the dynamics of LN is beyond the scope of this paper.\nFig.
3.\nThe different windows used for LN computation\n6.4 Selection Algorithm using the LN\nFollowing the developments in subsections 6.1 and 6.2, we present the simple algorithm that runs at each Master Conference Server (Algorithm 1).\nThis algorithm is based on the discussions in section 6.1.\nThe globally unique set F is found using this procedure.\nRepeat for each time slot at each M-CS {\n1.\nGet all the packets from the Clients that belong to it.\n2.\nFind the at most NMax Clients that have maximum \u03bb out of the M Clients in its domain.\n3.\nStore a copy of the packets from those NMax Clients in database DB1.\n4.\nSend these NMax packets to the other M-CSs (over Unicast or Multicast, depending on the configuration).\n5.\nSimilarly, receive packets from all other M-CSs and store them in database DB2.\n6.\nNow compare the packets in DB1 and DB2 on the basis of \u03bb and select at most NMax amongst them (to form set F) that should be played out at each Client.\n7.\nSend the NMax packets in set F to the Clients in its domain.\n8.\nMix these NMax audio packets in set F after linearising and send the result to dumb Clients in the domain.}\nAlgorithm 1.\nSelection algorithm\nThe mechanism proposed here is also depicted in Fig.
6, where a single conference takes place between three domains.\nThe shaded clients are the ones selected in their local domains; their audio streams will be sent to the other CSs.\n7.\nDEPLOYMENT ISSUES\nWe now analyze deployment issues associated with conference management.\nHow are domains to be organized to maximize the number of participants able to join?\nTo address this, we define some useful parameters.\n\u25a0 Let d be the number of different domains in which there are active clients in a given conference.\n\u25a0 Let Mi be the number of active clients present in domain i (1 \u2264 i \u2264 d) in a given conference.\nThe total number of active clients in the conference is thus M = \u2211i=1,...,d Mi.\n\u25a0 Let C be the maximum number of audio streams a Conference Server can handle in a packet time, also called its capacity.\nC is set according to the processing power of the weakest CS in the conference; as this cannot be assumed to be known a priori, it can be set according to some minimum system requirement a machine must meet in order to take part in a conference.\n\u25a0 Let NMax be the number of output streams a CS has to send to the other CSs in remote domains (see section 6.1).\nWe set NMax = 3 (= |F|), according to [15].\nThe optimization problem is now to find the value of d that maximizes the total number of clients Mi served by one CS in a domain with capacity C.\nWe first dispose of the case where the capacity is not exceeded (the existing CS is not overloaded), and then proceed to the case where more CSs need to be created because a single CS is overloaded.\nWe assume that clients are equally distributed amongst the domains, as we may not have information to assume an a-priori distribution of the clients.\nWe can specify no more than an upper bound on the number of clients acceptable, given the number of active domains d.\n7.1 Conferencing with only One Level of CSs\nIn this subsection, we consider that we have only one CS, i.e., a unique M-CS
in each domain.\nThus it cannot be overloaded.\nWe consider that the system works as outlined in section 6.1: the Clients send their audio packets to their local CS, which selects NMax streams before sending them to the other CSs.\nIn parallel, it also receives NMax streams from every other CS before deciding which NMax streams will be selected, sent, and played out at each individual client.\nFor system stability, any CS in the conference should be able to handle its local clients in addition to the audio packets from the other domains.\nClearly then, the inequality Mi + NMax \u22c5 (d \u2212 1) \u2264 C must hold for every domain.\nTable 1.\nValues of d and M computed for some values of C with NMax = 3.\nIn Table 1, we give the values of d and M that were computed using (8) and (6) with NMax = 3.\nWe see that the values of d and M, being dependent on C, are therefore based on the weakest CS.\nWe also see that there is a trade-off between M and d: we could admit more domains in the conference, but at the expense of restricting the total number of clients M in the conference.\nWhile implementing and testing the Conference Servers on a Pentium III 1.4 GHz running Windows NT, we were able to set C = 300.\nBut with the advent of faster computers (> 3 GHz), one can easily set C to higher values and determine d and M accordingly.\nFig. 4 shows a contour plot and Fig. 5 a 3D mesh showing optimized solutions for CSs of different capacities.\nThese lead us to maximize the number of domains and, hence, the total number of clients, based on the capacity of the various CSs.\nIn Fig. 4, the individual curves represent the total number of clients targeted; we select a lower value of d, for capacity C and targeted M, to reduce traffic on the WAN.\nFig. 5 represents a different perspective of the same data in 3D.\nFig.
4.\nContour Plot of Capacity versus Optimum number of domains for various conference sizes\n7.2 Conferencing with Two Levels of CSs\nNow consider the case where the number of clients in a domain grows beyond what a single CS can serve: one has to avoid the denial of service to new clients due to the overloading of the Conference Server.\nThis problem can be solved by introducing a second level of CSs inside the given domain, as in Fig. 6.\nThe existing M-CS creates a Slave CS (S-CS) that can handle up to C end-users and to which it transfers all its active clients.\nHere, the system works differently from the outline in section 6.1: the Clients send their audio packets to their local S-CS, which selects NMax streams before sending them to the local M-CS, which proceeds in the same way before sending NMax streams to the other domains.\nEach newly created S-CS must run on a separate machine.\nThe M-CS has to create more S-CSs if the number of active clients exceeds C in the course of the conference after the transfer.\nWith this mechanism, the M-CS is able to create at most (C \u2212 NMax \u22c5 (d \u2212 1)) \/ NMax S-CSs, as it must handle 3 (= NMax) packets from each local S-CS and 3 (= NMax) packets from each other remote domain.\nWe can then calculate the maximum theoretical number of active clients accordingly.\nFig. 5.\n3D Plot of Capacity versus Optimum number of domains for various conference sizes\nOf course, one could further create a third level in the hierarchy, giving the possibility of accommodating even more clients.\nThis may be unnecessary, as the number of possible clients is already large enough with two levels.\n8.\nPERFORMANCE DISCUSSION\nWe now analyze the performance of the algorithm presented in subsection 6.3, i.e., the one taking care of the exchange of audio packets between the different domains.\nNote that the packets transiting within the LAN take advantage of its higher capacity (generally coupled with multicast capabilities) and therefore do not require a performance analysis.\nFig.
6.\nExample of a 2-level hierarchy of Conference Servers; the shaded Clients are the ones selected by the M-CS, whose streams will be sent to the other domains' CSs.\nThus we have to look only at the RTP packets over the WAN, i.e., between the participating M-CSs.\nEach M-CS in a domain sends only NMax out of its M packets to the other CSs; considerable valuable bandwidth can be saved using this heuristic.\nAs M >> NMax \u22c5 d, the bandwidth used by the application over the WAN is upper-bounded as follows: the total number of audio packets transiting over the WAN in each time slot using unicast is \u2211i=1..d \u2211j=1..d, j\u2260i NMax = d \u22c5 (d \u2212 1) \u22c5 NMax, which is quadratic in the number of domains (i.e., O(d2)).\nHowever, it is independent of the total number of active clients.\nThe resulting traffic complexity reduces from O(d2) to O(d) in multicast-enabled networks.\nThis would not have been the case had all packets been sent over the network in each time slot.\nThe saving is tremendous.\nYet, one may contend that sending three packets to and from all domains is a waste of resources, as most of these streams will not be selected.\nIf just one client is active, selecting a subset of clients in that domain is unnecessary.\nThe pessimistic and optimistic algorithms presented in the sequel aim at reducing the traffic further by harnessing the slowly varying nature of the LN.\n8.1 Pessimistic algorithm\nConsider a scenario wherein the lowest LN (called LNt) of the three globally selected streams (set F of Section 6.1) exceeds the LN of the most dominant stream of a domain.\nEvidently, the chances that the next two dominant streams of that domain will be selected into F in the next packet period are low.\nHere, we send this most dominant stream and withhold the other two.\nThere may be an error in the unique selection across all domains for one packet period only.\nAs LN varies slowly, the error gets automatically rectified in a
subsequent packet period (slot).\nIn this algorithm, at least one stream is sent from each domain in each period.\nThe net network traffic in a packet period in the best case is d \u22c5 (d \u2212 1).\nAlgorithm 2.\nPessimistic algorithm to reduce the number of packets sent over the Internet.\nInitialize LNt = 0 at an M-CS\/S-CS.\nA.\nIn the first time slot (packet time), each CS sends the top NMax streams (based on their LN) to all other CSs.\nAt each M-CS\/S-CS and for each packet time:\nB. Find the value of the lowest LN of the NMax globally selected streams (set F) from the previous time slot.\nSet LNt to this value.\nC.\nAt each CS domain, select the NMax local streams that have the maximum value of LN (the ToOtherCSs set).\nD. Select the streams that have LN > LNt.\nIF there are >= NMax streams with LN > LNt, THEN send the top NMax to the other CSs.\nELSE IF there are (NMax-1) streams with LN > LNt, THEN send the top (NMax-1) plus the one below LNt (i.e., the top NMax) to the other CSs.\nELSE IF there are (NMax-2) streams with LN > LNt, THEN send the top (NMax-2) plus the one below LNt (i.e., the top (NMax-1)) to the other CSs.\nELSE IF there are NO streams with LN > LNt, THEN send the top 1 stream to the other CSs.\nE. The packets sent in step D form DB1.\nThe packets received from the other CSs form DB2.\nF. For this time slot, find the global NMax streams based on LN from DB1 U DB2 (set F).\nG. Send set F to the clients in its domain.\nUpdate LNt for the next period.\nIn this algorithm, the saving in traffic comes at the cost of relaxing the condition of the formation of a globally unique set F.
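The per-slot forwarding decision of step D can be sketched in code.\nThe following is a rough sketch for NMax = 3 as in the paper (the function name and the list-based stream representation are ours, not the paper's):

```python
def streams_to_forward(local_lns, ln_t, n_max=3):
    """Step D of the pessimistic algorithm (Algorithm 2), sketched for
    n_max = 3: decide which of the locally dominant streams to forward
    to the remote CSs, given LNt (the lowest LN of the globally
    selected set F from the previous slot)."""
    # ToOtherCSs candidates: the locally dominant streams by LN.
    top = sorted(local_lns, reverse=True)[:n_max]
    # Number of candidates strictly above the threshold LNt.
    k = sum(1 for ln in top if ln > ln_t)
    if k >= n_max:
        count = n_max          # all n_max are serious contenders
    elif k == n_max - 1:
        count = n_max          # top (n_max-1) plus one below LNt
    elif k == n_max - 2:
        count = n_max - 1      # top (n_max-2) plus one below LNt
    else:
        count = 1              # no contender: send the top stream only
    return top[:count]
```

A usage example: with local LNs [0.9, 0.5, 0.2, 0.1] and LNt = 0.6, only one stream exceeds the threshold, so two streams (the dominant one plus one runner-up) are forwarded rather than three, saving WAN traffic.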
However, the discrepancies in the selected streams at different domains persist only for a short period of time, depending on the transportation delay between any two domains.\nEven with a total delay of 400 ms, uniqueness is lost for only 10 packet time slots.\nThis duration in a real-time interactive conversation is not perceivable by the listener.\nIn the case that there is a joke and everyone laughs, there would be a sudden rise in the number of packets, upper-bounded by O(d2) \u22c5 NMax for a short period.\n8.2 Optimistic Algorithm\nThe traffic can be reduced further.\nThe scheme in the following algorithm (Algorithm 3) withholds all the streams that have a lower value of LN than the least of the three in the set F.\nWe can find the correct and unique three streams after a few time slots, depending on the transportation delay between the domains.\nAs the packet period is of the order of 40 ms, the error in the selection is unnoticeable.\nThe number of streams on the network in this case is always restricted to NMax (= 3).\nEven without Voice Activity Detection (VAD), there will be no more than three streams in the network in the best case; thus the total traffic is constant.\nA sudden burst of traffic, as described in 8.1, is a particular case.\nThese advantages are due to the exploitation of the characteristics of LN.\nInitialize LNt = 0 at an M-CS\/S-CS.\nA.\nIn the first time slot (packet time), each CS sends the top NMax streams (based on their LN) to all other CSs.\nAt each M-CS\/S-CS and for each packet time: B.
Find the value of the lowest LN of the NMax globally selected streams (set F) from the previous time slot.\nSet LNt to this value.\nC.\nAt each CS domain, select the NMax local streams that have the maximum value of LN (the ToOtherCSs set).\nIF the stream that was in F in the last interval belongs to this CS, THEN select and send that stream even if its LN is now 0.\nSet V = Vj, and let U = Vj\u22121 and W = Vj+1 be the vertices that precede and follow V, respectively.\nThe payoffs to V are described by a 2\u00d72\u00d72 matrix P: Pxyz is the payoff that V receives when U plays x, V plays y, and W plays z, where x, y, z \u2208 {0, 1}.\nSuppose that U plays 1 with probability u and W plays 1 with probability w.\nThen V's expected payoff from playing 0 is P0 = (1\u2212u)(1\u2212w)P000 + (1\u2212u)wP001 + u(1\u2212w)P100 + uwP101, while its expected payoff from playing 1 is P1 = (1\u2212u)(1\u2212w)P010 + (1\u2212u)wP011 + u(1\u2212w)P110 + uwP111.\nIf P0 > P1, V strictly prefers to play 0; if P0 < P1, V strictly prefers to play 1; and if P0 = P1, V is indifferent, i.e., can play any (mixed) strategy.\nSince P0 and P1 are linear in w and u, there exist some constants A1, A0, B1, and B0 that depend on the matrix P, but not on u and w, such that P0 \u2212 P1 = w(B1u + B0) \u2212 (A1u + A0).\n(1)\nDepending on the values of A1, A0, B1, and B0, we subdivide the rest of the proof into the following cases.\n\u2022 B1 = 0, B0 = 0.\nIn this case, P0 > P1 if and only if A1u + A0 < 0.\nIf also A1 = 0, A0 = 0, then clearly B(W, V ) = [0, 1]2, and the statement of the theorem is trivially true.\nOtherwise, the vertex V is indifferent between 0 and 1 if and only if A1 \u2260 0 and u = \u2212A0\/A1.\nLet V = {v | v \u2208 (0, 1), \u2212A0\/A1 \u2208 pbrU (v)}.\nBy the inductive hypothesis, V consists of at most 2(j \u2212 1) + 4 segments and isolated points.\nFor any v \u2208 V, we have B(W, V )|V =v = [0, 1]: no matter what W plays, as long as U is playing \u2212A0\/A1, V is content to play v.
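The constants in Eq. (1) can be recovered mechanically from the payoff matrix and checked numerically.\nThe following is an illustrative sketch (the helper names and the explicit coefficient formulas are ours, derived by expanding P0 \u2212 P1 and collecting terms in u and w):

```python
import random

def payoff_diff_direct(P, u, w):
    """P0 - P1 computed directly from the 2x2x2 payoff matrix P, where
    P[x][y][z] is V's payoff when U plays x, V plays y, W plays z."""
    expected = lambda y: sum(
        ((1 - u) if x == 0 else u) * ((1 - w) if z == 0 else w) * P[x][y][z]
        for x in (0, 1) for z in (0, 1))
    return expected(0) - expected(1)

def coefficients(P):
    """The constants A1, A0, B1, B0 of Eq. (1).
    With D[x][z] = P[x][0][z] - P[x][1][z], expanding P0 - P1 and
    matching it against w*(B1*u + B0) - (A1*u + A0) gives the formulas
    below."""
    D = [[P[x][0][z] - P[x][1][z] for z in (0, 1)] for x in (0, 1)]
    B1 = D[1][1] - D[1][0] - D[0][1] + D[0][0]
    B0 = D[0][1] - D[0][0]
    A1 = D[0][0] - D[1][0]
    A0 = -D[0][0]
    return A1, A0, B1, B0

# Numeric check on a random payoff matrix.
random.seed(0)
P = [[[random.random() for _ in range(2)] for _ in range(2)] for _ in range(2)]
A1, A0, B1, B0 = coefficients(P)
for u, w in [(0.0, 0.0), (1.0, 1.0), (0.3, 0.7)]:
    assert abs(payoff_diff_direct(P, u, w) - (w * (B1 * u + B0) - (A1 * u + A0))) < 1e-12
```

The check confirms that the payoff difference is indeed bilinear in u and w, which is what the case analysis below exploits.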
On the other hand, for any v \u2208 (0, 1) \\ V we have B(W, V )|V =v = \u2205: when V plays v, U can only respond with u = \u2212A0\/A1, in which case V can benefit from switching to one of the pure strategies.\nTo complete the description of B(W, V ), it remains to analyze the cases v = 0 and v = 1.\nThe vertex V prefers to play 0 if A1 > 0 and u \u2264 \u2212A0\/A1, or A1 < 0 and u \u2265 \u2212A0\/A1, or A1 = 0 and A0 < 0.\nAssume for now that A1 > 0; the other two cases can be treated similarly.\nIn this case, 0 \u2208 pbrV (w) for some w \u2208 [0, 1] if and only if there exists a u \u2208 pbrU (0) such that u \u2264 \u2212A0\/A1: if no such u exists, whenever V plays 0, either U's response is not in pbrU (0) or V can improve its payoff by playing 1.\nTherefore, either B(W, V )|V =0 = [0, 1] or B(W, V )|V =0 = \u2205.\nSimilarly, B(W, V )|V =1 is equal to either [0, 1] or \u2205, depending on pbrU (1).\nTherefore, the set B(W, V ) consists of at most 2j + 4 \u2264 (j + 4)2 rectangles: B(W, V ) \u2229 [0, 1]\u00d7(0, 1) = [0, 1]\u00d7V contributes at most 2j + 2 rectangles, and each of the sets B(W, V )|V =0 and B(W, V )|V =1 contributes at most one rectangle.\nSimilarly, its total number of event points is at most 2j + 4: the only W-event points are 0 and 1, each V-event point of B(W, V ) is a V-event point of B(V, U), and there are at most 2j + 2 of them.\n\u2022 B1u + B0 \u2262 0, A1 = \u03b1B1, A0 = \u03b1B0 for some \u03b1 \u2208 R.\nIn this case, V is indifferent between 0 and 1 if and only if w = \u03b1, or B1 \u2260 0 and u = \u2212B0\/B1 = \u2212A0\/A1.\nSimilarly to the previous case, we can show that B(W, V ) \u2229 [0, 1]\u00d7(0, 1) consists of the rectangle {\u03b1}\u00d7[0, 1] and at most 2j + 2 rectangles of the form [0, 1]\u00d7IV, where each IV corresponds to a connected component of B(V, U)|U=\u2212B0\/B1.\nFurthermore, V prefers to play 0 if B1u + B0 > 0 and w \u2265 \u03b1, or B1u + B0 < 0 and w \u2264 \u03b1.\nTherefore, if B1u\u2217 + B0
> 0 for some u\u2217 \u2208 pbrU (0), then B(W, V )|V =0 contains [\u03b1, +\u221e) \u2229 [0, 1], and if B1u\u2217\u2217 + B0 < 0 for some u\u2217\u2217 \u2208 pbrU (0), then B(W, V )|V =0 contains (\u2212\u221e, \u03b1] \u2229 [0, 1]; if both u\u2217 and u\u2217\u2217 exist, B(W, V )|V =0 = [0, 1].\nThe set B(W, V )|V =1 can be described in a similar manner.\nBy the inductive hypothesis, B(V, U) has at most 2j + 2 event points; as at least two of these are U-event points, it has at most 2j V-event points.\nSince each V-event point of B(W, V ) is a V-event point of B(V, U) and B(W, V ) has at most 3 W-event points (0, 1, and \u03b1), its total number of event points is at most 2j + 3 < 2j + 4.\nAlso, similarly to the previous case, it follows that B(W, V ) consists of at most 2j + 4 < (j + 4)2 rectangles.\n\u2022 B1u + B0 \u2262 0, \u03b1(B1u + B0) \u2262 A1u + A0 for any \u03b1.\nIn this case, one can define the indifference function f(\u00b7) as f(u) = A(u)\/B(u) = (A1u + A0)\/(B1u + B0), where A(u) and B(u) never turn into zero simultaneously.\nObserve that whenever w = f(u) and u, w \u2208 [0, 1], V is indifferent between playing 0 and 1.\nFor any A \u2286 [0, 1]2, we define a function \u02c6fV by \u02c6fV (A) = {(f(u), v) | (v, u) \u2208 A}; note that \u02c6fV maps subsets of [0, 1]2 to subsets of R\u00d7[0, 1].\nSometimes we drop the subscript V when it is clear from the context.\nLEMMA 1.\nFor any (w, v) \u2208 [0, 1]\u00d7(0, 1), we have (w, v) \u2208 B(W, V ) if and only if there exists a u \u2208 [0, 1] such that (v, u) \u2208 B(V, U) and w = f(u).\nPROOF.\nFix an arbitrary v \u2208 (0, 1).\nSuppose that U plays some u \u2208 pbrU (v), w = f(u) satisfies w \u2208 [0, 1], and W plays w.\nThere exists a vector of strategies v1, ... , vj\u22121 = u, vj = v such that for each Vk, k < j, its strategy is a best response to its neighbours' strategies.\nSince w = f(u), V is indifferent between playing 0 and 1; in particular, it can play v.
Therefore, if we define vj+1 = w, the vector of strategies (v1, ... , vj+1) will satisfy the conditions in the definition of potential best response, i.e., we have v \u2208 pbrV (w).\nConversely, suppose v \u2208 pbrV (w) for some w \u2208 [0, 1], v \u2260 0, 1.\nThen there exists a vector of strategies v1, ... , vj\u22121, vj = v, vj+1 = w such that for each Vk, k \u2264 j, its strategy is a best response to its neighbours' strategies.\nAs v \u2260 0, 1, V is, in fact, indifferent between playing 0 and 1, which is only possible if w = f(vj\u22121).\nChoose u = vj\u22121; by construction, u \u2208 pbrU (v).\nLemma 1 describes the situations when V is indifferent between playing 0 and playing 1.\nHowever, to fully characterize B(W, V ), we also need to know when V prefers a pure strategy.\nDefine \u02c6f(0) = \u222au\u2208pbrU (0) Ru, where Ru = [f(u), +\u221e)\u00d7{0} if B(u) > 0 and Ru = (\u2212\u221e, f(u)]\u00d7{0} if B(u) < 0, and \u02c6f(1) = \u222au\u2208pbrU (1) Ru, where Ru = [f(u), +\u221e)\u00d7{1} if B(u) < 0 and Ru = (\u2212\u221e, f(u)]\u00d7{1} if B(u) > 0.\nLEMMA 2.\nFor any w \u2208 [0, 1], we have (w, 0) \u2208 \u02c6f(0) if and only if 0 \u2208 pbrV (w), and (w, 1) \u2208 \u02c6f(1) if and only if 1 \u2208 pbrV (w).\nPROOF.\nConsider an arbitrary u0 \u2208 pbrU (0).\nIf B(u0) > 0, then for u = u0 the inequality P0 \u2265 P1 is equivalent to w \u2265 f(u0).\nTherefore, when U plays u0 and W plays w, w \u2265 f(u0), V prefers to play 0; as u0 \u2208 pbrU (0), it follows that 0 \u2208 pbrV (w).\nThe argument for the case B(u0) < 0 is similar.\nConversely, if 0 \u2208 pbrV (w) for some w \u2208 [0, 1], there exists a vector (v1, ...
, vj\u22121, vj = 0, vj+1 = w) such that for each Vk, k \u2264 j, Vk plays vk, and this strategy is a best response to the strategies of Vk's neighbours.\nNote that for any such vector we have vj\u22121 \u2208 pbrU (0).\nBy way of contradiction, assume (w, 0) \u2209 \u222au\u2208pbrU (0) Ru.\nThen it must be the case that for any u0 \u2208 pbrU (0), either f(u0) < w and Ru0 = (\u2212\u221e, f(u0)]\u00d7{0}, or f(u0) > w and Ru0 = [f(u0), +\u221e)\u00d7{0}.\nIn both cases, when V plays 0, U plays u0, and W plays w, the inequality between f(u0) and w is equivalent to P0 < P1, i.e., V would benefit from switching to 1.\nThe argument for \u02c6f(1) is similar.\nTogether, Lemma 1 and Lemma 2 completely describe the set B(W, V ): we have B(W, V ) = (\u02c6f(0) \u222a \u02c6f(B(V, U)) \u222a \u02c6f(1)) \u2229 [0, 1]2.\nIt remains to show that B(W, V ) can be represented as a union of at most (j + 4)2 rectangles, has at most 2j + 4 event points, and can be computed in O(j2) time.\nSet u\u2217 = \u2212B0\/B1 (the case B1 = 0 causes no special problems: for completeness, set u\u2217 to be any value outside of [0, 1] in this case).\nConsider an arbitrary rectangle R = [v1, v2]\u00d7[u1, u2] \u2286 B(V, U).\nIf u\u2217 \u2209 [u1, u2], the function f(\u00b7) is continuous on [u1, u2] and hence \u02c6f(R) = [fmin, fmax]\u00d7[v1, v2], where fmin = min{f(u1), f(u2)} and fmax = max{f(u1), f(u2)}; i.e., in this case \u02c6f(R) \u2229 [0, 1]2 consists of a single rectangle.\nNow, suppose that R is intersected by the line [0, 1]\u00d7{u\u2217}; as was noted earlier, there are at most 2j + 2 such rectangles.\nSuppose that limu\u2192u\u2217\u2212 f(u) = +\u221e; as f(\u00b7) is a fractional linear function, this implies that limu\u2192u\u2217+ f(u) = \u2212\u221e and also f(u1) > f(u2).\nSince f(\u00b7) is continuous on [u1, u\u2217) and (u\u2217, u2], it is easy to see that \u02c6f([v1, v2]\u00d7[u1, u\u2217)) = [f(u1), +\u221e)\u00d7[v1, v2]
and \u02c6f([v1, v2]\u00d7(u\u2217, u2]) = (\u2212\u221e, f(u2)]\u00d7[v1, v2], i.e., in this case \u02c6f(R) \u2229 [0, 1]2 consists of at most two rectangles.\nFigure 4: f is increasing on (\u2212\u221e, u\u2217) and (u\u2217, +\u221e).\nThe case limu\u2192u\u2217\u2212 f(u) = \u2212\u221e is similar.\nAs \u02c6f(B(V, U)) = \u222aR\u2282B(V,U) \u02c6f(R), it follows that \u02c6f(B(V, U)) consists of at most (j + 3)2 + 2j + 2 rectangles.\nAlso, it is easy to see that both \u02c6f(0) and \u02c6f(1) consist of at most 2 line segments each.\nWe conclude that B(W, V ) can be represented as a union of at most (j + 3)2 + 2j + 6 < (j + 4)2 rectangles.\nMoreover, if v is a V-event point of B(W, V ), then v is a V-event point of B(V, U) (this includes the cases v = 0 and v = 1, as 0 and 1 are V-event points of B(V, U)), and if w is a W-event point of B(W, V ), then either w = 0 or w = 1 or there exists some u \u2208 [0, 1] such that w = f(u) and u is a U-event point of B(V, U).\nHence, B(W, V ) has at most 2j + 4 event points.\nThe O(j2) bound on the running time in Theorem 5 follows from our description of the algorithm.\nThe O(n3) bound on the overall running time for finding a Nash equilibrium (and a representation of all Nash equilibria) follows.\n4.1 Finding a Single Nash Equilibrium in O(n2) Time\nThe upper bound on the running time of our algorithm is tight, at least assuming the straightforward implementation, in which each B(Vj+1, Vj) is stored as a union of rectangles: it is not hard to construct an example in which the size of B(Vj+1, Vj) is \u03a9(j2).\nHowever, in some cases it is not necessary to represent all Nash equilibria; rather, the goal is to find an arbitrary equilibrium of the game.\nIn this section, we show that this problem can be solved in quadratic time, thus obtaining a proof of Theorem 1.\nOur solution is based on the idea of [9], i.e., working with subsets of the best response policies rather than the
best response policies themselves; following [9], we will refer to such subsets as breakpoint policies.\nWhile it is not always possible to construct a breakpoint policy as defined in [9], we show how to modify this definition so as to ensure that a breakpoint policy always exists; moreover, we prove that for a path graph, the breakpoint policy of any vertex can be stored in a data structure whose size is linear in the number of descendants this vertex has.\nDefinition 3.\nA breakpoint policy \u02c6B(V, U) for a vertex U whose parent is V is a non-self-intersecting curve of the form X1 \u222a Y1 \u222a \u00b7 \u00b7 \u00b7 \u222a Ym\u22121 \u222a Xm, where Xi = [vi\u22121, vi]\u00d7{ui}, Yi = {vi}\u00d7[ui, ui+1] and ui, vi \u2208 [0, 1] for i = 0, ... , m.\nWe say that a breakpoint policy is valid if v0 = 0, vm = 1, and \u02c6B(V, U) \u2286 B(V, U).\nWe will sometimes abuse notation by referring to \u02c6B(V, U) as a collection of segments Xi, Yi rather than their union.\nNote that we do not require that vi \u2264 vi+1 or ui \u2264 ui+1; consequently, in any argument involving breakpoint policies, all segments are to be treated as directed segments.\nObserve that any valid breakpoint policy \u02c6B(V, U) can be viewed as a continuous 1-1 mapping \u03b3(t) = (\u03b3v(t), \u03b3u(t)), \u03b3 : [0, 1] \u2192 [0, 1]2 , where \u03b3(0) = (0, u1), \u03b3(1) = (1, um) and there exist some t0 = 0, t1, ... 
, t2m\u22122 = 1 such that {\u03b3(t) | t2k \u2264 t \u2264 t2k+1} = Xk+1, {\u03b3(t) | t2k+1 \u2264 t \u2264 t2k+2} = Yk+1.\nAs explained in Section 3, we can use a valid breakpoint policy instead of the best response policy during the downstream pass, and still guarantee that in the end, we will output a Nash equilibrium.\nTheorem 6 shows that one can inductively compute valid breakpoint policies for all vertices on the path; the proof of this theorem can be found in the full version of this paper [6].\nTHEOREM 6.\nFor any V = Vj, one can find in polynomial time a valid breakpoint policy \u02c6B(W, V ) that consists of at most 2j + 1 segments.\n5.\nNASH EQUILIBRIA ON GRAPHS WITH MAXIMUM DEGREE 2 In this section we show how the algorithm for paths can be applied to solve a game on any graph whose vertices have degree at most 2.\nA graph having maximum degree 2 is, of course, a union of paths and cycles.\nSince each connected component can be handled independently, to obtain a proof of Theorem 2, we only need to show how to deal with cycles.\nGiven a cycle with vertices V1, ... , Vk (in cyclic order), we make two separate searches for a Nash equilibrium: first we search for a Nash equilibrium where some vertex plays a pure strategy, then we search for a fully mixed Nash equilibrium, where all vertices play mixed strategies.\nFor i \u2264 k let vi denote the probability that Vi plays 1.\nThe first search can be done as follows.\nFor each i \u2208 {1, ... , k} and each b \u2208 {0, 1}, do the following.\n1.\nLet P be the path (Vi+1, Vi+2 ... , Vk, V1, ... 
, Vi\u22121, Vi).\n2.\nLet the payoff to Vi+1 be based on putting vi = b (so it depends only on vi+1 and vi+2).\n3.\nApply the upstream pass to P.\n4.\nPut vi = b and apply the downstream pass; for each vertex Vj, keep track of all possible mixed strategies vj.\n5.\nCheck whether Vi+1 has any responses that are consistent with vi = b; if so, we have a Nash equilibrium.\n(Otherwise, there is no Nash equilibrium of the desired form.)\nFor the second search, note that if Vi plays a mixed strategy, then vi+1 and vi\u22121 satisfy an equation of the form vi+1 = (A0 + A1vi\u22121)/(B0 + B1vi\u22121).\nSince all vertices in the cycle play mixed strategies, we also have vi+3 = (A'0 + A'1vi+1)/(B'0 + B'1vi+1).\nComposing the two linear fractional transforms, we obtain vi+3 = (A''0 + A''1vi\u22121)/(B''0 + B''1vi\u22121) for some new constants A''0, A''1, B''0, B''1.\nChoose any vertex Vi.\nWe can express vi in terms of vi+2, then vi+4, vi+6, etc., and ultimately vi itself, to obtain a quadratic equation (for vi) that is simple to derive from the payoffs in the game.\nIf the equation is non-trivial, it has at most 2 solutions in (0, 1).\nFor an odd-length cycle, all other vj's are derivable from those solutions, and if a fully mixed Nash equilibrium exists, all the vj should turn out to be real numbers in the range (0, 1).\nFor an even-length cycle, we obtain two quadratic equations, one for vi and another for vi+1, and we can in the same way test whether any solutions to these yield values for the other vj, all of which lie in (0, 1).\nIf the quadratic equation is trivial, there is potentially a continuum of fully-mixed equilibria.\nThe values for vi that may occur in a Nash equilibrium are those for which all dependent vj values lie in (0, 1); the latter condition is easy to check by computing the image of the interval (0, 1) under the respective fractional linear transforms.\n6.\nFINDING EQUILIBRIA ON AN (ARBITRARY) TREE For arbitrary trees, the general structure of the algorithm remains the same, i.e.,
one can construct a best response policy (or, alternatively, a breakpoint policy) for any vertex based on the best response policies of its children.\nWe assume that the degree of each vertex is bounded by a constant K, i.e., the payoff matrix for each vertex is of size O(2^K).\nConsider a vertex V whose children are U1, ... , Uk and whose parent is W; the best response policy of each Uj is B(V, Uj).\nSimilarly to the previous section, we can compute V's expected payoffs P0 and P1 from playing 0 or 1, respectively.\nNamely, when each of the Uj plays uj and W plays w, we have P0 = L0(u1, ... , uk, w), P1 = L1(u1, ... , uk, w), where the functions L0(\u00b7, ... , \u00b7), L1(\u00b7, ... , \u00b7) are linear in all of their arguments.\nHence, the inequality P0 > P1 can be rewritten as wB(u1, ... , uk) > A(u1, ... , uk), where both A(\u00b7, ... , \u00b7) and B(\u00b7, ... , \u00b7) are linear in all of their arguments.\nSet u = (u1, ... , uk) and define the indifference function f : [0, 1]^k \u2192 [0, 1] as f(u) = A(u)/B(u); clearly, if each Uj plays uj, W plays w, and w = f(u), then V is indifferent between playing 0 and 1.\nFor any X = X1 \u00d7 \u00b7\u00b7\u00b7 \u00d7 Xk, where Xi \u2286 [0, 1]^2, define \u02c6f(X) = {(f(u), v) | (v, ui) \u2208 Xi, i = 1, ... , k}.\nAlso, set \u02c6f(0) = {(w, 0) | \u2203u s.t. ui \u2208 pbrUi (0) and wB(u) \u2265 A(u)} and \u02c6f(1) = {(w, 1) | \u2203u s.t.
ui \u2208 pbrUi (1) and wB(u) \u2264 A(u)}.\nAs in the previous section, we can show that B(W, V ) is equal to (\u02c6f(0) \u222a \u02c6f(B(V, U1) \u00d7 \u00b7\u00b7\u00b7 \u00d7 B(V, Uk)) \u222a \u02c6f(1)) \u2229 [0, 1]^2; also, any path from w = 0 to w = 1 that is a subset of B(W, V ) constitutes a valid breakpoint policy.\n6.1 Exponential Size Breakpoint Policy While the algorithm of Section 4 can be generalized for bounded-degree trees, its running time is no longer polynomial.\nIn fact, the converse is true: we can construct a family of trees and payoff matrices for all players so that the best response policies for some of the players consist of an exponential number of segments.\nMoreover, in our example the breakpoint policies coincide with the best response policies, which means that even finding a single Nash equilibrium using the approach of [8, 9] is going to take exponential time.\nIn fact, a stronger statement is true: for any polynomial-time two-pass algorithm (defined later) that works with subsets of best response policies for this graph, we can choose the payoffs of the vertices so that the downstream pass of this algorithm will fail.\nFigure 5: The tree Tn that corresponds to an exponential-size breakpoint policy.\nIn the rest of this subsection, we describe this construction.\nConsider the tree Tn given by Figure 5; let Vn be the root of this tree.\nFor every k = 1, ...
, n, let the payoffs of Sk and Tk be the same as those for the U and V described in Section 3; recall that the breakpoint policies for U and V are shown in Figure 2.\nIt is not hard to see that the indifference function for Tk is given by f(s) = .8s + .1.\nThe payoff of V0 is 1 if V1 selects the same action as V0 and 0 otherwise; V0's best response policy is given by Figure 6.\nLEMMA 3.\nFix k < n, and let u, t, v, and w denote the strategies of Vk\u22121, Tk, Vk, and Vk+1, respectively.\nSuppose that Vk prefers playing 0 to playing 1 if and only if .5t + .1u + .2 > w.\nThen B(Vk+1, Vk) consists of at least 3^k segments.\nMoreover, {(v, w) | (v, w) \u2208 B(Vk+1, Vk), 0 \u2264 w \u2264 .2} = [0, .2]\u00d7{0} and {(v, w) | (v, w) \u2208 B(Vk+1, Vk), .8 \u2264 w \u2264 1} = [.8, 1]\u00d7{1}.\nPROOF.\nThe proof proceeds by induction on k. For k = 0, the statement is obvious.\nNow, suppose it is true for B(Vk, Vk\u22121).\nOne can view B(Vk+1, Vk) as a union of seven components: \u02c6f(0) \u2229 [0, 1]\u00d7{0}, \u02c6f(1) \u2229 [0, 1]\u00d7{1}, and five components that correspond to the segments of B(Vk, Tk).\nLet us examine them in turn.\nTo describe \u02c6f(0) \u2229 [0, 1]\u00d7{0}, note that f(u, t) = .5t + .1u + .2 is monotone in t and u and satisfies f(0, 0) = .2.\nAlso, we have pbrVk\u22121 (0) = {0} and pbrTk (0) = {0}.\nFor any w \u2208 [0, 1] we have f(0, 0) \u2265 w if and only if w \u2208 [0, .2].\nWe conclude that \u02c6f(0) \u2229 [0, 1]\u00d7{0} = [0, .2]\u00d7{0}.\nSimilarly, it follows that \u02c6f(1) \u2229 [0, 1]\u00d7{1} = [.8, 1]\u00d7{1}.\nDefine S1 = {(f(u, 0), v) | (v, u) \u2208 B(Vk, Vk\u22121) \u2229 [0, .9]\u00d7[0, 1]}, S2 = {(f(u, .5), v) | (v, u) \u2208 B(Vk, Vk\u22121) \u2229 [.1, .9]\u00d7[0, 1]}, S3 = {(f(u, 1), v) | (v, u) \u2208 B(Vk, Vk\u22121) \u2229 [.1, 1]\u00d7[0, 1]}; these sets correspond to horizontal segments of B(Vk, Tk).\nIt is easy to see that S1, S2, S3 \u2282 B(Vk+1, Vk).\nSince f is a continuous function, the number of
segments in each Si is at least the number of segments in B(Vk, Vk\u22121) \u2229 [.1, .9]\u00d7[0, 1], which is at least 3^(k\u22121) by the induction hypothesis.\nMoreover, as f is monotone in u and f(1, 0) < f(0, .5) < f(1, .5) < f(0, 1), all Si, i = 1, 2, 3, are disjoint.\nFinally, the set B(Vk+1, Vk) contains two segments that correspond to the vertical segments of B(Vk, Tk), i.e., S4 = {(f(0, t), .1) | t \u2208 [.5, 1]} = [.45, .7]\u00d7{.1} and S5 = {(f(1, t), .9) | t \u2208 [0, .5]} = [.3, .55]\u00d7{.9}.\nClearly, S4 connects S2 and S3, S5 connects S1 and S2, and S4 and S5 do not intersect each other.\nFigure 6: Breakpoint policies for V0 and V1.\nWe conclude that B(Vk+1, Vk) is a continuous line that consists of at least 3^k segments and satisfies the condition of the lemma.\nTo complete the construction, we need to show that we can design the payoff matrix for Vk so that it prefers playing 0 to playing 1 if and only if .5t + .1u + .2 > w. To this end, we prove a more general statement, namely, that the indifference function of a vertex can be an arbitrary fractional multilinear function of its descendants' strategies.\nWe say that a function of k variables is multilinear if it can be represented as a sum of monomials, each of which is linear in all of its variables.\nNote that this definition is different from a more standard one in that we do not require that all of the monomials have the same degree.\nRecall that the payoffs of a vertex with k + 1 neighbours are described by matrices P^0 and P^1, where P^j_{i0 i1...ik} is the payoff that V gets when it plays j and its neighbours play i0, ... , ik, where j, i0, ... , ik \u2208 {0, 1}.\nLet P[j] = P[j](w, u1, ... , uk) be the expected payoff obtained by this vertex when it plays j and the (mixed) strategies of its neighbours are given by a vector (w, u1, ...
, uk), i.e., P[j] = E[P^j_{i0 i1...ik}], where i0, ... , ik are independent Bernoulli random variables, each of which is 1 with the respective probabilities w, u1, ... , uk.\nLEMMA 4.\nGiven a tree vertex V whose parent is W and whose children are U1, ... , Uk, for any function f = f(u1, ... , uk) that can be represented as a ratio of two multilinear functions f1, f2, i.e., f = f1(u1, ... , uk)/f2(u1, ... , uk), there exist payoff matrices P^0 and P^1 for V such that P[0] \u2212 P[1] = wf2(u1, ... , uk) \u2212 f1(u1, ... , uk).\nThe proof of this lemma is based on the fact that every monomial of the form a_s(u0)^{s0} \u00b7\u00b7\u00b7 (uk)^{sk}, s0, ... , sk \u2208 {0, 1}, can be represented as \u03a3_{t=t0...tk \u2208 {0,1}^{k+1}} C_t (u0)^{t0}(1 \u2212 u0)^{1\u2212t0} \u00b7\u00b7\u00b7 (uk)^{tk}(1 \u2212 uk)^{1\u2212tk} for some C_t, t \u2208 {0, 1}^{k+1}.\nThe details can be found in the full version of this paper [6].\n6.2 Irreducibility of the Best Response Policy for Tn While the best response policy constructed in the previous subsection has exponential size, it is not clear a priori that it is necessary to keep track of all of its line segments rather than to focus on a small subset of these segments.\nHowever, it turns out that for two-pass algorithms such as the algorithm of [8], the best response policy cannot be simplified.\nFigure 7: Breakpoint policy for V2.\nMore precisely, we say that an algorithm A is a two-pass algorithm if\n\u2022 A consists of an upstream pass and a downstream pass.\n\u2022 During the upstream pass, for each vertex V with parent W, A constructs a set BB(W, V ) \u2286 B(W, V ).\nThis set is produced from the sets {BB(V, U) | U is a child of V } by applying the procedure from the beginning of Section 6 (substituting BB(V, Uj) for B(V, Uj) for all children Uj of V ), and then possibly omitting some of the
points of the resulting set (which is then stored explicitly).\n\u2022 The downstream pass is identical to the downstream pass of [8] as described in Section 2, except that it operates on the sets BB(W, V ) rather than on the sets B(W, V ).\nTheorem 7 demonstrates that any two-pass algorithm will fail during the downstream pass on Tn if there is an index j such that the set BB(Vj+1, Vj) omits any interior point of any of the (at least 3^j) segments of B(Vj+1, Vj).\nThis implies Theorem 3.\nTHEOREM 7.\nFor any two-pass algorithm A for which there exists an index j, j \u2208 [1, n/4], a segment S of B(Vj, Vj\u22121), and an interior point (x, y) of S such that BB(Vj, Vj\u22121) does not contain (x, y), we can choose payoff matrices of the vertices Vj, ... , Vn so that the downstream pass of A will fail, and, additionally, payoffs to V4j, ... , Vn are identically 0.\nWe sketch the proof of Theorem 7; the details can be found in the full version of this paper [6].\nWe proceed by induction.\nFor j = 1, the argument is similar to that in Section 3.\nFor the inductive step, the main idea is that we can zoom in on any part of a best response policy (including the part that was omitted!)\nby using an appropriate indifference function; this allows us to reduce the case j = j0 to j = j0 \u2212 1.\n7.\nPPAD-COMPLETENESS OF BOUNDED PATHWIDTH GRAPHICAL GAMES In the previous section, we showed that for graphical games on trees that are almost but not quite paths, two-pass algorithms fail to find Nash equilibria in polynomial time.\nWe next show that a milder path-like graph property allows us to construct graphical games for which it is unlikely that any polynomial-time algorithm will find Nash equilibria.\n7.1 Pathwidth A path decomposition of a graph G = (V, E) is a sequence of subsets Si(V ) \u2286 V such that for each edge (v, v') \u2208 E, v, v' \u2208 Si(V ) for some i, and furthermore, for each v \u2208 V , if v \u2208 Si(V ) and v \u2208 Sj(V ) for j > i, then v \u2208
Sk(V ) for all i \u2264 k \u2264 j.\nThe path decomposition has width k if all sets Si(V ) have cardinality at most k + 1.\nThe pathwidth of G is the minimum width of any path decomposition of G. Pathwidth is a restriction of treewidth (in which one would seek a tree whose vertices were the sets Si(V ), and the sets containing some vertex would have to form a subtree).\nFor any constant k, it can be decided in polynomial time whether a graph has pathwidth (or treewidth) k. Furthermore, many graph-theoretic problems seem easier to solve in polynomial time when restricted to graphs of fixed treewidth or pathwidth; see [1] for an overview.\nNote that a path has pathwidth 1 and a cycle has pathwidth 2.\n7.2 PPAD-completeness We review some basic definitions from the computational complexity theory of search problems.\nA search problem associates any input (here, a graphical game) with a set of solutions (here, the Nash equilibria of the input game), where the description length of any solution should be polynomially bounded as a function of the description length of its input.\nIn a total search problem, there is a guarantee that at least one solution exists for any input.\nNash's theorem assures us that the problem of finding Nash equilibria is total.\nA reduction from search problem S to problem S' is a mechanism that shows that any polynomial-time algorithm for S' implies a polynomial-time algorithm for S.\nIt consists of functions f and g, computable in polynomial time, where f maps inputs of S to inputs of S', and g maps solutions of S' to solutions of S, in such a way that if I_S is an input to S, and S_{S'} is a solution to f(I_S), then g(S_{S'}) is a solution to I_S.\nObserve that total search problems do not allow the above reductions from problems such as CIRCUIT SAT (where the input is a boolean circuit, and solutions are input vectors that make the output true) due to the fact that CIRCUIT SAT and other NP-complete problems have inputs with empty solution
sets.\nInstead, recent work on the computational complexity of finding a Nash equilibrium [7, 4, 5, 2, 3] has related it to the following problem.\nDefinition 4.\nEND OF THE LINE.\nInput: boolean circuits S and P, each having n input and n output bits, where P(0^n) = 0^n and S(0^n) \u2260 0^n.\nSolution: x \u2208 {0, 1}^n such that P(S(x)) \u2260 x, or alternatively x \u2208 {0, 1}^n with x \u2260 0^n such that S(P(x)) \u2260 x. S and P can be thought of as standing for successor and predecessor.\nObserve that by computing S^i(0^n) (for i = 0, 1, 2, ...) and comparing with P(S^{i+1}(0^n)), we must eventually find a solution to END OF THE LINE.\nEND OF THE LINE characterizes the complexity class PPAD (standing for parity argument on a graph, directed version), introduced in Papadimitriou [11], and any search problem S is PPAD-complete if END OF THE LINE reduces to S.\nOther PPAD-complete problems include the search for a ham sandwich hyperplane, and finding market equilibria in an exchange economy (see [11] for more detailed descriptions of these problems).\n3-GRAPHICAL NASH is the problem of finding a Nash equilibrium for a graphical game whose graph has degree 3.\nDaskalakis et al.
[4] show PPAD-completeness of 3-GRAPHICAL NASH by a reduction from 3-DIMENSIONAL BROUWER, introduced in [4] and defined as follows.\nDefinition 5.\n3-DIMENSIONAL BROUWER.\nInput: a circuit C having 3n input bits and 2 output bits.\nThe input bits define a cubelet of the unit cube, consisting of the 3 coordinates of its points, given to n bits of precision.\nThe output represents one of four colours assigned by C to a cubelet.\nC is restricted so as to assign colour 1 to cubelets adjacent to the (y, z)-plane, colour 2 to remaining cubelets adjacent to the (x, z)-plane, colour 3 to remaining cubelets on the (x, y)-plane, and colour 0 to all other cubelets on the surface of the unit cube.\nA solution is a panchromatic vertex, a vertex adjacent to cubelets that have 4 distinct colours.\nThe reason why a solution is guaranteed to exist is that an associated Brouwer function \u03c6 can be constructed, i.e., a continuous function from the unit cube to itself, such that panchromatic vertices correspond to fixpoints of \u03c6.\nBrouwer's Fixpoint Theorem promises the existence of a fixpoint.\nThe proof of Theorem 4 uses a modification of the reduction of [4] from 3-DIMENSIONAL BROUWER to 3-GRAPHICAL NASH.\nTo prove the theorem, we begin with some preliminary results as follows.\nEach player has 2 actions, denoted 0 and 1.\nFor a player at vertex V let p[V ] denote the probability that the player plays 1.\nLEMMA 5.\n[7] There exists a graphical game Gshift of fixed size having vertices V, V' where p[V'] is the fractional part of 2p[V ].\nCOROLLARY 1.\nThere exists a graphical game Gn\u2212shift of size \u0398(n) and constant pathwidth, having vertices V, Vn where p[Vn] is the fractional part of 2^n \u00b7 p[V ].\nPROOF.\nMake a chain of n copies of Gshift from Lemma 5.\nEach subset of vertices in the path decomposition is the vertex set of a copy of Gshift.\nLet In(x) denote the n-th bit of the binary expansion of x, where we interpret 1 as true and 0 as false.\nThe following uses
gadgets from [7, 4].\nCOROLLARY 2.\nThere exists k such that for all n, and for all n1, n2, n3 \u2264 n, there exists a graphical game of size O(n) with pathwidth k, having vertices V1, V2, V3 where p[V3] = p[V1] + 2^(\u2212n3) (In1(p[V1]) \u2227 In2(p[V2])).\nPROOF OF THEOREM 4.\nLet C be the boolean circuit describing an instance of 3-DIMENSIONAL BROUWER.\nLet g1, ... , gp(n) be the gates of C, indexed in such a way that the input(s) to any gate are the output(s) of lower-indexed gates.\ng1, ... , g3n will be the 3n inputs to C. All players in the graphical game G constructed in [4] have 2 actions, denoted 0 and 1.\nThe probability that V plays 1 is denoted p[V ].\nG has 3 players Vx, Vy and Vz for which p[Vx], p[Vy] and p[Vz] represent the coordinates of a point in the unit cube.\nG is designed to incentivize Vx, Vy and Vz to adjust their probabilities in directions given by a Brouwer function which is itself specified by the circuit C.\nIn a Nash equilibrium, p[Vx], p[Vy] and p[Vz] represent coordinates of a fixpoint of a function that belongs to the class of functions represented by 3-DIMENSIONAL BROUWER.\nFor 1 \u2264 i \u2264 p(n) we introduce a vertex V_C^(i) such that for 1 \u2264 j \u2264 i, Ij(p[V_C^(i)]) is the output of gate gj; for i < j \u2264 p(n), Ij(p[V_C^(i)]) is 0.\nConstruct V_C^(i) from V_C^(i\u22121) using Corollary 2.\nLet G^(i) be the graphical game that does this.\nLet S1(G^(i)), ... , Sn(G^(i)) be a length-n path decomposition of G^(i), where V_C^(i\u22121) \u2208 S1(G^(i)) and V_C^(i) \u2208 Sn(G^(i)).\nThen, a path decomposition of \u222a_{1\u2264i\u2264p(n)} G^(i) is obtained by taking the union of the separate path decompositions, together with Sn(G^(i\u22121)) \u222a S1(G^(i)) for 2 \u2264 i \u2264 p(n).\nLet GC be the above graphical game that simulates C.
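The bit-extraction arithmetic behind the Gshift chain of Corollary 1 and the bit indicators In(x) can be sketched as follows. This is an illustrative aside under our own naming, not the graphical-game gadget itself: in the actual construction this arithmetic is realized through players' mixed strategies.

```python
# Taking the fractional part of 2x shifts the binary expansion of x left by
# one bit, so a chain of n shift gadgets exposes the n-th bit I_n(x) as the
# leading bit of the result.  Function names here are our own.

def shift(x: float) -> float:
    """One shift gadget: the fractional part of 2x."""
    return (2.0 * x) % 1.0

def nth_bit(x: float, n: int) -> int:
    """I_n(x): the n-th bit of the binary expansion of x (n >= 1)."""
    for _ in range(n - 1):
        x = shift(x)
    return 1 if x >= 0.5 else 0

# x = 0.101101 in binary = 0.703125 (a dyadic rational, exact as a float)
x = 0.703125
print([nth_bit(x, n) for n in range(1, 7)])  # -> [1, 0, 1, 1, 0, 1]
```

Dyadic rationals such as 0.703125 are exactly representable in binary floating point, so the shifts above are exact; for the game itself, precision is limited to the n bits fixed in the construction.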
GC has 3n inputs, consisting of the first n bits of the binary expansions of p[Vx], p[Vy] and p[Vz].\nSimilarly to [4], the output of GC affects Vx, Vy and Vz as follows.\nColour 0 incentivizes Vx, Vy and Vz to adjust their probabilities p[Vx], p[Vy] and p[Vz] in the direction (\u22121, \u22121, \u22121); colour 1 incentivizes them to move in direction (1, 0, 0); colour 2, direction (0, 1, 0); colour 3, direction (0, 0, 1).\nWe need to ensure that at points at the boundaries of adjacent cubelets, the change of direction will be approximately the average of the directions of surrounding points.\nThat way, all four colours/directions must be nearby so that they can cancel each other out (and we are at a panchromatic vertex).\nThis is achieved using the same trick as [4], in which we make a constant number M of copies of GC, which differ in that each copy adds a tiny displacement vector to its copies of p[Vx], p[Vy] and p[Vz] (which are derived from the original using the addition gadget of [7]).\nUsing the addition and multiplication gadgets of [7], we average the directions and add a small multiple of this average to (p[Vx], p[Vy], p[Vz]).\nAt a Nash equilibrium the outputs of each copy will cancel each other out.\nThe pathwidth of the whole game is at most M times the pathwidth of GC.\n8.\nOPEN PROBLEMS The most important problem left open by this paper is whether it is possible to find a Nash equilibrium of a graphical game on a bounded-degree tree in polynomial time.\nOur construction shows that any two-pass algorithm that explicitly stores breakpoint policies needs exponential time and space.\nHowever, it does not preclude the existence of an algorithm that is based on a similar idea, but, instead of computing the entire breakpoint policy for each vertex, uses a small number of additional passes through the graph to decide which (polynomial-sized) parts of each breakpoint policy should be computed.\nIn particular, such an algorithm may be based on the approximation
algorithm of [8], where the value of \u03b5 is chosen adaptively.\nAnother intriguing question is related to the fact that the graph for which we constructed an exponential-size breakpoint policy has pathwidth 2, while our positive results are for a path, i.e., a graph of pathwidth 1.\nIt is not clear if for any bounded-degree graph of pathwidth 1 the running time of (the breakpoint-policy-based version of) our algorithm will be polynomial.\nIn particular, it is instructive to consider a caterpillar graph, i.e., the graph that can be obtained from Tn by deleting the vertices S1, ... , Sn.\nFor this graph, the best response policy of a vertex Vk in the spine of the caterpillar is obtained by combining the best response policy of its predecessor on the spine Vk\u22121 and its other child Tk; since the latter is a leaf, its best response policy is either trivial (i.e., [0, 1]^2, [0, 1]\u00d7{0}, or [0, 1]\u00d7{1}) or consists of two horizontal segments and one vertical segment of the form {\u03b1}\u00d7[0, 1] that connects them.\nAssuming for convenience that B(Vk, Tk) = [0, \u03b1]\u00d7{0} \u222a {\u03b1}\u00d7[0, 1] \u222a [\u03b1, 1]\u00d7{1}, and f is the indifference function for Vk, we observe that the best response policy for Vk consists of 5 components: \u02c6f(0), \u02c6f(1), and three components that correspond to [0, \u03b1]\u00d7{0}, {\u03b1}\u00d7[0, 1], and [\u03b1, 1]\u00d7{1}.\nHence, one can think of constructing B(Vk+1, Vk) as the following process: turn B(Vk, Vk\u22121) by \u03c0/2, cut it along the (now horizontal) line vk = \u03b1, apply a fractional linear transform to the horizontal coordinate of both parts, and reconnect them using the image of the segment {\u03b1}\u00d7[0, 1] under f.\nThis implies that the problem of bounding the size of the best response policy (or, alternatively, the breakpoint policy) can be viewed as a generalization of the following computational geometry problem, which we believe may be of independent interest: PROBLEM
1.\nGiven a collection of axis-parallel segments in R2 , consider the following operation: pick an axis-parallel line li (either vertical or horizontal), cut the plane along this line, and shift one of the resulting two parts by an arbitrary amount \u03b4i; as a result, some segments will be split into two parts.\nReconnect these parts, i.e., for each segment of the form [a, b] \u00d7 {c} that was transformed into [a, t] \u00d7 {c + \u03b4i} and [t, b] \u00d7 {c}, introduce a segment {t}\u00d7[c, c+\u03b4i].\nIs it possible to start with the segment [0, 1] and after n operations obtain a set that cannot be represented as a union of poly(n) line segments?\nIf yes, can it be the case that in this set, there is no path with a polynomial number of turns that connects the endpoints of the original segment?\nIt turns out that in general, the answer to the first question is positive, i.e., after n steps, it is possible to obtain a set that consists of \u0398(cn ) segments for some c > 0.\nThis implies that even for a caterpillar, the best response policy can be exponentially large.\nHowever, in our example (which is omitted from this version of the paper due to space constraints), there exists a polynomial-size path through the best response policy, i.e., it does not prove that the breakpoint policy is necessarily exponential in size.\nIf one can prove that this is always the case, it may be possible to adapt this proof to show that there can be an exponential gap between the sizes of best response policies and breakpoint policies.\n9.\nREFERENCES [1] H. Bodlaender and T. Kloks.\nEfficient and constructive algorithms for the pathwidth and treewidth of graphs.\nJournal of Algorithms, 21:358-402, 1996.\n[2] X. Chen and X. Deng.\n3-NASH is PPAD-complete.\nTechnical Report TR-05-134, Electronic Colloquium in Computational Complexity, 2005.\n[3] X. Chen and X. 
Deng.\nSettling the complexity of 2-player Nash equilibrium.\nTechnical Report TR-05-140, Electronic Colloquium in Computational Complexity, 2005.\n[4] C. Daskalakis, P. Goldberg, and C. Papadimitriou.\nThe complexity of computing a Nash equilibrium.\nIn Proceedings of the 38th ACM Symposium on Theory of Computing, 2006.\n[5] C. Daskalakis and C. Papadimitriou.\nThree-player games are hard.\nTechnical Report TR-05-139, Electronic Colloquium in Computational Complexity, 2005.\n[6] E. Elkind, L. Goldberg, and P. Goldberg.\nNash equilibria in graphical games on trees revisited.\nTechnical Report TR-06-005, Electronic Colloquium in Computational Complexity, 2006.\n[7] P. Goldberg and C. Papadimitriou.\nReducibility among equilibrium problems.\nIn Proceedings of the 38th ACM Symposium on Theory of Computing, 2006.\n[8] M. Kearns, M. Littman, and S. Singh.\nGraphical models for game theory.\nIn Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, 2001.\n[9] M. Littman, M. Kearns, and S. Singh.\nAn efficient exact algorithm for singly connected graphical games.\nIn Proceedings of the 15th Annual Conference on Neural Information Processing Systems, 2001.\n[10] L. Ortiz and M. Kearns.\nNash propagation for loopy graphical games.\nIn Proceedings of the 17th Annual Conference on Neural Information Processing Systems, 2003.\n[11] C. Papadimitriou.\nOn the complexity of the parity argument and other inefficient proofs of existence.\nJ. 
Comput. Syst. Sci., 48(3):498-532, 1994.","lvl-3":"Nash Equilibria in Graphical Games on Trees Revisited *\nGraphical games have been proposed as a game-theoretic model of large-scale distributed networks of non-cooperative agents.\nWhen the number of players is large, and the underlying graph has low degree, they provide a concise way to represent the players' payoffs.\nIt has recently been shown that the problem of finding Nash equilibria in a general degree-3 graphical game with two actions per player is complete for the complexity class PPAD, indicating that it is unlikely that there is any polynomial-time algorithm for this problem.\nIn this paper, we study the complexity of graphical games with two actions per player on bounded-degree trees.\nThis setting was first considered by Kearns, Littman and Singh, who proposed a dynamic programming-based algorithm that computes all Nash equilibria of such games.\nThe running time of their algorithm is exponential, though approximate equilibria can be computed efficiently.\nLater, Littman, Kearns and Singh proposed a modification to this algorithm that can find a single Nash equilibrium in polynomial time.\nWe show that this modified algorithm is incorrect--the output is not always a Nash equilibrium.\nWe then propose a new algorithm that is based on the ideas of Kearns et al.
and computes all Nash equilibria in quadratic time if the input graph is a path, and in polynomial time if it is an arbitrary graph of maximum degree 2.\nMoreover, our algorithm can be used to compute Nash equilibria of graphical games on arbitrary trees, but the running time can be exponential, even when the tree has bounded degree.\nWe show that this is inevitable--any algorithm of this type will take exponential time, even on bounded-degree trees with pathwidth 2.\nIt is an open question whether our algorithm runs in polynomial time on graphs with pathwidth 1, but we show that finding a Nash equilibrium for a 2-action graphical game in which the underlying graph has maximum degree 3 and constant pathwidth is PPAD-complete (so is unlikely to be tractable).\n* This research is supported by the EPSRC research grants \"Algorithmics of Network-sharing Games\" and \"Discontinuous Behaviour in the Complexity of randomized Algorithms\".\n1.\nINTRODUCTION\nGraphical games were introduced in the papers of Kearns et al. [8] and Littman et al. 
[9] as a succinct representation of games with a large number of players.\nThe classical normal form (or matrix form) representation has a size that is exponential in the number of players, making it unsuitable for large-scale distributed games.\nA graphical game associates each player with a vertex of an underlying graph G, and the payoff to that player is a function of the actions chosen by himself and his neighbours in G; if G has low degree, this is a concise way to represent a game with many players.\nThe papers [8, 9] give a dynamic-programming algorithm for finding Nash equilibria in graphical games where there are two actions per player and G is a tree.\nThe first of these papers describes a generic algorithm for this problem that can be specialized in two ways: as an algorithm that computes approximations to all Nash equilibria in time polynomial in the input size and the approximation quality, or as an exponential-time algorithm that allows the exact computation of all Nash equilibria in G.\nIn [9], the authors propose a modification to the latter algorithm that aims to find a single Nash equilibrium in polynomial time.\nThis does not quite work, as we show in Section 3, though it introduces a useful idea.\n1.1 Background\nThe generic algorithm of [8] consists of two phases which we will refer to as the upstream pass and the downstream pass; the former starts at the leaves of the tree and ends at the root, while the latter starts at the root and ends at the leaves.\n(Note that the terminology \"upstream\" and \"downstream\" is reversed in [8, 9]--our trees are rooted at the top.)\nIt is assumed that each player has two pure strategies (actions), which are denoted by 0 and 1; it follows that any mixed strategy can be represented as a single number x \u2208 [0, 1], where x is the probability that the player selects 1.\nDuring the upstream pass, each vertex V computes the set of its potential best responses to every mixed strategy w of its parent W; a strategy v is a potential best response to w if
there is a Nash equilibrium in the graphical game downstream of V (inclusive) given that W plays w (for a more technical definition, the reader is referred to Section 2).\nThe output of this stage can be viewed as a (continuous) table T(w, v), where T(w, v) = 1 if and only if v is a potential best response to w; we refer to this table as the best response policy for V.\nThe generic algorithm does not address the problem of representing the best response policy; in fact, the most important difference between the two instantiations of the generic algorithm described in [8] is in their approach to this issue.\nThe computation is performed inductively: the best response policy for V is computed based on the best response policies of V's children U1,..., Uk.\nBy the end of the upstream pass, all children of the root have computed their best response policies.\nIn the beginning of the downstream pass, the root selects its strategy and informs its children about its choice.\nIt also selects a strategy for each child.\nA necessary and sufficient condition for the algorithm to proceed is that the strategy of the root is a best response to the strategies of its children and, for each child, the chosen strategy is one of the pre-computed potential best responses to the chosen strategy of the root.\nThe equilibrium then propagates downstream, with each vertex selecting its children's actions.\nThe action of the child is chosen to be any strategy from the pre-computed potential best responses to the chosen strategy of the parent.\nTo bound the running time of this algorithm, the paper [8] shows that any best response policy can be represented as a union of an exponential number of rectangles; the polynomial-time approximation algorithm is obtained by combining this representation with a polynomial-sized grid.\nThe main idea of [9] is that it is not necessary to keep track of all rectangles in the best response policies; rather, at
each step of the upstream pass, it is possible to select a polynomial-size subset of the corresponding policy (in [9], this subset is called a breakpoint policy), and still ensure that the downstream pass can proceed successfully (a sufficient condition for this is that the subset of the best response policy for V stored by the algorithm contains a continuous path from w = 0 to w = 1).\n1.2 Our Results\nOne of the main contributions of our paper is to show that the algorithm proposed by [9] is incorrect.\nIn Section 3 we describe a simple example for which the algorithm of [9] outputs a vector of strategies that does not constitute a Nash equilibrium of the underlying game.\nIn Sections 4, 5 and 6 we show how to fix the algorithm of [9] so that it always produces correct output.\nSection 4 considers the case in which the underlying graph is a path of length n. For this case, we show that the number of rectangles in each of the best response policies is O(n^2).\nThis gives us an O(n^3) algorithm for finding a Nash equilibrium, and for computing a representation of all Nash equilibria.\n(This algorithm is a special case of the generic algorithm of [8]--we show that it runs in polynomial time when the underlying graph is a path.)\nWe can improve the running time of the generic algorithm using the ideas of [9].\nIn particular, we give an O(n^2) algorithm for finding a Nash equilibrium of a graphical game on a path of length n.
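The two-pass scheme described above can be made concrete in its discretized (approximation) form. The following is a toy Python sketch, not the paper's exact rectangle-based algorithm: the random payoff tables, the path length n, the grid resolution m and the slack eps are all illustrative assumptions, and the tables T[i] play the role of the best response policies restricted to a finite grid.

```python
import itertools
import random
from functools import lru_cache

random.seed(0)
n, m, eps = 4, 25, 0.25            # path length, grid resolution, approximation slack
grid = tuple(k / m for k in range(m + 1))

# hypothetical random payoff tables U[i][(a_left, a_i, a_right)] in [0, 1];
# missing neighbours of the two endpoints are modelled as dummies fixed at x = 0.0
U = [{t: random.random() for t in itertools.product((0, 1), repeat=3)} for _ in range(n)]

def exp_payoff(i, x_left, v, x_right):
    """Expected payoff to player i mixing with prob. v of action 1."""
    total = 0.0
    for al, a, ar in itertools.product((0, 1), repeat=3):
        p = ((x_left if al else 1 - x_left) * (v if a else 1 - v)
             * (x_right if ar else 1 - x_right))
        total += p * U[i][(al, a, ar)]
    return total

@lru_cache(maxsize=None)
def br_set(i, x_left, x_right):
    """Grid strategies of player i that are eps-best responses to its neighbours."""
    best = max(exp_payoff(i, x_left, v, x_right) for v in grid)
    return frozenset(v for v in grid if exp_payoff(i, x_left, v, x_right) >= best - eps)

# upstream pass (leaf 0 up to root n-1): T[i][w] holds the potential best responses
# of player i to parent strategy w, i.e. responses extendable to a downstream equilibrium
T = [dict() for _ in range(n)]
for w in grid:
    T[0][w] = br_set(0, 0.0, w)                      # leaf: no child below it
for i in range(1, n - 1):
    for w in grid:
        T[i][w] = {v for v in grid
                   if any(v in br_set(i, u, w) for u in T[i - 1][v])}

# downstream pass: the root picks a strategy consistent with some child choice,
# then every vertex selects its child's strategy from the precomputed sets
profile = [None] * n
for v_root in grid:
    opts = [u for u in T[n - 2][v_root] if v_root in br_set(n - 1, u, 0.0)]
    if opts:
        profile[n - 1], profile[n - 2] = v_root, opts[0]
        break
for i in range(n - 2, 0, -1):
    v, w = profile[i], profile[i + 1]
    profile[i - 1] = next(u for u in T[i - 1][v] if v in br_set(i, u, w))

print("eps-equilibrium profile:", profile)
```

With a grid fine enough relative to eps, rounding an exact equilibrium onto the grid stays inside every eps-best-response set, which is why the downstream pass can always complete a consistent chain in this sketch.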
Instead of storing best response policies, this algorithm stores appropriately-defined subsets, which, following [9], we call breakpoint policies (modifying the definition as necessary).\nWe obtain the following theorem.\nTHEOREM 1.\nThere is an O(n^2) algorithm that finds a Nash equilibrium of a graphical game with two actions per player on an n-vertex path.\nThere is an O(n^3) algorithm that computes a representation of all Nash equilibria of such a game.\nIn Section 5 we extend the results of Section 4 to general degree-2 graphs, obtaining the following theorem.\nTHEOREM 2.\nThere is a polynomial-time algorithm that finds a Nash equilibrium of a graphical game with two actions per player on a graph with maximum degree 2.\nIn Section 6 we extend our algorithm so that it can be used to find a Nash equilibrium of a graphical game on an arbitrary tree.\nEven when the tree has bounded degree, the running time can be exponential.\nWe show that this is inevitable by constructing a family of graphical games on bounded-degree trees for which best response policies of some of the vertices have exponential size, and any two-pass algorithm (i.e., an algorithm that is similar in spirit to that of [8]) has to store almost all points of the best response policies.\nIn particular, we show the following.\nTHEOREM 3.\nThere is an infinite family of graphical games on bounded-degree trees with pathwidth 2 such that any two-pass algorithm for finding Nash equilibria on these trees requires exponential time and space.\nIt is interesting to note that the trees used in the proof of Theorem 3 have pathwidth 2, that is, they are very close to being paths.\nIt is an open question whether our algorithm runs in polynomial time for graphs of pathwidth 1.\nThis question can be viewed as a generalization of a very natural computational geometry problem--we describe it in more detail in Section 8.\nIn Section 7, we give a complexity-theoretic intractability result for the problem of finding a Nash
equilibrium of a graphical game on a graph with small pathwidth.\nWe prove the following theorem.\nTHEOREM 4.\nConsider the problem of finding a Nash equilibrium for a graphical game in which the underlying graph has maximum degree 3 and pathwidth k.\nThere is a constant k such that this problem is PPAD-complete.\nTheorem 4 limits the extent to which we can exploit \"path-like\" properties of the underlying graph, in order to find Nash equilibria.\nTo prove Theorem 4, we use recent PPAD-completeness results for games, in particular the papers [7, 4] which show that the problem of finding Nash equilibria in graphical games of degree d (for d \u2265 3) is computationally equivalent to the problem of solving r-player normal-form games (for r \u2265 4), both of which are PPAD-complete.\n2.\nPRELIMINARIES AND NOTATION\n3.\nALGORITHM OF LITTMAN ET AL.\n4.\nFINDING EQUILIBRIA ON A PATH\n4.1 Finding a Single Nash Equilibrium in O(n^2) Time\n5.\nNASH EQUILIBRIA ON GRAPHS WITH MAXIMUM DEGREE 2\n6.\nFINDING EQUILIBRIA ON AN (ARBITRARY) TREE\n6.1 Exponential Size Breakpoint Policy\n6.2 Irreducibility of the Best Response Policy for Tn\n7.\nPPAD-COMPLETENESS OF BOUNDED PATHWIDTH GRAPHICAL GAMES\n7.1 Pathwidth\n7.2 PPAD-completeness\n8.\nOPEN PROBLEMS\nThe most important problem left open by this paper is whether it is possible to find a Nash equilibrium of a graphical game on a bounded-degree tree in polynomial time.\nOur construction shows that any two-pass algorithm that explicitly stores breakpoint policies needs exponential time and space.\nHowever, it does not preclude the existence of an algorithm that is based on a similar idea, but, instead of computing the entire breakpoint policy for each vertex, uses a small number of additional passes through the graph to decide which (polynomial-sized) parts of each breakpoint policy should be computed.\nIn particular, such an algorithm may be based on the approximation algorithm of [8], where the value of \u03b5 is chosen adaptively.\nAnother
intriguing question is related to the fact that the graph for which we constructed an exponential-sized breakpoint policy has pathwidth 2, while our positive results are for a path, i.e., a graph of pathwidth 1.\nIt is not clear if for any bounded-degree graph of pathwidth 1 the running time of (the breakpoint policy-based version of) our algorithm will be polynomial.\nIn particular, it is instructive to consider a \"caterpillar\" graph, i.e., the graph that can be obtained from Tn by deleting the vertices S1,..., Sn.\nFor this graph, the best response policy of a vertex Vk in the \"spine\" of the caterpillar is obtained by combining the best response policy of its predecessor on the spine Vk\u22121 and its other child Tk; since the latter is a leaf, its best response policy is either trivial (i.e., [0, 1]^2, [0, 1] \u00d7 {0}, or [0, 1] \u00d7 {1}) or consists of two horizontal segments and one vertical segment of the form {\u03b1} \u00d7 [0, 1] that connects them.\nAssuming for convenience that B(Vk, Tk) = [0, \u03b1] \u00d7 {0} \u222a {\u03b1} \u00d7 [0, 1] \u222a [\u03b1, 1] \u00d7 {1}, and f is the indifference function for Vk, we observe that the best response policy for Vk consists of 5 components: f\u02c6(0), f\u02c6(1), and three components that correspond to [0, \u03b1] \u00d7 {0}, {\u03b1} \u00d7 [0, 1], and [\u03b1, 1] \u00d7 {1}.\nHence, one can think of constructing B(Vk+1, Vk) as the following process: turn B(Vk, Vk\u22121) by \u03c0\/2, cut it along the (now horizontal) line vk = \u03b1, apply a fractional linear transform to the horizontal coordinate of both parts, and reconnect them using the image of the segment {\u03b1} \u00d7 [0, 1] under f.\nThis implies that the problem of bounding the size of the best response policy (or, alternatively, the breakpoint policy) can be viewed as a generalization of the following computational geometry problem, which we believe may be of independent interest: PROBLEM 1.\nGiven a collection of
axis-parallel segments in R^2, consider the following operation: pick an axis-parallel line li (either vertical or horizontal), cut the plane along this line, and shift one of the resulting two parts by an arbitrary amount \u03b4i; as a result, some segments will be split into two parts.\nReconnect these parts, i.e., for each segment of the form [a, b] \u00d7 {c} that was transformed into [a, t] \u00d7 {c + \u03b4i} and [t, b] \u00d7 {c}, introduce a segment {t} \u00d7 [c, c + \u03b4i].\nIs it possible to start with the segment [0, 1] and after n operations obtain a set that cannot be represented as a union of poly(n) line segments?\nIf yes, can it be the case that in this set, there is no path with a polynomial number of turns that connects the endpoints of the original segment?\nIt turns out that in general, the answer to the first question is positive, i.e., after n steps, it is possible to obtain a set that consists of \u0398(c^n) segments for some c > 1.\nThis implies that even for a caterpillar, the best response policy can be exponentially large.\nHowever, in our example (which is omitted from this version of the paper due to space constraints), there exists a polynomial-size path through the best response policy, i.e., it does not prove that the breakpoint policy is necessarily exponential in size.\nIf one can prove that this is always the case, it may be possible to adapt this proof to show that there can be an exponential gap between the sizes of best response policies and breakpoint policies.","lvl-4":"Nash Equilibria in Graphical Games on Trees Revisited *\nGraphical games have been proposed as a game-theoretic model of large-scale distributed networks of non-cooperative agents.\nWhen the number of players is large, and the underlying graph has low degree, they provide a concise way to represent the players' payoffs.\nIt has recently been shown that the problem of finding Nash equilibria in a general degree-3 graphical game with two actions per player is
complete for the complexity class PPAD, indicating that it is unlikely that there is any polynomial-time algorithm for this problem.\nIn this paper, we study the complexity of graphical games with two actions per player on bounded-degree trees.\nThis setting was first considered by Kearns, Littman and Singh, who proposed a dynamic programming-based algorithm that computes all Nash equilibria of such games.\nThe running time of their algorithm is exponential, though approximate equilibria can be computed efficiently.\nLater, Littman, Kearns and Singh proposed a modification to this algorithm that can find a single Nash equilibrium in polynomial time.\nWe show that this modified algorithm is incorrect--the output is not always a Nash equilibrium.\nWe then propose a new algorithm that is based on the ideas of Kearns et al. and computes all Nash equilibria in quadratic time if the input graph is a path, and in polynomial time if it is an arbitrary graph of maximum degree 2.\nMoreover, our algorithm can be used to compute Nash equilibria of graphical games on arbitrary trees, but the running time can be exponential, even when the tree has bounded degree.\nWe show that this is inevitable--any algorithm of this type will take exponential time, even on bounded-degree trees with pathwidth 2.\nIt is an open question whether our algorithm runs in polynomial time on graphs with pathwidth 1, but we show that finding a Nash equilibrium for a 2-action graphical game in which the underlying graph has maximum degree 3 and constant pathwidth is PPAD-complete (so is unlikely to be tractable).\n* This research is supported by the EPSRC research grants \"Algorithmics of Network-sharing Games\" and \"Discontinuous Behaviour in the Complexity of randomized Algorithms\".\n1.\nINTRODUCTION\nGraphical games were introduced in the papers of Kearns et al. [8] and Littman et al. 
[9] as a succinct representation of games with a large number of players.\nThe classical normal form (or matrix form) representation has a size that is exponential in the number of players, making it unsuitable for large-scale distributed games.\nA graphical game associates each player with a vertex of an underlying graph G, and the payoff to that player is a function of the actions chosen by himself and his neighbours in G; if G has low degree, this is a concise way to represent a game with many players.\nThe papers [8, 9] give a dynamic-programming algorithm for finding Nash equilibria in graphical games where there are two actions per player and G is a tree.\nThe first of these papers describes a generic algorithm for this problem that can be specialized in two ways: as an algorithm that computes approximations to all Nash equilibria in time polynomial in the input size and the approximation quality, or as an exponential-time algorithm that allows the exact computation of all Nash equilibria in G.\nIn [9], the authors propose a modification to the latter algorithm that aims to find a single Nash equilibrium in polynomial time.\nThis does not quite work, as we show in Section 3, though it introduces a useful idea.\n1.1 Background\nThe generic algorithm of [8] consists of two phases which we will refer to as the upstream pass and the downstream pass; the former starts at the leaves of the tree and ends at the root, while the latter starts at the root and ends at the leaves.\nA strategy v of a vertex V is a potential best response to a mixed strategy w of V's parent W if there is a Nash equilibrium in the graphical game downstream of V (inclusive) given that W plays w (for a more technical definition, the reader is referred to Section 2).\nThe generic algorithm does not address the problem of representing the best response policy; in fact, the most important difference between the two instantiations of the generic algorithm described in [8] is in their approach to this issue.\nThe computation is performed inductively: the best response policy for V is
computed based on the best response policies of V's children U1,..., Uk.\nBy the end of the upstream pass, all children of the root have computed their best response policies.\nIn the beginning of the downstream pass, the root selects its strategy and informs its children about its choice.\nIt also selects a strategy for each child.\nA necessary and sufficient condition for the algorithm to proceed is that the strategy of the root is a best response to the strategies of its children and, for each child, the chosen strategy is one of the pre-computed potential best responses to the chosen strategy of the root.\nThe equilibrium then propagates downstream, with each vertex selecting its children's actions.\nThe action of the child is chosen to be any strategy from the pre-computed potential best responses to the chosen strategy of the parent.\nTo bound the running time of this algorithm, the paper [8] shows that any best response policy can be represented as a union of an exponential number of rectangles; the polynomial time approximation algorithm is obtained by combining this representation with a polynomial-sized grid.\n1.2 Our Results\nOne of the main contributions of our paper is to show that the algorithm proposed by [9] is incorrect.\nIn Section 3 we describe a simple example for which the algorithm of [9] outputs a vector of strategies that does not constitute a Nash equilibrium of the underlying game.\nIn Sections 4, 5 and 6 we show how to fix the algorithm of [9] so that it always produces correct output.\nSection 4 considers the case in which the underlying graph is a path of length n. 
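Since the central claim above is that a proposed algorithm can output a strategy profile that is not a Nash equilibrium, it is worth noting that candidate outputs are easy to verify directly: with two actions, a mixed strategy is a best response exactly when it puts no weight on a strictly worse action. A minimal Python checker (the toy coordination game and all helper names are our own illustrations, not from the paper):

```python
import itertools

def exp_payoff(player, x_self, profile, neigh, payoff):
    """Expected payoff when `player` plays 1 w.p. x_self and neighbours follow `profile`."""
    others = neigh[player]
    total = 0.0
    for acts in itertools.product((0, 1), repeat=len(others) + 1):
        a_self, a_others = acts[0], acts[1:]
        p = x_self if a_self else 1 - x_self
        for u, a in zip(others, a_others):
            p *= profile[u] if a else 1 - profile[u]
        total += p * payoff[player](a_self, dict(zip(others, a_others)))
    return total

def is_nash(profile, neigh, payoff, tol=1e-9):
    """True iff every player's mixed strategy is a best response to its neighbours."""
    for v, x in profile.items():
        u0 = exp_payoff(v, 0.0, profile, neigh, payoff)
        u1 = exp_payoff(v, 1.0, profile, neigh, payoff)
        if u1 > u0 + tol and x < 1.0:   # should shift all weight to action 1
            return False
        if u0 > u1 + tol and x > 0.0:   # should shift all weight to action 0
            return False
    return True

# toy coordination game on the path A - B - C: each player earns 1 per matching neighbour
neigh = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
pay = {v: (lambda a, nb: sum(1 for x in nb.values() if x == a)) for v in neigh}
print(is_nash({"A": 1.0, "B": 1.0, "C": 1.0}, neigh, pay))   # True
print(is_nash({"A": 1.0, "B": 0.0, "C": 1.0}, neigh, pay))   # False
```

A checker of this kind is exactly how a counterexample to a proposed equilibrium-finding algorithm can be certified: run the algorithm, then test its output profile player by player.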
For this case, we show that the number of rectangles in each of the best response policies is O(n^2).\nThis gives us an O(n^3) algorithm for finding a Nash equilibrium, and for computing a representation of all Nash equilibria.\n(This algorithm is a special case of the generic algorithm of [8]--we show that it runs in polynomial time when the underlying graph is a path.)\nWe can improve the running time of the generic algorithm using the ideas of [9].\nIn particular, we give an O(n^2) algorithm for finding a Nash equilibrium of a graphical game on a path of length n. Instead of storing best response policies, this algorithm stores appropriately-defined subsets, which, following [9], we call breakpoint policies (modifying the definition as necessary).\nWe obtain the following theorem.\nTHEOREM 1.\nThere is an O(n^2) algorithm that finds a Nash equilibrium of a graphical game with two actions per player on an n-vertex path.\nThere is an O(n^3) algorithm that computes a representation of all Nash equilibria of such a game.\nIn Section 5 we extend the results of Section 4 to general degree-2 graphs, obtaining the following theorem.\nTHEOREM 2.\nThere is a polynomial-time algorithm that finds a Nash equilibrium of a graphical game with two actions per player on a graph with maximum degree 2.\nIn Section 6 we extend our algorithm so that it can be used to find a Nash equilibrium of a graphical game on an arbitrary tree.\nEven when the tree has bounded degree, the running time can be exponential.\nWe show that this is inevitable by constructing a family of graphical games on bounded-degree trees for which best response policies of some of the vertices have exponential size, and any two-pass algorithm (i.e., an algorithm that is similar in spirit to that of [8]) has to store almost all points of the best response policies.\nIn particular, we show the following.\nTHEOREM 3.\nThere is an infinite family of graphical games on bounded-degree trees with pathwidth 2 such that any
two-pass algorithm for finding Nash equilibria on these trees requires exponential time and space.\nIt is interesting to note that the trees used in the proof of Theorem 3 have pathwidth 2, that is, they are very close to being paths.\nIt is an open question whether our algorithm runs in polynomial time for graphs of pathwidth 1.\nThis question can be viewed as a generalization of a very natural computational geometry problem--we describe it in more detail in Section 8.\nIn Section 7, we give a complexity-theoretic intractability result for the problem of finding a Nash equilibrium of a graphical game on a graph with small pathwidth.\nWe prove the following theorem.\nTHEOREM 4.\nConsider the problem of finding a Nash equilibrium for a graphical game in which the underlying graph has maximum degree 3 and pathwidth k.\nThere is a constant k such that this problem is PPAD-complete.\nTheorem 4 limits the extent to which we can exploit \"path-like\" properties of the underlying graph, in order to find Nash equilibria.\nTo prove Theorem 4, we use recent PPAD-completeness results for games, in particular the papers [7, 4] which show that the problem of finding Nash equilibria in graphical games of degree d (for d \u2265 3) is computationally equivalent to the problem of solving r-player normal-form games (for r \u2265 4), both of which are PPAD-complete.\n8.\nOPEN PROBLEMS\nThe most important problem left open by this paper is whether it is possible to find a Nash equilibrium of a graphical game on a bounded-degree tree in polynomial time.\nOur construction shows that any two-pass algorithm that explicitly stores breakpoint policies needs exponential time and space.\nHowever, it does not preclude the existence of an algorithm that is based on a similar idea, but, instead of computing the entire breakpoint policy for each vertex, uses a small number of additional passes through the graph to decide which (polynomial-sized) parts of each breakpoint policy should be computed.\nIn
particular, such an algorithm may be based on the approximation algorithm of [8], where the value of \u03b5 is chosen adaptively.\nAnother intriguing question is related to the fact that the graph for which we constructed an exponential-sized breakpoint policy has pathwidth 2, while our positive results are for a path, i.e., a graph of pathwidth 1.\nIt is not clear if for any bounded-degree graph of pathwidth 1 the running time of (the breakpoint policy-based version of) our algorithm will be polynomial.\nIn particular, it is instructive to consider a \"caterpillar\" graph, i.e., the graph that can be obtained from Tn by deleting the vertices S1,..., Sn.\nThis implies that the problem of bounding the size of the best response policy (or, alternatively, the breakpoint policy) can be viewed as a generalization of the following computational geometry problem, which we believe may be of independent interest: PROBLEM 1.\nIf yes, can it be the case that in this set, there is no path with a polynomial number of turns that connects the endpoints of the original segment?\nThis implies that even for a caterpillar, the best response policy can be exponentially large.\nHowever, in our example (which is omitted from this version of the paper due to space constraints), there exists a polynomial-size path through the best response policy, i.e., it does not prove that the breakpoint policy is necessarily exponential in size.\nIf one can prove that this is always the case, it may be possible to adapt this proof to show that there can be an exponential gap between the sizes of best response policies and breakpoint policies.","lvl-2":"Nash Equilibria in Graphical Games on Trees Revisited *\nGraphical games have been proposed as a game-theoretic model of large-scale distributed networks of non-cooperative agents.\nWhen the number of players is large, and the underlying graph has low degree, they provide a concise way to represent the players' payoffs.\nIt has recently been shown that the
problem of finding Nash equilibria in a general degree-3 graphical game with two actions per player is complete for the complexity class PPAD, indicating that it is unlikely that there is any polynomial-time algorithm for this problem.\nIn this paper, we study the complexity of graphical games with two actions per player on bounded-degree trees.\nThis setting was first considered by Kearns, Littman and Singh, who proposed a dynamic programming-based algorithm that computes all Nash equilibria of such games.\nThe running time of their algorithm is exponential, though approximate equilibria can be computed efficiently.\nLater, Littman, Kearns and Singh proposed a modification to this algorithm that can find a single Nash equilibrium in polynomial time.\nWe show that this modified algorithm is incorrect--the output is not always a Nash equilibrium.\nWe then propose a new algorithm that is based on the ideas of Kearns et al. and computes all Nash equilibria in quadratic time if the input graph is a path, and in polynomial time if it is an arbitrary graph of maximum degree 2.\nMoreover, our algorithm can be used to compute Nash equilibria of graphical games on arbitrary trees, but the running time can be exponential, even when the tree has bounded degree.\nWe show that this is inevitable--any algorithm of this type will take exponential time, even on bounded-degree trees with pathwidth 2.\nIt is an open question whether our algorithm runs in polynomial time on graphs with pathwidth 1, but we show that finding a Nash equilibrium for a 2-action graphical game in which the underlying graph has maximum degree 3 and constant pathwidth is PPAD-complete (so is unlikely to be tractable).\n* This research is supported by the EPSRC research grants \"Algorithmics of Network-sharing Games\" and \"Discontinuous Behaviour in the Complexity of randomized Algorithms\".\n1.\nINTRODUCTION\nGraphical games were introduced in the papers of Kearns et al. [8] and Littman et al. 
[9] as a succinct representation of games with a large number of players.\nThe classical normal form (or matrix form) representation has a size that is exponential in the number of players, making it unsuitable for large-scale distributed games.\nA graphical game associates each player with a vertex of an underlying graph G, and the payoff to that player is a function of the actions chosen by himself and his neighbours in G; if G has low degree, this is a concise way to represent a game with many players.\nThe papers [8, 9] give a dynamic-programming algorithm for finding Nash equilibria in graphical games where there are two actions per player and G is a tree.\nThe first of these papers describes a generic algorithm for this problem that can be specialized in two ways: as an algorithm that computes approximations to all Nash equilibria in time polynomial in the input size and the approximation quality, or as an exponential-time algorithm that allows the exact computation of all Nash equilibria in G.\nIn [9], the authors propose a modification to the latter algorithm that aims to find a single Nash equilibrium in polynomial time.\nThis does not quite work, as we show in Section 3, though it introduces a useful idea.\n1.1 Background\nThe generic algorithm of [8] consists of two phases which we will refer to as the upstream pass and the downstream pass; the former starts at the leaves of the tree and ends at the root, while the latter starts at the root and ends at the leaves.\n(Note that the terminology \"upstream\" and \"downstream\" is reversed in [8, 9]--our trees are rooted at the top.)\nIt is assumed that each player has two pure strategies (actions), which are denoted by 0 and 1; it follows that any mixed strategy can be represented as a single number x \u2208 [0, 1], where x is the probability that the player selects 1.\nDuring the upstream pass, each vertex V computes the set of its potential best responses to every mixed strategy w of its parent W; a strategy v is a potential best response to w if
there is a Nash equilibrium in the graphical game downstream of V (inclusive) given that W plays w (for a more technical definition, the reader is referred to Section 2).\nThe output of this stage can be viewed as a (continuous) table T(w, v), where T(w, v) = 1 if and only if v is a potential best response to w; we refer to this table as the best response policy for V.\nThe generic algorithm does not address the problem of representing the best response policy; in fact, the most important difference between the two instantiations of the generic algorithm described in [8] is in their approach to this issue.\nThe computation is performed inductively: the best response policy for V is computed based on the best response policies of V's children U1,..., Uk.\nBy the end of the upstream pass, all children of the root have computed their best response policies.\nIn the beginning of the downstream pass, the root selects its strategy and informs its children about its choice.\nIt also selects a strategy for each child.\nA necessary and sufficient condition for the algorithm to proceed is that the strategy of the root is a best response to the strategies of its children and, for each child, the chosen strategy is one of the pre-computed potential best responses to the chosen strategy of the root.\nThe equilibrium then propagates downstream, with each vertex selecting its children's actions.\nThe action of the child is chosen to be any strategy from the pre-computed potential best responses to the chosen strategy of the parent.\nTo bound the running time of this algorithm, the paper [8] shows that any best response policy can be represented as a union of an exponential number of rectangles; the polynomial-time approximation algorithm is obtained by combining this representation with a polynomial-sized grid.\nThe main idea of [9] is that it is not necessary to keep track of all rectangles in the best response policies; rather, at
each step of the upstream pass, it is possible to select a polynomial-size subset of the corresponding policy (in [9], this subset is called a breakpoint policy), and still ensure that the downstream pass can proceed successfully (a sufficient condition for this is that the subset of the best response policy for V stored by the algorithm contains a continuous path from w = 0 to w = 1).\n1.2 Our Results\nOne of the main contributions of our paper is to show that the algorithm proposed by [9] is incorrect.\nIn Section 3 we describe a simple example for which the algorithm of [9] outputs a vector of strategies that does not constitute a Nash equilibrium of the underlying game.\nIn Sections 4, 5 and 6 we show how to fix the algorithm of [9] so that it always produces correct output.\nSection 4 considers the case in which the underlying graph is a path of length n. For this case, we show that the number of rectangles in each of the best response policies is O(n^2).\nThis gives us an O(n^3) algorithm for finding a Nash equilibrium, and for computing a representation of all Nash equilibria.\n(This algorithm is a special case of the generic algorithm of [8]--we show that it runs in polynomial time when the underlying graph is a path.)\nWe can improve the running time of the generic algorithm using the ideas of [9].\nIn particular, we give an O(n^2) algorithm for finding a Nash equilibrium of a graphical game on a path of length n.
Instead of storing best response policies, this algorithm stores appropriately-defined subsets, which, following [9], we call breakpoint policies (modifying the definition as necessary).\nWe obtain the following theorem.\nTHEOREM 1.\nThere is an O (n2) algorithm that finds a Nash equilibrium of a graphical game with two actions per player on an n-vertex path.\nThere is an O (n3) algorithm that computes a representation of all Nash equilibria of such a game.\nIn Section 5 we extend the results of Section 4 to general degree-2 graphs, obtaining the following theorem.\nTHEOREM 2.\nThere is a polynomial-time algorithm that finds a Nash equilibrium of a graphical game with two actions per player on a graph with maximum degree 2.\nIn Section 6 we extend our algorithm so that it can be used to find a Nash equilibrium of a graphical game on an arbitrary tree.\nEven when the tree has bounded degree, the running time can be exponential.\nWe show that this is inevitable by constructing a family of graphical games on bounded-degree trees for which best response policies of some of the vertices have exponential size, and any two-pass algorithm (i.e., an algorithm that is similar in spirit to that of [8]) has to store almost all points of the best response policies.\nIn particular, we show the following.\nTHEOREM 3.\nThere is an infinite family of graphical games on bounded-degree trees with pathwidth 2 such that any two-pass algorithm for finding Nash equilibria on these trees requires exponential time and space.\nIt is interesting to note that the trees used in the proof of Theorem 3 have pathwidth 2, that is, they are very close to being paths.\nIt is an open question whether our algorithm runs in polynomial time for graphs of pathwidth 1.\nThis question can be viewed as a generalization of a very natural computational geometry problem--we describe it in more detail in Section 8.\nIn Section 7, we give a complexity-theoretic intractability result for the problem of finding a Nash 
equilibrium of a graphical game on a graph with small pathwidth.\nWe prove the following theorem.\nTHEOREM 4.\nConsider the problem of finding a Nash equilibrium for a graphical game in which the underlying graph has maximum degree 3 and pathwidth k.\nThere is a constant k such that this problem is PPAD-complete.\nTheorem 4 limits the extent to which we can exploit \"path-like\" properties of the underlying graph, in order to find Nash equilibria.\nTo prove Theorem 4, we use recent PPAD-completeness results for games, in particular the papers [7, 4] which show that the problem of finding Nash equilibria in graphical games of degree d (for d \u2265 3) is computationally equivalent to the problem of solving r-player normal-form games (for r \u2265 4), both of which are PPAD-complete.\n2.\nPRELIMINARIES AND NOTATION\nWe consider graphical games in which the underlying graph G is an n-vertex tree.\nEach vertex has two actions, which are denoted by 0 and 1.\nA mixed strategy is given by a single number x \u2208 [0, 1], which denotes the probability that the player selects action 1.\nFor the purposes of the algorithm, the tree is rooted arbitrarily.\nFor convenience, we assume without loss of generality that the root has a single child, and that its payoff is independent of the action chosen by the child.\nThis can be achieved by first choosing an arbitrary root of the tree, and then adding a dummy \"parent\" of this root, giving the new parent a constant payoff function.\nGiven an edge (V, W) of the tree G, and a mixed strategy w for W, let G (V, W), W = w be the instance obtained from G by (1) deleting all nodes Z which are separated from V by W (i.e., all nodes Z such that the path from Z to V passes through W), and (2) restricting the instance so that W is required to play mixed strategy w. 
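The restriction G(V, W), W = w keeps exactly the vertices whose path to V avoids W (together with W itself, whose strategy is pinned). A minimal sketch of this vertex selection -- the function name `restricted_instance` is ours, not the paper's:

```python
from collections import deque

def restricted_instance(adj, V, W):
    """Vertex set of the instance G(V,W), W=w: delete every node Z that is
    separated from V by W, i.e. keep the nodes whose path to V avoids W.
    W itself stays in the instance (its mixed strategy is fixed; that part
    is not modelled here).  adj maps each vertex to its neighbours."""
    keep, queue = {V}, deque([V])
    while queue:
        z = queue.popleft()
        for nb in adj[z]:
            if nb != W and nb not in keep:
                keep.add(nb)
                queue.append(nb)
    keep.add(W)          # W remains, with its strategy pinned to w
    return keep
```

For a path A-B-C-D with an extra leaf E on B, restricting at the edge (C, B) keeps {B, C, D}: A and E are separated from C by B.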
Definition 1.\nSuppose that (V, W) is an edge of the tree, that v is a mixed strategy for V and that w is a mixed strategy for W.\nWe say that v is a potential best response to w (denoted by v \u2208 pbrV (w)) if there is an equilibrium in the instance G (V, W), W = w in which V has mixed strategy v.\nWe define the best response policy for V, given W, as B (W, V) = {(w, v) | v \u2208 pbrV (w), w \u2208 [0, 1]}.\nTypically, W is the parent of V, and this is just referred to as \"the best response policy for V\".\nThe expression B (W, V) | V = v is used to denote the set B (W, V) \u2229 [0, 1] \u00d7 {v}.\nThe upstream pass of the generic algorithm of [8] computes the best response policy for V for every node V other than the root.\nWith the above assumptions about the root, the downstream pass is straightforward: Let W denote the root and V denote its child.\nThe root selects any pair (w, v) from B (W, V).\nIt decides to play mixed strategy w and it instructs V to play mixed strategy v.\nThe remainder of the downward pass is recursive.\nWhen a node V is instructed by its parent to adopt mixed strategy v, it does the following for each child U--It finds a pair (v, u) \u2208 B (V, U) (with the same v value that it was given by its parent) and instructs U to play u.\n3.\nALGORITHM OF LITTMAN ET AL. 
.\nThe algorithm of [9] is based on the following observation: to compute a single Nash equilibrium by a two-pass algorithm, it is not necessary to construct the entire best response policy for each vertex.\nAs long as, at each step of the downstream pass, the vertex under consideration can select a vector of strategies for all its children so that each child's strategy is a potential best response to the parent's strategy, the algorithm succeeds in producing a Nash equilibrium.\nThis can be achieved if, at the beginning of the downstream pass, we have a data structure in which each vertex V with parent W stores a set \u02c6B (W, V) \u2286 B (W, V) (called a breakpoint policy) which covers every possible w \u2208 [0, 1].\nWe will show later that a sufficient condition for the construction of such a data structure is the invariant that, at every level of the upstream pass, \u02c6B (W, V) contains a continuous path from w = 0 to w = 1.\nIn [9], it is suggested that we can select the breakpoint policy in a particular way.\nNamely, the paper uses the following definition: Definition 2.\n(cf. 
[9]) A breakpoint policy for a node V with parent W consists of an ordered set of W-breakpoints w0 = 0 0, set V = Vj and let U = Vj \u2212 1 and W = Vj +1 be the vertices that precede and follow V, respectively.\nThe payoffs to V are described by a 2 \u00d7 2 \u00d7 2 matrix P: Pxyz is the payoff that V receives when U plays x, V plays y, and W plays z, where x, y, z \u2208 {0, 1}.\nSuppose that U plays 1 with probability u and W plays 1 with probability w.\nThen V's expected payoff from playing 0 is P0 = (1 \u2212 u) (1 \u2212 w) P000 + (1 \u2212 u) wP001 + u (1 \u2212 w) P100 + uwP101, while its expected payoff from playing 1 is P1 = (1 \u2212 u) (1 \u2212 w) P010 + (1 \u2212 u) wP011 + u (1 \u2212 w) P110 + uwP111.\nIf P0> P1, V strictly prefers to play 0, if P0 P1 if and only if A1u + A0 <0.\nIf also A1 = 0, A0 = 0, clearly, B (W, V) = [0, 1] 2, and the statement of the theorem is trivially true.\nOtherwise, the vertex V is indifferent between 0 and 1 if and only if A1 = ~ 0 and u = \u2212 A0\/A1.\nLet V = {v | v \u2208 (0, 1), \u2212 A0\/A1 \u2208 pbrU (v)}.\nBy the inductive hypothesis, V consists of at most 2 (j \u2212 1) + 4 segments and isolated points.\nFor any v \u2208 V, we have B (W, V) | V = v = [0, 1]: no matter what W plays, as long as U is playing \u2212 A0\/A1, V is content to play v. 
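The expected payoffs P0 and P1 used throughout this case analysis follow directly from the 2 x 2 x 2 matrix P; a minimal Python sketch (the dict representation of P and the function name are our assumptions):

```python
def expected_payoffs(P, u, w):
    """Mirrors the formulas in the text: P[(x, y, z)] is V's payoff when
    U plays x, V plays y, W plays z; u and w are the probabilities that
    U and W play action 1.  Returns (P0, P1)."""
    def expect(y):
        # P_y = sum over pure neighbour profiles, weighted by u and w
        return sum(P[(x, y, z)] * (u if x else 1 - u) * (w if z else 1 - w)
                   for x in (0, 1) for z in (0, 1))
    return expect(0), expect(1)
```

For instance, if V is paid 1 exactly when it matches U's action, the formulas collapse to P0 = 1 - u and P1 = u, independent of w.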
On the other hand, for any v \u2208 (0, 1) \\ V we have B (W, V) | V = v = \u2205: when V plays v, U can only respond with u = ~ \u2212 A0\/A1, in which case V can benefit from switching to one of the pure strategies.\nTo complete the description of B (W, V), it remains to analyze the cases v = 0 and v = 1.\nThe vertex V prefers to play 0 if A1> 0 and u \u2264 \u2212 A0\/A1, or A1 <0 and u \u2265 \u2212 A0\/A1, or\nA1 = 0 and A0 <0.\nAssume for now that A1> 0; the other two cases can be treated similarly.\nIn this case 0 \u2208 pbrV (w) for some w \u2208 [0, 1] if and only if there exists a u \u2208 pbrU (0) such that u \u2264 \u2212 A0\/A1: if no such u exists, whenever V plays 0 either U's response is not in pbrU (0) or V can improve its payoff by playing 1.\nTherefore, either B (W, V) | V = 0 = [0, 1] or B (W, V) | V = 0 = \u2205.\nSimilarly, B (W, V) | V = 1 is equal to either [0, 1] or \u2205, depending on pbrU (1).\nTherefore, the set B (W, V) consists of at most 2j + 4 \u2264 (j + 4) 2 rectangles: B (W, V) \u2229 [0, 1] \u00d7 (0, 1) = [0, 1] \u00d7 V contributes at most 2j + 2 rectangles, and each of the sets B (W, V) | V = 0 and B (W, V) | V = 1 contributes at most one rectangle.\nSimilarly, its total number of event points is at most 2j + 4: the only W-event points are 0 and 1, each V - event point of B (W, V) is a V-event point of B (V, U), and there are at most 2j + 2 of them.\n\u2022 B1u + B0 \u2261 ~ 0, A1 = \u03b1B1, A0 = \u03b1B0 for some \u03b1 \u2208 R.\nIn this case, V is indifferent between 0 and 1 if and only if w = \u03b1, or B1 = ~ 0 and u = \u2212 B0\/B1 = \u2212 A0\/A1.\nSimilarly to the previous case, we can show that B (W, V) \u2229 [0, 1] \u00d7 (0, 1) consists of the rectangle {\u03b1} \u00d7 [0, 1] and at most 2j + 2 rectangles of the form [0, 1] \u00d7 IV, where each IV corresponds to a connected component of B (V, U) | U = - B0 \/ B1.\nFurthermore, V prefers to play 0 if B1u + B0> 0 and w \u2265 \u03b1 or B1u + B0 <0 and w \u2264 
\u03b1.\nTherefore, if B1u * + B0> 0 for some u * \u2208 pbrU (0), then B (W, V) | V = 0 contains [\u03b1, + \u221e) \u2229 [0, 1] and if B1u ** + B0 <0 for some u ** \u2208 pbrU (0), then B (W, V) | V = 0 contains [\u2212 \u221e, \u03b1] \u2229 [0, 1]; if both u * and u ** exist, B (W, V) | V = 0 = [0, 1].\nThe set B (W, V) | V = 1 can be described in a similar manner.\nBy the inductive hypothesis, B (V, U) has at most 2j + 2 event points; as at least two of these are U-event points, it has at most 2j V - event points.\nSince each V - event point of B (W, V) is a V event point of B (V, U) and B (W, V) has at most 3 W-event points (0, 1, and \u03b1), its total number of event points is at most 2j + 3 <2j + 4.\nAlso, similarly to the previous case it follows that B (W, V) consists of at most 2j + 4 <(j + 4) 2 rectangles.\n\u2022 B1u + B0 \u2261 ~ 0, \u03b1 (B1u + B0) \u2261 ~ A1u + A0.\nIn this case, one can define the indifference function f (\u00b7) as\ninto zero simultaneously.\nObserve that whenever w = f (u) and u, w \u2208 [0, 1], V is indifferent between playing 0 and 1.\nFor any A \u2286 [0, 1] 2, we define a function \u02c6fV by \u02c6fV (A) = {(f (u), v) | (v, u) \u2208 A}; note that \u02c6fV maps subsets of [0, 1] 2 to subsets of R \u00d7 [0, 1].\nSometimes we drop the subscript V when it is clear from the context.\nPROOF.\nFix an arbitrary v \u2208 (0, 1).\nSuppose that U plays some u \u2208 pbrU (v), w = f (u) satisfies w \u2208 [0, 1], and W plays w.\nThere exists a vector of strategies v1,..., vj-1 = u, vj = v such that for each Vk, k 0, for u = u0 the inequality P0 \u2265 P1 is equivalent to w \u2265 f (u0).\nTherefore, when U plays u0 and W plays w, w \u2265 f (u0), V prefers to play 0; as u0 \u2208 pbrU (u), it follows that 0 \u2208 pbrV (w).\nThe argument for the case B (u0) <0 is similar.\nConversely, if 0 \u2208 pbrV (w) for some w \u2208 [0, 1], there exists a vector (v1,..., vj-1, vj = 0, vj +1 = w) such that for each Vk, k \u2264 j, Vk plays 
vk, and this strategy is a best response to the strategies of Vk's neighbours.\nNote that for any such vector we have vj-1 \u2208 pbrU (0).\nBy way of contradiction, assume (w, 0) \u2208 ~ UuEpbrU (0) Ru.\nThen it must be the case that for any u0 \u2208 pbrU (0) either f (u0) w and Ru0 = [f (u0), + \u221e) \u00d7 {0}.\nIn both cases, when V plays 0, U plays u0, and V plays w, the inequality between f (u0) and w is equivalent to P0 f (u2).\nSince f (\u00b7) is continuous on [u1, u *) and (u *, u2], it is easy to see that\n2The case B1 = 0 causes no special problems.\nFor completeness, set u * to be any value outside of [0, 1] in this case.\nFigure 4: f is increasing on (\u2212 \u221e, u *) and (u *, + \u221e).\nand \u02c6f ([v1, v2] \u00d7 (u *, u2]) = (\u2212 \u221e, f (u2)] \u00d7 [v1, v2], i.e., in this case \u02c6f (R) \u2229 [0, 1] 2 consists of at most two rectangles.\nThe case limu -.\nu \u2217 \u2212 f (u) = \u2212 \u221e is similar.\nAs f\u02c6 (B (V, U)) = URCS (V, U) \u02c6f (R), it follows that \u02c6f (B (V, U)) consists of at most (j + 3) 2 + 2j + 2 rectangles.\nAlso, it is easy to see that both f\u02c6 (0) and f\u02c6 (1) consist of at most 2 line segments each.\nWe conclude that B (W, V) can be represented as a union of at most (j + 3) 2 + 2j + 6 <(j + 4) 2 rectangles.\nMoreover, if v is a V - event point of B (W, V), then v is a V event point of B (V, U) (this includes the cases v = 0 and v = 1, as 0 and 1 are V - event points of B (V, U)) and if w is a W-event point of B (W, V), then either w = 0 or w = 1 or there exists some u \u2208 [0, 1] such that w = f (u) and u is a U-event point of B (V, U).\nHence, B (W, V) has at most 2j + 4 event points.\nThe O (j2) bound on the running time in Theorem 5 follows from our description of the algorithm.\nThe O (n3) bound on the overall running time for finding a Nash equilibrium (and a representation of all Nash equilibria) follows.\n4.1 Finding a Single Nash Equilibrium in O (n2) Time\nThe upper bound on 
the running time of our algorithm is tight, at least assuming the straightforward implementation, in which each B (Vj +1, Vj) is stored as a union of rectangles: it is not hard to construct an example in which the size of B (Vj +1, Vj) is \u03a9 (j2).\nHowever, in some cases it is not necessary to represent all Nash equilibria; rather, the goal is to find an arbitrary equilibrium of the game.\nIn this section, we show that this problem can be solved in quadratic time, thus obtaining a proof of Theorem 1.\nOur solution is based on the idea of [9], i.e., working with subsets of the best response policies rather than the best response policies themselves; following [9], we will refer to such subsets as breakpoint policies.\nWhile it is not always possible to construct a breakpoint policy as defined in [9], we show how to modify this definition so as to ensure that a breakpoint policy always exists; moreover, we prove that for a path graph, the breakpoint policy of any vertex can be stored in a data structure whose size is linear in the number of descendants this vertex has.\nDefinition 3.\nA breakpoint policy \u02c6B (V, U) for a vertex U whose parent is V is a non-self-intersecting curve of the form\nwhere Xi = [vi\u22121, vi] \u00d7 {ui}, Yi = {vi} \u00d7 [ui, ui +1] and ui, vi \u2208 [0, 1] for i = 0,..., m.\nWe say that a breakpoint policy is valid if v0 = 0, vm = 1, and \u02c6B (V, U) \u2286 B (V, U).\nWe will sometimes abuse notation by referring to \u02c6B (V, U) as a collection of segments Xi, Yi rather than their union.\nNote that we do not require that vi \u2264 vi +1 or ui \u2264 ui +1; consequently, in any argument involving breakpoint policies, all segments are to be treated as directed segments.\nObserve that any valid breakpoint policy \u02c6B (V, U) can be viewed as a continuous 1--1 mapping \u03b3 (t) =\nAs explained in Section 3, we can use a valid breakpoint policy instead of the best response policy during the downstream pass, and still guarantee that in 
the end, we will output a Nash equilibrium.\nTheorem 6 shows that one can inductively compute valid breakpoint policies for all vertices on the path; the proof of this theorem can be found in the full version of this paper [6].\nTHEOREM 6.\nFor any V = Vj, one canfind in polynomial time a valid breakpoint policy \u02c6B (W, V) that consists of at most 2j + 1 segments.\n5.\nNASH EQUILIBRIA ON GRAPHS WITH MAXIMUM DEGREE 2\nIn this section we show how the algorithm for paths can be applied to solve a game on any graph whose vertices have degree at most 2.\nA graph having maximum degree 2 is, of course, a union of paths and cycles.\nSince each connected component can be handled independently, to obtain a proof of Theorem 2, we only need to show how to deal with cycles.\nGiven a cycle with vertices V1,..., Vk (in cyclic order), we make two separate searches for a Nash equilibrium: first we search for a Nash equilibrium where some vertex plays a pure strategy, then we search for a fully mixed Nash equilibrium, where all vertices play mixed strategies.\nFor i \u2264 k let vi denote the probability that Vi plays 1.\nThe first search can be done as follows.\nFor each i \u2208 {1,..., k} and each b \u2208 {0, 1}, do the following.\n1.\nLet P be the path (Vi +1, Vi +2..., Vk, V1,..., Vi_1, Vi) 2.\nLet payoff to Vi +1 be based on putting vi = b (so it depends only on vi +1 and vi +2.)\n3.\nApply the upstream pass to P 4.\nPut vi = b; apply the downstream pass For each vertex, Vj, keep track of all possible mixed strategies vj 5.\nCheck whether Vi +1 has any responses that are consistent with vi = b; if so we have a Nash equilibrium.\n(Otherwise, there is no Nash equilibrium of the desired form.)\nFor the second search, note that if Vi plays a mixed strategy, then vi +1 and vi_1 satisfy an equation of the form vi +1 = (A0 + A1vi_1) \/ (B0 + B1vi_1).\nSince all vertices in the cycle play mixed strategies, we have vi +3 = (A' 0 + A' 1vi +1) \/ (B' 0 + B' 1vi +1).\nComposing the 
two linear fractional transforms, we obtain vi +3 = (A' ' 0 + A' ' 1 vi_1) \/ (B' ' 0 + B' ' 1 vi_1).\nfor some new constants A' ' 0, A' ' 1, B' ' 0, B' ' 1.\nChoose any vertex Vi.\nWe can express vi in terms of vi +2, then vi +4, vi +6 etc. and ultimately vi itself to obtain a quadratic equation (for vi) that is simple to derive from the payoffs in the game.\nIf the equation is non-trivial it has at most 2 solutions in (0, 1).\nFor an odd-length cycle all other vj's are derivable from those solutions, and if a fully mixed Nash equilibrium exists, all the vj should turn out to be real numbers in the range (0, 1).\nFor an even-length cycle, we obtain two quadratic equations, one for vi and another for\nvi +1, and we can in the same way test whether any solutions to these yield values for the other vj, all of which lie in (0, 1).\nIf the quadratic equation is trivial, there is potentially a continuum of fully-mixed equilibria.\nThe values for vi that may occur in a Nash equilibrium are those for which all dependent vj values lie in (0, 1); the latter condition is easy to check by computing the image of the interval (0, 1) under respective fractional linear transforms.\n6.\nFINDING EQUILIBRIA ON AN (ARBITRARY) TREE\nFor arbitrary trees, the general structure of the algorithm remains the same, i.e., one can construct a best response policy (or, alternatively, a breakpoint policy) for any vertex based on the best response policies of its children.\nWe assume that the degree of each vertex is bounded by a constant K, i.e., the payoff matrix for each vertex is of size O (2K).\nConsider a vertex V whose children are U1,..., Uk and whose parent is W; the best response policy of each Uj is B (V, Uj).\nSimilarly to the previous section, we can compute V's expected payoffs P0 and P1 from playing 0 or 1, respectively.\nNamely, when each of the Uj plays uj and W plays w, we have P0 = L0 (u1,..., uk, w), P1 = L1 (u1,..., uk, w), where the functions L0 (\u00b7,..., \u00b7), L1 
(\u00b7,..., \u00b7) are linear in all of their arguments.\nHence, the inequality P0> P1 can be rewritten as\nwhere both A (\u00b7,..., \u00b7) and B (\u00b7,..., \u00b7) are linear in all of their arguments.\nSet u = (u1,..., uk) and define the indifference function f: [0, 1] k \u2192 [0, 1] as f (u) = A (u) \/ B (u); clearly, if each Uj plays uj, W plays w and w = f (u), V is indifferent between playing 0 and 1.\nFor any X = X1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Xk, where Xi \u2286 [0, 1] 2 define\nand \u02c6f (1) = {(w, 1) | \u2203 u s.t. ui \u2208 pbrUi (1) and wB (u) \u2264 A (u)}.\nAs in the previous section, we can show that B (W, V) is equal to\nalso, any path from w = 0 to w = 1 that is a subset of B (W, V) constitutes a valid breakpoint policy.\n6.1 Exponential Size Breakpoint Policy\nWhile the algorithm of Section 4 can be generalized for bounded-degree trees, its running time is no longer polynomial.\nIn fact, the converse is true: we can construct a family of trees and payoff matrices for all players so that the best response policies for some of the players consist of an exponential number of segments.\nMoreover, in our example the breakpoint policies coincide with the best response policies, which means that even finding a single Nash equilibrium using the approach of [8, 9] is going to take an exponentially long time.\nIn fact, a stronger statement is true: for any polynomial-time two-pass algorithm (defined later) that works with subsets of best response policies for this graph, we can choose the payoffs of the vertices so that the downstream pass of this algorithm will fail.\nFigure 5: The tree Tn that corresponds to an exponential-size breakpoint policy.\nIn the rest of this subsection, we describe this construction.\nConsider the tree Tn given by Figure 5; let Vn 
be the root of this tree.\nFor every k = 1,..., n, let the payoffs of Sk and Tk be the same as those for the U and V described in Section 3; recall that the breakpoint policies for U and V are shown in Figure 2.\nIt is not hard to see that the indifference function for Tk is given by f (s) = .8 s +.1.\nThe payoff of V0 is 1 if V1 selects the same action as V0 and 0 otherwise; V0's best response policy is given by Figure 6.\nLEMMA 3.\nFix k w.\nThen B (Vk +1, Vk) consists of at least 3k segments.\nMoreover, {(v, w) | (v, w) \u2208 B (Vk +1, Vk), 0 \u2264 w \u2264 .2} = [0, .2] \u00d7 {0} and {(v, w) | (v, w) \u2208 B (Vk +1, Vk), .8 \u2264 w \u2264 1} = [.8, 1] \u00d7 {1}.\nthese sets correspond to horizontal segments of B (Vk, Tk).\nIt is easy to see that S1, S2, S3 \u2282 B (Vk +1, Vk).\nSince f is a continuous function, the number of segments in each Si is at least the number of segments in B (Vk, Vk-1) \u2229 [.1, .9] \u00d7 [0, 1], which is at least 3k-1 by induction hypothesis.\nMoreover, as f is monotone in u and f (1, 0) w. 
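As a numeric illustration (not part of the proof of Lemma 3), the indifference function f(s) = .8s + .1 used for each Tk maps [0, 1] into [.1, .9] and contracts toward its fixed point 1/2, so the images of [0, 1] under repeated application form a nested family of shrinking intervals; this nesting is what lets each level of the construction multiply the number of segments while everything stays inside the unit square:

```python
def f(s):
    # Indifference function of T_k from the construction: f(s) = .8 s + .1
    return 0.8 * s + 0.1

# Image of [0, 1] after k applications of f: a nested interval of width .8^k
lo, hi = 0.0, 1.0
widths = []
for _ in range(5):
    lo, hi = f(lo), f(hi)
    widths.append(hi - lo)
```

After the first application the image is [.1, .9] (matching the horizontal segments of B(Vk+1, Vk) described above); each further level shrinks the usable range by a factor of .8 around the fixed point .5.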
To this end, we prove a more general statement, namely, that the indifference function of a vertex can be an arbitrary fractional multilinear function of its descendants' strategies.\nWe say that a function of k variables is multilinear if it can be represented as a sum of monomials and each of these monomials is linear in all of its variables.\nNote that this definition is different from a more standard one in that we do not require that all of the monomials have the same degree.\nRecall that the payoffs of a vertex with k + 1 neighbours are described by matrices P0 and P1, where Pj i0i1...ik is the payoff that V gets when it plays j, and its neighbours play i0,..., ik, and j, i0,..., ik \u2208 {0, 1}.\nLet P [j] = P [j] (w, u1,..., uk) be the expected payoff obtained by this vertex when it plays j and the (mixed) strategies of its neighbours are given by a vector (w, u1,..., uk), i.e., P [j] = E [Pji0i1...ik] where i0,..., ik are independent Bernoulli random variables, each of which is 1 with the respective probabilities w, u1,..., uk.\nLEMMA 4.\nGiven a tree vertex V whose parent is Wand whose children are U1,..., Uk, for any function f = f (u1,..., uk) that can be represented as a ratio of two multilinear functions f1, f2,\nThe proof of this lemma is based on the fact that every monomial of the form as (u0) s0...(uk) sk, s1,..., sk \u2208 {0, 1}, can be represented as\nfor some Ct, t \u2208 {0, 1} k +1.\nThe details can be found in the full version of this paper [6].\n6.2 Irreducibility of the Best Response Policy for Tn\nWhile the best response policy constructed in the previous subsection has exponential size, it is not clear a ` priori that it is necessary to keep track of all of its line segments rather than to focus on a small subset of these segments.\nHowever, it turns out that for two-pass algorithms such as the algorithm of [8], the best response policy cannot be simplified.\nMore precisely, we say that an algorithm A is a two-pass algorithm if\nFigure 
7: Breakpoint policy for V2.\n\u2022 A consists of an upstream pass and a downstream pass.\n\u2022 During the upstream pass, for each vertex V with parent W,\nA constructs a set BB (W, V) \u2286 B (W, V).\nThis set is produced from the sets {BB (V, U) | U is a child of V} by applying the procedure from the beginning of Section 6 (substituting BB (V, Uj) for B (V, Uj) for all children Uj of V), and then possibly omitting some of the points of the resulting set (which is then stored explicitly).\n\u2022 The downstream pass is identical to the downstream pass of [8] as described in Section 2 except that it operates on the sets BB (W, V) rather than on the sets B (W, V).\nTheorem 7 demonstrates that any two-pass algorithm will fail during the downstream pass on Tn if there is an index j such that the set BB (Vj +1, Vj) omits any interior point of any of the (at least 3j) segments of B (Vj +1, Vj).\nThis implies Theorem 3.\nTHEOREM 7.\nFor any two-pass algorithm A for which there exists an index j, j \u2208 [1, n\/4], a segment S of B (Vj, Vj_1), and an interior point (x, y) of S such that BB (Vj, Vj_1) does not contain (x, y), we can choose payoff matrices of the vertices Vj,..., Vn so that the downstream pass of A will fail, and, additionally, payoffs to V4j,..., Vn are identically 0.\nWe sketch the proof of Theorem 7; the details can be found in the full version of this paper [6].\nWe proceed by induction.\nFor j = 1, the argument is similar to that in Section 3.\nFor the inductive step, the main idea is that we can \"zoom in\" on any part of a best response policy (including the part that was omitted!)\nby using an appropriate indifference function; this allows us to reduce the case j = j0 to j = j0 \u2212 1.\n7.\nPPAD-COMPLETENESS OF BOUNDED PATHWIDTH GRAPHICAL GAMES\nIn the previous section, we showed that for graphical games on trees that are almost but not quite paths, two-pass algorithms fail to find Nash equilibria in polynomial time.\nWe next show that a 
milder path-like graph property allows us to construct graphical games for which it is unlikely that any polynomial-time algorithm will find Nash equilibria.\n7.1 Pathwidth\nA path decomposition of a graph G = (V, E) is a sequence of subsets Si (V) \u2286 V such that for each edge (v, v') \u2208 E, v, v' \u2208 Si (V) for some i, and furthermore, for each v \u2208 V, if v \u2208 Si (V) and v \u2208 Sj (V) for j> i, then v \u2208 Sk (V) for all i \u2264 k \u2264 j.\nThe path decomposition has width k if all sets Si (V) have cardinality at most k + 1.\nThe pathwidth of G is the minimum width of any path decomposition of G.\nPathwidth is a restriction of treewidth (in which one would seek a tree whose vertices were the sets Si (V), and the sets containing some vertex would have to form a subtree).\nFor any constant k it can be decided in polynomial time whether a graph has pathwidth (or treewidth) k. Furthermore, many graph-theoretic problems can be solved in polynomial time when restricted to graphs of fixed treewidth or pathwidth; see [1] for an overview.\nNote that a path has pathwidth 1 and a cycle has pathwidth 2.\n7.2 PPAD-completeness\nWe review some basic definitions from the computational complexity theory of search problems.\nA search problem associates any input (here, a graphical game) with a set of solutions (here, the Nash equilibria of the input game), where the description length of any solution should be polynomially bounded as a function of the description length of its input.\nIn a total search problem, there is a guarantee that at least one solution exists for any input.\nNash's theorem assures us that the problem of finding Nash equilibria is total.\nA reduction from search problem S to problem S' is a mechanism that shows that any polynomial-time algorithm for S' implies a polynomial-time algorithm for S.\nIt consists of functions f and g, computable in polynomial time, where f maps inputs of S to inputs of S', and g maps solutions of S' to 
solutions of S, in such a way that if Is is an input to S, and Ss is a solution to f (Is), then g (Ss) is a solution to Is.\nObserve that total search problems do not allow the above reductions from problems such as CIRCUIT SAT (where the input is a boolean circuit, and solutions are input vectors that make the output true) due to the fact that CIRCUIT SAT and other NP-complete problems have inputs with empty solution sets.\nInstead, recent work on the computational complexity of finding a Nash equilibrium [7, 4, 5, 2, 3] has related it to the following problem.\nDefinition 4.\nEND OF THE LINE.\nInput: boolean circuits S and P, each having n input and n output bits, where P (0n) = 0n and S (0n) \u2260 0n.\nSolution: x \u2208 {0, 1} n such that S (x) = x, or alternatively x \u2208 {0, 1} n such that P (S (x)) \u2260 x. S and P can be thought of as standing for \"successor\" and \"predecessor\".\nObserve that by computing Si (0n) (for i = 0, 1, 2, ...) and comparing with P (Si +1 (0n)), we must eventually find a solution to END OF THE LINE.\nEND OF THE LINE characterizes the complexity class PPAD (standing for parity argument on a graph, directed version), introduced in Papadimitriou [11], and any search problem S is PPAD-complete if END OF THE LINE reduces to S.\nOther PPAD-complete problems include the search for a ham sandwich hyperplane, and finding market equilibria in an exchange economy (see [11] for more detailed descriptions of these problems).\n3-GRAPHICAL NASH is the problem of finding a Nash equilibrium for a graphical game whose graph has degree 3.\nDaskalakis et al. 
[4] show PPAD-completeness of 3-GRAPHICAL NASH by a reduction from 3-DIMENSIONAL BROUWER, introduced in [4] and defined as follows.\nDefinition 5.\n3-DIMENSIONAL BROUWER.\nInput: a circuit C having 3n input bits and 2 output bits.\nThe input bits define a \"cubelet\" of the unit cube, consisting of the 3 coordinates of its points, given to n bits of precision.\nThe output represents one of four colours assigned by C to a cubelet.\nC is restricted so as to assign colour 1 to cubelets adjacent to the (y, z) - plane, colour 2 to remaining cubelets adjacent to the (x, z) - plane, colour 3 to remaining cubelets on the (x, y) - plane, and colour 0 to all other cubelets on the surface of the unit cube.\nA solution is a panchromatic vertex, a vertex adjacent to cubelets that have 4 distinct colours.\nThe reason why a solution is guaranteed to exist, is that an associated Brouwer function \u03c6 can be constructed, i.e., a continuous function from the unit cube to itself, such that panchromatic vertices correspond to fixpoints of \u03c6.\nBrouwer's Fixpoint Theorem promises the existence of a fixpoint.\nThe proof of Theorem 4 uses a modification of the reduction of [4] from 3-DIMENSIONAL BROUWER to 3-GRAPHICAL NASH.\nTo prove the theorem, we begin with some preliminary results as follows.\nEach player has 2 actions, denoted 0 and 1.\nFor a player at vertex V let P [V] denote the probability that the player plays 1.\nLEMMA 5.\n[7] There exists a graphical game Gshift of fixed size having vertices V, V' where P [V'] is the fractional part of 2P [V].\nCOROLLARY 1.\nThere exists a graphical game Gn-shift of size \u0398 (n) of constant pathwidth, having vertices V, Vn where P [Vn] is the fractional part of 2nP [V].\nPROOF.\nMake a chain of n copies of Gshift in Lemma 5.\nEach subset of vertices in the path decomposition is the vertices in a copy of Gshift. 
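The input/output behaviour of the shift gadget of Lemma 5, and of the chain in Corollary 1, can be checked numerically. The sketch below models only the arithmetic the gadgets compute (the graphical-game internals from [7] are not reproduced), and the helper names are ours:

```python
def g_shift(p):
    """Input/output behaviour of the shift gadget (Lemma 5):
    P[V'] = fractional part of 2 * P[V]."""
    return (2.0 * p) % 1.0

def chain(p, n):
    """Corollary 1: a chain of n shift gadgets computes the fractional
    part of 2**n * P[V]."""
    for _ in range(n):
        p = g_shift(p)
    return p

def bit(p, n):
    """The n-th bit of p's binary expansion (n >= 1): after n-1 shifts,
    the leading bit of the result -- the kind of bit extraction the
    later gadgets rely on."""
    return 1 if chain(p, n - 1) >= 0.5 else 0
```

For p = 0.625 = 0.101 in binary, the chain successively produces 0.25 and 0.5, and the extracted bits are 1, 0, 1.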
Let In (x) denote the n-th bit of the binary expansion of x, where we interpret 1 as true and 0 as false.\nThe following uses gadgets from [7, 4].\nCOROLLARY 2.\nThere exists k such that for all n, and for all n1, n2, n3 0.\nThis implies that even for a caterpillar, the best response policy can be exponentially large.\nHowever, in our example (which is omitted from this version of the paper due to space constraints), there exists a polynomial-size path through the best response policy, i.e., it does not prove that the breakpoint policy is necessarily exponential in size.\nIf one can prove that this is always the case, it may be possible to adapt this proof to show that there can be an exponential gap between the sizes of best response policies and breakpoint policies.","keyphrases":["graphic game","degre","nash equilibrium","ppad-complet","larg-scale distribut network","dynam program-base algorithm","bound-degre tree","gener algorithm","respons polici","downstream pass","breakpoint polici"],"prmu":["P","P","P","P","M","M","M","R","U","U","U"]} {"id":"J-26","title":"Combinatorial Agency","abstract":"Much recent research concerns systems, such as the Internet, whose components are owned and operated by different parties, each with his own selfish goal. The field of Algorithmic Mechanism Design handles the issue of private information held by the different parties in such computational settings. This paper deals with a complementary problem in such settings: handling the hidden actions that are performed by the different parties. Our model is a combinatorial variant of the classical principal-agent problem from economic theory. In our setting a principal must motivate a team of strategic agents to exert costly effort on his behalf, but their actions are hidden from him. Our focus is on cases where complex combinations of the efforts of the agents influence the outcome. 
The principal motivates the agents by offering to them a set of contracts, which together put the agents in an equilibrium point of the induced game. We present formal models for this setting, suggest and embark on an analysis of some basic issues, but leave many questions open.","lvl-1":"Combinatorial Agency [Extended Abstract] ∗ Moshe Babaioff School of Information Management and Systems UC Berkeley Berkeley, CA, 94720 USA moshe@sims.berkeley.edu Michal Feldman School of Engineering and Computer Science The Hebrew University of Jerusalem Jerusalem, 91904 Israel mfeldman@cs.huji.ac.il Noam Nisan School of Engineering and Computer Science The Hebrew University of Jerusalem Jerusalem, 91904 Israel noam@cs.huji.ac.il ABSTRACT Much recent research concerns systems, such as the Internet, whose components are owned and operated by different parties, each with his own selfish goal.\nThe field of Algorithmic Mechanism Design handles the issue of private information held by the different parties in such computational settings.\nThis paper deals with a complementary problem in such settings: handling the hidden actions that are performed by the different parties.\nOur model is a combinatorial variant of the classical principal-agent problem from economic theory.\nIn our setting a principal must motivate a team of strategic agents to exert costly effort on his behalf, but their actions are hidden from him.\nOur focus is on cases where complex combinations of the efforts of the agents influence the outcome.\nThe principal motivates the agents by offering to them a set of contracts, which together put the agents in an equilibrium point of the induced game.\nWe present formal models for this setting, suggest and embark on an analysis of some basic issues, but leave many questions open.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Electronic Commerce]: Payment schemes; C.2.4 [Computer-Communication Networks]: Distributed Systems
General Terms Design, Economics, Theory 1.\nINTRODUCTION 1.1 Background One of the most striking characteristics of modern computer networks - in particular the Internet - is that different parts of it are owned and operated by different individuals, firms, and organizations.\nThe analysis and design of protocols for this environment thus naturally needs to take into account the different selfish economic interests of the different participants.\nIndeed, the last few years have seen much work addressing this issue using game-theoretic notions (see [7] for an influential survey).\nA significant part of the difficulty stems from underlying asymmetries of information: one participant may not know everything that is known or done by another.\nIn particular, the field of algorithmic mechanism design [6] uses appropriate incentives to extract the private information from the participants.\nThis paper deals with the complementary lack of knowledge, that of hidden actions.\nIn many cases the actual behaviors - actions - of the different participants are hidden from others and only influence the final outcome indirectly.\nHidden here covers a wide range of situations including not precisely measurable, costly to determine, or even non-contractible - meaning that it can not be formally used in a legal contract.\nAn example that was discussed in [3] is Quality of Service routing in a network: every intermediate link or router may exert a different amount of effort (priority, bandwidth, ...) 
when attempting to forward a packet of information.\nWhile the final outcome of whether a packet reached its destination is clearly visible, it is rarely feasible to monitor the exact amount of effort exerted by each intermediate link - how can we ensure that they really do exert the appropriate amount of effort?\nMany other complex resource allocation problems exhibit similar hidden actions, e.g., a task that runs on a collection of shared servers may be allocated, by each server, an unknown percentage of the CPU's processing power or of the physical memory.\nHow can we ensure that the right combination of allocations is actually made by the different servers?\nA related class of examples concerns security issues: each link in a complex system may exert different levels of effort for protecting some desired security property of the system.\nHow can we ensure that the desired level of collective security is obtained?\nOur approach to this problem is based on the well-studied principal-agent problem in economic theory: how can a principal motivate a rational agent to exert costly effort towards the welfare of the principal?\nThe crux of the model is that the agent's action (i.e. whether he exerts effort or not) is invisible to the principal; only the final outcome, which is probabilistic and also influenced by other factors, is visible.\nThis problem is well studied in many contexts in classical economic theory, and we refer the readers to introductory texts such as [5], Chapter 14.\nThe solution is based on the observation that a properly designed contract, in which the payments are contingent upon the final outcome, can influence a rational agent to exert the required effort.\nIn this paper we initiate a general study of handling combinations of agents rather than a single agent.\nWhile much work has already been done on motivating teams of agents [4], our emphasis is on dealing with the complex combinatorial structure of dependencies between the agents' actions.\nIn the general case, each combination of efforts exerted by the n different agents may result in a different expected gain for the principal.\nThe general question asks which conditional payments the principal should offer to which agents so as to maximize his net utility.\nIn our setting, and unlike in previous work (see, e.g., [12]), the main challenge is to determine the optimal amount of effort desired from each agent.\nThis paper suggests models for, and provides some interesting initial results about, this combinatorial agency problem.\nWe believe that we have only scratched the surface and leave many open questions, conjectures, and directions for further research.\nWe believe that this type of analysis may also find applications in regular economic activity.\nConsider for example a firm that sub-contracts a family of related tasks to many individuals (or other firms).\nIt will often not be possible to exactly monitor the actual effort level of each sub-contractor (e.g., in cases of public-relations activities, consulting activities, or any activities that require cooperation between different sub-contractors).\nWhen the dependencies between the different subtasks
are complex, we believe that combinatorial agency models can offer a foundation for the design of contracts with appropriate incentives.\nIt may also be useful to view our work as part of a general research agenda stemming from the fact that all types of economic activity are increasingly being handled with the aid of sophisticated computer systems.\nIn general, in such computerized settings, complex scenarios involving multiple agents and goods can naturally occur, and they need to be algorithmically handled.\nThis calls for the study of the standard issues in economic theory in new complex settings.\nThe principal-agent problem is a prime example where such complex settings introduce new challenges.\n1.2 Our Models We start by presenting a general model: each of n agents has a set of possible actions, and the combination of actions by the players probabilistically results in some outcome.\nThe main part of the specification of a problem in this model is a function that specifies this distribution for each n-tuple of agents' actions.\nAdditionally, the problem specifies the principal's utility for each possible outcome, and, for each agent, the agent's cost for each possible action.\nThe principal motivates the agents by offering to each of them a contract that specifies a payment for each possible outcome of the whole project1.\nKey here is that the actions of the players are non-observable, and thus the contract cannot make the payments directly contingent on the actions of the players, but rather only on the outcome of the whole project.\nGiven a set of contracts, the agents will each optimize his own utility: i.e. will choose the action that maximizes his expected payment minus the cost of his action.\nSince the outcome depends on the actions of all players together, the agents are put in a game and are assumed to reach a Nash equilibrium2.\nThe principal's problem, our problem in this paper, is that of designing an optimal set of contracts: i.e. contracts that maximize his expected utility from the outcome, minus his expected total payment.\nThe main difficulty is that of determining the required Nash equilibrium point.\nIn order to focus on the main issues, the rest of the paper deals with the basic binary case: each agent has only two possible actions (exert effort and shirk) and there are only two possible outcomes (success and failure).\nIt seems that this case already captures the main interesting ingredients3.\nIn this case, each agent's problem boils down to whether to exert effort or not, and the principal's problem boils down to which agents should be contracted to exert effort.\nThis model is still pretty abstract, and every problem description contains a complete table specifying the success probability for each subset of the agents who exert effort.\nWe then consider a more concrete model which concerns a subclass of problem instances where this exponential-size table is succinctly represented.\nThis subclass provides many natural types of problem instances.\nIn this subclass every agent performs a subtask which succeeds with a low probability γ if the agent does not exert effort, and with a higher probability δ > γ if the agent does exert effort.\nThe whole project succeeds as a deterministic Boolean function of the success of the subtasks.\nThis Boolean function can be represented in various ways.\nTwo basic examples are the AND function, in which the project succeeds only if all subtasks succeed, and the OR function, which succeeds if any of the subtasks succeeds.\nA more complex example considers a communication network, where each agent
controls a single edge, and success of the subtask means that a message is forwarded by that edge.\nEffort by the edge increases this success probability.\nThe complete project succeeds if there is a complete path of successful edges between a given source and sink.\nComplete definitions of the models appear in Section 2.\n1.3 Our Results (1: One could think of a different model in which the agents have intrinsic utility from the outcome and payments may not be needed, as in [10, 11].\n2: In this paper our philosophy is that the principal can suggest a Nash equilibrium point to the agents, thus focusing on the best Nash equilibrium.\nOne may alternatively study the worst-case equilibrium as in [12], or attempt to model some kind of an extensive game between the agents, as in [9, 10, 11].\n3: However, some of the more advanced questions we ask for this case can be viewed as instances of the general model.)\nWe address a host of questions and prove a large number of results.\nWe believe that despite the large amount of work that appears here, we have only scratched the surface.\nIn many cases we were not able to achieve the general characterization theorems that we desired and had to settle for analyzing special cases or proving partial results.\nIn many cases, simulations reveal structure that we were not able to formally prove.\nWe present here an informal overview of the issues that we studied, what we were able to do, and what we were not.\nThe full treatment of most of our results appears only in the extended version [2], and only some are discussed, often with associated simulation results, in the body of the paper.\nOur first object of study is the structure of the class of sets of agents that can be contracted for a given problem instance.\nLet us fix a given function describing success probabilities, fix the agents' costs, and consider the set of contracted agents for different values of the principal's associated value from success.\nFor very low values, no agent will be contracted, since even a single agent's cost is higher than the principal's value.\nFor very high values, all agents will always be contracted, since the marginal contribution of an agent multiplied by the principal's value will overtake any associated payment.\nWhat happens for intermediate values?\nWe first observe that there is a finite number of transitions between different sets as the principal's project value increases.\nThese transitions behave very differently for different functions.\nFor example, we show that for the AND function only a single transition occurs: for low enough values no agent will be contracted, while for higher values all agents will be contracted - there is no intermediate range for which only some of the agents are contracted.\nFor the OR function, the situation is the opposite: as the principal's value increases, the set of contracted agents grows one by one.\nWe are able to fully characterize the types of functions for which these two extreme types of transition behavior occur.\nHowever, the structure of these transitions in general seems quite complex, and we were not able to fully analyze them even in simple cases like the Majority function (the project succeeds if a majority of subtasks succeeds) or very simple networks.\nWe do have several partial results, including a construction with an exponential number of transitions.\nDuring this analysis we also study what we term the price of unaccountability: how much worse is the social utility achieved under the optimal contracts than what could be achieved in the non-strategic case4, where the socially optimal actions are simply dictated by the principal?\nWe are able to fully analyze this price for the AND function, where it is shown to tend to infinity as the number of agents tends to infinity.\nMore general analysis remains an open problem.\nOur analysis of these questions sheds light on the difficulty of
the various natural associated algorithmic problems.\nIn particular, we observe that the optimal contract can be found in time polynomial in the explicit representation of the probability function.\nWe prove a lower bound that shows that the optimal contract cannot be found with a number of queries that is polynomial in just the number of agents, in a general black-box model.\nWe also show that when the probability function is succinctly represented as a read-once network, the problem becomes #P-hard.\n(4: The non-strategic case is often referred to as the case with contractible actions or the principal's first-best solution.)\nThe status of some algorithmic questions remains open, in particular that of finding the optimal contract for technologies defined by serial-parallel networks.\nIn a follow-up paper [1] we deal with equilibria in mixed strategies and show that the principal can gain from inducing a mixed-Nash equilibrium between the agents rather than a pure one.\nWe also show cases where the principal can gain by asking agents to reduce their effort level, even when this effort comes for free.\nBoth phenomena cannot occur in the non-strategic setting.\n2.\nMODEL AND PRELIMINARIES 2.1 The General Setting A principal employs a set of agents N of size n. Each agent i ∈ N has a possible set of actions Ai, and a cost (effort) ci(ai) ≥ 0 for each possible action ai ∈ Ai (ci : Ai → ℝ+).\nThe actions of all players determine, in a probabilistic way, a contractible outcome o ∈ O, according to a success function t : A1 × ... × An → Δ(O) (where Δ(O) denotes the set of probability distributions on O).\nA technology is a pair, (t, c), of a success function, t, and cost functions, c = (c1, c2, ... , cn).\nThe principal has a certain value for each possible outcome, given by the function v : O → ℝ.\nAs we will only consider risk-neutral players in this paper5, we will also treat v as a function on Δ(O), by taking the simple expected value.\nActions of the players are invisible, but the final outcome o is visible to him and to others (in particular the court), and he may design enforceable contracts based on the final outcome.\nThus the contract for agent i is a function (payment) pi : O → ℝ; again, we will also view pi as a function on Δ(O).\nGiven this setting, the agents have been put in a game, where the utility of agent i under the vector of actions a = (a1, ... , an) is given by ui(a) = pi(t(a)) − ci(ai).\nThe agents will be assumed to reach Nash equilibrium, if such equilibrium exists.\nThe principal's problem (which is our problem in this paper) is how to design the contracts pi so as to maximize his own expected utility u(a) = v(t(a)) − Σ_i pi(t(a)), where the actions a1, ... , an are at Nash equilibrium.\nIn the case of multiple Nash equilibria we let the principal choose the equilibrium, thus focusing on the best Nash equilibrium.\nA variant, which is similar in spirit to strong implementation in mechanism design, would be to take the worst Nash equilibrium, or, stronger yet, to require that only a single equilibrium exists.\nFinally, the social welfare for a ∈ A is u(a) + Σ_{i∈N} ui(a) = v(t(a)) − Σ_{i∈N} ci(ai).\n2.2 The Binary-Outcome Binary-Action Model As we wish to concentrate on the complexities introduced by the combinatorial structure of the success function t, we restrict ourselves to a simpler setting that seems to focus more clearly on the structure of t.\nA similar model was used in [12].\nWe first restrict the action spaces to have only two states (binary-action): 0 (low effort) and 1 (high effort).\nThe cost function of agent i is now just a scalar ci > 0 denoting the cost of exerting high effort (where the low effort has cost 0).\nThe vector of costs is c = (c1, c2, ...
, cn).\n(5: The risk-averse case would obviously be a natural second step in the research of this model, as it has been for non-combinatorial scenarios.)\nWe use the notation (t, c) to denote a technology in such a binary-outcome model.\nWe then restrict the outcome space to have only two states (binary-outcome): 0 (project failure) and 1 (project success).\nThe principal's value for a successful project is given by a scalar v > 0 (where the value of project failure is 0).\nWe assume that the principal can pay the agents but not fine them (known as the limited liability constraint).\nThe contract to agent i is thus now given by a scalar value pi ≥ 0 that denotes the payment that i gets in case of project success.\nIf the project fails, the agent gets 0.\nWhen the lowest-cost action has zero cost (as we assume), this immediately implies that the participation constraint holds.\nAt this point the success function t becomes a function t : {0, 1}^n → [0, 1], where t(a1, ... , an) denotes the probability of project success; players with ai = 0 do not exert effort and incur no cost, and players with ai = 1 do exert effort and incur a cost of ci.\nAs we wish to concentrate on motivating agents, rather than on the coordination between agents, we assume that more effort by an agent always leads to a better probability of success, i.e. that the success function t is strictly monotone.\nFormally, if we denote by a−i ∈ A−i the (n − 1)-dimensional vector of the actions of all agents excluding agent i, i.e., a−i = (a1, ... , ai−1, ai+1, ... , an), then a success function must satisfy: ∀i ∈ N, ∀a−i ∈ A−i: t(1, a−i) > t(0, a−i).\nAdditionally, we assume that t(a) > 0 for any a ∈ A (or equivalently, t(0, 0, ... , 0) > 0).\nDefinition 1.\nThe marginal contribution of agent i, denoted by Δi, is the difference between the probability of success when i exerts effort and when he shirks: Δi(a−i) = t(1, a−i) − t(0, a−i).\nNote that since t is monotone, Δi is a strictly positive function.\nAt this point we can already make some simple observations.\nThe best action, ai ∈ Ai, of agent i can now be easily determined as a function of what the others do, a−i ∈ A−i, and his contract pi.\nClaim 1.\nGiven a profile of actions a−i, agent i's best strategy is ai = 1 if pi ≥ ci/Δi(a−i), and ai = 0 if pi ≤ ci/Δi(a−i).\n(In the case of equality the agent is indifferent between the two alternatives.)\nAs pi ≥ ci/Δi(a−i) if and only if ui(1, a−i) = pi · t(1, a−i) − ci ≥ pi · t(0, a−i) = ui(0, a−i), i's best strategy is to choose ai = 1 in this case.\nThis allows us to specify the contracts that are optimal for the principal for inducing a given equilibrium.\nObservation 1.\nThe best contracts (for the principal) that induce a ∈ A as an equilibrium are pi = 0 for each agent i who exerts no effort (ai = 0), and pi = ci/Δi(a−i) for each agent i who exerts effort (ai = 1).\nIn this case, the expected utility of agent i who exerts effort is ci · (t(1, a−i)/Δi(a−i) − 1), and 0 for an agent who shirks.\nThe principal's expected utility is given by u(a, v) = (v − P) · t(a), where P is the total payment in case of success, given by P = Σ_{i|ai=1} ci/Δi(a−i).\nWe say that the principal contracts with agent i if pi > 0 (and ai = 1 in the equilibrium a ∈ A).\nThe principal's goal is to maximize his utility given his value v, i.e. to determine the profile of actions a* ∈ A which gives the highest value of u(a, v) in equilibrium.\nChoosing a ∈ A corresponds to choosing a set S of agents that exert effort (S = {i | ai = 1}).\nWe call the set of agents S* that the principal contracts with in a* (S* = {i | a*_i = 1}) an optimal contract for the principal at value v.\nWe sometimes abuse notation and write t(S) instead of t(a), when S is exactly the set of agents that exert effort in a ∈ A.\nA natural yardstick by which to measure this decision is the non-strategic case, i.e. when the agents need not be motivated but are rather controlled directly by the principal (who also bears their costs).\nIn this case the principal will simply choose the profile a ∈ A that optimizes the social welfare (global efficiency), t(a) · v − Σ_{i|ai=1} ci.\nThe worst ratio between the social welfare in this non-strategic case and the social welfare for the profile a ∈ A chosen by the principal in the agency case may be termed the price of unaccountability.\nGiven a technology (t, c), let S*(v) denote the optimal contract in the agency case and let S*_ns(v) denote an optimal contract in the non-strategic case, when the principal's value is v.\nThe social welfare for value v when the set S of agents is contracted is t(S) · v − Σ_{i∈S} ci (in both the agency and non-strategic cases).\nDefinition 2.\nThe price of unaccountability POU(t, c) of a technology (t, c) is defined as the worst ratio (over v) between the total social welfare in the non-strategic case and in the agency case: POU(t, c) = sup_{v>0} [t(S*_ns(v)) · v − Σ_{i∈S*_ns(v)} ci] / [t(S*(v)) · v − Σ_{i∈S*(v)} ci].\nIn cases where several sets are optimal in the agency case, we take the worst set (i.e., the set that yields the lowest social welfare).\nWhen the technology (t, c) is clear from the context we will use POU to denote the price of
unaccountability for technology (t, c).\nNote that the POU is at least 1 for any technology.\nAs we would like to focus on results derived from properties of the success function, in most of the paper we will deal with the case where all agents have an identical cost c, that is, ci = c for all i ∈ N.\nWe denote a technology (t, c) with identical costs by (t, c).\nFor simplicity of presentation, we sometimes use the term technology function to refer to the success function of the technology.\n2.3 Structured Technology Functions In order to be more concrete, we will especially focus on technology functions whose structure can be described easily as being derived from independent agent tasks - we call these structured technology functions.\nThis subclass will first give us some natural examples of technology functions, and will also provide a succinct and natural way to represent the technology functions.\nIn a structured technology function, each individual succeeds or fails in his own task independently.\nThe project's success or failure depends, possibly in a complex way, on the set of successful sub-tasks.\nThus we will assume a monotone Boolean function f : {0, 1}^n → {0, 1} which denotes whether the project succeeds as a function of the success of the n agents' tasks (and is not determined by any set of n − 1 agents).\nAdditionally there are constants 0 < γi < δi < 1, where γi denotes the probability of success for agent i if he does not exert effort, and δi (> γi) denotes the probability of success if he does exert effort.\nIn order to reduce the number of parameters, we will restrict our attention to the case where γ1 = ... = γn = γ and δ1 = ... = δn = 1 − γ, thus leaving ourselves with a single parameter γ s.t. 0 < γ < 1/2.\nUnder this structure, the technology function t is defined by t(a1, ... , an) being the probability that f(x1, ... , xn) = 1, where the bits x1, ... , xn are chosen according to the following distribution: if ai = 0 then xi = 1 with probability γ and xi = 0 with probability 1 − γ; otherwise, i.e. if ai = 1, then xi = 1 with probability 1 − γ and xi = 0 with probability γ.\nWe denote x = (x1, ... , xn).\nThe question of the representation of the technology function is now reduced to that of representing the underlying monotone Boolean function f.\nIn the most general case, the function f can be given by a general monotone Boolean circuit.\nAn especially natural sub-class of functions in the structured technologies setting would be functions that can be represented as a read-once network - a graph with a given source and sink, where every edge is labeled by a different player.\nThe project succeeds if the edges that belong to players whose tasks succeeded form a path between the source and the sink6.\nA few simple examples are in order here: 1.\nThe AND technology: f(x1, ... , xn) is the logical conjunction of the xi (f(x) = ∧_{i∈N} xi).\nThus the project succeeds only if all agents succeed in their tasks.\nThis is shown graphically as a read-once network in Figure 1(a).\nIf m agents exert effort (Σ_i ai = m), then t(a) = t_m = γ^(n−m) (1 − γ)^m.\nE.g. for two players, the technology function t(a1 a2) = t_{a1+a2} is given by t0 = t(00) = γ², t1 = t(01) = t(10) = γ(1 − γ), and t2 = t(11) = (1 − γ)².\n2.\nThe OR technology: f(x1, ... , xn) is the logical disjunction of the xi (f(x) = ∨_{i∈N} xi).\nThus the project succeeds if at least one of the agents succeeds in his task.\nThis is shown graphically as a read-once network in Figure 1(b).\nIf m agents exert effort, then t_m = 1 − γ^m (1 − γ)^(n−m).\nE.g. for two players, the technology function is given by t(00) = 1 − (1 − γ)², t(01) = t(10) = 1 − γ(1 − γ), and t(11) = 1 − γ².\n3.\nThe Or-of-Ands (OOA) technology: f(x) is a logical disjunction of conjunctions.\nIn the simplest case of equal-length clauses (denote by nc the number of clauses and by nl their length), f(x) = ∨_{j=1..nc} (∧_{k=1..nl} x^j_k).\nThus the project succeeds if in at least one clause all agents succeed in their tasks.\nThis is shown graphically as a read-once network in Figure 2(a).\nIf mi agents on path i exert effort, then t(m1, ..., m_nc) = 1 − Π_i (1 − γ^(nl−mi) (1 − γ)^mi).\nE.g. for four players, the technology function t(a^1_1 a^1_2, a^2_1 a^2_2) is given by t(00, 00) = 1 − (1 − γ²)², t(01, 00) = t(10, 00) = t(00, 01) = t(00, 10) = 1 − (1 − γ(1 − γ))(1 − γ²), and so on.\n(6: One may view this representation as directly corresponding to the project of delivering a message from the source to the sink in a real network of computers, with the edges being controlled by selfish agents.)\nFigure 1: Graphical representations of (a) AND and (b) OR technologies.\nFigure 2: Graphical representations of (a) OOA and (b) AOO technologies.\n4.\nThe And-of-Ors (AOO) technology: f(x) is a logical conjunction of disjunctions.\nIn the simplest case of equal-length clauses (denote by nl the number of clauses and by nc their length), f(x) = ∧_{j=1..nl} (∨_{k=1..nc} x^j_k).\nThus the project succeeds if at least one agent from each clause succeeds in his task.\nThis is shown graphically as a read-once network in Figure 2(b).\nIf mi agents in clause i exert effort, then t(m1, ..., m_nl) = Π_i (1 − γ^mi (1 − γ)^(nc−mi)).\nE.g.
for four players, the technology function t(a^1_1 a^1_2, a^2_1 a^2_2) is given by t(00, 00) = (1 − (1 − γ)²)², t(01, 00) = t(10, 00) = t(00, 01) = t(00, 10) = (1 − γ(1 − γ))(1 − (1 − γ)²), and so on.\n5.\nThe Majority technology: f(x) is 1 if a majority of the values xi are 1.\nThus the project succeeds if most players succeed.\nThe majority function, even on 3 inputs, cannot be represented by a read-once network, but is easily represented by a monotone Boolean formula: maj(x, y, z) = xy + yz + xz.\nIn this case the technology function is given by t(000) = 3γ²(1 − γ) + γ³, t(001) = t(010) = t(100) = γ³ + 2γ(1 − γ)² + γ²(1 − γ), etc.\n3.\nANALYSIS OF SOME ANONYMOUS TECHNOLOGIES A success function t is called anonymous if it is symmetric with respect to the players, i.e., t(a1, ... , an) depends only on Σ_{i∈N} ai (the number of agents that exert effort).\nA technology (t, c) is anonymous if t is anonymous and the cost c is identical for all agents.\nOf the examples presented above, the AND, OR, and Majority technologies are anonymous (but not AOO and OOA).\nAs for an anonymous t only the number of agents that exert effort matters, we can shorten the notation and denote t_m = t(1^m, 0^(n−m)), Δ_m = t_{m+1} − t_m, p_m = c/Δ_{m−1}, and u_m = t_m · (v − m · p_m), for the case of identical cost c.\nFigure 3: Number of agents in the optimal contract of the AND (left) and OR (right) technologies with 3 players, as a function of γ and v.
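For an anonymous technology, the notation just introduced (t_m, Δ_m, p_m = c/Δ_{m−1}, u_m = t_m(v − m·p_m)) reduces the principal's problem to a one-dimensional search over m, the number of contracted agents. This is easy to exercise numerically; a small sketch in Python with exact rationals, where the helper `utilities` is our illustrative name, not notation from the paper:

```python
from fractions import Fraction as F

def utilities(t, n, c, v):
    # Principal's utility u_m = t_m * (v - m * p_m) for contracting
    # m = 0..n agents, with per-agent payment p_m = c / Delta_{m-1}.
    us = [t(0) * v]                          # m = 0: nobody is paid
    for m in range(1, n + 1):
        p_m = c / (t(m) - t(m - 1))          # p_m = c / (t_m - t_{m-1})
        us.append(t(m) * (v - m * p_m))
    return us

n, c, g = 2, F(1), F(1, 4)
t_and = lambda m: g ** (n - m) * (1 - g) ** m      # AND: all tasks must succeed
t_or = lambda m: 1 - g ** m * (1 - g) ** (n - m)   # OR: some task must succeed

u_and = utilities(t_and, n, c, F(6))
# u_and[0] == u_and[2] == 3/8 and u_and[1] == -3/8:
# v = 6 is the 0 -> 2 transition for AND, and 1 agent is never optimal.
u_or = utilities(t_or, n, c, F(52, 9))
# u_or[0] == u_or[1] == 91/36: v = 52/9 is the 0 -> 1 transition for OR.
```

On these two instances the sketch reproduces the transition values derived by hand in Examples 1 and 2 of Section 3.1.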
AND technology: either 0 or 3 agents are contracted, and the transition value is monotonic in γ.\nOR technology: for any γ we can see all transitions.\n3.1 AND and OR Technologies Let us start with a direct and full analysis of the AND and OR technologies for two players for the case γ = 1/4 and c = 1.\nExample 1.\nAND technology with two agents, c = 1, γ = 1/4: we have t0 = γ² = 1/16, t1 = γ(1 − γ) = 3/16, and t2 = (1 − γ)² = 9/16, thus Δ0 = 1/8 and Δ1 = 3/8.\nThe principal has 3 possibilities: contracting with 0, 1, or 2 agents.\nLet us write down the expressions for his utility in these 3 cases: • 0 agents: No agent is paid, and the principal's utility is u0 = t0 · v = v/16.\n• 1 agent: This agent is paid p1 = c/Δ0 = 8 on success, and the principal's utility is u1 = t1(v − p1) = 3v/16 − 3/2.\n• 2 agents: Each agent is paid p2 = c/Δ1 = 8/3 on success, and the principal's utility is u2 = t2(v − 2p2) = 9v/16 − 3.\nNotice that the option of contracting with one agent is always inferior to contracting with either both or none, and will never be taken by the principal.\nThe principal will contract with no agent when v < 6, with both agents whenever v > 6, and with either none or both for v = 6.\nThis should be contrasted with the non-strategic case, in which the principal completely controls the agents (and bears their costs) and thus simply optimizes globally.\nIn this case the principal will make both agents exert effort whenever v ≥ 4.\nThus for example, for v = 6 the globally optimal decision (non-strategic case) would give a global utility of 6 · 9/16 − 2 = 11/8, while the principal's decision (in the agency case) would give a global utility of 3/8, giving a ratio of 11/3.\nIt turns out that this is the worst price of unaccountability in this example, and it is obtained exactly at the transition point of the agency
case, as we show below.

Example 2. OR technology with two agents, c = 1, γ = 1/4: we have t0 = 1 − (1 − γ)^2 = 7/16, t1 = 1 − γ(1 − γ) = 13/16, and t2 = 1 − γ^2 = 15/16, thus Δ0 = 3/8 and Δ1 = 1/8. Let us write down the expressions for the principal's utility in these three cases:
• 0 agents: no agent is paid, and the principal's utility is u0 = t0 · v = 7v/16.
• 1 agent: this agent is paid p1 = c/Δ0 = 8/3 on success, and the principal's utility is u1 = t1(v − p1) = 13v/16 − 13/6.
• 2 agents: each agent is paid p2 = c/Δ1 = 8 on success, and the principal's utility is u2 = t2(v − 2p2) = 15v/16 − 15.
Now contracting with one agent is better than contracting with none whenever v > 52/9 (and is equivalent for v = 52/9), and contracting with both agents is better than contracting with one agent whenever v > 308/3 (and is equivalent for v = 308/3); thus the principal will contract with no agent for 0 ≤ v ≤ 52/9, with one agent for 52/9 ≤ v ≤ 308/3, and with both agents for v ≥ 308/3. In the non-strategic case, in comparison, the principal will make a single agent exert effort for v > 8/3, and the second one exert effort as well when v > 8. It turns out that the price of unaccountability here is 19/13, and it is achieved at v = 52/9, which is exactly the transition point from 0 to 1 contracted agents in the agency case.

It is not a coincidence that in both the AND and OR technologies the POU is obtained at a v that is a transition point (see the full proof in [2]).

Lemma 1. For any given technology (t, c), the price of unaccountability POU(t, c) is obtained at some value v that is a transition point of either the agency or the non-strategic case.

Proof sketch: We look at all transition points in both cases. For any value lower than the first transition point, 0 agents are contracted in both cases, and the
social welfare ratio is 1. Similarly, for any value higher than the last transition point, n agents are contracted in both cases, and the social welfare ratio is 1. Thus, we can focus on the interval between the first and last transition points. Between any pair of consecutive points, the social welfare ratio is a ratio of two linear functions of v (the optimal contracts are fixed on such a segment). We then show that for each segment, the supremum of the ratio is obtained at an endpoint of the segment (a transition point). As there are finitely many such points, the global supremum is obtained at the transition point with the maximal social welfare ratio. □

We already see a qualitative difference between the AND and OR technologies (even with 2 agents): in the first case either all agents are contracted or none, while in the second case, for some intermediate range of values v, exactly one agent is contracted. Figure 3 shows the same phenomena for AND and OR technologies with 3 players.

Theorem 1. For any anonymous AND technology (with any number of agents n, any γ, and any identical cost c):
• there exists a value v* < ∞ (a function of n, γ, and c) such that for any v < v* it is optimal to contract with no agent, for v > v* it is optimal to contract with all n agents, and for v = v* both contracts (0 and n agents) are optimal;
23 \u2022 the price of unaccountability is obtained at the transition point of the agency case, and is POU = ` 1 \u03b3 \u2212 1 \u00b4n\u22121 + (1 \u2212 \u03b3 1 \u2212 \u03b3 ) Proof sketch: For any fixed number of contracted agents, k, the principal``s utility is a linear function in v, where the slope equals the success probability under k contracted agents.\nThus, the optimal contract corresponds to the maximum over a set of linear functions.\nLet v\u2217 denote the point at which the principal is indifferent between contracting with 0 or n agents.\nIn [2] we show that at v\u2217, the principal``s utility from contracting with 0 (or n) agents is higher than his utility when contracting with any number of agents k \u2208 {1, ... , n \u2212 1}.\nAs the number of contracted agents is monotonic non-decreasing in the value (due to Lemma 3), for any v < v\u2217, contracting with 0 agents is optimal, and for any v > v\u2217, contracting with n agents is optimal.\nThis is true for both the agency and the non-strategic cases.\nAs in both cases there is a single transition point, the claim about the price of unaccountability for AND technology is proved as a special case of Lemma 2 below.\nFor AND technology tn\u22121 t0 = (1\u2212\u03b3)n\u22121 \u00b7\u03b3 \u03b3n = 1 \u03b3 \u2212 1 n\u22121 and tn\u22121 tn = (1\u2212\u03b3)n\u22121 \u00b7\u03b3 (1\u2212\u03b3)n = \u03b3 1\u2212\u03b3 , and the expressions for the POU follows.\n2 In [2] we present a general characterization of technologies with a single transition in the agency and the non-strategic cases, and provide a full proof of Theorem 1 as a special case.\nThe property of a single transition occurs in both the agency and the non-strategic cases, where the transition occurs at a smaller value of v in the non-strategic case.\nNotice that the POU is not bounded across the AND family of technologies (for various n, \u03b3) as POU \u2192 \u221e either if \u03b3 \u2192 0 (for any given n \u2265 2) or n \u2192 
\u221e (for any fixed \u03b3 \u2208 (0, 1 2 )).\nNext we consider the OR technology and show that it exhibits all n transitions.\nTheorem 2.\nFor any anonymous OR technology, there exist finite positive values v1 < v2 < ... < vn such that for any v s.t. vk < v < vk+1, contracting with exactly k agents is optimal (for v < v1, no agent is contracted, and for v > vn, all n agents are contracted).\nFor v = vk, the principal is indifferent between contracting with k \u2212 1 or k agents.\nProof sketch: To prove the claim we define vk to be the value for which the principal is indifferent between contracting with k \u2212 1 agents, and contracting with k agents.\nWe then show that for any k, vk < vk+1.\nAs the number of contracted agents is monotonic non-decreasing in the value (due to Lemma 3), v1 < v2 < ... < vn is a sufficient condition for the theorem to hold.\n2 The same behavior occurs in both the agency and the nonstrategic case.\nThis characterization is a direct corollary of a more general characterization given in [2].\nWhile in the AND technology we were able to fully determine the POU analytically, the OR technology is more difficult to analyze.\nOpen Question 1.\nWhat is the POU for OR with n > 2 agents?\nIs it bounded by a constant for every n?\nWe are only able to determine the POU of the OR technology for the case of two agents [2].\nEven for the 2 agents case we already observe a qualitative difference between the POU in the AND and OR technologies.\nObservation 2.\nWhile in the AND technology the POU for n = 2 is not bounded from above (for \u03b3 \u2192 0), the highest POU in OR technology with two agents is 2 (for \u03b3 \u2192 0).\n3.2 What Determines the Transitions?\nTheorems 1 and 2 say that both the AND and OR technologies exhibit the same transition behavior (changes of the optimal contract) in the agency and the non-strategic cases.\nHowever, this is not true in general.\nIn [2] we provide a full characterization of the sufficient and necessary 
conditions for general anonymous technologies to have a single transition and all n transitions. We find that the conditions in the agency case are different from the ones in the non-strategic case. We are able to determine the POU for any anonymous technology that exhibits a single transition in both the agency and the non-strategic cases (see the full proof in [2]).

Lemma 2. For any anonymous technology that has a single transition in both the agency and the non-strategic cases, the POU is given by POU = 1 + t_{n−1}/t0 − t_{n−1}/tn, and it is obtained at the transition point of the agency case.

Proof sketch: Since the payments in the agency case are higher than in the non-strategic case, the transition point in the agency case occurs at a higher value than in the non-strategic case. Thus, there exists a region in which the optimal numbers of contracted agents in the agency and the non-strategic cases are 0 and n, respectively. By Lemma 1 the POU is obtained at a transition point. As the social welfare ratio is decreasing in v in this region, the POU is obtained at the higher value, that is, at the transition point of the agency case. The transition point in the agency case is the point at which the principal is indifferent between contracting with 0 and with n agents, v* = (c · n/(tn − t0)) · (tn/(tn − t_{n−1})). Substituting the transition point of the agency case into the POU expression yields the required expression: POU = (v* · tn − c · n)/(v* · t0) = 1 + t_{n−1}/t0 − t_{n−1}/tn. □

3.3 The MAJORITY Technology

The project under the MAJORITY function succeeds if the majority of the agents succeed in their tasks (see Section 2.3). We are unable to characterize the transition behavior of the MAJORITY technology analytically. Figure 4 presents the optimal number of contracted agents as a function of v and γ, for n = 5. The phenomena that we observe in this example (and others that we looked
at) lead us to the following conjecture.

Conjecture 1. For any MAJORITY technology (any n, γ, and c), there exists l, 1 ≤ l ≤ ⌈n/2⌉, such that the first transition is from 0 to l agents, and then all the remaining n − l transitions exist.

Figure 4: Simulation results showing the number of agents in the optimal contract of the MAJORITY technology with 5 players, as a function of γ and v. As γ decreases, the first transition occurs at a lower value and jumps to a higher number of agents.

For any sufficiently small γ, the first transition is to 3 = ⌈5/2⌉ agents, and for any sufficiently large γ, the first transition is to 1 agent. For any γ, the first transition is never to more than 3 agents, and after the first transition we see all following possible transitions. Moreover, for any fixed c and n, l = 1 when γ is close enough to 1/2, l is a non-increasing function of γ (with image {1, ...
, n\/2 }), and l = n\/2 when \u03b3 is close enough to 0.\n4.\nNON-ANONYMOUS TECHNOLOGIES In non-anonymous technologies (even with identical costs), we need to talk about the contracted set of agents and not only about the number of contracted agents.\nIn this section, we identify the sets of agents that can be obtained as the optimal contract for some v.\nThese sets construct the orbit of a technology.\nDefinition 3.\nFor a technology t, a set of agents S is in the orbit of t if for some value v, the optimal contract is exactly with the set S of agents (where ties between different S``s are broken according to a lexicographic order9 ).\nThe korbit of t is the collection of sets of size exactly k in the orbit.\nObserve that in the non-strategic case the k-orbit of any technology with identical cost c is of size at most 1 (as all sets of size k has the same cost, only the one with the maximal probability can be on the orbit).\nThus, the orbit of any such technology in the non-strategic case is of size at most n + 1.\nWe show that the picture in the agency case is very different.\nA basic observation is that the orbit of a technology is actually an ordered list of sets of agents, where the order is determined by the following lemma.\nLemma 3.\n( Monotonicity lemma) For any technology (t, c), in both the agency and the non-strategic cases, the 9 This implies that there are no two sets with the same success probability in the orbit.\nexpected utility of the principal at the optimal contracts, the success probability of the optimal contracts, and the expected payment of the optimal contract, are all monotonically nondecreasing with the value.\nProof.\nSuppose the sets of agents S1 and S2 are optimal in v1 and v2 < v1, respectively.\nLet Q(S) denote the expected total payment to all agents in S in the case that the principal contracts with the set S and the project succeeds (for the agency case, Q(S) = t(S) \u00b7 P i\u2208S ci t(S)\u2212t(S\\i) , while for the 
non-strategic case Q(S) = Σ_{i∈S} ci). The principal's utility is a linear function of the value, u(S, v) = t(S) · v − Q(S). As S1 is optimal at v1, u(S1, v1) ≥ u(S2, v1), and as t(S2) ≥ 0 and v1 > v2, u(S2, v1) ≥ u(S2, v2). We conclude that u(S1, v1) ≥ u(S2, v2); thus the utility is monotonic non-decreasing in the value.

Next we show that the success probability is monotonic non-decreasing in the value. S1 is optimal at v1, thus: t(S1) · v1 − Q(S1) ≥ t(S2) · v1 − Q(S2). S2 is optimal at v2, thus: t(S2) · v2 − Q(S2) ≥ t(S1) · v2 − Q(S1). Summing these two inequalities, we get that (t(S1) − t(S2)) · (v1 − v2) ≥ 0, which implies that if v1 > v2 then t(S1) ≥ t(S2).

Finally we show that the expected payment is monotonic non-decreasing in the value. As S2 is optimal at v2 and t(S1) ≥ t(S2), we observe that: t(S2) · v2 − Q(S2) ≥ t(S1) · v2 − Q(S1) ≥ t(S2) · v2 − Q(S1), or equivalently, Q(S2) ≤ Q(S1), which is what we wanted to show.

4.1 AOO and OOA Technologies

We begin our discussion of non-anonymous technologies with two examples: the And-of-Ors (AOO) and Or-of-Ands (OOA) technologies. The AOO technology (see Figure 2) is composed of multiple OR-components that are And-ed together.

Theorem 3. Let h be an anonymous OR technology, and let f = ∧_{j=1}^{nc} h be the AOO technology that is obtained by a conjunction of nc of these OR-components on disjoint inputs. Then for any value v, an optimal contract contracts with the same number of agents in each OR-component. Thus, the orbit of f is of size at most nl + 1, where nl is the number of agents in h.
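Theorem 3 can be sanity-checked by brute force on the smallest interesting instance, (x1 OR x2) AND (x3 OR x4). The following is a minimal Python sketch under the model used in the paper (an agent's task fails with probability γ when he exerts effort and with probability 1 − γ otherwise, and a contracted agent i is paid c/(t(S) − t(S\i)) on success); the helper names are ours:

```python
from fractions import Fraction as F
from itertools import combinations

GAMMA, c = F(1, 4), F(1)
COMPONENTS = [(0, 1), (2, 3)]  # (x1 OR x2) AND (x3 OR x4): agents 0..3

def t(S):
    """Success probability when exactly the agents in S exert effort."""
    p = F(1)
    for comp in COMPONENTS:
        fail = F(1)
        for i in comp:
            fail *= GAMMA if i in S else 1 - GAMMA  # effort fails w.p. gamma
        p *= 1 - fail  # an OR-component fails only if all its tasks fail
    return p

def utility(S, v):
    """Principal's utility: t(S) * (v - sum of payments c / (t(S) - t(S\\i)))."""
    return t(S) * (v - sum(c / (t(S) - t(S - {i})) for i in S))

subsets = [set(T) for r in range(5) for T in combinations(range(4), r)]
for v in (1, 5, 20, 100, 1000):
    best = max(utility(S, v) for S in subsets)
    # Theorem 3: some optimal contract takes equally many agents
    # from each OR-component.
    assert any(len(S & {0, 1}) == len(S & {2, 3})
               for S in subsets if utility(S, v) == best)
```

Exact rational arithmetic avoids any floating-point ties when comparing candidate contracts.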
Part of the proof of the theorem (for the complete proof see [2]) is based on such an AOO technology being a special case of a more general family of technologies, in which disjoint anonymous technologies are And-ed together, as explained in the next section. We conjecture that a similar result holds for the OOA technology.

Conjecture 2. In an OOA technology which is a disjunction of identical anonymous paths (with the same number of agents, γ, and c, but over disjoint inputs), for any value v the optimal contract is constructed from some number of fully-contracted paths. Moreover, there exist v1 < ... < v_{nl} such that for any v with vi ≤ v ≤ v_{i+1}, exactly i paths are contracted.

We are unable to prove this in general, but can prove it for the case of an OOA technology with two paths of length two (see [2]).

4.2 Orbit Characterization

The AOO is an example of a technology whose orbit size is linear in its number of agents. If Conjecture 2 is true, the same holds for the OOA technology. What can be said about the orbit size of a general non-anonymous technology? In the case of identical costs, it is impossible for all subsets of agents to be in the orbit. This holds by the observation that the 1-orbit (a single agent that exerts effort) is of size at most 1: only the agent that gives the highest success probability (when only he exerts effort) can be in the orbit (as he also needs to be paid the least). Nevertheless, we next show that the orbit can have exponential size. A collection of sets of k elements (out of n) is admissible if every two sets in the collection differ in at least 2 elements each, i.e., |S \ T| ≥ 2 for any two sets S, T in the collection (e.g., for k = 3, 123 and 234 cannot be together in the collection, but 123 and 345 can be).

Theorem 4. Every admissible collection can be obtained as the k-orbit of some t.
Proof sketch: The proof is constructive. Let S be some admissible collection of k-size sets. For each set S ∈ S in the collection we pick a value ε_S > 0, such that for any two sets S_i ≠ S_j in S, ε_{S_i} ≠ ε_{S_j}. We then define the technology function t as follows: for any S ∈ S, t(S) = 1/2 − ε_S, and for all i ∈ S, t(S \ i) = 1/2 − 2ε_S. Thus, the marginal contribution of every i ∈ S is ε_S. Note that since S is admissible, t is well defined, as for any two sets S ≠ S′ in S and any two agents i, j, S \ i ≠ S′ \ j. For any other set Z, we define t(Z) in a way that ensures that the marginal contribution of each agent in Z is some very small ε′ (the technical details appear in the full version). This completes the definition of t.

We show that each admissible set S ∈ S is optimal at the value v_S = ck/(2ε_S^2). We first show that it is better than any other S′ ∈ S: at the value v_S, among the sets in S, the set S maximizes the utility of the principal (this is obtained by taking the derivative of u(S, v) with respect to ε_S), so S yields a higher utility than any other S′ ∈ S. We also pick the range of the ε values to ensure that at v_S, S is better than any other set S′ \ i s.t.
S \u2208 S.\nNow we are left to show that at vS, the set S yields a higher utility than any other set Z \u2208 S.\nThe construction of t(Z) ensures this since the marginal contribution of each agent in Z is such a small , that the payment is too high for the set to be optimal.\n2 In [2] we present the full proof of the theorem, as well as the full proofs of all other claims presented in this section without such a proof.\nWe next show that there exist very large admissible collections.\nLemma 4.\nFor any n \u2265 k, there exists an admissible collection of k-size sets of size \u03a9( 1 n \u00b7 `n k \u00b4 ).\nProof sketch: The proof is based on an error correcting code that corrects one bit.\nSuch a code has a distance \u2265 3, thus admissible.\nIt is known that there are such codes with \u03a9(2n \/n) code words.\nTo ensure that an appropriate fraction of these code words have weight k, we construct a new code by XOR-ing each code word with a random word r.\nThe properties of XOR ensure that the new code remains admissible.\nEach code word is now uniformly mapped to the whole cube, and thus its probability of having weight k is `n k \u00b4 \/2n .\nThus the expected number of weight k words is \u03a9( `n k \u00b4 \/n), and for some r this expectation is achieved or exceeded.\n2 For k = n\/2 we can construct an exponential size admissible collection, which by Theorem 4 can be used to build a technology with exponential size orbit.\nCorollary 1.\nThere exists a technology (t, c) with orbit of size \u03a9( 2n n \u221a n ).\nThus, we are able to construct a technology with exponential orbit, but this technology is not a network technology or a structured technology.\nOpen Question 2.\nIs there a Read Once network with exponential orbit?\nIs there a structured technology with exponential orbit?\nNevertheless, so far, we have not seen examples of seriesparallel networks whose orbit size is larger than n + 1.\nOpen Question 3.\nHow big can the orbit size of a 
series-parallel network be?

We make a first step towards a solution of this question by showing that the orbit size of a conjunction of two disjoint networks (taking the two in series) is at most the sum of the two networks' orbit sizes. Let g and h be two Boolean functions on disjoint inputs and let f = g ∧ h (i.e., take their networks in series). The optimal contract for f at some v, denoted by S, is composed of some agents from the h-part and some from the g-part; call them T and R, respectively.

Lemma 5. Let S be an optimal contract for f = g ∧ h at v. Then T is an optimal contract for h at v · t_g(R), and R is an optimal contract for g at v · t_h(T).

Proof sketch: We express the principal's utility u(S, v) from contracting with the set S when his value is v. We abuse notation and use the function to denote the technology as well. Let Δ^f_i(S \ i) denote the marginal contribution of agent i ∈ S. Then, for any i ∈ T, Δ^f_i(S \ i) = g(R) · Δ^h_i(T \ i), and for any i ∈ R, Δ^f_i(S \ i) = h(T) · Δ^g_i(R \ i). By substituting these expressions and f(S) = h(T) · g(R), we derive that u(S, v) = h(T) · (g(R) · v − Σ_{i∈T} ci/Δ^h_i(T \ i)) − g(R) · Σ_{i∈R} ci/Δ^g_i(R \ i). The first term is maximized at a set T that is optimal for h at the value g(R) · v, while the second term is independent of T and h.
Thus, S is optimal for f at v if and only if T is an optimal contract for h at v · t_g(R). Similarly, we show that R is an optimal contract for g at v · t_h(T). □

Lemma 6. The real function v → t_h(T), where T is the h-part of an optimal contract for f at v, is monotone non-decreasing (and similarly for the function v → t_g(R)).

Proof. Let S1 = T1 ∪ R1 be the optimal contract for f at v1, and let S2 = T2 ∪ R2 be the optimal contract for f at v2 < v1. By Lemma 3, f(S1) ≥ f(S2), and since f = g · h, f(S1) = h(T1) · g(R1) ≥ h(T2) · g(R2) = f(S2). Assume by contradiction that h(T1) < h(T2); then, since h(T1) · g(R1) ≥ h(T2) · g(R2), this implies that g(R1) > g(R2). By Lemma 5, T1 is optimal for h at v1 · g(R1), and T2 is optimal for h at v2 · g(R2). As v1 > v2 and g(R1) > g(R2), T1 is optimal for h at a larger value than T2; thus by Lemma 3, h(T1) ≥ h(T2), a contradiction.

Based on Lemma 5 and Lemma 6, we obtain the following lemma (for the full proof, see [2]).

Lemma 7. Let g and h be two Boolean functions on disjoint inputs and let f = g ∧ h (i.e., take their networks in series). Suppose x and y are the respective orbit sizes of g and h; then the orbit size of f is at most x + y − 1.

By induction we get the following corollary.

Corollary 2. Assume that {(g_j, c_j)}_{j=1}^m is a set of anonymous technologies on disjoint inputs, each with identical agent cost (all agents of technology g_j have the same cost c_j). Then the orbit of f = ∧_{j=1}^m g_j is of size at most (Σ_{j=1}^m n_j) + 1, where n_j is the number of agents in technology g_j (the orbit is linear in the number of agents). In particular, this holds for the AOO technology where each OR-component is anonymous.

It would also be interesting to consider a disjunction of two Boolean functions.

Open Question 4. Does Lemma 7 hold also for the Boolean function f = g ∨ h (i.e., when the networks g, h are taken in
parallel)?

We conjecture that this is indeed the case, and that analogues of Lemmas 5 and 7 hold for the OR case as well. If this is true, it will show that series-parallel networks have a polynomial-size orbit.

5. ALGORITHMIC ASPECTS

Our analysis throughout the paper sheds some light on the algorithmic aspects of computing the best contract. In this section we state these implications (for the proofs see [2]). We first consider the general model, where the technology function is given by an arbitrary monotone function t (with rational values), and we then consider the case of structured technologies given by a network representation of the underlying Boolean function.

5.1 Binary-Outcome Binary-Action Technologies

Here we assume that we are given a technology and a value v as the input, and our output should be the optimal contract, i.e., the set S* of agents to be contracted and the contract p_i for each i ∈ S*. In the general case, the success function t is of size exponential in n, the number of agents, and we will need to deal with that. In the special case of anonymous technologies, the description of t is only the n + 1 numbers t0, ..., tn, and in this case our analysis in Section 3 completely suffices for computing the optimal contract.

Proposition 1. Given as input the full description of a technology (the values t0, ...
, tn and the identical cost c for an anonymous technology, or the value t(S) for all the 2^n possible subsets S ⊆ N of the players and a vector of costs c for non-anonymous technologies), the following can all be computed in polynomial time:
• the orbit of the technology, in both the agency and the non-strategic cases;
• an optimal contract for any given value v, for both the agency and the non-strategic cases;
• the price of unaccountability POU(t, c).

Proof. We prove the claims for the non-anonymous case; the proof for the anonymous case is similar. We first show how to construct the orbit of the technology (the same procedure applies in both cases). To construct the orbit we find all transition points and the sets that are in the orbit. The empty contract is always optimal for v = 0. Assume that we have calculated the optimal contracts and the transition points up to some transition point v, for which S is an optimal contract with the highest success probability. We show how to calculate the next transition point and the next optimal contract. By Lemma 3 the next contract in the orbit (for higher values) has a higher success probability (there are no two sets with the same success probability in the orbit). We calculate the next optimal contract by the following procedure: we go over all sets T such that t(T) > t(S), and calculate the value for which the principal is indifferent between contracting with T and contracting with S. The minimal indifference value is the next transition point, and the contract achieving it is the next optimal contract. Linearity of the utility in the value and monotonicity of the success probability of the optimal contracts ensure that the above works. Clearly the above calculation is polynomial in the input size. Once we have the orbit, it is clear that an optimal contract for any given value v can be calculated: we find the largest transition point that is not larger than
the value v, and the optimal contract at v is the set with the higher success probability at this transition point. Finally, as we can calculate the orbit of the technology in both the agency and the non-strategic cases in polynomial time, we can find the price of unaccountability in polynomial time: by Lemma 1 the price of unaccountability POU(t, c) is obtained at some transition point, so we only need to go over all transition points and find the one with the maximal social welfare ratio.

A more interesting question is whether, given the function t as a black box, we can compute the optimal contract in time that is polynomial in n. We can show that, in general, this is not the case.

Theorem 5. Given as input a black box for a success function t (when the costs are identical) and a value v, the number of queries needed, in the worst case, to find the optimal contract is exponential in n.

Proof. Consider the following family of technologies. For some small ε > 0 and k = n/2, we define the success probability for a given set T as follows. If |T| < k, then t(T) = |T| · ε. If |T| > k, then t(T) = 1 − (n − |T|) · ε. For each set of agents T̂ of size k, the technology t_T̂ is defined by t(T̂) = 1 − (n − |T̂|) · ε and t(T) = |T| · ε for any T ≠ T̂ of size k. For the value v = c · (k + 1/2), the optimal contract for t_T̂ is T̂ (for the contract T̂ the utility of the principal is about v − c · k = 1/2 · c > 0, while for any other contract the utility is negative). If the algorithm queries at most C(n, n/2) − 2 sets of size k, then it cannot always determine the optimal contract (as any of the sets it has not queried might be the optimal one). We conclude that C(n, n/2) − 1 queries are needed to determine the optimal contract, and this is exponential in n.
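For the anonymous case, the orbit-construction procedure in the proof of Proposition 1 is short enough to sketch directly. The following Python sketch (names ours) reproduces Example 1's transition at v = 6, the single 0 → n transition of Theorem 1, and the n transitions of Theorem 2, for γ = 1/4 and c = 1:

```python
from fractions import Fraction as F

def pay(t, c, k):
    """Expected total payment with k contracted agents: k * c * t_k / (t_k - t_{k-1})."""
    return F(0) if k == 0 else k * c * t[k] / (t[k] - t[k - 1])

def orbit(t, c):
    """Transition values and contract sizes for an anonymous technology
    t = [t_0, ..., t_n] (strictly increasing): from the current size k,
    the next contract is the one with the minimal indifference value."""
    transitions, k = [], 0
    while k < len(t) - 1:
        v, m = min(((pay(t, c, m) - pay(t, c, k)) / (t[m] - t[k]), m)
                   for m in range(k + 1, len(t)))
        transitions.append((v, m))
        k = m
    return transitions

g, c = F(1, 4), F(1)
AND = lambda n: [(1 - g) ** m * g ** (n - m) for m in range(n + 1)]
OR = lambda n: [1 - g ** m * (1 - g) ** (n - m) for m in range(n + 1)]

assert orbit(AND(2), c) == [(F(6), 2)]               # Example 1: 0 -> 2 at v = 6
assert [m for _, m in orbit(AND(3), c)] == [3]       # Theorem 1: single 0 -> n jump
assert [m for _, m in orbit(OR(3), c)] == [1, 2, 3]  # Theorem 2: all n transitions
```

For non-anonymous technologies the same procedure applies, with the minimization running over all sets T with t(T) > t(S) instead of over sizes m > k.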
5.2 Structured Technologies

In this section we consider the natural representation of read-once networks for the underlying Boolean function. Thus the problem we address is:

The Optimal Contract Problem for Read-Once Networks:
Input: A read-once network G = (V, E) with two specific vertices s, t; rational values γ_e, δ_e for each player e ∈ E (and c_e = 1); and a rational value v.
Output: A set S of agents who should be contracted in an optimal contract.

Let t(E) denote the probability of success when each edge succeeds with probability δ_e. We first notice that even computing the value t(E) is a hard problem: it is called the network reliability problem and is known to be #P-hard [8]. Just a little effort will reveal that our problem is not easier.

Theorem 6. The Optimal Contract Problem for Read-Once Networks is #P-hard (under Turing reductions).

Proof. We will show that an algorithm for this problem can be used to solve the network reliability problem. Given an instance of a network reliability problem ⟨G, {ζ_e}_{e∈E}⟩ (where ζ_e denotes e's probability of success), we define an instance of the optimal contract problem as follows: first define a new graph G′, obtained by And-ing G with a new player x, with γ_x very close to 1/2 and δ_x = 1 − γ_x. For the other edges, we let δ_e = ζ_e and γ_e = ζ_e/2. By choosing γ_x close enough to 1/2, we can make sure that player x will enter the optimal contract only for very large values of v, after all other agents are contracted (if we can find the optimal contract for any value, it is easy to find a value for which in the original network the optimal contract is E, by repeatedly doubling the value and asking for the optimal contract; once we find such a value, we choose γ_x s.t.
c 1\u22122\u03b3x is larger than that value).\nLet us denote \u03b2x = 1 \u2212 2\u03b3x.\nThe critical value of v where player x enters the optimal contract of G , can be found using binary search over the algorithm that supposedly finds the optimal contract for any network and any value.\nNote that at this critical value v, the principal is indifferent between the set E and E \u222a {x}.\nNow when we write the expression for this indifference, in terms of t(E) and \u0394t i(E) , we observe the following.\nt(E) \u00b7 \u03b3x \u00b7 v \u2212 X i\u2208E c \u03b3x \u00b7 \u0394t i(E \\ i) !\n= t(E)(1 \u2212 \u03b3x) v \u2212 X i\u2208E c (1 \u2212 \u03b3x) \u00b7 \u0394t i(E \\ i) \u2212 c t(E) \u00b7 \u03b2x !\nif and only if t(E) = (1 \u2212 \u03b3x) \u00b7 c (\u03b2x)2 \u00b7 v thus, if we can always find the optimal contract we are also able to compute the value of t(E).\nIn conclusion, computing the optimal contract in general is hard.\nThese results suggest two natural research directions.\nThe first avenue is to study families of technologies whose optimal contracts can be computed in polynomial time.\nThe second avenue is to explore approximation algorithms for the optimal contract problem.\nA possible candidate for the first direction is the family of series-parallel networks, for which the network reliability problem (computing the value of t) is polynomial.\nOpen Question 5.\nCan the optimal contract problem for Read Once series-parallel networks be solved in polynomial time?\nWe can only handle the non-trivial level of AOO networks: Lemma 8.\nGiven a Read Once AND-of-OR network such that each OR-component is an anonymous technology, the optimal contract problem can be solved in polynomial time.\nAcknowledgments.\nThis work is supported by the Israel Science Foundation, the USA-Israel Binational Science Foundation, the Lady Davis Fellowship Trust, and by a National Science Foundation grant number ANI-0331659.\n6.\nREFERENCES [1] M. Babaioff, M. 
Feldman, and N. Nisan. The Price of Purity and Free-Labor in Combinatorial Agency. Working paper, 2005.
[2] M. Babaioff, M. Feldman, and N. Nisan. Combinatorial agency, 2006. www.sims.berkeley.edu/~moshe/comb-agency.pdf.
[3] M. Feldman, J. Chuang, I. Stoica, and S. Shenker. Hidden-action in multi-hop routing. In EC'05, pages 117-126, 2005.
[4] B. Holmstrom. Moral Hazard in Teams. Bell Journal of Economics, 13:324-340, 1982.
[5] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995.
[6] N. Nisan and A. Ronen. Algorithmic mechanism design. Games and Economic Behaviour, 35:166-196, 2001. A preliminary version appeared in STOC 1999.
[7] C. Papadimitriou. Algorithms, Games, and the Internet. In Proceedings of 33rd STOC, pages 749-753, 2001.
[8] J. S. Provan and M. O. Ball. The complexity of counting cuts and of computing the probability that a graph is connected. SIAM J. Comput., 12(4):777-788, 1983.
[9] A. Ronen and L. Wahrmann. Prediction Games. WINE, pages 129-140, 2005.
[10] R. Smorodinsky and M. Tennenholtz. Sequential Information Elicitation in Multi-Agent Systems. 20th Conference on Uncertainty in AI, 2004.
[11] R. Smorodinsky and M. Tennenholtz. Overcoming Free-Riding in Multi-Party Computations - The Anonymous Case. Forthcoming, GEB, 2005.
[12] E.
Winter.\nIncentives and Discrimination.\nAmerican Economic Review, 94:764-773, 2004.","lvl-3":"Combinatorial Agency\nABSTRACT\nMuch recent research concerns systems, such as the Internet, whose components are owned and operated by different parties, each with his own \"selfish\" goal.\nThe field of Algorithmic Mechanism Design handles the issue of private information held by the different parties in such computational settings.\nThis paper deals with a complementary problem in such settings: handling the \"hidden actions\" that are performed by the different parties.\nOur model is a combinatorial variant of the classical principal-agent problem from economic theory.\nIn our setting a principal must motivate a team of strategic agents to exert costly effort on his behalf, but their actions are hidden from him.\nOur focus is on cases where complex combinations of the efforts of the agents influence the outcome.\nThe principal motivates the agents by offering to them a set of contracts, which together put the agents in an equilibrium point of the induced game.\nWe present formal models for this setting, suggest and embark on an analysis of some basic issues, but leave many questions open.\n1.\nINTRODUCTION\n1.1 Background\nOne of the most striking characteristics of modern computer networks--in particular the Internet--is that different parts of it are owned and operated by different individuals, firms, and organizations.\nThe analysis and design of protocols for this environment thus naturally needs to take into account the different \"selfish\" economic interests of the different participants.\nIndeed, the last few years have seen much work addressing this issue using game-theoretic notions (see [7] for an influential survey).\nA significant part of the difficulty stems from underlying asymmetries of information: one participant may not know everything that is known or done by another.\nIn particular, the field of algorithmic mechanism design [6] uses appropriate
incentives to \"extract\" the private information from the participants.\nThis paper deals with the complementary lack of knowledge, that of hidden actions.\nIn many cases the actual behaviors--actions--of the different participants are \"hidden\" from others and only influence the final outcome indirectly.\n\"Hidden\" here covers a wide range of situations including \"not precisely measurable\", \"costly to determine\", or even \"non-contractible\"--meaning that it cannot be formally used in a legal contract.\nAn example that was discussed in [3] is Quality of Service routing in a network: every intermediate link or router may exert a different amount of \"effort\" (priority, bandwidth, ...) when attempting to forward a packet of information.\nWhile the final outcome of whether a packet reached its destination is clearly visible, it is rarely feasible to monitor the exact amount of effort exerted by each intermediate link--how can we ensure that they really do exert the appropriate amount of effort?\nMany other complex resource allocation problems exhibit similar hidden actions, e.g., a task that runs on a collection of shared servers may be allocated, by each server, an unknown percentage of the CPU's processing power or of the physical memory.\nHow can we ensure that the right combination of allocations is actually made by the different servers?\nA related class of examples concerns security issues: each \"link\" in a complex system may exert different levels of effort for protecting some desired security property of the system.\nHow can we ensure that the desired level of\nCategories and Subject Descriptors\n1.2 Our Models\n1.3 Our Results\n2.\nMODEL AND PRELIMINARIES\n2.1 The General Setting\n2.2 The Binary-Outcome Binary-Action Model\n2.3 Structured Technology Functions\n3.1 AND and OR Technologies\n3.2 What Determines the Transitions?\n3.3 The MAJORITY Technology\n4.\nNON-ANONYMOUS TECHNOLOGIES\n4.1 AOO and OOA Technologies\n4.2 Orbit 
Characterization\n5.\nALGORITHMIC ASPECTS\nOur analysis throughout the paper sheds some light on the algorithmic aspects of computing the best contract.\nIn this section we state these implications (for the proofs see [2]).\nWe first consider the general model where the technology function is given by an arbitrary monotone function t (with rational values), and we then consider the case of structured technologies given by a network representation of the underlying Boolean function.\n5.1 Binary-Outcome Binary-Action Technologies\nHere we assume that we are given a technology and value v as the input, and our output should be the optimal contract, i.e. the set S * of agents to be contracted and the contract pi for each i E S *.\nIn the general case, the success function t is of size exponential in n, the number of agents, and we will need to deal with that.\nIn the special case of anonymous technologies, the description of t is only the n +1 numbers t0,..., tn, and in this case our analysis in section 3 completely suffices for computing the optimal contract.\n\u2022 The orbit of the technology in both the agency and the non-strategic cases.\n\u2022 An optimal contract for any given value v, for both the agency and the non-strategic cases.\n\u2022 The price of unaccountability POU (t, ~ c).\nPROOF.\nWe prove the claims for the non-anonymous case, the proof for the anonymous case is similar.\nWe first show how to construct the orbit of the technology (the same procedure apply in both cases).\nTo construct the orbit we find all transition points and the sets that are in the orbit.\nThe empty contract is always optimal for v = 0.\nAssume that we have calculated the optimal contracts and the transition points up to some transition point v for which S is an optimal contract with the highest success probability.\nWe show how to calculate the next transition point and the next optimal contract.\nBy Lemma 3 the next contract on the orbit (for higher values) has a higher 
success probability (there are no two sets with the same success probability on the orbit).\nWe calculate the next optimal contract by the following procedure.\nWe go over all sets T such that t (T)> t (S), and calculate the value for which the principal is indifferent between contracting with T and contracting with S.\nThe minimal indifference value is the next transition point and the contract that has the minimal indifference value is the next optimal contract.\nLinearity of the utility in the value and monotonicity of the success probability of the optimal contracts ensure that the above works.\nClearly the above calculation is polynomial in the input size.\nOnce we have the orbit, it is clear that an optimal contract for any given value v can be calculated.\nWe find the largest transition point that is not larger than the value v, and the optimal contract at v is the set with the higher success probability at this transition point.\nFinally, as we can calculate the orbit of the technology in both the agency and the non-strategic cases in polynomial time, we can find the price of unaccountability in polynomial time.\nBy Lemma 1 the price of unaccountability POU (t) is obtained at some transition point, so we only need to go over all transition points, and find the one with the maximal social welfare ratio.\nA more interesting question is whether if given the function t as a black box, we can compute the optimal contract in time that is polynomial in n.\nWe can show that, in general this is not the case: THEOREM 5.\nGiven as input a black box for a success function t (when the costs are identical), and a value v, the number of queries that is needed, in the worst case, to find the optimal contract is exponential in n. PROOF.\nConsider the following family of technologies.\nFor some small e> 0 and k = [n\/2] we define the success probability for a given set T as follows.\nIf ITI k, then t (T) = 1--(n--ITI) \u2022 E. 
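The orbit-construction procedure described above admits a direct implementation in the anonymous, identical-cost case, where t is given explicitly by the n+1 numbers t0,..., tn. The following is a minimal sketch, not the paper's code: the function names and the list-based representation are our own, and we assume the ti are strictly increasing and that each contracted agent is paid c/(tk − tk−1) upon success, as in the analysis above.

```python
# Sketch of the orbit construction described above, for an anonymous
# technology with identical costs: t[k] is the success probability when
# exactly k agents exert effort, and c is the common cost of effort.
# Helper names are ours; we assume t is strictly increasing.

def expected_payment(t, k, c):
    # Total expected payment when k agents are contracted: each agent is
    # paid c / (t[k] - t[k-1]) on success, and success occurs w.p. t[k].
    if k == 0:
        return 0.0
    return k * c * t[k] / (t[k] - t[k - 1])

def orbit(t, c):
    # Returns the list of (transition value, k): from each transition
    # value onward, contracting k agents is optimal (until the next one).
    points = [(0.0, 0)]  # the empty contract is optimal at v = 0
    k = 0
    while k < len(t) - 1:
        best = None
        for j in range(k + 1, len(t)):
            if t[j] <= t[k]:
                continue  # next contract must raise the success probability
            # Value v at which the principal is indifferent between k and j:
            # t[j]*v - P(j) = t[k]*v - P(k)  =>  v = (P(j)-P(k)) / (t[j]-t[k])
            v = (expected_payment(t, j, c)
                 - expected_payment(t, k, c)) / (t[j] - t[k])
            if best is None or v < best[0]:
                best = (v, j)
        if best is None:
            break
        points.append(best)
        k = best[1]
    return points
```

For example, with t = [0.1, 0.5, 0.9] and c = 1 the empty contract is optimal up to v = 3.125, one agent up to v = 8.125, and both agents beyond that. The non-strategic case is analogous, with the expected-payment term replaced by the total cost k·c.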
For each set of agents T\u02c6 of size k, the technology t T\u02c6 is defined by t(T\u02c6) = 1 \u2212 (n \u2212 |T\u02c6|) \u00b7 \u03b5 and t(T) = |T| \u00b7 \u03b5 for any T \u2260 T\u02c6 of size k. For the value v = c \u00b7 (k + 1\/2), the optimal contract for t T\u02c6 is T\u02c6 (for the contract T\u02c6 the utility of the principal is about v \u2212 c \u00b7 k = 1\/2 \u00b7 c > 0, while for any other contract the utility is negative).\nIf the algorithm queries about at most (n choose \u2308n\/2\u2309) \u2212 2 sets of size k, then it cannot always determine the optimal contract (as any of the sets that it has not queried about might be the optimal one).\nWe conclude that (n choose \u2308n\/2\u2309) \u2212 1 queries are needed to determine the optimal contract, and this is exponential in n.\n5.2 Structured Technologies\nIn this section we will consider the natural representation of read-once networks for the underlying Boolean function.\nThus the problem we address will be: The Optimal Contract Problem for Read Once Networks: Input: A read-once network G = (V, E), with two specific vertices s, t; rational values \u03b3e, \u03b4e for each player e \u2208 E (and ce = 1), and a rational value v.
Output: A set S of agents who should be contracted in an optimal contract.\nLet t(E) denote the probability of success when each edge succeeds with probability \u03b4e.\nWe first notice that even computing the value t(E) is a hard problem: it is called the network reliability problem and is known to be #P-hard [8].\nJust a little effort will reveal that our problem is not easier: THEOREM 6.\nThe Optimal Contract Problem for Read Once Networks is #P-hard (under Turing reductions).\nPROOF.\nWe will show that an algorithm for this problem can be used to solve the network reliability problem.\nGiven an instance of the network reliability problem (where \u03c1e denotes e's probability of success), we define an instance of the optimal contract problem as follows: first define a new graph G' which is obtained by \"AND\"ing G with a new player x, with \u03b3x very close to 1\/2 and \u03b4x = 1 \u2212 \u03b3x.\nFor the other edges, we let \u03b4e = \u03c1e and \u03b3e = \u03c1e\/2.\nBy choosing \u03b3x close enough to 1\/2, we can make sure that player x will enter the optimal contract only for very large values of v, after all other agents are contracted (if we can find the optimal contract for any value, it is easy to find a value for which in the original network the optimal contract is E, by repeatedly doubling the value and asking for the optimal contract.\nOnce we find such a value, we choose \u03b3x s.t.
c\/(1\u22122\u03b3x) is larger than that value).\nLet us denote \u03b2x = 1 \u2212 2\u03b3x.\nThe critical value of v at which player x enters the optimal contract of G' can be found using binary search over the algorithm that supposedly finds the optimal contract for any network and any value.\nNote that at this critical value v, the principal is indifferent between the set E and E \u222a {x}.\nWriting out this indifference in terms of t(E) and \u0394ti(E), we observe that it holds if and only if t(E) = ((1 \u2212 \u03b3x) \u00b7 c) \/ ((\u03b2x)^2 \u00b7 v); thus, if we can always find the optimal contract we are also able to compute the value of t(E).\nIn conclusion, computing the optimal contract in general is hard.\nThese results suggest two natural research directions.\nThe first avenue is to study families of technologies whose optimal contracts can be computed in polynomial time.\nThe second avenue is to explore approximation algorithms for the optimal contract problem.\nA possible candidate for the first direction is the family of series-parallel networks, for which the network reliability problem (computing the value of t) is polynomial.","lvl-4":"Combinatorial Agency\nABSTRACT\nMuch recent research concerns systems, such as the Internet, whose components are owned and operated by different parties, each with his own \"selfish\" goal.\nThe field of Algorithmic Mechanism Design handles the issue of private information held by the different parties in such computational settings.\nThis paper deals with a complementary problem in such settings: handling the \"hidden actions\" that are performed by the different parties.\nOur model is a combinatorial variant of the classical principal-agent problem from economic theory.\nIn our setting a principal must motivate a team of strategic agents to exert costly effort on his behalf, but their actions are hidden from him.\nOur focus is on cases where complex combinations of the efforts of the agents influence the outcome.\nThe principal motivates the agents by offering to them a
set of contracts, which together put the agents in an equilibrium point of the induced game.\nWe present formal models for this setting, suggest and embark on an analysis of some basic issues, but leave many questions open.\n1.\nINTRODUCTION\n1.1 Background\nOne of the most striking characteristics of modern computer networks--in particular the Internet--is that different parts of it are owned and operated by different individuals, firms, and organizations.\nThe analysis and design of protocols for this environment thus naturally needs to take into account the different \"selfish\" economic interests of the different participants.\nIn particular, the field of algorithmic mechanism design [6] uses appropriate incentives to \"extract\" the private information from the participants.\nThis paper deals with the complementary lack of knowledge, that of hidden actions.\nIn many cases the actual behaviors--actions--of the different participants are \"hidden\" from others and only influence the final outcome indirectly.\nHow can we ensure that the right combination of allocations is actually made by the different servers?\nA related class of examples concerns security issues: each \"link\" in a complex system may exert different levels of effort for protecting some desired security property of the system.\nHow can we ensure that the desired level of collective security is obtained?\n5.\nALGORITHMIC ASPECTS\nOur analysis throughout the paper sheds some light on the algorithmic aspects of computing the best contract.\nIn this section we state these implications (for the proofs see [2]).\nWe first consider the general model where the technology function is given by an arbitrary monotone function t (with rational values), and we then consider the case of structured technologies given by a network representation of the underlying Boolean function.\n5.1 Binary-Outcome Binary-Action Technologies\nHere we assume that we are given a technology and value v as the input, and our output should be the optimal contract,
i.e. the set S* of agents to be contracted and the contract pi for each i \u2208 S*.\nIn the general case, the success function t is of size exponential in n, the number of agents, and we will need to deal with that.\nIn the special case of anonymous technologies, the description of t is only the n+1 numbers t0,..., tn, and in this case our analysis in section 3 completely suffices for computing the optimal contract.\nFor a technology given explicitly, the following can be computed in polynomial time: \u2022 The orbit of the technology in both the agency and the non-strategic cases.\n\u2022 An optimal contract for any given value v, for both the agency and the non-strategic cases.\n\u2022 The price of unaccountability POU(t, ~c).\nPROOF.\nWe prove the claims for the non-anonymous case; the proof for the anonymous case is similar.\nWe first show how to construct the orbit of the technology (the same procedure applies in both cases).\nTo construct the orbit we find all transition points and the sets that are in the orbit.\nThe empty contract is always optimal for v = 0.\nAssume that we have calculated the optimal contracts and the transition points up to some transition point v for which S is an optimal contract with the highest success probability.\nWe show how to calculate the next transition point and the next optimal contract.\nBy Lemma 3 the next contract on the orbit (for higher values) has a higher success probability (there are no two sets with the same success probability on the orbit).\nWe calculate the next optimal contract by the following procedure.\nWe go over all sets T such that t(T) > t(S), and calculate the value for which the principal is indifferent between contracting with T and contracting with S.\nThe minimal indifference value is the next transition point, and the contract that attains it is the next optimal contract.\nLinearity of the utility in the value and monotonicity of the success probability of the optimal contracts ensure that the above works.\nClearly the above calculation is polynomial in the
input size.\nOnce we have the orbit, it is clear that an optimal contract for any given value v can be calculated.\nWe find the largest transition point that is not larger than the value v, and the optimal contract at v is the set with the higher success probability at this transition point.\nFinally, as we can calculate the orbit of the technology in both the agency and the non-strategic cases in polynomial time, we can find the price of unaccountability in polynomial time.\nBy Lemma 1 the price of unaccountability POU(t) is obtained at some transition point, so we only need to go over all transition points, and find the one with the maximal social welfare ratio.\nA more interesting question is whether, given the function t as a black box, we can compute the optimal contract in time that is polynomial in n.\nWe can show that, in general, this is not the case: THEOREM 5.\nGiven as input a black box for a success function t (when the costs are identical), and a value v, the number of queries that is needed, in the worst case, to find the optimal contract is exponential in n. PROOF.\nConsider the following family of technologies.\nFor some small \u03b5 > 0 and k = \u2308n\/2\u2309 we define the success probability for a given set T as follows.\nIf |T| > k, then t(T) = 1 \u2212 (n \u2212 |T|) \u00b7 \u03b5, and if |T| \u2264 k, then t(T) = |T| \u00b7 \u03b5.\nIf the algorithm queries about at most (n choose \u2308n\/2\u2309) \u2212 2 sets of size k, then it cannot always determine the optimal contract (as any of the sets that it has not queried about might be the optimal one).\nWe conclude that (n choose \u2308n\/2\u2309) \u2212 1 queries are needed to determine the optimal contract, and this is exponential in n.\n5.2 Structured Technologies\nIn this section we will consider the natural representation of read-once networks for the underlying Boolean function.\nThus the problem we address will be: The Optimal Contract Problem for Read Once Networks: Input: A read-once network G = (V, E), with two specific vertices s, t; rational values \u03b3e, \u03b4e for each player e \u2208 E (and ce = 1), and a rational value v.
Output: A set S of agents who should be contracted in an optimal contract.\nLet t(E) denote the probability of success when each edge succeeds with probability \u03b4e.\nWe first notice that even computing the value t(E) is a hard problem: it is called the network reliability problem and is known to be #P-hard [8].\nJust a little effort will reveal that our problem is not easier: THEOREM 6.\nThe Optimal Contract Problem for Read Once Networks is #P-hard (under Turing reductions).\nPROOF.\nWe will show that an algorithm for this problem can be used to solve the network reliability problem.\nGiven an instance of the network reliability problem (where \u03c1e denotes e's probability of success), we define an instance of the optimal contract problem as follows: first define a new graph G' which is obtained by \"AND\"ing G with a new player x, with \u03b3x very close to 1\/2 and \u03b4x = 1 \u2212 \u03b3x.\nOnce we find such a value, we choose \u03b3x s.t. c\/(1\u22122\u03b3x) is larger than that value).\nLet us denote \u03b2x = 1 \u2212 2\u03b3x.\nThe critical value of v at which player x enters the optimal contract of G' can be found using binary search over the algorithm that supposedly finds the optimal contract for any network and any value.\nNote that at this critical value v, the principal is indifferent between the set E and E \u222a {x}; thus, if we can always find the optimal contract we are also able to compute the value of t(E).\nIn conclusion, computing the optimal contract in general is hard.\nThese results suggest two natural research directions.\nThe first avenue is to study families of technologies whose optimal contracts can be computed in polynomial time.\nThe second avenue is to explore approximation algorithms for the optimal contract problem.\nA possible candidate for the first direction is the family of series-parallel networks, for which the network reliability problem (computing the value of t) is polynomial.","lvl-2":"Combinatorial Agency\nABSTRACT\nMuch recent
research concerns systems, such as the Internet, whose components are owned and operated by different parties, each with his own \"selfish\" goal.\nThe field of Algorithmic Mechanism Design handles the issue of private information held by the different parties in such computational settings.\nThis paper deals with a complementary problem in such settings: handling the \"hidden actions\" that are performed by the different parties.\nOur model is a combinatorial variant of the classical principal-agent problem from economic theory.\nIn our setting a principal must motivate a team of strategic agents to exert costly effort on his behalf, but their actions are hidden from him.\nOur focus is on cases where complex combinations of the efforts of the agents influence the outcome.\nThe principal motivates the agents by offering to them a set of contracts, which together put the agents in an equilibrium point of the induced game.\nWe present formal models for this setting, suggest and embark on an analysis of some basic issues, but leave many questions open.\n1.\nINTRODUCTION\n1.1 Background\nOne of the most striking characteristics of modern computer networks--in particular the Internet--is that different parts of it are owned and operated by different individuals, firms, and organizations.\nThe analysis and design of protocols for this environment thus naturally needs to take into account the different \"selfish\" economic interests of the different participants.\nIndeed, the last few years have seen much work addressing this issue using game-theoretic notions (see [7] for an influential survey).\nA significant part of the difficulty stems from underlying asymmetries of information: one participant may not know everything that is known or done by another.\nIn particular, the field of algorithmic mechanism design [6] uses appropriate incentives to \"extract\" the private information from the participants.\nThis paper deals with the complementary lack of knowledge, that of
hidden actions.\nIn many cases the actual behaviors--actions--of the different participants are \"hidden\" from others and only influence the final outcome indirectly.\n\"Hidden\" here covers a wide range of situations including \"not precisely measurable\", \"costly to determine\", or even \"non-contractible\"--meaning that it cannot be formally used in a legal contract.\nAn example that was discussed in [3] is Quality of Service routing in a network: every intermediate link or router may exert a different amount of \"effort\" (priority, bandwidth, ...) when attempting to forward a packet of information.\nWhile the final outcome of whether a packet reached its destination is clearly visible, it is rarely feasible to monitor the exact amount of effort exerted by each intermediate link--how can we ensure that they really do exert the appropriate amount of effort?\nMany other complex resource allocation problems exhibit similar hidden actions, e.g., a task that runs on a collection of shared servers may be allocated, by each server, an unknown percentage of the CPU's processing power or of the physical memory.\nHow can we ensure that the right combination of allocations is actually made by the different servers?\nA related class of examples concerns security issues: each \"link\" in a complex system may exert different levels of effort for protecting some desired security property of the system.\nHow can we ensure that the desired level of collective security is obtained?\nOur approach to this problem is based on the well-studied principal-agent problem in economic theory: How can a \"principal\" motivate a rational \"agent\" to exert costly effort towards the welfare of the principal?\nThe crux of the model is that the agent's action (i.e.
whether he exerts effort or not) is invisible to the principal and only the final outcome, which is probabilistic and also influenced by other factors, is visible.\nThis problem is well studied in many contexts in classical economic theory and we refer the readers to introductory texts on economic theory such as [5] Chapter 14.\nThe solution is based on the observation that a properly designed contract, in which the payments are contingent upon the final outcome, can influence a rational agent to exert the required effort.\nIn this paper we initiate a general study of handling combinations of agents rather than a single agent.\nWhile much work was already done on motivating teams of agents [4], our emphasis is on dealing with the complex combinatorial structure of dependencies between agents' actions.\nIn the general case, each combination of efforts exerted by the n different agents may result in a different expected gain for the principal.\nThe general question asks which conditional payments should the principal offer to which agents so as to maximize his net utility?\nIn our setting and unlike in previous work (see, e.g., [12]), the main challenge is to determine the optimal amount of effort desired from each agent.\nThis paper suggests models for and provides some interesting initial results about this \"combinatorial agency\" problem.\nWe believe that we have only scratched the surface and leave many open questions, conjectures, and directions for further research.\nWe believe that this type of analysis may also find applications in \"regular\" economic activity.\nConsider for example a firm that sub-contracts a family of related tasks to many individuals (or other firms).\nIt will often not be possible to exactly monitor the actual effort level of each sub-contractor (e.g., in cases of public-relations activities, consulting activities, or any activities that require cooperation between different sub-contractors.)\nWhen the dependencies between the different
subtasks are complex, we believe that combinatorial agency models can offer a foundation for the design of contracts with appropriate incentives.\nIt may also be useful to view our work as part of a general research agenda stemming from the fact that all types of economic activity are increasingly being handled with the aid of sophisticated computer systems.\nIn general, in such computerized settings, complex scenarios involving multiple agents and goods can naturally occur, and they need to be algorithmically handled.\nThis calls for the study of the standard issues in economic theory in new complex settings.\nThe principal-agent problem is a prime example where such complex settings introduce new challenges.\n1.2 Our Models\nWe start by presenting a general model: in this model each of n agents has a set of possible actions, the combination of actions by the players results in some outcome, where this happens probabilistically.\nThe main part of the specification of a problem in this model is a function that specifies this distribution for each n-tuple of agents' actions.\nAdditionally, the problem specifies the principal's utility for each possible outcome, and for each agent, the agent's cost for each possible action.\nThe principal motivates the agents by offering to each of them a contract that specifies a payment for each possible outcome of the whole project1.\nKey here is that the actions of the players are non-observable and thus the contract cannot make the payments directly contingent on the actions of the players, but rather only on the outcome of the whole project.\nGiven a set of contracts, the agents will each optimize his own utility: i.e. 
will choose the action that maximizes his expected payment minus the cost of his action.\nSince the outcome depends on the actions of all players together, the agents are put in a game and are assumed to reach a Nash equilibrium2.\nThe principal's problem, our problem in this paper, is of designing an optimal set of contracts: i.e. contracts that maximize his expected utility from the outcome, minus his expected total payment.\nThe main difficulty is that of determining the required Nash equilibrium point.\nIn order to focus on the main issues, the rest of the paper deals with the basic binary case: each agent has only two possible actions, \"exert effort\" and \"shirk\", and there are only two possible outcomes, \"success\" and \"failure\".\nIt seems that this case already captures the main interesting ingredients3.\nIn this case, each agent's problem boils down to whether to exert effort or not, and the principal's problem boils down to which agents should be contracted to exert effort.\nThis model is still pretty abstract, and every problem description contains a complete table specifying the success probability for each subset of the agents who exert effort.\nWe then consider a more concrete model which concerns a subclass of problem instances where this exponential size table is succinctly represented.\nThis subclass will provide many natural types of problem instances.\nIn this subclass every agent performs a subtask which succeeds with a low probability \u03b3 if the agent does not exert effort and with a higher probability \u03b4 > \u03b3 if the agent does exert effort.\nThe whole project succeeds as a deterministic Boolean function of the success of the subtasks.\nThis Boolean function can now be represented in various ways.\nTwo basic examples are the \"AND\" function in which the project succeeds only if all subtasks succeed, and the \"OR\" function which succeeds if any of the subtasks succeeds.\nA more complex example considers a communication network, where
each agent controls a single edge, and success of the subtask means that a message is forwarded by that edge.\n\"Effort\" by the edge increases this success probability.\nThe complete project succeeds if there is a complete path of successful edges between a given source and sink.\nComplete definitions of the models appear in Section 2.\n1.3 Our Results\nWe address a host of questions and prove a large number of results.\nWe believe that despite the large amount of work that appears here, we have only scratched the surface.\nIn many cases we were not able to achieve the general characterization theorems that we desired and had to settle for analyzing special cases or proving partial results.\nIn many cases, simulations reveal structure that we were not able to formally prove.\nWe present here an informal overview of the issues that we studied, what we were able to do, and what we were not.\nThe full treatment of most of our results appears only in the extended version [2], and only some are discussed, often with associated simulation results, in the body of the paper.\nOur first object of study is the structure of the class of sets of agents that can be contracted for a given problem instance.\nLet us fix a given function describing success probabilities, fix the agent's costs, and let us consider the set of contracted agents for different values of the principal's associated value from success.\nFor very low values, no agent will be contracted since even a single agent's cost is higher than the principal's value.\nFor very high values, all agents will always be contracted since the marginal contribution of an agent multiplied by the principal's value will overtake any associated payment.\nWhat happens for intermediate principal's values?\nWe first observe that there is a finite number of \"transitions\" between different sets, as the principal's project value increases.\nThese transitions behave very differently for different functions.\nFor example, we show that 
for the AND function only a single transition occurs: for low enough values no agent will be contracted, while for higher values all agents will be contracted--there is no intermediate range for which only some of the agents are contracted.\nFor the OR function, the situation is opposite: as the principal's value increases, the set of contracted agents increases one-by-one.\nWe are able to fully characterize the types of functions for which these two extreme types of transitions behavior occur.\nHowever, the structure of these transitions in general seems quite complex, and we were not able to fully analyze them even in simple cases like the\" Majority function\" (the project succeeds if a majority of subtasks succeeds) or very simple networks.\nWe do have several partial results, including a construction with an exponential number of transitions.\nDuring the previous analysis we also study what we term \"the price of unaccountability\": How much is the social utility achieved under the optimal contracts worse than what could be achieved in the non-strategic case4, where the socially optimal actions are simply dictated by the principal?\nWe are able to fully analyze this price for the\" AND\" function, where it is shown to tend to infinity as the number of agents tends to infinity.\nMore general analysis remains an open problem.\nOur analysis of these questions sheds light on the difficulty of the various natural associated algorithmic problems.\nIn particular, we observe that the optimal contract can be found in time polynomial in the explicit representation of the probability function.\nWe prove a lower bound that shows that the optimal contract cannot be found in number of queries that is polynomial just in the number of agents, in a general black-box model.\nWe also show that when the probability function is succinctly represented as 4The non-strategic case is often referred to as the case with\" contractible actions\" or the principal's\" first-best\" 
solution.

a read-once network, the problem becomes #P-hard. The status of some algorithmic questions remains open, in particular that of finding the optimal contract for technologies defined by series-parallel networks.

In a follow-up paper [1] we deal with equilibria in mixed strategies and show that the principal can gain from inducing a mixed-Nash equilibrium between the agents rather than a pure one. We also show cases where the principal can gain by asking agents to reduce their effort level, even when this effort comes for free. Both phenomena cannot occur in the non-strategic setting.

2. MODEL AND PRELIMINARIES

2.1 The General Setting

A principal employs a set of agents N of size n. Each agent i ∈ N has a possible set of actions Ai, and a cost (effort) ci(ai) ≥ 0 for each possible action ai ∈ Ai (ci: Ai → ℝ⁺). The actions of all players determine, in a probabilistic way, a "contractible" outcome o ∈ O, according to a success function t: A1 × ... × An → Δ(O) (where Δ(O) denotes the set of probability distributions on O). A technology is a pair (t, c) of a success function t and cost functions c = (c1, c2, ..., cn). The principal has a certain value for each possible outcome, given by the function v: O → ℝ. As we will only consider risk-neutral players in this paper⁵, we will also treat v as a function on Δ(O), by taking the expected value. Actions of the players are invisible to the principal, but the final outcome o is visible to him and to others (in particular the court), and he may design enforceable contracts based on the final outcome. Thus the contract for agent i is a payment function pi: O → ℝ; again, we will also view pi as a function on Δ(O). Given this setting, the agents have been put in a game, where the utility of agent i under the vector of actions a = (a1, ..., an) is given by ui(a) = pi(t(a)) − ci(ai). The agents will be assumed to reach Nash equilibrium, if such an equilibrium exists. The principal's problem (which is our problem in this paper) is how to design the contracts pi so as to maximize his own expected utility u(a) = v(t(a)) − Σi pi(t(a)), where the actions a1, ..., an are at Nash equilibrium. In the case of multiple Nash equilibria we let the principal choose the equilibrium, thus focusing on the "best" Nash equilibrium. A variant, similar in spirit to "strong implementation" in mechanism design, would be to take the worst Nash equilibrium, or, stronger yet, to require that only a single equilibrium exists. Finally, the social welfare for a ∈ A is u(a) + Σ_{i∈N} ui(a) = v(t(a)) − Σ_{i∈N} ci(ai).

2.2 The Binary-Outcome Binary-Action Model

As we wish to concentrate on the complexities introduced by the combinatorial structure of the success function t, we restrict ourselves to a simpler setting that focuses more clearly on the structure of t. A similar model was used in [12]. We first restrict the action spaces to have only two states (binary-action): 0 (low effort) and 1 (high effort). The cost function of agent i is now just a scalar ci > 0 denoting the cost of exerting high effort (the low effort has cost 0). The vector of costs is c⃗ = (c1, c2, ..., cn), and we use the notation (t, c⃗) to denote a technology in such a model. We then restrict the outcome space to have only two states (binary-outcome): 0 (project failure) and 1 (project success). The principal's value for a successful project is given by a scalar v > 0 (the value of project failure is 0). We assume that the principal can pay the agents but not fine them (the limited liability constraint). The contract of agent i is thus now given by a scalar pi ≥ 0 that denotes the payment that i gets in case of project success. If the project fails, the agent gets 0. When the lowest-cost action has zero cost (as we assume), this immediately implies that the participation constraint holds.

⁵ The risk-averse case would obviously be a natural second step in the research of this model, as it has been for non-combinatorial scenarios.

At this point the success function t becomes a function t: {0,1}ⁿ → [0,1], where t(a1, ..., an) denotes the probability of project success when players with ai = 0 do not exert effort and incur no cost, and players with ai = 1 exert effort and incur a cost of ci. As we wish to concentrate on motivating agents, rather than on coordination between agents, we assume that more effort by an agent always leads to a higher probability of success, i.e. that the success function t is strictly monotone. Formally, if we denote by a−i ∈ A−i the (n−1)-dimensional vector of the actions of all agents excluding agent i, i.e. a−i = (a1, ..., ai−1, ai+1, ..., an), then a success function must satisfy: ∀i ∈ N, ∀a−i ∈ A−i: t(1, a−i) > t(0, a−i). Additionally, we assume that t(a) > 0 for any a ∈ A (or equivalently, t(0, 0, ..., 0) > 0). Let Δi(a−i) = t(1, a−i) − t(0, a−i) denote the marginal contribution of agent i; since t is monotone, Δi is a strictly positive function.

At this point we can already make some simple observations. The best action ai ∈ Ai of agent i can easily be determined as a function of what the others do, a−i ∈ A−i, and his contract pi. This allows us to specify the contracts that are optimal for the principal for inducing a given equilibrium: pi = 0 for an agent i who exerts no effort (ai = 0), and pi = ci/Δi(a−i) for an agent who exerts effort (ai = 1). We say that the principal contracts with agent i if pi > 0 (and ai = 1 in the equilibrium a ∈ A). The principal's goal is to maximize his utility given his value v, i.e.
to determine the profile of actions a* ∈ A which gives the highest value of u(a, v) in equilibrium. Choosing a ∈ A corresponds to choosing a set S of agents that exert effort (S = {i | ai = 1}). We call the set of agents S* that the principal contracts with in a* (S* = {i | a*i = 1}) an optimal contract for the principal at value v. We sometimes abuse notation and write t(S) instead of t(a), when S is exactly the set of agents that exert effort in a ∈ A.

A natural yardstick by which to measure this decision is the non-strategic case, i.e. when the agents need not be motivated but are controlled directly by the principal (who also bears their costs). In this case the principal will simply choose the profile a ∈ A that optimizes the social welfare (global efficiency), t(a)·v − Σ_{i | ai=1} ci. The worst ratio between the social welfare in this non-strategic case and the social welfare for the profile a ∈ A chosen by the principal in the agency case may be termed the price of unaccountability. Given a technology (t, c⃗), let S*(v) denote the optimal contract in the agency case and let S*_ns(v) denote an optimal contract in the non-strategic case, when the principal's value is v. The social welfare for value v when the set S of agents is contracted is t(S)·v − Σ_{i∈S} ci (in both the agency and non-strategic cases). In cases where several sets are optimal in the agency case, we take the worst set (i.e., the set that yields the lowest social welfare). When the technology (t, c⃗) is clear from the context, we will use POU to denote the price of unaccountability of technology (t, c⃗). Note that the POU is at least 1 for any technology.

As we would like to focus on results that derive from properties of the success function, in most of the paper we will deal with the case where all agents have an identical cost c, that is, ci = c for all i ∈ N. We denote a technology (t, c⃗) with identical costs by (t, c). For simplicity of presentation, we sometimes use the term technology function to refer to the success function of the technology.

2.3 Structured Technology Functions

In order to be more concrete, we will especially focus on technology functions whose structure can be described easily as being derived from independent agent tasks; we call these structured technology functions. This subclass will give us some natural examples of technology functions, and will also provide a succinct and natural way to represent them.

In a structured technology function, each individual succeeds or fails in his own "task" independently. The project's success or failure depends, possibly in a complex way, on the set of successful subtasks. Thus we assume a monotone Boolean function f: {0,1}ⁿ → {0,1} which denotes whether the project succeeds as a function of the success of the n agents' tasks (and is not determined by any set of n−1 agents). Additionally there are constants 0 < γi < δi < 1, where γi denotes the probability of success for agent i if he does not exert effort, and δi (> γi) denotes the probability of success if he does exert effort. In order to reduce the number of parameters, we restrict our attention to the case where γ1 = ... = γn = γ and δ1 = ... = δn = 1 − γ, thus leaving ourselves with a single parameter γ such that 0 < γ < 1/2. Under this structure, the technology function t is defined by t(a1, ..., an) being the probability that f(x1, ..., xn) = 1, where the bits x1, ..., xn are chosen according to the following distribution: if ai = 0 then xi = 1 with probability γ and xi = 0 with probability 1 − γ; otherwise, i.e.
if ai = 1, then xi = 1 with probability 1 − γ and xi = 0 with probability γ. We denote x = (x1, ..., xn).

The question of the representation of the technology function is now reduced to that of representing the underlying monotone Boolean function f. In the most general case, the function f can be given by a general monotone Boolean circuit. An especially natural subclass of functions in the structured technologies setting are those that can be represented as a read-once network: a graph with a given source and sink, where every edge is labeled by a different player. The project succeeds if the edges belonging to players whose tasks succeeded form a path between the source and the sink. A few simple examples are in order here:

1. The "AND" technology: f(x1, ..., xn) is the logical conjunction of the xi (f(x) = ∧_{i∈N} xi). Thus the project succeeds only if all agents succeed in their tasks. This is shown graphically as a read-once network in Figure 1(a). If m agents exert effort (Σi ai = m), then t(a) = t_m = γ^{n−m}·(1 − γ)^m. E.g. for two players, the technology function t(a1a2) = t_{a1+a2} is given by t(00) = γ², t(01) = t(10) = γ(1 − γ), and t(11) = (1 − γ)².

2. The "OR" technology: f(x1, ..., xn) is the logical disjunction of the xi (f(x) = ∨_{i∈N} xi). Thus the project succeeds if at least one of the agents succeeds in his task. This is shown graphically as a read-once network in Figure 1(b). If m agents exert effort, then t_m = 1 − γ^m·(1 − γ)^{n−m}. E.g. for two players, the technology function is given by t(00) = 1 − (1 − γ)², t(01) = t(10) = 1 − γ(1 − γ), and t(11) = 1 − γ².

3. The "Or-of-Ands" (OOA) technology: f(x) is a logical disjunction of conjunctions. In the simplest case of equal-length clauses (denote by nc the number of clauses and by nl their length), f(x) = ∨_{j=1}^{nc} (∧_{k=1}^{nl} x_{jk}). Thus the project succeeds if in at least one clause all agents succeed in their tasks. This is shown graphically as a read-once network in Figure 2(a). If mi agents on path i exert effort, then t(m1, ..., m_{nc}) = 1 − ∏_i (1 − γ^{nl−mi}·(1 − γ)^{mi}). E.g. for four players, the technology function t(a11 a12, a21 a22) is given by t(00, 00) = 1 − (1 − γ²)², t(01, 00) = t(10, 00) = t(00, 01) = t(00, 10) = 1 − (1 − γ(1 − γ))·(1 − γ²), and so on.

Figure 1: Graphical representations of (a) AND and (b) OR technologies.

Figure 2: Graphical representations of (a) OOA and (b) AOO technologies.

4. The "And-of-Ors" (AOO) technology: f(x) is a logical conjunction of disjunctions. In the simplest case of equal-length clauses (denote by nl the number of clauses and by nc their length), f(x) = ∧_{j=1}^{nl} (∨_{k=1}^{nc} x_{jk}). Thus the project succeeds if at least one agent from each disjunctive clause succeeds in his task. This is shown graphically as a read-once network in Figure 2(b). If mi agents on clause i exert effort, then t(m1, ..., m_{nl}) = ∏_i (1 − γ^{mi}·(1 − γ)^{nc−mi}). E.g. for four players, the technology function t(a11 a12, a21 a22) is given by t(00, 00) = (1 − (1 − γ)²)², t(01, 00) = t(10, 00) = t(00, 01) = t(00, 10) = (1 − γ(1 − γ))·(1 − (1 − γ)²), and so on.

5. The "Majority" technology: f(x) is 1 if a majority of the values xi are 1. Thus the project succeeds if most players succeed. The majority function, even on 3 inputs, cannot be represented by a read-once network, but is easily represented by a monotone Boolean formula maj(x, y, z) = xy + yz + xz. In this case the technology function is given by t(000) = 3γ²(1 − γ) + γ³, t(001) = t(010) = t(100) = γ³ + 2(1 − γ)²γ + γ²(1 − γ), etc.

3. ANALYSIS OF SOME ANONYMOUS TECHNOLOGIES

A success function t is called anonymous if it is symmetric with respect to the players, i.e. t(a1, ..., an) depends only on Σ_{i∈N} ai (the number of agents that exert effort). A technology (t, c) is anonymous if t is anonymous and the cost c is identical for all agents. Of the examples presented above, the AND, OR, and Majority technologies are anonymous (but not AOO and OOA). As for an anonymous t only the number of agents that exert effort matters, we can shorten notation and write, for the case of identical cost c: t_m = t(1^m, 0^{n−m}), Δ_m = t_{m+1} − t_m, p_m = c/Δ_{m−1}, and u_m = t_m · (v − m · p_m).

Figure 3: Number of agents in the optimal contract of the AND (left) and OR (right) technologies with 3 players, as a function of γ and v. AND technology: either 0 or 3 agents are contracted, and the transition value is monotonic in γ.
OR technology: for any γ we can see all transitions.

3.1 AND and OR Technologies

Let us start with a direct and full analysis of the AND and OR technologies for two players, for the case γ = 1/4 and c = 1.

EXAMPLE 1. AND technology with two agents, c = 1, γ = 1/4: we have t0 = γ² = 1/16, t1 = γ(1 − γ) = 3/16, and t2 = (1 − γ)² = 9/16, thus Δ0 = 2/16 and Δ1 = 6/16.

• 0 Agents: No agent is paid, and the principal's utility is u0 = t0 · v = v/16.
• 1 Agent: This agent is paid p1 = c/Δ0 = 8 on success, and the principal's utility is u1 = t1·(v − p1) = 3v/16 − 3/2.
• 2 Agents: Each agent is paid p2 = c/Δ1 = 8/3 on success, and the principal's utility is u2 = t2·(v − 2p2) = 9v/16 − 3.

Notice that the option of contracting with one agent is always inferior to contracting with both or with none, and will never be chosen by the principal. The principal will contract with no agent when v < 6, with both agents when v > 6, and with either none or both at v = 6. This should be contrasted with the non-strategic case, in which the principal completely controls the agents (and bears their costs) and thus simply optimizes globally. In that case the principal will make both agents exert effort whenever v ≥ 4. Thus, for example, for v = 6 the globally optimal decision (non-strategic case) would give a global utility of 6 · 9/16 − 2 = 11/8, while the principal's decision (in the agency case) gives a global utility of 6 · 1/16 = 3/8, a ratio of 11/3. It turns out that this is the worst price of unaccountability in this example, and it is obtained exactly at the transition point of the agency case, as we show below.

EXAMPLE 2. OR technology with two agents, c = 1, γ = 1/4: we have t0 = 1 − (1 − γ)² = 7/16, t1 = 1 − γ(1 − γ) = 13/16, and t2 = 1 − γ² = 15/16, thus Δ0 = 3/8 and Δ1 = 1/8. Let us write down the expressions for the principal's utility in these three cases:

• 0 Agents: No agent is paid, and the principal's utility is u0 = t0 · v = 7v/16.
• 1 Agent: This agent is paid p1 = c/Δ0 = 8/3 on success, and the principal's utility is u1 = t1·(v − p1) = 13v/16 − 13/6.
• 2 Agents: Each agent is paid p2 = c/Δ1 = 8 on success, and the principal's utility is u2 = t2·(v − 2p2) = 15v/16 − 15.

Now contracting with one agent is better than contracting with none whenever v > 52/9 (and equivalent at v = 52/9), and contracting with both agents is better than contracting with one whenever v > 308/3 (and equivalent at v = 308/3); thus the principal will contract with no agent for 0 ≤ v ≤ 52/9, with one agent for 52/9 ≤ v ≤ 308/3, and with both agents for v ≥ 308/3. In the non-strategic case, in comparison, the principal will make a single agent exert effort for v > 8/3, and the second one exert effort as well when v > 8. It turns out that the price of unaccountability here is 19/13, and it is achieved at v = 52/9, which is exactly the transition point from 0 to 1 contracted agents in the agency case.

It is not a coincidence that in both the AND and OR technologies the POU is obtained at a v that is a transition point: Lemma 1 (full proof in [2]) shows that the POU is always obtained at a transition point of the agency case.

Proof sketch: We look at all transition points in both cases. For any value lower than the first transition point, 0 agents are contracted in both cases, and the social welfare ratio is 1. Similarly, for any value higher than the last transition point, n agents are contracted in both cases, and the social welfare ratio is 1. Thus, we can focus on the interval between the first and last transition points. Between any pair of consecutive transition points, the social welfare ratio is a ratio of two linear functions of v (the optimal contracts are fixed on such a segment). We then show that for each segment, the supremum of the ratio is obtained at an endpoint of the segment (a transition point). As there are finitely many such points, the global supremum is obtained at the transition point with the maximal social welfare ratio. ∎

We already see a qualitative difference between the AND and OR technologies (even with 2 agents): in the first case either all agents are contracted or none, while in the second case, for some intermediate range of values v, exactly one agent is contracted. Figure 3 shows the same phenomena for AND and OR technologies with 3 players.

THEOREM 1. For the AND technology (with identical costs), there exists a value v* < ∞ such that for any v < v* it is optimal to contract with 0 agents, for any v > v* it is optimal to contract with all n agents, and for v = v* both contracts (0 or n agents) are optimal.

Proof sketch: For any fixed number of contracted agents k, the principal's utility is a linear function of v, whose slope equals the success probability under k contracted agents. Thus, the optimal contract corresponds to the maximum over a set of linear functions. Let v* denote the point at which the principal is indifferent between contracting with 0 or n agents. In [2] we show that at v*, the principal's utility from contracting with 0 (or n) agents is higher than his utility when contracting with any number of agents k ∈ {1, ..., n−1}. As the number of contracted agents is monotone non-decreasing in the value (by Lemma 3), for any v < v* contracting with 0 agents is optimal, and for any v > v* contracting with n agents is optimal. This is true for both the agency and the non-strategic cases. As in both cases there is a single transition point, the claim about the price of unaccountability for the AND technology is proved as a special case of Lemma 2 below.

In [2] we present a general characterization of technologies with a single transition in the agency and the non-strategic cases, and provide a full proof of Theorem 1 as a special case. The property of a single transition occurs in both the agency and the non-strategic cases, where the transition occurs at a smaller value of v in the non-strategic case. Notice that the POU is not bounded across the AND family of technologies (for various n, γ), as POU → ∞ either when γ → 0 (for any given n ≥ 2) or when n → ∞ (for any fixed γ ∈ (0, 1/2)).

Next we consider the OR technology and show that it exhibits all n transitions.

THEOREM 2. The OR technology (with identical costs) exhibits all n transitions: as v increases, the number of contracted agents increases one by one from 0 to n.

Proof sketch: To prove the claim we define vk to be the value at which the principal is indifferent between contracting with k − 1 agents and contracting with k agents. We then show that for any k, vk < vk+1. ∎

OPEN QUESTION 1. What is the price of unaccountability of the OR technology with n > 2 agents? Is it bounded by a constant for every n?

We are only able to determine the POU of the OR technology for the case of two agents [2]. Even for the 2-agent case we already observe a qualitative difference between the POU in the AND and OR technologies.

OBSERVATION 2. While in the AND technology the POU for n = 2 is not bounded from above (as γ → 0), the highest POU in the OR technology with two agents is 2 (as γ → 0).

3.2 What Determines the Transitions?

Theorems 1 and 2 say that both the AND and OR technologies exhibit the same transition behavior (changes of the optimal contract) in the agency and the non-strategic cases. However, this is not true in general. In [2] we provide a full characterization of the sufficient and necessary conditions for a general anonymous technology to have a single transition, and to have all n transitions. We find that the conditions in the agency case differ from those in the non-strategic case. We are able to determine the POU for any anonymous technology that exhibits a single transition in both the agency and the non-strategic cases (Lemma 2; full proof in [2]); it is obtained at the transition point of the agency case.

Proof sketch: Since the payments in the agency case are higher than in the non-strategic case, the transition point in the agency case occurs at a higher value than in the non-strategic case. Thus, there exists a region in which the optimal numbers of contracted agents in the agency and the non-strategic cases are 0 and n, respectively. By Lemma 1 the POU is obtained
at the higher value, that is, at the transition point of the agency case. The transition point in the agency case is the point at which the principal is indifferent between contracting with 0 and with n agents: t0·v* = t_n·(v* − n·c/Δ_{n−1}), i.e. v* = c·n·t_n / (Δ_{n−1}·(t_n − t0)). Substituting this transition point into the POU expression yields the required expression. ∎

3.3 The MAJORITY Technology

The project under the MAJORITY function succeeds if a majority of the agents succeed in their tasks (see Section 2.3). We are unable to characterize the transition behavior of the MAJORITY technology analytically. Figure 4 presents the optimal number of contracted agents as a function of v and γ, for n = 5. The phenomena that we observe in this example (and in others that we looked at) lead us to the following conjecture.

Figure 4: Simulation results showing the number of agents in the optimal contract of the MAJORITY technology with 5 players, as a function of γ and v. As γ decreases, the first transition occurs at a lower value and to a higher number of agents. For any sufficiently small γ, the first transition is to 3 = ⌈5/2⌉ agents, and for any sufficiently large γ, the first transition is to 1 agent. For any γ, the first transition is never to more than 3 agents, and after the first transition we see all following possible transitions.

4. NON-ANONYMOUS TECHNOLOGIES

In non-anonymous technologies (even with identical costs), we need to talk about the contracted set of agents and not only about the number of contracted agents. In this section, we identify the sets of agents that can be obtained as an optimal contract for some v. These sets constitute the orbit of a technology. Observe that in the non-strategic case, the k-orbit of any technology with identical cost c is of size at most 1 (as all sets of size k have the same cost, only a set with maximal success probability can be on the orbit). Thus, the orbit of any such technology in the non-strategic case is of size at most n + 1. We show that the picture in the agency case is very different.

A basic observation is that the orbit of a technology is actually an ordered list of sets of agents, where the order is determined by the following lemma.

LEMMA 3. The expected utility of the principal at the optimal contracts, the success probability of the optimal contracts, and the expected payment of the optimal contracts are all monotonically non-decreasing in the value.

PROOF. Suppose the sets of agents S1 and S2 are optimal at values v1 and v2, respectively, with v1 > v2. As S1 is optimal at v1, u(S1, v1) ≥ u(S2, v1), and as t(S2) > 0 and v1 > v2, u(S2, v1) ≥ u(S2, v2). We conclude that u(S1, v1) ≥ u(S2, v2); thus the utility is monotone non-decreasing in the value. Next we show that the success probability is monotone non-decreasing in the value. Let P(S) denote the total expected payment under contract S, so that u(S, v) = t(S)·v − P(S). S1 is optimal at v1 and S2 is optimal at v2, thus: t(S1)·v1 − P(S1) ≥ t(S2)·v1 − P(S2) and t(S2)·v2 − P(S2) ≥ t(S1)·v2 − P(S1). Summing the two inequalities gives (t(S1) − t(S2))·(v1 − v2) ≥ 0, and since v1 > v2 this implies t(S1) ≥ t(S2). Finally we show that the expected payment is monotone non-decreasing in the value. As S2 is optimal at v2, t(S2)·v2 − P(S2) ≥ t(S1)·v2 − P(S1), or equivalently, P(S1) − P(S2) ≥ (t(S1) − t(S2))·v2 ≥ 0. ∎

In [2] we define the notion of an admissible collection of sets and show (Theorem 4) that such collections can be realized within orbits of suitable technologies. For any k, there exists an admissible collection of k-size sets of size Ω((n choose k)/n).

Proof sketch: The proof is based on an error-correcting code that corrects one bit. Such a code has distance ≥ 3, and is thus admissible. It is known that there are such codes with Ω(2ⁿ/n) code words. To ensure that an appropriate fraction of these code words have weight k, we construct a new code by XOR-ing each code word with a random word r. The properties of XOR ensure that the new code remains admissible. Each code word is now uniformly mapped to the whole cube, and thus its probability of having weight k is (n choose k)/2ⁿ. Thus the expected number of weight-k words is Ω((n choose k)/n), and for some r this expectation is achieved or exceeded. ∎

For k = n/2 we can construct an exponential-size admissible collection, which by Theorem 4 can be used to build a technology with an exponential-size orbit.

COROLLARY 1. There exists a technology (t, c) with an orbit of size Ω(2ⁿ/√n).

Thus, we are able to construct a technology with an exponential orbit, but this technology is
not a network technology or a structured technology.

OPEN QUESTION 2. Is there a read-once network with an exponential orbit? Is there a structured technology with an exponential orbit?

Nevertheless, so far we have not seen examples of series-parallel networks whose orbit size is larger than n + 1.

OPEN QUESTION 3. How big can the orbit size of a series-parallel network be?

We make a first step towards a solution of this question by showing that the size of the orbit of a conjunction of two disjoint networks (taking the two in series) is at most the sum of the two networks' orbit sizes. Let g and h be two Boolean functions on disjoint inputs, and let f = g ∧ h (i.e., take their networks in series). The optimal contract for f at some v, denoted by S, is composed of some agents from the h-part and some from the g-part; call them T and R, respectively.

LEMMA 5. S = T ∪ R is an optimal contract for f at v if and only if T is an optimal contract for h at v · t_g(R) and R is an optimal contract for g at v · t_h(T).

Proof sketch: We express the principal's utility u(S, v) from contracting with the set S when his value is v (abusing notation, we use the Boolean function to denote the technology as well). Let Δf_i(S \ i) denote the marginal contribution of agent i ∈ S. Then, for any i ∈ T, Δf_i(S \ i) = g(R) · Δh_i(T \ i), and for any i ∈ R, Δf_i(S \ i) = h(T) · Δg_i(R \ i). Substituting these expressions and f(S) = h(T) · g(R), the utility decomposes into one term that is maximized exactly when T is optimal for h at the value g(R) · v, while the second term is independent of T and h. Thus, S is optimal for f at v if and only if T is an optimal contract for h at v · t_g(R). Similarly, we show that R is an optimal contract for g at v · t_h(T). ❑

LEMMA 6. The real function v ↦ t_h(T), where T is the h-part of an optimal contract for f at v, is monotone non-decreasing (and similarly for the function v ↦ t_g(R)).

PROOF. Let S1 = T1 ∪ R1 be the optimal contract for f at v1, and let S2 = T2 ∪ R2 be the optimal contract for f at v2 < v1. By Lemma 3, f(S1) ≥ f(S2), and since f = g · h, f(S1) = h(T1) · g(R1) ≥ h(T2) · g(R2) = f(S2). Assume for contradiction that h(T1) < h(T2). Since h(T1) · g(R1) ≥ h(T2) · g(R2), this implies that g(R1) > g(R2). By Lemma 5, T1 is optimal for h at v1 · g(R1), and T2 is optimal for h at v2 · g(R2). As v1 > v2 and g(R1) > g(R2), T1 is optimal for h at a larger value than T2; thus by Lemma 3, h(T1) ≥ h(T2), a contradiction. ∎

Based on Lemma 5 and Lemma 6, we obtain the following lemma; for the full proof, see [2].

LEMMA 7. Let g1, ..., gm be anonymous technologies on disjoint inputs with identical cost within each component (all agents of technology gj have the same cost cj). Then the orbit of ∧_{j=1}^m gj is of size at most (Σ_{j=1}^m nj) − 1, where nj is the number of agents in technology gj (the orbit is linear in the number of agents). In particular, this holds for the AOO technology, where each OR-component is anonymous.

It would also be interesting to consider a disjunction of two Boolean functions.

OPEN QUESTION 4. Does Lemma 7 also hold for the Boolean function f = g ∨ h (i.e., when the networks g, h are taken in parallel)? We conjecture that this is indeed the case, and that analogues of Lemmas 5 and 7 hold for the OR case as well. If true, this would show that series-parallel networks have polynomial-size orbits.

5. ALGORITHMIC ASPECTS

Our analysis throughout the paper sheds some light on the algorithmic aspects of computing the best contract. In this section we state these implications (for the proofs see [2]). We first consider the general model, where the technology function is given by an arbitrary
monotone function t (with rational values), and we then consider the case of structured technologies given by a network representation of the underlying Boolean function.

5.1 Binary-Outcome Binary-Action Technologies

Here we assume that we are given a technology and a value v as input, and our output should be the optimal contract, i.e. the set S* of agents to be contracted and the contract pi for each i ∈ S*. In the general case, the success function t is of size exponential in n, the number of agents, and we will need to deal with that. In the special case of anonymous technologies, the description of t consists of only the n + 1 numbers t0, ..., tn, and in this case our analysis in Section 3 completely suffices for computing the optimal contract. Given a technology (t, c⃗) explicitly, the following can all be computed in polynomial time:

• The orbit of the technology, in both the agency and the non-strategic cases.
• An optimal contract for any given value v, in both the agency and the non-strategic cases.
• The price of unaccountability POU(t, c⃗).

PROOF. We prove the claims for the non-anonymous case; the proof for the anonymous case is similar. We first show how to construct the orbit of the technology (the same procedure applies in both cases). To construct the orbit we find all transition points and the sets that are on the orbit. The empty contract is always optimal for v = 0. Assume that we have calculated the optimal contracts and the transition points up to some transition point v, for which S is an optimal contract with the highest success probability. We show how to calculate the next transition point and the next optimal contract. By Lemma 3, the next contract on the orbit (for higher values) has a higher success probability (no two sets with the same success probability are on the orbit). We calculate the next optimal contract by the following procedure. We go over all sets T such that t(T) > t(S), and calculate the value at which the principal is indifferent between contracting with T and contracting with
S. The minimal such indifference value is the next transition point, and the contract attaining it is the next optimal contract. Linearity of the utility in the value and monotonicity of the success probability of the optimal contracts ensure that this procedure works. Clearly the calculation is polynomial in the input size. Once we have the orbit, an optimal contract for any given value v can be calculated: we find the largest transition point that is not larger than v, and the optimal contract at v is the set with the higher success probability at this transition point. Finally, as we can calculate the orbit of the technology in both the agency and the non-strategic cases in polynomial time, we can find the price of unaccountability in polynomial time. By Lemma 1 the price of unaccountability POU(t) is obtained at some transition point, so we only need to go over all transition points and find the one with the maximal social welfare ratio. ∎

A more interesting question is whether, given the function t as a black box, we can compute the optimal contract in time that is polynomial in n. We show that, in general, this is not the case.

THEOREM 5. Given as input a black box for a success function t (with identical costs) and a value v, the number of queries needed, in the worst case, to find the optimal contract is exponential in n.

PROOF. Consider the following family of technologies. For some small ε > 0 and k = ⌊n/2⌋, we define the success probability for a given set T of effort-exerting agents as follows. If |T| > k, then t(T) = 1 − (n − |T|) · ε, and if |T| < k, then t(T) = |T| · ε. For each set of agents T̂ of size k, the technology t_T̂ is defined by t(T̂) = 1 − (n − |T̂|) · ε and t(T) = |T| · ε for any T ≠ T̂ of size k.
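To make this family concrete, the following sketch (the parameters n = 4, k = 2, ε = 1/100, c = 1 and all helper names are ours, not the paper's) builds t_T̂ for one hidden set T̂ and enumerates every contract's utility at the value v = c·(k + 1/2) used in the proof; for simplicity the empty set gets t = 0 here, which only pins its utility at the baseline 0.

```python
import itertools
from fractions import Fraction as F

# Illustrative parameters (ours): n agents, threshold k, hidden set That.
n, k, c, eps = 4, 2, F(1), F(1, 100)
That = frozenset({0, 1})          # the hidden "good" size-k set
v = c * (k + F(1, 2))             # v = c*(k + 1/2), as in the proof

def t(S):
    """Success probability of the technology t_That from the proof:
    1 - (n-|S|)*eps for |S| > k and for S == That, else |S|*eps."""
    S = frozenset(S)
    if len(S) > k or S == That:
        return 1 - (n - len(S)) * eps
    return len(S) * eps

def utility(S):
    """Principal's utility t(S)*(v - total payments), with each
    contracted agent i paid p_i = c / (t(S) - t(S \\ {i}))."""
    S = frozenset(S)
    if not S:
        return t(S) * v
    pay = sum(c / (t(S) - t(S - {i})) for i in S)
    return t(S) * (v - pay)

subsets = [frozenset(T) for r in range(n + 1)
           for T in itertools.combinations(range(n), r)]
best = max(subsets, key=utility)  # only That yields positive utility
```

Enumerating all 16 contracts confirms the proof's point: T̂ is the unique contract with positive utility, so an algorithm that skips even two of the size-k sets can miss the optimum.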
For the value v = c · (k + 1\/2), the optimal contract for t_T̂ is T̂ (for the contract T̂ the utility of the principal is about v − c · k = (1\/2) · c > 0, while for any other contract the utility is negative).\nIf the algorithm queries about at most (n choose ⌊n\/2⌋) − 2 sets of size k, then it cannot always determine the optimal contract (as any of the sets that it has not queried about might be the optimal one).\nWe conclude that (n choose ⌊n\/2⌋) − 1 queries are needed to determine the optimal contract, and this is exponential in n.\n5.2 Structured Technologies\nIn this section we will consider the natural representation of read-once networks for the underlying Boolean function.\nThus the problem we address will be: The Optimal Contract Problem for Read Once Networks: Input: A read-once network G = (V, E), with two specific vertices s, t; rational values γe, δe for each player e ∈ E (and ce = 1), and a rational value v. Output: A set S of agents who should be contracted in an optimal contract.\nLet t(E) denote the probability of success when each edge succeeds with probability δe.\nWe first notice that even computing the value t(E) is a hard problem: it is called the network reliability problem and is known to be #P-hard [8].\nJust a little effort will reveal that our problem is not easier: THEOREM 6.\nThe Optimal Contract Problem for Read Once Networks is #P-hard (under Turing reductions).\nPROOF.\nWe will show that an algorithm for this problem can be used to solve the network reliability problem.\nGiven an instance of a network reliability problem (where qe denotes e's probability of success), we define an instance of the optimal contract problem as follows: first define a new graph G' which is obtained by "And"ing G with a new player x, with γx very close to 1\/2 and δx = 1 − γx.\nFor the other edges, we let δe = qe and γe = qe\/2.\nBy choosing γx close enough to 1\/2, we can make sure that
player x will enter the optimal contract only for very large values of v, after all other agents are contracted (if we can find the optimal contract for any value, it is easy to find a value for which in the original network the optimal contract is E, by repeatedly doubling the value and asking for the optimal contract.\nOnce we find such a value, we choose γx s.t. c\/(1 − 2γx) is larger than that value).\nLet us denote βx = 1 − 2γx.\nThe critical value of v where player x enters the optimal contract of G' can be found using binary search over the algorithm that supposedly finds the optimal contract for any network and any value.\nNote that at this critical value v, the principal is indifferent between the set E and E \u222a {x}.\nNow when we write the expression for this indifference, in terms of t(E) and Δti(E), we observe that it can be solved for t(E); thus, if we can always find the optimal contract we are also able to compute the value of t(E).\nIn conclusion, computing the optimal contract in general is hard.\nThese results suggest two natural research directions.\nThe first avenue is to study families of technologies whose optimal contracts can be computed in polynomial time.\nThe second avenue is to explore approximation algorithms for the optimal contract problem.\nA possible candidate for the first direction is the family of series-parallel networks, for which the network reliability problem (computing the value of t) is polynomial.","keyphrases":["combinatori agenc","optim set of contract","classic princip-agent","servic qualiti","nash equilibrium","contract action","k-orbit","anonym technolog","seri-parallel network","unaccount price","agenc theori","princip-agent model","incent","con"],"prmu":["P","M","M","U","M","R","U","U","U","U","R","M","U","U"]} {"id":"H-5","title":"Utility-based Information Distillation Over Temporally Sequenced Documents","abstract":"This paper examines a new approach to information distillation over temporally ordered
documents, and proposes a novel evaluation scheme for such a framework. It combines the strengths of and extends beyond conventional adaptive filtering, novelty detection and non-redundant passage ranking with respect to long-lasting information needs ('tasks' with multiple queries). Our approach supports fine-grained user feedback via highlighting of arbitrary spans of text, and leverages such information for utility optimization in adaptive settings. For our experiments, we defined hypothetical tasks based on news events in the TDT4 corpus, with multiple queries per task. Answer keys (nuggets) were generated for each query and a semiautomatic procedure was used for acquiring rules that allow automatically matching nuggets against system responses. We also propose an extension of the NDCG metric for assessing the utility of ranked passages as a combination of relevance and novelty. Our results show encouraging utility enhancements using the new approach, compared to the baseline systems without incremental learning or the novelty detection components.","lvl-1":"Utility-based Information Distillation Over Temporally Sequenced Documents Yiming Yang Language Technologies Inst.\nCarnegie Mellon University Pittsburgh, USA yiming@cs.cmu.edu Abhimanyu Lad Language Technologies Inst.\nCarnegie Mellon University Pittsburgh, USA alad@cs.cmu.edu Ni Lao Language Technologies Inst.\nCarnegie Mellon University Pittsburgh, USA nlao@cs.cmu.edu Abhay Harpale Language Technologies Inst.\nCarnegie Mellon University Pittsburgh, USA aharpale@cs.cmu.edu Bryan Kisiel Language Technologies Inst.\nCarnegie Mellon University Pittsburgh, USA bkisiel@cs.cmu.edu Monica Rogati Language Technologies Inst.\nCarnegie Mellon University Pittsburgh, USA mrogati@cs.cmu.edu ABSTRACT This paper examines a new approach to information distillation over temporally ordered documents, and proposes a novel evaluation scheme for such a framework.\nIt combines the strengths of and extends beyond conventional 
adaptive filtering, novelty detection and non-redundant passage ranking with respect to long-lasting information needs (`tasks'' with multiple queries).\nOur approach supports fine-grained user feedback via highlighting of arbitrary spans of text, and leverages such information for utility optimization in adaptive settings.\nFor our experiments, we defined hypothetical tasks based on news events in the TDT4 corpus, with multiple queries per task.\nAnswer keys (nuggets) were generated for each query and a semiautomatic procedure was used for acquiring rules that allow automatically matching nuggets against system responses.\nWe also propose an extension of the NDCG metric for assessing the utility of ranked passages as a combination of relevance and novelty.\nOur results show encouraging utility enhancements using the new approach, compared to the baseline systems without incremental learning or the novelty detection components.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Information filtering, Relevance feedback, Retrieval models, Selection process; I.5.2 General Terms Design, Measurement, Performance, Experimentation.\n1.\nINTRODUCTION Tracking new and relevant information from temporal data streams for users with long-lasting needs has been a challenging research topic in information retrieval.\nAdaptive filtering (AF) is one such task of online prediction of the relevance of each new document with respect to pre-defined topics.\nBased on the initial query and a few positive examples (if available), an AF system maintains a profile for each such topic of interest, and constantly updates it based on feedback from the user.\nThe incremental learning nature of AF systems makes them more powerful than standard search engines that support ad-hoc retrieval (e.g. 
Google and Yahoo) in terms of finding relevant information with respect to long-lasting topics of interest, and more attractive for users who are willing to provide feedback to adapt the system towards their specific information needs, without having to modify their queries manually.\nA variety of supervised learning algorithms (Rocchio-style classifiers, Exponential-Gaussian models, local regression and logistic regression approaches) have been studied for adaptive settings, examined with explicit and implicit relevance feedback, and evaluated with respect to utility optimization on large benchmark data collections in TREC (Text Retrieval Conferences) and TDT (Topic Detection and Tracking) forums [1, 4, 7, 15, 16, 20, 24, 23].\nRegularized logistic regression [21] has been found representative for the state-of-the-art approaches, and highly efficient for frequent model adaptations over large document collections such as the TREC-10 corpus (over 800,000 documents and 84 topics).\nDespite substantial achievements in recent adaptive filtering research, significant problems remain unsolved regarding how to leverage user feedback effectively and efficiently.\nSpecifically, the following issues may seriously limit the true utility of AF systems in real-world applications: 1.\nUser has a rather `passive'' role in the conventional adaptive filtering setup - he or she reacts to the system only when the system makes a `yes'' decision on a document, by confirming or rejecting that decision.\nA more `active'' alternative would be to allow the user to issue multiple queries for a topic, review a ranked list of candidate documents (or passages) per query, and provide feedback on the ranked list, thus refining their information need and requesting updated ranked lists.\nThe latter form of user interaction has been highly effective in standard retrieval for ad-hoc queries.\nHow to deploy such a strategy for long-lasting information needs in AF settings is an open question for 
research.\n2.\nThe unit for receiving a relevance judgment (`yes'' or `no'') is restricted to the document level in conventional AF.\nHowever, a real user may be willing to provide more informative, fine-grained feedback via highlighting some pieces of text in a retrieved document as relevant, instead of labeling the entire document as relevant.\nEffectively leveraging such fine-grained feedback could substantially enhance the quality of an AF system.\nFor this, we need to enable supervised learning from labeled pieces of text of arbitrary span instead of just allowing labeled documents.\n3.\nSystem-selected documents are often highly redundant.\nA major news event, for example, would be reported by multiple sources repeatedly for a while, making most of the information content in those articles redundant with each other.\nA conventional AF system would select all these redundant news stories for user feedback, wasting the user``s time while offering little gain.\nClearly, techniques for novelty detection can help in principle [25, 2, 22] for improving the utility of the AF systems.\nHowever, the effectiveness of such techniques at passage level to detect novelty with respect to user``s (fine-grained) feedback and to detect redundancy in ranked lists remains to be evaluated using a measure of utility that mimics the needs of a real user.\nTo address the above limitations of current AF systems, we propose and examine a new approach in this paper, combining the strengths of conventional AF (incremental learning of topic models), multi-pass passage retrieval for long-lasting queries conditioned on topic, and novelty detection for removal of redundancy from user interactions with the system.\nWe call the new process utility-based information distillation.\nNote that conventional benchmark corpora for AF evaluations, which have relevance judgments at the document level and do not define tasks with multiple queries, are insufficient for evaluating the new 
approach.\nTherefore, we extended a benchmark corpus - the TDT4 collection of news stories and TV broadcasts - with task definitions, multiple queries per task, and answer keys per query.\nWe have conducted our experiments on this extended TDT4 corpus and have made the additionally generated data publicly available for future comparative evaluations1 (1 URL: http:\/\/nyc.lti.cs.cmu.edu\/downloads).\nTo automatically evaluate the system-returned arbitrary spans of text using our answer keys, we further developed an evaluation scheme with a semi-automatic procedure for acquiring rules that can match nuggets against system responses.\nMoreover, we propose an extension of NDCG (Normalized Discounted Cumulated Gain) [9] for assessing the utility of ranked passages as a function of both relevance and novelty.\nThe rest of this paper is organized as follows.\nSection 2 outlines the information distillation process with a concrete example.\nSection 3 describes the technical cores of our system called CAFÉ - CMU Adaptive Filtering Engine.\nSection 4 discusses issues with respect to evaluation methodology and proposes a new scheme.\nSection 5 describes the extended TDT4 corpus.\nSection 6 presents our experiments and results.\nSection 7 concludes the study and gives future perspectives.\n2.\nA SAMPLE TASK Consider a news event - the escape of seven convicts from a Texas prison in December 2000 and their capture a month later.\nAssuming a user were interested in this event since its early stage, the information need could be: 'Find information about the escape of convicts from Texas prison, and information related to their recapture'.\nThe associated lower-level questions could be: 1.\nHow many prisoners escaped?\n2.\nWhere and when were they sighted?\n3.\nWho are their known contacts inside and outside the prison?\n4.\nHow are they armed?\n5.\nDo they have any vehicles?\n6.\nWhat steps have been taken so far?\nWe call such an information need a task, and the associated
questions as the queries in this task.\nA distillation system is supposed to monitor the incoming documents, process them chunk by chunk in a temporal order, select potentially relevant and novel passages from each chunk with respect to each query, and present a ranked list of passages to the user.\nPassage ranking here is based on how relevant a passage is with respect to the current query, how novel it is with respect to the current user history (of his or her interactions with the system), and how redundant it is compared to other passages with a higher rank in the list.\nWhen presented with a list of passages, the user may provide feedback by highlighting arbitrary spans of text that he or she found relevant.\nThese spans of text are taken as positive examples in the adaptation of the query profile, and also added to the user``s history.\nPassages not marked by the user are taken as negative examples.\nAs soon as the query profile is updated, the system re-issues a search and returns another ranked list of passages where the previously seen passages are either removed or ranked low, based on user preference.\nFor example, if the user highlights `...officials have posted a $100,000 reward for their capture...'' as relevant answer to the query What steps have been taken so far?\n, then the highlighted piece is used as an additional positive training example in the adaptation of the query profile.\nThis piece of feedback is also added to the user history as a seen example, so that in future, the system will not place another passage mentioning `$100,000 reward'' at the top of the ranked list.\nHowever, an article mentioning `...officials have doubled the reward money to $200,000...'' might be ranked high since it is both relevant to the (updated) query profile and novel with respect to the (updated) user history.\nThe user may modify the original queries or add a new query during the process; the query profiles will be changed accordingly.\nClearly, novelty 
detection is very important for the utility of such a system because of the iterative search.\nWithout novelty detection, the old relevant passages would be shown to the user repeatedly in each ranked list.\nThrough the above example, we can see the main properties of our new framework for utility-based information distillation over temporally ordered documents.\nOur framework combines and extends the power of adaptive filtering (AF), ad-hoc retrieval (IR) and novelty detection (ND).\nCompared to standard IR, our approach has the power of incrementally learning long-term information needs and modeling a sequence of queries within a task.\nCompared to conventional AF, it enables a more active role of the user in refining his or her information needs and requesting new results by allowing relevance and novelty feedback via highlighting of arbitrary spans of text in passages returned by the system.\nCompared to past work, this is the first evaluation of novelty detection integrated with adaptive filtering for sequenced queries that allows flexible user feedback over ranked passages.\nThe combination of AF, IR and ND with the new extensions raises an important research question regarding evaluation methodology: how can we measure the utility of such an information distillation system?\nExisting metrics in standard IR, AF and ND are insufficient, and new solutions must be explored, as we will discuss in Section 4, after describing the technical cores of our system in the next section.\n3.\nTECHNICAL CORES The core components of CAF\u00b4E are - 1) AF for incremental learning of query profiles, 2) IR for estimating relevance of passages with respect to query profiles, 3) ND for assessing novelty of passages with respect to user``s history, and 4) anti-redundancy component to remove redundancy from ranked lists.\n3.1 Adaptive Filtering Component We use a state-of-the-art algorithm in the field - the regularized logistic regression method which had the best results on 
several benchmark evaluation corpora for AF [21].\nLogistic regression (LR) is a supervised learning algorithm for statistical classification.\nBased on a training set of labeled instances, it learns a class model which can then be used to predict the labels of unseen instances.\nIts performance as well as efficiency in terms of training time makes it a good candidate when frequent updates of the class model are required, as is the case in adaptive filtering, where the system must learn from each new feedback provided by the user.\n(See [21] and [23] for computational complexity and implementation issues).\nIn adaptive filtering, each query is considered as a class and the probability of a passage belonging to this class corresponds to the degree of relevance of the passage with respect to the query.\nFor training the model, we use the query itself as the initial positive training example of the class, and the user-highlighted pieces of text (marked as Relevant or Not-relevant) during feedback as additional training examples.\nTo address the cold start issue in the early stage before any user feedback is obtained, the system uses a small sample from a retrospective corpus as the initial negative examples in the training set.\nThe details of using logistic regression for adaptive filtering (assigning different weights to positive and negative training instances, and regularizing the objective function to prevent over-fitting on training data) are presented in [21].\nThe class model w\u2217 learned by Logistic Regression, or the query profile, is a vector whose dimensions are individual terms and whose elements are the regression coefficients, indicating how influential each term is in the query profile.\nThe query profile is updated whenever a new piece of user feedback is received.\nA temporally decaying weight can be applied to each training example, as an option, to emphasize the most recent user feedback.\n3.2 Passage Retrieval Component We use standard IR
techniques in this part of our system.\nIncoming documents are processed in chunks, where each chunk can be defined as a fixed span of time or as a fixed number of documents, as preferred by the user.\nFor each incoming document, corpus statistics like the IDF (Inverse Document Frequency) of each term are updated.\nWe use a state-of-the-art named entity identifier and tracker [8, 12] to identify person and location names, and merge them with co-referent named entities seen in the past.\nThen the documents are segmented into passages, which can be a whole document, a paragraph, a sentence, or any other continuous span of text, as preferred.\nEach passage is represented using a vector of TF-IDF (Term Frequency-Inverse Document Frequency) weights, where a term can be a word or a named entity.\nGiven a query profile, i.e. the logistic regression solution w∗ as described in Section 3.1, the system computes the posterior probability of relevance for each passage x as fRL(x) ≡ P(y = 1|x, w∗) = 1 \/ (1 + e^(−w∗·x)) (1) Passages are ordered by their relevance scores, and the ones with scores above a threshold (tuned on a training set) comprise the relevance list that is passed on to the novelty detection step.\n3.3 Novelty Detection Component CAFÉ maintains a user history H(t), which contains all the spans of text hi that the user highlighted (as feedback) during his or her past interactions with the system, up to the current time t.
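As a minimal sketch (assuming sparse TF-IDF vectors stored as term-to-weight dicts; the function names are illustrative, not taken from the CAFÉ implementation), the relevance score of Eq. (1) and the history-based novelty score described in this section can be written as:

```python
import math

def relevance(x, w):
    """Eq. (1): posterior probability of relevance of passage vector x
    under the logistic-regression query profile w (sparse dicts)."""
    dot = sum(wt * x.get(term, 0.0) for term, wt in w.items())
    return 1.0 / (1.0 + math.exp(-dot))

def cosine(a, b):
    """Cosine similarity between two sparse TF-IDF vectors."""
    num = sum(v * b.get(k, 0.0) for k, v in a.items())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def novelty(x, history):
    """Novelty of passage x w.r.t. the user history H(t):
    1 minus the maximum cosine similarity to any highlighted span."""
    if not history:
        return 1.0          # nothing seen yet: maximally novel
    return 1.0 - max(cosine(x, h) for h in history)
```

Passages whose relevance exceeds the tuned threshold would then be passed through the novelty filter, exactly as the pipeline above describes.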
Denoting the history as H(t) = {h1, h2, ..., ht}, (2) the novelty score of a new candidate passage x is computed as: fND(x) = 1 − max_{i∈1..t} {cos(x, hi)} (3) where both candidate passage x and highlighted spans of text hi are represented as TF-IDF vectors.\nThe novelty score of each passage is compared to a prespecified threshold (also tuned on a training set), and any passage with a score below this threshold is removed from the relevance list.\n3.4 Anti-redundant Ranking Component Although the novelty detection component ensures that only novel (previously unseen) information remains in the relevance list, this list might still contain the same novel information at multiple positions in the ranked list.\nSuppose, for example, that the user has already read about a $100,000 reward for information about the escaped convicts.\nA new piece of news that the reward has been increased to $200,000 is novel since the user hasn't read about it yet.\nHowever, multiple news sources would report this news and we might end up showing (redundant) articles from all these sources in a ranked list.\nHence, a ranked list should also be made non-redundant with respect to its own contents.\nWe use a simplified version of the Maximal Marginal Relevance method [5], originally developed for combining relevance and novelty in text retrieval and summarization.\nOur procedure starts with the current list of passages sorted by relevance (section 3.2), filtered by the Novelty Detection component (section 3.3), and generates a new non-redundant list as follows: 1.\nTake the top passage in the current list as the top one in the new list.\n2.\nAdd the next passage x in the current list to the new list only if fAR(x) > t where fAR(x) = 1 − max_{pi∈Lnew} {cos(x, pi)} and Lnew is the set of passages already selected in the new list.\n3.\nRepeat step 2 until all the passages in the current list have been examined.\nAfter applying the above-mentioned algorithm, each passage in
the new list is sufficiently dissimilar to others, thus favoring diversity rather than redundancy in the new ranked list.\nThe anti-redundancy threshold t is tuned on a training set.\n4.\nEVALUATION METHODOLOGY The approach we proposed above for information distillation raises important issues regarding evaluation methodology.\nFirstly, since our framework allows the output to be passages at different levels of granularity (e.g. k-sentence windows where k may vary) instead of a fixed length, it is not possible to have pre-annotated relevance judgments at all such granularity levels.\nSecondly, since we wish to measure the utility of the system output as a combination of both relevance and novelty, traditional relevance-only based measures must be replaced by measures that penalize the repetition of the same information in the system output across time.\nThirdly, since the output of the system is ranked lists, we must reward those systems that present useful information (both relevant and previously unseen) using shorter ranked lists, and penalize those that present the same information using longer ranked lists.\nNone of the existing measures in ad-hoc retrieval, adaptive filtering, novelty detection or other related areas (text summarization and question answering) have desirable properties in all the three aspects.\nTherefore, we must develop a new evaluation methodology.\n4.1 Answer Keys To enable the evaluation of a system whose output consists of passages of arbitrary length, we borrow the concept of answer keys from the Question Answering (QA) community, where systems are allowed to return arbitrary spans of text as answers.\nAnswer keys define what should be present in a system response to receive credit, and are comprised of a collection of information nuggets, i.e. 
factoid units about which human assessors can make binary decisions of whether or not a system response contains them.\nDefining answer keys and making the associated binary decisions are conceptual tasks that require semantic mapping [19], since system-returned passages can contain the same information expressed in many different ways.\nHence, QA evaluations have relied on human assessors for the mapping between various expressions, making the process costly, time consuming, and not scalable to large query and document collections, and extensive system evaluations with various parameter settings.\n4.1.1 Automating Evaluation based on Answer Keys Automatic evaluation methods would allow for faster system building and tuning, as well as provide an objective and affordable way of comparing various systems.\nRecently, such methods have been proposed, more or less, based on the idea of n-gram co-occurrences.\nPourpre [10] assigns a fractional recall score to a system response based on its unigram overlap with a given nugget``s description.\nFor example, a system response `A B C'' has recall 3\/4 with respect to a nugget with description `A B C D''.\nHowever, such an approach is unfair to systems that present the same information but using words other than A, B, C, and D. 
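The unigram-overlap idea behind such measures can be sketched as follows (a simplified illustration of fractional recall, not the exact Pourpre formula, which adds further refinements; the function name is ours):

```python
def unigram_recall(response, nugget_description):
    """Fractional recall of a nugget description against a system
    response, based on simple unigram overlap."""
    resp_terms = set(response.lower().split())
    nugget_terms = nugget_description.lower().split()
    matched = sum(1 for term in nugget_terms if term in resp_terms)
    return matched / len(nugget_terms)
```

Under this scoring, a response 'A B C' indeed receives recall 3/4 against the nugget 'A B C D', which illustrates both the appeal and the weakness discussed here: a paraphrase using none of those four words scores zero.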
Another open issue is how to weight individual words in measuring the closeness of a match.\nFor example, consider the question How many prisoners escaped?\n.\nIn the nugget `Seven prisoners escaped from a Texas prison'', there is no indication that `seven'' is the keyword, and that it must be matched to get any relevance credit.\nUsing IDF values does not help, since `seven'' will generally not have a higher IDF than words like `texas'' and `prison''.\nAlso, redefining the nugget as just `seven'' does not solve the problem since now it might spuriously match any mention of `seven'' out of context.\nNuggeteer [13] works on similar principles but makes binary decisions about whether a nugget is present in a given system response by tuning a threshold.\nHowever, it is also plagued by `spurious relevance'' since not all words contained in the nugget description (or known correct responses) are central to the nugget.\n4.1.2 Nugget-Matching Rules We propose a reliable automatic method for determining whether a snippet of text contains a given nugget, based on nugget-matching rules, which are generated using a semiautomatic procedure explained below.\nThese rules are essentially Boolean queries that will only match against snippets that contain the nugget.\nFor instance, a candidate rule for matching answers to How many prisoners escaped?\nis (Texas AND seven AND escape AND (convicts OR prisoners)), possibly with other synonyms and variants in the rule.\nFor a corpus of news articles, which usually follow a typical formal prose, it is fairly easy to write such simple rules to match expected answers using a bootstrap approach, as described below.\nWe propose a two-stage approach, inspired by Autoslog [14], that combines the strength of humans in identifying semantically equivalent expressions and the strength of the system in gathering statistical evidence from a humanannotated corpus of documents.\nIn the first stage, human subjects annotated (using a highlighting tool) 
portions of ontopic documents that contained answers to each nugget 2 .\nIn the second stage, subjects used our rule generation tool to create rules that would match the annotations for each nugget.\nThe tool allows users to enter a Boolean rule as a disjunction of conjunctions (e.g. ((a AND b) OR (a AND c AND d) OR (e))).\nGiven a candidate rule, our tool uses it as a Boolean query over the entire set of on-topic documents and calculates its recall and precision with respect to the annotations that it is expected to match.\nHence, the subjects can start with a simple rule and iteratively refine it until they are satisfied with its recall and precision.\nWe observed that it was very easy for humans to improve the precision of a rule by tweaking its existing conjunctions (adding more ANDs), and improving the recall by adding more conjunctions to the disjunction (adding more ORs).\nAs an example, let``s try to create a rule for the nugget which says that seven prisoners escaped from the Texas prison.\nWe start with a simple rule - (seven).\nWhen we input this into the rule generation tool, we realize that this rule matches many spurious occurrences of seven (e.g. `...seven states...'') and thus gets a low precision score.\nWe can further qualify our rule - Texas AND seven AND convicts.\nNext, by looking at the `missed annotations'', we realize that some news articles mentioned ...seven prisoners escaped... We then replace convicts with the disjunction (convicts OR prisoners).\nWe continue tweaking the rule in this manner until we achieve a sufficiently high recall and precision - i.e. 
the (small number of) misses and false alarms can be safely ignored.\nThus we can create nugget-matching rules that succinctly capture various ways of expressing a nugget, while avoiding matching incorrect (or out of context) responses.\nHuman involvement in the rule creation process ensures high quality generic rules which can then be used to evaluate arbitrary system responses reliably.\n4.2 Evaluating the Utility of a Sequence of Ranked Lists The utility of a retrieval system can be defined as the difference between how much the user gained in terms of useful information, and how much the user lost in terms of time and energy.\nWe calculate this utility from the utilities of individual passages as follows.\nAfter reading each passage returned by the system, the user derives some gain depending on the presence of relevant and novel information, and incurs a loss in terms of the time and energy spent in going through the passage.\nHowever, the likelihood that the user would actually read a passage depends on its position in the ranked list.\nHence, for a query q, the expected utility of a passage pi at rank i can be defined as U(pi, q) = P(i) ∗ (Gain(pi, q) − Loss(pi, q)) (4) where P(i) is the probability that the user would go through a passage at rank i.\n(Footnote 2: LDC [18] already provides relevance judgments for 100 topics on the TDT4 corpus.\nWe further ensured that these judgments are exhaustive on the entire corpus using pooling.)\nThe expected utility for an entire ranked list of length n can be calculated simply by adding the expected utility of each passage: U(q) = Σ_{i=1}^{n} P(i) ∗ (Gain(pi, q) − Loss(pi, q)) (5) Note that if we ignore the loss term and define P(i) as P(i) ∝ 1\/log_b(b + i − 1) (6) then we get the recently popularized metric called Discounted Cumulated Gain (DCG) [9], where Gain(pi, q) is defined as the graded relevance of passage pi.\nHowever, without the loss term, DCG is a purely recall-oriented metric and not
suitable for an adaptive filtering setting, where the system's utility depends in part on its ability to limit the number of items shown to the user.\nAlthough P(i) could be defined based on empirical studies of user behavior, for simplicity, we use P(i) exactly as defined in equation 6.\nThe gain G(pi, q) of passage pi with respect to the query q is a function of - 1) the number of relevant nuggets present in pi, and 2) the novelty of each of these nuggets.\nWe combine these two factors as follows.\nFor each nugget Nj, we assign an initial weight wj, and also keep a count nj of the number of times this nugget has been seen by the user in the past.\nThe gain derived from each subsequent occurrence of the same nugget is assumed to reduce by a dampening factor γ.\nThus, G(pi, q) is defined as G(pi, q) = Σ_{Nj∈C(pi,q)} wj ∗ γ^nj (7) where C(pi, q) is the set of all nuggets that appear in passage pi and also belong to the answer key of query q.\nThe initial weights wj are all set to be 1.0 in our experiments, but can also be set based on a pyramid approach [11].\nThe choice of dampening factor γ determines the user's tolerance for redundancy.\nWhen γ = 0, a nugget will only receive credit for its first occurrence i.e.
when n_j is zero.³ For 0 < γ < 1, a nugget receives smaller credit for each successive occurrence. When γ = 1, no dampening occurs and repeated occurrences of a nugget receive the same credit. Note that the nugget occurrence counts are preserved between evaluations of successive ranked lists returned by the system, since the users are expected to remember what the system showed them in the past.

³ Note that 0⁰ = 1.

We define the loss L(p_i, q) as a constant cost c (we use 0.1) incurred when reading a system-returned passage. Thus, our metric can be rewritten as

U(q) = Σ_{i=1}^{n} Gain(p_i, q) / log_b(b + i − 1) − L(n)    (8)

where L(n) is the loss associated with a ranked list of length n:

L(n) = c · Σ_{i=1}^{n} 1 / log_b(b + i − 1)    (9)

Due to the similarity with Discounted Cumulated Gain (DCG), we call our metric Discounted Cumulated Utility (DCU). The DCU score obtained by the system is converted to a Normalized DCU (NDCU) score by dividing it by the DCU score of the ideal ranked list, which is created by ordering passages by their decreasing utility scores U(p_i, q) and stopping when U(p_i, q) ≤ 0, i.e., when the gain is less than or equal to the cost of reading the passage.

5. DATA

TDT4 was the benchmark corpus used in the TDT2002 and TDT2003 evaluations. The corpus consists of over 90,000 news articles from multiple sources (AP, NYT, CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America, PRI the World, etc.)
published between October 2000 and January 2001, in Arabic, English, and Mandarin. Speech-recognized and machine-translated versions of the non-English articles were provided as well. LDC [18] has annotated the corpus with 100 topics that correspond to various news events in this time period. Out of these, we selected a subset of 12 actionable events and defined corresponding tasks for them.⁴ For each task, we manually defined a profile consisting of an initial set of (5 to 10) queries, a free-text description of the user history (i.e., what the user already knows about the event), and a list of known on-topic and off-topic documents (if available) as training examples. For each query, we generated answer keys and corresponding nugget-matching rules using the procedure described in Section 4.1.2, producing a total of 120 queries, with an average of 7 nuggets per query.

6. EXPERIMENTS AND RESULTS

6.1 Baselines

We used Indri [17], a popular language-model-based retrieval engine, as a baseline for comparison with CAFÉ.
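All runs below are scored with the NDCU measure defined in Section 4.2. As a concrete illustration, equations (7)-(9) and the normalization step can be sketched in a few lines of Python. This is a minimal sketch, not the CAFÉ implementation: the function names and the default log base b = 2 are assumptions (the paper fixes only the reading cost c = 0.1 and the dampening factors γ ∈ {0, 0.1}).

```python
import math

def passage_gain(nugget_weights, seen_counts, gamma=0.1):
    """Gain of one passage, per equation (7): each matched nugget N_j
    with weight w_j contributes w_j * gamma**n_j, where n_j counts how
    often the user has already seen that nugget (note 0**0 == 1)."""
    return sum(w * gamma ** n for w, n in zip(nugget_weights, seen_counts))

def dcu(gains, b=2, c=0.1):
    """Discounted Cumulated Utility of a ranked list, per equations
    (8)-(9): the rank-i passage (1-based rank i, 0-based index here)
    contributes (Gain - c), discounted by 1 / log_b(b + i - 1)."""
    return sum((g - c) / math.log(b + i, b) for i, g in enumerate(gains))

def ndcu(gains, b=2, c=0.1):
    """Normalized DCU: divide by the DCU of the ideal list, which sorts
    passages by decreasing utility and truncates once the gain no
    longer exceeds the reading cost c."""
    ideal = sorted((g for g in gains if g > c), reverse=True)
    best = dcu(ideal, b, c)
    return dcu(gains, b, c) / best if best > 0 else 0.0
```

For instance, a single passage matching one previously unseen unit-weight nugget yields a DCU of 0.9 (gain 1.0 minus reading cost 0.1, undiscounted at rank 1) and hence an NDCU of 1.0, while appending a passage with no new nuggets lowers the NDCU, reflecting the wasted reading cost.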
Indri supports standard search engine functionality, including pseudo-relevance feedback (PRF) [3, 6], and is representative of a typical query-based retrieval system. Indri does not support any kind of novelty detection. We compare Indri, with PRF turned on and off, against CAFÉ, with user feedback, novelty detection and anti-redundant ranking turned on and off.

6.2 Experimental Setup

We divided the TDT4 corpus, spanning 4 months, into 10 chunks, each defined as a period of 12 consecutive days. At any given point of time in the distillation process, each system accessed the past data up to the current point, and returned a ranked list of up to 50 passages per query. The 12 tasks defined on the corpus were divided into training and test sets of 6 tasks each. Each system was allowed to use the training set to tune its parameters for optimizing NDCU (equation 8), including the relevance thresholds for both Indri and CAFÉ, and the novelty and anti-redundancy thresholds for CAFÉ. The NDCU for each system run is calculated automatically. User feedback was also simulated: relevance judgments for each system-returned passage (as determined by the nugget-matching rules described in Section 4.1.2) were used as user feedback in the adaptation of query profiles and user histories.

⁴ URL: http://nyc.lti.cs.cmu.edu/downloads

[Figure 1: Performance of Indri across chunks]

[Figure 2: Performance of CAFÉ across chunks]

6.3 Results

In Table 1, we show the NDCU scores of the two systems under various settings. These scores are averaged over the six tasks in the test set, and are calculated with two dampening factors (see Section 4.2): γ = 0 and γ = 0.1, to simulate no tolerance and small tolerance for redundancy, respectively. Using γ = 0 creates a much stricter metric, since it does not give any credit to a passage that contains relevant but redundant information. Hence, the improvement obtained from enabling user feedback is smaller with γ
= 0 than the improvement obtained from feedback with γ = 0.1. This reveals a shortcoming of contemporary retrieval systems: when the user gives positive feedback on a passage, the system gives higher weights to the terms present in that passage and tends to retrieve other passages containing the same terms, and thus usually the same information. However, the user does not benefit from seeing such redundant passages, and is usually interested in other passages containing related information. It is informative to evaluate retrieval systems using our utility measure (with γ = 0), which accounts for novelty and thus gives a more realistic picture of how well a system can generalize from user feedback, rather than using traditional IR measures like recall and precision, which give an incomplete picture of the improvement obtained from user feedback.

Table 1: NDCU scores of Indri and CAFÉ for two dampening factors (γ) and various settings (F: Feedback, N: Novelty Detection, A: Anti-Redundant Ranking)

  γ    | Indri: Base  +PRF | CAFÉ: Base   +F  +F+N  +F+A  +F+N+A
  0    |        0.19  0.19 |       0.22  0.23  0.24  0.24  0.24
  0.1  |        0.28  0.29 |       0.24  0.35  0.35  0.36  0.36

Sometimes, however, users might indeed be interested in seeing the same information from multiple sources, as an indicator of its importance or reliability. In such a case, we can simply choose a higher value of γ, which corresponds to a higher tolerance for redundancy, and let the system tune its parameters accordingly. Since documents were processed chunk by chunk, it is interesting to see how the performance of the systems improves over time. Figures 1 and 2 show the performance trends for both systems across chunks. While the performance with and without feedback on the first few chunks is expected to be close, for subsequent chunks the performance curve with feedback enabled rises above the one for the no-feedback setting. The performance trends are not consistent across all chunks because
on-topic documents are not uniformly distributed over the chunks, making some queries 'easier' than others in certain chunks. Moreover, since Indri uses pseudo-relevance feedback while CAFÉ uses feedback based on actual relevance judgments, the improvement in the case of Indri is less dramatic than that of CAFÉ.

7. CONCLUDING REMARKS

This paper presents the first investigation of utility-based information distillation with a system that learns long-lasting information needs from fine-grained user feedback over a sequence of ranked passages. Our system, called CAFÉ, combines adaptive filtering, novelty detection and anti-redundant passage ranking in a unified framework for utility optimization. We developed a new scheme for automated evaluation and feedback, based on a semi-automatic procedure for acquiring rules that allow automatically matching nuggets against system responses. We also proposed an extension of the NDCG metric for assessing the utility of ranked passages as a weighted combination of relevance and novelty. Our experiments on the newly annotated TDT4 benchmark corpus show encouraging utility enhancement over Indri, and also over our own system with incremental learning and novelty detection turned off.

8. ACKNOWLEDGMENTS

We would like to thank Rosta Farzan, Jonathan Grady, Jaewook Ahn, Yefei Peng, and the Qualitative Data Analysis Program at the University of Pittsburgh led by Dr.
Stuart Shulman for their help with collecting and processing the extended TDT4 annotations used in our experiments. This work is supported in part by the National Science Foundation (NSF) under grant IIS-0434035, and the Defense Advanced Research Projects Agency (DARPA) under contracts NBCHD030010 and W0550432. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

9. ADDITIONAL AUTHORS

Jian Zhang (jianzhan@stat.purdue.edu)∗, Jaime Carbonell (jgc@cs.cmu.edu)†, Peter Brusilovsky (peterb+@pitt.edu)‡, Daqing He (dah44@pitt.edu)‡

10. REFERENCES

[1] J. Allan. Incremental Relevance Feedback for Information Filtering. Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval, pages 270-278, 1996.
[2] J. Allan, C. Wade, and A. Bolivar. Retrieval and Novelty Detection at the Sentence Level. Proceedings of the ACM SIGIR conference on research and development in information retrieval, 2003.
[3] C. Buckley, G. Salton, and J. Allan. Automatic Retrieval with Locality Information using SMART. NIST special publication, (500207):59-72, 1993.
[4] J. Callan. Learning While Filtering Documents. Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 224-231, 1998.
[5] J. Carbonell and J. Goldstein. The Use of MMR, Diversity-based Reranking for Reordering Documents and Producing Summaries. Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335-336, 1998.
[6] E. Efthimiadis. Query Expansion. Annual Review of Information Science and Technology (ARIST), 31:121-187, 1996.
[7] J. Fiscus and G. Doddington. Topic Detection and Tracking Overview. Topic Detection and Tracking: Event-based Information Organization, pages 17-31.
[8] R.
Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kambhatla, X. Luo, N. Nicolov, and S. Roukos. A Statistical Model for Multilingual Entity Detection and Tracking. NAACL/HLT, 2004.
[9] K. Järvelin and J. Kekäläinen. Cumulated Gain-based Evaluation of IR Techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446, 2002.
[10] J. Lin and D. Demner-Fushman. Automatically Evaluating Answers to Definition Questions. Proceedings of the 2005 Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005), 2005.

∗ Statistics Dept., Purdue University, West Lafayette, USA
† Language Technologies Inst., Carnegie Mellon University, Pittsburgh, USA
‡ School of Information Sciences, Univ. of Pittsburgh, Pittsburgh, USA

[11] J. Lin and D. Demner-Fushman. Will Pyramids Built of Nuggets Topple Over? Proceedings of HLT-NAACL, 2006.
[12] X. Luo, A. Ittycheriah, H. Jing, N. Kambhatla, and S. Roukos. A Mention-synchronous Coreference Resolution Algorithm based on the Bell Tree. Proc. of ACL, pages 136-143, 2004.
[13] G. Marton. Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgments. HLT/NAACL, 2006.
[14] E. Riloff. Automatically Constructing a Dictionary for Information Extraction Tasks. Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 811-816, 1993.
[15] S. Robertson and S. Walker. Microsoft Cambridge at TREC-9: Filtering track. The Ninth Text REtrieval Conference (TREC-9), pages 361-368.
[16] R. Schapire, Y. Singer, and A. Singhal. Boosting and Rocchio Applied to Text Filtering. Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 215-223, 1998.
[17] T. Strohman, D. Metzler, H. Turtle, and W.
Croft. Indri: A Language Model-based Search Engine for Complex Queries. Proceedings of the International Conference on Intelligence Analysis, 2004.
[18] The Linguistic Data Consortium. http://www.ldc.upenn.edu/.
[19] E. Voorhees. Overview of the TREC 2003 Question Answering Track. Proceedings of the Twelfth Text REtrieval Conference (TREC 2003), 2003.
[20] Y. Yang and B. Kisiel. Margin-based Local Regression for Adaptive Filtering. Proceedings of the twelfth international conference on Information and knowledge management, pages 191-198, 2003.
[21] Y. Yang, S. Yoo, J. Zhang, and B. Kisiel. Robustness of Adaptive Filtering Methods in a Cross-benchmark Evaluation. Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 98-105, 2005.
[22] C. Zhai, W. Cohen, and J. Lafferty. Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval. Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, pages 10-17, 2003.
[23] J. Zhang and Y. Yang. Robustness of Regularized Linear Classification Methods in Text Categorization. Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, pages 190-197, 2003.
[24] Y. Zhang. Using Bayesian Priors to Combine Classifiers for Adaptive Filtering. Proceedings of the 27th annual international conference on Research and development in information retrieval, pages 345-352, 2004.
[25] Y. Zhang, J. Callan, and T.
Minka. Novelty and Redundancy Detection in Adaptive Filtering. Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2002.

Utility-based Information Distillation Over Temporally Sequenced Documents

ABSTRACT

This paper examines a new approach to information distillation over temporally ordered documents, and proposes a novel evaluation scheme for such a framework. It combines the strengths of, and extends beyond, conventional adaptive filtering, novelty detection and non-redundant passage ranking with respect to long-lasting information needs ('tasks' with multiple queries). Our approach supports fine-grained user feedback via highlighting of arbitrary spans of text, and leverages such information for utility optimization in adaptive settings. For our experiments, we defined hypothetical tasks based on news events in the TDT4 corpus, with multiple queries per task. Answer keys (nuggets) were generated for each query, and a semi-automatic procedure was used for acquiring rules that allow automatically matching nuggets against system responses. We also propose an extension of the NDCG metric for assessing the utility of ranked passages as a combination of relevance and novelty. Our results show
encouraging utility enhancements using the new approach, compared to the baseline systems without the incremental learning or novelty detection components.

1. INTRODUCTION

Tracking new and relevant information from temporal data streams for users with long-lasting needs has been a challenging research topic in information retrieval. Adaptive filtering (AF) is one such task: the online prediction of the relevance of each new document with respect to pre-defined topics. Based on the initial query and a few positive examples (if available), an AF system maintains a profile for each such topic of interest, and constantly updates it based on feedback from the user. The incremental learning nature of AF systems makes them more powerful than standard search engines that support ad-hoc retrieval (e.g., Google and Yahoo) in terms of finding relevant information with respect to long-lasting topics of interest, and more attractive for users who are willing to provide feedback to adapt the system towards their specific information needs, without having to modify their queries manually. A variety of supervised learning algorithms (Rocchio-style classifiers, exponential-Gaussian models, local regression and logistic regression approaches) have been studied for adaptive settings, examined with explicit and implicit relevance feedback, and evaluated with respect to utility optimization on large benchmark data collections in the TREC (Text Retrieval Conferences) and TDT (Topic Detection and Tracking) forums [1, 4, 7, 15, 16, 20, 23, 24]. Regularized logistic regression [21] has been found representative of state-of-the-art approaches, and highly efficient for frequent model adaptations over large document collections such as the TREC-10 corpus (over 800,000 documents and 84 topics). Despite substantial achievements in recent adaptive filtering research, significant problems remain unsolved regarding how to leverage user feedback effectively and efficiently. Specifically,
the following issues may seriously limit the true utility of AF systems in real-world applications:

1. The user plays a passive role in the conventional adaptive filtering setup: he or she reacts to the system only when the system makes a 'yes' decision on a document, by confirming or rejecting that decision. A more 'active' alternative would be to allow the user to issue multiple queries for a topic, review a ranked list of candidate documents (or passages) per query, and provide feedback on the ranked list, thus refining their information need and requesting updated ranked lists. The latter form of user interaction has been highly effective in standard retrieval for ad hoc queries. How to deploy such a strategy for long-lasting information needs in AF settings is an open question for research.

2. The unit for receiving a relevance judgment ('yes' or 'no') is restricted to the document level in conventional AF. However, a real user may be willing to provide more informative, fine-grained feedback by highlighting pieces of text in a retrieved document as relevant, instead of labeling the entire document as relevant. Effectively leveraging such fine-grained feedback could substantially enhance the quality of an AF system. For this, we need to enable supervised learning from labeled pieces of text of arbitrary span, instead of just allowing labeled documents.

3. System-selected documents are often highly redundant. A major news event, for example, would be reported by multiple sources repeatedly for a while, making most of the information content in those articles redundant with each other. A conventional AF system would select all these redundant news stories for user feedback, wasting the user's time while offering little gain. Clearly, techniques for novelty detection can help in principle [25, 2, 22] to improve the utility of AF systems. However, the effectiveness of such techniques at the passage level to detect novelty with respect to the user's (fine-grained) feedback and to detect
redundancy in ranked lists remains to be evaluated using a measure of utility that mimics the needs of a real user.

To address the above limitations of current AF systems, we propose and examine a new approach in this paper, combining the strengths of conventional AF (incremental learning of topic models), multi-pass passage retrieval for long-lasting queries conditioned on topic, and novelty detection for removal of redundancy from user interactions with the system. We call the new process utility-based information distillation.

Note that conventional benchmark corpora for AF evaluations, which have relevance judgments at the document level and do not define tasks with multiple queries, are insufficient for evaluating the new approach. Therefore, we extended a benchmark corpus--the TDT4 collection of news stories and TV broadcasts--with task definitions, multiple queries per task, and answer keys per query. We have conducted our experiments on this extended TDT4 corpus and have made the additionally generated data publicly available for future comparative evaluations. To automatically evaluate the system-returned arbitrary spans of text using our answer keys, we further developed an evaluation scheme with a semi-automatic procedure for acquiring rules that can match nuggets against system responses. Moreover, we propose an extension of NDCG (Normalized Discounted Cumulated Gain) [9] for assessing the utility of ranked passages as a function of both relevance and novelty.

The rest of this paper is organized as follows. Section 2 outlines the information distillation process with a concrete example. Section 3 describes the technical cores of our system, called CAFÉ (CMU Adaptive Filtering Engine). Section 4 discusses issues with respect to evaluation methodology and proposes a new scheme. Section 5 describes the extended TDT4 corpus. Section 6 presents our experiments and results. Section 7 concludes the study and gives future
perspectives.

2. A SAMPLE TASK

Consider a news event--the escape of seven convicts from a Texas prison in December 2000 and their capture a month later. Assuming a user were interested in this event since its early stage, the information need could be: 'Find information about the escape of convicts from the Texas prison, and information related to their recapture'. The associated lower-level questions could be:

1. How many prisoners escaped?
2. Where and when were they sighted?
3. Who are their known contacts inside and outside the prison?
4. How are they armed?
5. Do they have any vehicles?
6. What steps have been taken so far?

We call such an information need a task, and the associated questions the queries in this task. A distillation system is supposed to monitor the incoming documents, process them chunk by chunk in a temporal order, select potentially relevant and novel passages from each chunk with respect to each query, and present a ranked list of passages to the user. Passage ranking here is based on how relevant a passage is with respect to the current query, how novel it is with respect to the current user history (of his or her interactions with the system), and how redundant it is compared to other passages with a higher rank in the list.

When presented with a list of passages, the user may provide feedback by highlighting arbitrary spans of text that he or she found relevant. These spans of text are taken as positive examples in the adaptation of the query profile, and also added to the user's history. Passages not marked by the user are taken as negative examples. As soon as the query profile is updated, the system re-issues a search and returns another ranked list of passages where the previously seen passages are either removed or ranked low, based on user preference. For example, if the user highlights '...officials have posted a $100,000 reward for their capture ...'
as a relevant answer to the query "What steps have been taken so far?", then the highlighted piece is used as an additional positive training example in the adaptation of the query profile. This piece of feedback is also added to the user history as a seen example, so that in the future, the system will not place another passage mentioning the '$100,000 reward' at the top of the ranked list. However, an article mentioning '...officials have doubled the reward money to $200,000 ...' might be ranked high, since it is both relevant to the (updated) query profile and novel with respect to the (updated) user history. The user may modify the original queries or add a new query during the process; the query profiles will be changed accordingly. Clearly, novelty detection is very important for the utility of such a system because of the iterative search. Without novelty detection, the old relevant passages would be shown to the user repeatedly in each ranked list.

Through the above example, we can see the main properties of our new framework for utility-based information distillation over temporally ordered documents. Our framework combines and extends the power of adaptive filtering (AF), ad hoc retrieval (IR) and novelty detection (ND). Compared to standard IR, our approach has the power of incrementally learning long-term information needs and modeling a sequence of queries within a task. Compared to conventional AF, it enables a more active role of the user in refining his or her information needs and requesting new results by allowing relevance and novelty feedback via highlighting of arbitrary spans of text in passages returned by the system. Compared to past work, this is the first evaluation of novelty detection integrated with adaptive filtering for sequenced queries that allows flexible user feedback over ranked passages. The combination of AF, IR and ND with the new extensions raises an important research question regarding evaluation methodology: how can
we measure the utility of such an information distillation system? Existing metrics in standard IR, AF and ND are insufficient, and new solutions must be explored, as we will discuss in Section 4, after describing the technical cores of our system in the next section.

3. TECHNICAL CORES

The core components of CAFÉ are: 1) AF for incremental learning of query profiles, 2) IR for estimating relevance of passages with respect to query profiles, 3) ND for assessing novelty of passages with respect to the user's history, and 4) an anti-redundancy component to remove redundancy from ranked lists.

3.1 Adaptive Filtering Component

We use a state-of-the-art algorithm in the field--the regularized logistic regression method, which had the best results on several benchmark evaluation corpora for AF [21]. Logistic regression (LR) is a supervised learning algorithm for statistical classification. Based on a training set of labeled instances, it learns a class model which can then be used to predict the labels of unseen instances. Its performance as well as its efficiency in terms of training time make it a good candidate when frequent updates of the class model are required, as is the case in adaptive filtering, where the system must learn from each new piece of feedback provided by the user (see [21] and [23] for computational complexity and implementation issues).

In adaptive filtering, each query is considered as a class, and the probability of a passage belonging to this class corresponds to the degree of relevance of the passage with respect to the query. For training the model, we use the query itself as the initial positive training example of the class, and the user-highlighted pieces of text (marked as Relevant or Not-relevant) during feedback as additional training examples. To address the cold-start issue in the early stage before any user feedback is obtained, the system uses a small sample from a retrospective corpus as the initial negative examples in the
training set. The details of using logistic regression for adaptive filtering (assigning different weights to positive and negative training instances, and regularizing the objective function to prevent over-fitting on training data) are presented in [21]. The class model w* learned by logistic regression, or the query profile, is a vector whose dimensions are individual terms and whose elements are the regression coefficients, indicating how influential each term is in the query profile. The query profile is updated whenever a new piece of user feedback is received. A temporally decaying weight can be applied to each training example, as an option, to emphasize the most recent user feedback.

3.2 Passage Retrieval Component

We use standard IR techniques in this part of our system. Incoming documents are processed in chunks, where each chunk can be defined as a fixed span of time or as a fixed number of documents, as preferred by the user. For each incoming document, corpus statistics like the IDF (Inverse Document Frequency) of each term are updated. We use a state-of-the-art named entity identifier and tracker [8, 12] to identify person and location names, and merge them with co-referent named entities seen in the past. Then the documents are segmented into passages, which can be a whole document, a paragraph, a sentence, or any other continuous span of text, as preferred. Each passage is represented using a vector of TF-IDF (Term Frequency--Inverse Document Frequency) weights, where a term can be a word or a named entity. Given a query profile, i.e.
the logistic regression solution w* as described in Section 3.1, the system computes the posterior probability of relevance for each passage x as

    f(x) = P(relevant | x, w*) = 1 / (1 + exp(-w* . x)).

Passages are ordered by their relevance scores, and the ones with scores above a threshold (tuned on a training set) comprise the relevance list that is passed on to the novelty detection step.

3.3 Novelty Detection Component

CAFÉ maintains a user history H(t), which contains all the spans of text h_i that the user highlighted (as feedback) during his or her past interactions with the system, up to the current time t. Denoting the history as

    H(t) = {h_1, h_2, ..., h_m},

the novelty score of a new candidate passage x is computed as:

    novelty(x) = 1 - max_{1<=i<=m} cos(x, h_i),

where both the candidate passage x and the highlighted spans of text h_i are represented as TF-IDF vectors. The novelty score of each passage is compared to a prespecified threshold (also tuned on a training set), and any passage with a score below this threshold is removed from the relevance list.

3.4 Anti-redundant Ranking Component

Although the novelty detection component ensures that only novel (previously unseen) information remains in the relevance list, this list might still contain the same novel information at multiple positions in the ranked list. Suppose, for example, that the user has already read about a $100,000 reward for information about the escaped convicts. A new piece of news that the reward has been increased to $200,000 is novel, since the user hasn't read about it yet. However, multiple news sources would report this news, and we might end up showing (redundant) articles from all these sources in a ranked list. Hence, a ranked list should also be made non-redundant with respect to its own contents. We use a simplified version of the Maximal Marginal Relevance method [5], originally developed for combining relevance and novelty in text retrieval and summarization. Our procedure starts with the current list of passages sorted by relevance (Section 3.2) and filtered by the Novelty Detection
component (Section 3.3), and generates a new non-redundant list as follows:

1. Take the top passage in the current list as the top one in the new list.
2. Add the next passage x in the current list to the new list only if

       r(x) < t,   where   r(x) = max_{s in L_new} cos(x, s)

   and L_new is the set of passages already selected in the new list.
3. Repeat step 2 until all the passages in the current list have been examined.

After applying the above-mentioned algorithm, each passage in the new list is sufficiently dissimilar to the others, thus favoring diversity rather than redundancy in the new ranked list. The anti-redundancy threshold t is tuned on a training set.

4. EVALUATION METHODOLOGY

The approach we proposed above for information distillation raises important issues regarding evaluation methodology. Firstly, since our framework allows the output to be passages at different levels of granularity (e.g., k-sentence windows where k may vary) instead of a fixed length, it is not possible to have pre-annotated relevance judgments at all such granularity levels. Secondly, since we wish to measure the utility of the system output as a combination of both relevance and novelty, traditional relevance-only based measures must be replaced by measures that penalize the repetition of the same information in the system output across time. Thirdly, since the output of the system is ranked lists, we must reward those systems that present useful information (both relevant and previously unseen) using shorter ranked lists, and penalize those that present the same information using longer ranked lists. None of the existing measures in ad hoc retrieval, adaptive filtering, novelty detection or other related areas (text summarization and question answering) have desirable properties in all three aspects. Therefore, we must develop a new evaluation methodology.

4.1 Answer Keys

To enable the evaluation of a system whose output consists of passages of arbitrary length, we borrow the concept of answer keys from
the Question Answering (QA) community, where systems are allowed to return arbitrary spans of text as answers. Answer keys define what should be present in a system response to receive credit, and are comprised of a collection of information nuggets, i.e., factoid units about which human assessors can make binary decisions of whether or not a system response contains them. Defining answer keys and making the associated binary decisions are conceptual tasks that require semantic mapping [19], since system-returned passages can contain the same information expressed in many different ways. Hence, QA evaluations have relied on human assessors for the mapping between various expressions, making the process costly, time consuming, and not scalable to large query and document collections, or to extensive system evaluations with various parameter settings.

4.1.1 Automating Evaluation based on Answer Keys

Automatic evaluation methods would allow for faster system building and tuning, as well as provide an objective and affordable way of comparing various systems. Recently, such methods have been proposed, all more or less based on the idea of n-gram co-occurrences. Pourpre [10] assigns a fractional recall score to a system response based on its unigram overlap with a given nugget's description. For example, a system response 'A B C' has recall 3/4 with respect to a nugget with description 'A B C D'. However, such an approach is unfair to systems that present the same information but using words other than A, B, C, and D.
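The unigram-overlap scoring attributed to Pourpre above can be sketched as follows. This is only a minimal illustration of the idea: the function name and whitespace tokenization are our own, and the actual Pourpre tool applies additional refinements (e.g., term weighting) not shown here.

```python
def unigram_recall(response: str, nugget: str) -> float:
    """Fraction of the nugget-description unigrams that occur in the response."""
    nugget_terms = nugget.lower().split()
    response_terms = set(response.lower().split())
    if not nugget_terms:
        return 0.0
    matched = sum(1 for term in nugget_terms if term in response_terms)
    return matched / len(nugget_terms)
```

For the example in the text, unigram_recall('A B C', 'A B C D') yields 3/4, while a response expressing the same fact in entirely different words scores 0, which is precisely the unfairness noted above.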
Another open issue is how to weight individual words in measuring the closeness of a match. For example, consider the question "How many prisoners escaped?". In the nugget 'Seven prisoners escaped from a Texas prison', there is no indication that 'seven' is the keyword, and that it must be matched to get any relevance credit. Using IDF values does not help, since 'seven' will generally not have a higher IDF than words like 'Texas' and 'prison'. Also, redefining the nugget as just 'seven' does not solve the problem, since now it might spuriously match any mention of 'seven' out of context. Nuggeteer [13] works on similar principles but makes binary decisions about whether a nugget is present in a given system response by tuning a threshold. However, it is also plagued by 'spurious relevance', since not all words contained in the nugget description (or known correct responses) are central to the nugget.

4.1.2 Nugget-Matching Rules

We propose a reliable automatic method for determining whether a snippet of text contains a given nugget, based on nugget-matching rules, which are generated using a semi-automatic procedure explained below. These rules are essentially Boolean queries that will only match against snippets that contain the nugget. For instance, a candidate rule for matching answers to "How many prisoners escaped?" is (Texas AND seven AND escape AND (convicts OR prisoners)), possibly with other synonyms and variants in the rule. For a corpus of news articles, which usually follow a typical formal prose, it is fairly easy to write such simple rules to match expected answers using a bootstrap approach, as described below.

We propose a two-stage approach, inspired by AutoSlog [14], that combines the strength of humans in identifying semantically equivalent expressions and the strength of the system in gathering statistical evidence from a human-annotated corpus of documents. In the first stage, human subjects annotated (using a
highlighting tool) portions of on-topic documents that contained answers to each nugget. In the second stage, subjects used our rule generation tool to create rules that would match the annotations for each nugget. The tool allows users to enter a Boolean rule as a disjunction of conjunctions (e.g., ((a AND b) OR (a AND c AND d) OR (e))). Given a candidate rule, our tool uses it as a Boolean query over the entire set of on-topic documents and calculates its recall and precision with respect to the annotations that it is expected to match. Hence, the subjects can start with a simple rule and iteratively refine it until they are satisfied with its recall and precision. We observed that it was very easy for humans to improve the precision of a rule by tweaking its existing conjunctions (adding more ANDs), and to improve the recall by adding more conjunctions to the disjunction (adding more ORs).

As an example, let's try to create a rule for the nugget which says that seven prisoners escaped from the Texas prison. We start with a simple rule: (seven). When we input this into the rule generation tool, we realize that this rule matches many spurious occurrences of seven (e.g., '...seven states ...') and thus gets a low precision score. We can further qualify our rule: (Texas AND seven AND convicts). Next, by looking at the 'missed annotations', we realize that some news articles mentioned "...seven prisoners escaped ...". We then replace convicts with the disjunction (convicts OR prisoners). We continue tweaking the rule in this manner until we achieve a sufficiently high recall and precision--i.e.
the (small number of) misses and false alarms can be safely ignored. Thus we can create nugget-matching rules that succinctly capture various ways of expressing a nugget, while avoiding matching incorrect (or out-of-context) responses. Human involvement in the rule creation process ensures high-quality generic rules which can then be used to evaluate arbitrary system responses reliably. (LDC [18] already provides relevance judgments for 100 topics on the TDT4 corpus; we further ensured that these judgments are exhaustive on the entire corpus using pooling.)

4.2 Evaluating the Utility of a Sequence of Ranked Lists

The utility of a retrieval system can be defined as the difference between how much the user gained in terms of useful information, and how much the user lost in terms of time and energy. We calculate this utility from the utilities of individual passages as follows. After reading each passage returned by the system, the user derives some gain depending on the presence of relevant and novel information, and incurs a loss in terms of the time and energy spent in going through the passage. However, the likelihood that the user would actually read a passage depends on its position in the ranked list. Hence, for a query q, the expected utility of a passage p_i at rank i can be defined as

    U(p_i, q) = P(i) * (G(p_i, q) - L(p_i, q)),

where P(i) is the probability that the user would go through a passage at rank i. The expected utility for an entire ranked list of length n can be calculated simply by adding the expected utility of each passage:

    U(q) = sum_{i=1}^{n} P(i) * (G(p_i, q) - L(p_i, q)).

If we set P(i) = 1 / log2(1 + i) and ignore the loss term, then we get the recently popularized metric called Discounted Cumulated Gain (DCG) [9], where Gain(p_i, q) is defined as the graded relevance of passage p_i. However, without the loss term, DCG is a purely recall-oriented metric and not suitable for an adaptive filtering setting, where the system's utility depends in part on its ability to limit the number of items shown to the user. Although P(i) could be defined based on empirical studies of user
behavior, for simplicity, we use the DCG position discount, P(i) = 1 / log2(1 + i). The gain G(p_i, q) of passage p_i with respect to the query q is a function of: 1) the number of relevant nuggets present in p_i, and 2) the novelty of each of these nuggets. We combine these two factors as follows. For each nugget N_j, we assign an initial weight w_j, and also keep a count n_j of the number of times this nugget has been seen by the user in the past. The gain derived from each subsequent occurrence of the same nugget is assumed to be reduced by a dampening factor γ. Thus, G(p_i, q) is defined as

    G(p_i, q) = sum_{N_j in C(p_i, q)} w_j * γ^{n_j},

where C(p_i, q) is the set of all nuggets that appear in passage p_i and also belong to the answer key of query q. The initial weights w_j are all set to 1.0 in our experiments, but can also be set based on a pyramid approach [11]. The choice of the dampening factor γ determines the user's tolerance for redundancy. When γ = 0, a nugget will only receive credit for its first occurrence, i.e., when n_j is zero. For 0 < γ < 1, a nugget receives smaller credit for each successive occurrence. When γ = 1, no dampening occurs and repeated occurrences of a nugget receive the same credit. Note that the nugget occurrence counts are preserved between evaluation of successive ranked lists returned by the system, since the users are expected to remember what the system showed them in the past. We define the loss L(p_i, q) as a constant cost c (we use 0.1) incurred when reading a system-returned passage. Thus, our metric can be rewritten as

    DCU(q) = sum_{i=1}^{n} P(i) * (G(p_i, q) - c).

Due to the similarity with Discounted Cumulated Gain (DCG), we call our metric Discounted Cumulated Utility (DCU). The DCU score obtained by the system is converted to a Normalized DCU (NDCU) score by dividing it by the DCU score of the ideal ranked list, which is created by ordering passages by their decreasing utility scores U(p_i, q) and stopping when U(p_i, q) <= 0, i.e.,
when the gain is less than or equal to the cost of reading the passage.

5. DATA

TDT4 was the benchmark corpus used in the TDT2002 and TDT2003 evaluations. The corpus consists of over 90,000 news articles from multiple sources (AP, NYT, CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America, PRI The World, etc.) published between October 2000 and January 2001, in Arabic, English, and Mandarin. Speech-recognized and machine-translated versions of the non-English articles were provided as well. LDC [18] has annotated the corpus with 100 topics that correspond to various news events in this time period. Out of these, we selected a subset of 12 actionable events, and defined corresponding tasks for them. For each task, we manually defined a profile consisting of an initial set of (5 to 10) queries, a free-text description of the user history, i.e., what the user already knows about the event, and a list of known on-topic and off-topic documents (if available) as training examples. For each query, we generated answer keys and corresponding nugget-matching rules using the procedure described in Section 4.1.2, and produced a total of 120 queries, with an average of 7 nuggets per query.

6. EXPERIMENTS AND RESULTS

6.1 Baselines

We used Indri [17], a popular language-model based retrieval engine, as a baseline for comparison with CAFÉ.
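For concreteness, the DCU computation of Section 4.2 can be sketched as follows. This is our own simplified illustration, not the evaluation code used in the experiments: passages are represented only by the set of answer-key nuggets they contain, and the normalization by the ideal ranked list (which yields NDCU) is omitted.

```python
import math

def dcu(ranked_lists, nugget_weights, gamma=0.1, cost=0.1):
    """Discounted Cumulated Utility over a sequence of ranked lists.

    ranked_lists: a sequence of ranked lists, one per chunk; each ranked
        list is a list of passages, and each passage is the set of nugget
        ids it contains (as determined by the nugget-matching rules).
    nugget_weights: initial weight w_j for each nugget id (1.0 in the paper).
    gamma: dampening factor for repeated nuggets; cost: reading cost c.
    Nugget occurrence counts persist across ranked lists, as in Section 4.2.
    """
    seen = {}  # n_j: number of times nugget j has been shown so far
    total = 0.0
    for passages in ranked_lists:
        for rank, nuggets in enumerate(passages, start=1):
            p_i = 1.0 / math.log2(1 + rank)  # P(i), the DCG position discount
            gain = sum(nugget_weights[j] * gamma ** seen.get(j, 0)
                       for j in nuggets)
            total += p_i * (gain - cost)
            for j in nuggets:
                seen[j] = seen.get(j, 0) + 1
    return total
```

With gamma = 0, a passage repeating an already-shown nugget contributes only the negative reading cost, which is the "no tolerance for redundancy" setting used in the results below.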
Indri supports standard search engine functionality, including pseudo-relevance feedback (PRF) [3, 6], and is representative of a typical query-based retrieval system. Indri does not support any kind of novelty detection. We compare Indri with PRF turned on and off, against CAFÉ with user feedback, novelty detection and anti-redundant ranking turned on and off.

6.2 Experimental Setup

We divided the TDT4 corpus spanning 4 months into 10 chunks, each defined as a period of 12 consecutive days. At any given point of time in the distillation process, each system accessed the past data up to the current point, and returned a ranked list of up to 50 passages per query. The 12 tasks defined on the corpus were divided into a training and a test set with 6 tasks each. Each system was allowed to use the training set to tune its parameters for optimizing NDCU, including the relevance threshold for both Indri and CAFÉ, and the novelty and anti-redundancy thresholds for CAFÉ. The NDCU for each system run is calculated automatically. User feedback was also simulated: relevance judgments for each system-returned passage (as determined by the nugget-matching rules described in Section 4.1.2) were used as user feedback in the adaptation of query profiles and user histories.

6.3 Results

In Table 1, we show the NDCU scores of the two systems under various settings. These scores are averaged over the six tasks in the test set, and are calculated with two dampening factors (see Section 4.2): γ = 0 and γ = 0.1, to simulate no tolerance and a small tolerance for redundancy, respectively. Using γ = 0 creates a much stricter metric, since it does not give any credit to a passage that contains relevant but redundant information. Hence, the improvement obtained from enabling user feedback is smaller with γ = 0 than the improvement obtained from feedback with γ = 0.1. This reveals a shortcoming of contemporary retrieval systems--when the user
gives positive feedback on a passage, the system gives higher weights to the terms present in that passage and tends to retrieve other passages containing the same terms, and thus usually the same information. However, the user does not benefit from seeing such redundant passages, and is usually interested in other passages containing related information. It is informative to evaluate retrieval systems using our utility measure (with γ = 0), which accounts for novelty and thus gives a more realistic picture of how well a system can generalize from user feedback, rather than using traditional IR measures like recall and precision, which give an incomplete picture of the improvement obtained from user feedback.

[Figure 1: Performance of Indri across chunks]
[Figure 2: Performance of CAFÉ across chunks]
[Table 1: NDCU Scores of Indri and CAFÉ for two dampening factors (γ), and various settings (F: Feedback, ...)]

Sometimes, however, users might indeed be interested in seeing the same information from multiple sources, as an indicator of its importance or reliability. In such a case, we can simply choose a higher value for γ, which corresponds to a higher tolerance for redundancy, and hence let the system tune its parameters accordingly.

Since documents were processed chunk by chunk, it would be interesting to see how the performance of the systems improves over time. Figures 1 and 2 show the performance trends for both systems across chunks. While the performance with and without feedback on the first few chunks is expected to be close, for subsequent chunks, the performance curve with feedback enabled rises above the one for the no-feedback setting. The performance trends are not consistent across all chunks because on-topic documents are not uniformly distributed over the chunks, making some queries 'easier' than others in certain chunks. Moreover, since Indri uses pseudo-relevance feedback while CAFÉ uses feedback based on
actual relevance judgments, the improvement in the case of Indri is less dramatic than that of CAFÉ.

7. CONCLUDING REMARKS

This paper presents the first investigation on utility-based information distillation with a system that learns long-lasting information needs from fine-grained user feedback over a sequence of ranked passages. Our system, called CAFÉ, combines adaptive filtering, novelty detection and anti-redundant passage ranking in a unified framework for utility optimization. We developed a new scheme for automated evaluation and feedback based on a semi-automatic procedure for acquiring rules that allow automatically matching nuggets against system responses. We also proposed an extension of the NDCG metric for assessing the utility of ranked passages as a weighted combination of relevance and novelty. Our experiments on the newly annotated TDT4 benchmark corpus show encouraging utility enhancement over Indri, and also over our own system with incremental learning and novelty detection turned off.

The Role of Compatibility in the Diffusion of Technologies Through Social Networks

Nicole Immorlica, Microsoft Research, Redmond WA, nickle@microsoft.com
Jon Kleinberg, Dept. of Computer Science, Cornell University, Ithaca NY, kleinber@cs.cornell.edu
Mohammad Mahdian, Yahoo! Research, Santa Clara CA, mahdian@yahoo-inc.com
Tom Wexler, Dept. of Computer Science, Cornell University, Ithaca NY, wexler@cs.cornell.edu

ABSTRACT

In many settings, competing technologies--for example, operating systems, instant messenger systems, or document formats--can be seen adopting a limited amount of compatibility with one another; in other words, the difficulty in using multiple technologies is balanced somewhere between the two extremes of impossibility and effortless interoperability. There are a range of reasons why this phenomenon occurs, many of which--based on legal, social, or business considerations--seem to defy concise mathematical models. Despite this, we show that the advantages of limited compatibility can arise in a very simple model of diffusion in social networks, thus offering a basic explanation for this phenomenon in purely strategic terms.

Our approach builds on work on the diffusion of innovations in the economics literature, which seeks to model how a new technology A might spread through a social network of individuals who are currently users of technology B. We consider several ways of capturing the compatibility of A and B, focusing primarily on a model in which users can choose to adopt A, adopt B, or--at an extra cost--adopt both A and B. We characterize how the ability of A to spread depends on both its quality relative to B, and also this additional cost of adopting both, and find some surprising non-monotonicity properties in the dependence on these parameters: in some cases, for one technology to survive the introduction of another, the cost of adopting both technologies must be balanced within a narrow, intermediate range. We also extend the framework to the case of multiple technologies, where we find that a simple model captures the phenomenon of two firms adopting a limited strategic alliance to defend against a new, third technology.

(This work has been supported in part by NSF grants CCF-0325453, IIS-0329064, CNS-0403340, and BCS-0537606, a Google Research Grant, a Yahoo! Research Alliance Grant, the Institute for the Social Sciences at Cornell, and the John D. and Catherine T. MacArthur Foundation.)

Categories and Subject Descriptors: J.4 [Social and Behavioral Sciences]: Economics

General Terms: Economics, Theory

1. INTRODUCTION

Diffusion and Networked Coordination Games. A fundamental question in the social sciences is to understand the ways in which new ideas, behaviors, and practices diffuse through populations. Such issues arise, for example, in the adoption of new technologies, the emergence of new social norms or organizational conventions, or the spread of human languages [2, 14, 15, 16, 17]. An active line of research in economics and mathematical sociology is concerned with modeling these types of diffusion processes as a coordination game played on a social network [1, 5, 7, 13, 19].

We begin by discussing one of the most basic game-theoretic diffusion models, proposed in an influential paper of Morris [13], which will form the starting point for our work here. We describe it in terms of the following technology adoption scenario, though there are many other examples that would serve the same purpose. Suppose there are two instant messenger (IM) systems A and B, which are not interoperable--users must be on the same system in order to communicate. There is a social network G on the users, indicating who wants to talk to whom, and the endpoints of each edge (v, w) play a coordination game with possible strategies A or B: if v and w each choose IM system B, then they each receive a payoff of q (since they can talk to each other using system B); if they each choose IM system A, then they each receive a payoff of 1 - q; and if they choose opposite systems, then they each receive a payoff of 0 (reflecting the lack of interoperability). Note that A is the better technology if q < 1/2, in the sense that A-A payoffs would then exceed B-B payoffs, while A is the worse technology if q > 1/2
A number of qualitative insights can be derived from a diffusion model even at this level of simplicity. Specifically, consider a network G, and let all nodes initially play B. Now suppose a small number of nodes begin adopting strategy A instead. If we apply best-response updates to nodes in the network, then nodes in effect will be repeatedly applying the following simple rule: switch to A if enough of your network neighbors have already adopted A. (E.g., you begin using a particular IM system -- or social-networking site, or electronic document format -- if enough of your friends are users of it.) As this unfolds, there can be a cascading sequence of nodes switching to A, such that a network-wide equilibrium is reached in the limit: this equilibrium may involve uniformity, with all nodes adopting A; or it may involve coexistence, with the nodes partitioned into a set adopting A and a set adopting B, and edges yielding zero payoff connecting the two sets. Morris [13] provides a set of elegant graph-theoretic characterizations for when these qualitatively different types of equilibria arise, in terms of the underlying network topology and the quality of A relative to B (i.e., the relative sizes of 1 − q and q).

Compatibility, Interoperability, and Bilinguality. In most of the settings that form the motivation for diffusion models, coexistence (however unbalanced) is the typical outcome: for example, human languages and social conventions coexist along geographic boundaries; it is a stable outcome for the financial industry to use Windows while the entertainment industry uses Mac OS. An important piece that is arguably missing from the basic game-theoretic models of diffusion, however, is a more detailed picture of what is happening at the coexistence boundary, where the basic form of the model posits nodes that adopt A linked to nodes that adopt B. In these motivating settings for the models, of course, one very often sees interface regions in which individuals essentially become bilingual. In the case of human language diffusion, this bilinguality is meant literally: geographic regions where there is substantial interaction with speakers of two different languages tend to have inhabitants who speak both. But bilinguality is also an essential feature of technological interaction: in the end, many people have accounts on multiple IM systems, for example, and more generally many maintain the ability to work within multiple computer systems so as to collaborate with people embedded in each.

Taking this view, it is natural to ask how diffusion models behave when extended so that certain nodes can be bilingual in this very general sense, adopting both strategies at some cost to themselves. What might we learn from such an extension? To begin with, it has the potential to provide a valuable perspective on the question of compatibility and incompatibility that underpins competition among technology companies. There is a large literature on how compatibility among technologies affects competition between firms, and in particular how incompatibility may be a beneficial strategic decision for certain participants in a market [3, 4,
8, 9, 12]. Whinston [18] provides an interesting taxonomy of different kinds of strategic incompatibility; and specific industry case studies (including theoretical perspectives) have recently been carried out for commercial banks [10], copying and imaging technology [11], and instant messenger systems [6]. While these existing models of compatibility capture network effects in the sense that the users in the market prefer to use technology that is more widespread, they do not capture the more fine-grained network phenomenon represented by diffusion -- that each user is including its local view in the decision, based on what its own social network neighbors are doing. A diffusion model that incorporated such extensions could provide insight into the structure of boundaries in the network between technologies; it could potentially offer a graph-theoretic basis for how incompatibility may benefit an existing technology, by strengthening these boundaries and preventing the incursion of a new, better technology.

The present work: Diffusion with bilingual behavior. In this paper, we develop a set of diffusion models that incorporate notions of compatibility and bilinguality, and we find that some unexpected phenomena emerge even from very simple versions of the models. We begin with perhaps the simplest way of extending Morris's model discussed above to incorporate bilingual behavior. Consider again the example of IM systems A and B, with the payoff structure as before, but now suppose that each node can adopt a third strategy, denoted AB, in which it decides to use both A and B. An adopter of AB gets to use, on an edge-by-edge basis, whichever of A or B yields higher payoffs in each interaction, and the payoff structure is defined according to this principle: if an adopter of AB interacts with an adopter of B, both receive q; with an adopter of A, both receive 1 − q; and with another adopter of AB, both receive max(q, 1 − q). Finally, an adopter of AB pays a fixed-cost penalty of c (i.e., −c is added to its total payoff) to represent the cost of having to maintain both technologies. Thus, in this model, there are two parameters that can be varied: the relative qualities of the two technologies (encoded by q), and the cost of being bilingual, which reflects a type of incompatibility (encoded by c).

Following [13] we assume the underlying graph G is infinite; we further assume that for some natural number Δ, each node has degree Δ.(1) We are interested in the question posed at the outset, of whether a new technology A can spread through a network where almost everyone is initially using B. Formally, we say that strategy A can become epidemic if the following holds: starting from a state in which all nodes in a finite set S adopt A, and all other nodes adopt B, a sequence of best-response updates (potentially with tie-breaking) in G − S causes every node to eventually adopt A. We also introduce one additional bit of notation that will be useful in the subsequent sections: we define r = c/Δ, the fixed penalty for adopting AB, scaled so that it is a per-edge cost.

In the Morris model, where the only strategic options are A and B, a key parameter is the contagion threshold of G, denoted q*(G): this is the supremum of q for which A can become epidemic in G with parameter q in the payoff structure. A central result of [13] is that 1/2 is the maximum possible contagion threshold for any graph: sup_G q*(G) = 1/2. Indeed, there exist graphs in which the contagion threshold is as large as 1/2 (including the infinite line -- the unique infinite connected 2-regular graph); on the other hand, one can show there is no graph with a contagion threshold greater than 1/2. In our model where the bilingual strategy AB is possible, we have a two-dimensional parameter space, so instead of a contagion threshold q*(G) we have an epidemic region Ω(G), which is the subset of the (q, r) plane for which A can become epidemic in G. And in place of the maximum possible contagion threshold sup_G q*(G), we must consider the general epidemic region Ω = ∪_G Ω(G), where the union is taken over all infinite Δ-regular graphs; this is the set of all (q, r) values for which A can become epidemic in some Δ-regular network.

(1) We can obtain strictly analogous results by taking a sequence of finite graphs and expressing results asymptotically, but the use of an infinite bounded-degree graph G makes it conceptually much cleaner to express the results (as it does in Morris's paper [13]): less intricate quantification is needed to express the diffusion properties, and the qualitative phenomena remain the same.

[Figure 1: The region of the (q, r) plane for which technology A can become epidemic on the infinite line.]

Our Results. We find, first of all, that the epidemic region Ω(G) can be unexpectedly complex, even for very simple graphs G.
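The dynamics just defined are easy to simulate. The following sketch (illustrative code, not from the paper) runs sequential best-response updates on a finite path with Δ = 2, seeding a single node with A; for q = 0.45 it reproduces the non-monotone behavior described in this section: A takes over when r is small or large, while an intermediate r leaves a bilingual AB boundary that blocks the epidemic.

```python
def best_response(nbrs, q, r, delta=2):
    """Best response of a node given its neighbors' strategies.

    Edge payoffs: A-A gives 1-q, B-B gives q, A-B gives 0; AB earns the
    better of the two on each edge and pays the fixed cost c = r*delta.
    """
    def payoff(s):
        total = 0.0
        for t in nbrs:
            if s == "A":
                total += (1 - q) if t in ("A", "AB") else 0.0
            elif s == "B":
                total += q if t in ("B", "AB") else 0.0
            else:  # s == "AB"
                total += {"A": 1 - q, "B": q, "AB": max(q, 1 - q)}[t]
        return total - (r * delta if s == "AB" else 0.0)
    # max() returns the first maximizer, so listing A before AB before B
    # implements tie-breaking in favor of A, then AB.
    return max(["A", "AB", "B"], key=payoff)

def run(q, r, n=21, sweeps=200):
    """Seed the middle node with A and sweep best responses to a fixed point."""
    state = ["B"] * n
    seed = n // 2
    state[seed] = "A"  # the seed is endowed with A and never updates
    for _ in range(sweeps):
        changed = False
        for v in range(n):
            if v == seed:
                continue
            nbrs = [state[u] for u in (v - 1, v + 1) if 0 <= u < n]
            s = best_response(nbrs, q, r)
            if s != state[v]:
                state[v], changed = s, True
        if not changed:
            break
    return state

# q = 0.45 (A is the better technology): A takes over for small or large r,
# but a bilingual AB boundary forms and blocks it for intermediate r.
print(set(run(0.45, 0.04)), set(run(0.45, 0.35)), set(run(0.45, 0.20)))
```

The endpoints of the finite path have degree 1 rather than Δ, so this is only a stand-in for the infinite line, but the three regimes are already visible at this size.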
Figure 1 shows the epidemic region for the infinite line; one observes that neither the region Ω(G) nor its complement is convex in the positive quadrant, due to the triangular cut-out shape. (We find analogous shapes that become even more complex for other simple infinite graph structures; see for example Figures 3 and 4.) In particular, this means that for values of q close to but less than 1/2, strategy A can become epidemic on the infinite line if r is sufficiently small or sufficiently large, but not if r takes values in some intermediate interval. In other words, strategy B (which represents the worse technology, since q < 1/2) will survive if and only if the cost of being bilingual is calibrated to lie in this middle interval. This is a reflection of limited compatibility -- that it may be in the interest of an incumbent technology to make it difficult but not too difficult to use a new technology -- and we find it surprising that it should emerge from a basic model on such a simple network structure.

It is natural to ask whether there is a qualitative interpretation of how this arises from the model, and in fact it is not hard to give such an interpretation, as follows. When r is very small, it is cheap for nodes to adopt AB as a strategy, and so AB spreads through the whole network. Once AB is everywhere, the best-response updates cause all nodes to switch to A, since they get the same interaction benefits without paying the penalty of r. When r is very large, nodes at the interface, with one A neighbor and one B neighbor, will find it too expensive to choose AB, so they will choose A (the better technology), and hence A will spread step-by-step through the network. When r takes an intermediate value, a node v at the interface, with one A neighbor and one B neighbor, will find it most beneficial to adopt AB as a strategy. Once this happens, the neighbor of v who is playing B will not have sufficient incentive to switch, and the best-response updates make no further progress. Hence, this intermediate value of r allows a boundary of AB to form between the adopters of A and the adopters of B. In short, the situation facing B is this: if it is too permissive, it gets invaded by AB followed by A; if it is too inflexible, forcing nodes to choose just one of A or B, it gets destroyed by a cascade of direct conversions to A. But if it has the right balance in the value of r, then the adoptions of A come to a stop at a bilingual boundary where nodes adopt AB.

Moving beyond specific graphs G, we find that this non-convexity holds in a much more general sense as well, by considering the general epidemic region Ω = ∪_G Ω(G). For any given value of Δ, the region Ω is a complicated union of bounded and unbounded polygons, and we do not have a simple closed-form description for it. However, we can show via a potential function argument that no point (q, r) with q > 1/2 belongs to Ω. Moreover, we can show the existence of a point (q, r) ∈ Ω for which q < 1/2. On the other hand, consideration of the epidemic region for the infinite line shows that (1/2, r) ∈ Ω for r = 0 and for r sufficiently large. Hence, neither Ω nor its complement is convex in the positive quadrant.

Finally, we also extend a characterization that Morris gave for the contagion threshold [13], producing a somewhat more intricate characterization of the region Ω(G). In Morris's setting, without an AB strategy, he showed that A cannot become epidemic with parameter q if and only if every cofinite set of nodes contains a subset S that functions as a well-connected community: every node in S has at least a (1 − q) fraction of its neighbors in S. In other words, tightly-knit communities are the natural obstacles to diffusion in his setting. With the AB strategy as a further option, a more complex structure becomes the obstacle: we show that A cannot become epidemic with parameters (q, r) if and only if every cofinite set contains a structure consisting of a tightly-knit community with a particular kind of interface of neighboring nodes. We show that such a structure allows nodes to adopt AB at the interface and B inside the community itself, preventing the further spread of A; and conversely, this is the only way for the spread of A to be blocked. The analysis underlying the characterization theorem yields a number of other consequences; a basic one is, roughly speaking, that the outcome of best-response updates is independent of the order in which the updates are sequenced (provided only that each node attempts to update itself infinitely often).

Further Extensions. Another way to model compatibility and interoperability in diffusion models is through the off-diagonal terms representing the payoff for interactions between a node adopting A and a node adopting B. Rather than setting these to 0, we can consider setting them to a value x ≤ min(q, 1 − q). We find that for the case of two technologies, the model does not become more general, in that any such instance is equivalent, by a re-scaling of q and r, to one where x = 0. Moreover, using our characterization of the region Ω(G) in terms of communities and interfaces, we show a monotonicity result: if A can become epidemic on a graph G with parameters (q, r, x), and x is then increased, then A can still become epidemic with the new parameters. We also consider the effect of these off-diagonal terms in an extension to k > 2 competing technologies; for technologies X and Y, let qX denote the payoff from an X-X interaction on an edge and qXY denote the payoff from an X-Y interaction on an edge. We consider a setting in which two technologies B and C, which initially coexist with qBC = 0, face the introduction of a third, better technology A at a finite set of nodes. We show an example in which B and C both survive in equilibrium if they set qBC in a
particular range of values, but not if they set qBC too low or too high to lie in this range. Thus, even in a basic diffusion model with three technologies, one finds cases in which two firms have an incentive to adopt a limited strategic alliance, partially increasing their interoperability to defend against a new entrant in the market.

2. MODEL

We now develop some further notation and definitions that will be useful for expressing the model. Recall that we have an infinite Δ-regular graph G, and strategies A, B, and AB that are used in a coordination game on each edge. For edge (v, w), the payoff to each endpoint is 0 if one of the two nodes chooses strategy A and the other chooses strategy B; 1 − q if one chooses strategy A and the other chooses either A or AB; q if one chooses strategy B and the other chooses either B or AB; and max(q, 1 − q) if both choose strategy AB. The overall payoff of an agent v is the sum of the above values over all neighbors w of v, minus a cost which is 0 if v chooses A or B and c = rΔ if she chooses AB. We refer to the overall game, played by all nodes in G, as a contagion game, and denote it using the tuple (G, q, r).

This game can have many Nash equilibria. In particular, the two states where everybody uses technology A or everybody uses technology B are both equilibria of this game. As discussed in the previous section, we are interested in the dynamics of reaching an equilibrium in this game; in particular, we would like to know whether it is possible to move from an all-B equilibrium to an all-A equilibrium by changing the strategy of a finite number of agents, and following a sequence of best-response moves. We provide a formal description of this question via the following two definitions.

DEFINITION 2.1. Consider a contagion game (G, q, r). A state in this game is a strategy profile s : V(G) → {A, B, AB}. For two states s and s' and a vertex v ∈ V(G), if starting from state s and letting v play her best-response move (breaking ties in favor of A and then AB) we get to the state s', we write s →_v s'. Similarly, for two states s and s' and a finite sequence S = v_1, v_2, ..., v_k of vertices of G (where the v_i's are not necessarily distinct), we say s →_S s' if there is a sequence of states s_1, ..., s_{k−1} such that s →_{v_1} s_1 →_{v_2} s_2 →_{v_3} · · · s_{k−1} →_{v_k} s'. For an infinite sequence S = v_1, v_2, ... of vertices of G, we denote the subsequence v_1, v_2, ..., v_k by S_k. We say s →_S s' for two states s and s' if for every vertex v ∈ V(G) there exists a k_0(v) such that for every k > k_0(v), s →_{S_k} s_k for a state s_k with s_k(v) = s'(v).

DEFINITION 2.2. For T ⊆ V(G), we denote by s_T the strategy profile that assigns A to every agent in T and B to every agent in V(G) \ T. We say that technology A can become an epidemic in the game (G, q, r) if there is a finite set T of nodes in G (called the seed set) and a sequence S of vertices in V(G) \ T (where each vertex can appear more than once) such that s_T →_S s_V(G), i.e., endowing agents in T with technology A and letting other agents play their best response according to schedule S would lead every agent to eventually adopt strategy A.(2)

The above definition requires that the all-A equilibrium be reachable from the initial state by at least one schedule S of best-response moves. In fact, we will show in Section 4 that if A can become an epidemic in a game, then for every schedule of best-response moves of the nodes in V(G) \ T in which each node is scheduled an infinite number of times, eventually all nodes adopt strategy A.(3)

(2) Note that in our definition we assume that agents in T are endowed with the strategy A at the beginning. Alternatively, one can define the notion of epidemic by allowing agents in T to be endowed with any combination of AB and A, or with just AB. However, the difference between these definitions is rather minor and our results carry over with little or no change to these alternative models.

(3) Note that we assume agents in the seed set T cannot change their strategy.

3. EXAMPLES

We begin by considering some basic examples that yield epidemic regions with the kinds of non-convexity properties discussed in Section 1. We first discuss a natural Δ-regular generalization of the infinite line graph, and for this one we work out the complete analysis that describes the region Ω(G), the set of all pairs (q, r) for which the technology A can become an epidemic. We then describe, without the accompanying detailed analysis, the epidemic regions for the infinite Δ-regular tree and for the two-dimensional grid.

[Figure 2: The thick line graph.]

The infinite line and the thick line graph. For a given even integer Δ, we define the thick line graph L_Δ as follows: the vertex set of this graph is Z × {1, 2, ..., Δ/2}, where Z is the set of all integers. There is an edge between vertices (x, i) and (x', i') if and only if |x − x'| = 1. For each x ∈ Z, we call the set of vertices {(x, i) : i ∈ {1, ..., Δ/2}} the x'th group of vertices. Figure 2 shows a picture of L_6.

Now, assume that starting from a position where every node uses the strategy B, we endow all agents in a group (say, group 0) with the strategy A. Consider the decision faced by the agents in group 1, who have their right-hand neighbors using B and their left-hand neighbors using A. For these agents, the payoffs of strategies A, B, and AB are (1 − q)Δ/2, qΔ/2, and Δ/2 − rΔ, respectively. Therefore, if q ≤ 1/2 and q ≤ 2r, the best response of such an agent is A. Hence, if the above inequality holds and we let agents in groups 1, −1, 2, −2, ...
play their best response in this order, then A will become an epidemic. Also, if we have q > 2r and q ≤ 1 − 2r, the best response of an agent with her neighbors on one side playing A and neighbors on the other side playing B is the strategy AB. Therefore, if we let agents in groups 1 and −1 change to their best response, they would switch their strategy to AB. After this, agents in group 2 will see AB on their left and B on their right. For these agents (and similarly for the agents in group −2), the payoffs of strategies A, B, and AB are (1 − q)Δ/2, qΔ, and (q + max(q, 1 − q))Δ/2 − rΔ, respectively. Therefore, if max(1, 2q) − 2r ≥ 1 − q and max(1, 2q) − 2r ≥ 2q, or equivalently, if 2r ≤ q and q + r ≤ 1/2, the best response of such an agent is AB. Hence, if the above inequality holds and we let agents in groups 2, −2, 3, −3, ... play their best response in this order, then every agent (except for agents in group 0) switches to AB. Next, if we let agents in groups 1, −1, 2, −2, ... change their strategy again, for q ≤ 1/2, every agent will switch to strategy A, and hence A becomes an epidemic.(4)

(4) Strictly speaking, since we defined a schedule of moves as a single infinite sequence of vertices in V(G) \ T, the order 1, −1, 2, −2, ..., 1, −1, 2, −2, ... is not a valid schedule. However, since vertices of G have finite degree, it is not hard to see that any ordering of a multiset containing any (possibly infinite) number of copies of each vertex of V(G) \ T can be turned into an equivalent schedule of moves. For example, the sequence 1, −1, 2, −2, 1, −1, 3, −3, 2, −2, ... gives the same outcome as 1, −1, 2, −2, ..., 1, −1, 2, −2, ... in the thick line example.

The above argument shows that for any combination of (q, r) parameters in the marked region in Figure 1, technology A can become an epidemic. It is not hard to see that for points outside this region, A cannot become epidemic.

Further examples: trees and grids. Figures 3 and 4 show the epidemic regions for the infinite grid and the infinite Δ-regular tree. Note they also exhibit non-convexities.

[Figure 3: Epidemic region for the infinite grid.]

[Figure 4: Epidemic region for the infinite Δ-regular tree.]

4. CHARACTERIZATION

In this section, we characterize equilibrium properties of contagion games. To this end, we must first argue that contagion games in fact have well-defined and stable equilibria. We then discuss some respects in which the equilibrium reached from an initial state is essentially independent of the order in which best-response updates are performed.

We begin with the following lemma, which proves that agents eventually converge to a fixed strategy, and so the final state of a game is well-defined by its initial state and an infinite sequence of moves. Specifically, we prove that once an agent decides to adopt technology A, she never discards it, and once she decides to discard technology B, she never re-adopts it. Thus, after an infinite number of best-response moves, each agent converges to a single strategy.

LEMMA 4.1. Consider a contagion game (G, q, r) and a (possibly infinite) subset T ⊆ V(G) of agents. Let s_T be the strategy profile assigning A to every agent in T and B to every agent in V(G) \ T. Let S = v_1, v_2, ... be a (possibly infinite) sequence of agents in V(G) \ T, and consider the sequence of states s_1, s_2, ... obtained by allowing agents to play their best response in the order defined by S (i.e., s →_{v_1} s_1 →_{v_2} s_2 →_{v_3} · · ·). Then for every i, one of the following holds:

• s_i(v_{i+1}) = B and s_{i+1}(v_{i+1}) = A,
• s_i(v_{i+1}) = B and s_{i+1}(v_{i+1}) = AB,
• s_i(v_{i+1}) = AB and s_{i+1}(v_{i+1}) = A,
• s_i(v_{i+1}) = s_{i+1}(v_{i+1}).

PROOF. Let X >_v^k Y indicate that agent v (weakly) prefers strategy X to strategy Y in state s_k. For any k, let z_A^k, z_B^k, and z_AB^k be the number of neighbors of v with strategies A, B, and AB in state s_k, respectively. Thus, for agent v in state s_k,

1. A >_v^k B if (1 − q)(z_A^k + z_AB^k) is greater than q(z_B^k + z_AB^k),
2. A >_v^k AB if (1 − q)(z_A^k + z_AB^k) is greater than (1 − q)z_A^k + qz_B^k + max(q, 1 − q)z_AB^k − Δr,
3. and AB >_v^k B if (1 − q)z_A^k + qz_B^k + max(q, 1 − q)z_AB^k − Δr is greater than q(z_B^k + z_AB^k).

Suppose the lemma is false, and consider the smallest i such that the lemma is violated. Let v = v_{i+1} be the agent who played her best response at time i.
Thus, either

1. s_i(v) = A and s_{i+1}(v) = B, or
2. s_i(v) = A and s_{i+1}(v) = AB, or
3. s_i(v) = AB and s_{i+1}(v) = B.

We show that in the third case, agent v could not have been playing a best response; the other cases are similar. In the third case, we have s_i(v) = AB and s_{i+1}(v) = B. As s_i(v) = AB, there must be a time j < i where s_j →_v s_{j+1} and s_{j+1}(v) = AB. Since this was a best-response move for v, inequality 3 implies that

(1 − q)z_A^j + max(0, 1 − 2q)z_AB^j ≥ Δr.

Furthermore, as i is the earliest time at which the lemma is violated, z_A^i ≥ z_A^j and z_AB^j − z_AB^i ≤ z_A^i − z_A^j. Thus, the change Q in payoff between AB and B (plus Δr) satisfies

Q ≡ (1 − q)z_A^i + max(0, 1 − 2q)z_AB^i
  ≥ (1 − q)(z_A^i − z_A^j + z_A^j) + max(0, 1 − 2q)(z_AB^j − z_A^i + z_A^j)
  = (1 − q)z_A^j + max(0, 1 − 2q)z_AB^j + min(q, 1 − q)(z_A^i − z_A^j)
  ≥ (1 − q)z_A^j + max(0, 1 − 2q)z_AB^j ≥ Δr,

and so, by inequality 3, B cannot be a better response than AB for v in state s_i.

COROLLARY 4.2. For every infinite sequence S of vertices in V(G) \ T, there is a unique state s such that s_0 →_S s, where s_0 denotes the initial state in which every vertex in T plays A and every vertex in V(G) \ T plays B. Such a state s is called the outcome of the game (G, q, r) starting from T and using the schedule S.
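The one-way progression guaranteed by Lemma 4.1 (B to AB to A, never backwards) can be spot-checked in simulation. The sketch below (illustrative code with parameter choices of our own, not from the paper) logs every strategy change during best-response sweeps on a finite path and confirms that each change moves strictly forward.

```python
RANK = {"B": 0, "AB": 1, "A": 2}  # the one-way progression of Lemma 4.1

def best_response(nbrs, q, r, delta=2):
    # Edge payoffs of the contagion game; AB pays the fixed cost r*delta.
    pay = {"A": {"A": 1 - q, "AB": 1 - q, "B": 0.0},
           "B": {"B": q, "AB": q, "A": 0.0},
           "AB": {"A": 1 - q, "B": q, "AB": max(q, 1 - q)}}
    def payoff(s):
        return sum(pay[s][t] for t in nbrs) - (r * delta if s == "AB" else 0.0)
    return max(["A", "AB", "B"], key=payoff)  # ties favor A, then AB

def transitions(q, r, n=15, sweeps=100):
    """Run sweeps of best responses; log every (old, new) strategy change."""
    state, seed, log = ["B"] * n, n // 2, []
    state[seed] = "A"  # seed endowed with A, never updates
    for _ in range(sweeps):
        for v in range(n):
            if v == seed:
                continue
            nbrs = [state[u] for u in (v - 1, v + 1) if 0 <= u < n]
        
            s = best_response(nbrs, q, r)
            if s != state[v]:
                log.append((state[v], s))
                state[v] = s
    return log

# Every observed change moves strictly forward along B -> AB -> A.
log = transitions(0.45, 0.04)
assert log and all(RANK[new] > RANK[old] for old, new in log)
```

The assertion exercises exactly the case analysis of the lemma: no agent ever reverts from A, and no agent re-adopts B after discarding it.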
Equivalence of best-response schedules. Lemma 4.1 shows that the outcome of a game is well-defined and unique. The following theorems show that the outcome is also invariant to the dynamics, or sequence of best-response moves, under certain mild conditions. The first theorem states that if the all-A equilibrium is the outcome of a game for some (unconstrained) schedule, then it is the outcome for any schedule in which each vertex is allowed to move infinitely many times. The second theorem states that the outcome of a game is the same for any schedule of moves in which every vertex moves infinitely many times.

THEOREM 4.3. Consider a contagion game (G, q, r), a subset T ⊆ V(G), and a schedule S of vertices in V(G) \ T such that the outcome of the game is the all-A equilibrium. Then for any schedule S' of vertices in V(G) \ T in which every vertex in this set occurs infinitely many times, the outcome of the game using the schedule S' is also the all-A equilibrium.

PROOF. Note that S is a subsequence of S'. Let π : S → S' be the injection mapping S to its subsequence in S'. We show that for any v_i ∈ S, if v_i switches to AB, then π(v_i) switches to AB or A, and if v_i switches to A, then π(v_i) switches to A (here "v switches to X" means that after the best-response move, the strategy of v is X). Suppose not, and let i be the smallest integer such that the statement doesn't hold. Let z_A, z_B, and z_AB be the number of neighbors of v_i with strategies A, B, and AB in the current state defined by S.
Define z'_A, z'_B, and z'_AB similarly for S'. Then, by Lemma 4.1 and the choice of i, z'_A ≥ z_A, z'_B ≤ z_B, z'_AB − z_AB ≤ z_B − z'_B, and z_AB − z'_AB ≤ z'_A − z_A. Now suppose v_i switches to AB. Then the same sequence of inequalities as in Lemma 4.1 shows that AB is a better response than B for π(v_i) (although A might be the best response), and so π(v_i) switches to either AB or A. The other case (v_i switches to A) is similar.

THEOREM 4.4. Consider a contagion game (G, q, r) and a subset T ⊆ V(G). Then for every two schedules S and S' of vertices in V(G) \ T such that every vertex in this set occurs infinitely many times in each of these schedules, the outcomes of the game using these schedules are the same.

PROOF. The proof of this theorem is similar to that of Theorem 4.3 and is deferred to the full version of the paper.

Blocking structures. Finally, we prove the characterization mentioned in the introduction: A cannot become epidemic if and only if (G, q, r) possesses a certain kind of blocking structure. This result generalizes Morris's theorem on the contagion threshold for his model; in his case, without AB as a possible strategy, a simpler kind of community structure was the obstacle to A becoming epidemic. We begin by defining the blocking structures.

DEFINITION 4.5. Consider a contagion game (G, q, r). A pair (S_AB, S_B) of disjoint subsets of V(G) is called a blocking structure for this game if for every vertex v ∈ S_AB,

deg_SB(v) > (r/q)Δ,

and for every vertex v ∈ S_B,

(1 − q) deg_SB(v) + min(q, 1 − q) deg_SAB(v) > (1 − q − r)Δ, and
deg_SB(v) + q deg_SAB(v) > (1 − q)Δ,

where deg_S(v) denotes the number of neighbors of v in the set S.
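The inequalities of Definition 4.5 are mechanical to verify for a candidate pair of sets. Below is a sketch of such a checker (illustrative code; the 10-node cycle, a finite 2-regular stand-in for the infinite line, and the parameter values are our own choices): for q = 0.45, the interface-plus-community pair is a valid blocking structure at the intermediate cost r = 0.20 but not at r = 0.04 or r = 0.35, matching the non-monotone epidemic behavior of the line.

```python
def is_blocking_structure(adj, S_AB, S_B, q, r, delta):
    """Check the inequalities of Definition 4.5 for a candidate (S_AB, S_B)."""
    def deg_in(v, S):
        return sum(1 for u in adj[v] if u in S)
    # Interface condition: each AB node sees enough B neighbors.
    for v in S_AB:
        if not deg_in(v, S_B) > (r / q) * delta:
            return False
    # Community conditions: each B node is shielded by B and AB neighbors.
    for v in S_B:
        d_B, d_AB = deg_in(v, S_B), deg_in(v, S_AB)
        if not (1 - q) * d_B + min(q, 1 - q) * d_AB > (1 - q - r) * delta:
            return False
        if not d_B + q * d_AB > (1 - q) * delta:
            return False
    return True

# 10-cycle with seed T = {0}; the interface {1, 9} plays AB and shields
# the community {2, ..., 8}, which keeps playing B.
n = 10
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
S_AB, S_B = {1, 9}, set(range(2, 9))

assert is_blocking_structure(adj, S_AB, S_B, 0.45, 0.20, 2)       # blocked
assert not is_blocking_structure(adj, S_AB, S_B, 0.45, 0.04, 2)   # epidemic
assert not is_blocking_structure(adj, S_AB, S_B, 0.45, 0.35, 2)   # epidemic
```

At r = 0.04 the first community inequality fails at the nodes adjacent to the interface, and at r = 0.35 the interface inequality itself fails, which is exactly the "too permissive / too inflexible" dichotomy described in Section 1.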
THEOREM 4.6. For every contagion game (G, q, r), technology A cannot become epidemic in this game if and only if every co-finite set of vertices of G contains a blocking structure.

PROOF. We first show that if every co-finite set of vertices of G contains a blocking structure, then technology A cannot become epidemic. Let T be any finite set of vertices endowed with technology A, and let (S_AB, S_B) be the blocking structure contained in V(G) \ T. We claim that in the outcome of the game for any sequence S of moves, the vertices in S_AB have strategy B or AB and the vertices in S_B have strategy B. Suppose not, and let v be the first vertex in sequence S to violate this (i.e., v ∈ S_AB switches to A, or v ∈ S_B switches to A or AB). Suppose v ∈ S_AB (the other cases are similar). Let z_A, z_B, and z_AB denote the number of neighbors of v with strategies A, B, and AB, respectively. As v is the first vertex violating the claim, z_A ≤ Δ − deg_SB(v) − deg_SAB(v) and z_B ≥ deg_SB(v). We show AB is a better strategy than A for v.
To show this, we must prove that (1 − q)z_A + q z_B + max(q, 1 − q)z_AB − Δr > (1 − q)(z_A + z_AB), or, equivalently, that the quantity Q ≡ q z_B + max(2q − 1, 0)z_AB − Δr is positive. Using z_AB = Δ − z_A − z_B:

  Q = (max(2q − 1, 0) − r)Δ − max(2q − 1, 0)z_A + (q − max(2q − 1, 0))z_B
    ≥ (max(2q − 1, 0) − r)Δ + min(q, 1 − q) deg_{S_B}(v) − max(2q − 1, 0)(Δ − deg_{S_B}(v) − deg_{S_AB}(v))
    ≥ [min(q, 1 − q) + max(2q − 1, 0)] deg_{S_B}(v) − rΔ
    = q deg_{S_B}(v) − rΔ
    > 0,

where the last inequality holds by the definition of the blocking structure.

We next show the converse: if A cannot become epidemic, then every co-finite set of vertices contains a blocking structure. To construct a blocking structure for the complement of a finite set T of vertices, endow T with strategy A and consider the outcome of the game for any sequence S that schedules each vertex an infinite number of times. Let S_AB be the set of vertices with strategy AB and S_B the set of vertices with strategy B in this outcome. Note that for any v ∈ S_AB, AB is a best response and so is strictly better than strategy A, i.e.
q deg_{S_B}(v) + max(q, 1 − q) deg_{S_AB}(v) − Δr > (1 − q) deg_{S_AB}(v), from which it follows that deg_{S_B}(v) > (rΔ)/q. The inequalities for the vertices v ∈ S_B can be derived in a similar manner.

A corollary of the above theorem is that for every infinite graph G, the epidemic region in the q-r plane for this graph is a finite union of bounded and unbounded polygons. This is because the inequalities defining blocking structures are linear inequalities in q and r, and the coefficients of these inequalities can take only finitely many values.

5. NON-EPIDEMIC REGIONS IN GENERAL GRAPHS

The characterization theorem in the previous section provides one way of thinking about the region Ω(G), the set of all (q, r) pairs for which A can become epidemic in the game (G, q, r). We now consider the region Ω = ∪_G Ω(G), where the union is taken over all infinite Δ-regular graphs; this is the set of all (q, r) values for which A can become epidemic in some Δ-regular network. The analysis here uses Lemma 4.1 and an argument based on an appropriately defined potential function.

The first theorem shows that no point (q, r) with q > 1/2 belongs to Ω. Since q > 1/2 means that the incumbent technology B is superior, this shows that in any network, a superior incumbent survives at any level of compatibility.

THEOREM 5.1. For every Δ-regular graph G and parameters q and r, the technology A cannot become epidemic in the game (G, q, r) if q > 1/2.

PROOF. Assume, for contradiction, that there is a Δ-regular graph G and values q > 1/2 and r, a set T of vertices of G that are initially endowed with the strategy A, and a schedule S of moves for vertices in V(G) \ T such that this sequence leads to an all-A equilibrium. We derive a contradiction by defining a non-negative potential function that starts with a finite value and showing that after each best response by some vertex the value of this
function decreases by some positive amount bounded away from zero.

At any state of the game, let X_{A,B} denote the number of edges in G that have one endpoint using strategy A and the other endpoint using strategy B. Furthermore, let n_AB denote the number of agents using the strategy AB. The potential function is

  q X_{A,B} + c n_AB

(recall that c = Δr is the cost of adopting two technologies). Since G has bounded degree and the initial set T is finite, the initial value of this potential function is finite. We now show that every best-response move decreases the value of this function by some positive amount bounded away from zero. By Lemma 4.1, we only need to analyze the effect on the potential function for moves of the sort described by the lemma. Therefore we have three cases: a node u switches from strategy B to AB, a node u switches from strategy AB to A, or a node u switches from strategy B to A. We consider the first case here; the proofs for the other cases are similar.

Suppose a node u with strategy B switches to strategy AB. Let z_AB, z_A, and z_B denote the number of neighbors of u in partition pieces AB, A, and B, respectively. Thus, recalling that q > 1/2, we see that u's payoff with strategy B is q(z_AB + z_B), whereas his payoff with strategy AB is q(z_AB + z_B) + (1 − q)z_A − c. In order for this strategic change to improve u's payoff, it must be the case that

  (1 − q)z_A ≥ c.   (1)

Now, notice that such a strategic change on the part of u induces a change in the potential function of −q z_A + c, as z_A edges are removed from the X_{A,B} edges between A and B and the size of partition piece AB increases by one. This change is negative so long as z_A > c/q, which holds by inequality (1), since q > 1 − q for q > 1/2. Furthermore, as z_A can take only finitely many values (z_A ∈ {0, 1, ...
, Δ}), this change is bounded away from zero.

The next theorem shows that for any Δ, there is a point (q, r) ∉ Ω with q < 1/2. This means that there is a setting of the parameters q and r for which the new technology A is superior, but the incumbent technology is guaranteed to survive regardless of the underlying network.

THEOREM 5.2. There exist q < 1/2 and r such that for every contagion game (G, q, r), A cannot become epidemic.

PROOF. The proof is based on the potential function from Theorem 5.1: q X_{A,B} + c n_AB. We first show that if q is close enough to 1/2 and r is chosen appropriately, this potential function is non-increasing. Specifically, let q = 1/2 − 1/(64Δ) and c = rΔ = α, where α is any irrational number strictly between 3/64 and q. Again, there are three cases corresponding to the three possible strategy changes for a node u. Let z_AB, z_A, and z_B denote the number of neighbors of node u in partition pieces AB, A, and B, respectively.

Case 1: B → AB. Recalling that q < 1/2, we see that u's payoff with strategy B is q(z_AB + z_B), whereas his payoff with strategy AB is (1 − q)(z_AB + z_A) + q z_B − c. In order for this strategic change to improve u's payoff, it must be the case that

  (1 − 2q)z_AB + (1 − q)z_A ≥ c.
  (2)

Now, notice that such a strategic change on the part of u induces a change in the potential function of −q z_A + c, as z_A edges are removed from the X_{A,B} edges between A and B and the size of partition piece AB increases by one. This change is non-positive so long as z_A ≥ c/q. By inequality (2) and the fact that z_A is an integer,

  z_A ≥ ⌈c/(1 − q) − (1 − 2q)z_AB/(1 − q)⌉.

Substituting our choice of parameters (and noting that q ∈ [1/4, 1/2] and z_AB ≤ Δ), we see that the term inside the ceiling is less than 1 and at least (3/64 − 1/32)/(3/4) > 0. Thus the ceiling is one, which is larger than c/q.

Case 2: AB → A. Recalling that q < 1/2, we see that u's payoff with strategy AB is (1 − q)(z_AB + z_A) + q z_B − c, whereas her payoff with strategy A is (1 − q)(z_AB + z_A). In order for this strategic change to improve u's payoff, it must be the case that

  q z_B ≤ c.   (3)

Such a strategic change on the part of u induces a change in the potential function of q z_B − c, as z_B edges are added to the X_{A,B} edges between A and B and the size of partition piece AB decreases by one. This change is non-positive so long as z_B ≤ c/q, which holds by inequality (3).

Case 3: B → A.
Note that u's payoff with strategy B is q(z_AB + z_B), whereas his payoff with strategy A is (1 − q)(z_AB + z_A). In order for this strategic change to improve u's payoff, it must be the case that

  (1 − 2q)z_AB ≥ q z_B − (1 − q)z_A.   (4)

Such a strategic change on the part of u induces a change in the potential function of q(z_B − z_A), as z_A edges are removed from and z_B edges are added to the X_{A,B} edges between A and B. This change is non-positive so long as z_B ≤ z_A. By inequality (4) and the fact that z_B is an integer,

  z_B ≤ ⌊(1 − q)z_A/q + (1 − 2q)z_AB/q⌋.

Substituting our choice of parameters, it is easy to see that the term inside the floor is at most z_A + 1/4, and so the floor is at most z_A, as z_A is an integer.

We have shown that the potential function is non-increasing for our choice of q and c. This implies that the potential function is eventually constant. As c is irrational and the remaining terms are always rational, both n_AB and X_{A,B} must remain constant for the potential function as a whole to remain constant. Suppose A is epidemic in this region. As n_AB is constant and A is epidemic, it must be that n_AB = 0. Thus, the only moves involve a node u switching from strategy B to strategy A. In order for X_{A,B} to remain constant under such moves, it must be that z_A (the number of neighbors of u in A) equals z_B (the number of neighbors of u in B), and, as n_AB = 0, we have z_A = z_B = Δ/2. Thus, the payoff of u for strategy A is (1 − q)z_A = Δ/2 − qΔ/2 ≤ Δ/2 − q, whereas her payoff for strategy AB is (1 − q)z_A + q z_B − c = Δ/2 − c > Δ/2 − q, since c < q. This contradicts the assumption that u is playing her best response by switching to A.
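The potential argument can be watched in action on a small example. The sketch below is our own construction, not from the paper: a finite 12-cycle stands in for an infinite 2-regular graph, with q = 0.6 > 1/2 and c = Δr = 0.3, tracking q·X_{A,B} + c·n_AB after every best-response move.

```python
# Best-response dynamics on a cycle, tracking the potential q*X_AB + c*n_AB
# from the proof of Theorem 5.1. Finite cycle and parameter values are our
# own illustrative choices.

def payoff(s, neighbor_strats, q, c):
    total = 0.0
    for t in neighbor_strats:
        if s == "A":
            total += (1 - q) if t in ("A", "AB") else 0.0
        elif s == "B":
            total += q if t in ("B", "AB") else 0.0
        else:  # s == "AB": best of the two on each edge, minus cost c overall
            total += {"A": 1 - q, "B": q, "AB": max(q, 1 - q)}[t]
    return total - (c if s == "AB" else 0.0)

def potential(strat, adj, q, c):
    x_ab = sum(1 for v in adj for u in adj[v]
               if v < u and {strat[v], strat[u]} == {"A", "B"})
    n_ab = sum(1 for v in strat if strat[v] == "AB")
    return q * x_ab + c * n_ab

n, q, r = 12, 0.6, 0.15            # q > 1/2: the incumbent B is superior
c = 2 * r                          # c = Delta*r with Delta = 2
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
strat = {i: "B" for i in range(n)}
strat[0] = "A"                     # finite seed set endowed with A

values = [potential(strat, adj, q, c)]
for _ in range(50):
    for v in range(1, n):          # the seed node never updates
        nb = [strat[u] for u in adj[v]]
        best = max(("A", "B", "AB"), key=lambda s: payoff(s, nb, q, c))
        if payoff(best, nb, q, c) > payoff(strat[v], nb, q, c):
            strat[v] = best
            values.append(potential(strat, adj, q, c))
```

In this run the two neighbors of the seed adopt AB, the potential drops with each move, and the dynamics then stop with B surviving everywhere else, consistent with Theorem 5.1.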
6. LIMITED COMPATIBILITY

We now consider some further ways of modeling compatibility and interoperability. We first consider two technologies, as in the previous sections, and introduce off-diagonal payoffs to capture a positive benefit in direct A-B interactions. We find that this is in fact no more general than the model with zero payoffs for A-B interactions. We then consider extensions to three technologies, identifying situations in which two coexisting incumbent technologies may or may not want to increase their mutual compatibility in the face of a new, third technology.

Two technologies. A natural relaxation of the two-technology model is to introduce (small) positive payoffs for A-B interaction; that is, cross-technology communication yields some lesser value to both agents. We can model this using a variable x_AB representing the payoff gathered by an agent with technology A when her neighbor has technology B, and similarly a variable x_BA representing the payoff gathered by an agent with B when her neighbor has A. Here we consider the special case in which these off-diagonal entries are symmetric, i.e., x_AB = x_BA = x. We also assume that x < q ≤ 1 − q. We first show that the game with off-diagonal entries is equivalent to a game without these entries, under a simple re-scaling of q and r.
Note that if we re-scale all payoffs by either an additive or a multiplicative constant, the behavior of the game is unaffected. Given a game with off-diagonal entries parameterized by q, r, and x, consider subtracting x from all payoffs and scaling up by a factor of 1/(1 − 2x). As can be seen by examining Table 1, the resulting payoffs are exactly those of a game without off-diagonal entries, parameterized by q′ = (q − x)/(1 − 2x) and r′ = r/(1 − 2x). Thus the addition of symmetric off-diagonal entries does not expand the class of games being considered. Table 1 represents the payoffs in the coordination game in terms of these parameters.

Nevertheless, we can still ask how the addition of an off-diagonal entry might affect the outcome of any particular game. As the following example shows, increasing compatibility between two technologies can allow a technology that was not initially epidemic to become so.

EXAMPLE 6.1. Consider the contagion game played on a thick line graph (see Section 3) with r = 5/32 and q = 3/8. In this case, A is not epidemic, as can be seen by examining Figure 1, since 2r < q and q + r > 1/2. However, if we insert symmetric off-diagonal payoffs x = 1/4, we obtain a new game, equivalent to a game parameterized by r′ = 5/16 and q′ = 1/4. Since q′ < 1/2 and q′ < 2r′, A is epidemic in this game, and thus also in the game with limited compatibility.

We now show that, in general, if A is the superior technology (i.e., q < 1/2), adding a compatibility term x can only help A spread.

THEOREM 6.2. Let G be a game without compatibility, parameterized by r and q on a particular network. Let G′ be that same game, but with an added symmetric compatibility term x. If A is epidemic for G, then A is epidemic for G′.

PROOF. We will show that any blocking structure in G′ is also a blocking structure in G.
By our characterization theorem, Theorem 4.6, this implies the desired result. We have that G′ is equivalent to a game without compatibility parameterized by q′ = (q − x)/(1 − 2x) and r′ = r/(1 − 2x). Consider a blocking structure (S_AB, S_B) for G′. We know that for any v ∈ S_AB, q′ deg_{S_B}(v) > r′Δ. Thus

  q deg_{S_B}(v) > (q − x) deg_{S_B}(v) = q′(1 − 2x) deg_{S_B}(v) > r′(1 − 2x)Δ = rΔ,

as required for a blocking structure in G. Similarly, the two blocking-structure constraints for v ∈ S_B are only strengthened when we move from G′ to G.

More than two technologies. Given the complex structure inherent in contagion games with two technologies, the understanding of contagion games with three or more technologies is largely open. Here we indicate some of the technical issues that come up with multiple technologies, through a series of initial results. The basic set-up we study is one in which two incumbent technologies B and C are initially coexisting, and a third technology A, superior to both, is introduced initially at a finite set of nodes.

We first present a theorem stating that for any even Δ, there is a contagion game on a Δ-regular graph in which the two incumbent technologies B and C may find it beneficial to increase their compatibility so as to prevent being wiped out by the new superior technology A. In particular, we consider a situation in which, initially, two technologies B and C with zero compatibility are at a stable state. By a stable state, we mean that no finite perturbation of the current states can lead to an epidemic for either B or C. We also have a technology A that is superior to both B and C, and can become epidemic by forcing a single node to choose A. However, by increasing their compatibility, B and C can maintain their stability and resist an epidemic from A.
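Returning briefly to the two-technology re-scaling above, the map (q, r, x) ↦ (q′, r′) and the numbers of Example 6.1 are easy to verify exactly. This is our own sanity check, using Python's exact rationals:

```python
# Numerical check (ours) of the re-scaling q' = (q - x)/(1 - 2x),
# r' = r/(1 - 2x), applied to the parameters of Example 6.1.
from fractions import Fraction as F

def rescale(q, r, x):
    """Map a game with symmetric off-diagonal payoff x to the equivalent
    game with zero off-diagonal payoffs: subtract x, scale by 1/(1 - 2x)."""
    return (q - x) / (1 - 2 * x), r / (1 - 2 * x)

q, r, x = F(3, 8), F(5, 32), F(1, 4)   # Example 6.1
q2, r2 = rescale(q, r, x)              # expect q' = 1/4, r' = 5/16
```

One can also confirm entry by entry that the Table 1 payoffs transform as claimed, e.g. ((q − r) − x)/(1 − 2x) = q′ − r′ and ((1 − q) − x)/(1 − 2x) = 1 − q′.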
Let q_A denote the payoffs to two adjacent nodes that both choose technology A, and define q_B and q_C analogously. We will assume q_A > q_B > q_C. We also assume that r, the cost of selecting additional technologies, is sufficiently large so as to ensure that nodes never adopt more than one technology. Finally, we consider a compatibility parameter q_BC that represents the payoffs to two adjacent nodes when one selects B and the other selects C. Thus our contagion game is now described by five parameters (G, q_A, q_B, q_C, q_BC).

THEOREM 6.3. For any even Δ ≥ 12, there is a Δ-regular graph G, an initial state s, and values q_A, q_B, q_C, and q_BC, such that

• s is an equilibrium in both (G, q_A, q_B, q_C, 0) and (G, q_A, q_B, q_C, q_BC),
• neither B nor C can become epidemic in either (G, q_A, q_B, q_C, 0) or (G, q_A, q_B, q_C, q_BC) starting from state s,
• A can become epidemic in (G, q_A, q_B, q_C, 0) starting from state s, and
• A cannot become epidemic in (G, q_A, q_B, q_C, q_BC) starting from state s.
PROOF. (Sketch.) Given Δ, define G by starting with an infinite grid and connecting each node to its nearest Δ − 2 neighbors in the same row. The initial state s assigns strategy B to even rows and strategy C to odd rows. Let q_A = 4k^2 + 4k + 1/2, q_B = 2k + 2, q_C = 2k + 1, and q_BC = 2k + 3/4. The first, third, and fourth claims in the theorem can be verified by checking the corresponding inequalities. The second claim follows from the first and the observation that the alternating rows keep any plausible epidemic from growing vertically.

The above theorem shows that two technologies may both be able to survive the introduction of a new technology by increasing their level of compatibility with each other.

       | A              | B          | AB
  A    | (1−q, 1−q)     | (x, x)     | (1−q, 1−q−r)
  B    | (x, x)         | (q, q)     | (q, q−r)
  AB   | (1−q−r, 1−q)   | (q−r, q)   | (max(q,1−q)−r, max(q,1−q)−r)

Table 1: The payoffs in the coordination game. Entry (x, y) in row i, column j indicates that the row player gets a payoff of x and the column player gets a payoff of y when the row player plays strategy i and the column player plays strategy j.

As one might expect, there are cases when increased compatibility between two technologies helps one technology at the expense of the other. Surprisingly, however, there are also instances in which compatibility is in fact harmful to both parties; the next example considers a fixed initial configuration with technologies A, B, and C that is at equilibrium when q_BC = 0. However, if this compatibility term is increased sufficiently, equilibrium is lost, and A becomes epidemic.

EXAMPLE 6.4. Consider the union of an infinite two-dimensional grid graph with nodes u(x, y) and an infinite line graph with nodes v(y). Add an edge between u(1, y) and v(y) for all y.
For this network, we consider the initial configuration in which all v(y) nodes select A, and node u(x, y) selects B if x < 0 and selects C otherwise. We now define the parameters of this game as follows. Let q_A = 3.95, q_B = 1.25, q_C = 1, and q_BC = 0. It is easily verified that for these values, the initial configuration given above is an equilibrium. However, now suppose we increase the compatibility term, setting q_BC = 0.9. This is not an equilibrium, since each node of the form u(0, y) now has an incentive to switch from C (generating a payoff of 3.9) to B (thereby generating a payoff of 3.95). However, once these nodes have adopted B, the best response for each node of the form u(1, y) is A (A generates a payoff of 4, whereas B only generates a payoff of 3.95). From here, it is not hard to show that A spreads directly throughout the entire network.

7. REFERENCES

[1] L. Blume. The statistical mechanics of strategic interaction. Games and Economic Behavior, 5:387-424, 1993.
[2] R. L. Cooper (editor). Language spread: Studies in diffusion and social change. Indiana U. Press, 1982.
[3] N. Economides. Desirability of Compatibility in the Absence of Network Externalities. American Economic Review, 79(1989), pp. 1165-1181.
[4] N. Economides. Raising Rivals' Costs in Complementary Goods Markets: LECs Entering into Long Distance and Microsoft Bundling Internet Explorer. NYU Center for Law and Business Working Paper 98-004, 1998.
[5] G. Ellison. Learning, local interaction, and coordination. Econometrica, 61:1047-1071, 1993.
[6] G. Faulhaber. Network Effects and Merger Analysis: Instant Messaging and the AOL-Time Warner Case. Telecommunications Policy, 26:311-333, Jun/Jul 2002.
[7] M. Jackson and L. Yariv. Diffusion on social networks. Economie Publique, 16:69-82, 2005.
[8] M. Katz and C. Shapiro. Network Externalities, Competition and Compatibility. American Economic Review, 75(1985), 424-440.
[9] M. Kearns and L.
Ortiz. Algorithms for Interdependent Security Games. NIPS 2003.
[10] C. R. Knittel and V. Stango. Strategic Incompatibility in ATM Markets. NBER Working Paper No. 12604, October 2006.
[11] J. Mackie-Mason and J. Metzler. Links Between Markets and Aftermarkets: Kodak (1997). In Kwoka and White, eds., The Antitrust Revolution, Oxford, 2004.
[12] C. Matutes and P. Regibeau. Mix and Match: Product Compatibility without Network Externalities. RAND Journal of Economics, 19(1988), pp. 221-234.
[13] S. Morris. Contagion. Review of Economic Studies, 67:57-78, 2000.
[14] E. Rogers. Diffusion of innovations. Free Press, fourth edition, 1995.
[15] T. Schelling. Micromotives and Macrobehavior. Norton, 1978.
[16] D. Strang and S. Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265-290, 1998.
[17] T. Valente. Network Models of the Diffusion of Innovations. Hampton Press, 1995.
[18] M. Whinston. Tying, Foreclosure, and Exclusion. American Economic Review, 80(1990), 837-859.
[19] H.
Peyton Young. Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton University Press, 1998.

The Role of Compatibility in the Diffusion of Technologies Through Social Networks

ABSTRACT

In many settings, competing technologies (for example, operating systems, instant messenger systems, or document formats) can be seen adopting a limited amount of compatibility with one another; in other words, the difficulty in using multiple technologies is balanced somewhere between the two extremes of impossibility and effortless interoperability. There are a range of reasons why this phenomenon occurs, many of which, based on legal, social, or business considerations, seem to defy concise mathematical models. Despite this, we show that the advantages of limited compatibility can arise in a very simple model of diffusion in social networks, thus offering a basic explanation for this phenomenon in purely strategic terms.

Our approach builds on work on the diffusion of innovations in the economics literature, which seeks to model how a new technology A might spread through a social network of individuals who are currently users of technology B. We consider several ways of capturing the compatibility of A and B, focusing primarily on a model in which users can choose to adopt A, adopt B, or, at an extra cost, adopt both A and B. We characterize how the ability of A to spread depends on both its quality relative to B and this additional cost of adopting both, and find some surprising non-monotonicity properties in the dependence on these parameters: in some cases, for one technology to survive the introduction of another, the cost of adopting both technologies must be balanced within a narrow, intermediate range. We also extend the framework to the case of multiple technologies, where we find that a simple model captures the phenomenon of two firms adopting a limited "strategic alliance" to defend against a new, third technology.

This work has been supported in part by NSF grants CCF0325453, IIS-0329064, CNS-0403340, and BCS-0537606, a Google Research Grant, a Yahoo! Research Alliance Grant, the Institute for the Social Sciences at Cornell, and the John D. and Catherine T. MacArthur Foundation.

1. INTRODUCTION

Diffusion and Networked Coordination Games. A fundamental question in the social sciences is to understand the ways in which new ideas, behaviors, and practices diffuse through populations. Such issues arise, for example, in the adoption of new technologies, the emergence of new social norms or organizational conventions, or the spread of human languages [2, 14, 15, 16, 17]. An active line of research in economics and mathematical sociology is concerned with modeling these types of diffusion processes as a coordination game played on a social network [1, 5, 7, 13, 19].

We begin by discussing one of the most basic game-theoretic diffusion models, proposed in an influential paper of Morris [13], which will form the starting point for our work here. We describe it in terms of the following technology adoption scenario, though there are many other examples that would serve the same purpose. Suppose there are two instant messenger (IM) systems A and B, which are not interoperable: users must be on the same system in order to communicate. There is a social network G on the users, indicating who wants to talk to whom, and the endpoints of each edge (v, w) play a coordination game with possible strategies A or B: if v and w each choose IM system B, then they each receive a payoff of q (since they can talk to each other using system B); if they each choose IM system A, then they each receive a payoff of 1 − q; and if they choose opposite systems, then they each receive a payoff of 0 (reflecting the lack of interoperability). Note that A is the "better" technology if q < 1/2, in the sense that A-A payoffs would then exceed B-B payoffs, while A is
the worse technology if q > 1/2.

A number of qualitative insights can be derived from a diffusion model even at this level of simplicity. Specifically, consider a network G, and let all nodes initially play B. Now suppose a small number of nodes begin adopting strategy A instead. If we apply best-response updates to nodes in the network, then nodes in effect will be repeatedly applying the following simple rule: switch to A if enough of your network neighbors have already adopted A. (E.g., you begin using a particular IM system, or social-networking site, or electronic document format, if enough of your friends are users of it.) As this unfolds, there can be a cascading sequence of nodes switching to A, such that a network-wide equilibrium is reached in the limit: this equilibrium may involve uniformity, with all nodes adopting A; or it may involve coexistence, with the nodes partitioned into a set adopting A and a set adopting B, and edges yielding zero payoff connecting the two sets. Morris [13] provides a set of elegant graph-theoretic characterizations of when these qualitatively different types of equilibria arise, in terms of the underlying network topology and the quality of A relative to B (i.e.
the relative sizes of 1 − q and q).

Compatibility, Interoperability, and Bilinguality. In most of the settings that form the motivation for diffusion models, coexistence (however unbalanced) is the typical outcome: for example, human languages and social conventions coexist along geographic boundaries; it is a stable outcome for the financial industry to use Windows while the entertainment industry uses Mac OS. An important piece that is arguably missing from the basic game-theoretic models of diffusion, however, is a more detailed picture of what is happening at the coexistence boundary, where the basic form of the model posits nodes that adopt A linked to nodes that adopt B. In these motivating settings for the models, of course, one very often sees interface regions in which individuals essentially become "bilingual." In the case of human language diffusion, this bilinguality is meant literally: geographic regions where there is substantial interaction with speakers of two different languages tend to have inhabitants who speak both. But bilinguality is also an essential feature of technological interaction: in the end, many people have accounts on multiple IM systems, for example, and more generally many maintain the ability to work within multiple computer systems so as to collaborate with people embedded in each.

Taking this view, it is natural to ask how diffusion models behave when extended so that certain nodes can be bilingual in this very general sense, adopting both strategies at some cost to themselves. What might we learn from such an extension? To begin with, it has the potential to provide a valuable perspective on the question of compatibility and incompatibility that underpins competition among technology companies. There is a large literature on how compatibility among technologies affects competition between firms, and in particular how incompatibility may be a beneficial strategic decision for certain participants in a market [3, 4,
8, 9, 12]. Whinston [18] provides an interesting taxonomy of different kinds of strategic incompatibility; and specific industry case studies (including theoretical perspectives) have recently been carried out for commercial banks [10], copying and imaging technology [11], and instant messenger systems [6]. While these existing models of compatibility capture network effects, in the sense that the users in the market prefer to use technology that is more widespread, they do not capture the more fine-grained network phenomenon represented by diffusion: each user factors its local view into its decision, based on what its own social network neighbors are doing. A diffusion model that incorporated such extensions could provide insight into the structure of boundaries in the network between technologies; it could potentially offer a graph-theoretic basis for how incompatibility may benefit an existing technology, by strengthening these boundaries and preventing the incursion of a new, better technology.

The present work: Diffusion with bilingual behavior. In this paper, we develop a set of diffusion models that incorporate notions of compatibility and bilinguality, and we find that some unexpected phenomena emerge even from very simple versions of the models. We begin with perhaps the simplest way of extending Morris's model discussed above to incorporate bilingual behavior. Consider again the example of IM systems A and B, with the payoff structure as before, but now suppose that each node can adopt a third strategy, denoted AB, in which it decides to use both A and B. An adopter of AB gets to use, on an edge-by-edge basis, whichever of A or B yields higher payoffs in each interaction, and the payoff structure is defined according to this principle: if an adopter of AB interacts with an adopter of B, both receive q; with an adopter of A, both receive 1 − q; and with another adopter of AB, both receive max(q, 1 − q). Finally, an adopter of AB pays a
fixed-cost penalty of c (i.e., −c is added to its total payoff) to represent the cost of having to maintain both technologies. Thus, in this model, there are two parameters that can be varied: the relative qualities of the two technologies (encoded by q), and the cost of being bilingual, which reflects a type of incompatibility (encoded by c). Following [13], we assume the underlying graph G is infinite; we further assume that for some natural number Δ, each node has degree Δ.[1] We are interested in the question posed at the outset, of whether a new technology A can spread through a network where almost everyone is initially using B. Formally, we say that strategy A can become epidemic if the following holds: starting from a state in which all nodes in a finite set S adopt A, and all other nodes adopt B, a sequence of best-response updates (potentially with tie-breaking) in G − S causes every node to eventually adopt A. We also introduce one additional bit of notation that will be useful in the subsequent sections: we define r = c/Δ, the fixed penalty for adopting AB, scaled so that it is a per-edge cost.

In the Morris model, where the only strategic options are A and B, a key parameter is the contagion threshold of G, denoted q*(G): this is the supremum of q for which A can become epidemic in G with parameter q in the payoff structure. A central result of [13] is that 1/2 is the maximum possible contagion threshold for any graph: sup_G q*(G) = 1/2. Indeed, there exist graphs in which the contagion threshold is as large as 1/2 (including the infinite line, the unique infinite connected 2-regular graph); on the other hand, one can show there is no graph with a contagion threshold greater than 1/2.

In our model, where the bilingual strategy AB is possible, we have a two-dimensional parameter space, so instead of a contagion threshold q*(G) we have an epidemic region Ω(G), which is the subset of the (q, r) plane for which A can
become epidemic in G. And in place of the maximum possible contagion threshold sup_G q*(G), we must consider the general epidemic region Ω = ∪_G Ω(G), where the union is taken over all infinite Δ-regular graphs; this is the set of all (q, r) values for which A can become epidemic in some Δ-regular network.

[1] We can obtain strictly analogous results by taking a sequence of finite graphs and expressing results asymptotically, but the use of an infinite bounded-degree graph G makes it conceptually much cleaner to express the results (as it does in Morris's paper [13]): less intricate quantification is needed to express the diffusion properties, and the qualitative phenomena remain the same.

Figure 1: The region of the (q, r) plane for which technology A can become epidemic on the infinite line.

Our Results. We find, first of all, that the epidemic region Ω(G) can be unexpectedly complex, even for very simple graphs G. Figure 1 shows the epidemic region for the infinite line; one observes that neither the region Ω(G) nor its complement is convex in the positive quadrant, due to the triangular "cut-out" shape. (We find analogous shapes that become even more complex for other simple infinite graph structures; see, for example, Figures 3 and 4.) In particular, this means that for values of q close to but less than 1/2, strategy A can become epidemic on the infinite line if r is sufficiently small or sufficiently large, but not if r takes values in some intermediate interval. In other words, strategy B (which represents the worse technology, since q < 1/2) will survive if and only if the cost of being bilingual is calibrated to lie in this middle interval. This is a reflection of limited compatibility: it may be in the interest of an incumbent technology to make it difficult, but not too difficult, to use a new technology, and we find it surprising that this should emerge from a basic model on such a simple network structure. It
is natural to ask whether there is a qualitative interpretation of how this arises from the model, and in fact it is not hard to give such an interpretation, as follows.
When r is very small, it is cheap for nodes to adopt AB as a strategy, and so AB spreads through the whole network.
Once AB is everywhere, the best-response updates cause all nodes to switch to A, since they get the same interaction benefits without paying the penalty of r.
When r is very large, nodes at the interface, with one A neighbor and one B neighbor, will find it too expensive to choose AB, so they will choose A (the better technology), and hence A will spread step-by-step through the network.
When r takes an intermediate value, a node v at the interface, with one A neighbor and one B neighbor, will find it most beneficial to adopt AB as a strategy.
Once this happens, the neighbor of v who is playing B will not have sufficient incentive to switch, and the best-response updates make no further progress.
Hence, this intermediate value of r allows a "boundary" of AB to form between the adopters of A and the adopters of B.
In short, the situation facing B is this: if it is too permissive, it gets invaded by AB followed by A; if it is too inflexible, forcing nodes to choose just one of A or B, it gets destroyed by a cascade of direct conversions to A.
But if it has the right balance in the value of r, then the adoptions of A come to a stop at a bilingual boundary where nodes adopt AB.
Moving beyond specific graphs G, we find that this non-convexity holds in a much more general sense as well, by considering the general epidemic region Ω = ∪_G Ω(G).
For any given value of Δ, the region Ω is a complicated union of bounded and unbounded polygons, and we do not have a simple closed-form description for it.
However, we can show via a potential function argument that no point (q, r) with q > 1/2 belongs to Ω.
Moreover, we can show the existence of a point (q, r) ∉ Ω for which q < 1/2.
On the other hand, consideration of the epidemic region for the infinite line shows that (1/2, r) ∈ Ω for r = 0 and for r sufficiently large.
Hence, neither Ω nor its complement is convex in the positive quadrant.
Finally, we also extend a characterization that Morris gave for the contagion threshold [13], producing a somewhat more intricate characterization of the region Ω(G).
In Morris's setting, without an AB strategy, he showed that A cannot become epidemic with parameter q if and only if every cofinite set of nodes contains a subset S that functions as a well-connected "community": every node in S has at least a (1 − q) fraction of its neighbors in S.
In other words, tightly-knit communities are the natural obstacles to diffusion in his setting.
With the AB strategy as a further option, a more complex structure becomes the obstacle: we show that A cannot become epidemic with parameters (q, r) if and only if every cofinite set contains a structure consisting of a tightly-knit community with a particular kind of "interface" of neighboring nodes.
We show that such a structure allows nodes to adopt AB at the interface and B inside the community itself, preventing the further spread of A; and conversely, this is the only way for the spread of A to be blocked.
The analysis underlying the characterization theorem yields a number of other consequences; a basic one is, roughly speaking, that the outcome of best-response updates is independent of the order in which the updates are sequenced (provided only that each node attempts to update itself infinitely often).
Further Extensions.
Another way to model compatibility and interoperability in diffusion models is through the "off-diagonal" terms representing the payoff for interactions between a node adopting A and a node adopting B.
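The three-way best-response rule discussed above (choose A, B, or AB at fixed cost rΔ) can be simulated directly. The sketch below is an illustrative finite approximation of the infinite line, not a construction from the paper: a single seed node is held fixed at A, ties are broken toward A, and an AB-AB edge is assumed to yield the payoff of the better shared technology, max(q, 1 − q). The parameter values q = 0.4 and the three r values are illustrative choices; they reproduce the non-convex behavior of Figure 1, with A taking over for small and large r but blocked by a bilingual AB boundary at intermediate r.

```python
def becomes_epidemic(q, r, n=41, sweeps=200):
    """Best-response dynamics with strategies A, B, AB on an n-node path
    (a finite stand-in for the infinite line, so Delta = 2 and the fixed
    AB cost is c = r * 2).  A single frozen seed node starts with A; all
    other nodes start with B and update until a fixed point is reached.
    Returns True if every node ends up playing A."""
    a, b = 1.0 - q, q                        # A-A and B-B edge payoffs

    def edge(s, t):
        # Payoff of the best technology shared by the two endpoints.
        p = 0.0
        if 'A' in s and 'A' in t:
            p = max(p, a)
        if 'B' in s and 'B' in t:
            p = max(p, b)
        return p

    strats = ['B'] * n
    seed = n // 2
    strats[seed] = 'A'
    nbrs = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]

    for _ in range(sweeps):
        changed = False
        for i in range(n):
            if i == seed:
                continue
            payoff = {s: sum(edge(s, strats[j]) for j in nbrs[i])
                      - (2 * r if s == 'AB' else 0.0)
                      for s in ('A', 'AB', 'B')}
            best = max(('A', 'AB', 'B'), key=payoff.get)  # ties break toward A
            if best != strats[i]:
                strats[i] = best
                changed = True
        if not changed:
            break
    return all(s == 'A' for s in strats)

# q = 0.4: A is the better technology.  Small or large r lets A take over,
# but an intermediate r leaves an AB boundary behind which B survives.
print(becomes_epidemic(0.4, 0.05))  # True
print(becomes_epidemic(0.4, 0.15))  # False
print(becomes_epidemic(0.4, 0.30))  # True
```

At r = 0.15 the dynamics freeze in a configuration of the form B…B, AB, A…A, AB, B…B, which is exactly the bilingual boundary described above.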
Rather than setting these to 0, we can consider setting them to a positive value x.
We also consider an extension to three competing technologies; for technologies X and Y, let qX denote the payoff from an X-X interaction on an edge and qXY denote the payoff from an X-Y interaction on an edge.
We consider a setting in which two technologies B and C, which initially coexist with qBC = 0, face the introduction of a third, better technology A at a finite set of nodes.
We show an example in which B and C both survive in equilibrium if they set qBC in a particular range of values, but not if they set qBC too low or too high to lie in this range.
Thus, even in a basic diffusion model with three technologies, one finds cases in which two firms have an incentive to adopt a limited "strategic alliance," partially increasing their interoperability to defend against a new entrant in the market.

2. MODEL
3. EXAMPLES
4. CHARACTERIZATION
5. NON-EPIDEMIC REGIONS IN GENERAL GRAPHS
6. LIMITED COMPATIBILITY

We now consider some further ways of modeling compatibility and interoperability.
We first consider two technologies, as in the previous sections, and introduce "off-diagonal" payoffs to capture a positive benefit in direct A-B interactions.
We find that this is in fact no more general than the model with zero payoffs for A-B interactions.
We then consider extensions to three technologies, identifying situations in which two coexisting incumbent technologies may or may not want to increase their mutual compatibility in the face of a new, third technology.
Two technologies.
A natural relaxation of the two-technology model is to introduce (small) positive payoffs for A-B interaction; that is, cross-technology communication yields some lesser value to both agents.
We can model this using a variable xAB representing the payoff gathered by an agent with technology A when her neighbor has technology B, and similarly, a variable xBA representing the payoff gathered by an agent with B when her neighbor
has A.
Here we consider the special case in which these "off-diagonal" entries are symmetric, i.e., xAB = xBA = x.
We also assume that x < 1/2.
However, if we insert symmetric off-diagonal payoffs x = 1/4, we have a new game, equivalent to a game parameterized by r' = 5/16 and q' = 1/4.
Since q' < 1/2 and q' < 2r', A is epidemic in this game, and thus also in the game with limited compatibility.
We now show that generally, if A is the superior technology (i.e., q < 1/2), adding a compatibility term x can only help A spread.
THEOREM 6.2.
Let G be a game without compatibility, parameterized by r and q on a particular network.
Let G' be that same game, but with an added symmetric compatibility term x.
If A is epidemic for G, then A is epidemic for G'.
PROOF.
We will show that any blocking structure in G' is also a blocking structure in G. By our characterization theorem, Theorem 4.6, this implies the desired result.
We have that G' is equivalent to a game without compatibility parameterized by q' = (q − x) / (1 − 2x) and r' = r / (1 − 2x).
Consider a blocking structure (SB, SAB) for G'.
We know that for any v ∈ SAB, q' · dSB(v) > r'Δ.
Thus q · dSB(v) ≥ (q − x) · dSB(v) = q'(1 − 2x) · dSB(v) > r'(1 − 2x)Δ = rΔ, as required for a blocking structure in G.
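The reduction used in this proof is a simple affine reparameterization, which can be checked numerically. In the sketch below, the inputs q = 3/8 and r = 5/32 are hypothetical values chosen only so that, with x = 1/4, the mapping reproduces the q' = 1/4 and r' = 5/16 mentioned above.

```python
def strip_compatibility(q, r, x):
    """Map a game with symmetric off-diagonal payoff x (0 <= x < 1/2) to the
    equivalent game with zero off-diagonal payoffs, following the mapping in
    the proof of Theorem 6.2: q' = (q - x) / (1 - 2x), r' = r / (1 - 2x)."""
    assert 0.0 <= x < 0.5
    scale = 1.0 - 2.0 * x
    return (q - x) / scale, r / scale

q_prime, r_prime = strip_compatibility(q=3/8, r=5/32, x=1/4)
print(q_prime, r_prime)  # 0.25 0.3125, i.e. q' = 1/4 and r' = 5/16
```

The assertion x < 1/2 reflects that the scaling factor 1 − 2x must stay positive for the mapping to preserve the ordering of payoffs.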
Similarly, the two blocking structure constraints for v ∈ SB are only strengthened when we move from G' to G.
More than two technologies.
Given the complex structure inherent in contagion games with two technologies, the understanding of contagion games with three or more technologies is largely open.
Here we indicate some of the technical issues that come up with multiple technologies, through a series of initial results.
The basic set-up we study is one in which two incumbent technologies B and C are initially coexisting, and a third technology A, superior to both, is introduced initially at a finite set of nodes.
We first present a theorem stating that for any even Δ, there is a contagion game on a Δ-regular graph in which the two incumbent technologies B and C may find it beneficial to increase their compatibility so as to prevent getting wiped out by the new superior technology A.
In particular, we consider a situation in which initially, two technologies B and C with zero compatibility are at a stable state.
By a stable state, we mean that no finite perturbation of the current states can lead to an epidemic for either B or C.
We also have a technology A that is superior to both B and C, and can become epidemic by forcing a single node to choose A. However, by increasing their compatibility, B and C can maintain their stability and resist an epidemic from A. Let qA denote the payoffs to two adjacent nodes that both choose technology A, and define qB and qC analogously.
We will assume qA > qB > qC.
We also assume that r, the cost of selecting additional technologies, is sufficiently large so as to ensure that nodes never adopt more than one technology.
Finally, we consider a compatibility parameter qBC that represents the payoffs to two adjacent nodes when one selects B and the other selects C.
Thus our contagion game is now described by five parameters (G, qA, qB, qC, qBC).
PROOF. (Sketch.)
Given Δ, define G by starting with an infinite grid and connecting each node to its nearest Δ − 2 neighbors that are in the same row.
The initial state s assigns strategy B to even rows and strategy C to odd rows.
Let qA = 4k² + 4k + 1/2, qB = 2k + 2, qC = 2k + 1, and qBC = 2k + 3/4.
The first, third, and fourth claims in the theorem can be verified by checking the corresponding inequalities.
The second claim follows from the first and the observation that the alternating rows prevent any plausible epidemic from growing vertically.
The above theorem shows that two technologies may both be able to survive the introduction of a new technology by increasing their level of compatibility with each other.
As one might expect, there are cases when increased compatibility between two technologies helps one technology at the expense of the other.
Table 1: The payoffs in the coordination game.
Entry (x, y) in row i, column j indicates that the row player gets a payoff of x and the column player gets a payoff of y when the row player plays strategy i and the column player plays strategy j.
Surprisingly, however, there are also instances in which compatibility is in fact harmful to both parties; the next example considers a fixed initial configuration with technologies A, B and C that is at equilibrium when qBC = 0.
However, if this compatibility term is increased sufficiently, equilibrium is lost, and A becomes epidemic.
EXAMPLE 6.4.
Consider the union of an infinite two-dimensional grid graph with nodes u(x, y) and an infinite line graph with nodes v(y).
Add an edge between u(1, y) and v(y) for all y.
For this network, we consider the initial configuration in which all v (y) nodes select A, and node u (x, y) selects B if x <0 and selects C otherwise.\nWe now define the parameters of this game as follows.\nLet qA = 3.95, qB = 1.25, qC = 1, and qBC = 0.\nIt is easily verified that for these values, the initial configuration given above is an equilibrium.\nHowever, now suppose we increase the coordination term, setting qBC = 0.9.\nThis is not an equilibrium, since each node of the form u (0, y) now has an incentive to switch from C (generating a payoff of 3.9) to B (thereby generating a payoff of 3.95).\nHowever, once these nodes have adopted B, the best-response for each node of the form u (1, y) is A (A generates a payoff of 4 where as B only generates a payoff of 3.95).\nFrom here, it is not hard to show that A spreads directly throughout the entire network.","lvl-4":"The Role of Compatibility in the Diffusion of Technologies Through Social Networks\nABSTRACT\nIn many settings, competing technologies--for example, operating systems, instant messenger systems, or document formats--can be seen adopting a limited amount of compatibility with one another; in other words, the difficulty in using multiple technologies is balanced somewhere between the two extremes of impossibility and effortless interoperability.\nThere are a range of reasons why this phenomenon occurs, many of which--based on legal, social, or business considerations--seem to defy concise mathematical models.\nDespite this, we show that the advantages of limited compatibility can arise in a very simple model of diffusion in social networks, thus offering a basic explanation for this phenomenon in purely strategic terms.\nOur approach builds on work on the diffusion of innovations in the economics literature, which seeks to model how a new technology A might spread through a social network of individuals who are currently users of technology B.\nWe consider several ways of capturing the compatibility 
of A and B, focusing primarily on a model in which users can choose to adopt A, adopt B, or--at an extra cost--adopt both A and B.\nWe characterize how the ability of A to spread depends on both its quality relative to B, and also this additional cost of adopting both, and find some surprising non-monotonicity properties in the dependence on these parameters: in some cases, for one technology to survive the introduction of another, the cost of adopting both technologies must be balanced within a narrow, intermediate range.\nWe also extend the framework to the case of multiple technologies, where we find that a simple This work has been supported in part by NSF grants CCF0325453, IIS-0329064, CNS-0403340, and BCS-0537606, a Google Research Grant, a Yahoo! Research Alliance Grant, the Institute for the Social Sciences at Cornell, and the John D. and Catherine T. MacArthur Foundation.\nmodel captures the phenomenon of two firms adopting a limited \"strategic alliance\" to defend against a new, third technology.\n1.\nINTRODUCTION\nDiffusion and Networked Coordination Games.\nSuch issues arise, for example, in the adoption of new technologies, the emergence of new social norms or organizational conventions, or the spread of human languages [2, 14, 15, 16, 17].\nAn active line of research in economics and mathematical sociology is concerned with modeling these types of diffusion processes as a coordination game played on a social network [1, 5, 7, 13, 19].\nWe begin by discussing one of the most basic game-theoretic diffusion models, proposed in an influential paper of Morris [13], which will form the starting point for our work here.\nWe describe it in terms of the following technology adoption scenario, though there are many other examples that would serve the same purpose.\nNote that A is the \"better\" technology if q <21, in the sense that A-A payoffs would then exceed B-B payoffs, while A is the worse technology if q> 21.\nA number of qualitative insights can be 
derived from a diffusion model even at this level of simplicity.\nSpecifically, consider a network G, and let all nodes initially play B.\nNow suppose a small number of nodes begin adopting strategy A instead.\nCompatibility, Interoperability, and Bilinguality.\nAn important piece that is arguably missing from the basic game-theoretic models of diffusion, however, is a more detailed picture of what is happening at the coexistence boundary, where the basic form of the model posits nodes that adopt A linked to nodes that adopt B.\nIn these motivating settings for the models, of course, one very often sees interface regions in which individuals essentially become \"bilingual.\"\nIn the case of human language diffusion, this bilinguality is meant literally: geographic regions where there is substantial interaction with speakers of two different languages tend to have inhabitants who speak both.\nTaking this view, it is natural to ask how diffusion models behave when extended so that certain nodes can be bilingual in this very general sense, adopting both strategies at some cost to themselves.\nWhat might we learn from such an extension?\nTo begin with, it has the potential to provide a valuable perspective on the question of compatibility and incompatibility that underpins competition among technology companies.\nThere is a large literature on how compatibility among technologies affects competition between firms, and in particular how incompatibility may be a beneficial strategic decision for certain participants in a market [3, 4, 8, 9, 12].\nWhile these existing models of compatibility capture network effects in the sense that the users in the market prefer to use technology that is more widespread, they do not capture the more finegrained network phenomenon represented by diffusion--that each user is including its local view in the decision, based on what its own social network neighbors are doing.\nA diffusion model that incorporated such extensions could provide 
insight into the structure of boundaries in the network between technologies; it could potentially offer a graph-theoretic basis for how incompatibility may benefit an existing technology, by strengthening these boundaries and preventing the incursion of a new, better technology.\nThe present work: Diffusion with bilingual behavior.\nIn this paper, we develop a set of diffusion models that incorporate notions of compatibility and bilinguality, and we find that some unexpected phenomena emerge even from very simple versions of the models.\nWe begin with perhaps the simplest way of extending Morris's model discussed above to incorporate bilingual behavior.\nConsider again the example of IM systems A and B, with the payoff structure as before, but now suppose that each node can adopt a third strategy, denoted AB, in which it decides to use both A and B.\nFinally, an adopter of AB pays a fixed-cost penalty of c (i.e.--c is added to its total payoff) to represent the cost of having to maintain both technologies.\nThus, in this model, there are two parameters that can be varied: the relative qualities of the two technologies (encoded by q), and the cost of being bilingual, which reflects a type of incompatibility (encoded by c).\nWe also introduce one additional bit of notation that will be useful in the subsequent sections: we define r = c \/ \u0394, the fixed penalty for adopting AB, scaled so that it is a per-edge cost.\nIn the Morris model, where the only strategic options are A and B, a key parameter is the contagion threshold of G, denoted q \u2217 (G): this is the supremum of q for which A can become epidemic in G with parameter q in the payoff structure.\nA central result of [13] is that 21 is the maximum possible contagion threshold for any graph: supG q \u2217 (G) = 21.\nIndeed, there exist graphs in which the contagion threshold is as large as 21 (including the infinite line--the unique infinite connected 2-regular graph); on the other hand, one can show there 
is no graph with a contagion threshold greater than\nFigure 1: The region of the (q, r) plane for which technology A can become epidemic on the infinite line.\nOur Results.\n(We find analogous shapes that become even more complex for other simple infinite graph structures; see for example Figures 3 and 4.)\nIn particular, this means that for values of q close to but less than 21, strategy A can become epidemic on the infinite line if r is sufficiently small or sufficiently large, but not if r takes values in some intermediate interval.\nIn other words, strategy B (which represents the worse technology, since q <21) will survive if and only if the cost of being bilingual is calibrated to lie in this middle interval.\nThis is a reflection of limited compatibility--that it may be in the interest of an incumbent technology to make it difficult but not too difficult to use a new technology--and we find it surprising that it should emerge from a basic model on such a simple network structure.\nIt is natural to ask whether there is a qualitative interpretation of how this arises from the model, and in fact it is not hard to give such an interpretation, as follows.\nWhen r is very small, it is cheap for nodes to adopt AB as a strategy, and so AB spreads through the whole network.\nOnce AB is everywhere, the best-response updates cause all nodes to switch to A, since they get the same interaction benefits without paying the penalty of r.\nWhen r is very large, nodes at the interface, with one A neighbor and one B neighbor, will find it too expensive to choose AB, so they will choose A (the better technology), and hence A will spread step-by-step through the network.\nWhen r takes an intermediate value, a node v at the interface, with one A neighbor and one B neighbor, will find it most beneficial to adopt AB as a strategy.\nHence, this intermediate value of r allows a \"boundary\" of AB to form between the adopters of A and the adopters of B.\nBut if it has the right 
balance in the value of r, then the adoptions of A come to a stop at a bilingual boundary where nodes adopt AB.\nMoving beyond specific graphs G, we find that this non-convexity holds in a much more general sense as well, by considering the general epidemic region \u03a9 = UG\u03a9 (G).\nFor any given value of \u0394, the region \u03a9 is a complicated union of bounded and unbounded polygons, and we do not have a simple closed-form description for it.\nHowever, we can show via a potential function argument that no point (q, r) with q> 21 belongs to \u03a9.\nMoreover, we can show the existence of a point (q, r) E ~ \u03a9 for which q <21.\nOn the other hand, consideration of the epidemic region for the infinite line shows that (21, r) E \u03a9 for r = 0 and for r sufficiently large.\nHence, neither \u03a9 nor its complement is convex in the positive quadrant.\nFinally, we also extend a characterization that Morris gave for the contagion threshold [13], producing a somewhat more intricate characterization of the region \u03a9 (G).\nIn Morris's setting, without an AB strategy, he showed that A cannot become epidemic with parameter q if and only if every cofinite set of nodes contains a subset S that functions as a well-connected \"community\": every node in S has at least a (1--q) fraction of its neighbors in S.\nIn other words, tightly-knit communities are the natural obstacles to diffusion in his setting.\nWith the AB strategy as a further option, a more complex structure becomes the obstacle: we show that A cannot become epidemic with parameters (q, r) if and only if every cofinite set contains a structure consisting of a tightly-knit community with a particular kind of \"interface\" of neighboring nodes.\nWe show that such a structure allows nodes to adopt AB at the interface and B inside the community itself, preventing the further spread of A; and conversely, this is the only way for the spread of A to be blocked.\nFurther Extensions.\nAnother way to model 
compatibility and interoperability in diffusion models is through the \"off-diagonal\" terms representing the payoff for interactions between a node adopting A and a node adopting B. Rather than setting these to 0, we can consider setting them to a value x 2 competing technologies; for technologies X and Y, let qX denote the payoff from an X-X interaction on an edge and qXY denote the payoff from an X-Y interaction on an edge.\nWe consider a setting in which two technologies B and C, which initially coexist with qBC = 0, face the introduction of a third, better technology A at a finite set of nodes.\nWe show an example in which B and C both survive in equilibrium if they set qBC in a particular range of values, but not if they set qBC too low or too high to lie in this range.\nThus, in even in a basic diffusion model with three technologies, one finds cases in which two firms have an incentive to adopt a limited \"strategic alliance,\" partially increasing their interoperability to defend against a new entrant in the market.\n6.\nLIMITED COMPATIBILITY\nWe now consider some further ways of modeling compatibility and interoperability.\nWe first consider two technologies, as in the previous sections, and introduce \"off-diagonal\" payoffs to capture a positive benefit in direct A-B interactions.\nWe find that this is\nin fact no more general than the model with zero payoffs for A-B interactions.\nWe then consider extensions to three technologies, identifying situations in which two coexisting incumbent technologies may or may not want to increases their mutual compatibility in the face of a new, third technology.\nTwo technologies.\nA natural relaxation of the two-technology model is to introduce (small) positive payoffs for A-B interaction; that is, cross-technology communication yields some lesser value to both agents.\nWe can model this using a variable xAB representing the payoff gathered by an agent with technology A when her neighbor has technology B, and 
similarly, a variable xBA representing the payoff gathered by an agent with B when her neighbor has A.\nHere we consider the special case in which these \"off-diagonal\" entries are symmetric, i.e., xAB = xBA = x.\nWe also assume that x 1\/2.\nHowever, if we insert symmetric off-diagonal payoffs x = 1\/4, we have a new game, equivalent to a game parameterized by r' = 5\/16 and q' = 1\/4.\nSince q' <1\/2 and q' <2r', A is epidemic in this game, and thus also in the game with limited compatibility.\nWe now show that generally, if A is the superior technology (i.e., q <1\/2), adding a compatibility term x can only help A spread.\nTHEOREM 6.2.\nLet G be a game without compatibility, parameterized by r and q on a particular network.\nLet G' be that same game, but with an added symmetric compatibility term x.\nIf A is epidemic for G, then A is epidemic for G'.\nPROOF.\nWe will show that any blocking structure in G' is also a blocking structure in G. By our characterization theorem, Theorem 4.6, this implies the desired result.\nWe have that G' is equivalent to a game without compatibility parameterized by q' = (q--x) \/ (1--2x) and r' = r \/ (1--2x).\nConsider a blocking structure (SB, SAB) for G'.\nThus\nMore than two technologies.\nGiven the complex structure inherent in contagion games with two technologies, the understanding of contagion games with three or more technologies is largely open.\nHere we indicate some of the technical issues that come up with multiple technologies, through a series of initial results.\nThe basic set-up we study is one in which two incumbent technologies B and C are initially coexisting, and a third technology A, superior to both, is introduced initially at a finite set of nodes.\nWe first present a theorem stating that for any even \u0394, there is a contagion game on a \u0394--regular graph in which the two incumbent technologies B and C may find it beneficial to increase their compatibility so as to prevent getting wiped out by the new 
superior technology A.\nIn particular, we consider a situation in which initially, two technologies B and C with zero compatibility are at a stable state.\nBy a stable state, we mean that no finite perturbation of the current states can lead to an epidemic for either B or C.\nWe also have a technology A that is superior to both B and C, and can become epidemic by forcing a single node to choose A. However, by increasing their compatibility, B and C can maintain their stability and resist an epidemic from A. Let qA denote the payoffs to two adjacent nodes that both choose technology A, and define qB and qC analogously.\nWe will assume qA> qB> qC.\nWe also assume that r, the cost of selecting additional technologies, is sufficiently large so as to ensure that nodes never adopt more than one technology.\nFinally, we consider a compatibility parameter qBC that represents the payoffs to two adjacent nodes when one selects B and the other selects C. Thus our contagion game is now described by five parameters (G, qA, qB, qC, qBC).\nPROOF.\n(Sketch.)\nGiven \u0394, define G by starting with an infinite grid and connecting each node to its nearest \u0394--2 neighbors that are in the same row.\nThe initial state s assigns strategy B to even rows and strategy C to odd rows.\nThe first, third, and fourth claims in the theorem can be verified by checking the corresponding inequalities.\nThe second claim follows from the first and the observation that the alternating rows contain any plausible epidemic from growing vertically.\nThe above theorem shows that two technologies may both be able to survive the introduction of a new technology by increasing their level of compatibility with each other.\nAs one might expect,\nTable 1: The payoffs in the coordination game.\nEntry (x, y) in row i, column j indicates that the row player gets a payoff of x and the column player gets a payoff of y when the row player plays strategy i and the column player plays strategy j.\nthere are cases 
when increased compatibility between two technologies helps one technology at the expense of the other. Surprisingly, however, there are also instances in which compatibility is in fact harmful to both parties; the next example considers a fixed initial configuration with technologies A, B and C that is at equilibrium when qBC = 0. However, if this compatibility term is increased sufficiently, equilibrium is lost, and A becomes epidemic.

EXAMPLE 6.4. Consider the union of an infinite two-dimensional grid graph with nodes u(x, y) and an infinite line graph with nodes v(y). Add an edge between u(1, y) and v(y) for all y. For this network, we consider the initial configuration in which all v(y) nodes select A, and node u(x, y) selects B if x < 0 and selects C otherwise. We now define the parameters of this game as follows. It is easily verified that for these values, the initial configuration given above is an equilibrium. However, now suppose we increase the coordination term, setting qBC = 0.9. This is not an equilibrium, since each node of the form u(0, y) now has an incentive to switch from C (generating a payoff of 3.9) to B (thereby generating a payoff of 3.95). However, once these nodes have adopted B, the best response for each node of the form u(1, y) is A (A generates a payoff of 4, whereas B generates a payoff of only 3.95). From here, it is not hard to show that A spreads directly throughout the entire network.

The Role of Compatibility in the Diffusion of Technologies Through Social Networks

ABSTRACT

In many settings, competing technologies--for example, operating systems, instant messenger systems, or document formats--can be seen adopting a limited amount of compatibility with one another; in other words, the difficulty in using multiple technologies is balanced somewhere between the two extremes of impossibility and effortless interoperability. There are a range of reasons why this phenomenon occurs, many of which--based on legal, social, or business considerations--seem to defy concise mathematical models. Despite this, we show that the advantages of limited compatibility can arise in a very simple model of diffusion in social networks, thus offering a basic explanation for this phenomenon in purely strategic terms. Our approach builds on work on the diffusion of innovations in the economics literature, which seeks to model how a new technology A might spread through a social network of individuals who are currently users of technology B. We consider several ways of capturing the compatibility of A and B, focusing primarily on a model in which users can choose to adopt A, adopt B, or--at an extra cost--adopt both A and B. We characterize how the ability of A to spread depends on both its quality relative to B, and also this additional cost of adopting both, and find some surprising non-monotonicity properties in the dependence on these parameters: in some cases, for one technology to survive the introduction of another, the cost of adopting both technologies must be balanced within a narrow, intermediate range. We also extend the framework to the case of multiple technologies, where we find that a simple model captures the phenomenon of two firms adopting a limited "strategic alliance" to defend against a new, third technology.

This work has been supported in part by NSF grants CCF0325453, IIS-0329064, CNS-0403340, and BCS-0537606, a Google Research Grant, a Yahoo! Research Alliance Grant, the Institute for the Social Sciences at Cornell, and the John D. and Catherine T. MacArthur Foundation.

1. INTRODUCTION

Diffusion and Networked Coordination Games. A fundamental question in the social sciences is to understand the ways in which new ideas, behaviors, and practices diffuse through populations. Such issues arise, for example, in the adoption of new technologies, the emergence of new social norms or organizational conventions, or the spread of human languages [2, 14, 15, 16, 17]. An active line of research in economics and mathematical sociology is concerned with modeling these types of diffusion processes as a coordination game played on a social network [1, 5, 7, 13, 19]. We begin by discussing one of the most basic game-theoretic diffusion models, proposed in an influential paper of Morris [13], which will form the starting point for our work here. We describe it in terms of the following technology adoption scenario, though there are many other examples that would serve the same purpose. Suppose there are two instant messenger (IM) systems A and B, which are not interoperable--users must be on the same system in order to communicate. There is a social network G on the users, indicating who wants to talk to whom, and the endpoints of each edge (v, w) play a coordination game with possible strategies A or B: if v and w each choose IM system B, then they each receive a payoff of q (since they can talk to each other using system B); if they each choose IM system A, then they each receive a payoff of 1 − q; and if they choose opposite systems, then they each receive a payoff of 0 (reflecting the lack of interoperability). Note that A is the "better" technology if q < 1/2, in the sense that A-A payoffs would then exceed B-B payoffs, while A is the worse technology if q > 1/2. A number of qualitative insights can be derived from a diffusion model even at this level of
simplicity. Specifically, consider a network G, and let all nodes initially play B. Now suppose a small number of nodes begin adopting strategy A instead. If we apply best-response updates to nodes in the network, then nodes in effect will be repeatedly applying the following simple rule: switch to A if enough of your network neighbors have already adopted A. (E.g., you begin using a particular IM system--or social-networking site, or electronic document format--if enough of your friends are users of it.) As this unfolds, there can be a cascading sequence of nodes switching to A, such that a network-wide equilibrium is reached in the limit: this equilibrium may involve uniformity, with all nodes adopting A; or it may involve coexistence, with the nodes partitioned into a set adopting A and a set adopting B, and edges yielding zero payoff connecting the two sets. Morris [13] provides a set of elegant graph-theoretic characterizations for when these qualitatively different types of equilibria arise, in terms of the underlying network topology and the quality of A relative to B (i.e., the relative sizes of 1 − q and q).

Compatibility, Interoperability, and Bilinguality. In most of the settings that form the motivation for diffusion models, coexistence (however unbalanced) is the typical outcome: for example, human languages and social conventions coexist along geographic boundaries; it is a stable outcome for the financial industry to use Windows while the entertainment industry uses Mac OS. An important piece that is arguably missing from the basic game-theoretic models of diffusion, however, is a more detailed picture of what is happening at the coexistence boundary, where the basic form of the model posits nodes that adopt A linked to nodes that adopt B. In these motivating settings for the models, of course, one very often sees interface regions in which individuals essentially become "bilingual." In the case of human language diffusion, this bilinguality is meant literally: geographic regions where there is substantial interaction with speakers of two different languages tend to have inhabitants who speak both. But bilinguality is also an essential feature of technological interaction: in the end, many people have accounts on multiple IM systems, for example, and more generally many maintain the ability to work within multiple computer systems so as to collaborate with people embedded in each. Taking this view, it is natural to ask how diffusion models behave when extended so that certain nodes can be bilingual in this very general sense, adopting both strategies at some cost to themselves. What might we learn from such an extension? To begin with, it has the potential to provide a valuable perspective on the question of compatibility and incompatibility that underpins competition among technology companies. There is a large literature on how compatibility among technologies affects competition between firms, and in particular how incompatibility may be a beneficial strategic decision for certain participants in a market [3, 4, 8, 9, 12]. Whinston [18] provides an interesting taxonomy of different kinds of strategic incompatibility; and specific industry case studies (including theoretical perspectives) have recently been carried out for commercial banks [10], copying and imaging technology [11], and instant messenger systems [6]. While these existing models of compatibility capture network effects in the sense that the users in the market prefer to use technology that is more widespread, they do not capture the more fine-grained network phenomenon represented by diffusion--that each user takes its local view into account, basing its decision on what its own social network neighbors are doing. A diffusion model that incorporated such extensions could provide insight into the structure of boundaries in the network between technologies; it could potentially offer a graph-theoretic basis for how incompatibility may benefit an existing technology, by strengthening these boundaries and preventing the incursion of a new, better technology.

The present work: Diffusion with bilingual behavior. In this paper, we develop a set of diffusion models that incorporate notions of compatibility and bilinguality, and we find that some unexpected phenomena emerge even from very simple versions of the models. We begin with perhaps the simplest way of extending Morris's model discussed above to incorporate bilingual behavior. Consider again the example of IM systems A and B, with the payoff structure as before, but now suppose that each node can adopt a third strategy, denoted AB, in which it decides to use both A and B. An adopter of AB gets to use, on an edge-by-edge basis, whichever of A or B yields higher payoffs in each interaction, and the payoff structure is defined according to this principle: if an adopter of AB interacts with an adopter of B, both receive q; with an adopter of A, both receive 1 − q; and with another adopter of AB, both receive max(q, 1 − q). Finally, an adopter of AB pays a fixed-cost penalty of c (i.e., −c is added to its total payoff) to represent the cost of having to maintain both technologies. Thus, in this model, there are two parameters that can be varied: the relative qualities of the two technologies (encoded by q), and the cost of being bilingual, which reflects a type of incompatibility (encoded by c). Following [13] we assume the underlying graph G is infinite; we further assume that for some natural number Δ, each node has degree Δ.¹ We are interested in the question posed at the outset, of whether a new technology A can spread through a network where almost everyone is initially using B. Formally, we say that strategy A can become epidemic if the following holds: starting from a state in which all nodes in a finite set S adopt A, and all other nodes adopt B, a sequence of best-response updates (potentially with tie-breaking) in G − S causes every node to eventually adopt A. We also introduce one additional bit of notation that will be useful in the subsequent sections: we define r = c/Δ, the fixed penalty for adopting AB, scaled so that it is a per-edge cost. In the Morris model, where the only strategic options are A and B, a key parameter is the contagion threshold of G, denoted q*(G): this is the supremum of q for which A can become epidemic in G with parameter q in the payoff structure. A central result of [13] is that 1/2 is the maximum possible contagion threshold for any graph: sup_G q*(G) = 1/2. Indeed, there exist graphs in which the contagion threshold is as large as 1/2 (including the infinite line--the unique infinite connected 2-regular graph); on the other hand, one can show there is no graph with a contagion threshold greater than 1/2. In our model where the bilingual strategy AB is possible, we have a two-dimensional parameter space, so instead of a contagion threshold q*(G) we have an epidemic region Ω(G), which is the subset of the (q, r) plane for which A can
become epidemic in G. And in place of the maximum possible contagion threshold sup_G q*(G), we must consider the general epidemic region Ω = ∪_G Ω(G), where the union is taken over all infinite Δ-regular graphs; this is the set of all (q, r) values for which A can become epidemic in some Δ-regular network.

¹We can obtain strictly analogous results by taking a sequence of finite graphs and expressing results asymptotically, but the use of an infinite bounded-degree graph G makes it conceptually much cleaner to express the results (as it does in Morris's paper [13]): less intricate quantification is needed to express the diffusion properties, and the qualitative phenomena remain the same.

Figure 1: The region of the (q, r) plane for which technology A can become epidemic on the infinite line.

Our Results. We find, first of all, that the epidemic region Ω(G) can be unexpectedly complex, even for very simple graphs G. Figure 1 shows the epidemic region for the infinite line; one observes that neither the region Ω(G) nor its complement is convex in the positive quadrant, due to the triangular "cut-out" shape. (We find analogous shapes that become even more complex for other simple infinite graph structures; see for example Figures 3 and 4.) In particular, this means that for values of q close to but less than 1/2, strategy A can become epidemic on the infinite line if r is sufficiently small or sufficiently large, but not if r takes values in some intermediate interval. In other words, strategy B (which represents the worse technology, since q < 1/2) will survive if and only if the cost of being bilingual is calibrated to lie in this middle interval. This is a reflection of limited compatibility--that it may be in the interest of an incumbent technology to make it difficult but not too difficult to use a new technology--and we find it surprising that it should emerge from a basic model on such a simple network structure. It is natural to ask whether there is a qualitative interpretation of how this arises from the model, and in fact it is not hard to give such an interpretation, as follows. When r is very small, it is cheap for nodes to adopt AB as a strategy, and so AB spreads through the whole network. Once AB is everywhere, the best-response updates cause all nodes to switch to A, since they get the same interaction benefits without paying the penalty of r. When r is very large, nodes at the interface, with one A neighbor and one B neighbor, will find it too expensive to choose AB, so they will choose A (the better technology), and hence A will spread step-by-step through the network. When r takes an intermediate value, a node v at the interface, with one A neighbor and one B neighbor, will find it most beneficial to adopt AB as a strategy. Once this happens, the neighbor of v who is playing B will not have sufficient incentive to switch, and the best-response updates make no further progress. Hence, this intermediate value of r allows a "boundary" of AB to form between the adopters of A and the adopters of B. In short, the situation facing B is this: if it is too permissive, it gets invaded by AB followed by A; if it is too inflexible, forcing nodes to choose just one of A or B, it gets destroyed by a cascade of direct conversions to A. But if it has the right balance in the value of r, then the adoptions of A come to a stop at a bilingual boundary where nodes adopt AB.

Moving beyond specific graphs G, we find that this non-convexity holds in a much more general sense as well, by considering the general epidemic region Ω = ∪_G Ω(G). For any given value of Δ, the region Ω is a complicated union of bounded and unbounded polygons, and we do not have a simple closed-form description for it. However, we can show via a potential function argument that no point (q, r) with q > 1/2 belongs to Ω. Moreover, we can show the existence of a point (q, r) ∉ Ω for which q < 1/2. On the other hand, consideration of the epidemic region for the infinite line shows that (1/2, r) ∈ Ω for r = 0 and for r sufficiently large. Hence, neither Ω nor its complement is convex in the positive quadrant.

Finally, we also extend a characterization that Morris gave for the contagion threshold [13], producing a somewhat more intricate characterization of the region Ω(G). In Morris's setting, without an AB strategy, he showed that A cannot become epidemic with parameter q if and only if every cofinite set of nodes contains a subset S that functions as a well-connected "community": every node in S has at least a (1 − q) fraction of its neighbors in S. In other words, tightly-knit communities are the natural obstacles to diffusion in his setting. With the AB strategy as a further option, a more complex structure becomes the obstacle: we show that A cannot become epidemic with parameters (q, r) if and only if every cofinite set contains a structure consisting of a tightly-knit community with a particular kind of "interface" of neighboring nodes. We show that such a structure allows nodes to adopt AB at the interface and B inside the community itself, preventing the further spread of A; and conversely, this is the only way for the spread of A to be blocked. The analysis underlying the characterization theorem yields a number of other consequences; a basic one is, roughly speaking, that the outcome of best-response updates is independent of the order in which the updates are sequenced (provided only that each node attempts to update itself infinitely often).

Further Extensions. Another way to model compatibility and interoperability in diffusion models is through the "off-diagonal" terms representing the payoff for interactions between a node adopting A and a node adopting B.
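These off-diagonal payoffs are developed in detail in Section 6; as a small concrete sketch (our own illustration, with function names and sample values that are not from the paper), the payoff rule with a symmetric cross-technology term x, together with the reduction to the basic model used in the proof of Theorem 6.2, can be written as follows:

```python
# Symmetric off-diagonal payoffs: an A-B edge pays x > 0 to both
# endpoints instead of 0.  The proof of Theorem 6.2 reduces this to the
# basic model via q' = (q - x)/(1 - 2x), r' = r/(1 - 2x).
# Sample values below are our own, not the paper's.

def edge_payoff_x(s, t, q, x):
    """Payoff to an endpoint playing s against a neighbor playing t."""
    if {s, t} == {"A", "B"}:
        return x                       # cross-technology payoff
    if s == "A":                       # neighbor is A or AB
        return 1 - q
    if s == "B":                       # neighbor is B or AB
        return q
    return {"A": 1 - q, "B": q, "AB": max(q, 1 - q)}[t]   # s == "AB"

def reduce_game(q, r, x):
    """Equivalent (q', r') of the game with zero off-diagonal payoffs."""
    assert 0 <= x < 0.5
    return (q - x) / (1 - 2 * x), r / (1 - 2 * x)

q, r, x = 0.45, 0.10, 0.10            # assumed sample parameters
q2, r2 = reduce_game(q, r, x)
# When A is already the better technology (q < 1/2), compatibility
# lowers the effective quality of B further: q' < q.
assert q2 < q < 0.5
```

For instance, reduce_game(0.45, 0.10, 0.10) gives (0.4375, 0.125). The reduction works because subtracting x from every edge payoff changes each strategy's total by the same amount, and rescaling by 1/(1 − 2x) preserves best responses, so the x-game and the (q′, r′)-game have identical epidemic behavior.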
Rather than setting these to 0, we can consider setting them to a small positive value x. We also consider extensions to more than two competing technologies; for technologies X and Y, let qX denote the payoff from an X-X interaction on an edge and qXY the payoff from an X-Y interaction on an edge. We consider a setting in which two technologies B and C, which initially coexist with qBC = 0, face the introduction of a third, better technology A at a finite set of nodes. We show an example in which B and C both survive in equilibrium if they set qBC in a particular range of values, but not if they set qBC too low or too high to lie in this range. Thus, even in a basic diffusion model with three technologies, one finds cases in which two firms have an incentive to adopt a limited "strategic alliance," partially increasing their interoperability to defend against a new entrant in the market.

2. MODEL

We now develop some further notation and definitions that will be useful for expressing the model. Recall that we have an infinite Δ-regular graph G, and strategies A, B, and AB that are used in a coordination game on each edge. For edge (v, w), the payoff to each endpoint is 0 if one of the two nodes chooses strategy A and the other chooses strategy B; 1 − q if one chooses strategy A and the other chooses either A or AB; q if one chooses strategy B and the other chooses either B or AB; and max(q, 1 − q) if both choose strategy AB. The overall payoff of an agent v is the sum of the above values over all neighbors w of v, minus a cost which is 0 if v chooses A or B and c = rΔ if she chooses AB. We refer to the overall game, played by all nodes in G, as a contagion game, and denote it using the tuple (G, q, r). This game can have many Nash equilibria. In particular, the two states where everybody uses technology A or everybody uses technology B are both equilibria of this game. As discussed in the previous section, we are interested in the dynamics of reaching an equilibrium in this game; in particular, we would like to know whether it is possible to move from an all-B equilibrium to an all-A equilibrium by changing the strategy of a finite number of agents, and following a sequence of best-response moves. We provide a formal description of this question via the following two definitions.

DEFINITION 2.1. Consider a contagion game (G, q, r). A state in this game is a strategy profile s: V(G) → {A, B, AB}. For two states s and s′ and a vertex v ∈ V(G), if starting from state s and letting v play her best-response move (breaking ties in favor of A and then AB) we get to the state s′, we write s →_v s′. Similarly, for two states s and s′ and a finite sequence S = v1, v2, ..., vk of vertices of G (where the vi's are not necessarily distinct), we say s →_S s′ if there is a sequence of states s1, ..., s_{k−1} such that s →_{v1} s1 →_{v2} s2 ... →_{vk} s′. For an infinite sequence S = v1, v2, ... of vertices of G, we denote the subsequence v1, v2, ..., vk by Sk. We say s →_S s′ for two states s and s′ if for every vertex v ∈ V(G) there exists a k0(v) such that for every k > k0(v), s →_{Sk} s_k for a state s_k with s_k(v) = s′(v).

DEFINITION 2.2. For T ⊆ V(G), we denote by s_T the strategy profile that assigns A to every agent in T and B to every agent in V(G) \ T. We say that technology A can become an epidemic in the game (G, q, r) if there is a finite set T of nodes in G (called the seed set) and a sequence S of vertices in V(G) \ T (where each vertex can appear more than once) such that s_T →_S s_{V(G)}, i.e., endowing agents in T with technology A and letting other agents play their best response according to schedule S would lead every agent to eventually adopt strategy A.
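Definition 2.2 suggests a direct experiment: fix a seed set, schedule best-response moves, and see whether A takes over. The sketch below (our own illustration; the finite cycle, node count, and sample parameter values are assumptions, not the paper's) runs the dynamics on a large cycle as a finite stand-in for the infinite line (Δ = 2). With q = 0.45 it exhibits the non-convexity of Figure 1: A becomes epidemic for small r (AB spreads, then converts to A) and for large r (direct conversions), but is blocked by an AB boundary at an intermediate r.

```python
# Best-response dynamics in a contagion game (G, q, r) on an n-cycle,
# a finite stand-in for the infinite line (Delta = 2).  The cycle,
# node count, and parameter values are our own assumptions.

def edge_payoff(s, t, q):
    """Payoff to an endpoint playing s against a neighbor playing t."""
    if s == "A":
        return 1 - q if t in ("A", "AB") else 0.0
    if s == "B":
        return q if t in ("B", "AB") else 0.0
    # s == "AB": use the better of A or B on each edge.
    return {"A": 1 - q, "B": q, "AB": max(q, 1 - q)}[t]

def best_response(state, v, n, q, r):
    """Best response of node v; ties broken in favor of A, then AB."""
    nbrs = (state[(v - 1) % n], state[(v + 1) % n])
    def total(s):
        cost = 2 * r if s == "AB" else 0.0   # c = r * Delta, Delta = 2
        return sum(edge_payoff(s, t, q) for t in nbrs) - cost
    return max(("A", "AB", "B"), key=lambda s: (total(s), s == "A", s == "AB"))

def run(q, r, n=40, seed=(0, 1)):
    """Iterate best responses (seed nodes frozen at A) to a fixed point."""
    state = ["A" if v in seed else "B" for v in range(n)]
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v in seed:
                continue
            s = best_response(state, v, n, q, r)
            if s != state[v]:
                state[v] = s
                changed = True
    return state

for r in (0.02, 0.10, 0.30):
    final = run(q=0.45, r=r)
    print(r, "epidemic" if all(s == "A" for s in final) else "blocked")
```

Running this prints "epidemic" for r = 0.02 and r = 0.30 but "blocked" for r = 0.10, matching the triangular cut-out on the line. Seed agents are frozen at A (cf. footnote 3), ties are broken in favor of A and then AB as in Definition 2.1, and the loop terminates because, by Lemma 4.1, each node only ever moves B → AB → A.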
The above definition requires that the all-A equilibrium be reachable from the initial state by at least one schedule S of best-response moves.² In fact, we will show in Section 4 that if A can become an epidemic in a game, then for every schedule of best-response moves of the nodes in V(G) \ T in which each node is scheduled an infinite number of times, eventually all nodes adopt strategy A.³

3. EXAMPLES

We begin by considering some basic examples that yield epidemic regions with the kinds of non-convexity properties discussed in Section 1. We first discuss a natural Δ-regular generalization of the infinite line graph, and for this one we work out the complete analysis that describes the region Ω(G), the set of all pairs (q, r) for which the technology A can become an epidemic. We then describe, without the accompanying detailed analysis, the epidemic regions for the infinite Δ-regular tree and for the two-dimensional grid.

²Note that in our definition we assume that agents in T are endowed with the strategy A at the beginning. Alternatively, one can define the notion of epidemic by allowing agents in T to be endowed with any combination of AB and A, or with just AB. However, the difference between these definitions is rather minor and our results carry over with little or no change to these alternative models.

³Note that we assume agents in the seed set T cannot change their strategy.

The infinite line and the thick line graph. For a given even integer Δ, we define the thick line graph LΔ as follows: the vertex set of this graph is Z × {1, 2, ..., Δ/2}, where Z is the set of all integers. There is an edge between vertices (x, i) and (x′, i′) if and only if |x − x′| = 1. For each x ∈ Z, we call the set of vertices {(x, i): i ∈ {1, ..., Δ/2}} the x-th group of vertices.

Figure 2: The thick line graph

Figure 2 shows a picture of L6. Now, assume that starting from a position where every node uses the strategy B, we endow all agents in a group (say, group 0) with the strategy A. Consider the decision faced by the agents in group 1, who have their right-hand neighbors using B and their left-hand neighbors using A. For these agents, the payoffs of strategies A, B, and AB are (1 − q)Δ/2, qΔ/2, and Δ/2 − rΔ, respectively. Therefore, if q ≤ 1/2 and q ≤ 2r, the best response of such an agent is A. Hence, if the above inequality holds and we let agents in groups 1, −1, 2, −2, ... play their best response in this order, then A will become an epidemic. Also, if we have q > 2r and q ≤ 1 − 2r, the best response of an agent with her neighbors on one side playing A and neighbors on the other side playing B is the strategy AB. Therefore, if we let agents in groups 1 and −1 change to their best response, they would switch their strategy to AB. After this, agents in group 2 will see AB on their left and B on their right. For these agents (and similarly for the agents in group −2), the payoffs of strategies A, B, and AB are (1 − q)Δ/2, qΔ, and (q + max(q, 1 − q))Δ/2 − rΔ, respectively. Therefore, if max(1, 2q) − 2r ≥ 1 − q and max(1, 2q) − 2r ≥ 2q, or equivalently, if 2r ≤ q and q + r ≤ 1/2, the best response of such an agent is AB. Hence, if the above inequality holds and we let agents in groups 2, −2, 3, −3, ... play their best response in this order, then every agent (except for agents in group 0) switches to AB. Next, if we let agents in groups 1, −1, 2, −2, ... change their strategy again, for q ≤ 1/2, every agent will switch to strategy A, and hence A becomes an epidemic.⁴

⁴Strictly speaking, since we defined a schedule of moves as a single infinite sequence of vertices in V(G) \ T, the order 1, −1, 2, −2, ..., 1, −1, 2, −2, ... is not a valid schedule. However, since vertices of G have finite degree, it is not hard to see that any ordering of a multiset containing any (possibly infinite) number of copies of each vertex of V(G) \ T can be turned into an equivalent schedule of moves. For example, the sequence 1, −1, 2, −2, 1, −1, 3, −3, 2, −2, ... gives the same outcome as 1, −1, 2, −2, ..., 1, −1, 2, −2, ... in the thick line example.

The above argument shows that for any combination of (q, r) parameters in the marked region in Figure 1, technology A can become an epidemic. It is not hard to see that for points outside this region, A cannot become epidemic.

Further examples: trees and grids. Figures 3 and 4 show the epidemic regions for the infinite grid and the infinite Δ-regular tree. Note they also exhibit non-convexities.

Figure 3: Epidemic regions for the infinite grid

Figure 4: Epidemic regions for the infinite Δ-regular tree

4. CHARACTERIZATION

In this section, we characterize equilibrium properties of contagion games. To this end, we must first argue that contagion games in fact have well-defined and stable equilibria. We then discuss some respects in which the equilibrium reached from an initial state is essentially independent of the order in which best-response updates are performed. We begin with the following lemma, which proves that agents eventually converge to a fixed strategy, and so the final state of a game is well-defined by its initial state and an infinite sequence of moves. Specifically, we prove that once an agent decides to adopt technology A, she never discards it, and once she decides to discard technology B, she never re-adopts it. Thus, after an infinite number of best-response moves, each agent converges to a single strategy.

LEMMA 4.1. Consider a contagion game (G, q, r) and a (possibly infinite) subset T ⊆ V(G) of agents. Let s_T be the strategy profile assigning A to every agent in T and B to every agent in V(G) \ T. Let S = v1, v2, ... be a (possibly infinite) sequence of agents in V(G) \ T, and consider the sequence of states s1, s2, ... obtained by allowing agents to play their best response in the order defined by S (i.e., s →_{v1} s1 →_{v2} s2 ...). Then for every i, one of the following holds:

• si(vi+1) = B and si+1(vi+1) = A,
• si(vi+1) = B and si+1(vi+1) = AB,
• si(vi+1) = AB and si+1(vi+1) = A,
• si(vi+1) = si+1(vi+1).

PROOF. Let X ≥_v^k Y indicate that agent v (weakly) prefers strategy X to strategy Y in state sk. For any k let z_A^k, z_B^k, and z_AB^k be the number of neighbors of v with strategies A, B, and AB in state sk, respectively. Thus, for agent v in state sk,

1. A ≥_v^k B if (1 − q)(z_A^k + z_AB^k) is greater than q(z_B^k + z_AB^k),
2. A ≥_v^k AB if (1 − q)(z_A^k + z_AB^k) is greater than (1 − q)z_A^k + qz_B^k + max(q, 1 − q)z_AB^k − Δr,
3. and AB ≥_v^k B if (1 − q)z_A^k + qz_B^k + max(q, 1 − q)z_AB^k − Δr is greater than q(z_B^k + z_AB^k).

Suppose the lemma is false and consider the smallest i such that the lemma is violated. Let v = vi+1 be the agent who played her best response at time i. Thus, either

1. si(v) = A and si+1(v) = B, or
2. si(v) = A and si+1(v) = AB, or
3. si(v) = AB and si+1(v) = B.

We show that in the third case, agent v could not have been playing a best response. The other cases are similar. In the third case, we have si(v) = AB and si+1(v) = B. As i is the earliest time at which the lemma is violated, no neighbor of v has moved back toward B before step i; hence z_A^i ≥ z_A^j and z_A^i + z_AB^i ≥ z_A^j + z_AB^j for any earlier time j, so if AB was weakly preferred to B when v adopted it, it is still weakly preferred at time i, and v could not improve her payoff by switching to B.

This leads to the blocking-structure characterization. (A blocking structure is a pair (S_B, S_AB) of disjoint vertex sets satisfying degree conditions under which the agents in S_B can stably retain B, with the agents in S_AB forming an AB "interface"; the conditions used below are the inequality deg_{S_B}(v) > (rΔ)/q for v ∈ S_AB, together with two analogous constraints for v ∈ S_B.) We show AB is a better strategy than A for any v ∈ S_AB. To show this, we must prove that (1 − q)z_A + qz_B + max(q, 1 − q)z_AB − Δr ≥ (1 − q)(z_A + z_AB); since max(q, 1 − q) ≥ 1 − q, it suffices that qz_B ≥ Δr, i.e., that z_B ≥ (rΔ)/q, where the last inequality holds by the definition of the blocking structure.

We next show that A cannot become epidemic if and only if every cofinite set of vertices contains a blocking structure. To construct a blocking structure for the complement of a finite set T of vertices, endow T with strategy A and consider the outcome of the game for any sequence S which schedules each vertex an infinite number of times. Let S_AB be the set of vertices with strategy AB and S_B the set of vertices with strategy B in this outcome. Note that for any v ∈ S_AB, AB is a best response and so is strictly better than strategy A, i.e., q·deg_{S_B}(v) + max(q, 1 − q)·deg_{S_AB}(v) − Δr > (1 − q)·deg_{S_AB}(v), from which it follows that deg_{S_B}(v) > (rΔ)/q. The inequalities for the vertices v ∈ S_B can be derived in a similar manner.

A corollary to the above theorem is that for every infinite graph G, the epidemic region in the q-r plane for this graph is a finite union of bounded and unbounded polygons. This is because the inequalities defining blocking structures are linear inequalities in q and r, and the coefficients of these inequalities can take only finitely many values.

5. NON-EPIDEMIC REGIONS IN GENERAL GRAPHS

The characterization theorem in the previous section provides one way of thinking about the region Ω(G), the set of all (q, r) pairs for which A can become epidemic in the game (G, q, r). We now consider the region Ω = ∪_G Ω(G), where the union is taken over all infinite Δ-regular graphs; this is the set of all (q, r) values for which A can become epidemic in some Δ-regular network. The analysis here uses Lemma 4.1 and an argument based on an appropriately defined potential function. The first theorem shows that no point (q, r) with q > 1/2 belongs to Ω. Since q > 1/2 implies that the incumbent technology B is superior, it implies that in any
network, a superior incumbent will survive for any level of compatibility.

THEOREM 5.1. For every Δ-regular graph G and parameters q and r, the technology A cannot become an epidemic in the game (G, q, r) if q > 1/2.

PROOF. Assume, for contradiction, that there is a Δ-regular graph G and values q > 1/2 and r, a set T of vertices of G that are initially endowed with the strategy A, and a schedule S of moves for vertices in V(G) \ T such that this sequence leads to an all-A equilibrium. We derive a contradiction by defining a non-negative potential function that starts with a finite value and showing that after each best response by some vertex the value of this function decreases by some positive amount bounded away from zero. At any state in the game, let X_{A,B} denote the number of edges in G that have one endpoint using strategy A and the other endpoint using strategy B. Furthermore, let n_AB denote the number of agents using the strategy AB. The potential function is the following:

Φ = q · X_{A,B} + c · n_AB

(recall c = Δr is the cost of adopting two technologies). Since G has bounded degree and the initial set T is finite, the initial value of this potential function is finite. We now show that every best-response move decreases the value of this function by some positive amount bounded away from zero. By Lemma 4.1, we only need to analyze the effect on the potential function for moves of the sort described by the lemma. Therefore we have three cases: a node u switches from strategy B to AB, a node u switches from strategy AB to A, or a node u switches from strategy B to A. We consider the first case here; the proofs for the other cases are similar. Suppose a node u with strategy B switches to strategy AB. Let z_AB, z_A, and z_B denote the number of neighbors of u in partition pieces AB, A, and B respectively. Thus, recalling that q > 1/2, we see u's payoff with strategy B is q(z_AB + z_B) whereas her payoff with strategy AB is q(z_AB + z_B) + (1 − q)z_A − c. In order for this strategic change to improve u's payoff, it must be the case that

(1 − q)z_A > c.    (1)

Now, notice that such a strategic change on the part of u induces a change in the potential function of −qz_A + c, as z_A edges are removed from the X_{A,B} edges between A and B and the size of partition piece AB is increased by one. This change will be negative so long as z_A > c/q, which holds by inequality (1) since q > (1 − q) for q > 1/2. Furthermore, as z_A can take only finitely many values (z_A ∈ {0, 1, ..., Δ}), this change is bounded away from zero.

The next theorem shows that for any Δ, there is a point (q, r) ∉ Ω for which q < 1/2. This means that there is a setting of the parameters q and r for which the new technology A is superior, but for which the incumbent technology is guaranteed to survive regardless of the underlying network.

THEOREM 5.2. There exist q < 1/2 and r such that for every contagion game (G, q, r), A cannot become epidemic.

PROOF. The proof is based on the potential function Φ = q · X_{A,B} + c · n_AB from Theorem 5.1. We first show that if q is close enough to 1/2 and r is chosen appropriately, this potential function is non-increasing. Specifically, fix q and r in terms of a parameter α, where α is any irrational number strictly between 3/64 and q. Again, there are three cases corresponding to the three possible strategy changes for a node u.
Let zAB, zA, and zB denote the number of neighbors of node u in partition piece AB, A, and B respectively.\nCase 1: B \u2192 AB.\nRecalling that q <1\/2, we see u's payoff with strategy B is q (zAB + zB) whereas his payoff with strategy AB is (1 \u2212 q) (zAB + zA) + qzB \u2212 c.\nIn order for this strategic change to improve u's payoff, it must be the case that\nNow, notice that such a strategic change on the part of u induces a change in the potential function of \u2212 qzA + c as zA edges are removed from the XA, B edges between A and B and the size of partition piece AB is increased by one.\nThis change will be nonpositive so long as zA \u2265 c\/q.\nBy inequality 2 and the fact that zA is an integer,\nSubstituting our choice of parameters, (and noting that q \u2208 [1\/4, 1\/2] and zAB \u2264 \u0394), we see that the term inside the ceiling is less than\nlarger than c\/q.\nCase 2: AB \u2192 A. Recalling that q <1\/2, we see u's payoff with strategy AB is (1 \u2212 q) (zAB + zA) + qzB \u2212 c whereas her payoff with strategy A is (1 \u2212 q) (zAB + zA).\nIn order for this strategic change to improve u's payoff, it must be the case that\nSuch a strategic change on the part of u induces a change in the potential function of qzB \u2212 c as zB edges are added to the XA, B edges between A and B and the size of partition piece AB is decreased by one.\nThis change will be non-positive so long as zB \u2264 c\/q, which holds by inequality 3.\nCase 3: B \u2192 A. 
Note that u's payoff with strategy B is q (zAB + zB), whereas his payoff with strategy A is (1 − q)(zAB + zA). In order for this strategic change to improve u's payoff, it must be the case that

(1 − q)(zAB + zA) > q (zAB + zB). (4)

Such a strategic change on the part of u induces a change in the potential function of q (zB − zA), as zA edges are removed and zB edges are added to the XA,B edges between A and B. This change will be negative so long as zB < zA; [...] Δ/2 − q ≥ Δ/4. This contradicts the assumption that u is playing her best response by switching to A.

6. LIMITED COMPATIBILITY

We now consider some further ways of modeling compatibility and interoperability. We first consider two technologies, as in the previous sections, and introduce "off-diagonal" payoffs to capture a positive benefit in direct A-B interactions. We find that this is in fact no more general than the model with zero payoffs for A-B interactions. We then consider extensions to three technologies, identifying situations in which two coexisting incumbent technologies may or may not want to increase their mutual compatibility in the face of a new, third technology.

Two technologies. A natural relaxation of the two-technology model is to introduce (small) positive payoffs for A-B interaction; that is, cross-technology communication yields some lesser value to both agents. We can model this using a variable xAB representing the payoff gathered by an agent with technology A when her neighbor has technology B, and similarly, a variable xBA representing the payoff gathered by an agent with B when her neighbor has A. Here we consider the special case in which these "off-diagonal" entries are symmetric, i.e., xAB = xBA = x. We also assume that x < 1/2. [...] However, if we insert symmetric off-diagonal payoffs x = 1/4, we have a new game, equivalent to a game parameterized by r′ = 5/16 and q′ = 1/4. Since q′ < 1/2 and q′ < 2r′, A is epidemic in this game, and thus also in the game with limited
compatibility. We now show that generally, if A is the superior technology (i.e., q < 1/2), adding a compatibility term x can only help A spread.

THEOREM 6.2. Let G be a game without compatibility, parameterized by r and q on a particular network. Let G′ be that same game, but with an added symmetric compatibility term x. If A is epidemic for G, then A is epidemic for G′.

PROOF. We will show that any blocking structure in G′ is also a blocking structure in G. By our characterization theorem, Theorem 4.6, this implies the desired result. We have that G′ is equivalent to a game without compatibility parameterized by q′ = (q − x)/(1 − 2x) and r′ = r/(1 − 2x). Consider a blocking structure (SB, SAB) for G′. We know that for any v ∈ SAB, q′ dSB(v) > r′ Δ. Thus q dSB(v) ≥ (q − x) dSB(v) = (1 − 2x) q′ dSB(v) > (1 − 2x) r′ Δ = r Δ, as required for a blocking structure in G. Similarly, the two blocking structure constraints for v ∈ SB are only strengthened when we move from G′ to G.

More than two technologies. Given the complex structure inherent in contagion games with two technologies, the understanding of contagion games with three or more technologies is largely open. Here we indicate some of the technical issues that come up with multiple technologies, through a series of initial results. The basic set-up we study is one in which two incumbent technologies B and C are initially coexisting, and a third technology A, superior to both, is introduced initially at a finite set of nodes. We first present a theorem stating that for any even Δ, there is a contagion game on a Δ-regular graph in which the two incumbent technologies B and C may find it beneficial to increase their compatibility so as to prevent getting wiped out by the new superior technology A. In particular, we consider a situation in which initially, two technologies B and C with zero compatibility are at a stable state. By a stable state, we mean that no finite perturbation of the current states can lead to an epidemic for either B or
C. We also have a technology A that is superior to both B and C, and can become epidemic by forcing a single node to choose A. However, by increasing their compatibility, B and C can maintain their stability and resist an epidemic from A. Let qA denote the payoff to two adjacent nodes that both choose technology A, and define qB and qC analogously. We will assume qA > qB > qC. We also assume that r, the cost of selecting additional technologies, is sufficiently large so as to ensure that nodes never adopt more than one technology. Finally, we consider a compatibility parameter qBC that represents the payoff to two adjacent nodes when one selects B and the other selects C. Thus our contagion game is now described by five parameters (G, qA, qB, qC, qBC). [...]

PROOF. (Sketch.) Given Δ, define G by starting with an infinite grid and connecting each node to its nearest Δ − 2 neighbors that are in the same row. The initial state s assigns strategy B to even rows and strategy C to odd rows. Let qA = 4k² + 4k + 1/2, qB = 2k + 2, qC = 2k + 1, and qBC = 2k + 3/4. The first, third, and fourth claims in the theorem can be verified by checking the corresponding inequalities. The second claim follows from the first and the observation that the alternating rows prevent any potential epidemic from growing vertically.

The above theorem shows that two technologies may both be able to survive the introduction of a new technology by increasing their level of compatibility with each other.

Table 1: The payoffs in the coordination game. Entry (x, y) in row i, column j indicates that the row player gets a payoff of x and the column player gets a payoff of y when the row player plays strategy i and the column player plays strategy j.

As one might expect, there are cases when increased compatibility between two technologies helps one technology at the expense of the other. Surprisingly, however, there are also instances in which compatibility is in fact harmful to
both parties; the next example considers a fixed initial configuration with technologies A, B and C that is at equilibrium when qBC = 0. However, if this compatibility term is increased sufficiently, equilibrium is lost, and A becomes epidemic.

EXAMPLE 6.4. Consider the union of an infinite two-dimensional grid graph with nodes u(x, y) and an infinite line graph with nodes v(y). Add an edge between u(1, y) and v(y) for all y. For this network, we consider the initial configuration in which all v(y) nodes select A, and node u(x, y) selects B if x < 0 and selects C otherwise. We now define the parameters of this game as follows. Let qA = 3.95, qB = 1.25, qC = 1, and qBC = 0. It is easily verified that for these values, the initial configuration given above is an equilibrium. However, now suppose we increase the coordination term, setting qBC = 0.9. This is not an equilibrium, since each node of the form u(0, y) now has an incentive to switch from C (generating a payoff of 3.9) to B (thereby generating a payoff of 3.95). However, once these nodes have adopted B, the best response for each node of the form u(1, y) is A (A generates a payoff of 4 whereas B only generates a payoff of 3.95). From here, it is not hard to show that A spreads directly throughout the entire network.
Budget Optimization in Search-Based Advertising Auctions

Jon Feldman, Google, Inc., New York, NY, jonfeld@google.com
S. Muthukrishnan, Google, Inc., New York, NY, muthu@google.com
Martin Pál, Google, Inc., New York, NY, mpal@google.com
Cliff Stein, Department of IEOR, Columbia University, cliff@ieor.columbia.edu

ABSTRACT

Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been previous work on the auction process and its game-theoretic aspects, most of it focuses on the Internet company. In this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well. More precisely, this strategy gets at least a 1 − 1/e fraction of the maximum clicks possible. As our preliminary experiments show, such uniform strategies are likely to be practical. We also
present inapproximability results, and optimal algorithms for variants of the budget optimization problem.

Categories and Subject Descriptors: F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences - Economics

General Terms: Algorithms, Economics, Theory.

1. INTRODUCTION

Online search is now ubiquitous, and Internet search companies such as Google, Yahoo! and MSN let companies and individuals advertise based on search queries posed by users. Conventional media outlets, such as TV stations or newspapers, price their ad slots individually, and the advertisers buy the ones they can afford. In contrast, Internet search companies find it difficult to set a price explicitly for the advertisements they place in response to user queries. This difficulty arises because supply (and demand) varies widely and unpredictably across the user queries, and they must price slots for billions of such queries in real time. Thus, they rely on the market to determine suitable prices by using auctions amongst the advertisers. It is a challenging problem to set up the auction in order to effect a stable market in which all the parties (the advertisers, users, as well as the Internet search company) are adequately satisfied. Recently there has been systematic study of the issues involved in the game theory of the auctions [5, 1, 2], revenue maximization [10], etc. The perspective in this paper is not that of the Internet search company that displays the advertisements, but rather that of the advertisers. The challenge from an advertiser's point of view is to understand and interact with the auction mechanism. The advertiser determines a set of keywords of their interest and then must create ads, set the bids for each keyword, and provide a total (often daily) budget. When a user poses a search query, the Internet search company determines the advertisers whose keywords match the query and who still have budget
left over, runs an auction amongst them, and presents the set of ads corresponding to the advertisers who win the auction. The advertiser whose ad appears pays the Internet search company if the user clicks on the ad. The focus in this paper is on how the advertisers bid. For their particular choice of keywords,1 an advertiser wants to optimize the overall effect of the advertising campaign. While the effect of an ad campaign in any medium is a complicated phenomenon to quantify, one commonly accepted (and easily quantified) notion in search-based advertising on the Internet is to maximize the number of clicks. The Internet search companies are supportive towards advertisers and provide statistics about the history of click volumes and predictions about the future performance of various keywords. Still, this is a complex problem for the following reasons (among others):

• Individual keywords have significantly different characteristics from each other; e.g., while fishing is a broad keyword that matches many user queries and has many competing advertisers, humane fishing bait is a niche keyword that matches only a few queries, but might have less competition.

• There are complex interactions between keywords because a user query may match two or more keywords, since the advertiser is trying to cover all the possible keywords in some domain. In effect the advertiser ends up competing with herself.

As a result, the advertisers face a challenging optimization problem. The focus of this paper is to solve this optimization problem.

1 The choice of keywords is related to the domain-knowledge of the advertiser, user behavior and strategic considerations. Internet search companies provide the advertisers with summaries of the query traffic which is useful for them to optimize their keyword choices interactively. We do not directly address the choice of keywords in this paper, which is addressed elsewhere [13].

1.1 The Budget
Optimization Problem

We present a short discussion and formulation of the optimization problem faced by advertisers; a more detailed description is in Section 2. A given advertiser sees the state of the auctions for search-based advertising as follows. There is a set K of keywords of interest; in practice, even small advertisers typically have a large set K. There is a set Q of queries posed by the users. For each query q ∈ Q, there are functions giving the clicksq(b) and costq(b) that result from bidding a particular amount b in the auction for that query, which we model more formally in the next section. There is a bipartite graph G on the two vertex sets representing K and Q. For any query q ∈ Q, the neighbors of q in K are the keywords that are said to match the query q.2 The budget optimization problem is as follows. Given the graph G together with the functions clicksq(·) and costq(·) on the queries, as well as a budget U, determine the bids bk for each keyword k ∈ K such that Σq clicksq(bq) is maximized subject to Σq costq(bq) ≤ U, where the effective bid bq on a query is some function of the keyword bids in the neighborhood of q. While we can cast this problem as a traditional optimization problem, there are different challenges in practice depending on the advertiser's access to the query and graph information, and indeed the reliability of this information (e.g., it could be based on unstable historical data). Thus it is important to find solutions to this problem that not only get many clicks, but are also simple, robust and less reliant on the information. In this paper we define the notion of a uniform strategy, which is essentially a strategy that bids uniformly on all keywords. Since this type of strategy obviates the need to know anything about the particulars of the graph, and effectively aggregates the click and cost functions on the queries, it is quite robust, and thus desirable in practice. What is
surprising is that the uniform strategy actually performs well, as we will prove.

2 The particulars of the matching rule are determined by the Internet search company; here we treat the function as arbitrary.

1.2 Our Main Results and Technical Overview

We present positive and negative results for the budget optimization problem. In particular, we show:

• Nearly all formulations of the problem are NP-hard. In cases slightly more general than the formulation above, where the clicks have weights, the problem is inapproximable better than a factor of 1 − 1/e, unless P=NP.

• We give a (1 − 1/e)-approximation algorithm for the budget optimization problem. The strategy found by the algorithm is a two-bid uniform strategy, which means that it randomizes between bidding some value b1 on all keywords, and bidding some other value b2 on all keywords until the budget is exhausted.3 We show that this approximation ratio is tight for uniform strategies. We also give a (1/2)-approximation algorithm that offers a single-bid uniform strategy, only using one value b1. (This is tight for single-bid uniform strategies.) These strategies can be computed in time nearly linear in |Q| + |K|, the input size.

Uniform strategies may appear naive at first consideration because the keywords vary significantly in their click and cost functions, and there may be complex interaction between them when multiple keywords are relevant to a query. After all, the optimum can configure arbitrary bids on each of the keywords. Even for the simple case when the graph is a matching, the optimal algorithm involves placing different bids on different keywords via a knapsack-like packing (Section 2). So, it might be surprising that a simple two-bid uniform strategy is 63% or more effective compared to the optimum. In fact, our proof is stronger, showing that this strategy is 63% effective against a strictly more powerful adversary who can bid independently on the
individual queries, i.e., not be constrained by the interaction imposed by the graph G. Our proof of the 1 − 1/e approximation ratio relies on an adversarial analysis. We define a factor-revealing LP (Section 4) where primal solutions correspond to possible instances, and dual solutions correspond to distributions over bidding strategies. By deriving the optimal solution to this LP, we obtain both the proof of the approximation ratio and a tight worst-case instance. We have conducted simulations using real auction data from Google. The results of these simulations, which are highlighted at the end of Section 4, suggest that uniform bidding strategies could be useful in practice. However, important questions remain about (among other things) alternate bidding goals, on-line or stochastic bidding models [11], and game-theoretic concerns [3], which we briefly discuss in Section 8.

2. MODELING A KEYWORD AUCTION

We describe an auction from an advertiser's point of view. An advertiser bids on a keyword, which we can think of as a word or set of words. Users of the search engine submit queries. If the query matches a keyword that has been bid on by an advertiser, then the advertiser is entered into an auction for the available ad slots on the results page. What constitutes a match varies depending on the search engine.

3 This type of strategy can also be interpreted as bidding one value (on all keywords) for part of the day, and a different value for the rest of the day.

An advertiser makes a single bid for a keyword that remains in effect for a period of time, say one day. The keyword could match many different user queries throughout the day. Each user query might have a different set of advertisers competing for clicks. The advertiser could also bid different amounts on multiple keywords, each matching a (possibly overlapping) set of user queries. The ultimate goal of an advertiser is to maximize traffic to their website, given a certain
advertising budget. We now formalize a model of keyword bidding and define an optimization problem that captures this goal.

2.1 Landscapes

We begin by considering the case of a single keyword that matches a single user query. In this section we define the notion of a query landscape that describes the relationship between the advertiser's bid and what will happen on this query as a result of this bid [9]. This definition will be central to the discussion as we continue to more general cases.

2.1.1 Positions, bids and click-through rate

The search results page for a query contains p possible positions in which our ad can appear. We denote the highest (most favorable) position by 1 and the lowest by p. Associated with each position i is a value α[i] that denotes the click-through rate (ctr) of the ad in position i. The ctr is a measure of how likely it is that our ad will receive a click if placed in position i. The ctr can be measured empirically using past history. We assume throughout this work that α[i] ≤ α[j] if j < i, that is, higher positions receive at least as many clicks as lower positions. In order to place an ad on this page, we must enter the auction that is carried out among all advertisers that have submitted a bid on a keyword that matches the user's query. We will refer to such an auction as a query auction, to emphasize that there is an auction for each query rather than for each keyword. We assume that the auction is a generalized second price (GSP) auction [5, 7]: the advertisers are ranked in decreasing order of bid, and each advertiser is assigned a price equal to the amount bid by the advertiser below them in the ranking.4 In sponsored search auctions, this advertiser pays only if the user actually clicks on the ad. Let (b[1], ...
, b[p]) denote the bids of the top p advertisers in this query auction. For notational convenience, we assume that b[0] = ∞ and b[p] = α[p] = 0. Since the auction is a generalized second price auction, higher bids win higher positions; i.e., b[i] ≥ b[i + 1]. Suppose that we bid b on some keyword that matches the user's query; then our position is defined by the largest b[i] that is at most b, that is,

pos(b) = arg maxi {b[i] : b[i] ≤ b}. (1)

Since we only pay if the user clicks (and that happens with probability α[i]), our expected cost for winning position i would be cost[i] = α[i] · b[i], where i = pos(b).

4 Google, Yahoo! and MSN all use some variant of the GSP auction. In the Google auction, the advertisers' bids are multiplied by a quality score before they are ranked; our results carry over to this case as well, which we omit from this paper for clarity. Also, other auctions besides GSP have been considered; e.g., the Vickrey-Clarke-Groves (VCG) auction [14, 4, 7]. Each auction mechanism will result in a different sort of optimization problem. In the conclusion we point out that for the VCG auction, the bidding optimization problem becomes quite easy.

We use costq(b) and clicksq(b) to denote the expected cost and clicks that result from having a bid b that qualifies for a query auction q, and thus

costq(b) = α[i] · b[i] where i = pos(b), (2)
clicksq(b) = α[i] where i = pos(b). (3)

The following observations about cost and clicks follow immediately from the definitions and equations (1), (2) and (3). We use R+ to denote the nonnegative reals.

Observation 1. For b ∈ R+:
1. (costq(b), clicksq(b)) can only take on one of a finite set of values Vq = {(cost[1], α[1]), ..., (cost[p], α[p])}.
2. Both costq(b) and clicksq(b) are non-decreasing functions of b. Also, the cost-per-click (cpc) costq(b)/clicksq(b) is non-decreasing in b.
3. costq(b)/clicksq(b) ≤ b. For bids (b[1], ...
, b[p]) that correspond to the bids of other advertisers, we have: costq(b[i])/clicksq(b[i]) = b[i], i ∈ [p]. When the context is clear, we drop the subscript q.

2.1.2 Query Landscapes

We can summarize the data contained in the functions cost(b) and clicks(b) as a collection of points in a plot of cost vs. clicks, which we refer to as a landscape. For example, for a query with four slots, a landscape might look like Table 1.

bid range        cost per click   cost    clicks
[$2.60, ∞)       $2.60            $1.30   .5
[$2.00, $2.60)   $2.00            $0.90   .45
[$1.60, $2.00)   $1.60            $0.40   .25
[$0.50, $1.60)   $0.50            $0.10   .2
[$0, $0.50)      $0               $0      0

Table 1: A landscape for a query

It is convenient to represent this data graphically as in Figure 1 (ignore the dashed line for now). Here we graph clicks as a function of cost. Observe that in this graph, the cpc (cost(b)/clicks(b)) of each point is the reciprocal of the slope of the line from the origin to the point. Since cost(b), clicks(b) and cost(b)/clicks(b) are non-decreasing, the slope of the line from the origin to successive points on the plot decreases. This condition is slightly weaker than concavity.

Figure 1: A bid landscape (clicks as a function of cost).

Suppose we would like to solve the budget optimization problem for a single query landscape.5

5 Of course it is a bit unrealistic to imagine that an advertiser would have to worry about a budget if only one user query was being considered; however one could imagine multiple instances of the same query and the problem scales.

As we increase our bid from zero, our cost increases and our expected number of clicks increases, and so we simply submit the highest bid such that we remain within our budget. One problem we see right away is that since there are only a finite set of points in this landscape, we may not be able to target arbitrary budgets efficiently. Suppose in the example from Table 1 and Figure 1 that we had a budget of $1.00. Bidding between $2.00 and $2.60
uses only $0.90, and so we are under-spending. Bidding more than $2.60 is not an option, since we would then incur a cost of $1.30 and overspend our budget.

2.1.3 Randomized strategies

To rectify this problem and better utilize our available budget, we allow randomized bidding strategies. Let B be a distribution on bids b ∈ R+. Now we define cost(B) = Eb∼B[cost(b)] and clicks(B) = Eb∼B[clicks(b)]. Graphically, the possible values of (cost(B), clicks(B)) lie in the convex hull of the landscape points. This is represented in Figure 1 by the dashed line. To find a bid distribution B that maximizes clicks subject to a budget, we simply draw a vertical line on the plot where the cost is equal to the budget, and find the highest point on this line in the convex hull. This point will always be the convex combination of at most two original landscape points which themselves lie on the convex hull. Thus, given the point on the convex hull, it is easy to compute a distribution on two bids which leads to this point. Summarizing,

Lemma 1. If an advertiser is bidding on one keyword, subject to a budget U, then the optimal strategy is to pick a convex combination of (at most) two bids which are at the endpoints of the segment of the convex hull at the highest point for cost U.

There is one subtlety in this formulation. Given any bidding strategy, randomized or otherwise, the resulting cost is itself a random variable. Thus if our budget constraint is a hard budget, we have to deal with the difficulties that arise if our strategy would go over budget. Therefore, we think of our budget constraint as soft; that is, we only require that our expected cost be less than the budget. In practice, the budget is often an average daily budget, and thus we don't worry if we exceed it one day, as long as we are meeting the budget in expectation. Further, either the advertiser or the search engine (possibly both) monitor the cost
incurred over the day; hence, the advertiser's bid can be changed to zero for part of the day, so that the budget is not overspent.6 Thus, in the remainder of this paper, we will formulate a budget constraint that only needs to be respected in expectation.

6 See https://adwords.google.com/support/bin/answer.py?answer=22183, for example.

2.1.4 Multiple Queries: a Knapsack Problem

As a warm-up, we will next consider the case when we have a set of queries, each with its own landscape. We want to bid on each query independently subject to our budget: the resulting optimization problem is a small generalization of the fractional knapsack problem, and was solved in [9]. The first step of the algorithm is to take the convex hull of each landscape, as in Figure 1, and remove any landscape points not on the convex hull. Each piecewise-linear section of the curve represents the incremental number of clicks and cost incurred by moving one's bid from one particular value to another. We regard these pieces as items in an instance of fractional knapsack with value equal to the incremental number of clicks and size equal to the incremental cost. More precisely, for each piece connecting two consecutive bids b′ and b′′ on the convex hull, we create a knapsack item with value [clicks(b′′) − clicks(b′)] and size [cost(b′′) − cost(b′)]. We then emulate the greedy algorithm for knapsack, sorting the pieces by value/size (i.e., in increasing order of cost-per-click), and choosing greedily until the budget is exhausted. In this reduction to knapsack we have ignored the fact that some of the pieces come from the same landscape and cannot be treated independently. However, since each curve is concave, the pieces that come from a particular query curve are in increasing order of cost-per-click; thus from each landscape we have chosen for our knapsack a set of pieces that form a prefix of the curve.

2.2 Keyword Interaction

In reality, search advertisers can bid on a large set of keywords, each of them
qualifying for a different (possibly overlapping) set of queries, but most search engines do not allow an advertiser to appear twice on the same search results page.7 Thus, if an advertiser has a bid on two different keywords that match the same query, this conflict must be resolved somehow. For example, if an advertiser has a bid out on the keywords shoes and high-heel, then if a user issues the query high-heel shoes, it will match on two different keywords. The search engine specifies, in advance, a rule for resolution based on the query, the keyword and the bid. A natural rule is to take the keyword with the highest bid, which we adopt here, but our results apply to other resolution rules. We model the keyword interaction problem using an undirected bipartite graph G = (K ∪ Q, E), where K is a set of keywords and Q is a set of queries. Each q ∈ Q has an associated landscape, as defined by costq(b) and clicksq(b). An edge (k, q) ∈ E means that keyword k matches query q. The advertiser can control their individual keyword bid vector a ∈ R+^|K|, specifying a bid ak for each keyword k ∈ K.
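The pieces of this model (the bipartite keyword-query graph, the highest-bid resolution rule, and per-query landscapes) fit together as in the following sketch. The landscape numbers are taken from Table 1; the keyword and query names and the bid vector are hypothetical, and this is an illustration of the model, not code from the paper.

```python
# Bipartite edges E: (keyword, query) pairs; hypothetical names.
E = {("shoes", "high-heel shoes"), ("high-heel", "high-heel shoes"),
     ("shoes", "running shoes")}

# Per-query landscape: rows of (minimum bid, expected cost, expected
# clicks), sorted by decreasing minimum bid, as in Table 1.
LANDSCAPE = [(2.60, 1.30, 0.50), (2.00, 0.90, 0.45),
             (1.60, 0.40, 0.25), (0.50, 0.10, 0.20)]

def effective_bid(a, q):
    """Highest bid among keywords matching query q (0 if none match)."""
    return max((a[k] for k in a if (k, q) in E), default=0.0)

def cost_and_clicks(landscape, b):
    """(cost_q(b), clicks_q(b)): the landscape row that bid b wins."""
    for min_bid, cost, clicks in landscape:
        if b >= min_bid:
            return cost, clicks
    return 0.0, 0.0

a = {"shoes": 1.75, "high-heel": 2.10}       # keyword bid vector
b = effective_bid(a, "high-heel shoes")      # -> 2.10
print(cost_and_clicks(LANDSCAPE, b))         # -> (0.9, 0.45)
```

Note that the bid on shoes also sets an effective bid of 1.75 on running shoes, which is exactly the cross-query coupling that makes the optimization problem non-trivial.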
(For now, we do not consider randomized bids, but we will introduce them shortly.) Given a particular bid vector a on the keywords, we use the resolution rule of taking the maximum to define the effective bid on query q as bq(a) = max{ak : (k, q) ∈ E}.

7 See https://adwords.google.com/support/bin/answer.py?answer=14179, for example.

By submitting a bid vector a, the advertiser receives some number of clicks and pays some cost on each keyword. We use the term spend to denote the total cost; similarly, we use the term traffic to denote the total number of clicks: spend(a) = Σq∈Q costq(bq(a)); traffic(a) = Σq∈Q clicksq(bq(a)). We also allow randomized strategies, where an advertiser gives a distribution A over bid vectors a ∈ R+^|K|. The resulting spend and traffic are given by spend(A) = Ea∼A[spend(a)]; traffic(A) = Ea∼A[traffic(a)]. We can now state the problem in its full generality:

Budget Optimization
Input: a budget U, a keyword-query graph G = (K ∪ Q, E), and landscapes (costq(·), clicksq(·)) for each q ∈ Q.
Find: a distribution A over bid vectors a ∈ R+^|K| such that spend(A) ≤ U and traffic(A) is maximized.

We conclude this section with a small example to illustrate some features of the budget optimization problem. Suppose you have two keywords K = {u, v}, two queries Q = {x, y}, and edges E = {(u, x), (u, y), (v, y)}. Suppose query x has one position with ctr αx[1] = 1.0, and there is one bid bx[1] = $1. Query y has two positions with ctrs αy[1] = αy[2] = 1.0, and bids by[1] = $ and by[2] = $1. To get any clicks from x, an advertiser must bid at least $1 on u. However, because of the structure of the graph, if the advertiser sets bu to $1, then his effective bid is $1 on both x and y.
Thus he must trade off between getting the clicks from x and getting the bargain of a click for $ε that would otherwise be possible.

3. UNIFORM BIDDING STRATEGIES
As we will show in Section 5, solving the Budget Optimization problem in its full generality is difficult. In addition, it may be difficult to reason about strategies that involve arbitrary distributions over arbitrary bid vectors. Advertisers generally prefer strategies that are easy to understand, evaluate, and use within their larger goals. With this motivation, we look at restricted classes of strategies that we can easily compute, explain, and analyze. We define a uniform bidding strategy to be a distribution A over bid vectors a ∈ ℝ₊^{|K|} in which each bid vector in the distribution has the form (b, b, ..., b) for some real-valued bid b; in other words, each vector in the distribution bids the same value on every keyword.

Uniform strategies have several advantages. First, they do not depend on the edges of the interaction graph, since all effective bids on queries are the same; thus they are effective in the face of limited or noisy information about the keyword interaction graph. Second, uniform strategies are also independent of the priority rule being used. Third, any algorithm that gives an approximation guarantee is then valid for any interaction graph over those keywords and queries.

We now show that we can compute the best uniform strategy efficiently. Suppose we have a set of queries Q, where the landscape V_q for each query q is defined by the set of points V_q = {(cost_q[1], α_q[1]), ..., (cost_q[p], α_q[p])}. We define the set of interesting bids I_q = {cost_q[1]/α_q[1], ..., cost_q[p]/α_q[p]}, let I = ∪_{q∈Q} I_q, and let N = |I|. We can index the points in I as b_1, ...
, b_N in increasing order. The ith point in our aggregate landscape V is found by summing, over the queries, the cost and clicks associated with bid b_i; that is,

$$V = \bigcup_{i=1}^{N} \Big( \sum_{q \in Q} \mathrm{cost}_q(b_i), \; \sum_{q \in Q} \mathrm{clicks}_q(b_i) \Big).$$

For any possible bid b, if we use the aggregate landscape just as we would a regular landscape, we exactly represent the cost and clicks associated with making that bid simultaneously on all queries associated with the aggregate landscape. Therefore, all the definitions and results of Section 2 about landscapes extend to aggregate landscapes, and we can apply Lemma 1 to compute the best uniform strategy (using the convex hull of the points in this aggregate landscape). The running time is dominated by the time to compute the convex hull, which is O(N log N) [12]. The resulting strategy is the convex combination of two points on the aggregate landscape. Define a two-bid strategy to be a uniform strategy that puts non-zero weight on at most two bid vectors. We have shown:

Lemma 2. Given an instance of Budget Optimization in which there are a total of N points in all the landscapes, we can find the best uniform strategy in O(N log N) time. Furthermore, this strategy will always be a two-bid strategy.

Putting these ideas together, we get an O(N log N)-time algorithm for Budget Optimization, where N is the total number of landscape points (we later show that this is a (1 - 1/e)-approximation algorithm):
1. Aggregate all the points from the individual query landscapes into a single aggregate landscape.
2. Find the convex hull of the points in the aggregate landscape.
3. Compute the point on the convex hull for the given budget, which is the convex combination of two points α and β.
4. Output the strategy that is the appropriate convex combination of the uniform bid vectors corresponding to α and β.

We will also consider a special case of two-bid strategies. A single-bid strategy is a uniform
strategy that puts non-zero weight on at most one non-zero bid vector; i.e., the advertiser randomizes between bidding a certain amount b* on all keywords and not bidding at all. A single-bid strategy is even easier to implement in practice than a two-bid strategy. For example, search engines often allow advertisers to set a maximum daily budget. In this case, the advertiser would simply bid b* until her budget runs out, and the ad serving system would remove her from all subsequent auctions until the end of the day. One could also use an ad scheduling tool offered by some search companies (see footnote 8) to implement this strategy. The best single-bid strategy can also be computed easily from the aggregate landscape. The optimal strategy for a budget U is either the point x such that cost(x) is as large as possible without exceeding U, or a convex combination of zero and the point y, where cost(y) is as small as possible while larger than U.

Footnote 8: See https://adwords.google.com/support/bin/answer.py?answer=33227, for example.

Figure 2: Four queries and their click-price curve. The table gives, for each query, the clicks and cost obtained at the adversary's bid, and the implied cost per click:

  query   clicks   cost     cpc
  A       2        $1.00    $0.50
  B       5        $0.50    $0.10
  C       3        $2.00    $0.67
  D       4        $1.00    $0.25

Sorting the queries by cpc ($0.10, $0.25, $0.50, $0.67) gives cumulative click totals of 5, 9, 11, and 14.

4. APPROXIMATION ALGORITHMS
In the previous section we proposed using uniform strategies and gave an efficient algorithm to compute the best such strategy. In this section we prove that there is always a good uniform strategy:

Theorem 3. There always exists a uniform bidding strategy that is (1 - 1/e)-optimal. Furthermore, for any ε > 0, there exists an instance for which all uniform strategies are at most (1 - 1/e + ε)-optimal.

We introduce the notion of a click-price curve, which is central to our analysis. This definition makes it simple to show that there is always a single-bid strategy that is a 1/2-approximation (and this is tight); we then build on this to prove Theorem 3.

4.1 Click-price curves
Consider a set of queries Q, and for each
query q ∈ Q, let (clicks_q(·), cost_q(·)) be the corresponding bid landscape. Consider an adversarial bidder Ω with the power to bid independently on each query. Note that this bidder is more powerful than an optimal bidder, which has to bid on the keywords. Suppose this strategy bids b*_q for each query q. Thus Ω achieves traffic C_Ω = Σ_q clicks_q(b*_q) and incurs total spend U_Ω = Σ_q cost_q(b*_q). Without loss of generality, we can assume that Ω bids so that for each query q the cost per click equals b*_q, i.e., cost_q(b*_q)/clicks_q(b*_q) = b*_q. We may assume this because if cost_q(b*_q)/clicks_q(b*_q) < b*_q for some query q, we can always lower b*_q without changing the cost and clicks.

To aid our discussion, we introduce the notion of a click-price curve (an example of which is shown in Figure 2), which describes the cpc distribution obtained by Ω. Formally, the curve is a non-decreasing function h : [0, C_Ω] → ℝ₊ defined as

$$h(r) = \min\Big\{ c \;\Big|\; \sum_{q : b^*_q \le c} \mathrm{clicks}_q(b^*_q) \ge r \Big\}.$$

Another way to construct this curve is to sort the queries in increasing order of b*_q = cost_q(b*_q)/clicks_q(b*_q), then make a step function where the qth step has height b*_q and width clicks_q(b*_q) (see Figure 2). Note that the area of each step is cost_q(b*_q). The following claim follows immediately:

Claim 1. $U_\Omega = \int_0^{C_\Omega} h(r)\,dr$.
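Claim 1 can be checked numerically on the Figure 2 data: sort the queries by cost per click, stack a step of width clicks_q and height cpc_q for each, and the area under the resulting curve equals Ω's total spend. A small sketch (helper names are ours; the per-query numbers are those of Figure 2):

```python
# Figure 2 data: query -> (clicks, cost) at the adversary's bid b*_q.
queries = {"A": (2, 1.00), "B": (5, 0.50), "C": (3, 2.00), "D": (4, 1.00)}

def click_price_steps(queries):
    """Steps of the click-price curve: (height, width) = (cpc_q, clicks_q),
    sorted by increasing cost per click."""
    return sorted((cost / clicks, clicks) for clicks, cost in queries.values())

def h(r, steps):
    """h(r) = min { c : clicks available at cpc <= c total at least r }."""
    cumulative = 0.0
    for height, width in steps:
        cumulative += width
        if cumulative >= r:
            return height
    raise ValueError("r exceeds total traffic C_Omega")

steps = click_price_steps(queries)
total_clicks = sum(width for _, width in steps)           # C_Omega
area = sum(height * width for height, width in steps)     # integral of h
total_spend = sum(cost for _, cost in queries.values())   # U_Omega
print(total_clicks, round(area, 2), total_spend)          # 14 4.5 4.5
```

The area under the step function (Claim 1's integral) and the adversary's spend agree, since each step contributes exactly cost_q = cpc_q × clicks_q.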
Suppose we wanted to buy some fraction r′/C_Ω of the traffic that Ω is getting. The click-price curve says that if we bid h(r′) on every keyword (and therefore on every query), we get at least r′ traffic, since this bid ensures that for all q with b*_q ≤ h(r′) we win as many clicks as Ω. Note that by bidding h(r′) on every keyword, we may actually get even more than r′ traffic, since for queries q where b*_q is much less than h(r′) we may win more clicks than Ω. However, all of these extra clicks still cost at most h(r′) per click. Thus we see that for any r′ ∈ [0, C_Ω], if we bid h(r′) on every keyword, we receive at least r′ traffic at a spend of at most h(r′) per click. Note that by randomizing between bidding zero and bidding h(r′), we can receive exactly r′ traffic at a total spend of at most r′·h(r′). We summarize this discussion in the following lemma:

Lemma 4. For any r ∈ [0, C_Ω], there exists a single-bid strategy that randomizes between bidding h(r) and bidding zero, and this strategy receives exactly r traffic with total spend at most r·h(r).

Lemma 4 describes a landscape as a continuous function. For our lower bounds, we will need to show that given any continuous function, there exists a discrete landscape that approximates it arbitrarily well.

Lemma 5. For any C, U > 0, any non-decreasing function f : [0, C] → ℝ₊ with ∫₀^C f(r) dr = U, and any small ε > 0, there exists an instance of Budget Optimization with budget U + ε in which the optimal solution achieves C clicks at cost U + ε, and all uniform bidding strategies are convex combinations of single-bid strategies that achieve exactly r clicks at cost exactly r·f(r) by bidding f(r) on all keywords.

Proof. Construct an instance as follows. Let ε′ > 0 be a small number that we will later define in terms of ε. Define r_0 = 0, r_1, r_2, ...
, r_m = C such that r_{i-1} < r_i ≤ r_{i-1} + ε′, f(r_{i-1}) ≤ f(r_i) ≤ f(r_{i-1}) + ε′, and m ≤ (C + f(C))/ε′. (This is possible by choosing the r_i's spaced by min(ε′, f(r_i) - f(r_{i-1})).) Now make a query q_i for each i ∈ [m] with bidders bidding f(r_i), f(r_{i+1}), ..., f(r_m), and ctrs α[1] = α[2] = ··· = α[m - i + 1] = r_i - r_{i-1}. The graph is a matching with one keyword per query, and so we can imagine the optimal solution as bidding on queries. The optimal solution will always bid exactly f(r_i) on query q_i, and if it did so on all queries, it would spend U′ := Σ_{i=1}^m (r_i - r_{i-1}) f(r_i). Define ε′ small enough so that U′ = U + ε, which is always possible since

$$U' \le \int_0^C f(r)\,dr + \sum_{i=1}^m (r_i - r_{i-1})\big(f(r_i) - f(r_{i-1})\big) \le U + \epsilon'^2 m \le U + \epsilon'(C + f(C)).$$

Note that the only possible bids (i.e., all others have the same results as one of these) are f(r_0), ..., f(r_m), and bidding uniformly with f(r_i) results in Σ_{j=1}^i (r_j - r_{j-1}) = r_i clicks at cost r_i·f(r_i).

4.2 A 1/2-approximation algorithm
Using Lemma 4 we can now show that there is a uniform single-bid strategy that is 1/2-optimal. In addition to being an interesting result in its own right, it also serves as a warm-up for our main result.

Theorem 6. There always exists a uniform single-bid strategy that is 1/2-optimal. Furthermore, for any ε > 0, there exists an instance for which all single-bid strategies are at most (1/2 + ε)-optimal.

Proof. Applying Lemma 4 with r = C_Ω/2, we see that there is a strategy that achieves traffic C_Ω/2 with spend (C_Ω/2)·h(C_Ω/2). Now, using the fact that h is non-decreasing together with Claim 1, we have

$$(C_\Omega/2)\, h(C_\Omega/2) \le \int_{C_\Omega/2}^{C_\Omega} h(r)\,dr \le \int_0^{C_\Omega} h(r)\,dr = U_\Omega, \quad (4)$$

which shows that we spend at most U_Ω. We conclude that there is a 1/2-optimal single-bid strategy randomizing between bidding
h(C_Ω/2) and bidding zero.

For the second part of the theorem, we construct a tight example using two queries Q = {x, y}, two keywords K = {u, v}, and edges E = {(u, x), (v, y)}. Fix some α with 0 < α ≤ 1, and fix some very small ε > 0. Query x has two positions, with bids b^x_1 = 1/α and b^x_2 = ε, and with identical click-through rates α^x[1] = α^x[2] = α. Query y has one position, with a bid b^y_1 = 1/α and a click-through rate α^y[1] = α. The budget is U = 1 + εα. The optimal solution is to bid ε on u (and therefore x) and 1/α on v (and therefore y), both with probability 1. This achieves a total of 2α clicks and spends the budget exactly. The only useful bids are 0, ε, and 1/α, since for both queries all other bids are identical in terms of cost and clicks to one of those three. Any single-bid solution that uses ε as its non-zero bid gets at most α clicks. Bidding 1/α on both keywords results in 2α clicks and total cost 2. Thus, since the budget is U = 1 + εα < 2, a single-bid solution using 1/α can put weight at most (1 + εα)/2 on the 1/α bid. This results in at most α(1 + εα) clicks, which can be made arbitrarily close to α by lowering ε.

4.3 A (1 - 1/e)-approximation algorithm
The key to the proof of Theorem 3 is to show that there is a distribution over the single-bid strategies of Lemma 4 that obtains at least (1 - 1/e)C_Ω clicks. In order to find the best distribution, we wrote a linear program that models the behavior of a player who is trying to maximize clicks and an adversary who is trying to create an input that is hard for the player. Then, using linear programming duality, we were able to derive both an optimal strategy and a tight instance. After solving the LP numerically, we were also able to see that there is a uniform strategy for the player that always obtains (1 - 1/e)C_Ω clicks;
and then from the solution we were easily able to guess the optimal distribution. This methodology is similar to that used in work on factor-revealing LPs [8, 10].

4.3.1 An LP for the worst-case click-price curve
Consider the adversary's problem of finding a click-price curve for which no uniform bidding strategy can achieve αC_Ω clicks. Recall that by Lemma 1 we can assume that a uniform strategy randomizes between two bids. We also assume that the uniform strategy uses a convex combination of strategies from Lemma 4, which we can assume by Lemma 5. Thus, to achieve αC_Ω clicks, a uniform strategy must randomize between bids h(u) and h(v), where u ≤ αC_Ω and v ≥ αC_Ω. Call the set of such strategies S. Given (u, v) ∈ S, the probabilities needed to achieve αC_Ω clicks are easily determined; we denote them by p_1(u, v) and p_2(u, v), respectively. Note further that the advertiser is trying to figure out which of these strategies to use, and ultimately wants to compute a distribution over uniform strategies. In the LP, she actually computes a distribution over pairs of strategies in S, which we then interpret as a distribution over strategies. Using this set of uniform strategies as constraints, we can characterize a set of worst-case click-price curves by the constraints

$$\int_0^{C_\Omega} h(r)\,dr \le U; \qquad \forall (u,v) \in S: \;\; p_1(u,v)\, u\, h(u) + p_2(u,v)\, v\, h(v) \ge U.$$

A curve h satisfying these constraints has the property that all uniform strategies that obtain αC_Ω clicks spend more than U. Discretizing this set of inequalities with step size ε, and pushing the first constraint into the objective function, we get the following LP over variables h_r representing the curve:

$$\min \sum_{r \in \{0,\, \epsilon,\, 2\epsilon,\, \ldots,\, C_\Omega\}} \epsilon \cdot h_r \quad \text{s.t.} \quad \forall (u,v) \in S: \;\; p_1(u,v)\, u\, h_u + p_2(u,v)\, v\, h_v \ge U.$$

In this LP, S is defined in the discrete domain as S = {(u, v) ∈ {0, ε, 2ε, ...
, C_Ω}² : 0 ≤ u ≤ αC_Ω ≤ v ≤ C_Ω}. Solving this LP for a particular α, if we get an objective value less than U, we know (up to discretization) that an instance of Budget Optimization exists that cannot be approximated better than α. (The instance is constructed as in the proof of Lemma 5.) A binary search yields the smallest such α for which the objective is exactly U. To obtain a strategy for the advertiser, we look at the dual, constraining the objective to equal U in order to get the polytope of optimum solutions:

$$\sum_{(u,v) \in S} w_{u,v} = 1;$$

$$\forall (u,v) \in S: \quad \sum_{v' : (u,v') \in S} p_1(u,v') \cdot u \cdot w_{u,v'} \le \epsilon \quad \text{and} \quad \sum_{u' : (u',v) \in S} p_2(u',v) \cdot v \cdot w_{u',v} \le \epsilon.$$

It is straightforward to show that the second set of constraints is equivalent to the following:

$$\forall h \in \mathbb{R}^{C_\Omega/\epsilon} \text{ with } \sum_r \epsilon\, h_r = U: \qquad \sum_{(u,v) \in S} w_{u,v} \big( p_1(u,v) \cdot u \cdot h_u + p_2(u,v) \cdot v \cdot h_v \big) \le U.$$

Here the variables w_{u,v} can be interpreted as weights on strategies in S. A point in this polytope represents a convex combination of strategies in S with the property that, for any click-price curve h, the cost of the mixed strategy is at most U. Since all strategies in S get at least αC_Ω clicks, we have a strategy that achieves an α-approximation. Interestingly, the equivalence between this polytope and the LP dual above shows that there is a mixture over values r ∈ [0, C_Ω] that achieves an α-approximation for any curve h. After a search for the appropriate α (which turned out to be 1 - 1/e), we solved these two LPs and obtained the plots in Figure 3, which reveal not only the right approximation ratio, but also a picture of the worst-case distribution and the approximation-achieving strategy (see footnote 9). From the pictures, we were able to quickly guess the optimal strategy and the worst-case example.

Footnote 9: The parameters U and C_Ω can be set arbitrarily using scaling arguments.

Figure 3: The worst-case click-price curve and the (1 - 1/e)-approximate uniform bidding strategy, as found by linear programming. (Both plots run from 0 to C_Ω on the horizontal axis, with the transition point at C_Ω/e.)

4.3.2 Proof of Theorem 3
By Lemma 4, we know that for each r ≤ C_Ω there is a strategy that obtains traffic r at cost r·h(r). By mixing strategies for multiple values of r, we construct a uniform strategy that is guaranteed to achieve at least a 1 - 1/e ≈ 0.63 fraction of Ω's traffic and remain within budget. Note that the final resulting bid distribution puts some weight on the zero bid, since the single-bid strategies from Lemma 4 put some weight on bidding zero. Consider the following probability density function over such strategies (also depicted in Figure 3):

$$g(r) = \begin{cases} 0 & \text{for } r < C_\Omega/e, \\ 1/r & \text{for } r \ge C_\Omega/e. \end{cases}$$

Note that $\int_0^{C_\Omega} g(r)\,dr = \int_{C_\Omega/e}^{C_\Omega} \frac{1}{r}\,dr = 1$, i.e., g is indeed a probability density function. The traffic achieved by our strategy is

$$\mathrm{traffic} = \int_0^{C_\Omega} g(r) \cdot r \, dr = \int_{C_\Omega/e}^{C_\Omega} \frac{1}{r} \cdot r \, dr = \Big(1 - \frac{1}{e}\Big) C_\Omega.$$

The expected total spend of this strategy is at most

$$\mathrm{spend} = \int_0^{C_\Omega} g(r) \cdot r\, h(r) \, dr = \int_{C_\Omega/e}^{C_\Omega} h(r) \, dr \le \int_0^{C_\Omega} h(r) \, dr = U_\Omega.$$

Thus we have shown that there exists a uniform bidding strategy that is (1 - 1/e)-optimal. We now show that no uniform strategy can do better: for all ε > 0 there exists an instance for which all uniform strategies are at most (1 - 1/e + ε)-optimal. First we define the following click-price curve over the domain [0, 1]:

$$h(r) = \begin{cases} 0 & \text{for } r < e^{-1}, \\ \frac{1}{e-2}\left(e - \frac{1}{r}\right) & \text{for } r \ge e^{-1}. \end{cases}$$

Note that h is non-decreasing and non-negative. Since the curve is over the domain [0, 1], it corresponds to an instance with C_Ω = 1. Note also that $\int_0^1 h(r)\,dr = \frac{1}{e-2}\int_{1/e}^{1}\left(e - \frac{1}{r}\right)dr = 1$, so this curve corresponds to an instance with U_Ω = 1. Using Lemma 5,
we construct an actual instance where the best uniform strategies are convex combinations of strategies that bid h(u) and achieve u clicks at cost u·h(u). Suppose for the sake of contradiction that there exists a uniform bidding strategy that achieves α > 1 - 1/e traffic on this instance. By Lemma 1 there is always a two-bid optimal uniform bidding strategy, and so we may assume that the strategy achieving α clicks randomizes over two bids. To achieve α clicks, the two bids must be on values h(u) and h(v) with probabilities p_u and p_v such that p_u + p_v = 1, 0 ≤ u ≤ α ≤ v, and p_u u + p_v v = α. To calculate the spend of this strategy, consider two cases. If u = 0, then we are bidding h(v) with probability p_v = α/v. The spend in this case is

$$\mathrm{spend} = p_v \cdot v \cdot h(v) = \alpha\, h(v) = \frac{\alpha e - \alpha/v}{e - 2}.$$

Using v ≥ α and then α > 1 - 1/e, we get

$$\mathrm{spend} \ge \frac{\alpha e - 1}{e - 2} > \frac{(1 - 1/e)e - 1}{e - 2} = 1,$$

contradicting the assumption. We turn to the case u > 0. Here we have p_u = (v - α)/(v - u) and p_v = (α - u)/(v - u). Note that for r ∈ (0, 1] we have h(r) ≥ (e - 1/r)/(e - 2). Thus

$$\mathrm{spend} \ge p_u \cdot u\, h(u) + p_v \cdot v\, h(v) \ge \frac{(v - \alpha)(ue - 1) + (\alpha - u)(ve - 1)}{(v - u)(e - 2)} = \frac{\alpha e - 1}{e - 2} > 1.$$

The final inequality follows from α > 1 - 1/e. Thus in both cases the spend of our strategy exceeds the budget of 1.

4.4 Experimental Results
We ran simulations using data available at Google, which we briefly summarize here. We took a large advertising campaign and, using the set of keywords in the campaign, computed three different curves (see Figure 4) for three different bidding strategies. The x-axis is the budget (units removed), and the y-axis is the number of clicks obtained (again without units) by the optimal bid(s) under each
respective strategy. Query bidding represents our (unachievable) upper bound Ω, bidding on each query independently. The uniform bidding curves represent the results of applying our algorithm: deterministic uses a single bid level, while randomized uses a distribution. For reference, we include the lower bound of an (e - 1)/e fraction of the top curve. The data clearly demonstrate that the best single uniform bid obtains almost all the possible clicks in practice. Of course, in a more realistic environment without full knowledge, it is not always possible to find the best such bid, so further investigation is required to make this approach useful. However, just knowing that such a bid exists should make the online versions of the problem simpler.

Figure 4: An example with real data. The plot shows clicks as a function of budget (both normalized to [0, 1]) for query bidding, randomized uniform bidding, deterministic uniform bidding, and the (e - 1)/e lower bound.

5. HARDNESS RESULTS
By a reduction from vertex cover we can show the following (proof omitted):

Theorem 7. Budget Optimization is strongly NP-hard.

Now suppose we introduce weights on the queries that indicate the relative value of a click from the various search users. Formally, we have weights w_q for all q ∈ Q, and our goal is to maximize the total weighted traffic given a budget. Call this the Weighted Keyword Bidding problem. With this additional generalization we can show hardness of approximation via a simple reduction from the Maximum Coverage problem, which is known to be (1 - 1/e)-hard [6] (proof omitted).

Theorem 8. The Weighted Keyword Bidding problem is hard to approximate to within (1 - 1/e).

6. EXACT ALGORITHMS FOR LAMINAR GRAPHS
If a graph has special structure, we can sometimes solve the budget optimization problem exactly. Note that the knapsack algorithm in Section 2 solves the problem for the case when the graph is a simple matching. Here we generalize this to the case when the graph has a laminar
structure, which will allow us to impose a (partial) ordering on the possible bid values, and thereby give a pseudopolynomial algorithm via dynamic programming. We first show that to solve the Budget Optimization problem (for general graphs) optimally in pseudopolynomial time, it suffices to provide an algorithm that solves the deterministic case. The proof (omitted) uses ideas similar to Observation 1 and Lemma 1.

Lemma 9. Let I be an input to the Budget Optimization problem, and suppose that we can find the optimal deterministic solution for every possible budget U′ ≤ U. Then we can find the optimal solution in time O(U log U).

A collection S of n sets S_1, ..., S_n is laminar if, for any two sets S_i and S_j with S_i ∩ S_j ≠ ∅, either S_i ⊆ S_j or S_j ⊆ S_i. Given a keyword interaction graph G, we associate a set of neighboring queries Q_k = {q : (k, q) ∈ E} with each keyword k. If this collection of sets is laminar, we say that the graph has the laminar property. Note that a laminar interaction graph naturally falls out as a consequence of designing a hierarchical keyword set (e.g., shoes, high-heel shoes, athletic shoes). We call a solution deterministic if it consists of one bid vector rather than a general distribution over bid vectors. The following lemma gives structure to the optimal solution and will allow dynamic programming.

Lemma 10. For keywords i, j ∈ K, if Q_i ⊆ Q_j then there exists an optimal deterministic solution to the Budget Optimization problem with a_i ≥ a_j.

We can view the laminar order as a tree, with keyword j the parent of keyword i if Q_j is the minimal set containing Q_i; in this case we say that i is a child of j. Given a keyword j with c children i_1, ...
, i_c, we now need to enumerate over all ways to allocate the budget among the children, and also over all possible minimum bids for the children. A complication is that a node may have many children, and thus a term of U^c would not even be pseudopolynomial. We solve this problem by showing that, given any laminar ordering, there is an equivalent one in which each keyword has at most two children.

Lemma 11. Let G be a graph with the laminar property. There exists another graph G′ with the same optimal solution to the Budget Optimization problem, where each node has at most two children in the laminar ordering. Furthermore, G′ has at most twice as many nodes as G.

Given a graph with at most two children per node, we define F[i, b, U] to be the maximum number of clicks achievable by bidding at least b on each keyword j such that Q_j ⊆ Q_i (and exactly b on keyword i) while spending at most U. For this definition, we use Z(b, U) to denote the set of allowable bids and budgets over the two children i′ and i″:

$$Z(b, U) = \{(b', b'', U', U'') : b' \ge b,\; U' \le U,\; b'' \ge b,\; U'' \le U,\; U' + U'' \le U\}.$$

Given a keyword i and a bid a_i, compute the incremental spend and traffic associated with bidding a_i on keyword i, that is,

$$\hat{t}(i, a_i) = \sum_{q \in Q_i \setminus (Q_{i'} \cup Q_{i''})} \mathrm{clicks}_q(a_i) \quad \text{and} \quad \hat{s}(i, a_i) = \sum_{q \in Q_i \setminus (Q_{i'} \cup Q_{i''})} \mathrm{cost}_q(a_i).$$

Now we define

$$F[i, b, U] = \max_{(b', b'', U', U'') \in Z(b, U)} \Big\{ F[i', b', U'] + F[i'', b'', U''] + \hat{t}(i, b) \Big\} \quad (5)$$

if ŝ(i, b) ≤ U - U′ - U″ and i > 0, and F[i, b, U] = 0 otherwise.

Lemma 12. If the graph G has the laminar property then, after applying Lemma 11, the dynamic programming recurrence in (5) finds an optimal deterministic solution to the Budget Optimization problem exactly, in O(B³U³n) time.

In addition, we can apply Lemma 9 to compute the optimal (randomized) solution. Observe that in the dynamic program we have already solved the instance for every budget U′ ≤ U, so we can find the randomized solution with no additional
asymptotic overhead.

Lemma 13. If the graph G has the laminar property then, by applying Lemma 11, the dynamic programming recurrence in (5), and Lemma 9, we can find an optimal solution to the Budget Optimization problem in O(B³U³n) time.

The bounds in this section make the pessimistic assumption that we must try every budget and every bid level. For many problems one only needs to choose from a discrete set of bid levels (e.g., multiples of one cent); doing so yields the obvious improvement in the bounds.

7. BID OPTIMIZATION UNDER VCG
The GSP auction is not the only possible auction for sponsored search. Indeed, the VCG auction and its variants [14, 4, 7, 1] offer alternatives with compelling game-theoretic properties. In this section we argue that the budget optimization problem is easy under the VCG auction. For a full definition of VCG and its application to sponsored search, we refer the reader to [1, 2, 5]. For the purposes of the budget optimization problem, we can define VCG by simply redefining cost_q(b) (replacing Equation (2)):

$$\mathrm{cost}_q(b) = \sum_{j=i}^{p-1} (\alpha[j] - \alpha[j+1]) \cdot b[j], \quad \text{where } i = \mathrm{pos}(b).$$

Observation 1 still holds, and we can construct a landscape as before, where each landscape point corresponds to a particular bid b[i]. We claim that under the VCG auction the landscapes are convex. To see this, consider two consecutive positions i and i + 1. The slope of the line segment between the points corresponding to these two positions is

$$\frac{\mathrm{cost}(b[i]) - \mathrm{cost}(b[i+1])}{\mathrm{clicks}(b[i]) - \mathrm{clicks}(b[i+1])} = \frac{(\alpha[i] - \alpha[i+1]) \cdot b[i]}{\alpha[i] - \alpha[i+1]} = b[i].$$

Since b[i] ≥ b[i+1], the slopes of the pieces of the landscape decrease as the bid decreases, and we get that the curve is convex. Now consider running the algorithm described in Section 2.1.4 for finding the optimal bids for a set of queries. In this algorithm we took all the pieces from the landscape curves, sorted them by incremental cpc, then took
a prefix of those pieces, giving us bids for each of the queries. But the equation above shows that each piece has incremental cpc equal to the bid that achieves it; thus, under VCG the pieces are also sorted by bid. Hence we can obtain any prefix of the pieces via a uniform bid on all the keywords. We conclude that the best uniform bid is an optimal solution to the budget optimization problem.

8. CONCLUDING REMARKS
Our algorithmic result suggests an intriguing heuristic in practice: bid a single value b on all keywords; at the end of the day, if the budget is under-spent, adjust b to be higher; if the budget is overspent, adjust b to be lower; otherwise, maintain b. If the scenario does not change from day to day, this simple strategy has the same theoretical properties as our one-bid strategy, and in practice is likely to be much better. Of course the scenario does change, however, and so coming up with a stochastic bidding strategy remains an important open direction, explored somewhat in [11, 13]. Another interesting generalization is to consider weights on the clicks, which is a way to model conversions. (A conversion corresponds to an action on the part of the user who clicked through to the advertiser's site, e.g., a sale or an account sign-up.) Finally, we have looked at this system as a black box returning clicks as a function of bid, whereas in reality it is a complex repeated game involving multiple advertisers. In [3], it was shown that when a set of advertisers use a strategy similar to the one we suggest here, under a slightly modified first-price auction, the prices approach a well-understood market equilibrium.

Acknowledgments
We thank Rohit Rao, Zoya Svitkina, and Adam Wildavsky for helpful discussions.

9. REFERENCES
[1] G. Aggarwal, A. Goel, and R. Motwani. Truthful auctions for pricing search keywords. ACM Conference on Electronic Commerce, 1-7, 2006.
[2] G. Aggarwal, J. Feldman, and S.
Muthukrishnan. Bidding to the top: VCG and equilibria of position-based auctions. Proc. WAOA, 2006.
[3] C. Borgs, J. Chayes, O. Etesami, N. Immorlica, K. Jain, and M. Mahdian. Dynamics of bid optimization in online advertisement auctions. Proc. WWW, 2007.
[4] E. Clarke. Multipart pricing of public goods. Public Choice, 11(1):17-33, 1971.
[5] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the generalized second price auction: Selling billions of dollars worth of keywords. Second Workshop on Sponsored Search Auctions, 2006.
[6] U. Feige. A threshold of ln n for approximating set cover. 28th ACM Symposium on Theory of Computing, pp. 314-318, 1996.
[7] T. Groves. Incentives in teams. Econometrica, 41(4):617-631, 1973.
[8] K. Jain, M. Mahdian, E. Markakis, A. Saberi, and V. Vazirani. Greedy facility location algorithms analyzed using dual fitting with factor-revealing LP. J. ACM, 50(6):795-824, 2003.
[9] W. Labio, M. Rose, and S. Ramaswamy. Internal document, Google, Inc., May 2004.
[10] A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani. AdWords and generalized online matching. FOCS, 2005.
[11] S. Muthukrishnan, M. Pál, and Z. Svitkina. Stochastic models for budget optimization in search-based advertising. 3rd Workshop on Sponsored Search Auctions, WWW 2007.
[12] F. Preparata and M. Shamos. Computational Geometry: An Introduction. Springer-Verlag, New York, NY, 1985.
[13] P. Rusmevichientong and D. Williamson. An adaptive algorithm for selecting profitable keywords for search-based advertising services. Proc. 7th ACM Conference on Electronic Commerce, 260-269, 2006.
[14] W.
Vickrey.\nCounterspeculation, auctions, and competitive sealed tenders.\nJournal of Finance, 16(1):8-37, 1961.","lvl-3":"Budget Optimization in Search-Based Advertising Auctions\nABSTRACT\nInternet search companies sell advertisement slots based on users' search queries via an auction.\nWhile there has been previous work on the auction process and its game-theoretic aspects, most of it focuses on the Internet company.\nIn this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget.\nWe model the entire process and study this budget optimization problem.\nWhile most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well.\nMore precisely, this strategy gets at least a 1 \u2212 1\/e fraction of the maximum clicks possible.\nAs our preliminary experiments show, such uniform strategies are likely to be practical.\nWe also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.\n1.\nINTRODUCTION\nOnline search is now ubiquitous and Internet search companies such as Google, Yahoo! 
and MSN let companies and * Work done while visiting Google, Inc., New York, NY.\nindividuals advertise based on search queries posed by users.\nConventional media outlets, such as TV stations or newspapers, price their ad slots individually, and the advertisers buy the ones they can afford.\nIn contrast, Internet search companies find it difficult to set a price explicitly for the advertisements they place in response to user queries.\nThis difficulty arises because supply (and demand) varies widely and unpredictably across the user queries, and they must price slots for billions of such queries in real time.\nThus, they rely on the market to determine suitable prices by using auctions amongst the advertisers.\nIt is a challenging problem to set up the auction in order to effect a stable market in which all the parties (the advertisers, users as well as the Internet search company) are adequately satisfied.\nRecently there has been systematic study of the issues involved in the game theory of the auctions [5, 1, 2], revenue maximization [10], etc. 
.\nThe perspective in this paper is not of the Internet search company that displays the advertisements, but rather of the advertisers.\nThe challenge from an advertiser's point of view is to understand and interact with the auction mechanism.\nThe advertiser determines a set of keywords of their interest and then must create ads, set the bids for each keyword, and provide a total (often daily) budget.\nWhen a user poses a search query, the Internet search company determines the advertisers whose keywords match the query and who still have budget left over, runs an auction amongst them, and presents the set of ads corresponding to the advertisers who \"win\" the auction.\nThe advertiser whose ad appears pays the Internet search company if the user clicks on the ad.\nThe focus in this paper is on how the advertisers bid.\nFor the particular choice of keywords of their interest1, an advertiser wants to optimize the overall effect of the advertising campaign.\nWhile the effect of an ad campaign in any medium is a complicated phenomenon to quantify, one commonly accepted (and easily quantified) notion in searchbased advertising on the Internet is to maximize the number of clicks.\nThe Internet search companies are supportive to1The choice of keywords is related to the domain-knowledge of the advertiser, user behavior and strategic considerations.\nInternet search companies provide the advertisers with summaries of the query traffic which is useful for them to optimize their keyword choices interactively.\nWe do not directly address the choice of keywords in this paper, which is addressed elsewhere [13].\nwards advertisers and provide statistics about the history of click volumes and prediction about the future performance of various keywords.\nStill, this is a complex problem for the following reasons (among others): 9 Individual keywords have significantly different characteristics from each other; e.g., while \"fishing\" is a broad keyword that matches many user 
queries and has many competing advertisers, \"humane fishing bait\" is a niche keyword that matches only a few queries, but might have less competition.\n9 There are complex interactions between keywords because a user query may match two or more keywords, since the advertiser is trying to cover all the possible keywords in some domain.\nIn effect the advertiser ends up competing with herself.\nAs a result, the advertisers face a challenging optimization problem.\nThe focus of this paper is to solve this optimization problem.\n1.1 The Budget Optimization Problem\nWe present a short discussion and formulation of the optimization problem faced by advertisers; a more detailed description is in Section 2.\nA given advertiser sees the state of the auctions for searchbased advertising as follows.\nThere is a set K of keywords of interest; in practice, even small advertisers typically have a large set K.\nThere is a set Q of queries posed by the users.\nFor each query q G Q, there are functions giving the clicksq (b) and costq (b) that result from bidding a particular amount b in the auction for that query, which we model more formally in the next section.\nThere is a bipartite graph G on the two vertex sets representing K and Q. For any query q G Q, the neighbors of q in K are the keywords that are said to \"match\" the query q. 2 The budget optimization problem is as follows.\nGiven graph G together with the functions clicksq (.)\nand costq (.)\non the queries, as well as a budget U, determine the bids bk for each keyword k G K such that Pq clicksq (bq) is maximized subject to Pq costq (bq) b [i + 1].\nSuppose that we bid b on some keyword that matches the user's query, then our position is defined by the largest b [i] that is at most b, that is,\nSince we only pay if the user clicks (and that happens with probability \u03b1 [i]), our expected cost for winning position i 4Google, Yahoo! 
and MSN all use some variant of the GSP auction.\nIn the Google auction, the advertisers' bids are multiplied by a quality score before they are ranked; our results carry over to this case as well, which we omit from this paper for clarity.\nAlso, other auctions besides GSP have been considered; e.g., the Vickrey Clark Groves (VCG) auction [14, 4, 7].\nEach auction mechanism will result in a different sort of optimization problem.\nIn the conclusion we point out that for the VCG auction, the bidding optimization problem becomes quite easy.\nwould be cost [i] = \u03b1 [i] \u00b7 b [i], where i = pos (b).\nWe use cost, (b) and clicks, (b) to denote the expected cost and clicks that result from having a bid b that qualifies for a query auction q, and thus\nThe following observations about cost and clicks follow immediately from the definitions and equations (1), (2) and (3).\nWe use R + to denote the nonnegative reals.\n1.\n(cost, (b), clicks, (b)) can only take on one of a finite set of values V, = {(cost [1], \u03b1 [1]),..., (cost [p], \u03b1 [p])}.\n2.\nBoth cost, (b) and clicks, (b) are non-decreasing functions of b. Also, cost-per-click (cpc) cost, (b) \/ clicks, (b) is non-decreasing in b. 3.\ncost, (b) \/ clicks, (b) r}.\nAnq: b \u2217 q \u2264 c clicksq (b \u2217 other way to construct this curve is to sort the queries in increasing order by b \u2217 q = costq (b \u2217 q) \/ clicksq (b \u2217 q), then make a step function where the qth step has height b \u2217 q and width clicksq (b \u2217 q) (see Figure 2).\nNote that the area of each step is costq (b \u2217 q).\nThe following claim follows immediately:\nSuppose we wanted to buy some fraction r ~ \/ C\u03a9 of the traffic that \u03a9 is getting.\nThe click-price curve says that if we bid h (r ~) on every keyword (and therefore every query), we get at least r ~ traffic, since this bid would ensure that for all q such that b \u2217 q 0 be a small number that we will later define in terms of E. 
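The landscape and click-price-curve constructions above can be made concrete. A minimal sketch with hypothetical inputs: `other_bids` and `ctrs` stand in for one query's competing bids and position click-through rates, and `per_query` holds each query's (clicks, cost) pair at the omniscient bids b*_q:

```python
def landscape_point(bid, other_bids, ctrs):
    """Expected (clicks, cost) of bidding `bid` in one query's GSP auction.

    other_bids: competing bids sorted descending, b[1] >= ... >= b[p].
    ctrs:       position click-through rates, alpha[1] >= ... >= alpha[p].
    The winning position i is given by the largest b[i] <= bid, and the
    expected cost of position i is cost[i] = alpha[i] * b[i].
    """
    for alpha_i, b_i in zip(ctrs, other_bids):
        if b_i <= bid:
            return alpha_i, alpha_i * b_i
    return 0.0, 0.0  # bid below every b[i]: no clicks, no cost


def click_price_curve(per_query):
    """Click-price curve h as a step function.

    per_query: (clicks, cost) for each query at the omniscient bids b*_q.
    Sorting queries by cost-per-click and stacking step widths (clicks)
    against heights (cpc) yields h; each step's area is that query's cost.
    Returns [(cumulative_traffic, price_per_click), ...].
    """
    curve, r = [], 0.0
    for clicks, cost in sorted(per_query, key=lambda qc: qc[1] / qc[0]):
        r += clicks
        curve.append((r, cost / clicks))
    return curve


def h(curve, r):
    """Per-click price of the first step reaching traffic level r."""
    for reached, price in curve:
        if r <= reached:
            return price
    raise ValueError("r exceeds the total traffic available")
```

For example, with `per_query = [(10, 5.0), (5, 10.0)]` the curve is `[(10, 0.5), (15, 2.0)]`: traffic up to 10 clicks costs 0.5 per click, and traffic beyond that costs 2.0 per click.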
Define r0 = 0, r1, r2,..., rm = C such that ri \u2212 1 0.\nQuery x has two positions, with bids of bx1 = 1 \/ \u03b1 and bx2 = e, and with identical click-through rates \u03b1x [1] = \u03b1x [2] = \u03b1.\nQuery y has one position, with a bid of by1 = 1 \/ \u03b1 and a click-through rate of \u03b1y [1] = \u03b1.\nThe budget is U = 1 + e\u03b1.\nThe optimal solution is to bid a on u (and therefore x) and bid 1 \/ \u03b1 on v (and therefore y), both with probability 1.\nThis achieves a total of 2\u03b1 clicks and spends the budget exactly.\nThe only useful bids are 0, a and 1 \/ \u03b1, since for both queries all other bids are identical in terms of cost and clicks to one of those three.\nAny single-bid solution that uses E as its non-zero bid gets at most \u03b1 clicks.\nBidding 1 \/ \u03b1 on both keywords results in 2\u03b1 clicks and total cost 2.\nThus, since the budget is U = 1 + e\u03b1 <2, a single-bid solution using 1 \/ \u03b1 can put weight at most (1 + e\u03b1) \/ 2 on the 1 \/ \u03b1 bid.\nThis results in at most \u03b1 (1 + e\u03b1) clicks.\nThis can be made arbitrarily close to \u03b1 by lowering E.\n4.3 A (1 \u2212 1e) - approximation algorithm\nThe key to the proof of Theorem 3 is to show that there is a distribution over single-bid strategies from Lemma 4 that obtains at least (1 \u2212 e1) C\u03a9 clicks.\nIn order to figure out the best distribution, we wrote a linear program that models the behavior of a player who is trying to maximize clicks and an adversary who is trying to create an input that is hard for the player.\nThen using linear programming duality, we were able to derive both an optimal strategy and a tight instance.\nAfter solving the LP numerically, we were also able to see that there is a uniform strategy for the player that always obtains (1 \u2212 1e) C\u03a9 clicks; and then from the solution were easily able to \"guess\" the optimal distribution.\nThis methodology is similar to that used in work on factor-revealing LPs [8, 
10].\n4.3.1 An LP for the worst-case click-price curve.\nConsider the adversary's problem of finding a click-price curve for which no uniform bidding strategy can achieve \u03b1C\u03a9 clicks.\nRecall that by Lemma 1 we can assume that a uniform strategy randomizes between two bids u and v.\nWe also assume that the uniform strategy uses a convex combination of strategies from Lemma 4, which we can assume by Lemma 5.\nThus, to achieve \u03b1C\u03a9 clicks, a uniform strategy must randomize between bids h (u) and h (v) where u \u2264 \u03b1C\u03a9 and v \u2265 \u03b1C\u03a9.\nCall the set of such strategies S. Given a (u, v) \u2208 S, the necessary probabilities in order to achieve \u03b1C\u03a9 clicks are easily determined, and we denote them by p1 (u, v) and p2 (u, v) respectively.\nNote further that the advertiser is trying to figure out which of these strategies to use, and ultimately wants to compute a distribution over uniform strategies.\nIn the LP, she is actually going to compute a distribution over pairs of strategies in S, which we will then interpret as a distribution over strategies.\nUsing this set of uniform strategies as constraints, we can characterize a set of worst-case click-price curves by the constraints\nA curve h that satisfies these constraints has the property that all uniform strategies that obtain \u03b1C\u03a9 clicks spend more than U. 
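The constraints above can be evaluated mechanically: a strategy in S mixes bids h(u) and h(v), and its probabilities p1(u, v) and p2(u, v) are fixed by requiring expected clicks of exactly the target. A small sketch, where the curve passed in is an arbitrary stand-in (not the paper's worst-case curve):

```python
def two_bid_strategy(h, u, v, alpha):
    """Expected (clicks, spend) of a uniform strategy mixing bids h(u)
    and h(v), with u <= alpha <= v.

    The mixing probabilities solve p1 + p2 = 1 and p1*u + p2*v = alpha,
    so expected clicks come out to alpha; a single-bid strategy at
    level r buys traffic r at cost r * h(r).
    """
    if u == v:
        return alpha, alpha * h(alpha)
    p1 = (v - alpha) / (v - u)
    p2 = (alpha - u) / (v - u)
    clicks = p1 * u + p2 * v
    spend = p1 * u * h(u) + p2 * v * h(v)
    return clicks, spend


# Stand-in curve on [0, 1] (normalized so the total traffic is 1):
curve = lambda r: r
clicks, spend = two_bid_strategy(curve, 0.2, 0.8, 0.5)
```

A click-price curve h witnesses the hardness bound exactly when every such (u, v) pair spends more than the budget U.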
Discretizing this set of inequalities, and pushing the first constraint into the objective function, we get the following LP over variables hr representing the curve:\nIn this LP, S is defined in the discrete domain as S = {(u, v) \u2208 {0, e, 2e,..., C\u03a9} 2: 0 \u2264 u \u2264 \u03b1C\u03a9 \u2264 v \u2264 C\u03a9}.\nSolving this LP for a particular \u03b1, if we get an objective less than U, we know (up to some discretization) that an instance of BUDGET OPTIMIZATION exists that cannot be approximated better than \u03b1.\n(The instance is constructed as in the proof of Lemma 5.)\nA binary search yields the smallest such \u03b1 where the objective is exactly U. To obtain a strategy for the advertiser, we look at the dual, constraining the objective to be equal to U in order to get the polytope of optimum solutions:\nIt is straightforward to show that the second set of constraints is equivalent to the following:\nHere the variables can be interpreted as weights on strategies in S.\nA point in this polytope represents a convex combination over strategies in S, with the property that for any click-price curve h, the cost of the mixed strategy is at most U.\nSince all strategies in S get at least \u03b1C\u03a9 clicks, we have a strategy that achieves an \u03b1-approximation.\nInterestingly, the equivalence between this polytope and the LP dual above shows that there is a mixture over values r \u2208 [0, C] that achieves an \u03b1-approximation for any curve h.\nAfter a search for the appropriate \u03b1 (which turned out to be 1 \u2212 1e), we solved these two LPs and came up with the plots in Figure 3, which reveal not only the right approximation ratio, but also a picture of the worst-case distribution and the approximation-achieving strategy .9 From the pic\nFigure 3: The worst-case click-price curve and (1 \u2212 1\/e) - approximate uniform bidding strategy, as found by linear programming.\ntures, we were able to quickly \"guess\" the optimal strategy and worst 
case example.\n4.3.2 Proof of Theorem 3\nBy Lemma 4, we know that for each r \u2264 U\u03a9, there is a strategy that can obtain traffic r at cost r \u00b7 h (r).\nBy mixing strategies for multiple values of r, we construct a uniform strategy that is guaranteed to achieve at least 1 \u2212 e \u2212 1 = 0.63 fraction of \u03a9's traffic and remain within budget.\nNote that the \"final\" resulting bid distribution will have some weight on the zero bid, since the single-bid strategies from Lemma 4 put some weight on bidding zero.\nConsider the following probability density function over such strategies (also depicted in Figure 3):\nity density function.\nThe traffic achieved by our strategy is equal to\nThe expected total spend of this strategy is at most\nCn\/e 0 Thus we have shown that there exists a uniform bidding strategy that is (1 \u2212 e1) - optimal.\nWe now show that no uniform strategy can do better.\nWe will prove that for all ~> 0 there exists an instance for which all uniform strategies are at most (1 \u2212 e1 + ~) - optimal.\nFirst we define the following click-price curve over the domain [0, 1]:\nNote that h is non-decreasing and non-negative.\nSince the curve is over the domain [0, 1] it corresponds to an instance where C\u03a9 = 1.\nNote also that R 1 R 1\n1 dr = 1.\nThus, this curve corresponds to an instance where U\u03a9 = 1.\nUsing Lemma 5, we construct an actual instance where the best uniform strategies are convex combinations of strategies that bid h (u) and achieve u clicks and u \u00b7 h (u) cost.\nSuppose for the sake of contradiction that there exists a uniform bidding strategy that achieves \u03b1> 1 \u2212 e \u2212 1 traffic on this instance.\nBy Lemma 1 there is always a two-bid optimal uniform bidding strategy and so we may assume that the strategy achieving \u03b1 clicks randomizes over two bids.\nTo achieve \u03b1 clicks, the two bids must be on values h (u) and h (v) with probabilities pu and pv such that pu + pv = 1, 0 \u2264 u 
\u2264 \u03b1 \u2264 v and puu + pvv = \u03b1.\nTo calculate the spend of this strategy consider two cases: if u = 0 then we are bidding h (v) with probability pv = \u03b1 \/ v.\nThe spend in this case is:\ncontradicting the assumption.\nWe turn to the case u> 0.\nHere we have pu = v \u2212 \u03b1 v \u2212 u and pv = \u03b1 \u2212 u v \u2212 u.\nNote that for r \u2208 (0, 1] we have h (r) \u2265\n(v \u2212 \u03b1) (ue \u2212 1) + (\u03b1 \u2212 u) (ve \u2212 1) (v \u2212 u) (e \u2212 2) \u03b1e \u2212 1 = e \u2212 2> 1.\nThe final inequality follows from \u03b1> 1 \u2212 1e.\nThus in both cases the spend of our strategy is over the budget of 1.\n\u2751\n4.4 Experimental Results\nWe ran simulations using the data available at Google which we briefly summarize here.\nWe took a large advertising campaign, and, using the set of keywords in the campaign, computed three different curves (see Figure 4) for three different bidding strategies.\nThe x-axis is the budget (units removed), and the y-axis is the number of clicks obtained (again without units) by the optimal bid (s) under each respective strategy.\n\"Query bidding\" represents our (unachievable) upper bound \u03a9, bidding on each query independently.\nThe \"uniform bidding\" curves represent the results of applying our algorithm: \"deterministic\" uses a single bid level, while \"randomized\" uses a distribution.\nFor reference, we include the lower bound of a (e \u2212 1) \/ e fraction of the top curve.\nThe data clearly demonstrate that the best single uniform bid obtains almost all the possible clicks in practice.\nOf course in a more realistic environment without full knowledge, it is not always possible to find the best such bid, so further investigation is required to make this approach useful.\nHowever, just knowing that there is such a bid available should make the on-line versions of the problem simpler.\n5.\nHARDNESS RESULTS\nBy a reduction from vertex cover we can show the following (proof 
omitted):\nTHEOREM 7.\nBUDGET OPTIMIZATION is strongly NP-hard.\nFigure 4: An example with real data.\nNow suppose we introduce weights on the queries that indicate the relative value of a click from the various search users.\nFormally, we have weights wq for all q \u2208 Q and our goal is to maximize the total weighted traffic given a budget.\nCall this the WEIGHTED KEYWORD BIDDING problem.\nWith this additional generalization we can show hardness of approximation via a simple reduction from the MAXIMUM COVERAGE problem, which is known to be (1 \u2212 1\/e)-hard [6] (proof omitted).\nTHEOREM 8.\nThe WEIGHTED KEYWORD BIDDING problem is hard to approximate to within (1 \u2212 1\/e).\n6.\nEXACT ALGORITHMS FOR LAMINAR GRAPHS\nIf a graph has special structure, we can sometimes solve the budget optimization problem exactly.\nNote that the knapsack algorithm in Section 2 solves the problem for the case when the graph is a simple matching.\nHere we generalize this to the case when the graph has a laminar structure, which will allow us to impose a (partial) ordering on the possible bid values, and thereby give a pseudopolynomial algorithm via dynamic programming.\nWe first show that to solve the BUDGET OPTIMIZATION problem (for general graphs) optimally in pseudopolynomial time, it suffices to provide an algorithm that solves the deterministic case.\nThe proof (omitted) uses ideas similar to Observation 1 and Lemma 1.\nA collection S of n sets S1, ..., Sn is laminar if, for any two sets Si and Sj, if Si \u2229 Sj \u2260 \u2205 then either Si \u2286 Sj or Sj \u2286 Si.\nGiven a keyword interaction graph G, we associate a set of neighboring queries Qk = {q: (k, q) \u2208 E} with each keyword k.\nIf this collection of sets is laminar, we say that the graph has the laminar property.\nNote that a laminar interaction graph would naturally fall out as a consequence of designing a \"hierarchical\" keyword set (e.g., \"shoes,\" \"high-heel shoes,\" \"athletic shoes\").\nWe call a solution deterministic if it consists of one bid 
vector, rather than a general distribution over bid vectors.\nThe following lemma will be useful for giving a structure to the optimal solution, and will allow dynamic programming.\nLEMMA 10.\nFor keywords i, j \u2208 K, if Qi \u2286 Qj then there exists an optimal deterministic solution to the BUDGET OPTIMIZATION problem with bi \u2265 bj.\nWe can view the laminar order as a tree with keyword j as a parent of keyword i if Qj is the minimal set containing Qi.\nIn this case we say that i is a child of j. Given a keyword j with c children i1, ..., ic, we now need to enumerate over all ways to allocate the budget among the children and also over all possible minimum bids for the children.\nA complication is that a node may have many children and thus a term of U^c would not even be pseudopolynomial.\nWe can solve this problem by showing that given any laminar ordering, there is an equivalent one in which each keyword has at most 2 children.\nLEMMA 11.\nLet G be a graph with the laminar property.\nThere exists another graph G' with the same optimal solution to the BUDGET OPTIMIZATION problem, where each node has at most two children in the laminar ordering.\nFurthermore, G' has at most twice as many nodes as G. Given a graph with at most two children per node, we define F [i, b, U] to be the maximum number of clicks achievable by bidding at least b on each of keywords j s.t. Qj \u2286 Qi (and exactly b on keyword i) while spending at most U. 
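The laminar property defined above can be tested directly; a minimal sketch (the example collections are illustrative, echoing the hierarchical shoes example):

```python
def is_laminar(sets):
    """A collection of sets is laminar if every pair of sets is
    either disjoint or nested (one contains the other)."""
    sets = [frozenset(s) for s in sets]
    return all(
        not (a & b) or a <= b or b <= a
        for i, a in enumerate(sets)
        for b in sets[i + 1:]
    )


# A hierarchical keyword family: each keyword's query set is either
# disjoint from or nested inside every other keyword's query set.
hierarchical = [{"q1", "q2", "q3"}, {"q1", "q2"}, {"q3"}]
# Two keywords whose query sets cross break the property.
crossing = [{"q1", "q2"}, {"q2", "q3"}]
```

On such inputs, `is_laminar(hierarchical)` holds while `is_laminar(crossing)` does not, which is exactly the precondition for the dynamic program over the laminar tree.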
For this definition, we use Z (b, U) to denote set of allowable bids and budgets over children:\nGiven a keyword i and a bid ai, compute an incremental spend and traffic associated with bidding ai on keyword i, that is\nNow we define F [i, b, U] as\nIn addition, we can apply Lemma 9 to compute the optimal (randomized) solution.\nObserve that in the dynamic program, we have already solved the instance for every budget U' b [i + 1], the slopes of the \"pieces\" of the landscape decrease, and we get that the curve is convex.\nNow consider running the algorithm described in Section 2.1.4 for finding the optimal bids for a set of queries.\nIn this algorithm we took all the pieces from the landscape curves, sorted them by incremental cpc, then took a prefix of those pieces, giving us bids for each of the queries.\nBut, the equation above shows that each piece has its incremental cpc equal to the bid that achieves it; thus in the case of VCG the pieces are also sorted by bid.\nHence we can obtain any prefix of the pieces via a uniform bid on all the keywords.\nWe conclude that the best uniform bid is an optimal solution to the budget optimization problem.\n8.\nCONCLUDING REMARKS\nOur algorithmic result presents an intriguing heuristic in practice: bid a single value b on all keywords; at the end of the day, if the budget is under-spent, adjust b to be higher; if budget is overspent, adjust b to be lower; else, maintain b.\nIf the scenario does not change from day to day, this simple strategy will have the same theoretical properties as our one-bid strategy, and in practice, is likely to be much better.\nOf course the scenario does change, however, and so coming up with a \"stochastic\" bidding strategy remains an important open direction, explored somewhat by [11, 13].\nAnother interesting generalization is to consider weights on the clicks, which is a way to model conversions.\n(A conversion corresponds to an action on the part of the user who clicked through to the 
advertiser site; e.g., a sale or an account sign-up.)\nFinally, we have looked at this system as a black box returning clicks as a function of bid, whereas in reality it is a complex repeated game involving multiple advertisers.\nIn [3], it was shown that when a set of advertisers use a strategy similar to the one we suggest here, under a slightly modified first-price auction, the prices approach a well-understood market equilibrium.","keyphrases":["budget optim","optim","advertis","auction","internet","bid","keyword","search-base advertis auction","game theori","intrigu heurist","uniform bid strategi","vickrei clark grove","lp","gener second price","sponsor search"],"prmu":["P","P","P","P","P","P","P","M","U","U","R","U","U","U","M"]} {"id":"J-22","title":"Betting on Permutations","abstract":"We consider a permutation betting scenario, where people wager on the final ordering of n candidates: for example, the outcome of a horse race. We examine the auctioneer problem of risklessly matching up wagers or, equivalently, finding arbitrage opportunities among the proposed wagers. Requiring bidders to explicitly list the orderings that they'd like to bet on is both unnatural and intractable, because the number of orderings is n! and the number of subsets of orderings is 2n! . We propose two expressive betting languages that seem natural for bidders, and examine the computational complexity of the auctioneer problem in each case. Subset betting allows traders to bet either that a candidate will end up ranked among some subset of positions in the final ordering, for example, horse A will finish in positions 4, 9, or 13-21, or that a position will be taken by some subset of candidates, for example horse A, B, or D will finish in position 2. For subset betting, we show that the auctioneer problem can be solved in polynomial time if orders are divisible. 
Pair betting allows traders to bet on whether one candidate will end up ranked higher than another candidate, for example horse A will beat horse B. We prove that the auctioneer problem becomes NP-hard for pair betting. We identify a sufficient condition for the existence of a pair betting match that can be verified in polynomial time. We also show that a natural greedy algorithm gives a poor approximation for indivisible orders.","lvl-1":"Betting on Permutations Yiling Chen Yahoo! Research 45 W. 18th St. 6th Floor New York, NY 10011 Lance Fortnow Department of Computer Science University of Chicago Chicago, IL 60637 Evdokia Nikolova \u2217 CS & AI Laboratory Massachusetts Institute of Technology Cambridge, MA 02139 David M. Pennock Yahoo! Research 45 W. 18th St. 6th Floor New York, NY 10011 ABSTRACT We consider a permutation betting scenario, where people wager on the final ordering of n candidates: for example, the outcome of a horse race.\nWe examine the auctioneer problem of risklessly matching up wagers or, equivalently, finding arbitrage opportunities among the proposed wagers.\nRequiring bidders to explicitly list the orderings that they``d like to bet on is both unnatural and intractable, because the number of orderings is n!\nand the number of subsets of orderings is 2n!\n.\nWe propose two expressive betting languages that seem natural for bidders, and examine the computational complexity of the auctioneer problem in each case.\nSubset betting allows traders to bet either that a candidate will end up ranked among some subset of positions in the final ordering, for example, horse A will finish in positions 4, 9, or 13-21, or that a position will be taken by some subset of candidates, for example horse A, B, or D will finish in position 2.\nFor subset betting, we show that the auctioneer problem can be solved in polynomial time if orders are divisible.\nPair betting allows traders to bet on whether one candidate will end up ranked higher than another 
candidate, for example horse A will beat horse B.\nWe prove that the auctioneer problem becomes NP-hard for pair betting.\nWe identify a sufficient condition for the existence of a pair betting match that can be verified in polynomial time.\nWe also show that a natural greedy algorithm gives a poor approximation for indivisible orders.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Economics, Theory 1.\nINTRODUCTION Buying or selling a financial security in effect is a wager on the security``s value.\nFor example, buying a stock is a bet that the stock``s value is greater than its current price.\nEach trader evaluates his expected profit to decide the quantity to buy or sell according to his own information and subjective probability assessment.\nThe collective interaction of all bets leads to an equilibrium that reflects an aggregation of all the traders'' information and beliefs.\nIn practice, this aggregate market assessment of the security``s value is often more accurate than other forecasts relying on experts, polls, or statistical inference [16, 17, 5, 2, 15].\nConsider buying a security at price fifty-two cents, that pays $1 if and only if a Democrat wins the 2008 US Presidential election.\nThe transaction is a commitment to accept a fifty-two cent loss if a Democrat does not win in return for a forty-eight cent profit if a Democrat does win.\nIn this case of an event-contingent security, the price-the market``s value of the security-corresponds directly to the estimated probability of the event.\nAlmost all existing financial and betting exchanges pair up bilateral trading partners.\nFor example, one trader willing to accept an x dollar loss if a Democrat does not win in return for a y dollar profit if a Democrat wins is matched up with a second trader willing to accept the opposite.\nHowever in many scenarios, even if no bilateral agreements exist among traders, multilateral 
agreements may be possible.\nFor example, if one trader bets that the Democratic candidate will receive more votes than the Republican candidate, a second trader bets that the Republican candidate will receive more votes than the Libertarian candidate, and a third trader bets that the Libertarian candidate will receive more votes than the Democratic candidate, then, depending on the odds they each offer, there may be a three-way agreeable match even though no two-way matches exist.\nWe propose an exchange where traders have considerable flexibility to naturally and succinctly express their wagers, and examine the computational complexity of the auctioneer's resulting matching problem of identifying bilateral and multilateral agreements.\nIn particular, we focus on a setting where traders bet on the outcome of a competition among n candidates.\nFor example, suppose that there are n candidates in an election (or n horses in a race, etc.) and thus n!\npossible orderings of candidates after the final vote tally.\nTraders may like to bet on arbitrary properties of the final ordering, for example candidate D will win, candidate D will finish in either first place or last place, candidate D will defeat candidate R, candidates D and R will both defeat candidate L, etc.\nThe goal of the exchange is to search among all the offers to find two or more that together form an agreeable match.\nAs we shall see, the matching problem can be set up as a linear or integer program, depending on whether orders are divisible or indivisible, respectively.\nAttempting to reduce the problem to a bilateral matching problem by explicitly creating n!\nsecurities, one for each possible final ordering, is both cumbersome for the traders and computationally infeasible even for modest sized n. 
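The three-way example above can be checked mechanically. A sketch with hypothetical prices: each pair bet buys one share of a $1-contingent security, and the auctioneer has a riskless match exactly when the prices collected cover the payout in every possible final ordering:

```python
from itertools import permutations


def riskless_match(bets, candidates):
    """bets: (winner, loser, price) pair bets, one share each, paying $1
    if `winner` finishes ahead of `loser` in the final ordering.
    The auctioneer has a riskless (possibly multilateral) match if the
    prices collected cover the payout in every possible ordering."""
    collected = sum(price for _, _, price in bets)
    worst_payout = max(
        sum(1 for w, l, _ in bets if order.index(w) < order.index(l))
        for order in permutations(candidates)
    )
    return collected >= worst_payout


# The cyclic example above, at hypothetical prices of $0.70 per share:
# no two bets form a bilateral match, yet any ordering of D, R, L
# satisfies at most two of the three cyclic comparisons, so collecting
# 3 * 0.70 = 2.10 against a worst-case payout of 2 is riskless.
cycle = [("D", "R", 0.70), ("R", "L", 0.70), ("L", "D", 0.70)]
print(riskless_match(cycle, ["D", "R", "L"]))  # True
```

Note that any single one of these bets at the same price would not match, since one share alone can pay out $1 against only $0.70 collected; the agreement exists only multilaterally.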
Moreover, traders'' attention would be spread among n!\nindependent choices, making the likelihood of two traders converging at the same time and place seem remote.\nThere is a tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.\nWe want to offer traders the most expressive bidding language possible while maintaining computational feasibility.\nWe explore two bidding languages that seem natural from a trader perspective.\nSubset betting, described in Section 3.2, allows traders to bet on which positions in the ranking a candidate will fall, for example candidate D will finish in position 1, 3-5, or 10.\nSymetrically, traders can also bet on which candidates will fall in a particular position.\nIn Section 4, we derive a polynomial-time algorithm for matching (divisible) subset bets.\nThe key to the result is showing that the exponentially big linear program has a corresponding separation problem that reduces to maximum weighted bipartite matching and consequently we can solve it in time polynomial in the number of orders.\nPair betting, described in Section 3.3, allows traders to bet on the final ranking of any two candidates, for example candidate D will defeat candidate R.\nIn Section 5, we show that optimal matching of (divisible or indivisible) pair bets is NP-hard, via a reduction from the unweighted minimum feedback arc set problem.\nWe also provide a polynomiallyverifiable sufficient condition for the existence of a pairbetting match and show that a greedy algorithm offers poor approximation for indivisible pair bets.\n2.\nBACKGROUND AND RELATED WORK We consider permutation betting, or betting on the outcome of a competition among n candidates.\nThe final outcome or state s \u2208 S is an ordinal ranking of the n candidates.\nFor example, the candidates could be horses in a race and the outcome the list of horses in increasing order of their finishing times.\nThe state space S contains all 
n! mutually exclusive and exhaustive permutations of candidates.\nIn a typical horse race, people bet on properties of the outcome like horse A will win, horse A will show, or finish in either first or second place, or horses A and B will finish in first and second place, respectively.\nIn practice at the racetrack, each of these different types of bets is processed in a separate pool or group.\nIn other words, all the win bets are processed together, and all the show bets are processed together, but the two types of bets do not mix.\nThis separation can hurt liquidity and information aggregation.\nFor example, even though horse A is heavily favored to win, that may not directly boost the horse's odds to show.\nInstead, we describe a central exchange where all bets on the outcome are processed together, thus aggregating liquidity and ensuring that informational inference happens automatically.\nIdeally, we'd like to allow traders to bet on any property of the final ordering they like, stated in exactly the language they prefer.\nIn practice, allowing too flexible a language creates a computational burden for the auctioneer attempting to match willing traders.\nWe explore the tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.\nWe consider a framework where people propose to buy securities that pay $1 if and only if some property of the final ordering is true.\nTraders state the price they are willing to pay per share and the number of shares they would like to purchase.\n(Sell orders may not be explicitly needed, since buying the negation of an event is equivalent to selling the event.)\nA divisible order permits the trader to receive fewer shares than requested, as long as the price constraint is met; an indivisible order is an all-or-nothing order.\nThe description of bets in terms of prices and shares is without loss of generality: we can also allow bets to be described in terms of odds,
payoff vectors, or any of the diverse array of approaches practiced in financial and gambling circles.\nIn principle, we can do everything we want by explicitly offering n! securities, one for every state s \u2208 S (or in fact any set of n! linearly independent securities).\nThis is the so-called complete Arrow-Debreu securities market [1] for our setting.\nIn practice, traders do not want to deal with low-level specification of complete orderings: people think more naturally in terms of high-level properties of orderings.\nMoreover, operating n! securities is infeasible in practice from a computational point of view as n grows.\nA very simple bidding language might allow traders to bet only on who wins the competition, as is done in the win pool at racetracks.\nThe corresponding matching problem is polynomial; however, the language is not very expressive.\nA trader who believes that A will defeat B, but that neither will win outright, cannot usefully impart his information to the market.\nThe price space of the market reveals the collective estimates of win probabilities but nothing else.\nOur goal is to find languages that are as expressive and intuitive as possible and reveal as much information as possible, while maintaining computational feasibility.\nOur work is in direct analogy to work by Fortnow et al. [6].\nWhereas we explore permutation combinatorics, Fortnow et al.
explore Boolean combinatorics.\nThe authors consider a state space of the 2^n possible outcomes of n binary variables.\nTraders express bets in Boolean logic.\nThe authors show that divisible matching is co-NP-complete and indivisible matching is \u03a3_2^p-complete.\nHanson [9] describes a market scoring rule mechanism which can allow betting on a combinatorial number of outcomes.\nThe market starts with a joint probability distribution across all outcomes.\nIt works like a sequential version of a scoring rule.\nAny trader can change the probability distribution as long as he agrees to pay the most recent trader according to the scoring rule.\nThe market maker pays the last trader.\nHence, he bears risk and may incur a loss.\nMarket scoring rule mechanisms have the nice property that the worst-case loss of the market maker is bounded.\nHowever, the computational aspects of how to operate the mechanism have not been fully explored.\nOur mechanisms have an auctioneer who does not bear any risk and only matches orders.\nResearch on bidding languages and winner determination in combinatorial auctions [4, 14, 18] considers similar computational challenges in finding an allocation of items to bidders that maximizes the auctioneer's revenue.\nCombinatorial auctions allow bidders to place distinct values on bundles of goods rather than just on individual goods.\nUncertainty and risk are typically not considered and the central auctioneer problem is to maximize social welfare.\nOur mechanisms allow traders to construct bets for an event with n! outcomes.\nUncertainty and risk are considered and the auctioneer problem is to explore arbitrage opportunities and risklessly match up wagers.\n3.\nPERMUTATION BETTING In this section, we define the matching and optimal matching problems that an auctioneer needs to solve in a general permutation betting market.\nWe then illustrate the problem definitions in the context of the subset-betting and pair-betting markets.\n3.1 Securities,
Orders and Matching Problems Consider an event with n competing candidates where the outcome (state) is a ranking of the n candidates.\nThe bidding language of a market offering securities in the future outcomes determines the type and number of securities available and directly affects what information can be aggregated about the outcome.\nA fully expressive bidding language can capture any possible information that traders may have about the final ranking; a less expressive language limits the type of information that can be aggregated though it may enable a more efficient solution to the matching problem.\nFor any bidding language and number of securities in a permutation betting market, we can succinctly represent the problem of the auctioneer to risklessly match offers as follows.\nConsider an index set of bets or orders O which traders submit to the auctioneer.\nEach order i \u2208 O is a triple (bi, qi, \u03c6i), where bi denotes how much the trader is willing to pay for a unit share of security \u03c6i and qi is the number of shares of the security he wants to purchase at price bi.\nNaturally, bi \u2208 (0, 1) since a unit of the security pays off at most $1 when the event is realized.\nSince order i is defined for a single security \u03c6i, we will omit the security variable whenever it is clear from the context.\nThe auctioneer can accept or reject each order, or in a divisible world accept a fraction of the order.\nLet xi be the fraction of order i \u2208 O accepted.\nIn the indivisible version of the market xi = 0 or 1 while in the divisible version xi \u2208 [0, 1].\nFurther let Ii(s) be the indicator variable for whether order i is winning in state s, that is Ii(s) = 1 if the order is paid back $1 in state s and Ii(s) = 0 otherwise.\nThere are two possible problems that the auctioneer may want to solve.\nThe simpler one is to find a subset of orders that can be matched risk-free, namely a subset of orders which accepted together give a nonnegative 
profit to the auctioneer in every possible outcome.\nWe call this problem the existence of a match or, sometimes, simply the matching problem.\nThe more complex problem is for the auctioneer to find the optimal match with respect to some criterion such as profit, trading volume, etc.\nDefinition 1 (Existence of match, indivisible orders).\nGiven a set of orders O, does there exist a set of xi \u2208 {0, 1}, i \u2208 O, with at least one xi = 1 such that \u2211i (bi \u2212 Ii(s))qixi \u2265 0, \u2200s \u2208 S?\n(1) Similarly we can define the existence of a match with divisible orders.\nDefinition 2 (Existence of match, divisible orders).\nGiven a set of orders O, does there exist a set of xi \u2208 [0, 1], i \u2208 O, with at least one xi > 0 such that \u2211i (bi \u2212 Ii(s))qixi \u2265 0, \u2200s \u2208 S?\n(2) The existence of a match is a decision problem.\nIt only returns whether trade can occur at no risk to the auctioneer.\nIn addition to the risk-free requirement, the auctioneer can optimize some criterion in determining the orders to accept.\nSome reasonable objectives include maximizing the total trading volume in the market or the worst-case profit of the auctioneer.\nThe following optimal matching problems are defined for an auctioneer who maximizes his worst-case profit.\nDefinition 3 (Optimal match, indivisible orders).\nGiven a set of orders O, choose xi \u2208 {0, 1} such that the following mixed integer programming problem achieves its optimality\nmax xi,c c (3)\ns.t. \u2211i (bi \u2212 Ii(s)) qixi \u2265 c, \u2200s \u2208 S\nxi \u2208 {0, 1}, \u2200i \u2208 O.\nDefinition 4 (Optimal match, divisible orders).\nGiven a set of orders O, choose xi \u2208 [0, 1] such that the following linear programming problem achieves its optimality\nmax xi,c c (4)\ns.t.
\u2211i (bi \u2212 Ii(s)) qixi \u2265 c, \u2200s \u2208 S\n0 \u2264 xi \u2264 1, \u2200i \u2208 O.\nThe variable c is the worst-case profit for the auctioneer.\nNote that, strictly speaking, the optimal matching problems do not require solving the optimization problems (3) and (4), because only the optimal set of orders is needed.\nThe optimal worst-case profit may remain unknown.\n3.2 Subset Betting A subset betting market allows two different types of bets.\nTraders can bet on a subset of positions a candidate may end up at, or they can bet on a subset of candidates that will occupy a particular position.\nA security \u03b1|\u03a6 where \u03a6 is a subset of positions pays off $1 if candidate \u03b1 stands at a position that is an element of \u03a6 and it pays $0 otherwise.\nFor example, security \u03b1|{2, 4} pays $1 when candidate \u03b1 is ranked second or fourth.\nSimilarly, a security \u03a8|j where \u03a8 is a subset of candidates pays off $1 if any of the candidates in the set \u03a8 ranks at position j. For instance, security {\u03b1, \u03b3}|2 pays off $1 when either candidate \u03b1 or candidate \u03b3 is ranked second.\nThe auctioneer in a subset betting market faces a nontrivial matching problem, that is, to determine which orders to accept among all submitted orders i \u2208 O.
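For small n, the programs in Definitions 1-4 can be checked directly by enumerating all n! states. The following Python sketch (illustrative code only, not part of the paper; the helper names and state encoding are our own) brute-forces the indivisible optimal match of Definition 3 on the three-candidate subset-betting orders of Example 1 in Section 3.2, encoding a state as a tuple s with s[j] equal to the candidate at position j (0-indexed, with candidates 0, 1, 2 standing for alpha, beta, gamma):

```python
from itertools import permutations, product

# Orders from Example 1 (Section 3.2): (b, q, pays) where pays(s) says
# whether the security wins in state s; s[j] = candidate at position j.
ORDERS = [
    (0.6, 1, lambda s: s[0] == 0),          # alpha | {1}
    (0.7, 1, lambda s: 1 in (s[0], s[1])),  # beta  | {1, 2}
    (0.8, 1, lambda s: 2 in (s[0], s[2])),  # gamma | {1, 3}
    (0.7, 1, lambda s: s[2] == 1),          # beta  | {3}
]

def worst_case_profit(orders, accept, n):
    """min over all n! states of sum_i (b_i - I_i(s)) * q_i * x_i."""
    return min(
        sum(x * q * (b - (1 if pays(s) else 0))
            for (b, q, pays), x in zip(orders, accept))
        for s in permutations(range(n)))

def optimal_indivisible_match(orders, n):
    """Brute-force Definition 3: try every nonempty 0/1 acceptance
    vector and keep the one with the largest worst-case profit."""
    best = max(
        (a for a in product((0, 1), repeat=len(orders)) if any(a)),
        key=lambda a: worst_case_profit(orders, a, n))
    return best, worst_case_profit(orders, best, n)

accept, c = optimal_indivisible_match(ORDERS, 3)
print(accept, round(c, 2))  # accepts orders (2) and (4): (0, 1, 0, 1) 0.4
```

A match in the sense of Definition 1 exists exactly when this optimum is nonnegative. The enumeration over n! states is of course exponential, which is what the separation-oracle approach of Section 4 avoids.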
Note that although there are only n candidates and n possible positions, the number of available securities to bet on is exponential since a trader may bet on any of the 2^n subsets of candidates or positions.\nWith this, it is not immediately clear whether one can even find a trading partner or a match for trade to occur, or that the auctioneer can solve the matching problem in polynomial time.\nIn the next section, we will show that somewhat surprisingly there is an elegant polynomial solution to both the matching and optimal matching problems, based on classic combinatorial problems.\nWhen an order is accepted, the corresponding trader pays the submitted order price bi to the auctioneer and the auctioneer pays the winning orders $1 per share after the outcome is revealed.\nThe auctioneer has to carefully choose which orders and what fractions of them to accept so as to be guaranteed a nonnegative profit in any future state.\nThe following example illustrates the matching problem for indivisible orders in the subset-betting market.\nExample 1.\nSuppose n = 3.\nObjects \u03b1, \u03b2, and \u03b3 compete for positions 1, 2, and 3 in a competition.\nThe auctioneer receives the following 4 orders: (1) buy 1 share \u03b1|{1} at price $0.6; (2) buy 1 share \u03b2|{1, 2} at price $0.7; (3) buy 1 share \u03b3|{1, 3} at price $0.8; and (4) buy 1 share \u03b2|{3} at price $0.7.\nThere are 6 possible states of ordering: \u03b1\u03b2\u03b3, \u03b1\u03b3\u03b2, \u03b2\u03b1\u03b3, \u03b2\u03b3\u03b1, \u03b3\u03b1\u03b2, and \u03b3\u03b2\u03b1.\nThe corresponding state-dependent profit of the auctioneer for each order can be calculated as the following vectors, c1 = (\u22120.4, \u22120.4, 0.6, 0.6, 0.6, 0.6) c2 = (\u22120.3, 0.7, \u22120.3, \u22120.3, 0.7, \u22120.3) c3 = (\u22120.2, 0.8, \u22120.2, 0.8, \u22120.2, \u22120.2) c4 = ( 0.7, \u22120.3, 0.7, 0.7, \u22120.3, 0.7).\nThe 6 columns correspond to the 6 future states.\nFor indivisible orders, the auctioneer can either accept
orders (2) and (4) and obtain profit vector c = (0.4, 0.4, 0.4, 0.4, 0.4, 0.4), or accept orders (2), (3), and (4) and obtain state-dependent profit c = (0.2, 1.2, 0.2, 1.2, 0.2, 0.2).\n3.3 Pair Betting A pair betting market allows traders to bet on whether one candidate will rank higher than another candidate, in an outcome which is a permutation of n candidates.\nA security \u03b1 > \u03b2 pays off $1 if candidate \u03b1 is ranked higher than candidate \u03b2 and $0 otherwise.\nThere are a total of n(n \u2212 1) different securities offered in the market, each corresponding to an ordered pair of candidates.\nTraders place orders of the form buy qi shares of \u03b1 > \u03b2 at price per share no greater than bi.\nIn general, bi should be between 0 and 1.\nAgain the order can be either indivisible or divisible and the auctioneer needs to decide what fraction xi of each order to accept so as not to incur any loss, with xi \u2208 {0, 1} for indivisible and xi \u2208 [0, 1] for divisible orders.\nFigure 1: Every cycle has a negative worst-case profit of \u22120.02 (for the cycles of length 4) or less (for the cycles of length 6); however, accepting all edges in full gives a positive worst-case profit of 0.44.\nThe same definitions for existence of a match and optimal match from Section 3.1 apply.\nThe orders in the pair-betting market have a natural interpretation as a graph, where the candidates are nodes in the graph and each order which ranks a pair of candidates \u03b1 > \u03b2 is represented by a directed edge e = (\u03b1, \u03b2) with price be and weight qe.\nWith this interpretation, it is tempting to assume that a necessary condition for a match is to have a cycle in the graph with a nonnegative worst-case profit.\nAssuming qe = 1 for all e, this is a cycle C with a total of |C| edges such that the worst-case profit for the auctioneer is \u2211e\u2208C be \u2212 (|C| \u2212 1) \u2265 0, since in the worst-case state the auctioneer
needs to pay $1 to every order in the cycle except one.\nHowever, the example in Figure 1 shows that this is not the case: we may have a set of orders in which every single cycle has a negative worst-case profit, and yet there is a positive worst-case match overall.\nThe edge labels in the figure are the prices be; both the optimal divisible and indivisible solutions in this case accept all orders in full, xe = 1.\n4.\nCOMPLEXITY OF SUBSET BETTING The matching problems of the auctioneer in any permutation market, including the subset betting market, have n! constraints.\nBrute-force methods would take exponential time to solve.\nHowever, given the special form of the securities in the subset betting market, we can show that the matching problems for divisible orders can be solved in polynomial time.\nTheorem 1.\nThe existence of a match and the optimal match problems with divisible orders in a subset betting market can both be solved in polynomial time.\nProof.\nConsider the linear programming problem (4) for finding an optimal match.\nThis linear program has |O| + 1 variables, one variable xi for each order i and the profit variable c.\nIt also has exponentially many constraints.\nHowever, we can solve the program in time polynomial in the number of orders |O| by using the ellipsoid algorithm, as long as we can efficiently solve its corresponding separation problem in polynomial time [7, 8].\nThe separation problem for a linear program takes as input a vector of variable values and either verifies that the vector is feasible or returns a violated constraint.\nFor given values of the variables, a violated constraint in Eq.\n(4) asks whether there is a state or permutation s in which the profit is less than c, and can be rewritten as \u2211i Ii(s)qixi > \u2211i biqixi \u2212 c for some s \u2208 S.
(5) Thus it suffices to show how to efficiently find a state s satisfying the above inequality (5) or verify that the opposite inequality holds for all states s.\nWe will show that the separation problem can be reduced to the maximum weighted bipartite matching1 problem [3].\nThe left hand side in Eq.\n(5) is the total money that the auctioneer needs to pay back to the winning traders in state s.\nThe first term on the right hand side is the total money collected by the auctioneer and it is fixed for a given solution vector of xi's and c.\nA weighted bipartite graph can be constructed between the set of candidates and the set of positions.\nFor every order of the form \u03b1|\u03a6 there are edges from candidate node \u03b1 to every position node in \u03a6.\nFor orders of the form \u03a8|j there are edges from each candidate in \u03a8 to position j. For each order i we put weight qixi on each of these edges.\nAll multi-edges with the same end points are then replaced with a single edge that carries the total weight of the multi-edge.\nEvery state s then corresponds to a perfect matching in the bipartite graph.\nIn addition, the auctioneer pays out to the winners the sum of all edge weights in the perfect matching since every candidate can only stand in one position and every position is taken by one candidate.\nThus, the auctioneer's worst-case state and payment are the solution to the maximum weighted bipartite matching problem, which has known polynomial-time algorithms [12, 13].\nHence, the separation problem can be solved in polynomial time.\nNaturally, if the optimal solution to (4) gives a worst-case profit of c\u2217 > 0, there exists a matching.\nThus, the matching problem can also be solved in polynomial time.\n5.\nCOMPLEXITY OF PAIR BETTING In this section we show that a slight change of the bidding language may bring about a dramatic change in the complexity of the optimal matching problem of the auctioneer.\nIn particular, we show that finding the
optimal match in the pair betting market is NP-hard for both divisible and indivisible orders.\nWe then identify a polynomially-verifiable sufficient condition for deciding the existence of a match.\nThe hardness results are surprising especially in light of the observation that a pair betting market has a seemingly more restrictive bidding language which only offers n(n\u22121) securities.\n(1 The notion of perfect matching in a bipartite graph, which we use only in the proof of Theorem 1, should not be confused with the notion of matching bets which we use throughout the paper.)\nIn contrast, the subset betting market enables traders to bet on an exponential number of securities and yet has a polynomial-time solution for finding the optimal match.\nOur hope is that the comparison of the complexities of the subset and pair betting markets will offer insight into what makes a bidding language expressive while at the same time enabling an efficient matching solution.\nIn all the analysis that follows, we assume that traders submit unit orders in pair betting markets, that is, qi = 1.\nA set of orders O received by the auctioneer in a pair betting market with unit orders can be represented by a directed graph, G(V, E), where the vertex set V contains candidates that traders bet on.\nAn edge e \u2208 E, denoted (\u03b1, \u03b2, be), represents an order to buy 1 share of the security \u03b1 > \u03b2 at price be.\nAll edges have equal weight of 1.\nWe adopt the following notation throughout the paper: \u2022 G(V, E): original equally weighted directed graph for the set of unit orders O. \u2022 be: price of the order for edge e.
\u2022 G\u2217(V\u2217, E\u2217): a weighted directed graph of accepted orders for optimal matching, where edge weight xe is the quantity of order e accepted by the auctioneer.\nxe = 1 for indivisible orders and 0 < xe \u2264 1 for divisible orders.\n\u2022 H(V, E): a generic weighted directed graph of accepted orders.\n\u2022 k(H): solution to the unweighted minimum feedback arc set problem on graph H. k(H) is the minimum number of edges to remove so that H becomes acyclic.\n\u2022 l(H): solution to the weighted minimum feedback arc set problem on graph H. l(H) is the minimum total weight of a set of edges which, when removed, leaves H acyclic.\n\u2022 c(H): worst-case profit of the auctioneer if he accepts all orders in graph H.\n\u2022 \u03b5: a sufficiently small positive real number.\nWhere not stated otherwise, \u03b5 < 1/(2|E|) for a graph H(V, E).\nIn other cases, the value is determined in context.\nA feedback arc set of a directed graph is a set of edges which, when removed from the graph, leaves a directed acyclic graph (DAG).\nThe unweighted minimum feedback arc set problem is to find a feedback arc set of minimum cardinality, while the weighted minimum feedback arc set problem seeks a feedback arc set of minimum total edge weight.\nBoth the unweighted and weighted minimum feedback arc set problems have been shown to be NP-complete [10].\nWe will use this result in our complexity analysis of pair betting markets.\n5.1 Optimal Indivisible Matching The auctioneer's optimal indivisible matching problem is introduced in Definition 3 of Section 3.\nAssuming unit orders and considering the order graph G(V, E), we restate the auctioneer's optimal matching problem in a pair betting market as picking a subset of edges to accept such that the worst-case profit is maximized in the following optimization problem,\nmax xe,c c (6)\ns.t. \u2211e (be \u2212 Ie(s)) xe \u2265 c, \u2200s \u2208 S\nxe \u2208 {0, 1}, \u2200e \u2208 E.
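For intuition, problem (6) can be brute-forced on tiny instances. The sketch below (a hypothetical example of ours, not from the paper) uses three unit orders forming a directed cycle A > B, B > C, C > A, each priced at 0.8; since at most two of the three orders can win under any ranking, accepting all of them risklessly earns at least 3(0.8) − 2 = 0.4:

```python
from itertools import permutations, product

# Hypothetical unit orders: buy 1 share of "u defeats v" at price b.
EDGES = [("A", "B", 0.8), ("B", "C", 0.8), ("C", "A", 0.8)]
CANDIDATES = ["A", "B", "C"]

def worst_case_profit(edges, accept):
    """min over all rankings s of sum_e (b_e - I_e(s)) * x_e, where
    the order for edge (u, v) wins in s iff u is ranked above v."""
    worst = float("inf")
    for s in permutations(CANDIDATES):
        rank = {cand: pos for pos, cand in enumerate(s)}
        profit = sum(x * (b - (1 if rank[u] < rank[v] else 0))
                     for (u, v, b), x in zip(edges, accept))
        worst = min(worst, profit)
    return worst

# Brute-force the integer program (6) over all acceptance vectors.
best = max((a for a in product((0, 1), repeat=len(EDGES)) if any(a)),
           key=lambda a: worst_case_profit(EDGES, a))
print(best, round(worst_case_profit(EDGES, best), 2))  # (1, 1, 1) 0.4
```

Dropping any single order here leaves an acyclic pair whose worst-case profit is negative, consistent with the intuition that risk-free pair-betting profit comes from cycles (though, as the Figure 1 example shows, the optimal match need not decompose into individually profitable cycles).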
Without loss of generality, we assume that there are no multi-edges in the order graph G.\nWe show that the optimal matching problem for indivisible orders is NP-hard via a reduction from the unweighted minimum feedback arc set problem.\nThe latter takes as input a directed graph, and asks what is the minimum number of edges to delete from the graph so as to be left with a DAG.\nOur hardness proof is based on the following lemmas.\nLemma 2.\nSuppose the auctioneer accepts all edges in an equally weighted directed graph H(V, E) with edge price be = 1 \u2212 \u03b5 and edge weight xe = 1.\nThen the worst-case profit is equal to k(H) \u2212 \u03b5|E|, where k(H) is the solution to the unweighted minimum feedback arc set problem on H.\nProof.\nIf the order of an edge gets $1 payoff at the end of the market we call the edge a winning edge, otherwise it is called a losing edge.\nFor any state s, all winning edges necessarily form a DAG.\nConversely, for every DAG there is a state in which the DAG edges are winners (though the remaining edges in G are not necessarily losers).\nSuppose that in state s there are ws winning edges and ls = |E| \u2212 ws losing edges.\nThen, ls is the cardinality of a feedback arc set that consists of all losing edges in state s.\nThe edges that remain after deleting the minimum feedback arc set form the maximum DAG for the graph H.
Consider the state smax in which all edges of the maximum DAG are winners.\nThis gives the maximum number of winning edges wmax.\nAll other edges are necessarily losers in the state smax, since any edge which is not in the max DAG must form a cycle together with some of the DAG edges.\nThe number of losing edges in state smax is the cardinality of the minimum feedback arc set of H, that is |E| \u2212 wmax = k(H).\nThe profit of the auctioneer in a state s is profit(s) = \u2211e\u2208E be \u2212 ws = (1 \u2212 \u03b5)|E| \u2212 ws \u2265 (1 \u2212 \u03b5)|E| \u2212 wmax, where equality holds when s = smax.\nThus, the worst-case profit is achieved at state smax, profit(smax) = (|E| \u2212 wmax) \u2212 \u03b5|E| = k(H) \u2212 \u03b5|E|.\nConsider the graph of accepted orders for optimal matching, G\u2217(V\u2217, E\u2217), which consists of the optimal subset of edges E\u2217 to be accepted by the auctioneer, that is, edges with xe = 1 in the solution of the optimization problem (6).\nWe have the following lemma.\nLemma 3.\nIf the edge prices are be = 1 \u2212 \u03b5, then the optimal indivisible solution graph G\u2217 has the same unweighted minimum feedback arc set size as the graph of all orders G, that is, k(G\u2217) = k(G).\nFurthermore, G\u2217 is the smallest such subgraph of G, i.e., it is the subgraph of G with the smallest number of edges that has the same unweighted minimum feedback arc set size as G.
Proof.\nG\u2217 is a subgraph of G, hence the minimum number of edges to break cycles in G\u2217 is no more than that in G, namely k(G\u2217) \u2264 k(G).\nSuppose k(G\u2217) < k(G).\nSince both k(G\u2217) and k(G) are integers, for any \u03b5 < 1/|E| we have that k(G\u2217) \u2212 \u03b5|E\u2217| < k(G) \u2212 \u03b5|E|.\nHence by Lemma 2, the auctioneer has a higher worst-case profit by accepting G than accepting G\u2217, which contradicts the optimality of G\u2217.\nFinally, the worst-case profit k(G) \u2212 \u03b5|E\u2217| is maximized when |E\u2217| is minimized.\nHence, G\u2217 is the smallest subgraph of G such that k(G\u2217) = k(G).\nThe above two lemmas prove that the maximum worst-case profit in the optimal indivisible matching is directly related to the size of the minimum feedback arc set.\nThus computing each automatically gives the other, hence computing the maximum worst-case profit in the indivisible pair betting problem is NP-hard.\nTheorem 4.\nComputing the maximum worst-case profit in indivisible pair betting is NP-hard.\nProof.\nBy Lemma 3, the maximum worst-case profit, which is the optimum of the mixed integer programming problem (6), is k(G) \u2212 \u03b5|E\u2217|, where |E\u2217| is the number of accepted edges.\nSince k(G) is an integer and \u03b5|E\u2217| \u2264 \u03b5|E| < 1, solving (6) will automatically give us the cardinality of the minimum feedback arc set of G, k(G).\nBecause the minimum feedback arc set problem is NP-complete [10], computing the maximum worst-case profit is NP-hard.\nTheorem 4 states that solving the optimization problem is hard, because even if the optimal set of orders is provided, computing the optimal worst-case profit from accepting those orders is NP-hard.\nHowever, it does not imply whether the optimal matching problem, i.e.
finding the optimal set of orders to accept, is NP-hard.\nIt may be possible to determine which edges in a graph participate in the optimal match, yet be unable to compute the corresponding worst-case profit.\nNext, we prove that the indivisible optimal matching problem is actually NP-hard.\nWe will use the following short fact repeatedly.\nLemma 5 (Edge removal lemma).\nGiven a weighted graph H(V, E), removing a single edge e with weight xe from the graph decreases the weighted minimum feedback arc set solution l(H) by no more than xe and reduces the unweighted minimum feedback arc set solution k(H) by no more than 1.\nProof.\nSuppose the weighted minimum feedback arc set for the graph H \u2212 {e} is F.\nThen F \u222a {e} is a feedback arc set for H, and has total edge weight l(H \u2212 {e}) + xe.\nBecause l(H) is the solution to the weighted minimum feedback arc set problem on H, we have l(H) \u2264 l(H \u2212 {e}) + xe, implying that l(H \u2212 {e}) \u2265 l(H) \u2212 xe.\nSimilarly, suppose the unweighted minimum feedback arc set for the graph H \u2212 {e} is F'.\nThen F' \u222a {e} is a feedback arc set for H, and has set cardinality k(H \u2212 {e}) + 1.\nBecause k(H) is the solution to the unweighted minimum feedback arc set problem on H, we have k(H) \u2264 k(H \u2212 {e}) + 1, giving that k(H \u2212 {e}) \u2265 k(H) \u2212 1.\nTheorem 6.\nFinding the optimal match in indivisible pair betting is NP-hard.\nProof.\nWe reduce from the unweighted minimum feedback arc set problem again, although with a slightly more complex polynomial transformation involving multiple calls to the optimal match oracle.\nConsider an instance graph G of the minimum feedback arc set problem.\nWe are interested in computing k(G), the size of the minimum feedback arc set of G.
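Since the reduction works directly with k(G), it may help to pin the quantity down in code. The following brute-force sketch (our own illustration; exponential in |E|, consistent with the NP-completeness of the problem) computes k by trying ever-larger edge-removal sets until the remainder is acyclic:

```python
from itertools import combinations

def has_cycle(nodes, edges):
    """Standard three-color DFS cycle detection on a directed graph."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in nodes}
    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in nodes)

def k(nodes, edges):
    """Unweighted minimum feedback arc set size: the fewest edges
    whose removal leaves a DAG (checked by brute force)."""
    for size in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), size):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if not has_cycle(nodes, kept):
                return size

# A graph with two edge-disjoint cycles (the triangle ABC and the
# 2-cycle between B and D): no single edge hits both, so k = 2.
print(k(["A", "B", "C", "D"],
        [("A", "B"), ("B", "C"), ("C", "A"), ("B", "D"), ("D", "B")]))
```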
Suppose we have an oracle which solves the optimal matching problem.\nDenote by optimal_match(G') the output of the optimal matching oracle on graph G' with prices be = 1 \u2212 \u03b5 on all its edges.\nBy Lemma 3, on input G', the oracle optimal_match returns the subgraph of G' with the smallest number of edges that has the same minimum feedback arc set size as G'.\nThe following procedure finds k(G) by using polynomially many calls to the optimal_match oracle on a sequence of subgraphs of G.\nset G' := G\niterations := 0\nwhile (G' has nonempty edge set)\n    reset G' := optimal_match(G')\n    if (G' has nonempty edge set)\n        increment iterations by 1\n        reset G' by removing any edge e\n    end if\nend while\nreturn (iterations)\nThis procedure removes edges from the original graph G layer by layer until the graph is empty, while at the same time computing the minimum feedback arc set size k(G) of the original graph as the number of iterations.\nIn each iteration, we start with a graph G' and replace it with the smallest subgraph optimal_match(G') that has the same k(G').\nAt this stage, removing an additional edge e necessarily results in k(G' \u2212 {e}) = k(G') \u2212 1, because k(G' \u2212 {e}) < k(G') by the optimality of G', and k(G' \u2212 {e}) \u2265 k(G') \u2212 1 by the edge-removal lemma.\nTherefore, in each iteration the cardinality of the minimum feedback arc set gets reduced exactly by 1.\nHence the number of iterations is equal to k(G).\nNote that this procedure gives a polynomial reduction from the unweighted minimum feedback arc set problem to the optimal matching problem, which calls the optimal matching oracle exactly k(G) \u2264 |E| times, where |E| is the number of edges of G.
Hence the optimal matching problem is NP-hard.\n5.2 Optimal Divisible Matching When orders are divisible, the auctioneer's optimal matching problem is described in Definition 4 of Section 3.\nAssuming unit orders and considering the order graph G(V, E), we restate the auctioneer's optimal matching problem for divisible orders as choosing the quantity of each order to accept, xe \u2208 [0, 1], such that the worst-case profit is maximized in the following linear programming problem,\nmax xe,c c (7)\ns.t. \u2211e (be \u2212 Ie(s)) xe \u2265 c, \u2200s \u2208 S\nxe \u2208 [0, 1], \u2200e \u2208 E.\nWe still assume that there are no multi-edges in the order graph G.\nWhen orders are divisible, the auctioneer can be better off by accepting partial orders.\nExample 2 shows a situation where accepting partial orders generates higher worst-case profit than the optimal indivisible solution.\nExample 2.\nWe show that the linear program (7) sometimes has a non-integer optimal solution.\nFigure 2: An order graph; letters on edges represent order prices.\nConsider the graph in Figure 2.\nThere are a total of five cycles in the graph: three four-edge cycles ABCD, ABEF, CDEF, and two six-edge cycles ABCDEF and ABEFCD.\nSuppose each edge has price b such that 4b \u2212 3 > 0 and 6b \u2212 5 < 0, namely b \u2208 (.75, .80), for example b = .78.\nWith this, the optimal indivisible solution consists of at most one four-edge cycle, with worst-case profit 4b \u2212 3.\nOn the other hand, taking a 1/2 fraction of each of the three four-edge cycles would yield a higher worst-case profit of (3/2)(4b \u2212 3).\nDespite the potential profit increase from accepting divisible orders, the auctioneer's optimal matching problem remains NP-hard for divisible orders, as presented below via several lemmas and theorems.\nLemma 7.\nSuppose the auctioneer accepts orders described by a weighted directed graph H(V, E), with edge weight xe being the quantity accepted for edge order e.\nThe
worst-case profit for the auctioneer is c(H) = \u2211e\u2208E (be \u2212 1)xe + l(H).\n(8) Proof.\nFor any state s, the winning edges form a DAG.\nThus, the worst-case profit for the auctioneer is achieved at the state(s) where the total quantity of losing orders is minimized.\nThe minimum total quantity of losing orders is the solution to the weighted minimum feedback arc set problem on H, that is l(H).\nConsider the graph of accepted orders for optimal divisible matching, G\u2217(V\u2217, E\u2217), which consists of the optimal subset of edges E\u2217 to be accepted by the auctioneer, with edge weight xe > 0 taken from the optimal solution of the linear program (7).\nWe have the following lemmas.\nLemma 8.\nl(G\u2217) \u2264 k(G\u2217) \u2264 k(G).\nProof.\nl(G\u2217) is the solution of the weighted minimum feedback arc set problem on G\u2217, while k(G\u2217) is the solution of the unweighted minimum feedback arc set problem on G\u2217.\nWhen all edge weights in G\u2217 are 1, l(G\u2217) = k(G\u2217).\nWhen the xe's are less than 1, l(G\u2217) can be less than or equal to k(G\u2217).\nSince G\u2217 is a subgraph of G but possibly with different edge weights, k(G\u2217) \u2264 k(G).\nHence, we have the above relation.\nLemma 9.\nThere exists some \u03b5 such that when all edge prices are be = 1 \u2212 \u03b5, l(G\u2217) = k(G).\nProof.\nFrom Lemma 8, l(G\u2217) \u2264 k(G).\nWe know that the auctioneer's worst-case profit when accepting G\u2217 is c(G\u2217) = \u2211e\u2208E\u2217 (be \u2212 1)xe + l(G\u2217) = l(G\u2217) \u2212 \u03b5\u2211e\u2208E\u2217 xe.\nWhen he accepts the original order graph G in full, his worst-case profit is c(G) = \u2211e\u2208E (be \u2212 1) + k(G) = k(G) \u2212 \u03b5|E|.\nSuppose l(G\u2217) < k(G).\nIf |E| \u2212 \u2211e\u2208E\u2217 xe = 0, it means that G\u2217 is G.
Hence, $l(G^*) = k(G)$ regardless of $\epsilon$, which contradicts the assumption $l(G^*) < k(G)$. If $|E| - \sum_{e \in E^*} x_e > 0$, then when
$$\epsilon < \frac{k(G) - l(G^*)}{|E| - \sum_{e \in E^*} x_e},$$
$c(G)$ is strictly greater than $c(G^*)$, contradicting the optimality of $c(G^*)$. Because the $x_e$ are at most 1, $l(G^*) > k(G)$ is impossible. Thus, $l(G^*) = k(G)$.

Theorem 10. Finding the optimal worst-case profit in divisible pair betting is NP-hard.

Proof. Given the optimal set of partial orders to accept for $G$ when all edge prices are $(1 - \epsilon)$, if we can calculate the optimal worst-case profit, then by Lemma 9 we can solve the unweighted minimum feedback arc set problem on $G$, which is NP-hard. Hence, finding the optimal worst-case profit is NP-hard.

Theorem 10 states that solving the linear program (7) is NP-hard. As in the indivisible case, we still need to prove that merely finding the optimal divisible match is hard, as opposed to computing the optimal worst-case profit. Since in the divisible case the edges do not necessarily have unit weights, the proof of Theorem 6 does not apply directly. However, using an additional property of the divisible case, we can augment the procedure from the indivisible hardness proof to compute the unweighted minimum feedback arc set size $k(G)$ here as well. First, note that the optimal divisible subgraph $G^*$ of a graph $G$ is the weighted subgraph with weighted minimum feedback arc set size $l(G^*) = k(G)$ and smallest sum of edge weights $\sum_{e \in E^*} x_e$, since its corresponding worst-case profit is $k(G) - \epsilon \sum_{e \in E^*} x_e$ according to Lemmas 7 and 9.

Lemma 11. Suppose graph $H$ satisfies $l(H) = k(H)$, and we remove from it an edge $e$ with weight $x_e < 1$. Then $k(H - \{e\}) = k(H)$.

Proof. Assume the contrary, namely $k(H - \{e\}) < k(H)$. Then by Lemma 5, $k(H - \{e\}) = k(H) - 1$. Since removing a single edge cannot reduce the minimum feedback arc set by more than
the edge weight,
$$l(H) - x_e \le l(H - \{e\}). \qquad (9)$$
On the other hand, $H - \{e\} \subset H$, so we have
$$l(H - \{e\}) \le k(H - \{e\}) = k(H) - 1 = l(H) - 1. \qquad (10)$$
Combining (9) and (10), we get $x_e \ge 1$, a contradiction.

Therefore, removing any edge with less than unit weight from an optimal divisible graph does not change $k(H)$, the minimum feedback arc set size of the unweighted version of the graph. We can now augment the procedure for the indivisible case in Theorem 6 to prove hardness of the divisible version, as follows.

Theorem 12. Finding the optimal match in divisible pair betting is NP-hard.

Proof. We reduce from the unweighted minimum feedback arc set problem for graph $G$. Suppose we have an oracle for the optimal divisible problem, called optimal_divisible_match, which on input graph $H$ computes edge weights $x_e \in (0,1]$ for the optimal subgraph $H^*$ of $H$, satisfying $l(H^*) = k(H)$. The following procedure outputs $k(G)$.

    set G' := G
    iterations := 0
    while (G' has a nonempty edge set)
        reset G' := optimal_divisible_match(G')
        while (G' has edges with weight < 1)
            remove an edge with weight < 1 from G'
            reset G' by setting all edge weights to 1
            reset G' := optimal_divisible_match(G')
        end while
        if (G' has a nonempty edge set)
            increment iterations by 1
            reset G' by removing any edge e
        end if
    end while
    return (iterations)

As in the proof of the corresponding Theorem 6 for the indivisible case, we compute $k(G)$ by iteratively removing edges and recomputing the optimal divisible solution on the remaining subgraph, until all edges are deleted. In each iteration of the outer while loop, the minimum feedback arc set is reduced by 1; thus the number of iterations is equal to $k(G)$. It remains to verify that each iteration reduces $k(G)$ by exactly 1. Starting from a graph at the beginning of an iteration, we compute its optimal divisible subgraph. We then keep removing one non-unit-weight edge at a time and recomputing the
optimal divisible subgraph, until the latter contains only edges with unit weight. By Lemma 11, throughout the iteration so far the minimum feedback arc set of the corresponding unweighted graph remains unchanged. Once the oracle returns a graph $G'$ with unit edge weights, removing any edge would reduce the minimum feedback arc set: otherwise $G'$ is not optimal, since $G' - \{e\}$ would have the same minimum feedback arc set but smaller total edge weight. By Lemma 5, removing a single edge cannot reduce the minimum feedback arc set by more than one; thus, as all edges have unit weight, $k(G')$ is reduced by exactly one. Hence $k(G)$ equals the value returned by the procedure, and the optimal matching problem for divisible orders is NP-hard.

5.3 Existence of a Match

Knowing that the optimal matching problem is NP-hard for both indivisible and divisible orders in pair betting, we ask whether the auctioneer can at least identify the existence of a match efficiently. Lemma 13 states a sufficient condition for the existence of a match with both indivisible and divisible orders.

Lemma 13. A sufficient condition for the existence of a match in pair betting is that there exists a cycle $C$ in $G$ such that
$$\sum_{e \in C} b_e \ge |C| - 1, \qquad (11)$$
where $|C|$ is the number of edges in the cycle $C$.
Proof. The left-hand side of inequality (11) is the total payment that the auctioneer receives by accepting every unit order in the cycle $C$ in full. Because the direction of an edge represents the predicted ordering of the two connected candidates in the final ranking, a cycle embodies a logical contradiction among the predicted orderings. Hence, whichever state is realized, not all of the edges in the cycle can be winning edges. The worst case for the auctioneer is a state in which every edge in the cycle except one gets paid \$1, so the maximum total payment to traders is $|C| - 1$. Therefore, if inequality (11) is satisfied, the auctioneer obtains non-negative worst-case profit by accepting the orders in the cycle.

Identifying such a cycle with non-negative worst-case profit in an order graph $G$ can be done in polynomial time.

Lemma 14. It takes polynomial time to find a cycle in an order graph $G(V,E)$ with the highest worst-case profit, that is, to compute
$$\max_{C \in \mathcal{C}} \Big( \sum_{e \in C} b_e - (|C| - 1) \Big),$$
where $\mathcal{C}$ is the set of all cycles in $G$.

Proof. Because
$$\sum_{e \in C} b_e - (|C| - 1) = \sum_{e \in C} (b_e - 1) + 1 = 1 - \sum_{e \in C} (1 - b_e),$$
finding the cycle that gives the highest worst-case profit in the original order graph $G$ is equivalent to finding the shortest cycle in a converted graph $H(V,E)$, where $H$ is obtained by setting the weight of each edge $e$ in $G$ to $(1 - b_e)$. Finding the shortest cycle in $H$ can be done in polynomial time by reduction to the shortest path problem. For every vertex $v \in V$, we consider each neighbor $w$ with $(v,w) \in E$ and find the shortest path from $w$ to $v$, of length $path(w,v)$. The shortest cycle through $v$ is found by choosing the $w$ minimizing $e_{(v,w)} + path(w,v)$, where $e_{(v,w)}$ is the weight of edge $(v,w)$. Comparing the shortest cycles found for every vertex, we can then determine the shortest overall cycle in $H$.
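The search in the proof of Lemma 14 is straightforward to implement. A sketch using Dijkstra's algorithm on the converted weights $1 - b_e$ (this assumes all prices satisfy $b_e \le 1$, so weights are non-negative; the function names are ours):

```python
import heapq

def best_cycle_profit(prices):
    """Highest worst-case profit of any single cycle, or None if acyclic.

    prices maps a directed edge (u, v) to its per-unit price b_e.
    The profit of a cycle C is 1 - sum_{e in C} (1 - b_e), so we look
    for the shortest cycle under edge weights 1 - b_e.
    """
    adj = {}
    for (u, v), b in prices.items():
        adj.setdefault(u, []).append((v, 1.0 - b))

    def shortest_from(src):
        dist, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale heap entry
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    best = None
    for (u, v), b in prices.items():
        back = shortest_from(v)          # shortest path from w = v back to u
        if u in back:
            cycle_weight = (1.0 - b) + back[u]
            best = cycle_weight if best is None else min(best, cycle_weight)
    return None if best is None else 1.0 - best
```

On a 3-cycle with every price 0.9 this returns 0.7; since that is non-negative, a match exists by Lemma 13.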
Because the shortest path problem can be solved in polynomial time [3], we can find the solution to our problem in polynomial time.

If the worst-case profit of the optimal cycle is non-negative, then a match exists in $G$. However, the condition in Lemma 13 is not a necessary condition for the existence of a match: even if every single cycle in the order graph has negative worst-case profit, the auctioneer may accept multiple interweaving cycles and obtain positive worst-case profit. Figure 1 exhibits such a situation. If the optimal indivisible match consists only of edge-disjoint cycles, a natural greedy algorithm can find the cycle that gives the highest worst-case profit, remove its edges from the graph, and proceed until no more cycles exist. However, we show that such a greedy algorithm can give a very poor approximation.

Figure 3: A graph with $n$ vertices and $n + \sqrt{n}$ edges on which the greedy algorithm finds only two cycles: the cycle in the center and the unique remaining cycle. The labels in the faces give the number of edges in the corresponding cycle.

Lemma 15. The greedy algorithm gives at most an $O(\sqrt{n})$-approximation to the maximum number of disjoint cycles.

Proof. Consider the graph in Figure 3, consisting of a cycle with $\sqrt{n}$ edges, each of which participates in another (otherwise disjoint) cycle with $\sqrt{n} + 1$ edges. Suppose all edge prices are $(1 - \epsilon)$. The maximum number of disjoint cycles is clearly $\sqrt{n}$, taking all cycles of length $\sqrt{n} + 1$. Because smaller cycles give higher worst-case profit, the greedy algorithm first selects the cycle of length $\sqrt{n}$, after which only one cycle, of length $n$, remains.
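The construction in Figure 3 can be replayed in code. A sketch for $n = 9$ (so $\sqrt{n} = 3$), with a greedy that always removes a minimum-length cycle found by BFS through each edge (all names are ours):

```python
from collections import deque

def shortest_cycle(edges):
    """A directed cycle with fewest edges, as an edge list, or None."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    best = None
    for u, v in edges:                 # cycle = edge (u, v) + shortest path v -> u
        parent, queue = {v: None}, deque([v])
        while queue:
            x = queue.popleft()
            if x == u:
                break
            for y in adj.get(x, []):
                if y not in parent:
                    parent[y] = x
                    queue.append(y)
        if u in parent:
            path = [u]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            nodes = path[::-1]         # v, ..., u
            cycle = [(u, v)] + list(zip(nodes, nodes[1:]))
            if best is None or len(cycle) < len(best):
                best = cycle
    return best

def greedy_cycle_count(edges):
    """Repeatedly remove a shortest cycle; count how many were found."""
    remaining, count = set(edges), 0
    while (cycle := shortest_cycle(remaining)) is not None:
        remaining -= set(cycle)
        count += 1
    return count

# Figure 3 for n = 9: a central triangle, each edge (u, v) of which also
# lies on an outer 4-edge cycle u -> v -> p_i -> q_i -> u.
triangle = [('A', 'B'), ('B', 'C'), ('C', 'A')]
edges = list(triangle)
for i, (u, v) in enumerate(triangle):
    edges += [(v, f'p{i}'), (f'p{i}', f'q{i}'), (f'q{i}', u)]
```

Greedy takes the central triangle first, chaining the leftover outer paths into a single 9-edge cycle, so it finds 2 cycles while the optimum packs 3 edge-disjoint 4-edge cycles; scaling $n$ recovers the $\sqrt{n}/2$ gap.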
Thus the total number of cycles selected by greedy is 2, and the approximation factor in this case is $\sqrt{n}/2$.

In light of Lemma 15, one may expect that greedy algorithms give $\sqrt{n}$-approximations at best. Approximation algorithms for finding the maximum number of edge-disjoint cycles have been considered by Krivelevich, Nutov and Yuster [11, 19]. Indeed, for the case of directed graphs, the authors show that a greedy algorithm gives a $\sqrt{n}$-approximation [11]. When the optimal match does not consist of edge-disjoint cycles, unlike the example of Figure 3, a greedy algorithm that searches for the best single cycles clearly fails.

6. CONCLUSION

We consider a permutation betting scenario, where traders wager on the final ordering of $n$ candidates. While it is unnatural and intractable to allow traders to bet directly on the $n!$ different final orderings, we propose two expressive betting languages, subset betting and pair betting. In a subset betting market, traders can bet either on a subset of positions a candidate may stand in or on a subset of candidates that may occupy a specific position in the final ordering. Pair betting allows traders to bet on whether one given candidate ranks higher than another given candidate. We examine the auctioneer's problem of matching orders without incurring risk. We find that in a subset betting market, an auctioneer can find, in polynomial time, the optimal set and quantity of orders to accept such that his worst-case profit is maximized, provided orders are divisible. The complexity changes dramatically for pair betting: we prove that the auctioneer's optimal matching problem is NP-hard for both indivisible and divisible orders, via reductions from the minimum feedback arc set problem. We identify a sufficient condition for the existence of a match, which can be verified in polynomial time. A natural greedy algorithm has been shown to give a poor approximation for indivisible pair betting. Interesting
open questions for our permutation betting include the computational complexity of optimal indivisible matching for subset betting and a necessary condition for the existence of a match in pair betting markets. We are also interested in further exploring better approximation algorithms for pair betting markets.

7. ACKNOWLEDGMENTS

We thank Ravi Kumar, Yishay Mansour, Amin Saberi, Andrew Tomkins, John Tomlin, and members of Yahoo! Research for valuable insights and discussions.

8. REFERENCES
[1] K. J. Arrow. The role of securities in the optimal allocation of risk-bearing. Review of Economic Studies, 31(2):91-96, 1964.
[2] J. E. Berg, R. Forsythe, F. D. Nelson, and T. A. Rietz. Results from a dozen years of election futures markets research. In C. A. Plott and V. Smith, editors, Handbook of Experimental Economic Results (forthcoming). 2001.
[3] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms (Second Edition). MIT Press and McGraw-Hill, 2001.
[4] P. Cramton, Y. Shoham, and R. Steinberg. Combinatorial Auctions. MIT Press, Cambridge, MA, 2005.
[5] R. Forsythe, T. A. Rietz, and T. W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.
[6] L. Fortnow, J. Kilian, D. M. Pennock, and M. P. Wellman. Betting boolean-style: A framework for trading in securities based on logical formulas. Decision Support Systems, 39(1):87-104, 2004.
[7] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169-197, 1981.
[8] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, Berlin Heidelberg, 1993.
[9] R. D. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):107-119, 2003.
[10] R. M.
Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations (Proc. Sympos., IBM Thomas J. Watson Res. Center, Yorktown Heights, N.Y.), pages 85-103. Plenum, New York, 1972.
[11] M. Krivelevich, Z. Nutov, and R. Yuster. Approximation algorithms for cycle packing problems. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 556-561, 2005.
[12] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83-97, 1955.
[13] J. Munkres. Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics, 5(1):32-38, 1957.
[14] N. Nisan. Bidding and allocation in combinatorial auctions. In Proceedings of the 2nd ACM Conference on Electronic Commerce (EC'00), Minneapolis, MN, 2000.
[15] D. M. Pennock, S. Lawrence, C. L. Giles, and F. A. Nielsen. The real power of artificial markets. Science, 291:987-988, February 2002.
[16] C. Plott and S. Sunder. Efficiency of experimental security markets with insider information: An application of rational expectations models. Journal of Political Economy, 90:663-98, 1982.
[17] C. Plott and S. Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56:1085-1118, 1988.
[18] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135:1-54, 2002.
[19] R. Yuster and Z.
Nutov. Packing directed cycles efficiently. In Proceedings of the 29th International Symposium on Mathematical Foundations of Computer Science (MFCS), 2004.

Betting on Permutations

ABSTRACT

We consider a permutation betting scenario, where people wager on the final ordering of $n$ candidates: for example, the outcome of a horse race. We examine the auctioneer's problem of risklessly matching up wagers or, equivalently, finding arbitrage opportunities among the proposed wagers. Requiring bidders to explicitly list the orderings that they'd like to bet on is both unnatural and intractable, because the number of orderings is $n!$ and the number of subsets of orderings is $2^{n!}$. We propose two expressive betting languages that seem natural for bidders, and examine the computational complexity of the auctioneer's problem in each case. Subset betting allows traders to bet either that a candidate will end up ranked among some subset of positions in the final ordering, for example "horse A will finish in position 4, 9, or 13-21", or that a position will be taken by some subset of candidates, for example "horse A, B, or D will finish in position 2". For subset betting, we show that the auctioneer's problem can be solved in polynomial time if orders are divisible. Pair betting allows traders to bet on whether one candidate will end up ranked higher than another candidate, for example "horse A will beat horse B". We prove that the auctioneer's problem becomes NP-hard for pair betting. We identify a sufficient condition for the existence of a pair betting match that can be verified in polynomial time. We also show that a natural greedy algorithm gives a poor approximation for indivisible orders.

1. INTRODUCTION

Buying or selling a financial security is, in effect, a wager on the security's value. For example, buying a stock is a bet that the stock's value is greater than its current price. Each trader evaluates his expected profit to decide the
quantity to buy or sell according to his own information and subjective probability assessment. The collective interaction of all bets leads to an equilibrium that reflects an aggregation of all the traders' information and beliefs. In practice, this aggregate market assessment of the security's value is often more accurate than other forecasts relying on experts, polls, or statistical inference [16, 17, 5, 2, 15].

Consider buying a security at a price of fifty-two cents that pays \$1 if and only if a Democrat wins the 2008 US Presidential election. The transaction is a commitment to accept a fifty-two-cent loss if a Democrat does not win in return for a forty-eight-cent profit if a Democrat does win. In this case of an event-contingent security, the price (the market's valuation of the security) corresponds directly to the estimated probability of the event.

Almost all existing financial and betting exchanges pair up bilateral trading partners. For example, one trader willing to accept an $x$ dollar loss if a Democrat does not win in return for a $y$ dollar profit if a Democrat wins is matched with a second trader willing to accept the opposite. However, in many scenarios, even if no bilateral agreements exist among traders, multilateral agreements may be possible. For example, if one trader bets that the Democratic candidate will receive more votes than the Republican candidate, a second trader bets that the Republican candidate will receive more votes than the Libertarian candidate, and a third trader bets that the Libertarian candidate will receive more votes than the Democratic candidate, then, depending on the odds they each offer, there may be a three-way agreeable match even though no two-way matches exist.

We propose an exchange where traders have considerable flexibility to naturally and succinctly express their wagers, and examine the computational complexity of the auctioneer's resulting matching problem of identifying bilateral and multilateral
agreements. In particular, we focus on a setting where traders bet on the outcome of a competition among $n$ candidates. For example, suppose there are $n$ candidates in an election (or $n$ horses in a race, etc.) and thus $n!$ possible orderings of candidates after the final vote tally. Traders may wish to bet on arbitrary properties of the final ordering, for example "candidate D will win", "candidate D will finish in either first place or last place", "candidate D will defeat candidate R", "candidates D and R will both defeat candidate L", etc. The goal of the exchange is to search among all the offers to find two or more that together form an agreeable match. As we shall see, the matching problem can be set up as a linear or integer program, depending on whether orders are divisible or indivisible, respectively. Attempting to reduce the problem to a bilateral matching problem by explicitly creating $n!$ securities, one for each possible final ordering, is both cumbersome for the traders and computationally infeasible even for modest-sized $n$.
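To make the integer-program view concrete for pair bets: with indivisible unit orders, one can in principle enumerate every subset of orders and every one of the $n!$ outcomes, keeping the subset whose worst-case profit is highest. A brute-force sketch (function names and the example are ours) that is exponential in both dimensions, which is exactly why explicit enumeration only works at toy sizes:

```python
from itertools import combinations, permutations

def optimal_indivisible_match(candidates, orders):
    """Exhaustive search for the risk-free, profit-maximizing subset.

    orders is a list of ((u, v), b): a unit bet at price b that
    u finishes ahead of v; a winning bet pays its trader $1.
    Returns (worst-case profit, accepted subset); the empty subset
    with profit 0.0 means no profitable match exists.
    """
    best_profit, best_subset = 0.0, ()
    for k in range(1, len(orders) + 1):
        for subset in combinations(orders, k):
            worst = float('inf')
            for ranking in permutations(candidates):
                pos = {c: i for i, c in enumerate(ranking)}
                profit = sum(b - (1 if pos[u] < pos[v] else 0)
                             for (u, v), b in subset)
                worst = min(worst, profit)
            if worst > best_profit:
                best_profit, best_subset = worst, subset
    return best_profit, best_subset
```

For three candidates with a cycle of bets A over B, B over C, and C over A at price 0.9 each, the whole cycle is accepted: some bet must lose in every outcome, leaving worst-case profit $3(0.9) - 2 = 0.7$.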
Moreover, traders' attention would be spread among $n!$ independent choices, making the likelihood of two traders converging at the same time and place seem remote.

There is a tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem. We want to offer traders the most expressive bidding language possible while maintaining computational feasibility. We explore two bidding languages that seem natural from a trader's perspective.

Subset betting, described in Section 3.2, allows traders to bet on which positions in the ranking a candidate will fall in, for example "candidate D will finish in position 1, 3-5, or 10". Symmetrically, traders can also bet on which candidates will fall in a particular position. In Section 4, we derive a polynomial-time algorithm for matching (divisible) subset bets. The key to the result is showing that the exponentially big linear program has a corresponding separation problem that reduces to maximum weighted bipartite matching, and consequently we can solve it in time polynomial in the number of orders.

Pair betting, described in Section 3.3, allows traders to bet on the final ranking of any two candidates, for example "candidate D will defeat candidate R". In Section 5, we show that optimal matching of (divisible or indivisible) pair bets is NP-hard, via a reduction from the unweighted minimum feedback arc set problem. We also provide a polynomially verifiable sufficient condition for the existence of a pair-betting match and show that a greedy algorithm offers a poor approximation for indivisible pair bets.

2. BACKGROUND AND RELATED WORK

We consider permutation betting, or betting on the outcome of a competition among $n$ candidates. The final outcome or state $s \in S$ is an ordinal ranking of the $n$ candidates. For example, the candidates could be horses in a race and the outcome the list of horses in increasing order of their finishing times. The state space $S$ contains all
$n!$ mutually exclusive and exhaustive permutations of the candidates. In a typical horse race, people bet on properties of the outcome like "horse A will win", "horse A will show, or finish in either first or second place", or "horses A and B will finish in first and second place, respectively". In practice at the racetrack, each of these different types of bets is processed in a separate pool or group. In other words, all the "win" bets are processed together, and all the "show" bets are processed together, but the two types of bets do not mix. This separation can hurt liquidity and information aggregation. For example, even though horse A is heavily favored to win, that may not directly boost the horse's odds to show. Instead, we describe a central exchange where all bets on the outcome are processed together, thus aggregating liquidity and ensuring that informational inference happens automatically.

Ideally, we'd like to allow traders to bet on any property of the final ordering they like, stated in exactly the language they prefer. In practice, allowing too flexible a language creates a computational burden for the auctioneer attempting to match willing traders. We explore the tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.

We consider a framework where people propose to buy securities that pay \$1 if and only if some property of the final ordering is true. Traders state the price they are willing to pay per share and the number of shares they would like to purchase. (Sell orders need not be handled explicitly, since buying the negation of an event is equivalent to selling the event.) A divisible order permits the trader to receive fewer shares than requested, as long as the price constraint is met; an indivisible order is all-or-nothing. The description of bets in terms of prices and shares is without loss of generality: we can also allow bets to be described in
terms of odds, payoff vectors, or any of the diverse array of approaches practiced in financial and gambling circles.

In principle, we can do everything we want by explicitly offering $n!$ securities, one for every state $s \in S$ (or in fact any set of $n!$ linearly independent securities). This is the so-called complete Arrow-Debreu securities market [1] for our setting. In practice, traders do not want to deal with low-level specifications of complete orderings: people think more naturally in terms of high-level properties of orderings. Moreover, operating $n!$ securities is computationally infeasible in practice as $n$ grows.

A very simple bidding language might allow traders to bet only on who wins the competition, as is done in the "win" pool at racetracks. The corresponding matching problem is polynomial; however, the language is not very expressive. A trader who believes that A will defeat B, but that neither will win outright, cannot usefully impart his information to the market. The price space of the market reveals the collective estimates of win probabilities but nothing else. Our goal is to find languages that are as expressive and intuitive as possible and reveal as much information as possible, while maintaining computational feasibility.

Our work is in direct analogy to the work of Fortnow et al. [6]. Whereas we explore permutation combinatorics, Fortnow et al.
explore Boolean combinatorics. The authors consider a state space of the $2^n$ possible outcomes of $n$ binary variables. Traders express bets in Boolean logic. The authors show that divisible matching is co-NP-complete and indivisible matching is $\Sigma_2^p$-complete.

Hanson [9] describes a market scoring rule mechanism which allows betting on a combinatorial number of outcomes. The market starts with a joint probability distribution across all outcomes and works like a sequential version of a scoring rule: any trader can change the probability distribution as long as he agrees to pay the most recent trader according to the scoring rule. The market maker pays the last trader; hence, he bears risk and may incur a loss. Market scoring rule mechanisms have the nice property that the worst-case loss of the market maker is bounded. However, the computational aspects of operating the mechanism have not been fully explored. Our mechanisms have an auctioneer who does not bear any risk and only matches orders.

Research on bidding languages and winner determination in combinatorial auctions [4, 14, 18] considers similar computational challenges in finding an allocation of items to bidders that maximizes the auctioneer's revenue. Combinatorial auctions allow bidders to place distinct values on bundles of goods rather than just on individual goods. Uncertainty and risk are typically not considered, and the central auctioneer problem is to maximize social welfare. Our mechanisms allow traders to construct bets for an event with $n!$ outcomes; uncertainty and risk are considered, and the auctioneer's problem is to find arbitrage opportunities and risklessly match up wagers.
E\n5.3 Existence of a Match\nCEC (E 1)\n6.\nCONCLUSION\nWe consider a permutation betting scenario, where traders wager on the final ordering of n candidates.\nWhile it is unnatural and intractable to allow traders to bet directly on the n!\ndifferent final orderings, we propose two expressive betting languages, subset betting and pair betting.\nIn a subset betting market, traders can bet either on a subset of positions that a candidate stands or on a subset of candidates who occupy a specific position in the final ordering.\nPair betting allows traders bet on whether one given candidate ranks higher than another given candidate.\nWe examine the auctioneer problem of matching orders without incurring risk.\nWe find that in a subset betting market an auctioneer can find the optimal set and quantity of orders to accept such that his worst-case profit is maximized in polynomial time if orders are divisible.\nThe complexity changes dramatically for pair betting.\nWe prove that the optimal matching problem for the auctioneer is NP-hard for pair betting with both indivisible and divisible orders via reductions to the minimum feedback arc set problem.\nWe identify a sufficient condition for the existence of a match, which can be verified in polynomial time.\nA natural greedy algorithm has been shown to give poor approximation for indivisible pair betting.\nInteresting open questions for our permutation betting include the computational complexity of optimal indivisible matching for subset betting and the necessary condition for the existence of a match in pair betting markets.\nWe are interested in further exploring better approximation algorithms for pair betting markets.","lvl-4":"Betting on Permutations\nABSTRACT\nWe consider a permutation betting scenario, where people wager on the final ordering of n candidates: for example, the outcome of a horse race.\nWe examine the auctioneer problem of risklessly matching up wagers or, equivalently, finding arbitrage 
opportunities among the proposed wagers.\nRequiring bidders to explicitly list the orderings that they'd like to bet on is both unnatural and intractable, because the number of orderings is n!\nand the number of subsets of orderings is 2n!\n.\nWe propose two expressive betting languages that seem natural for bidders, and examine the computational complexity of the auctioneer problem in each case.\nSubset betting allows traders to bet either that a candidate will end up ranked among some subset of positions in the final ordering, for example, \"horse A will finish in positions 4, 9, or 13-21\", or that a position will be taken by some subset of candidates, for example \"horse A, B, or D will finish in position 2\".\nFor subset betting, we show that the auctioneer problem can be solved in polynomial time if orders are divisible.\nPair betting allows traders to bet on whether one candidate will end up ranked higher than another candidate, for example \"horse A will beat horse B\".\nWe prove that the auctioneer problem becomes NP-hard for pair betting.\nWe identify a sufficient condition for the existence of a pair betting match that can be verified in polynomial time.\nWe also show that a natural greedy algorithm gives a poor approximation for indivisible orders.\n1.\nINTRODUCTION\nBuying or selling a financial security in effect is a wager on the security's value.\nFor example, buying a stock is a bet that the stock's value is greater than its current price.\nEach trader evaluates his expected profit to decide the quantity to buy or sell according to his own information and subjective probability assessment.\nThe collective interaction of all bets leads to an equilibrium that reflects an aggregation of all the traders' information and beliefs.\nConsider buying a security at price fifty-two cents, that pays $1 if and only if a Democrat wins the 2008 US Presidential election.\nIn this case of an event-contingent security, the price--the market's value of the 
security--corresponds directly to the estimated probability of the event.\nAlmost all existing financial and betting exchanges pair up bilateral trading partners.\nFor example, one trader willing to accept an x dollar loss if a Democrat does not win in return for a y dollar profit if a Democrat wins is matched up with a second trader willing to accept the opposite.\nHowever in many scenarios, even if no bilateral agreements exist among traders, multilateral agreements may be possible.\nWe propose an exchange where traders have considerable flexibility to naturally and succinctly express their wagers,\nand examine the computational complexity of the auctioneer's resulting matching problem of identifying bilateral and multilateral agreements.\nIn particular, we focus on a setting where traders bet on the outcome of a competition among n candidates.\nFor example, suppose that there are n candidates in an election (or n horses in a race, etc.) and thus n!\npossible orderings of candidates after the final vote tally.\nAs we shall see, the matching problem can be set up as a linear or integer program, depending on whether orders are divisible or indivisible, respectively.\nAttempting to reduce the problem to a bilateral matching problem by explicitly creating n!\nsecurities, one for each possible final ordering, is both cumbersome for the traders and computationally infeasible even for modest sized n. 
Moreover, traders' attention would be spread among n! independent choices, making the likelihood of two traders converging at the same time and place seem remote.\nThere is a tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.\nWe want to offer traders the most expressive bidding language possible while maintaining computational feasibility.\nWe explore two bidding languages that seem natural from a trader perspective.\nSubset betting, described in Section 3.2, allows traders to bet on which positions in the ranking a candidate will fall, for example \"candidate D will finish in position 1, 3-5, or 10\".\nSymmetrically, traders can also bet on which candidates will fall in a particular position.\nIn Section 4, we derive a polynomial-time algorithm for matching (divisible) subset bets.\nPair betting, described in Section 3.3, allows traders to bet on the final ranking of any two candidates, for example \"candidate D will defeat candidate R\".\nIn Section 5, we show that optimal matching of (divisible or indivisible) pair bets is NP-hard, via a reduction from the unweighted minimum feedback arc set problem.\nWe also provide a polynomially-verifiable sufficient condition for the existence of a pair-betting match and show that a greedy algorithm offers a poor approximation for indivisible pair bets.\n2.\nBACKGROUND AND RELATED WORK\nWe consider permutation betting, or betting on the outcome of a competition among n candidates.\nThe final outcome or state s ∈ S is an ordinal ranking of the n candidates.\nFor example, the candidates could be horses in a race and the outcome the list of horses in increasing order of their finishing times.\nThe state space S contains all n! mutually exclusive and exhaustive permutations of candidates.\nIn practice at the racetrack, each of these different types of bets is processed in separate pools or groups.\nInstead, we describe a central exchange where all bets on the 
outcome are processed together, thus aggregating liquidity and ensuring that informational inference happens automatically.\nIdeally, we'd like to allow traders to bet on any property of the final ordering they like, stated in exactly the language they prefer.\nIn practice, allowing too flexible a language creates a computational burden for the auctioneer attempting to match willing traders.\nWe explore the tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.\nWe consider a framework where people propose to buy securities that pay $1 if and only if some property of the final ordering is true.\nTraders state the price they are willing to pay per share and the number of shares they would like to purchase.\nA divisible order permits the trader to receive fewer shares than requested, as long as the price constraint is met; an indivisible order is an all-or-nothing order.\nIn principle, we can do everything we want by explicitly offering n! securities, one for every state s ∈ S (or in fact any set of n! linearly independent securities).\nThis is the so-called complete Arrow-Debreu securities market [1] for our setting.\nIn practice, traders do not want to deal with low-level specification of complete orderings: people think more naturally in terms of high-level properties of orderings.\nMoreover, operating n! securities is infeasible in practice from a computational point of view as n grows.\nA very simple bidding language might allow traders to bet only on who wins the competition, as is done in the \"win\" pool at racetracks.\nThe corresponding matching problem is polynomial; however, the language is not very expressive.\nA trader who believes that A will defeat B, but that neither will win outright, cannot usefully impart his information to the market.\nThe price space of the market reveals the collective estimates of win probabilities but nothing else.\nOur goal is to find languages that are as expressive and intuitive as possible and reveal as much information as 
possible, while maintaining computational feasibility.\nOur work is in direct analogy to work by Fortnow et al. [6].\nWhereas we explore permutation combinatorics, Fortnow et al. explore Boolean combinatorics.\nThe authors consider a state space of the 2^n possible outcomes of n binary variables.\nTraders express bets in Boolean logic.\nThe authors show that divisible matching is co-NP-complete and indivisible matching is Σ2^p-complete.\nHanson [9] describes a market scoring rule mechanism which can allow betting on a combinatorial number of outcomes.\nThe market starts with a joint probability distribution across all outcomes.\nIt works like a sequential version of a scoring rule.\nAny trader can change the probability distribution as long as he agrees to pay the most recent trader according to the scoring rule.\nThe market maker pays the last trader.\nHence, he bears risk and may incur a loss.\nMarket scoring rule mechanisms have the nice property that the worst-case loss of the market maker is bounded.\nHowever, the computational aspects of how to operate the mechanism have not been fully explored.\nOur mechanisms have an auctioneer who does not bear any risk and only matches orders.\nCombinatorial auctions allow bidders to place distinct values on bundles of goods rather than just on individual goods.\nUncertainty and risk are typically not considered, and the central auctioneer problem is to maximize social welfare.\nOur mechanisms allow traders to construct bets for an event with n! outcomes.\nUncertainty and risk are considered, and the auctioneer problem is to explore arbitrage opportunities and risklessly match up wagers.\n6.\nCONCLUSION\nWe consider a permutation betting scenario, where traders wager on the final ordering of n candidates.\nWhile it is unnatural and intractable to allow traders to bet directly on the n! different final orderings, we propose two expressive betting languages, subset betting and pair betting.\nIn a subset betting market, traders can bet 
either on a subset of positions that a candidate may stand at or on a subset of candidates who may occupy a specific position in the final ordering.\nPair betting allows traders to bet on whether one given candidate ranks higher than another given candidate.\nWe examine the auctioneer problem of matching orders without incurring risk.\nWe find that in a subset betting market an auctioneer can find, in polynomial time, the optimal set and quantity of orders to accept such that his worst-case profit is maximized, if orders are divisible.\nThe complexity changes dramatically for pair betting.\nWe prove that the optimal matching problem for the auctioneer is NP-hard for pair betting with both indivisible and divisible orders, via reductions from the minimum feedback arc set problem.\nWe identify a sufficient condition for the existence of a match, which can be verified in polynomial time.\nA natural greedy algorithm has been shown to give a poor approximation for indivisible pair betting.\nInteresting open questions for our permutation betting include the computational complexity of optimal indivisible matching for subset betting and the necessary condition for the existence of a match in pair betting markets.\nWe are interested in further exploring better approximation algorithms for pair betting markets.\nBetting on Permutations\nABSTRACT\nWe consider a permutation betting scenario, where people wager on the final ordering of n candidates: for example, the outcome of a horse race.\nWe examine the auctioneer problem of risklessly matching up wagers or, equivalently, finding arbitrage opportunities among the proposed wagers.\nRequiring bidders to explicitly list the orderings that they'd like to bet on is both unnatural and intractable, because the number of orderings is n! and the number of subsets of orderings is 2^(n!).\nWe propose two expressive betting languages that seem natural for bidders, and examine the computational complexity of the auctioneer problem in each 
case.\nSubset betting allows traders to bet either that a candidate will end up ranked among some subset of positions in the final ordering, for example, \"horse A will finish in positions 4, 9, or 13-21\", or that a position will be taken by some subset of candidates, for example \"horse A, B, or D will finish in position 2\".\nFor subset betting, we show that the auctioneer problem can be solved in polynomial time if orders are divisible.\nPair betting allows traders to bet on whether one candidate will end up ranked higher than another candidate, for example \"horse A will beat horse B\".\nWe prove that the auctioneer problem becomes NP-hard for pair betting.\nWe identify a sufficient condition for the existence of a pair betting match that can be verified in polynomial time.\nWe also show that a natural greedy algorithm gives a poor approximation for indivisible orders.\n1.\nINTRODUCTION\nBuying or selling a financial security is in effect a wager on the security's value.\nFor example, buying a stock is a bet that the stock's value is greater than its current price.\nEach trader evaluates his expected profit to decide the quantity to buy or sell according to his own information and subjective probability assessment.\nThe collective interaction of all bets leads to an equilibrium that reflects an aggregation of all the traders' information and beliefs.\nIn practice, this aggregate market assessment of the security's value is often more accurate than other forecasts relying on experts, polls, or statistical inference [16, 17, 5, 2, 15].\nConsider buying a security at a price of fifty-two cents that pays $1 if and only if a Democrat wins the 2008 US Presidential election.\nThe transaction is a commitment to accept a fifty-two cent loss if a Democrat does not win in return for a forty-eight cent profit if a Democrat does win.\nIn this case of an event-contingent security, the price--the market's value of the security--corresponds directly to the estimated probability 
of the event.\nAlmost all existing financial and betting exchanges pair up bilateral trading partners.\nFor example, one trader willing to accept an x dollar loss if a Democrat does not win in return for a y dollar profit if a Democrat wins is matched up with a second trader willing to accept the opposite.\nHowever, in many scenarios, even if no bilateral agreements exist among traders, multilateral agreements may be possible.\nFor example, if one trader bets that the Democratic candidate will receive more votes than the Republican candidate, a second trader bets that the Republican candidate will receive more votes than the Libertarian candidate, and a third trader bets that the Libertarian candidate will receive more votes than the Democratic candidate, then, depending on the odds they each offer, there may be a three-way agreeable match even though no two-way matches exist.\nWe propose an exchange where traders have considerable flexibility to naturally and succinctly express their wagers, and examine the computational complexity of the auctioneer's resulting matching problem of identifying bilateral and multilateral agreements.\nIn particular, we focus on a setting where traders bet on the outcome of a competition among n candidates.\nFor example, suppose that there are n candidates in an election (or n horses in a race, etc.) and thus n! possible orderings of candidates after the final vote tally.\nTraders may like to bet on arbitrary properties of the final ordering, for example \"candidate D will win\", \"candidate D will finish in either first place or last place\", \"candidate D will defeat candidate R\", \"candidates D and R will both defeat candidate L\", etc.
\nThe goal of the exchange is to search among all the offers to find two or more that together form an agreeable match.\nAs we shall see, the matching problem can be set up as a linear or integer program, depending on whether orders are divisible or indivisible, respectively.\nAttempting to reduce the problem to a bilateral matching problem by explicitly creating n! securities, one for each possible final ordering, is both cumbersome for the traders and computationally infeasible even for modest sized n. Moreover, traders' attention would be spread among n! independent choices, making the likelihood of two traders converging at the same time and place seem remote.\nThere is a tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.\nWe want to offer traders the most expressive bidding language possible while maintaining computational feasibility.\nWe explore two bidding languages that seem natural from a trader perspective.\nSubset betting, described in Section 3.2, allows traders to bet on which positions in the ranking a candidate will fall, for example \"candidate D will finish in position 1, 3-5, or 10\".\nSymmetrically, traders can also bet on which candidates will fall in a particular position.\nIn Section 4, we derive a polynomial-time algorithm for matching (divisible) subset bets.\nThe key to the result is showing that the exponentially big linear program has a corresponding separation problem that reduces to maximum weighted bipartite matching, and consequently we can solve it in time polynomial in the number of orders.\nPair betting, described in Section 3.3, allows traders to bet on the final ranking of any two candidates, for example \"candidate D will defeat candidate R\".\nIn Section 5, we show that optimal matching of (divisible or indivisible) pair bets is NP-hard, via a reduction from the unweighted minimum feedback arc set problem.\nWe also provide a polynomially-verifiable sufficient 
condition for the existence of a pair-betting match and show that a greedy algorithm offers a poor approximation for indivisible pair bets.\n2.\nBACKGROUND AND RELATED WORK\nWe consider permutation betting, or betting on the outcome of a competition among n candidates.\nThe final outcome or state s ∈ S is an ordinal ranking of the n candidates.\nFor example, the candidates could be horses in a race and the outcome the list of horses in increasing order of their finishing times.\nThe state space S contains all n! mutually exclusive and exhaustive permutations of candidates.\nIn a typical horse race, people bet on properties of the outcome like \"horse A will win\", \"horse A will show, or finish in either first or second place\", or \"horses A and B will finish in first and second place, respectively\".\nIn practice at the racetrack, each of these different types of bets is processed in separate pools or groups.\nIn other words, all the \"win\" bets are processed together, and all the \"show\" bets are processed together, but the two types of bets do not mix.\nThis separation can hurt liquidity and information aggregation.\nFor example, even though horse A is heavily favored to win, that may not directly boost the horse's odds to show.\nInstead, we describe a central exchange where all bets on the outcome are processed together, thus aggregating liquidity and ensuring that informational inference happens automatically.\nIdeally, we'd like to allow traders to bet on any property of the final ordering they like, stated in exactly the language they prefer.\nIn practice, allowing too flexible a language creates a computational burden for the auctioneer attempting to match willing traders.\nWe explore the tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.\nWe consider a framework where people propose to buy securities that pay $1 if and only if some property of the final ordering is true.\nTraders state 
the price they are willing to pay per share and the number of shares they would like to purchase.\n(Sell orders may not be explicitly needed, since buying the negation of an event is equivalent to selling the event.)\nA divisible order permits the trader to receive fewer shares than requested, as long as the price constraint is met; an indivisible order is an all-or-nothing order.\nThe description of bets in terms of prices and shares is without loss of generality: we can also allow bets to be described in terms of odds, payoff vectors, or any of the diverse array of approaches practiced in financial and gambling circles.\nIn principle, we can do everything we want by explicitly offering n! securities, one for every state s ∈ S (or in fact any set of n! linearly independent securities).\nThis is the so-called complete Arrow-Debreu securities market [1] for our setting.\nIn practice, traders do not want to deal with low-level specification of complete orderings: people think more naturally in terms of high-level properties of orderings.\nMoreover, operating n! securities is infeasible in practice from a computational point of view as n grows.\nA very simple bidding language might allow traders to bet only on who wins the competition, as is done in the \"win\" pool at racetracks.\nThe corresponding matching problem is polynomial; however, the language is not very expressive.\nA trader who believes that A will defeat B, but that neither will win outright, cannot usefully impart his information to the market.\nThe price space of the market reveals the collective estimates of win probabilities but nothing else.\nOur goal is to find languages that are as expressive and intuitive as possible and reveal as much information as possible, while maintaining computational feasibility.\nOur work is in direct analogy to work by Fortnow et al. [6].\nWhereas we explore permutation combinatorics, Fortnow et al. 
explore Boolean combinatorics.\nThe authors consider a state space of the 2^n possible outcomes of n binary variables.\nTraders express bets in Boolean logic.\nThe authors show that divisible matching is co-NP-complete and indivisible matching is Σ2^p-complete.\nHanson [9] describes a market scoring rule mechanism which can allow betting on a combinatorial number of outcomes.\nThe market starts with a joint probability distribution across all outcomes.\nIt works like a sequential version of a scoring rule.\nAny trader can change the probability distribution as long as he agrees to pay the most recent trader according to the scoring rule.\nThe market maker pays the last trader.\nHence, he bears risk and may incur a loss.\nMarket scoring rule mechanisms have the nice property that the worst-case loss of the market maker is bounded.\nHowever, the computational aspects of how to operate the mechanism have not been fully explored.\nOur mechanisms have an auctioneer who does not bear any risk and only matches orders.\nResearch on bidding languages and winner determination in combinatorial auctions [4, 14, 18] considers similar computational challenges in finding an allocation of items to bidders that maximizes the auctioneer's revenue.\nCombinatorial auctions allow bidders to place distinct values on bundles of goods rather than just on individual goods.\nUncertainty and risk are typically not considered, and the central auctioneer problem is to maximize social welfare.\nOur mechanisms allow traders to construct bets for an event with n! outcomes.\nUncertainty and risk are considered, and the auctioneer problem is to explore arbitrage opportunities and risklessly match up wagers.\n3.\nPERMUTATION BETTING\nIn this section, we define the matching and optimal matching problems that an auctioneer needs to solve in a general permutation betting market.\nWe then illustrate the problem definitions in the context of the subset-betting and pair-betting markets.\n3.1 Securities, Orders and 
Matching Problems\nConsider an event with n competing candidates where the outcome (state) is a ranking of the n candidates.\nThe bidding language of a market offering securities in the future outcomes determines the type and number of securities available and directly affects what information can be aggregated about the outcome.\nA fully expressive bidding language can capture any possible information that traders may have about the final ranking; a less expressive language limits the type of information that can be aggregated, though it may enable a more efficient solution to the matching problem.\nFor any bidding language and number of securities in a permutation betting market, we can succinctly represent the problem of the auctioneer to risklessly match offers as follows.\nConsider an index set of bets or orders O which traders submit to the auctioneer.\nEach order i ∈ O is a triple (bi, qi, φi), where bi denotes how much the trader is willing to pay for a unit share of security φi and qi is the number of shares of the security he wants to purchase at price bi.\nNaturally, bi ∈ (0, 1), since a unit of the security pays off at most $1 when the event is realized.\nSince order i is defined for a single security φi, we will omit the security variable whenever it is clear from the context.\nThe auctioneer can accept or reject each order, or in a divisible world accept a fraction of the order.\nLet xi be the fraction of order i ∈ O accepted.\nIn the indivisible version of the market xi = 0 or 1, while in the divisible version xi ∈ [0, 1].\nFurther let Ii (s) be the indicator variable for whether order i is winning in state s, that is, Ii (s) = 1 if the order is paid back $1 in state s and Ii (s) = 0 otherwise.\nThere are two possible problems that the auctioneer may want to solve.\nThe simpler one is to find a subset of orders that can be matched risk-free, namely a subset of orders which, accepted together, give a nonnegative profit to the auctioneer in every possible 
outcome.\nWe call this problem the existence of a match or sometimes simply the matching problem.\nThe more complex problem is for the auctioneer to find the optimal match with respect to some criterion such as profit, trading volume, etc.\nDEFINITION 1 (EXISTENCE OF MATCH, INDIVISIBLE ORDERS).\nGiven a set of orders O, does there exist a set of xi ∈ {0, 1}, i ∈ O, with at least one xi = 1, such that Σi qi xi (bi − Ii (s)) ≥ 0 for every state s ∈ S?\nSimilarly we can define the existence of a match with divisible orders.\nDEFINITION 2 (EXISTENCE OF MATCH, DIVISIBLE ORDERS).\nGiven a set of orders O, does there exist a set of xi ∈ [0, 1], i ∈ O, with at least one xi > 0, such that Σi qi xi (bi − Ii (s)) ≥ 0 for every state s ∈ S?\nThe existence of a match is a decision problem.\nIt only returns whether trade can occur at no risk to the auctioneer.\nIn addition to the risk-free requirement, the auctioneer can optimize some criterion in determining the orders to accept.\nSome reasonable objectives include maximizing the total trading volume in the market or the worst-case profit of the auctioneer.\nThe following optimal matching problems are defined for an auctioneer who maximizes his worst-case profit.\nDEFINITION 3 (OPTIMAL MATCH, INDIVISIBLE ORDERS).\nGiven a set of orders O, choose xi ∈ {0, 1} such that the following mixed integer programming problem achieves its optimality: max c subject to Σi qi xi (bi − Ii (s)) ≥ c for all s ∈ S. (3)\nDEFINITION 4 (OPTIMAL MATCH, DIVISIBLE ORDERS).\nGiven a set of orders O, choose xi ∈ [0, 1] such that the following linear programming problem achieves its optimality: max c subject to Σi qi xi (bi − Ii (s)) ≥ c for all s ∈ S. (4)\nThe variable c is the worst-case profit for the auctioneer.\nNote that, strictly speaking, the optimal matching problems do not require solving the optimization problems (3) and (4), because only the optimal set of orders is needed.\nThe optimal worst-case profit may remain unknown.\n3.2 Subset Betting\nA subset betting market allows two different types of bets.\nTraders can bet on a subset of positions a candidate may end up at, or they can bet on a subset of candidates that will occupy a particular position.\nA security (α | Φ), where Φ is a subset of positions, pays off $1 if candidate α stands at a position that is an element of Φ and it pays $0 otherwise.\nFor example, security (α | {2, 4}) pays $1 when candidate 
α is ranked second or fourth.\nSimilarly, a security (T | j), where T is a subset of candidates, pays off $1 if any of the candidates in the set T ranks at position j. For instance, security ({α, γ} | 2) pays off $1 when either candidate α or candidate γ is ranked second.\nThe auctioneer in a subset betting market faces a nontrivial matching problem, that is, to determine which orders to accept among all submitted orders i ∈ O. Note that although there are only n candidates and n possible positions, the number of available securities to bet on is exponential, since a trader may bet on any of the 2^n subsets of candidates or positions.\nWith this, it is not immediately clear whether one can even find a trading partner or a match for trade to occur, or that the auctioneer can solve the matching problem in polynomial time.\nIn the next section, we will show that somewhat surprisingly there is an elegant polynomial solution to both the matching and optimal matching problems, based on classic combinatorial problems.\nWhen an order is accepted, the corresponding trader pays the submitted order price bi to the auctioneer and the auctioneer pays the winning orders $1 per share after the outcome is revealed.\nThe auctioneer has to carefully choose which orders and what fractions of them to accept so as to be guaranteed a nonnegative profit in any future state.\nThe following example illustrates the matching problem for indivisible orders in the subset-betting market.\nEXAMPLE 1.\nSuppose n = 3.\nObjects α, β, and γ compete for positions 1, 2, and 3 in a competition.\nThe auctioneer receives the following 4 orders: (1) buy 1 share (α | {1}) at price $0.6; (2) buy 1 share (β | {1, 2}) at price $0.7; (3) buy 1 share (γ | {1, 3}) at price $0.8; and (4) buy 1 share (β | {3}) at price $0.7.\nThere are 6 possible states of ordering: αβγ, αγβ, βαγ, βγα, γαβ, and γβα.\nThe corresponding state-dependent profit of the auctioneer for each order can be calculated as the following 
vectors, whose 6 columns correspond to the 6 future states.\nFor indivisible orders, the auctioneer can either accept orders (2) and (4) and obtain profit vector c = (0.4, 0.4, 0.4, 0.4, 0.4, 0.4), or accept orders (2), (3), and (4) and have profit across states c = (0.2, 1.2, 0.2, 1.2, 0.2, 0.2).\n3.3 Pair Betting\nA pair betting market allows traders to bet on whether one candidate will rank higher than another candidate, in an outcome which is a permutation of n candidates.\nA security (α > β) pays off $1 if candidate α is ranked higher than candidate β and $0 otherwise.\nThere are a total of n (n − 1) different securities offered in the market, each corresponding to an ordered pair of candidates.\nTraders place orders of the form \"buy qi shares of (α > β) at price per share no greater than bi\".\nIn general, bi should be between 0 and 1.\nAgain the order can be either indivisible or divisible, and the auctioneer needs to decide what fraction xi of each order to accept so as not to incur any loss, with xi ∈ {0, 1} for indivisible and xi ∈ [0, 1] for divisible orders.\nThe same definitions for existence of a match and optimal match from Section 3.1 apply.\nFigure 1: Every cycle has negative worst-case profit of − 0.02 (for the cycles of length 4) or less (for the cycles of length 6); however, accepting all edges in full gives a positive worst-case profit of 0.44.\nThe orders in the pair-betting market have a natural interpretation as a graph, where the candidates are nodes in the graph and each order which ranks a pair of candidates (α > β) is represented by a directed edge e = (α, β) with price be and weight qe.\nWith this interpretation, it is tempting to assume that a necessary condition for a match is to have a cycle in the graph with a nonnegative worst-case profit.\nAssuming qe = 1 for all e, this is a cycle C with a total of | C | edges such that the worst-case profit for the auctioneer is Σe∈C be − (| C | − 1) ≥ 0, since in the worst-case state the auctioneer needs to pay $1 to every order 
in the cycle except one.\nHowever, the example in Figure 1 shows that this is not the case: we may have a set of orders in which every single cycle has a negative worst-case profit, and yet there is a positive worst-case match overall.\nThe edge labels in the figure are the prices be; both the optimal divisible and indivisible solutions in this case accept all orders in full, xe = 1.\n4.\nCOMPLEXITY OF SUBSET BETTING\nThe matching problems of the auctioneer in any permutation market, including the subset betting market, have n! constraints.\nBrute-force methods would take exponential time to solve.\nHowever, given the special form of the securities in the subset betting market, we can show that the matching problems for divisible orders can be solved in polynomial time.\nPROOF.\nConsider the linear programming problem (4) for finding an optimal match.\nThis linear program has | O | + 1 variables, one variable xi for each order i and the profit variable c.\nIt also has exponentially many constraints.\nHowever, we can solve the program in time polynomial in the number of orders | O | by using the ellipsoid algorithm, as long as we can solve its corresponding separation problem in polynomial time [7, 8].\nThe separation problem for a linear program takes as input a vector of variable values and returns whether the vector is feasible, or otherwise it returns a violated constraint.\nFor given values of the variables, a violated constraint in Eq. (4) asks whether there is a state or permutation s in which the profit is less than c, and can be rewritten as Σi qi xi Ii (s) > Σi bi qi xi − c. (5)\nThus it suffices to show how to find efficiently a state s satisfying the above inequality (5) or verify that the opposite inequality holds for all states s.\nWe will show that the separation problem can be reduced to the maximum weighted bipartite matching problem [3].\n(The notion of perfect matching in a bipartite graph, which we use only in this proof, should not be confused with the notion of matching bets which we use throughout the paper.)\nThe left hand side in Eq. (5) is the total money that the auctioneer needs to pay back to the winning traders in state s.\nThe first term on 
the right hand side is the total money collected by the auctioneer, and it is fixed for a given solution vector of xi's and c.\nA weighted bipartite graph can be constructed between the set of candidates and the set of positions.\nFor every order of the form (α | Φ) there are edges from candidate node α to every position node in Φ.\nFor orders of the form (T | j) there are edges from each candidate in T to position j. For each order i we put weight qi xi on each of these edges.\nAll multi-edges with the same end points are then replaced with a single edge that carries the total weight of the multi-edge.\nEvery state s then corresponds to a perfect matching in the bipartite graph.\nIn addition, the auctioneer pays out to the winners the sum of all edge weights in the perfect matching, since every candidate can only stand in one position and every position is taken by one candidate.\nThus, the auctioneer's worst-case state and payment are the solution to the maximum weighted bipartite matching problem, which has known polynomial-time algorithms [12, 13].\nHence, the separation problem can be solved in polynomial time.\nNaturally, if the optimal solution to (4) gives a worst-case profit of c* > 0, there exists a matching.\nThus, the matching problem can be solved in polynomial time also.\n5.\nCOMPLEXITY OF PAIR BETTING\nIn this section we show that a slight change of the bidding language may bring about a dramatic change in the complexity of the optimal matching problem of the auctioneer.\nIn particular, we show that finding the optimal match in the pair betting market is NP-hard for both divisible and indivisible orders.\nWe then identify a polynomially-verifiable sufficient condition for deciding the existence of a match.\nThe hardness results are surprising, especially in light of the observation that a pair betting market has a seemingly more restrictive bidding language which only offers n (n − 1) 
securities.\nIn contrast, the subset betting market enables traders to bet on an exponential number of securities and yet had a polynomial time solution for finding the optimal match.\nOur hope is that the comparison of the complexities of the subset and pair betting markets would offer insight into what makes a bidding language expressive while at the same time enabling an efficient matching solution.\nIn all analysis that follows, we assume that traders submit unit orders in pair betting markets, that is, qi = 1.\nA set of orders O received by the auctioneer in a pair betting market with unit orders can be represented by a directed graph, G (V, E), where the vertex set V contains candidates that traders bet on.\nAn edge e ∈ E, denoted (α, β, be), represents an order to buy 1 share of the security (α > β) at price be.\nAll edges have equal weight of 1.\nWe adopt the following notations throughout the paper:\n- G (V, E): original equally weighted directed graph for the set of unit orders O.\n- be: price of the order for edge e. 
\u2022 G * (V *, E *): a weighted directed graph of accepted orders for optimal matching, where edge weight xe is the quantity of order e accepted by the auctioneer.\nxe = 1 for indivisible orders and 0 < xe \u2264 1 for divisible orders.\n\u2022 k (H): size of the unweighted minimum feedback arc set of a graph H, that is, the minimum number of edges whose removal leaves H acyclic.\n\u2022 l (H): weighted minimum feedback arc set of H, that is, the minimum total edge weight over edge sets whose removal leaves H acyclic.\nLEMMA 5 (EDGE REMOVAL).\nRemoving a single edge e from a graph H decreases the size of the unweighted minimum feedback arc set by at most one: k (H) \u2212 1 \u2264 k (H \u2212 {e}) \u2264 k (H).\nPROOF.\nA minimum feedback arc set of H, with e removed from it if present, is a feedback arc set of H \u2212 {e}, so k (H \u2212 {e}) \u2264 k (H).\nSimilarly, suppose the unweighted minimum feedback arc set for the graph H \u2212 {e} is F.\nThen F \u222a {e} is a feedback arc set for H, and has set cardinality k (H \u2212 {e}) + 1.\nBecause k (H) is the solution to the unweighted minimum feedback arc set problem on H, we have k (H) \u2264 k (H \u2212 {e}) + 1, that is, k (H \u2212 {e}) \u2265 k (H) \u2212 1.\nTHEOREM 6.\nFinding the optimal match in indivisible pair betting is NP-hard.\nPROOF.\nWe reduce from the unweighted minimum feedback arc set problem again, although with a slightly more complex polynomial transformation involving multiple calls to the optimal match oracle.\nConsider an instance graph G of the minimum feedback arc set problem.\nWe are interested in computing k (G), the size of the minimum feedback arc set of G. Suppose we have an oracle which solves the optimal matching problem.\nDenote by optimal match (G') the output of the optimal matching oracle on graph G' with prices be = (1 \u2212 \u03b5) on all its edges.\nBy Lemma 3, on input G', the oracle optimal match returns the subgraph of G' with the smallest number of edges that has the same size of minimum feedback arc set as G'.\nThe following procedure finds k (G) by using polynomially many calls to the optimal match oracle on a sequence of subgraphs of G.\nThis procedure removes edges from the original graph G layer by layer until the graph is empty, while at the same time computing the minimum feedback arc set size k (G) of the original graph as the number of iterations.\nIn each iteration, we start with a graph G' and replace it with the smallest subgraph G' that has the same k (G').\nAt this stage, removing an additional edge e necessarily results in k (G' \u2212 {e}) = k (G') \u2212 1: by minimality of G', removing any edge must decrease k (G'), while k (G' \u2212 {e}) \u2265 k (G') \u2212 1 by the edge-removal lemma.\nTherefore, in each iteration the
cardinality of the minimum feedback arc set gets reduced exactly by 1.\nHence the number of iterations is equal to k (G).\nNote that this procedure gives a polynomial transformation from the unweighted minimum feedback arc set problem to the optimal matching problem, which calls the optimal matching oracle exactly k (G) times.\n5.2 Divisible Orders\nSuppose all edge prices equal b, chosen such that 4b \u2212 3 > 0 and 6b \u2212 5 < 0, namely b \u2208 (.75, .80), for example b = .78.\nWith this, the optimal indivisible solution consists of at most one four-edge cycle, with worst-case profit (4b \u2212 3).\nOn the other hand, taking a 1\/2 fraction of each of the three four-edge cycles would yield a higher worst-case profit of 3\/2 (4b \u2212 3).\nDespite the potential profit increase for accepting divisible orders, the auctioneer's optimal matching problem remains NP-hard for divisible orders, as shown below via several lemmas and theorems.\nLEMMA 7.\nSuppose the auctioneer accepts orders described by a weighted directed graph H (V, E) with edge weight xe being the quantity accepted for edge order e.\nThe worst-case profit for the auctioneer is\nc (H) = \u03a3e\u2208E bexe \u2212 \u03a3e\u2208E xe + l (H).\nPROOF.\nFor any state s, the winning edges form a DAG.\nThus, the worst-case profit for the auctioneer is achieved at the state s where the total quantity of losing orders is minimized.\nThe minimum total quantity of losing orders is the solution to the weighted minimum feedback arc set problem on H, that is, l (H).\nConsider the graph of accepted orders for optimal divisible matching, G * (V *, E *), which consists of the optimal subset of edges E * to be accepted by the auctioneer, with edge weight xe > 0 obtained from the optimal solution of the linear program (7).\nWe have the following lemmas.\nLEMMA 9.\nl (G *) = k (G).\nPROOF.\nl (G *) is the solution of the weighted minimum feedback arc set problem on G *, while k (G *) is the solution of the unweighted minimum feedback arc set problem on G *.\nWhen all edge weights in G * are 1, l (G *) = k (G *).\nWhen xe's are less than 1, l (G *) can be less than or equal to k (G *).\nSince G * is a
subgraph of G but possibly with different edge weights, l (G *) \u2264 k (G *) \u2264 k (G), while l (G *) < k (G) is impossible, as it would contradict the optimality of G *.\nThus, l (G *) = k (G).\nTHEOREM 10.\nFinding the optimal worst-case profit in divisible pair betting is NP-hard.\nPROOF.\nGiven the optimal set of partial orders to accept for G when edge prices are (1 \u2212 \u03b5), if we can calculate the optimal worst-case profit, by Lemma 9 we can solve the unweighted minimum feedback arc set problem on G, which is NP-hard.\nHence, finding the optimal worst-case profit is NP-hard.\nTheorem 10 states that solving the linear program (7) is NP-hard.\nSimilarly to the indivisible case, we still need to prove that just finding the optimal divisible match is hard, as opposed to being able to compute the optimal worst-case profit.\nSince in the divisible case the edges do not necessarily have unit weights, the proof in Theorem 6 does not apply directly.\nHowever, with an additional property of the divisible case, we can augment the procedure from the indivisible hardness proof to compute the unweighted minimum feedback arc set size k (G) here as well.\nFirst, note that the optimal divisible subgraph G * of a graph G is the weighted subgraph with minimum weighted feedback arc set size l (G *) = k (G) and smallest sum of edge weights \u03a3e\u2208E* xe, since its corresponding worst-case profit is (k (G) \u2212 \u03b5 \u03a3e\u2208E* xe) according to Lemmas 7 and 9.\nLEMMA 11.\nSuppose graph H satisfies l (H) = k (H) and we remove edge e from it with weight xe < 1.\nThen, k (H \u2212 {e}) = k (H).\nPROOF.\nAssume the contrary, namely k (H \u2212 {e}) = k (H) \u2212 1 (by the edge-removal lemma it cannot be smaller).\nRemoving e decreases the weighted minimum feedback arc set by at most its weight, so l (H \u2212 {e}) \u2265 l (H) \u2212 xe > l (H) \u2212 1 = k (H) \u2212 1 = k (H \u2212 {e}); on the other hand, l (H \u2212 {e}) \u2264 k (H \u2212 {e}), since all edge weights are at most 1.\nThe contradiction arises.\nTherefore, removing any edge with less than unit weight from an optimal divisible graph does not change k (H), the minimum feedback arc set size of the unweighted version of the graph.\nWe now can augment the procedure for the indivisible case in Theorem 6, to prove hardness of the divisible version, as follows.\nTHEOREM 12.\nFinding the optimal match in divisible pair betting is NP-hard.\nPROOF.\nWe
reduce from the unweighted minimum feedback arc set problem for graph G. Suppose we have an oracle for the optimal divisible problem called optimal divisible match, which on input graph H computes edge weights xe \u2208 (0, 1] for the optimal subgraph H * of H, satisfying l (H *) = k (H).\nThe following procedure outputs k (G); in its inner loop, it removes an edge with weight < 1 from G' and then resets G' by setting all edge weights to 1.\nAs in the proof of the corresponding Theorem 6 for the indivisible case, we compute k (G) by iteratively removing edges and recomputing the optimal divisible solution on the remaining subgraph, until all edges are deleted.\nIn each iteration of the outer while loop, the minimum feedback arc set is reduced by 1, thus the number of iterations is equal to k (G).\nIt remains to verify that each iteration reduces k (G) by exactly 1.\nStarting from a graph at the beginning of an iteration, we compute its optimal divisible subgraph.\nWe then keep removing one non-unit-weight edge at a time and recomputing the optimal divisible subgraph, until the latter contains only edges with unit weight.\nBy Lemma 11, throughout the iteration so far the minimum feedback arc set of the corresponding unweighted graph remains unchanged.\nOnce the oracle returns a graph G' with unit edge weights, removing any edge would reduce the minimum feedback arc set: otherwise G' is not optimal, since G' \u2212 {e} would have the same minimum feedback arc set but smaller total edge weight.\nBy Lemma 5, removing a single edge cannot reduce the minimum feedback arc set by more than one; thus, as all edges have unit weight, k (G) gets reduced by exactly one.\nk (G) is equal to the value returned by the procedure.\nHence, the optimal matching problem for divisible orders is NP-hard.\n5.3 Existence of a Match\nKnowing that the optimal matching problem is NP-hard for both indivisible and divisible orders in pair betting, we check whether the auctioneer can identify the existence of a match with ease.\nLemma
13 states a sufficient condition for the matching problem with both indivisible and divisible orders.\nLEMMA 13.\nThere exists a match if the order graph G contains a cycle C with \u03a3e\u2208C be \u2265 |C| \u2212 1, (11) where |C| is the number of edges in the cycle C.\nPROOF.\nThe left-hand side of inequality (11) represents the total payment that the auctioneer receives by accepting every unit order in the cycle C in full.\nBecause the direction of an edge represents the predicted ordering of the two connected nodes in the final ranking, forming a cycle means that there is some logical contradiction in the predicted orderings of candidates.\nHence, whichever state is realized, not all of the edges in the cycle can be winning edges.\nThe worst case for the auctioneer corresponds to a state where every edge in the cycle except one gets paid $1, with |C| \u2212 1 being the maximum payment to traders.\nHence, if inequality (11) is satisfied, the auctioneer has non-negative worst-case profit by accepting the orders in the cycle.\nIt can be shown that identifying such a non-negative worst-case profit cycle in an order graph G can be achieved in polynomial time.\nLEMMA 14.\nIt takes polynomial time to find a cycle in an order graph G (V, E) that has the highest worst-case profit, that is, maxC\u2208C {\u03a3e\u2208C be \u2212 (|C| \u2212 1)}, where C is the set of all cycles in G.
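As a minimal numeric check of the cycle condition above (the function name and the prices are ours, for illustration), the worst-case profit of accepting every unit order on a single cycle is the sum of the prices minus (|C| \u2212 1):

```python
def cycle_worst_case_profit(prices):
    """Accepting every unit order on one cycle: the auctioneer collects the sum
    of the prices and, in the worst state, pays $1 on all but one edge."""
    return sum(prices) - (len(prices) - 1)

# Inequality (11) holds for the first 3-cycle and fails for the second.
assert cycle_worst_case_profit([0.9, 0.9, 0.9]) > 0   # 2.7 - 2: a match exists
assert cycle_worst_case_profit([0.5, 0.5, 0.5]) < 0   # 1.5 - 2: condition fails
```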
PROOF.\nBecause \u03a3e\u2208C be \u2212 (|C| \u2212 1) = \u03a3e\u2208C (be \u2212 1) + 1 = 1 \u2212 \u03a3e\u2208C (1 \u2212 be),\nfinding the cycle that gives the highest worst-case profit in the original order graph G is equivalent to finding the shortest cycle in a converted graph H (V, E), where H is obtained by setting the weight of edge e in G to be (1 \u2212 be).\nFinding the shortest cycle in graph H can be done in polynomial time by resorting to the shortest path problem.\nFor any vertex v in V, we consider every neighbor vertex w such that (v, w) \u2208 E.\nWe then find the shortest path from w to v, denoted path (w, v).\nThe shortest cycle that passes through vertex v is found by choosing the w that minimizes the weight of edge (v, w) plus path (w, v).\nComparing the shortest cycles found for every vertex, we can then determine the shortest overall cycle for the graph H. Because the shortest path problem can be solved in polynomial time [3], we can find the solution to our problem in polynomial time.\nIf the worst-case profit for the optimal cycle is non-negative, we know that there exists a match in G.
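The search just described can be sketched as follows; this is our own illustration with hypothetical unit orders, and it assumes all prices lie in [0, 1] so that the converted weights (1 \u2212 be) are non-negative and Dijkstra's algorithm applies:

```python
import heapq

def best_cycle_profit(vertices, price):
    """Highest worst-case cycle profit, max over cycles of sum(be) - (|C| - 1),
    computed as 1 minus the shortest cycle length under edge weights (1 - be)."""
    adj = {v: [] for v in vertices}
    for (u, v), b in price.items():
        adj[u].append((v, 1.0 - b))   # converted weight, non-negative for b <= 1

    def dijkstra(src):
        dist = {v: float('inf') for v in vertices}
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    # close each edge (v, u) with a shortest u -> v path to form a cycle
    shortest = min(w + dijkstra(u)[v]
                   for v in vertices for u, w in adj[v])
    return 1.0 - shortest

# hypothetical orders: a profitable 3-cycle plus a cheap 2-cycle
prices = {('a','b'): 0.9, ('b','c'): 0.95, ('c','a'): 0.9, ('b','a'): 0.4}
assert abs(best_cycle_profit(['a', 'b', 'c'], prices) - 0.75) < 1e-6
```

The 3-cycle a-b-c wins (profit 0.9 + 0.95 + 0.9 \u2212 2 = 0.75), beating the 2-cycle a-b-a (profit 0.3); a non-negative result certifies that a match exists.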
However, the condition in Lemma 13 is not a necessary condition for the existence of a match.\nEven if all single cycles in the order graph have negative worst-case profit, the auctioneer may accept multiple interweaving cycles to obtain positive worst-case profit.\nFigure 1 exhibits such a situation.\nIf the optimal indivisible match consists only of edge-disjoint cycles, a natural greedy algorithm can find the cycle that gives the highest worst-case profit, remove its edges from the graph, and proceed until no more cycles exist.\nHowever, we show that such a greedy algorithm can give a very poor approximation.\nFigure 3: Graph with n vertices and n + \u221an edges on which the greedy algorithm finds only two cycles, the dotted cycle in the center and the unique remaining cycle.\nThe labels in the faces give the number of edges in the corresponding cycle.\nLEMMA 15.\nThe greedy algorithm gives at most an O (\u221an) approximation to the maximum number of disjoint cycles.\nPROOF.\nConsider the graph in Figure 3, consisting of a cycle with \u221an edges, each of which participates in another (otherwise disjoint) cycle with \u221an + 1 edges.\nSuppose all edge prices are (1 \u2212 \u03b5).\nThe maximum number of disjoint cycles is clearly \u221an, taking all cycles with length \u221an + 1.\nBecause smaller cycles give higher worst-case profit, the greedy algorithm would first select the cycle of length \u221an, after which there would be only one remaining cycle of length n.
Thus the total number of cycles selected by greedy is 2, and the approximation factor in this case is \u221an\/2.\nIn light of Lemma 15, one may expect that greedy algorithms would give \u221an-approximations at best.\nApproximation algorithms for finding the maximum number of edge-disjoint cycles have been considered by Krivelevich, Nutov and Yuster [11, 19].\nIndeed, for the case of directed graphs, the authors show that a greedy algorithm gives a \u221an-approximation [11].\nWhen the optimal match does not consist of edge-disjoint cycles as in the example of Figure 3, a greedy algorithm that searches for optimal single cycles obviously fails.\n6.\nCONCLUSION\nWe consider a permutation betting scenario, where traders wager on the final ordering of n candidates.\nWhile it is unnatural and intractable to allow traders to bet directly on the n!\ndifferent final orderings, we propose two expressive betting languages, subset betting and pair betting.\nIn a subset betting market, traders can bet either on a subset of positions that a candidate stands in, or on a subset of candidates who occupy a specific position in the final ordering.\nPair betting allows traders to bet on whether one given candidate ranks higher than another given candidate.\nWe examine the auctioneer problem of matching orders without incurring risk.\nWe find that in a subset betting market an auctioneer can find, in polynomial time, the optimal set and quantity of orders to accept such that his worst-case profit is maximized, if orders are divisible.\nThe complexity changes dramatically for pair betting.\nWe prove that the optimal matching problem for the auctioneer is NP-hard for pair betting with both indivisible and divisible orders, via reductions from the minimum feedback arc set problem.\nWe identify a sufficient condition for the existence of a match, which can be verified in polynomial time.\nA natural greedy algorithm has been shown to give a poor approximation for indivisible pair betting.\nInteresting open
questions for our permutation betting include the computational complexity of optimal indivisible matching for subset betting and the necessary condition for the existence of a match in pair betting markets.\nWe are interested in further exploring better approximation algorithms for pair betting markets.","keyphrases":["permut bet","express bet","comput complex","subset bet","greedi algorithm","bilater trade partner","polynomi-time algorithm","inform aggreg","permut combinator","pair-bet market","bipartit graph","minimum feedback","complex polynomi transform","predict market","order match"],"prmu":["P","P","P","P","P","U","M","U","M","U","U","U","M","U","R"]} {"id":"J-23","title":"Frugality Ratios And Improved Truthful Mechanisms for Vertex Cover","abstract":"In set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams. The auctioneer's goal is to hire a team and pay as little as possible. Examples of this setting include shortest-path auctions and vertex-cover auctions. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this problem. Informally, the frugality ratio is the ratio of the total payment of a mechanism to a desired payment bound. The ratio captures the extent to which the mechanism overpays, relative to perceived fair cost in a truthful auction. In this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio. We show that the solution quality is within a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties. Moreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio.
Also, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule. We study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions. We use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.","lvl-1":"Frugality Ratios And Improved Truthful Mechanisms for Vertex Cover \u2217 Edith Elkind Hebrew University of Jerusalem, Israel, and University of Southampton, Southampton, SO17 1BJ, U.K. Leslie Ann Goldberg University of Liverpool Liverpool L69 3BX, U.K. Paul Goldberg University of Liverpool Liverpool L69 3BX, U.K. ABSTRACT In set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams.\nThe auctioneer's goal is to hire a team and pay as little as possible.\nExamples of this setting include shortest-path auctions and vertex-cover auctions.\nRecently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this problem.\nInformally, the frugality ratio is the ratio of the total payment of a mechanism to a desired payment bound.\nThe ratio captures the extent to which the mechanism overpays, relative to perceived fair cost in a truthful auction.\nIn this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio.\nWe show that the solution quality is within a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties.\nMoreover, we show how to transform any truthful auction into a frugal one while preserving the approximation
ratio.\nAlso, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule.\nWe study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions.\nWe use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION In a set system auction there is a single buyer and many vendors that can provide various services.\nIt is assumed that the buyer's requirements can be satisfied by various subsets of the vendors; these subsets are called the feasible sets.\nA widely-studied class of set-system auctions is path auctions, where each vendor is able to sell access to a link in a network, and the feasible sets are those sets whose links contain a path from a given source to a given destination; the study of these auctions has been initiated in the seminal paper by Nisan and Ronen [19] (see also [1, 10, 9, 6, 15, 7, 20]).\nWe assume that each vendor has a cost of providing his services, but submits a possibly larger bid to the auctioneer.\nBased on these bids, the auctioneer selects a feasible subset of vendors, and makes payments to the vendors in this subset.\nEach selected vendor enjoys a profit of payment minus cost.\nVendors want to maximise profit, while the buyer wants to minimise the amount he pays.\nA natural goal in this setting is to design a truthful auction, in which vendors have an incentive to bid their true cost.\nThis can be achieved by paying each
selected vendor a premium above her bid in such a way that the vendor has no incentive to overbid.\nAn interesting question in mechanism design is how much the auctioneer will have to overpay in order to ensure truthful bids.\nIn the context of path auctions this topic was first addressed by Archer and Tardos [1].\nThey define the frugality ratio of a mechanism as the ratio between its total payment and the cost of the cheapest path disjoint from the path selected by the mechanism.\nThey show that, for a large class of truthful mechanisms for this problem, the frugality ratio is as large as the number of edges in the shortest path.\nTalwar [21] extends this definition of frugality ratio to general set systems, and studies the frugality ratio of the classical VCG mechanism [22, 4, 14] for many specific set systems, such as minimum spanning trees and set covers.\nWhile the definition of frugality ratio proposed by [1] is well-motivated and has been instrumental in studying truthful mechanisms for set systems, it is not completely satisfactory.\nConsider, for example, the graph of Figure 1 (the diamond graph, with vertices A, B, C, D) with the costs cAB = cBC = cCD = 0, cAC = cBD = 1.\nThis graph is 2-connected and the VCG payment to the winning path ABCD is bounded.\nHowever, the graph contains no A-D path that is disjoint from ABCD, and hence the frugality ratio of VCG on this graph remains undefined.\nAt the same time, there is no monopoly, that is, there is no vendor that appears in all feasible sets.\nIn auctions for other types of set systems, the requirement that there exist a feasible solution disjoint from the selected one is even more severe: for example, for vertex-cover auctions (where vendors correspond to the vertices of some underlying graph, and the feasible sets are vertex covers) the requirement means that the graph must be bipartite.\nTo deal with this problem, Karlin et al.
[16] suggest a better benchmark, which is defined for any monopoly-free set system.\nThis quantity, which they denote by \u03bd, intuitively corresponds to the value of a cheapest Nash equilibrium.\nBased on this new definition, the authors construct new mechanisms for the shortest path problem and show that the overpayment of these mechanisms is within a constant factor of optimal.\n1.1 Our results Vertex cover auctions We propose a truthful polynomial-time auction for vertex cover that outputs a solution whose cost is within a factor of 2 of optimal, and whose frugality ratio is at most 2\u0394, where \u0394 is the maximum degree of the graph (Theorem 4).\nWe complement this result by proving (Theorem 5) that for any \u0394 and n, there are graphs of maximum degree \u0394 and size \u0398(n) for which any truthful mechanism has frugality ratio at least \u0394\/2.\nThis means that the solution quality of our auction is within a factor of 2 of optimal and the frugality ratio is within a factor of 4 of the best possible bound for worst-case inputs.\nTo the best of our knowledge, this is the first auction for this problem that enjoys these properties.\nMoreover, we show how to transform any truthful mechanism for the vertex-cover problem into a frugal one while preserving the approximation ratio.\nFrugality ratios Our vertex cover results naturally suggest two modifications of the definition of \u03bd in [16].\nThese modifications can be made independently of each other, resulting in four different payment bounds TUmax, TUmin, NTUmax, and NTUmin, where NTUmin is equal to the original payment bound \u03bd of [16].\nAll four payment bounds arise as Nash equilibria of certain games (see the full version of this paper [8]); the differences between them can be seen as the price of initiative and the price of cooperation (see Section 3).\nWhile our main result about vertex cover auctions (Theorem 4) is with respect to NTUmin = \u03bd, we make use of the new definitions by
first comparing the payment of our mechanism to a weaker bound NTUmax, and then bootstrapping from this result to obtain the desired bound.\nInspired by this application, we embark on a further study of these payment bounds.\nOur results here are as follows: 1.\nWe observe (Proposition 1) that the four payment bounds always obey a particular order that is independent of the choice of the set system and the cost vector, namely, TUmin \u2264 NTUmin \u2264 NTUmax \u2264 TUmax.\nWe provide examples (Proposition 5 and Corollaries 1 and 2) showing that for the vertex cover problem any two consecutive bounds can differ by a factor of n \u2212 2, where n is the number of agents.\nWe then show (Theorem 2) that this separation is almost best possible for general set systems by proving that for any set system TUmax\/TUmin \u2264 n.\nIn contrast, we demonstrate (Theorem 3) that for path auctions TUmax\/TUmin \u2264 2.\nWe provide examples (Propositions 2, 3 and 4) showing that this bound is tight.\nWe see this as an argument for the study of vertex-cover auctions, as they appear to be more representative of the general team-selection problem than the widely studied path auctions.\n2.\nWe show (Theorem 1) that for any set system, if there is a cost vector for which TUmin and NTUmin differ by a factor of \u03b1, there is another cost vector that separates NTUmin and NTUmax by the same factor and vice versa; the same is true for the pairs (NTUmin, NTUmax) and (NTUmax, TUmax).\nThis symmetry is quite surprising, since, e.g., TUmin and NTUmax are obtained from NTUmin by two very different transformations.\nThis observation suggests that the four payment bounds should be studied in a unified framework; moreover, it leads us to believe that the bootstrapping technique of Theorem 4 may have other applications.\n3.\nWe evaluate the payment bounds introduced here with respect to a checklist of desirable features.\nIn particular, we note that the payment bound \u03bd = NTUmin of [16]
exhibits some counterintuitive properties, such as nonmonotonicity with respect to adding a new feasible set (Proposition 7), and is NP-hard to compute (Theorem 6), while some of the other payment bounds do not suffer from these problems.\nThis can be seen as an argument in favour of using weaker but efficiently computable bounds NTUmax and TUmax.\nRelated work Vertex-cover auctions have been studied in the past by Talwar [21] and Calinescu [5].\nBoth of these papers are based on the definition of frugality ratio used in [1]; as mentioned before, this means that their results only apply to bipartite graphs.\nTalwar [21] shows that the frugality ratio of VCG is at most \u0394.\nHowever, since finding the cheapest vertex cover is an NP-hard problem, the VCG mechanism is computationally infeasible.\nThe first (and, to the best of our knowledge, only) paper to investigate polynomial-time truthful mechanisms for vertex cover is [5].\nThis paper studies an auction that is based on the greedy allocation algorithm, which has an approximation ratio of log n.\nWhile the main focus of [5] is the more general set cover problem, the results of [5] imply a frugality ratio of 2\u0394\u00b2 for vertex cover.\nOur results improve on those of [21] as our mechanism is polynomial-time computable, as well as on those of [5], as our mechanism has a better approximation ratio, and we prove a stronger bound on the frugality ratio; moreover, this bound also applies to the mechanism of [5].\n2.\nPRELIMINARIES In most of this paper, we discuss auctions for set systems.\nA set system is a pair (E, F), where E is the ground set, |E| = n, and F is a collection of feasible sets, which are subsets of E.
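As a small illustration of the set-system definition just given (our own toy example, not from the paper), the vertex-cover set system of a triangle graph can be enumerated by brute force:

```python
from itertools import combinations

# Ground set E = vertices of a triangle; feasible sets F = all vertex covers.
E = ['u', 'v', 'w']
edges = [('u', 'v'), ('v', 'w'), ('u', 'w')]

F = [set(s)
     for r in range(len(E) + 1)
     for s in combinations(E, r)
     if all(a in s or b in s for a, b in edges)]   # s touches every edge

# every vertex cover of a triangle needs at least two vertices
assert all(len(S) >= 2 for S in F) and len(F) == 4
```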
Two particular types of set systems are of interest to us - shortest path systems, in which the ground set consists of all edges of a network, and the feasible sets are paths between two specified vertices s and t, and vertex cover systems, in which the elements of the ground set are the vertices of a graph, and the feasible sets are vertex covers of this graph.\nIn set system auctions, each element e of the ground set is owned by an independent agent and has an associated non-negative cost ce.\nThe goal of the centre is to select (purchase) a feasible set.\nEach element e in the selected set incurs a cost of ce.\nThe elements that are not selected incur no costs.\nThe auction proceeds as follows: all elements of the ground set make their bids, the centre selects a feasible set based on the bids and makes payments to the agents.\nFormally, an auction is defined by an allocation rule A : Rn \u2192 F and a payment rule P : Rn \u2192 Rn .\nThe allocation rule takes as input a vector of bids and decides which of the sets in F should be selected.\nThe payment rule also takes as input a vector of bids and decides how much to pay to each agent.\nThe standard requirements are individual rationality, i.e., the payment to each agent should be at least as high as his incurred cost (0 for agents not in the selected set and ce for agents in the selected set) and incentive compatibility, or truthfulness, i.e., each agent's dominant strategy is to bid his true cost.\nAn allocation rule is monotone if an agent cannot increase his chance of getting selected by raising his bid.\nFormally, for any bid vector b and any e \u2208 E, if e \u2208 A(b) then e \u2208 A(b1, ... , be\u2032 , ... , bn) for any be\u2032 < be.\nGiven a monotone allocation rule A and a bid vector b, the threshold bid te of an agent e \u2208 A(b) is the highest bid of this agent that still wins the auction, given that the bids of other participants remain the same.\nFormally, te = sup{be\u2032 \u2208 R | e \u2208 A(b1, ...
, be\u2032 , ... , bn)}.\nIt is well known (see, e.g. [19, 13]) that any auction that has a monotone allocation rule and pays each agent his threshold bid is truthful; conversely, any truthful auction has a monotone allocation rule.\nThe VCG mechanism is a truthful mechanism that maximises the social welfare and pays 0 to the losing agents.\nFor set system auctions, this simply means picking a cheapest feasible set, paying each agent in the selected set his threshold bid, and paying 0 to all other agents.\nNote, however, that the VCG mechanism may be difficult to implement, since finding a cheapest feasible set may be intractable.\nIf U is a set of agents, c(U) denotes \u03a3w\u2208U cw.\nSimilarly, b(U) denotes \u03a3w\u2208U bw.\n3.\nFRUGALITY RATIOS We start by reproducing the definition of the quantity \u03bd from [16, Definition 4].\nLet (E, F) be a set system and let S be a cheapest feasible set with respect to the true costs ce.\nThen \u03bd(c, S) is the solution to the following optimisation problem.\nMinimise B = \u03a3e\u2208S be subject to (1) be \u2265 ce for all e \u2208 E (2) \u03a3e\u2208S\\T be \u2264 \u03a3e\u2208T\\S ce for all T \u2208 F (3) for every e \u2208 S, there is a Te \u2208 F such that e \u2209 Te and \u03a3e\u2032\u2208S\\Te be\u2032 = \u03a3e\u2032\u2208Te\\S ce\u2032 The bound \u03bd(c, S) can be seen as an outcome of a two-stage process, where first each agent e \u2208 S makes a bid be stating how much it wants to be paid, and then the centre decides whether to accept these bids.\nThe behaviour of both parties is affected by the following considerations.\nFrom the centre's point of view, the set S must remain the most attractive choice, i.e., it must be among the cheapest feasible sets under the new costs ce\u2032 = ce for e \u2209 S, ce\u2032 = be for e \u2208 S (condition (2)).\nThe reason for that is that if (2) is violated for some set T, the centre would prefer T to S.
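A minimal brute-force sketch of the VCG mechanism for set-system auctions described above, on a hypothetical vertex-cover system for the path a - b - c (the names and numbers are ours; this enumeration is exponential in general, matching the intractability caveat):

```python
# Feasible sets = vertex covers of the path a - b - c (edges ab and bc).
feasible = [{'b'}, {'a', 'c'}, {'a', 'b'}, {'b', 'c'}, {'a', 'b', 'c'}]
bids = {'a': 2.0, 'b': 3.0, 'c': 2.0}

cost = lambda S: sum(bids[e] for e in S)
# pick a cheapest feasible set (lexicographically least on ties)
win = min(feasible, key=lambda S: (cost(S), sorted(S)))

payments = {}
for e in win:
    # cheapest feasible set avoiding e (monopoly-freeness assumed)
    alt = min(cost(S) for S in feasible if e not in S)
    # cheapest set containing e, not counting e's own bid
    rest = min(cost(S) - bids[e] for S in feasible if e in S)
    payments[e] = alt - rest          # e's threshold bid

assert win == {'b'} and payments == {'b': 4.0}
```

Here {b} (cost 3) wins and b is paid his threshold bid of 4, the point at which the alternative cover {a, c} would become cheaper.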
On the other hand, no agent would agree to a payment that does not cover his costs (condition (1)), and moreover, each agent tries to maximise his profit by bidding as high as possible, i.e., none of the agents can increase his bid without violating condition (2) (condition (3)).\nThe centre wants to minimise the total payout, so \u03bd(c, S) corresponds to the best possible outcome from the centre's point of view.\nThis definition captures many important aspects of our intuition about 'fair' payments.\nHowever, it can be modified in two ways, both of which are still quite natural, but result in different payment bounds.\nFirst, we can consider the worst rather than the best possible outcome for the centre.\nThat is, we can consider the maximum total payment that the agents can extract by jointly selecting their bids subject to (1), (2), and (3).\nSuch a bound corresponds to maximising B subject to (1), (2), and (3) rather than minimising it.\nIf it is the agents who make the original bids (rather than the centre), this kind of bidding behaviour is plausible.\nOn the other hand, in a game in which the centre proposes payments to the agents in S and the agents accept them as long as (1), (2) and (3) are satisfied, we would be likely to observe a total payment of \u03bd(c, S).\nHence, the difference between these two definitions can be seen as the price of initiative.\nSecond, the agents may be able to make payments to each other.\nIn this case, if they can extract more money from the centre by agreeing on a vector of bids that violates individual rationality (i.e., condition (1)) for some bidders, they might be willing to do so, as the agents who are paid below their costs will be compensated by other members of the group.\nThe bids must still be realistic, i.e., they have to satisfy be \u2265 0.\nThe resulting change in payments can be seen as the price of co-operation and corresponds to replacing condition (1) with the following weaker condition (1\u2217 ): be
≥ 0 for all e ∈ E. (1*)
By considering all possible combinations of these modifications, we obtain four different payment bounds, namely
• TUmin(c, S), which is the solution to the optimisation problem: Minimise B subject to (1*), (2), and (3).
• TUmax(c, S), which is the solution to the optimisation problem: Maximise B subject to (1*), (2), and (3).
• NTUmin(c, S), which is the solution to the optimisation problem: Minimise B subject to (1), (2), and (3).
• NTUmax(c, S), which is the solution to the optimisation problem: Maximise B subject to (1), (2), and (3).
The abbreviations TU and NTU correspond, respectively, to transferable utility and non-transferable utility, i.e., the agents' ability/inability to make payments to each other. For concreteness, we will take TUmin(c) to be TUmin(c, S) where S is the lexicographically least amongst the cheapest feasible sets. We define TUmax(c), NTUmin(c), NTUmax(c) and ν(c) similarly, though we will see in Section 6.3 that, in fact, NTUmin(c, S) and NTUmax(c, S) are independent of the choice of S.
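To make these definitions concrete, the two maximisation bounds can be computed directly on a small instance: as noted in Remark 1 below, condition (3) is redundant for TUmax and NTUmax, leaving a linear program over conditions (1) or (1*) and (2). The sketch below uses the costs of Example 1 below and assumes that the Figure 1 graph is the diamond A-B-C-D with chords AC and BD (an assumption inferred from the constraints written out in Propositions 2-4); a brute-force integer grid search stands in for a proper LP solver, which is valid here only because the optima happen to be integral.

```python
from itertools import product

# Assumed shortest-path auction: vertices A, B, C, D; the feasible sets are
# the edge sets of the three A-D paths.  S = {AB, BC, CD} is the cheapest set.
costs = {"AB": 2, "BC": 1, "CD": 2, "AC": 5, "BD": 5}
feasible = [{"AB", "BC", "CD"}, {"AB", "BD"}, {"AC", "CD"}]
S = {"AB", "BC", "CD"}

def satisfies_2(bids):
    # Condition (2): for every feasible T, the bids on S\T must not
    # exceed the true costs on T\S.
    return all(sum(bids[e] for e in S - T) <= sum(costs[e] for e in T - S)
               for T in feasible)

def max_bound(lower):
    # Maximise B subject to (2) and per-agent lower bounds on the bids;
    # condition (3) is redundant for the maximisation bounds.
    return max(sum(b) for b in product(range(6), repeat=3)
               if all(bi >= lower[e] for bi, e in zip(b, sorted(S)))
               and satisfies_2(dict(zip(sorted(S), b))))

print(max_bound({e: 0 for e in S}))         # condition (1*) -> 10
print(max_bound({e: costs[e] for e in S}))  # condition (1)  -> 9
```

The two printed values match TUmax(c) = 10 and NTUmax(c) = 9 from Example 1; tightening the lower bounds from 0 to the true costs is exactly the step from (1*) to (1).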
Note that the quantity ν(c) from [16] is NTUmin(c). The second modification (transferable utility) is more intuitively appealing in the context of the maximisation problem, as both assume some degree of co-operation between the agents. While the second modification can be made without the first, the resulting payment bound TUmin(c, S) is too strong to be a realistic benchmark, at least for general set systems. In particular, it can be smaller than the total cost of the cheapest feasible set S (see Section 6). Nevertheless, we provide the definition as well as some results about TUmin(c, S) in the paper, both for completeness and because we believe that it may help to understand which properties of the payment bounds are important for our proofs. Another possibility would be to introduce an additional constraint Σ_{e∈S} b_e ≥ Σ_{e∈S} c_e in the definition of TUmin(c, S) (note that this condition holds automatically for TUmax(c, S), as TUmax(c, S) ≥ NTUmax(c, S)); however, such a definition would have no direct game-theoretic interpretation, and some of our results (in particular, the ones in Section 4) would no longer be true.
REMARK 1. For the payment bounds that are derived from maximisation problems (i.e., TUmax(c, S) and NTUmax(c, S)), constraints of type (3) are redundant and can be dropped. Hence, TUmax(c, S) and NTUmax(c, S) are solutions to linear programs, and therefore can be computed in polynomial time as long as we have a separation oracle for the constraints in (2). In contrast, NTUmin(c, S) can be NP-hard to compute even if the size of F is polynomial (see Section 6).
The first and third inequalities in the following observation follow from the fact that condition (1*) is strictly weaker than condition (1).
PROPOSITION 1. TUmin(c, S) ≤ NTUmin(c, S) ≤ NTUmax(c, S) ≤ TUmax(c, S).
Let M be a truthful mechanism for (E, F). Let pM(c) denote the total payments of M when the actual costs are c. A
frugality ratio of M with respect to a payment bound is the ratio between the payment of M and this payment bound. In particular,
φ_TUmin(M) = sup_c pM(c)/TUmin(c),
φ_TUmax(M) = sup_c pM(c)/TUmax(c),
φ_NTUmin(M) = sup_c pM(c)/NTUmin(c),
φ_NTUmax(M) = sup_c pM(c)/NTUmax(c).
We conclude this section by showing that there exist set systems and respective cost vectors for which all four payment bounds are different. In the next section, we quantify this difference, both for general set systems, and for specific types of set systems, such as path auctions or vertex cover auctions.
EXAMPLE 1. Consider the shortest-path auction on the graph of Figure 1. The cheapest feasible sets are all paths from A to D. It can be verified, using the reasoning of Propositions 2 and 3 below, that for the cost vector c_AB = c_CD = 2, c_BC = 1, c_AC = c_BD = 5, we have
• TUmax(c) = 10 (with b_AB = b_CD = 5, b_BC = 0),
• NTUmax(c) = 9 (with b_AB = b_CD = 4, b_BC = 1),
• NTUmin(c) = 7 (with b_AB = b_CD = 2, b_BC = 3),
• TUmin(c) = 5 (with b_AB = b_CD = 0, b_BC = 5).
4. COMPARING PAYMENT BOUNDS
4.1 Path auctions
We start by showing that for path auctions any two consecutive payment bounds can differ by at least a factor of 2.
PROPOSITION 2. There is an instance of the shortest-path problem for which we have NTUmax(c)/NTUmin(c) ≥ 2.
PROOF. This construction is due to David Kempe [17]. Consider the graph of Figure 1 with the edge costs c_AB = c_BC = c_CD = 0, c_AC = c_BD = 1. Under these costs, ABCD is the cheapest path. The inequalities in (2) are b_AB + b_BC ≤ c_AC = 1, b_BC + b_CD ≤ c_BD = 1. By condition (3), both of these inequalities must be tight (the former one is the only inequality involving b_AB, and the latter one is the only inequality involving b_CD). The inequalities in (1) are b_AB ≥ 0, b_BC ≥ 0, b_CD ≥ 0. Now, if the goal is to maximise b_AB + b_BC + b_CD, the best choice is b_AB = b_CD = 1, b_BC = 0, so NTUmax(c) = 2. On the
other hand, if the goal is to minimise b_AB + b_BC + b_CD, one should set b_AB = b_CD = 0, b_BC = 1, so NTUmin(c) = 1.
PROPOSITION 3. There is an instance of the shortest-path problem for which we have TUmax(c)/NTUmax(c) ≥ 2.
PROOF. Again, consider the graph of Figure 1. Let the edge costs be c_AB = c_CD = 0, c_BC = 1, c_AC = c_BD = 1. ABCD is the lexicographically-least cheapest path, so we can assume that S = {AB, BC, CD}. The inequalities in (2) are the same as in the previous example, and by the same argument both of them are, in fact, equalities. The inequalities in (1) are b_AB ≥ 0, b_BC ≥ 1, b_CD ≥ 0. Our goal is to maximise b_AB + b_BC + b_CD. If we have to respect the inequalities in (1), we have to set b_AB = b_CD = 0, b_BC = 1, so NTUmax(c) = 1. Otherwise, we can set b_AB = b_CD = 1, b_BC = 0, so TUmax(c) ≥ 2.
PROPOSITION 4. There is an instance of the shortest-path problem for which we have NTUmin(c)/TUmin(c) ≥ 2.
PROOF. This construction is also based on the graph of Figure 1. The edge costs are c_AB = c_CD = 1, c_BC = 0, c_AC = c_BD = 1. ABCD is the lexicographically least cheapest path, so we can assume that S = {AB, BC, CD}. Again, the inequalities in (2) are the same, and both are, in fact, equalities. The inequalities in (1) are b_AB ≥ 1, b_BC ≥ 0, b_CD ≥ 1. Our goal is to minimise b_AB + b_BC + b_CD. If we have to respect the inequalities in (1), we have to set b_AB = b_CD = 1, b_BC = 0, so NTUmin(c) = 2. Otherwise, we can set b_AB = b_CD = 0, b_BC = 1, so TUmin(c) ≤ 1.
In Section 4.4 (Theorem 3), we show that the separation results in Propositions 2, 3, and 4 are optimal.
4.2 Connections between separation results
The separation results for path auctions are obtained on the same graph using very similar cost vectors. It turns out that this is not coincidental. Namely, we can prove the following theorem.
THEOREM 1. For any set system (E, F), and any feasible set S,
max_c TUmax(c, S)/NTUmax(c, S) = max_c
NTUmax(c, S)/NTUmin(c, S),
max_c NTUmax(c, S)/NTUmin(c, S) = max_c NTUmin(c, S)/TUmin(c, S),
where the maximum is over all cost vectors c for which S is a cheapest feasible set.
The proof of the theorem follows directly from the four lemmas proved below; more precisely, the first equality in Theorem 1 is obtained by combining Lemmas 1 and 2, and the second equality is obtained by combining Lemmas 3 and 4. We prove Lemma 1 here; the proofs of Lemmas 2-4 are similar and can be found in the full version of this paper [8].
LEMMA 1. Suppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and TUmax(c, S)/NTUmax(c, S) = α. Then there is a cost vector c' such that S is a cheapest feasible set and NTUmax(c', S)/NTUmin(c', S) ≥ α.
PROOF. Suppose that TUmax(c, S) = X and NTUmax(c, S) = Y, where X/Y = α. Assume without loss of generality that S consists of elements 1, ..., k, and let b1 = (b1_1, ..., b1_k) and b2 = (b2_1, ..., b2_k) be the bid vectors that correspond to TUmax(c, S) and NTUmax(c, S), respectively. Construct the cost vector c' by setting c'_i = c_i for i ∉ S, c'_i = min{c_i, b1_i} for i ∈ S.
Clearly, S is a cheapest set under c'. Moreover, as the costs of elements outside of S remained the same, the right-hand sides of all constraints in (2) did not change, so any bid vector that satisfies (2) and (3) with respect to c also satisfies them with respect to c'. We will construct two bid vectors b3 and b4 that satisfy conditions (1), (2), and (3) for the cost vector c', and have Σ_{i∈S} b3_i = X, Σ_{i∈S} b4_i = Y. As NTUmax(c', S) ≥ X and NTUmin(c', S) ≤ Y, this implies the lemma.
We can set b3_i = b1_i: this bid vector satisfies conditions (2) and (3) since b1 does, and we have b1_i ≥ min{c_i, b1_i} = c'_i, which means that b3 satisfies condition (1). Furthermore, we can set b4_i = b2_i. Again, b4 satisfies conditions (2) and (3) since b2 does, and since b2 satisfies condition (1), we have b2_i ≥ c_i ≥ c'_i, which means that b4 satisfies condition (1).
[Figure 2: Graph that separates payment bounds for vertex cover, n = 7]
LEMMA 2. Suppose c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmax(c, S)/NTUmin(c, S) = α. Then there is a cost vector c' such that S is a cheapest feasible set and TUmax(c', S)/NTUmax(c', S) ≥ α.
LEMMA 3. Suppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmax(c, S)/NTUmin(c, S) = α. Then there is a cost vector c' such that S is a cheapest feasible set and NTUmin(c', S)/TUmin(c', S) ≥ α.
LEMMA 4. Suppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmin(c, S)/TUmin(c, S) = α. Then there is a cost vector c' such that S is a cheapest feasible set and NTUmax(c', S)/NTUmin(c', S) ≥ α.
4.3 Vertex-cover auctions
In contrast to the case of path auctions, for vertex-cover auctions the gap between NTUmin(c) and NTUmax(c) (and hence between NTUmax(c) and TUmax(c), and between TUmin(c) and NTUmin(c)) can be proportional to
the size of the graph.
PROPOSITION 5. For any n ≥ 3, there is an n-vertex graph and a cost vector c for which TUmax(c)/NTUmax(c) ≥ n − 2.
PROOF. The underlying graph consists of an (n − 1)-clique on the vertices X1, ..., X_{n−1}, and an extra vertex X0 adjacent to X_{n−1}. The costs are c_{X1} = c_{X2} = · · · = c_{X_{n−2}} = 0, c_{X0} = c_{X_{n−1}} = 1. We can assume that S = {X0, X1, ..., X_{n−2}} (this is the lexicographically first vertex cover of cost 1). For this set system, the constraints in (2) are b_{Xi} + b_{X0} ≤ c_{X_{n−1}} = 1 for i = 1, ..., n − 2. Clearly, we can satisfy conditions (2) and (3) by setting b_{Xi} = 1 for i = 1, ..., n − 2, b_{X0} = 0. Hence, TUmax(c) ≥ n − 2. For NTUmax(c), there is an additional constraint b_{X0} ≥ 1, so the best we can do is to set b_{Xi} = 0 for i = 1, ..., n − 2, b_{X0} = 1, which implies NTUmax(c) = 1.
Combining Proposition 5 with Lemmas 1 and 3, we derive the following corollaries.
COROLLARY 1. For any n ≥ 3, we can construct an instance of the vertex cover problem on a graph of size n that satisfies NTUmax(c)/NTUmin(c) ≥ n − 2.
COROLLARY 2. For any n ≥ 3, we can construct an instance of the vertex cover problem on a graph of size n that satisfies NTUmin(c)/TUmin(c) ≥ n − 2.
[Figure 3: Proof of Theorem 3: constraints for P̂_{ij} and P̂_{ij+2} do not overlap]
4.4 Upper bounds
It turns out that the lower bound proved in the previous subsection is almost tight. More precisely, the following theorem shows that no two payment bounds can differ by more than a factor of n; moreover, this is the case not just for the vertex cover problem, but for general set systems. We bound the gap between TUmax(c) and TUmin(c). Since TUmin(c) ≤ NTUmin(c) ≤ NTUmax(c) ≤ TUmax(c), this bound applies to any pair of payment
bounds.
THEOREM 2. For any set system (E, F) and any cost vector c, we have TUmax(c)/TUmin(c) ≤ n.
PROOF. Assume wlog that the winning set S consists of elements 1, ..., k. Let c_1, ..., c_k be the true costs of elements in S, let b_1, ..., b_k be their bids that correspond to TUmin(c), and let b'_1, ..., b'_k be their bids that correspond to TUmax(c). Consider the conditions (2) and (3) for S. One can pick a subset L of at most k inequalities in (2) so that for each i = 1, ..., k there is at least one inequality in L that is tight for b_i. Suppose that the jth inequality in L is of the form b_{i1} + · · · + b_{it} ≤ c(T_j \ S). For b_i, all inequalities in L are, in fact, equalities. Hence, by adding up all of them we obtain k Σ_{i=1,...,k} b_i ≥ Σ_{j=1,...,k} c(T_j \ S). On the other hand, all these inequalities appear in condition (2), so they must hold for b'_i, i.e., Σ_{i=1,...,k} b'_i ≤ Σ_{j=1,...,k} c(T_j \ S). Combining these two inequalities, we obtain nTUmin(c) ≥ kTUmin(c) ≥ TUmax(c).
REMARK 2. The final line of the proof of Theorem 2 shows that, in fact, the upper bound on TUmax(c)/TUmin(c) can be strengthened to the size of the winning set, k. Note that in Proposition 5, as well as in Corollaries 1 and 2, k = n − 1, so these results do not contradict each other.
For path auctions, this upper bound can be improved to 2, matching the lower bounds of Section 4.1.
THEOREM 3. For any instance of the shortest path problem, TUmax(c) ≤ 2 TUmin(c).
PROOF. Given a network (G, s, t), assume without loss of generality that the lexicographically-least cheapest s-t path, P, in G is {e_1, ..., e_k}, where e_1 = (s, v_1), e_2 = (v_1, v_2), ..., e_k = (v_{k−1}, t). Let c_1, ..., c_k be the true costs of e_1, ..., e_k, and let b = (b_1, ..., b_k) and b' = (b'_1, ..., b'_k) be bid vectors that correspond to TUmin(c) and TUmax(c), respectively. For any i = 1, ...
, k, there is a constraint in (2) that is tight for b_i with respect to the bid vector b, i.e., an s-t path P_i that avoids e_i and satisfies b(P \ P_i) = c(P_i \ P). We can assume without loss of generality that P_i coincides with P up to some vertex x_i, then deviates from P to avoid e_i, and finally returns to P at a vertex y_i and coincides with P from then on (clearly, it might happen that s = x_i or t = y_i). Indeed, if P_i deviates from P more than once, one of these deviations is not necessary to avoid e_i and can be replaced with the respective segment of P without increasing the cost of P_i. Among all paths of this form, let P̂_i be the one with the largest value of y_i, i.e., the rightmost one. This path corresponds to an inequality I_i of the form b_{x_i+1} + · · · + b_{y_i} ≤ c(P̂_i \ P).
As in the proof of Theorem 2, we construct a set of tight constraints L such that every variable b_i appears in at least one of these constraints; however, now we have to be more careful about the choice of constraints in L. We construct L inductively as follows. Start by setting L = {I_1}. At the jth step, suppose that all variables up to (but not including) b_{i_j} appear in at least one inequality in L. Add I_{i_j} to L.
Note that for any j we have y_{i_{j+1}} > y_{i_j}. This is because the inequalities added to L during the first j steps did not cover b_{i_{j+1}}. See Figure 3. Since y_{i_{j+2}} > y_{i_{j+1}}, we must also have x_{i_{j+2}} > y_{i_j}: otherwise, P̂_{i_{j+1}} would not be the rightmost constraint for b_{i_{j+1}}. Therefore, the variables in I_{i_{j+2}} and I_{i_j} do not overlap, and hence no b_i can appear in more than two inequalities in L.
Now we follow the argument of the proof of Theorem 2 to finish. By adding up all of the (tight) inequalities in L for b_i we obtain 2 Σ_{i=1,...,k} b_i ≥ Σ_{j=1,...,k} c(P̂_j \ P). On the other hand, all these inequalities appear in condition (2), so they must hold for b'_i, i.e., Σ_{i=1,...,k} b'_i ≤ Σ_{j=1,...,k} c(P̂_j \ P), so TUmax(c) ≤ 2TUmin(c).
5. TRUTHFUL MECHANISMS FOR VERTEX COVER
Recall that for a vertex-cover auction on a graph G = (V, E), an allocation rule is an algorithm that takes as input a bid b_v for each vertex and returns a vertex cover Ŝ of G. As explained in Section 2, we can combine a monotone allocation rule with threshold payments to obtain a truthful auction. Two natural examples of monotone allocation rules are A_opt, i.e., the algorithm that finds an optimal vertex cover, and the greedy algorithm A_GR. However, A_opt cannot be guaranteed to run in polynomial time unless P = NP, and A_GR has an approximation ratio of log n.
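The local-ratio algorithm A_LR, described in the following paragraph, is simple enough to sketch in full. The snippet below is a minimal illustration (the graph and bid values are made up); the order of the edge list plays the role of the fixed, bid-independent processing order that monotonicity requires:

```python
def local_ratio_vertex_cover(edges, bids):
    """Sketch of the local-ratio 2-approximation for weighted vertex cover:
    process edges in a fixed order, subtract eps = min of the two endpoints'
    residual bids from both, and return the zero-residual vertices."""
    residual = dict(bids)
    for u, v in edges:                      # fixed, bid-independent order
        eps = min(residual[u], residual[v])
        residual[u] -= eps
        residual[v] -= eps
    return {v for v, r in residual.items() if r == 0}

# Path a-b-c with a cheap middle vertex (illustrative bids).
cover = local_ratio_vertex_cover([("a", "b"), ("b", "c")],
                                 {"a": 3, "b": 1, "c": 3})
print(sorted(cover))  # ['b']
```

Processing edge (a, b) first drives b's residual to zero, so the cheap middle vertex is selected and both edges are covered at cost 1, which here is optimal and in general is within the factor-2 guarantee.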
Another approximation algorithm for vertex cover, which has approximation ratio 2, is the local ratio algorithm A_LR [2, 3]. This algorithm considers the edges of G one by one. Given an edge e = (u, v), it computes ε = min{b_u, b_v} and sets b_u = b_u − ε, b_v = b_v − ε. After all edges have been processed, A_LR returns the set of vertices {v | b_v = 0}. It is not hard to check that if the order in which the edges are considered is independent of the bids, then this algorithm is monotone as well. Hence, we can use it to construct a truthful auction that is guaranteed to select a vertex cover whose cost is within a factor of 2 from the optimal. However, while the quality of the solution produced by A_LR is much better than that of A_GR, we still need to show that its total payment is not too high. In the next subsection, we bound the frugality ratio of A_LR (and, more generally, of all algorithms that satisfy the condition of local optimality, defined later) by 2Δ, where Δ is the maximum degree of G. We then prove a matching lower bound showing that for some graphs the frugality ratio of any truthful auction is at least Δ/2.
5.1 Upper bound
We say that an allocation rule is locally optimal if whenever b_v > Σ_{w∼v} b_w, the vertex v is not chosen. Note that for any such rule the threshold bid of v satisfies t_v ≤ Σ_{w∼v} b_w.
CLAIM 1. The algorithms A_opt, A_GR, and A_LR are locally optimal.
THEOREM 4. Any vertex cover auction M that has a locally optimal and monotone allocation rule and pays each agent his threshold bid has frugality ratio φ_NTUmin(M) ≤ 2Δ.
To prove Theorem 4, we first show that the total payment of any locally optimal mechanism does not exceed Δc(V). We then demonstrate that NTUmin(c) ≥ c(V)/2. By combining these two results, the theorem follows.
LEMMA 5. Consider a graph G = (V, E) with maximum degree Δ. Let M be a vertex-cover auction on G that satisfies the conditions of Theorem
4. Then for any cost vector c, the total payment of M satisfies pM(c) ≤ Δc(V).
PROOF. First note that any such auction is truthful, so we can assume that each agent's bid is equal to his cost. Let Ŝ be the vertex cover selected by M. Then by local optimality pM(c) = Σ_{v∈Ŝ} t_v ≤ Σ_{v∈Ŝ} Σ_{w∼v} c_w ≤ Σ_{w∈V} Δc_w = Δc(V).
We now derive a lower bound on TUmax(c); while not essential for the proof of Theorem 4, it helps us build the intuition necessary for that proof.
LEMMA 6. For a vertex cover instance G = (V, E) in which S is a minimum vertex cover, TUmax(c, S) ≥ c(V \ S).
PROOF. For a vertex w with at least one neighbour in S, let d(w) denote the number of neighbours that w has in S. Consider the bid vector b in which, for each v ∈ S, b_v = Σ_{w∼v, w∉S} c_w/d(w). Then Σ_{v∈S} b_v = Σ_{v∈S} Σ_{w∼v, w∉S} c_w/d(w) = Σ_{w∉S} c_w = c(V \ S). To finish, we want to show that b is feasible in the sense that it satisfies (2). Consider a vertex cover T, and extend the bid vector b by assigning b_v = c_v for v ∉ S. Then b(T) = c(T \ S) + b(S ∩ T) ≥ c(T \ S) + Σ_{v∈S∩T} Σ_{w∼v, w∉S∪T} c_w/d(w), and since all edges between V \ (S ∪ T) and S go to S ∩ T, the right-hand side is equal to c(T \ S) + Σ_{w∉S∪T} c_w = c(T \ S) + c(V \ (S ∪ T)) = c(V \ S) = b(S).
Next, we prove a lower bound on NTUmax(c, S); we will then use it to obtain a lower bound on NTUmin(c).
LEMMA 7. For a vertex cover instance G = (V, E) in which S is a minimum vertex cover, NTUmax(c, S) ≥ c(V \ S).
PROOF. If c(S) ≥ c(V \ S), by condition (1) we are done. Therefore, for the rest of the proof we assume that c(S) < c(V \ S). We show how to construct a bid vector (b_e)_{e∈S} that satisfies conditions (1) and (2) such that b(S) ≥ c(V \ S); clearly, this implies NTUmax(c, S) ≥ c(V \ S).
Recall that a network flow problem is described by a
directed graph Γ = (V_Γ, E_Γ), a source node s ∈ V_Γ, a sink node t ∈ V_Γ, and a vector of capacity constraints a_e, e ∈ E_Γ. Consider a network (V_Γ, E_Γ) such that V_Γ = V ∪ {s, t}, E_Γ = E1 ∪ E2 ∪ E3, where E1 = {(s, v) | v ∈ S}, E2 = {(v, w) | v ∈ S, w ∈ V \ S, (v, w) ∈ E}, E3 = {(w, t) | w ∈ V \ S}. Since S is a vertex cover for G, no edge of E can have both of its endpoints in V \ S, and by construction, E2 contains no edges with both endpoints in S. Therefore, the graph (V, E2) is bipartite with parts (S, V \ S). Set the capacity constraints for e ∈ E_Γ as follows: a_{(s,v)} = c_v, a_{(w,t)} = c_w, a_{(v,w)} = +∞ for all v ∈ S, w ∈ V \ S.
Recall that a cut is a partition of the vertices in V_Γ into two sets C1 and C2 so that s ∈ C1, t ∈ C2; we denote such a cut by C = (C1, C2). Abusing notation, we write e = (u, v) ∈ C if u ∈ C1, v ∈ C2 or u ∈ C2, v ∈ C1, and say that such an edge e = (u, v) crosses the cut C. The capacity of a cut C is computed as cap(C) = Σ_{(v,w)∈C} a_{(v,w)}. We have cap(s, V ∪ {t}) = c(S), cap({s} ∪ V, t) = c(V \ S). Let Cmin = ({s} ∪ S' ∪ W', {t} ∪ S'' ∪ W'') be a minimum cut in Γ, where S', S'' ⊆ S, W', W'' ⊆ V \ S.
See Figure 4. As cap(Cmin) ≤ cap(s, V ∪ {t}) = c(S) < +∞, and any edge in E2 has infinite capacity, no edge (u, v) ∈ E2 crosses Cmin.
Consider the network Γ' = (V_Γ', E_Γ'), where V_Γ' = {s} ∪ S' ∪ W' ∪ {t}, E_Γ' = {(u, v) ∈ E_Γ | u, v ∈ V_Γ'}. Clearly, C' = ({s} ∪ S' ∪ W', {t}) is a minimum cut in Γ' (otherwise, there would exist a smaller cut for Γ). As cap(C') = c(W'), we have c(S') ≥ c(W'). Now, consider the network Γ'' = (V_Γ'', E_Γ''), where V_Γ'' = {s} ∪ S'' ∪ W'' ∪ {t}, E_Γ'' = {(u, v) ∈ E_Γ | u, v ∈ V_Γ''}. Similarly, C'' = ({s}, S'' ∪ W'' ∪ {t}) is a minimum cut in Γ'', cap(C'') = c(S''). As the size of a maximum flow from s to t is equal to the capacity of a minimum cut separating s and t, there exists a flow F = (f_e)_{e∈E_Γ''} of size c(S''). This flow has to saturate all edges between s and S'', i.e., f_{(s,v)} = c_v for all v ∈ S''. Now, increase the capacities of all edges between s and S'' to +∞. In the modified network, the capacity of a minimum cut (and hence the size of a maximum flow) is c(W''), and a maximum flow F' = (f'_e)_{e∈E_Γ''} can be constructed by greedily augmenting F.
Set b_v = c_v for all v ∈ S', b_v = f'_{(s,v)} for all v ∈ S''. As F' is constructed by augmenting F, we have b_v ≥ c_v for all v ∈ S, i.e., condition (1) is satisfied. Now, let us check that no vertex cover T ⊆ V can violate condition (2). Set T1 = T ∩ S', T2 = T ∩ S'', T3 = T ∩ W', T4 = T ∩ W''; our goal is to show that b(S' \ T1) + b(S'' \ T2) ≤ c(T3) + c(T4).
Consider all edges (u, v) ∈ E such that u ∈ S' \ T1. If (u, v) ∈ E2 then v ∈ T3 (no edge in E2 can cross the cut), and if u, v ∈ S then v ∈ T1 ∪ T2. Hence, T1 ∪ T3 ∪ S'' is a vertex cover for G, and therefore c(T1) + c(T3) + c(S'') ≥ c(S) = c(T1) + c(S' \ T1) + c(S''). Consequently, c(T3) ≥ c(S' \ T1) = b(S' \ T1). Now, consider the vertices in S'' \ T2. Any edge in E2 that starts in one of these vertices has to end in T4 (this edge has to be covered by T, and it cannot go across the cut). Therefore, the total flow out of S'' \ T2 is at most the total flow out of T4, i.e., b(S'' \ T2) ≤ c(T4). Hence, b(S' \ T1) + b(S'' \ T2) ≤ c(T3) + c(T4).
Finally, we derive a lower bound on the payment bound that is of interest to us, namely, NTUmin(c).
LEMMA 8. For a vertex cover instance G = (V, E) in which S is a minimum vertex cover, NTUmin(c, S) ≥ c(V \ S).
PROOF. Suppose for contradiction that c is a cost vector with minimum-cost vertex cover S and NTUmin(c, S) < c(V \ S). Let b be the corresponding bid vector and let c' be a new cost vector with c'_v = b_v for v ∈ S and c'_v = c_v for v ∉ S.
Condition (2) guarantees that S is an optimal solution to the cost vector c'. Now compute a bid vector b' corresponding to NTUmax(c', S).
[Figure 4: Proof of Lemma 7. Dashed lines correspond to edges in E \ E2]
We claim that b'_v = c'_v for any v ∈ S. Indeed, suppose that b'_v > c'_v for some v ∈ S (b'_v = c'_v for v ∉ S by construction). As b satisfies conditions (1)-(3), among the inequalities in (2) there is one that is tight for v and the bid vector b. That is, b(S \ T) = c(T \ S). By the construction of c', c'(S \ T) = c'(T \ S). Now since b'_w ≥ c'_w for all w ∈ S, b'_v > c'_v implies b'(S \ T) > c'(S \ T) = c'(T \ S). But this violates (2). So we now know b' = c'. Hence, we have NTUmax(c', S) = Σ_{v∈S} b'_v = NTUmin(c, S) < c(V \ S), giving a contradiction to the fact that NTUmax(c', S) ≥ c'(V \ S), which we proved in Lemma 7.
As NTUmin(c, S) satisfies condition (1), it follows that we have NTUmin(c, S) ≥ c(S). Together with Lemma 8, this implies NTUmin(c, S) ≥ max{c(V \ S), c(S)} ≥ c(V)/2. Combined with Lemma 5, this completes the proof of Theorem 4.
REMARK 3. As NTUmin(c) ≤ NTUmax(c) ≤ TUmax(c), our bound of 2Δ extends to the smaller frugality ratios that we consider, i.e., φ_NTUmax(M) and φ_TUmax(M). It is not clear whether it extends to the larger frugality ratio φ_TUmin(M). However, the frugality ratio φ_TUmin(M) is not realistic
because the payment bound TUmin(c) is inappropriately low; we show in Section 6 that TUmin(c) can be significantly smaller than the total cost of a cheapest vertex cover.
Extensions
We can also apply our results to monotone vertex-cover algorithms that do not necessarily output locally-optimal solutions. To do so, we simply take the vertex cover produced by any such algorithm and transform it into a locally-optimal one, considering the vertices in lexicographic order and replacing a vertex v with its neighbours whenever b_v > Σ_{u∼v} b_u. Note that if a vertex u has been added to the vertex cover during this process, it means that it has a neighbour whose bid is higher than b_u, so after one pass all vertices in the vertex cover satisfy b_v ≤ Σ_{u∼v} b_u. This procedure is monotone in bids, and it can only decrease the cost of the vertex cover. Therefore, using it on top of a monotone allocation rule with approximation ratio α, we obtain a monotone locally-optimal allocation rule with approximation ratio α. Combining it with threshold payments, we get an auction with φ_NTUmin ≤ 2Δ. Since any truthful auction has a monotone allocation rule, this procedure transforms any truthful mechanism for the vertex-cover problem into a frugal one while preserving the approximation ratio.
5.2 Lower bound
In this subsection, we prove that the upper bound of Theorem 4 is essentially optimal. Our proof uses the techniques of [9], where the authors prove a similar result for shortest-path auctions.
THEOREM 5. For any Δ > 0 and any n, there exist a graph G of maximum degree Δ and size N > n such that for any truthful mechanism M on G we have φ_NTUmin(M) ≥ Δ/2.
PROOF. Given n and Δ, set k = ⌈n/(2Δ)⌉. Let G be the graph that consists of k blocks B1, ...
, B_k of size 2Δ each, where each B_i is a complete bipartite graph with parts L_i and R_i, |L_i| = |R_i| = Δ. We will consider two families of cost vectors for G. Under a cost vector x ∈ X, each block B_i has one vertex of cost 1; all other vertices cost 0. Under a cost vector y ∈ Y, there is one block that has two vertices of cost 1, one in each part; all other blocks have one vertex of cost 1, and all other vertices cost 0. Clearly, |X| = (2Δ)^k, |Y| = k(2Δ)^{k−1}Δ².
We will now construct a bipartite graph W with the vertex set X ∪ Y as follows. Consider a cost vector y ∈ Y that has two vertices of cost 1 in B_i; let these vertices be v_l ∈ L_i and v_r ∈ R_i. By changing the cost of either of these vertices to 0, we obtain a cost vector in X. Let x_l and x_r be the cost vectors obtained by changing the cost of v_l and v_r, respectively. The vertex cover chosen by M(y) must either contain all vertices in L_i or it must contain all vertices in R_i. In the former case, we put in W an edge from y to x_l, and in the latter case we put in W an edge from y to x_r (if the vertex cover includes all of B_i, W contains both of these edges). The graph W has at least k(2Δ)^{k−1}Δ² edges, so there must exist an x ∈ X of degree at least kΔ/2. Let y_1, ..., y_{kΔ/2} be the other endpoints of the edges incident to x, and for each i = 1, ..., kΔ/2, let v_i be the vertex whose cost is different under x and y_i; note that all v_i are distinct.
It is not hard to see that NTUmin(x) ≤ k: the cheapest vertex cover contains the all-0 part of each block, and we can satisfy conditions (1)-(3) by letting one of the vertices in the all-0 part of each block bid 1, while all the other vertices in the cheapest set bid 0. On the other hand, by monotonicity of M we have v_i ∈ M(x) for i = 1, ...
, kΔ/2 (vi is in the winning set under yi, and x is obtained from yi by decreasing the cost of vi); moreover, the threshold bid of each vi is at least 1, so the total payment of M on x is at least kΔ/2. Hence, φNTUmin(M) ≥ M(x)/NTUmin(x) ≥ Δ/2.

REMARK 4. The lower bound of Theorem 5 can be generalised to randomised mechanisms, where a randomised mechanism is considered to be truthful if it can be represented as a probability distribution over truthful mechanisms. In this case, instead of choosing the vertex x ∈ X with the highest degree, we put both (y, xl) and (y, xr) into W, label each edge with the probability that the respective part of the block is chosen, and pick x ∈ X with the highest weighted degree. The argument can be further extended to a more permissive definition of truthfulness for randomised mechanisms, but this discussion is beyond the scope of this paper.

6. PROPERTIES OF PAYMENT BOUNDS

In this section we consider several desirable properties of payment bounds and evaluate the four payment bounds proposed in this paper with respect to them. The particular properties that we are interested in are independence of the choice of S (Section 6.3), monotonicity (Section 6.4.1), computational hardness (Section 6.4.2), and the relationship with other reasonable bounds, such as the total cost of the cheapest set (Section 6.1) or the total VCG payment (Section 6.2).

6.1 Comparison with total cost

Our first requirement is that a payment bound should not be less than the total cost of the selected set. Payment bounds are used to evaluate the performance of set-system auctions. The latter have to satisfy individual rationality, i.e., the payment to each agent must be at least as large as his incurred costs; it is only reasonable to require the payment bound to satisfy the same requirement. Clearly, NTUmax(c) and NTUmin(c) satisfy this requirement due to condition (1), and so does TUmax(c), since
TUmax(c) ≥ NTUmax(c). However, TUmin(c) fails this test. The example of Proposition 4 shows that for path auctions, TUmin(c) can be smaller than the total cost by a factor of 2. Moreover, there are set systems and cost vectors for which TUmin(c) is smaller than the cost of the cheapest set S by a factor of Ω(n). Consider, for example, the vertex-cover auction for the graph of Proposition 5 with the costs cX1 = · · · = cXn−2 = cXn−1 = 1, cX0 = 0. The cost of a cheapest vertex cover is n − 2, and the lexicographically first vertex cover of cost n − 2 is {X0, X1, ..., Xn−2}. The constraints in (2) are bXi + bX0 ≤ cXn−1 = 1. Clearly, we can satisfy conditions (2) and (3) by setting bX1 = · · · = bXn−2 = 0, bX0 = 1, which means that TUmin(c) ≤ 1. This observation suggests that the payment bound TUmin(c) is too strong to be realistic, since it can be substantially lower than the cost of the cheapest feasible set.

Nevertheless, some of the positive results that were proved in [16] for NTUmin(c) go through for TUmin(c) as well. In particular, one can show that if the feasible sets are the bases of a monopoly-free matroid, then φTUmin(VCG) = 1. To show that φTUmin(VCG) is at most 1, one must prove that the VCG payment is at most TUmin(c). This is shown for NTUmin(c) in the first paragraph of the proof of Theorem 5 in [16]. Their argument does not use condition (1) at all, so it also applies to TUmin(c). On the other hand, φTUmin(VCG) ≥ 1, since φTUmin(VCG) ≥ φNTUmin(VCG) and φNTUmin(VCG) ≥ 1 by Proposition 7 of [16] (and also by Proposition 6 below).

6.2 Comparison with VCG payments

Another measure of suitability for payment bounds is that they should not result in frugality ratios that are less than 1 for well-known truthful mechanisms. If this is indeed the case, the payment bound may be too weak, as it becomes too easy to design
mechanisms that perform well with respect to it. In particular, a reasonable requirement is that a payment bound should not exceed the total payment of the classical VCG mechanism. The following proposition shows that NTUmax(c), and therefore also NTUmin(c) and TUmin(c), do not exceed the VCG payment pVCG(c). The proof essentially follows the argument of Proposition 7 of [16] and can be found in the full version of this paper [8].

PROPOSITION 6. φNTUmax(VCG) ≥ 1.

Proposition 6 shows that none of the payment bounds TUmin(c), NTUmin(c) and NTUmax(c) exceeds the payment of VCG. However, the payment bound TUmax(c) can be larger than the total VCG payment. In particular, for the instance in Proposition 5, the VCG payment is smaller than TUmax(c) by a factor of n − 2. We have already seen that TUmax(c) ≥ n − 2. On the other hand, under VCG, the threshold bid of any Xi, i = 1, ..., n − 2, is 0: if any such vertex bids above 0, it is deleted from the winning set together with X0 and replaced with Xn−1. Similarly, the threshold bid of X0 is 1, because if X0 bids above 1, it can be replaced with Xn−1. So the VCG payment is 1. This result is not surprising: the definition of TUmax(c) implicitly assumes that there is co-operation between the agents, while the computation of VCG payments does not take into account any interaction between them. Indeed, co-operation enables the agents to extract higher payments under VCG. That is, VCG is not group-strategyproof. This suggests that, as a payment bound, TUmax(c) may be too liberal, at least in a context where there is little or no co-operation between agents. Perhaps TUmax(c) can be a good benchmark for measuring the performance of mechanisms designed for agents that can form coalitions or make side payments to each other, in particular, group-strategyproof mechanisms. Another setting in which bounding φTUmax is still of some interest is when, for the underlying problem,
the optimal allocation and VCG payments are NP-hard to compute. In this case, finding a polynomial-time computable mechanism with a good frugality ratio with respect to TUmax(c) is a non-trivial task, while bounding the frugality ratio with respect to more challenging payment bounds could be too difficult. To illustrate this point, compare the proofs of Lemma 6 and Lemma 7: both require some effort, but the latter is much more difficult than the former.

6.3 The choice of S

All payment bounds defined in this paper correspond to the total bid of all elements in the cheapest feasible set, where ties are broken lexicographically. While this definition ensures that our payment bounds are well-defined, the particular choice of the draw-resolution rule appears arbitrary, and one might wonder if our payment bounds are sufficiently robust to be independent of this choice. It turns out that this is indeed the case for NTUmin(c) and NTUmax(c), i.e., these bounds do not depend on the draw-resolution rule. To see this, suppose that two feasible sets S1 and S2 have the same cost. In the computation of NTUmin(c, S1), all elements of S1 \ S2 would have to bid their true cost, since otherwise S2 would become cheaper than S1. Hence, in any bid vector for S1, only the elements of S1 ∩ S2 can bid above cost, so any such bid vector also constitutes a valid bid vector for S2, and vice versa. A similar argument applies to NTUmax(c).

However, for TUmin(c) and TUmax(c) this is not the case. For example, consider the set system E = {e1, e2, e3, e4, e5}, F = {S1 = {e1, e2}, S2 = {e2, e3, e4}, S3 = {e4, e5}} with the costs c1 = 2, c2 = c3 = c4 = 1, c5 = 3. The cheapest sets are S1 and S2. Now TUmax(c, S1) ≤ 4, as the total bid of the elements in S1 cannot exceed the total cost of S3. On the other hand, TUmax(c, S2) ≥ 5, as we can set b2 = 3, b3 = 0, b4 = 2. Similarly, TUmin(c, S1) = 4, because the inequalities in (2) are b1 ≤ 2 and b1 + b2 ≤ 4. But TUmin(c, S2) ≤ 3, as we can set b2
= 1, b3 = 2, b4 = 0.

6.4 Negative results for NTUmin(c) and TUmin(c)

The results in [16] and our vertex-cover results are proved for the frugality ratio φNTUmin. Indeed, it can be argued that φNTUmin is the best definition of frugality ratio, because among all reasonable payment bounds (i.e., ones that are at least as large as the cost of the cheapest feasible set), it is the most demanding of the algorithm. However, NTUmin(c) is not always the easiest or the most natural payment bound to work with. In this subsection, we discuss several disadvantages of NTUmin(c) (and also TUmin(c)) as compared to NTUmax(c) and TUmax(c).

6.4.1 Nonmonotonicity

The first problem with NTUmin(c) is that it is not monotone with respect to F, i.e., it may increase when one adds a feasible set to F. (It is, however, monotone in the sense that a losing agent cannot become a winner by raising his cost.) Intuitively, a good payment bound should satisfy this monotonicity requirement, as adding a feasible set increases the competition, so it should drive the prices down. Note that this is indeed the case for NTUmax(c) and TUmax(c), since a new feasible set adds a constraint in (2), thus limiting the solution space for the respective linear program.

PROPOSITION 7. Adding a feasible set to F can increase the value of NTUmin(c) by a factor of Ω(n).

PROOF. Let E = {x, xx, y1, ..., yn, z1, ..., zn}. Set Y = {y1, ..., yn}, S = Y ∪ {x}, Ti = Y \ {yi} ∪ {zi} for i = 1, ..., n, and suppose that F = {S, T1, ..., Tn}. The costs are cx = 0, cxx = 0, cyi = 0, czi = 1 for i = 1, ..., n.
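This instance is small enough to verify mechanically. The sketch below is illustrative rather than taken from the paper: it assumes condition (2) has the form Σ_{e∈S\T} be ≤ Σ_{e∈T\S} ce for every feasible T, and condition (3) requires every element of S to occur in some tight constraint of (2), matching the forms used in the examples of this section; the helper name `feasible_bids` is ours.

```python
# Illustrative check (n = 3) that the instance above admits a feasible bid
# vector of total bid 1, witnessing NTUmin(c) <= 1 for F.

def feasible_bids(bids, costs, S, feasible_sets, eps=1e-9):
    """Check conditions (1)-(3) for a candidate bid vector on S."""
    # (1): every element of S bids at least its cost
    if any(bids[e] < costs[e] - eps for e in S):
        return False
    covered = set()  # elements of S occurring in some tight constraint
    for T in feasible_sets:
        lhs = sum(bids[e] for e in S - T)    # total bid of S \ T
        rhs = sum(costs[e] for e in T - S)   # total cost of T \ S
        if lhs > rhs + eps:                  # (2): b(S \ T) <= c(T \ S)
            return False
        if abs(lhs - rhs) <= eps:
            covered |= S - T
    return all(e in covered for e in S)      # (3): each bid is blocked

n = 3
Y = {f"y{i}" for i in range(1, n + 1)}
S = frozenset(Y | {"x"})
F = [S] + [frozenset((Y - {f"y{i}"}) | {f"z{i}"}) for i in range(1, n + 1)]
costs = {**{e: 0 for e in S | {"xx"}},
         **{f"z{i}": 1 for i in range(1, n + 1)}}

bids = {**{e: 0 for e in Y}, "x": 1}
assert feasible_bids(bids, costs, S, F)
total = sum(bids[e] for e in S)  # == 1, so NTUmin(c) <= 1 for F
```

For each Ti, condition (2) reads byi + bx ≤ czi = 1, and with this bid vector every such constraint is tight, so condition (3) holds for every element of S.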
Note that S is the cheapest feasible set. Let F′ = F ∪ {T0}, where T0 = Y ∪ {xx}. For F, the bid vector by1 = · · · = byn = 0, bx = 1 satisfies (1), (2), and (3), so NTUmin(c) ≤ 1. For F′, S is still the lexicographically-least cheapest set. Any optimal solution has bx = 0 (by the constraint in (2) involving T0). Condition (3) for yi then implies bx + byi = czi = 1, so byi = 1 and NTUmin(c) = n.

For path auctions, it has been shown [18] that NTUmin(c) is non-monotone in a slightly different sense, i.e., with respect to adding a new edge (agent) rather than a new feasible set (a team of existing agents).

REMARK 5. We can also show that NTUmin(c) is non-monotone for vertex cover. In this case, adding a new feasible set corresponds to deleting edges from the graph. It turns out that deleting a single edge can increase NTUmin(c) by a factor of n − 2; the construction is similar to that of Proposition 5.

6.4.2 NP-Hardness

Another problem with NTUmin(c, S) is that it is NP-hard to compute, even if the number of feasible sets is polynomial in n. Again, this puts it at a disadvantage compared to NTUmax(c, S) and TUmax(c, S) (see Remark 1).

THEOREM 6. Computing NTUmin(c) is NP-hard, even when the lexicographically-least feasible set S is given in the input.

PROOF. We reduce EXACT COVER BY 3-SETS (X3C) to our problem. An instance of X3C is given by a universe G = {g1, ..., gn} and a collection of subsets C1, ..., Cm, Ci ⊂ G, |Ci| = 3, where the goal is to decide whether one can cover G by n/3 of these sets. Observe that if this is indeed the case, each element of G is contained in exactly one set of the cover.

LEMMA 9. Consider a minimisation problem P of the following form: minimise Σ_{i=1,...,n} bi under the conditions (1) bi ≥ 0 for all i = 1, ..., n; (2) for any j = 1, ..., k we have Σ_{bi∈Sj} bi ≤ aj, where Sj ⊆ {b1, ...
, bn}; (3) for each bj, one of the constraints in (2) involving it is tight. For any such P, one can construct a set system S and a vector of costs c such that NTUmin(c) is the optimal solution to P.

PROOF. The construction is straightforward: there is an element of cost 0 for each bi, an element of cost aj for each aj, and the feasible solutions are {b1, ..., bn}, or any set obtained from {b1, ..., bn} by replacing the elements of Sj by aj.

By this lemma, all we have to do to prove Theorem 6 is to show how to solve X3C by using the solution to a minimisation problem of the form given in Lemma 9. We do this as follows. For each Ci, we introduce 4 variables xi, x̄i, ai, and bi. Also, for each element gj of G there is a variable dj. We use the following set of constraints:

• In (1), we have constraints xi ≥ 0, x̄i ≥ 0, ai ≥ 0, bi ≥ 0, dj ≥ 0 for all i = 1, ..., m and j = 1, ..., n.
• In (2), for all i = 1, ..., m, we have the following 5 constraints: xi + x̄i ≤ 1, xi + ai ≤ 1, x̄i + ai ≤ 1, xi + bi ≤ 1, x̄i + bi ≤ 1. Also, for all j = 1, ..., n we have a constraint of the form xi1 + · · · + xik + dj ≤ 1, where Ci1, ..., Cik are the sets that contain gj.

The goal is to minimise z = Σ_i (xi + x̄i + ai + bi) + Σ_j dj. Observe that for each j, there is only one constraint involving dj, so by condition (3) it must be tight. Consider the two constraints involving ai. One of them must be tight, and therefore xi + x̄i + ai + bi ≥ xi + x̄i + ai ≥ 1. Hence, for any feasible solution to (1)-(3) we have z ≥ m. Now, suppose that there is an exact set cover. Set dj = 0 for j = 1, ..., n.
Also, if Ci is included in this cover, set xi = 1, x̄i = ai = bi = 0; otherwise set x̄i = 1, xi = ai = bi = 0. Clearly, all inequalities in (2) are satisfied (we use the fact that each element is covered exactly once), and for each variable, one of the constraints involving it is tight. This assignment results in z = m.

Conversely, suppose there is a feasible solution with z = m. As each addend of the form xi + x̄i + ai + bi contributes at least 1, we have xi + x̄i + ai + bi = 1 for all i, and dj = 0 for all j. We will now show that for each i, either xi = 1 and x̄i = 0, or xi = 0 and x̄i = 1. For the sake of contradiction, suppose that xi = δ < 1 and x̄i = δ′ < 1. As one of the constraints involving ai must be tight, we have ai ≥ min{1 − δ, 1 − δ′}. Similarly, bi ≥ min{1 − δ, 1 − δ′}. Hence, xi + x̄i + ai + bi ≥ δ + δ′ + 2 min{1 − δ, 1 − δ′} > 1, a contradiction. To finish the proof, note that for each j = 1, ..., n we have xi1 + · · · + xik + dj = 1 and dj = 0, so the subsets that correspond to xi = 1 constitute a set cover.

REMARK 6. In the proofs of Proposition 7 and Theorem 6, all constraints in (1) are of the form be ≥ 0. Hence, the same results are true for TUmin(c).

REMARK 7. For shortest-path auctions, the size of F can be superpolynomial. However, there is a polynomial-time separation oracle for the constraints in (2) (to construct one, use any algorithm for finding shortest paths), so one can compute NTUmax(c) and TUmax(c) in polynomial time. On the other hand, recently and independently it was shown [18] that computing NTUmin(c) for shortest-path auctions is NP-hard.

7. REFERENCES

[1] A. Archer and E. Tardos, Frugal path mechanisms. In Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 991-999, 2002.
[2] R. Bar-Yehuda, K. Bendel, A. Freund, and D.
Rawitz, Local ratio: A unified framework for approximation algorithms. In Memoriam: Shimon Even 1935-2004. ACM Comput. Surv., 36(4):422-463, 2004.
[3] R. Bar-Yehuda and S. Even, A local-ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics, 25:27-46, 1985.
[4] E. Clarke, Multipart pricing of public goods. Public Choice, 8:17-33, 1971.
[5] G. Calinescu, Bounding the payment of approximate truthful mechanisms. In Proceedings of the 15th International Symposium on Algorithms and Computation, pages 221-233, 2004.
[6] A. Czumaj and A. Ronen, On the expected payment of mechanisms for task allocation. In Proceedings of the 5th ACM Conference on Electronic Commerce (EC'04), 2004.
[7] E. Elkind, True costs of cheap labor are hard to measure: edge deletion and VCG payments in graphs. In Proceedings of the 6th ACM Conference on Electronic Commerce (EC'05), 2005.
[8] E. Elkind, L. A. Goldberg, and P. W. Goldberg, Frugality ratios and improved truthful mechanisms for vertex cover. Available from http://arxiv.org/abs/cs/0606044, 2006.
[9] E. Elkind, A. Sahai, and K. Steiglitz, Frugality in path auctions. In Proceedings of the 15th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 694-702, 2004.
[10] J. Feigenbaum, C. H. Papadimitriou, R. Sami, and S. Shenker, A BGP-based mechanism for lowest-cost routing. In Proceedings of the 21st Symposium on Principles of Distributed Computing, pages 173-182, 2002.
[11] A. Fiat, A. Goldberg, J. Hartline, and A. Karlin, Competitive generalized auctions. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 72-81, 2002.
[12] R. Garg, V. Kumar, A. Rudra, and A. Verma, Coalitional games on graphs: core structures, substitutes and frugality. In Proceedings of the 4th ACM Conference on Electronic Commerce (EC'03), 2005.
[13] A. Goldberg, J. Hartline, and A.
Wright, Competitive auctions and digital goods. In Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 735-744, 2001.
[14] T. Groves, Incentives in teams. Econometrica, 41(4):617-631, 1973.
[15] N. Immorlica, D. Karger, E. Nikolova, and R. Sami, First-price path auctions. In Proceedings of the 6th ACM Conference on Electronic Commerce (EC'05), 2005.
[16] A. R. Karlin, D. Kempe, and T. Tamir, Beyond VCG: frugality of truthful mechanisms. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, pages 615-626, 2005.
[17] D. Kempe, Personal communication, 2006.
[18] N. Chen and A. R. Karlin, Cheap labor can be expensive. In Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 735-744, 2007.
[19] N. Nisan and A. Ronen, Algorithmic mechanism design. In Proceedings of the 31st Annual ACM Symposium on Theory of Computing, pages 129-140, 1999.
[20] A. Ronen and R. Talisman, Towards generic low payment mechanisms for decentralized task allocation. In Proceedings of the 7th International IEEE Conference on E-Commerce Technology, 2005.
[21] K. Talwar, The price of truth: frugality in truthful mechanisms. In Proceedings of the 20th International Symposium on Theoretical Aspects of Computer Science, 2003.
[22] W.
Vickrey, Counterspeculation, auctions, and competitive sealed tenders.\nJournal of Finance, 16:8-37, 1961 345","lvl-3":"Frugality Ratios And Improved Truthful Mechanisms for Vertex Cover *\nIn set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams.\nThe auctioneer's goal is to hire a team and pay as little as possible.\nExamples of this setting include shortest-path auctions and vertex-cover auctions.\nRecently, Karlin, Kempe and Tamir introduced a new definition offrugality ratio for this problem.\nInformally, the \"frugality ratio\" is the ratio of the total payment of a mechanism to a desired payment bound.\nThe ratio captures the extent to which the mechanism overpays, relative to perceived fair cost in a truthful auction.\nIn this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio.\nWe show that the solution quality is with a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties.\nMoreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio.\nAlso, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule.\nWe study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions.\nWe use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.\n1.\nINTRODUCTION\nIn a set system auction there is a single buyer and many vendors that can provide various services.\nIt is assumed 
that the buyer's requirements can be satisfied by various subsets of the vendors; these subsets are called the feasible sets.\nA widely-studied class of setsystem auctions is path auctions, where each vendor is able to sell access to a link in a network, and the feasible sets are those sets whose links contain a path from a given source to a given destination; the study of these auctions has been initiated in the seminal paper by Nisan and Ronen [19] (see also [1, 10, 9, 6, 15, 7, 20]).\nWe assume that each vendor has a cost of providing his services, but submits a possibly larger bid to the auctioneer.\nBased on these bids, the auctioneer selects a feasible subset of vendors, and makes payments to the vendors in this subset.\nEach selected vendor enjoys a profit of payment minus cost.\nVendors want to maximise profit, while the buyer wants to minimise the amount he pays.\nA natural goal in this setting is to design a truthful auction, in which vendors have an incentive to bid their true cost.\nThis can be achieved by paying each selected vendor a premium above her bid in such a way that the vendor has no incentive to overbid.\nAn interesting question in mechanism design is how much the auctioneer will have to overpay in order to ensure truthful bids.\nIn the context of path auctions this topic was first addressed by Archer and Tardos [1].\nThey define the frugality ratio of a mechanism as the ratio between its total payment and the cost of the cheapest path disjoint from the path selected by the mechanism.\nThey show that, for a large class of truthful mechanisms for this problem, the frugality ratio is as large as the number of edges in the shortest path.\nTalwar [21] extends this definition of frugality ratio to general set systems, and studies the frugality ratio of the classical VCG mechanism [22, 4, 14] for many specific set systems, such as minimum spanning trees and set covers.\nWhile the definition of frugality ratio proposed by [1] is wellmotivated and 
has been instrumental in studying truthful mechanisms for set systems, it is not completely satisfactory.\nConsider, for example, the graph of Figure 1 with the costs CAB = CBC =\nFigure 1: The diamond graph\ncCD = 0, cAC = cBD = 1.\nThis graph is 2-connected and the VCG payment to the winning path ABCD is bounded.\nHowever, the graph contains no A--D path that is disjoint from ABCD, and hence the frugality ratio of VCG on this graph remains undefined.\nAt the same time, there is no monopoly, that is, there is no vendor that appears in all feasible sets.\nIn auctions for other types of set systems, the requirement that there exist a feasible solution disjoint from the selected one is even more severe: for example, for vertex-cover auctions (where vendors correspond to the vertices of some underlying graph, and the feasible sets are vertex covers) the requirement means that the graph must be bipartite.\nTo deal with this problem, Karlin et al. [16] suggest a better benchmark, which is defined for any monopoly-free set system.\nThis quantity, which they denote by \u03bd, intuitively corresponds to the value of a cheapest Nash equilibrium.\nBased on this new definition, the authors construct new mechanisms for the shortest path problem and show that the overpayment of these mechanisms is within a constant factor of optimal.\n1.1 Our results\nVertex cover auctions We propose a truthful polynomial-time auction for vertex cover that outputs a solution whose cost is within a factor of 2 of optimal, and whose frugality ratio is at most 2\u0394, where \u0394 is the maximum degree of the graph (Theorem 4).\nWe complement this result by proving (Theorem 5) that for any \u0394 and n, there are graphs of maximum degree \u0394 and size \u0398 (n) for which any truthful mechanism has frugality ratio at least \u0394 \/ 2.\nThis means that the solution quality of our auction is with a factor of 2 of optimal and the frugality ratio is within a factor of 4 of the best possible bound 
for worst-case inputs.\nTo the best of our knowledge, this is the first auction for this problem that enjoys these properties.\nMoreover, we show how to transform any truthful mechanism for the vertex-cover problem into a frugal one while preserving the approximation ratio.\nFrugality ratios Our vertex cover results naturally suggest two modifications of the definition of \u03bd in [16].\nThese modifications can be made independently of each other, resulting in four different payment bounds TUmax, TUmin, NTUmax, and NTUmin, where NTUmin is equal to the original payment bound \u03bd of in [16].\nAll four payment bounds arise as Nash equilibria of certain games (see the full version of this paper [8]); the differences between them can be seen as \"the price of initiative\" and \"the price of cooperation\" (see Section 3).\nWhile our main result about vertex cover auctions (Theorem 4) is with respect to NTUmin = \u03bd, we make use of the new definitions by first comparing the payment of our mechanism to a weaker bound NTUmax, and then bootstrapping from this result to obtain the desired bound.\nInspired by this application, we embark on a further study of these payment bounds.\nOur results here are as follows: 1.\nWe observe (Proposition 1) that the four payment bounds always obey a particular order that is independent of the choice of the set system and the cost vector, namely, TUmin be.\nGiven a monotone allocation rule A and a bid vector b, the threshold bid te of an agent e E A (b) is the highest bid of this agent that still wins the auction, given that the bids of other participants remain the same.\nFormally, te = sup {b ~ e E R | e E A (b1,..., b ~ e,..., bn)}.\nIt is well known (see, e.g. 
[19, 13]) that any auction that has a monotone allocation rule and pays each agent his threshold bid is truthful; conversely, any truthful auction has a monotone allocation rule.\nThe VCG mechanism is a truthful mechanism that maximises the \"social welfare\" and pays 0 to the losing agents.\nFor set system auctions, this simply means picking a cheapest feasible set, paying each agent in the selected set his threshold bid, and paying 0 to all other agents.\nNote, however, that the VCG mechanism may be difficult to implement, since finding a cheapest feasible set may be intractable.\nIf U is a set of agents, c (U) denotes Ew \u2208 U cw.\nSimilarly, b (U) denotes Ew \u2208 U bw.\n3.\nFRUGALITY RATIOS\n4.\nCOMPARING PAYMENT BOUNDS\n4.1 Path auctions\n4.2 Connections between separation results\n4.3 Vertex-cover auctions\n4.4 Upper bounds\n5.\nTRUTHFUL MECHANISMS FOR VERTEX COVER\n5.1 Upper bound\nExtensions\n5.2 Lower bound\n6.\nPROPERTIES OF PAYMENT BOUNDS\n6.1 Comparison with total cost\n6.2 Comparison with VCG payments\n6.3 The choice of S\n6.4 Negative results for NTUmin (c) and TUmin (c)\n6.4.1 Nonmonotonicity\n6.4.2 NP-Hardness","lvl-4":"Frugality Ratios And Improved Truthful Mechanisms for Vertex Cover *\nIn set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams.\nThe auctioneer's goal is to hire a team and pay as little as possible.\nExamples of this setting include shortest-path auctions and vertex-cover auctions.\nRecently, Karlin, Kempe and Tamir introduced a new definition offrugality ratio for this problem.\nInformally, the \"frugality ratio\" is the ratio of the total payment of a mechanism to a desired payment bound.\nThe ratio captures the extent to which the mechanism overpays, relative to perceived fair cost in a truthful auction.\nIn this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio.\nWe show that the solution 
quality is with a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties.\nMoreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio.\nAlso, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule.\nWe study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions.\nWe use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.\n1.\nINTRODUCTION\nIn a set system auction there is a single buyer and many vendors that can provide various services.\nIt is assumed that the buyer's requirements can be satisfied by various subsets of the vendors; these subsets are called the feasible sets.\nWe assume that each vendor has a cost of providing his services, but submits a possibly larger bid to the auctioneer.\nBased on these bids, the auctioneer selects a feasible subset of vendors, and makes payments to the vendors in this subset.\nEach selected vendor enjoys a profit of payment minus cost.\nVendors want to maximise profit, while the buyer wants to minimise the amount he pays.\nA natural goal in this setting is to design a truthful auction, in which vendors have an incentive to bid their true cost.\nThis can be achieved by paying each selected vendor a premium above her bid in such a way that the vendor has no incentive to overbid.\nAn interesting question in mechanism design is how much the auctioneer will have to overpay in order to ensure truthful bids.\nIn the context of path auctions this topic was 
first addressed by Archer and Tardos [1].\nThey define the frugality ratio of a mechanism as the ratio between its total payment and the cost of the cheapest path disjoint from the path selected by the mechanism.\nThey show that, for a large class of truthful mechanisms for this problem, the frugality ratio is as large as the number of edges in the shortest path.\nTalwar [21] extends this definition of frugality ratio to general set systems, and studies the frugality ratio of the classical VCG mechanism [22, 4, 14] for many specific set systems, such as minimum spanning trees and set covers.\nWhile the definition of frugality ratio proposed by [1] is wellmotivated and has been instrumental in studying truthful mechanisms for set systems, it is not completely satisfactory.\nConsider, for example, the graph of Figure 1 with the costs CAB = CBC =\nFigure 1: The diamond graph\nThis graph is 2-connected and the VCG payment to the winning path ABCD is bounded.\nHowever, the graph contains no A--D path that is disjoint from ABCD, and hence the frugality ratio of VCG on this graph remains undefined.\nAt the same time, there is no monopoly, that is, there is no vendor that appears in all feasible sets.\nTo deal with this problem, Karlin et al. 
[16] suggest a better benchmark, which is defined for any monopoly-free set system.\nBased on this new definition, the authors construct new mechanisms for the shortest path problem and show that the overpayment of these mechanisms is within a constant factor of optimal.\n1.1 Our results\nVertex cover auctions We propose a truthful polynomial-time auction for vertex cover that outputs a solution whose cost is within a factor of 2 of optimal, and whose frugality ratio is at most 2\u0394, where \u0394 is the maximum degree of the graph (Theorem 4).\nWe complement this result by proving (Theorem 5) that for any \u0394 and n, there are graphs of maximum degree \u0394 and size \u0398 (n) for which any truthful mechanism has frugality ratio at least \u0394 \/ 2.\nThis means that the solution quality of our auction is with a factor of 2 of optimal and the frugality ratio is within a factor of 4 of the best possible bound for worst-case inputs.\nTo the best of our knowledge, this is the first auction for this problem that enjoys these properties.\nMoreover, we show how to transform any truthful mechanism for the vertex-cover problem into a frugal one while preserving the approximation ratio.\nFrugality ratios Our vertex cover results naturally suggest two modifications of the definition of \u03bd in [16].\nThese modifications can be made independently of each other, resulting in four different payment bounds TUmax, TUmin, NTUmax, and NTUmin, where NTUmin is equal to the original payment bound \u03bd of in [16].\nWhile our main result about vertex cover auctions (Theorem 4) is with respect to NTUmin = \u03bd, we make use of the new definitions by first comparing the payment of our mechanism to a weaker bound NTUmax, and then bootstrapping from this result to obtain the desired bound.\nInspired by this application, we embark on a further study of these payment bounds.\nOur results here are as follows: 1.\nWe observe (Proposition 1) that the four payment bounds always obey a 
particular order that is independent of the choice of the set system and the cost vector, namely, TUmin(c, S) ≤ NTUmin(c, S) ≤ NTUmax(c, S) ≤ TUmax(c, S).\nGiven a monotone allocation rule A and a bid vector b, the threshold bid te of an agent e ∈ A(b) is the highest bid of this agent that still wins the auction, given that the bids of other participants remain the same.\nFormally, te = sup{b'e ∈ R | e ∈ A(b1, ..., b'e, ..., bn)}.\nIt is well known (see, e.g., [19, 13]) that any auction that has a monotone allocation rule and pays each agent his threshold bid is truthful; conversely, any truthful auction has a monotone allocation rule.\nThe VCG mechanism is a truthful mechanism that maximises the \"social welfare\" and pays 0 to the losing agents.\nFor set system auctions, this simply means picking a cheapest feasible set, paying each agent in the selected set his threshold bid, and paying 0 to all other agents.\nNote, however, that the VCG mechanism may be difficult to implement, since finding a cheapest feasible set may be intractable.\nIf U is a set of agents, c(U) denotes Σ_{w∈U} cw.\nSimilarly, b(U) denotes Σ_{w∈U} bw.\n3.\nFRUGALITY RATIOS\nWe start by reproducing the definition of the quantity ν from [16, Definition 4].\nLet (E, F) be a set system and let S be a cheapest feasible set with respect to the true costs ce.\nThen ν(c, S) is the solution to the following optimisation problem.\nMinimise B = Σ_{e∈S} be subject to\n(1) be ≥ ce for all e ∈ E, (2) Σ_{e∈S\\T} be ≤ Σ_{e∈T\\S} ce for all T ∈ F, (3) for every e ∈ S, there is a Te ∈ F such that e ∉ Te and Σ_{e'∈S\\Te} be' = Σ_{e'∈Te\\S} ce'.\nThe resulting change in payments can be seen as \"the price of co-operation\" and corresponds to replacing condition (1) with the following weaker condition (1*): be ≥ 0 for all e ∈ E.
By considering all possible combinations of these modifications, we obtain four different payment bounds, namely\n• TUmin(c, S), which is the solution to the optimisation problem \"Minimise B\" subject to (1*), (2), and (3).\n• TUmax(c, S), which is the solution to the optimisation problem \"Maximise B\" subject to (1*), (2), and (3).\n• NTUmin(c, S), which is the solution to the optimisation problem \"Minimise B\" subject to (1), (2), and (3).\n• NTUmax(c, S), which is the solution to the optimisation problem \"Maximise B\" subject to (1), (2), and (3).\nThe abbreviations TU and NTU correspond, respectively, to transferable utility and non-transferable utility, i.e., the agents' ability/inability to make payments to each other.\nFor concreteness, we will take TUmin(c) to be TUmin(c, S) where S is the lexicographically least amongst the cheapest feasible sets.\nWe define TUmax(c), NTUmin(c), NTUmax(c) and ν(c) similarly, though we will see in Section 6.3 that, in fact, NTUmin(c, S) and NTUmax(c, S) are independent of the choice of S.
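These four optimisation problems are small enough on toy instances to be checked exhaustively. The sketch below (illustrative code, not from the paper) does this for the shortest-path instance of Example 1 in the next section, assuming Figure 1 is the graph on vertices A, B, C, D with edges AB, BC, CD, AC, BD; the optima of these small programs happen to be integral, so a brute-force search over integer bids suffices here.

```python
from itertools import product

# Path auction on the Figure-1 graph (an assumption about the figure):
# edges and true costs from Example 1.
cost = {"AB": 2, "BC": 1, "CD": 2, "AC": 5, "BD": 5}
# All feasible sets, i.e. A-to-D paths; S is the cheapest one.
feasible = [{"AB", "BC", "CD"}, {"AC", "CD"}, {"AB", "BD"}, {"AC", "BC", "BD"}]
S = ["AB", "BC", "CD"]

def bound(maximise, individually_rational):
    # Condition (2): sum of bids over S\T  <=  cost of T\S, for each feasible T.
    cons = []
    for T in feasible:
        lhs = [e for e in S if e not in T]
        if lhs:
            cons.append((lhs, sum(cost[e] for e in T - set(S))))
    best = None
    # Integer grid search; enough here because the optima are integral.
    for bids in product(range(11), repeat=len(S)):
        b = dict(zip(S, bids))
        if individually_rational and any(b[e] < cost[e] for e in S):
            continue  # condition (1); the TU bounds use the weaker (1*) instead
        if any(sum(b[e] for e in lhs) > rhs for lhs, rhs in cons):
            continue  # condition (2)
        # Condition (3): every e in S occurs in some tight constraint.
        if not all(any(e in lhs and sum(b[v] for v in lhs) == rhs
                       for lhs, rhs in cons) for e in S):
            continue
        total = sum(bids)
        if best is None or (total > best if maximise else total < best):
            best = total
    return best

TUmax = bound(True, False)
NTUmax = bound(True, True)
NTUmin = bound(False, True)
TUmin = bound(False, False)
print(TUmax, NTUmax, NTUmin, TUmin)  # matches Example 1: 10 9 7 5
```

The search reproduces the strict ordering TUmin ≤ NTUmin ≤ NTUmax ≤ TUmax on this instance, and shows all four bounds can indeed be pairwise different.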
Note that the quantity ν(c) from [16] is NTUmin(c).\nThe second modification (transferable utility) is more intuitively appealing in the context of the maximisation problem, as both assume some degree of co-operation between the agents.\nWhile the second modification can be made without the first, the resulting payment bound TUmin(c, S) is too strong to be a realistic benchmark, at least for general set systems.\nIn particular, it can be smaller than the total cost of the cheapest feasible set S (see Section 6).\nNevertheless, we provide the definition as well as some results about TUmin(c, S) in the paper, both for completeness and because we believe that it may help to understand which properties of the payment bounds are important for our proofs.\nAnother possibility would be to introduce an additional constraint Σ_{e∈S} be ≥ Σ_{e∈S} ce in the definition of TUmin(c, S) (note that this condition holds automatically for TUmax(c, S), as TUmax(c, S) ≥ NTUmax(c, S)); however, such a definition would have no direct game-theoretic interpretation, and some of our results (in particular, the ones in Section 4) would no longer be true.\nREMARK 1.\nFor the payment bounds that are derived from maximisation problems (i.e., TUmax(c, S) and NTUmax(c, S)), constraints of type (3) are redundant and can be dropped.\nHence, TUmax(c, S) and NTUmax(c, S) are solutions to linear programs, and therefore can be computed in polynomial time as long as we have a separation oracle for the constraints in (2).\nIn contrast,\nNTUmin(c, S) can be NP-hard to compute even if the size of F is polynomial (see Section 6).\nThe first and third inequalities in the following observation follow from the fact that condition (1*) is strictly weaker than condition (1).\nLet M be a truthful mechanism for (E, F).\nLet pM(c) denote the total payments of M when the actual costs are c.\nA frugality ratio of M with respect to a payment bound is the ratio between the payment of M and
this payment bound.\nIn particular, φNTUmin(M) = sup_c pM(c)/NTUmin(c), and the frugality ratios φTUmin(M), φNTUmax(M), and φTUmax(M) are defined similarly.\nWe conclude this section by showing that there exist set systems and respective cost vectors for which all four payment bounds are different.\nIn the next section, we quantify this difference, both for general set systems, and for specific types of set systems, such as path auctions or vertex cover auctions.\nEXAMPLE 1.\nConsider the shortest-path auction on the graph of Figure 1.\nThe cheapest feasible sets are all paths from A to D.\nIt can be verified, using the reasoning of Propositions 2 and 3 below, that for the cost vector cAB = cCD = 2, cBC = 1, cAC = cBD = 5, we have\n• TUmax(c) = 10 (with bAB = bCD = 5, bBC = 0), • NTUmax(c) = 9 (with bAB = bCD = 4, bBC = 1), • NTUmin(c) = 7 (with bAB = bCD = 2, bBC = 3), • TUmin(c) = 5 (with bAB = bCD = 0, bBC = 5).\n4.\nCOMPARING PAYMENT BOUNDS\n4.1 Path auctions\nWe start by showing that for path auctions any two consecutive payment bounds can differ by at least a factor of 2.\nPROPOSITION 2.\nThere is an instance of the shortest-path problem for which we have NTUmax(c)/NTUmin(c) ≥ 2.\nPROOF.\nThis construction is due to David Kempe [17].\nConsider the graph of Figure 1 with the edge costs cAB = cBC = cCD = 0, cAC = cBD = 1.\nUnder these costs, ABCD is the cheapest path.\nThe inequalities in (2) are bAB + bBC ≤ 1 and bBC + bCD ≤ 1, and the inequalities in (1) are bAB ≥ 0, bBC ≥ 0, bCD ≥ 0.\nNow, if the goal is to maximise bAB + bBC + bCD, the best choice is bAB = bCD = 1, bBC = 0, so NTUmax(c) = 2.\nOn the other hand, if the goal is to minimise bAB + bBC + bCD, one should set bAB = bCD = 0, bBC = 1, so NTUmin(c) = 1.\nPROPOSITION 3.\nThere is an instance of the shortest-path problem for which we have TUmax(c)/NTUmax(c) ≥ 2.\nPROOF.\nAgain, consider the graph of Figure 1.\nLet the edge costs be cAB = cCD = 0, cBC = 1, cAC = cBD = 1.\nABCD is the lexicographically-least cheapest path, so we can assume that S = {AB, BC, CD}.\nThe inequalities in (2) are the same as in the previous example, and by the same argument
both of them are, in fact, equalities.\nThe inequalities in (1) are bAB ≥ 0, bBC ≥ 1, bCD ≥ 0.\nOur goal is to maximise bAB + bBC + bCD.\nIf we have to respect the inequalities in (1), we have to set bAB = bCD = 0, bBC = 1, so NTUmax(c) = 1; otherwise, we can set bAB = bCD = 1, bBC = 0, so TUmax(c) ≥ 2.\nPROPOSITION 4.\nThere is an instance of the shortest-path problem for which we have NTUmin(c)/TUmin(c) ≥ 2.\nPROOF.\nThis construction is also based on the graph of Figure 1.\nThe edge costs are cAB = cCD = 1, cBC = 0, cAC = cBD = 1.\nABCD is the lexicographically least cheapest path, so we can assume that S = {AB, BC, CD}.\nAgain, the inequalities in (2) are the same, and both are, in fact, equalities.\nThe inequalities in (1) are bAB ≥ 1, bBC ≥ 0, bCD ≥ 1.\nOur goal is to minimise bAB + bBC + bCD.\nIf we have to respect the inequalities in (1), we have to set bAB = bCD = 1, bBC = 0, so NTUmin(c) = 2.\nOtherwise, we can set bAB = bCD = 0, bBC = 1, so TUmin(c) ≤ 1.\nIn Section 4.4 (Theorem 3), we show that the separation results in Propositions 2, 3, and 4 are optimal.\n4.2 Connections between separation results\nThe separation results for path auctions are obtained on the same graph using very similar cost vectors.\nIt turns out that this is not coincidental.\nNamely, we can prove the following theorem.\nTHEOREM 1.\nFor any set system (E, F), and any feasible set S, max_c TUmax(c, S)/NTUmax(c, S) = max_c NTUmax(c, S)/NTUmin(c, S) = max_c NTUmin(c, S)/TUmin(c, S), where the maximum is over all cost vectors c for which S is a cheapest feasible set.\nThe proof of the theorem follows directly from the four lemmas proved below; more precisely, the first equality in Theorem 1 is obtained by combining Lemmas 1 and 2, and the second equality is obtained by combining Lemmas 3 and 4.\nWe prove Lemma 1 here; the proofs of Lemmas 2--4 are similar and can be found in the full version of this paper [8].\nLEMMA 1.\nSuppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and TUmax(c, S)/NTUmax(c, S) = α.\nThen there is a cost vector c' such that S is a cheapest feasible set and NTUmax(c', S)/NTUmin(c', S) ≥
α.\nPROOF.\nSuppose that TUmax(c, S) = X and NTUmax(c, S) = Y, where X/Y = α.\nAssume without loss of generality that S consists of elements 1, ..., k, and let b1 = (b1_1, ..., b1_k) and b2 = (b2_1, ..., b2_k) be the bid vectors that correspond to TUmax(c, S) and NTUmax(c, S), respectively.\nConstruct the cost vector c' by setting c'_i = c_i for i ∉ S, and c'_i = min{c_i, b1_i} for i ∈ S. Clearly, S is a cheapest set under c'.\nMoreover, as the costs of elements outside of S remained the same, the right-hand sides of all constraints in (2) did not change, so any bid vector that satisfies (2) and (3) with respect to c also satisfies them with respect to c'.\nWe will construct two bid vectors b3 and b4 that satisfy conditions (1), (2), and (3) for the cost vector c', and have Σ_{i∈S} b3_i = X, Σ_{i∈S} b4_i = Y.\nAs this implies NTUmax(c', S) ≥ X and NTUmin(c', S) ≤ Y, the lemma follows.\nWe can set b3 = b1: it satisfies conditions (2) and (3) since b1 does, and b1_i ≥ min{c_i, b1_i} = c'_i, which means that b3 satisfies condition (1).\nFurthermore, we can set b4 = b2.\nAgain, b4 satisfies conditions (2) and (3) since b2 does, and since b2 satisfies condition (1), we have b2_i ≥ c_i ≥ c'_i, which means that b4 satisfies condition (1).\nFigure 2: Graph that separates payment bounds for vertex cover, n = 7\nLEMMA 2.\nSuppose c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmax(c, S)/NTUmin(c, S) = α.\nThen there is a cost vector c' such that S is a cheapest feasible set and TUmax(c', S)/NTUmax(c', S) ≥ α.\nLEMMA 3.\nSuppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmax(c, S)/NTUmin(c, S) = α.\nThen there is a cost vector c' such that S is a cheapest feasible set and NTUmin(c', S)/TUmin(c', S) ≥ α.\nLEMMA 4.\nSuppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmin(c, S)/TUmin(c, S) = α.\nThen there is a cost vector c' such that S is a cheapest feasible set and NTUmax(c', S)/NTUmin(c', S) ≥ α.\n4.3 Vertex-cover
auctions\nIn contrast to the case of path auctions, for vertex-cover auctions the gap between NTUmin(c) and NTUmax(c) (and hence between NTUmax(c) and TUmax(c), and between TUmin(c) and NTUmin(c)) can be proportional to the size of the graph.\nPROPOSITION 5.\nFor any n ≥ 3, there is an n-vertex graph and a cost vector c for which TUmax(c)/NTUmax(c) ≥ n − 2.\nPROOF.\nThe underlying graph consists of an (n−1)-clique on the vertices X1, ..., X_{n−1}, and an extra vertex X0 adjacent to X_{n−1}.\nThe costs are cX1 = cX2 = · · · = cX_{n−2} = 0, cX0 = cX_{n−1} = 1.\nWe can assume that S = {X0, X1, ..., X_{n−2}} (this is the lexicographically first vertex cover of cost 1).\nFor this set system, the constraints in (2) are bXi + bX0 ≤ 1 for i = 1, ..., n−2, so under condition (1) we must have bX0 = 1 and bXi = 0 for all i, giving NTUmax(c) = 1, whereas under the weaker condition (1*) we can set bX0 = 0 and bXi = 1 for all i, giving TUmax(c) ≥ n − 2.\nΣ_{j=1,...,k} c(Tj \\ S).\nOn the other hand, all these inequalities appear in condition (2), so they must hold for b''; i.e., Σ_{i=1,...,k} b''_i ≤ Σ_{j=1,...,k} c(Tj \\ S), and the threshold bid of v satisfies tv ≤ Σ_{u∼v} bu.\nTHEOREM 4.\nAny vertex cover auction M that has a locally optimal and monotone allocation rule and pays each agent his threshold bid has frugality ratio φNTUmin(M) ≤ 2Δ.\nTo prove Theorem 4, we first show that the total payment of any locally optimal mechanism does not exceed Δc(V).\nWe then demonstrate that NTUmin(c) ≥ c(V)/2.\nBy combining these two results, the theorem follows.\nLEMMA 5.\nConsider a graph G = (V, E) with maximum degree Δ.\nLet M be a vertex-cover auction on G that satisfies the conditions of Theorem 4.\nThen for any cost vector c, the total payment of M satisfies pM(c) ≤ Δc(V).\nPROOF.\nFirst note that any such auction is truthful, so we can assume that each agent's bid is equal to his cost.\nLet Ŝ be the vertex cover selected by M.\nThen by local optimality the threshold bid of each v ∈ Ŝ satisfies tv ≤ Σ_{u∼v} cu, so pM(c) = Σ_{v∈Ŝ} tv ≤ Σ_{v∈Ŝ} Σ_{u∼v} cu ≤ Σ_{u∈V} deg(u)·cu ≤ Δc(V).\nWe now derive a lower bound on TUmax(c); while not essential for the proof of Theorem 4, it helps us build the intuition necessary for that proof.\nLEMMA 6.\nFor a vertex cover instance G
= (V, E) in which S is a minimum vertex cover, TUmax(c, S) ≥ c(V \\ S).\nPROOF.\nFor a vertex w ∈ V \\ S with at least one neighbour in S, let d(w) denote the number of neighbours that w has in S. Consider the bid vector given by bv = Σ_{w∉S, w∼v} cw/d(w) for v ∈ S; then Σ_{v∈S} bv = c(V \\ S).\nTo finish we want to show that b is feasible in the sense that it satisfies (2).\nConsider a vertex cover T, and extend the bid vector b by assigning bv = cv for v ∉ S. Then every edge from a vertex in S \\ T to a vertex w ∉ S must have w ∈ T \\ S (the edge has to be covered by T), and each such w contributes at most d(w)·cw/d(w) = cw, so b(S \\ T) ≤ c(T \\ S).\nNext, we prove a lower bound on NTUmax(c, S); we will then use it to obtain a lower bound on NTUmin(c).\nLEMMA 7.\nFor a vertex cover instance G = (V, E) in which S is a minimum vertex cover, NTUmax(c, S) ≥ c(V \\ S).\nPROOF.\nIf c(S) ≥ c(V \\ S), by condition (1) we are done.\nTherefore, for the rest of the proof we assume that c(S) < c(V \\ S) ... y_{ij}.\nThis is because the inequalities added to L during the first j steps did not cover b'_{ij+1}.\nSee Figure 3.\nSince y_{ij+2} > y_{ij+1}, we must also have x_{ij+2} > y_{ij}: otherwise, P̂_{ij+1} would not be the \"rightmost\" constraint for b'_{ij+1}.\nTherefore, the variables in I_{ij+2} and I_{ij} do not overlap, and hence no b'_i can appear in more than two inequalities in L.\nNow we follow the argument of the proof of Theorem 2 to finish.\nBy adding up all of the (tight) inequalities in L for b'_i we obtain 2Σ_{i=1,...,k} b'_i ≥ Σ_{j=1,...,k} c(Tj \\ S).\nSince S is a vertex cover for G, no edge of E can have both of its endpoints in V \\ S, and by construction, E2 contains no edges with both endpoints in S. Therefore, the graph (V, E2) is bipartite with parts (S, V \\ S).\nSet the capacity constraints for e ∈ E_Γ as follows: a(s, v) = cv, a(w, t) = cw, a(v, w) = +∞ for all v ∈ S, w ∈ V \\ S.
Recall that a cut is a partition of the vertices in V_Γ into two sets C1 and C2 so that s ∈ C1, t ∈ C2; we denote such a cut by (C1, C2).\nLet Cmin = ({s} ∪ S' ∪ W', {t} ∪ S'' ∪ W'') be a minimum cut in Γ, where S', S'' ⊆ S and W', W'' ⊆ V \\ S. See Figure 4.\nAs cap(Cmin) = c(S'') + c(W') ≤ c(S) < c(V \\ S) = c(W') + c(W''), we have c(S'') < c(W'').\nNow, consider the network Γ'' = (V_Γ'', E_Γ''), where V_Γ'' = {s} ∪ S'' ∪ W'' ∪ {t}, E_Γ'' = {(u, v) ∈ E_Γ | u, v ∈ V_Γ''}.\nSimilarly, C'' = ({s}, S'' ∪ W'' ∪ {t}) is a minimum cut in Γ'', cap(C'') = c(S'').\nAs the size of a maximum flow from s to t is equal to the capacity of a minimum cut separating s and t, there exists a flow F = (fe)_{e∈E_Γ''} of size c(S'').\nThis flow has to saturate all edges between s and S'', i.e., f(s, v) = cv for all v ∈ S''.\nNow, increase the capacities of all edges between s and S'' to +∞.\nIn the modified network, the capacity of a minimum cut (and hence the size of a maximum flow) is c(W''), and a maximum flow F' = (f'e)_{e∈E_Γ''} can be constructed by greedily augmenting F.
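This step of the proof relies on max-flow = min-cut and on augmenting an existing flow after some capacities are raised. The routine below is a generic BFS-based augmenting-path sketch (Edmonds–Karp style), not code from the paper, applied to a toy network shaped like Γ'': the node names and capacities are illustrative assumptions.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a dict-of-dicts capacity map; returns the total flow and
    mutates cap into the residual network, so it can be augmented further later."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the path, then push flow through it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += push
        flow += push

INF = float("inf")
# Toy version of Gamma'': s -> S''-vertices (cost caps) -> W''-vertices -> t.
cap = {"s": {"a": 1, "b": 2}, "a": {"x": INF}, "b": {"x": INF, "y": INF},
       "x": {"t": 2}, "y": {"t": 3}}
f1 = max_flow(cap, "s", "t")        # saturates the s-side: c(S'') = 1 + 2 = 3
# Raise the capacities between s and the S''-side to +oo and greedily augment.
cap["s"]["a"] = cap["s"]["b"] = INF
f2 = f1 + max_flow(cap, "s", "t")   # now limited by the t-side: c(W'') = 2 + 3 = 5
print(f1, f2)
```

Because `max_flow` leaves the residual network in `cap`, the second call augments the first flow rather than recomputing it, mirroring how F' is built from F in the proof.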
Set bv = cv for all v ∈ S', and bv = f'(s, v) for all v ∈ S''.\nAs F' is constructed by augmenting F, we have bv ≥ cv for all v ∈ S, i.e., condition (1) is satisfied.\nNow, let us check that no vertex cover T ⊆ V can violate condition (2).\nSet T1 = T ∩ S', T2 = T ∩ S'', T3 = T ∩ W', T4 = T ∩ W''; our goal is to show that b(S' \\ T1) + b(S'' \\ T2) ≤ c(T3) + c(T4).\nAs T is a vertex cover, c(T) ≥ c(S) = c(T1) + c(S' \\ T1) + c(S'').\nConsequently, c(T3) ≥ c(S' \\ T1) = b(S' \\ T1).\nNow, consider the vertices in S'' \\ T2.\nAny edge in E2 that starts in one of these vertices has to end in T4 (this edge has to be covered by T, and it cannot go across the cut).\nTherefore, the total flow out of S'' \\ T2 is at most the total flow out of T4, i.e., b(S'' \\ T2) ≤ c(T4).\nLEMMA 8.\nFor a vertex cover instance G = (V, E) in which S is a minimum vertex cover, NTUmin(c, S) ≥ c(V \\ S).\nPROOF.\nSuppose for contradiction that c is a cost vector with minimum-cost vertex cover S and NTUmin(c, S) < c(V \\ S).\nLet b be a bid vector on which NTUmin(c, S) is attained, and construct the cost vector c' by setting c'_v = bv for v ∈ S and c'_v = cv for v ∉ S. Consider any bid vector b' that satisfies conditions (1)--(3) with respect to c', and suppose that b'_v > c'_v for some v ∈ S (b'_v = c'_v for v ∉ S by construction).\nAs b satisfies conditions (1)--(3), among the inequalities in (2) there is one that is tight for v and the bid vector b.\nThat is, b(S \\ T) = c(T \\ S) for some T ∈ F with v ∈ S \\ T.\nBy the construction of c', c'(S \\ T) = c'(T \\ S).\nNow since b'_w ≥ c'_w for all w ∈ S, b'_v > c'_v implies b'(S \\ T) > c'(S \\ T) = c'(T \\ S).\nBut this violates (2).\nSo we now know b' = c'.\nHence, we have NTUmax(c', S) = Σ_{v∈S} bv = NTUmin(c, S) < c(V \\ S) = c'(V \\ S), which contradicts the bound NTUmax(c', S) ≥ c'(V \\ S) which we proved in Lemma 7.\nAs NTUmin(c, S) satisfies condition (1), it follows that we have NTUmin(c, S) ≥ c(S).\nTogether with Lemma 8, this implies NTUmin(c, S) ≥ max{c(V \\ S), c(S)} ≥ c(V)/2.\nCombined with Lemma 5, this completes the proof of Theorem 4.\nREMARK 3.\nAs NTUmin(c) ... Σ_{u∼v} bu.\nNote that if a vertex u has been added to the vertex cover during this process, it means that it has a neighbour whose bid is higher than bu, so after one pass all vertices in the vertex cover satisfy bv ≤ Σ_{u∼v} bu.\nTHEOREM 5.\nFor any Δ > 0 and any n, there exists a graph G of maximum degree Δ and size N ≥ n such that for any
truthful mechanism M on G we have φNTUmin(M) ≥ Δ/2.\nPROOF.\nGiven n and Δ, set k = ⌈n/(2Δ)⌉.\nLet G be the graph that consists of k blocks B1, ..., Bk of size 2Δ each, where each Bi is a complete bipartite graph with parts Li and Ri, |Li| = |Ri| = Δ.\nWe will consider two families of cost vectors for G. Under a cost vector x ∈ X, each block Bi has one vertex of cost 1; all other vertices cost 0.\nUnder a cost vector y ∈ Y, there is one block that has two vertices of cost 1, one in each part, all other blocks have one vertex of cost 1, and all other vertices cost 0.\nClearly, |X| = (2Δ)^k, |Y| = k(2Δ)^{k−1}Δ².\nWe will now construct a bipartite graph W with the vertex set X ∪ Y as follows.\nConsider a cost vector y ∈ Y that has two vertices of cost 1 in Bi; let these vertices be v_l ∈ Li and v_r ∈ Ri.\nBy changing the cost of either of these vertices to 0, we obtain a cost vector in X. Let x_l and x_r be the cost vectors obtained by changing the cost of v_l and v_r, respectively.\nThe vertex cover chosen by M(y) must either contain all vertices in Li or it must contain all vertices in Ri.\nIn the former case, we put in W an edge from y to x_l, and in the latter case we put in W an edge from y to x_r (if the vertex cover includes all of Bi, W contains both of these edges).\nThe graph W has at least k(2Δ)^{k−1}Δ² edges, so there must exist an x ∈ X of degree at least kΔ/2.\nLet y1, ..., y_{kΔ/2} be the other endpoints of the edges incident to x, and for each i = 1, ..., kΔ/2, let vi be the vertex whose cost is different under x and yi; note that all vi are distinct.\nIt is not hard to see that NTUmin(x) ≤ k, while the total payment of M on x is at least kΔ/2, so pM(x)/NTUmin(x) ≥ Δ/2.\nREMARK 4.\nThe lower bound of Theorem 5 can be generalised to randomised mechanisms, where a randomised mechanism is considered to be truthful if it can be represented as a probability distribution over truthful mechanisms.\nIn this
case, instead of choosing the vertex x ∈ X with the highest degree, we put both (y, x_l) and (y, x_r) into W, label each edge with the probability that the respective part of the block is chosen, and pick x ∈ X with the highest weighted degree.\nThe argument can be further extended to a more permissive definition of truthfulness for randomised mechanisms, but this discussion is beyond the scope of this paper.\n6.\nPROPERTIES OF PAYMENT BOUNDS\nIn this section we consider several desirable properties of payment bounds and evaluate the four payment bounds proposed in this paper with respect to them.\nThe particular properties that we are interested in are independence of the choice of S (Section 6.3), monotonicity (Section 6.4.1), computational hardness (Section 6.4.2), and the relationship with other reasonable bounds, such as the total cost of the cheapest set (Section 6.1), or the total VCG payment (Section 6.2).\n6.1 Comparison with total cost\nOur first requirement is that a payment bound should not be less than the total cost of the selected set.\nPayment bounds are used to evaluate the performance of set-system auctions.\nThe latter have to satisfy individual rationality, i.e., the payment to each agent must be at least as large as his incurred costs; it is only reasonable to require the payment bound to satisfy the same requirement.\nClearly, NTUmax(c) and NTUmin(c) satisfy this requirement due to condition (1), and so does TUmax(c), since TUmax(c) ≥ NTUmax(c).\nHowever, TUmin(c) fails this test.\nThe example of Proposition 4 shows that for path auctions, TUmin(c) can be smaller than the total cost by a factor of 2.\nMoreover, there are set systems and cost vectors for which TUmin(c) is smaller than the cost of the cheapest set S by a factor of Ω(n).\nConsider, for example, the vertex-cover auction for the graph of Proposition 5 with the costs cX1 = · · · = cX_{n−2} = cX_{n−1} = 1, cX0 = 0.\nThe cost of a cheapest vertex
cover is n − 2, and the lexicographically first vertex cover of cost n − 2 is {X0, X1, ..., X_{n−2}}.\nThe constraints in (2) are bXi + bX0 ≤ 1 for i = 1, ..., n−2, so setting bX0 = 1 and bXi = 0 for all i satisfies (1*), (2), and (3), and hence TUmin(c) ≤ 1.\nNevertheless, φTUmin(VCG) ≥ 1, since φTUmin(VCG) ≥ φNTUmin(VCG) and φNTUmin(VCG) ≥ 1 by Proposition 7 of [16] (and also by Proposition 6 below).\n6.2 Comparison with VCG payments\nAnother measure of suitability for payment bounds is that they should not result in frugality ratios that are less than 1 for well-known truthful mechanisms.\nIf this is indeed the case, the payment bound may be too weak, as it becomes too easy to design mechanisms that perform well with respect to it.\nIn particular, a reasonable requirement is that a payment bound should not exceed the total payment of the classical VCG mechanism.\nThe following proposition shows that NTUmax(c), and therefore also NTUmin(c) and TUmin(c), do not exceed the VCG payment pVCG(c).\nThe proof essentially follows the argument of Proposition 7 of [16] and can be found in the full version of this paper [8].\nProposition 6 shows that none of the payment bounds TUmin(c), NTUmin(c) and NTUmax(c) exceeds the payment of VCG.\nHowever, the payment bound TUmax(c) can be larger than the total VCG payment.\nIn particular, for the instance in Proposition 5, the VCG payment is smaller than TUmax(c) by a factor of n − 2.\nWe have already seen that TUmax(c) ≥ n − 2.\nOn the other hand, under VCG, the threshold bid of any Xi, i = 1, ..., n − 2, is 0: if any such vertex bids above 0, it is deleted from the winning set together with X0 and replaced with X_{n−1}.\nSimilarly, the threshold bid of X0 is 1, because if X0 bids above 1, it can be replaced with X_{n−1}.\nSo the VCG payment is 1.\nThis result is not surprising: the definition of TUmax(c) implicitly assumes there is co-operation between the agents, while the computation of VCG payments does not take into account any interaction between them.\nIndeed, co-operation enables the agents to extract higher
payments under VCG.\nThat is, VCG is not group-strategyproof.\nThis suggests that as a payment bound, TUmax(c) may be too liberal, at least in a context where there is little or no co-operation between agents.\nPerhaps TUmax(c) can be a good benchmark for measuring the performance of mechanisms designed for agents that can form coalitions or make side payments to each other, in particular, group-strategyproof mechanisms.\nAnother setting in which bounding φTUmax is still of some interest is when, for the underlying problem, the optimal allocation and VCG payments are NP-hard to compute.\nIn this case, finding a polynomial-time computable mechanism with good frugality ratio with respect to TUmax(c) is a non-trivial task, while bounding the frugality ratio with respect to more challenging payment bounds could be too difficult.\nTo illustrate this point, compare the proofs of Lemma 6 and Lemma 7: both require some effort, but the latter is much more difficult than the former.\n6.3 The choice of S\nAll payment bounds defined in this paper correspond to the total bid of all elements in the cheapest feasible set, where ties are broken lexicographically.\nWhile this definition ensures that our payment bounds are well-defined, the particular choice of the draw-resolution rule appears arbitrary, and one might wonder if our payment bounds are sufficiently robust to be independent of this choice.\nIt turns out that this is indeed the case for NTUmin(c) and NTUmax(c), i.e., these bounds do not depend on the draw-resolution rule.\nTo see this, suppose that two feasible sets S1 and S2 have the same cost.\nIn the computation of NTUmin(c, S1), all vertices in S1 \\ S2 would have to bid their true cost, since otherwise S2 would become cheaper than S1.\nHence, any bid vector for S1 can only have be ≠ ce for e ∈ S1 ∩ S2, and hence constitutes a valid bid vector for S2, and vice versa.\nA similar argument applies to NTUmax(c).\nHowever, for TUmin(c) and TUmax(c) this
is not the case.\nFor example, consider the set system with the costs c1 = 2, c2 = c3 = c4 = 1, c5 = 3.\nThe cheapest sets are S1 and S2.\nNow TUmax(c, S1) ≤ 4, as the total bid of the elements in S1 cannot exceed the total cost of S3.\nOn the other hand, TUmax(c, S2) ≥ 5, as we can set b2 = 3, b3 = 0, b4 = 2.\nSimilarly, TUmin(c, S1) = 4, because the inequalities in (2) are\n6.4 Negative results for NTUmin(c) and TUmin(c)\nThe results in [16] and our vertex cover results are proved for the frugality ratio φNTUmin.\nIndeed, it can be argued that φNTUmin is the \"best\" definition of frugality ratio, because among all reasonable payment bounds (i.e., ones that are at least as large as the cost of the cheapest feasible set), it is the most demanding of the algorithm.\nHowever, NTUmin(c) is not always the easiest or the most natural payment bound to work with.\nIn this subsection, we discuss several disadvantages of NTUmin(c) (and also TUmin(c)) as compared to NTUmax(c) and TUmax(c).\n6.4.1 Nonmonotonicity\nThe first problem with NTUmin(c) is that it is not monotone with respect to F, i.e., it may increase when one adds a feasible set to F. (It is, however, monotone in the sense that a losing agent cannot become a winner by raising his cost.)\nIntuitively, a good payment bound should satisfy this monotonicity requirement, as adding a feasible set increases the competition, so it should drive the prices down.\nNote that this is indeed the case for NTUmax(c) and TUmax(c), since a new feasible set adds a constraint in (2), thus limiting the solution space for the respective linear program.\nPROPOSITION 7.\nAdding a feasible set to F can increase the value of NTUmin(c) by a factor of Ω(n).\nPROOF.\nLet E = {x, xx, y1, ..., yn, z1, ..., zn}.\nSet Y = {y1, ..., yn}, S = Y ∪ {x}, Ti = (Y \\ {yi}) ∪ {zi} for i = 1, ..., n, and suppose that F = {S, T1, ..., Tn}.\nThe costs are cx = 0, cxx = 0, c_{yi} = 0, c_{zi} = 1 for i = 1, ..., n.
Note that S is the cheapest feasible set.\nLet F' = F ∪ {T0}, where T0 = Y ∪ {xx}.\nFor F, the bid vector b_{y1} = · · · = b_{yn} = 0, bx = 1 satisfies (1), (2), and (3), so NTUmin(c) ≤ 1.\nFor F', S is still the lexicographically-least cheapest set.\nAny optimal solution has bx = 0 (by the constraint in (2) corresponding to T0).\nCondition (3) for yi implies bx + b_{yi} = c_{zi} = 1, so b_{yi} = 1 and NTUmin(c) = n. For path auctions, it has been shown [18] that NTUmin(c) is non-monotone in a slightly different sense, i.e., with respect to adding a new edge (agent) rather than a new feasible set (a team of existing agents).\nREMARK 5.\nWe can also show that NTUmin(c) is non-monotone for vertex cover.\nIn this case, adding a new feasible set corresponds to deleting edges from the graph.\nIt turns out that deleting a single edge can increase NTUmin(c) by a factor of n − 2; the construction is similar to that of Proposition 5.\n6.4.2 NP-Hardness\nAnother problem with NTUmin(c, S) is that it is NP-hard to compute even if the number of feasible sets is polynomial in n.
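Before turning to the hardness result, the nonmonotonicity construction above can be confirmed for a small instance. The sketch below (illustrative code, not from the paper; it assumes integer bids suffice for this instance, which they do) computes NTUmin by exhaustive search for n = 3, with and without the extra feasible set T0:

```python
from itertools import product

n = 3
S = ["x"] + [f"y{i}" for i in range(1, n + 1)]   # the winning set S = Y ∪ {x}
cost = {"x": 0, "xx": 0,
        **{f"y{i}": 0 for i in range(1, n + 1)},
        **{f"z{i}": 1 for i in range(1, n + 1)}}

def ntumin(feasible):
    # Condition (2): for each rival set T, sum of bids on S\T <= c(T\S).
    cons = []
    for T in feasible:
        lhs = [e for e in S if e not in T]
        if lhs:
            cons.append((lhs, sum(cost[e] for e in T if e not in S)))
    best = None
    for bids in product(range(2), repeat=len(S)):  # 0/1 bids suffice here
        b = dict(zip(S, bids))
        if any(sum(b[e] for e in lhs) > rhs for lhs, rhs in cons):   # (2)
            continue
        # Condition (1) holds trivially (all costs in S are 0); check (3):
        # every element of S must occur in some tight constraint.
        if not all(any(e in lhs and sum(b[v] for v in lhs) == rhs
                       for lhs, rhs in cons) for e in S):
            continue
        if best is None or sum(bids) < best:
            best = sum(bids)
    return best

Y = [f"y{i}" for i in range(1, n + 1)]
F = [S] + [[y for y in Y if y != f"y{i}"] + [f"z{i}"] for i in range(1, n + 1)]
before = ntumin(F)                  # bidding b_x = 1, all b_yi = 0 works
after = ntumin(F + [Y + ["xx"]])    # adding T0 = Y ∪ {xx} forces b_x = 0
print(before, after)
```

Adding the single feasible set T0 raises NTUmin from 1 to n, exactly the Ω(n) jump of Proposition 7.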
Again, this puts it at a disadvantage compared to NTUmax(c, S) and TUmax(c, S) (see Remark 1).\nTHEOREM 6.\nComputing NTUmin(c) is NP-hard, even when the lexicographically-least feasible set S is given in the input.\nPROOF.\nWe reduce EXACT COVER BY 3-SETS (X3C) to our problem.\nAn instance of X3C is given by a universe G = {g1, ..., gn} and a collection of subsets C1, ..., Cm, Ci ⊂ G, |Ci| = 3, where the goal is to decide whether one can cover G by n/3 of these sets.\nObserve that if this is indeed the case, each element of G is contained in exactly one set of the cover.\nform: Minimise Σ\nPROOF.\nThe construction is straightforward: there is an element of cost 0 for each bi, an element of cost aj for each aj; the feasible solutions are {b1, ..., bn}, or any set obtained from {b1, ..., bn} by replacing the elements in Sj by aj.\nBy this lemma, all we have to do to prove Theorem 6 is to show how to solve X3C by using the solution to a minimisation problem of the form given in Lemma 9.\nWe do this as follows.\nFor each Ci, we introduce 4 variables xi, x̄i, ai, and bi.\nAlso, for each element gj of G there is a variable dj.\nWe use the following set of constraints:\n• In (1), we have constraints xi ≥ 0, x̄i ≥ 0, ai ≥ 0, bi ≥ 0, dj ≥ 0 for all i = 1, ..., m, j = 1, ..., n. • In (2), for all i = 1, ..., m, we have the following 5 constraints: xi + x̄i ≤ 1, xi + ai ≤ 1, x̄i + ai ≤ 1, xi + bi ≤ 1, x̄i + bi ≤ 1.\nAlso, for all j = 1, ..., n we have a constraint of the form x_{i1} + · · · + x_{ik} + dj ≤ 1, where C_{i1}, ..., C_{ik} are the sets that contain gj.\nThe goal is to minimise z = Σ_i (xi + x̄i + ai + bi) + Σ_j dj.\nObserve that for each j, there is only one constraint involving dj, so by condition (3) it must be tight.\nConsider the two constraints involving ai.\nOne of them must be tight, and therefore xi + x̄i + ai + bi ≥ xi + x̄i + ai ≥ 1.\nHence, for any feasible solution to (1)--(3) we have z ≥ m.
Now, suppose that there is an exact set cover.\nSet dj = 0 for j = 1, ..., n. Also, if Ci is included in this cover, set xi = 1, x̄i = ai = bi = 0; otherwise set x̄i = 1, xi = ai = bi = 0.\nClearly, all inequalities in (2) are satisfied (we use the fact that each element is covered exactly once), and for each variable, one of the constraints involving it is tight.\nThis assignment results in z = m. Conversely, suppose there is a feasible solution with z = m.\nAs each addend of the form xi + x̄i + ai + bi contributes at least 1, we have xi + x̄i + ai + bi = 1 for all i and dj = 0 for all j.\nWe will now show that for each i, either xi = 1 and x̄i = 0, or xi = 0 and x̄i = 1.\nFor the sake of contradiction, suppose that xi = δ < 1, x̄i = δ' < 1.\nAs one of the constraints involving ai must be tight, we have ai ≥ min{1 − δ, 1 − δ'}.\nSimilarly, bi ≥ min{1 − δ, 1 − δ'}.\nHence, 1 = xi + x̄i + ai + bi ≥ δ + δ' + 2 min{1 − δ, 1 − δ'} > 1, a contradiction.\nTo finish the proof, note that for each j = 1, ..., n we have x_{i1} + · · · + x_{ik} + dj = 1 and dj = 0, so the subsets that correspond to xi = 1 constitute a set cover.\nREMARK 6.\nIn the proofs of Proposition 7 and Theorem 6 all constraints in (1) are of the form be ≥ 0.\nHence, the same results are true for TUmin(c).\nREMARK 7.\nFor shortest-path auctions, the size of F can be superpolynomial.\nHowever, there is a polynomial-time separation oracle for the constraints in (2) (to construct one, use any algorithm for finding shortest paths), so one can compute NTUmax(c) and TUmax(c) in polynomial time.\nOn the other hand, recently and independently it was shown [18] that computing NTUmin(c) for shortest-path auctions is NP-hard.","keyphrases":["frugal ratio","frugal","vertex cover","auction","bootstrap techniqu","vertex-cover auction","transfer util","consecut payment bound","monoton alloc
rule","co-oper","polynomi-time","nonmonoton"],"prmu":["P","P","P","P","P","M","U","M","M","U","U","U"]} {"id":"H-32","title":"Interesting Nuggets and Their Impact on Definitional Question Answering","abstract":"Current approaches to identifying definitional sentences in the context of Question Answering mainly involve the use of linguistic or syntactic patterns to identify informative nuggets. This is insufficient as they do not address the novelty factor that a definitional nugget must also possess. This paper proposes to address the deficiency by building a Human Interest Model from external knowledge. It is hoped that such a model will allow the computation of human interest in the sentence with respect to the topic. We compare and contrast our model with current definitional question answering models to show that interestingness plays an important factor in definitional question answering.","lvl-1":"Interesting Nuggets and Their Impact on Definitional Question Answering Kian-Wei Kor Department of Computer Science School of Computing National University of Singapore dkor@comp.nus.edu.sg Tat-Seng Chua Department of Computer Science School of Computing National University of Singapore chuats@comp.nus.edu.sg ABSTRACT Current approaches to identifying definitional sentences in the context of Question Answering mainly involve the use of linguistic or syntactic patterns to identify informative nuggets.\nThis is insufficient as they do not address the novelty factor that a definitional nugget must also possess.\nThis paper proposes to address the deficiency by building a Human Interest Model from external knowledge.\nIt is hoped that such a model will allow the computation of human interest in the sentence with respect to the topic.\nWe compare and contrast our model with current definitional question answering models to show that interestingness plays an important factor in definitional question answering.\nCategories and Subject Descriptors H.3.3 [Information Search 
and Retrieval]: Retrieval Models; H.1.2 [User\/Machine Systems]: Human Factors General Terms Algorithms, Human Factors, Experimentation 1.\nDEFINITIONAL QUESTION ANSWERING Definitional Question Answering was first introduced to the TExt Retrieval Conference Question Answering Track main task in 2003.\nThe Definition questions, also called Other questions in recent years, are defined as follows.\nGiven a question topic X, the task of a definitional QA system is akin to answering the question What is X?\nor Who is X?\n.\nThe definitional QA system is to search through a news corpus and return a set of answers that best describes the question topic.\nEach answer should be a unique topic-specific nugget that makes up one facet in the definition of the question topic.\n1.1 The Two Aspects of Topic Nuggets Officially, topic-specific answer nuggets or simply topic nuggets are described as informative nuggets.\nEach informative nugget is a sentence fragment that describes some factual information about the topic.\nDepending on the topic type and domain, this can include topic properties, relationships the topic has with some closely related entity, or events that happened to the topic.\nFrom observation of the answer set for definitional question answering from TREC 2003 to 2005, it seems that a significant number of topic nuggets cannot simply be described as informative nuggets.\nRather, these topic nuggets have a trivia-like quality associated with them.\nTypically, these are out-of-the-ordinary pieces of information about a topic that can pique a human reader's interest.\nFor this reason, we decided to define answer nuggets that can evoke human interest as interesting nuggets.\nIn essence, interesting nuggets answer the questions What is X famous for?\n, What defines X?\nor What is extraordinary about X?\n.\nWe now have two very different perspectives as to what constitutes an answer to Definition questions.\nAn answer can be some important factual information
about the topic or some novel and interesting aspect about the topic.\nThis duality of informativeness and interestingness can be clearly observed in the five vital answer nuggets for a TREC 2005 topic of George Foreman.\nCertain answer nuggets are more informative while other nuggets are more interesting in nature.\nInformative Nuggets - Was graduate of Job Corps.\n- Became oldest world champion in boxing history.\nInteresting Nuggets - Has lent his name to line of food preparation products.\n- Waved American flag after winning 1968 Olympics championship.\n- Returned to boxing after 10 yr hiatus.\nAs an African-American professional heavyweight boxer, an average human reader would find the last three nuggets about George Foreman interesting because boxers do not usually lend their names to food preparation products, nor do boxers retire for 10 years before returning to the ring and becoming the world's oldest boxing champion.\nForeman's waving of the American flag at the Olympics is interesting because the innocent action caused some African-Americans to accuse Foreman of being an Uncle Tom.\nAs seen here, interesting nuggets have some surprise factor or unique quality that makes them interesting to human readers.\n1.2 Identifying Interesting Nuggets Since the original official description for definitions comprises identifying informative nuggets, most research has focused entirely on identifying informative nuggets.\nIn this paper, we focus on exploring the properties of interesting nuggets and develop ways of identifying such interesting nuggets.\nA Human Interest Model definitional question answering system is developed with emphasis on identifying interesting nuggets in order to evaluate the impact of interesting nuggets on the performance of a definitional question answering system.\nWe further experimented with combining the Human Interest Model with a lexical pattern based definitional question answering system in order to capture both informative and
interesting nuggets.\n2.\nRELATED WORK There are currently two general methods for Definitional Question Answering.\nThe more common method uses a lexical pattern-based approach, first proposed by Blair-Goldensohn et al. [1] and Xu et al. [14].\nBoth groups predominantly used patterns such as copulas and appositives, as well as manually crafted lexico-syntactic patterns, to identify sentences that contain informative nuggets.\nFor example, Xu et al. used 40 manually defined structured patterns in their 2003 definitional question answering system.\nSince then, in an attempt to capture a wider class of informational nuggets, many such systems of increasing complexity have been created.\nA recent system by Harabagiu et al. [6] created a definitional question answering system that combines the use of 150 manually defined positive and negative patterns, named entity relations and specially crafted information extraction templates for 33 target domains.\nHere, a musician template may contain lexical patterns that identify information such as the musician's musical style, songs sung by the musician and the band, if any, that the musician belongs to.\nAs one can imagine, this is a knowledge intensive approach that requires an expert linguist to manually define all possible lexical or syntactic patterns required to identify specific types of information.\nThis process requires a lot of manual labor and expertise, and is not scalable.\nThis led to the development of the soft-pattern approach by Cui et al.
[4, 11].\nInstead of manually encoding patterns, answers to previous definitional question answering evaluations were converted into generic patterns and a probabilistic model is trained to identify such patterns in sentences.\nGiven a potential answer sentence, the probabilistic model outputs a probability that indicates how likely the sentence matches one or more patterns that the model has seen in training.\nSuch lexico-syntactic pattern approaches have been shown to be adept at identifying factual informative nuggets such as a person's birthdate or the name of a company's CEO.\nHowever, these patterns are either globally applicable to all topics or to a specific set of entities such as musicians or organizations.\nThis is in direct contrast to interesting nuggets that are highly specific to individual topics and not to a set of entities.\nFor example, the interesting nuggets for George Foreman are specific only to George Foreman and no other boxer or human being.\nTopic specificity or topic relevance is thus an important criterion that helps identify interesting nuggets.\nThis leads to the exploration of the second, relevance-based, approach that has been used in definitional question answering.\nPredominantly, this approach has been used as a backup method for identifying definitional sentences when the primary method of lexico-syntactic patterns failed to find a sufficient number of informative nuggets [1].\nA similar approach has also been used as a baseline system for TREC 2003 [14].\nMore recently, Chen et al. [3] adapted a bi-gram or bi-term language model for definitional Question Answering.\nGenerally, the relevance-based approach requires a definitional corpus that contains documents highly relevant to the topic.\nThe baseline system in TREC 2003 simply uses the topic words as its definitional corpus.\nBlair-Goldensohn et al. [1] uses a machine learner to include in the definitional corpus sentences that are likely to be definitional.\nChen et al.
[3] collect snippets from Google to build its definitional corpus.\nFrom the definitional corpus, a definitional centroid vector is built or a set of centroid words is selected.\nThis centroid vector or set of centroid words is taken to be highly indicative of the topic.\nSystems can then use this centroid to identify definitional answers by using a variety of distance metrics to compare against sentences found in the set of retrieved documents for the topic.\nBlair-Goldensohn et al. [1] uses Cosine similarity to rank sentences by centrality.\nChen et al. [3] builds a bigram language model using the 350 most frequently occurring Google snippet terms, described in their paper as an ordered centroid, to estimate the probability that a sentence is similar to the ordered centroid.\nAs described here, the relevance-based approach is highly specific to individual topics due to its dependence on a topic specific definitional corpus.\nHowever, if individual sentences are viewed as documents, then relevance-based approaches essentially use the collected topic specific centroid words as a form of document retrieval with automated query expansion to identify strongly relevant sentences.\nThus such methods identify relevant sentences and not sentences containing definitional nuggets.\nYet, the TREC 2003 baseline system [14] outperformed all but one other system.\nThe bi-term language model [3] is able to report results that are highly competitive to state-of-the-art results using this retrieval-based approach.\nAt TREC 2006, a simple weighted sum of all terms model, with terms weighted using solely Google snippets, outperformed all other systems by a significant margin [7].\nWe believe that interesting nuggets often come in the form of trivia, novel or rare facts about the topic that tend to strongly co-occur with direct mention of topic keywords.\nThis may explain why relevance-based methods can perform competitively in definitional question answering.\nHowever, simply comparing
against a single centroid vector or set of centroid words may have over-emphasized topic relevance and has only identified interesting definitional nuggets in an indirect manner.\nStill, relevance-based retrieval methods can be used as a starting point in identifying interesting nuggets.\nWe will describe how we expand upon such methods to identify interesting nuggets in the next section.\n3.\nHUMAN INTEREST MODEL Getting a computer system to identify sentences that a human reader would find interesting is a tall order.\nHowever, there are many documents on the world wide web that contain concise, human-written summaries on just about any topic.\nWhat's more, these documents are written explicitly for human beings and will contain information about the topic that most human readers would be interested in.\nAssuming we can identify such relevant documents on the web, we can leverage them to assist in identifying definitional answers to such topics.\nWe can take the assumption that most sentences found within these web documents will contain interesting facets about the topic at hand.\nThis greatly simplifies the problem to that of finding within the AQUAINT corpus sentences similar to those found in web documents.\nThis approach has been successfully used in several factoid and list Question Answering systems [11] and we feel the use of such an approach for definitional or Other question answering is justified.\nIdentifying interesting nuggets requires computing machinery to understand world knowledge and human insight.\nThis is still a very challenging task and the use of human written documents dramatically simplifies the complexity of the task.\nIn this paper, we report on such an approach by experimenting with a simple word-level, edit-distance-based weighted term comparison algorithm.\nWe use the edit distance algorithm to score the similarity of a pair of sentences, with one sentence coming from web resources and the other sentence selected from the
AQUAINT corpus.\nThrough a series of experiments, we will show that even such a simple approach can be very effective at definitional question answering.\n3.1 Web Resources There exist on the internet articles on just about any topic a human can think of.\nWhat's more, many such articles are centrally located on several prominent websites, making them an easily accessible source of world knowledge.\nFor our work on identifying interesting nuggets, we focused on finding short one or two page articles on the internet that are highly relevant to our desired topic.\nSuch articles are useful as they contain concise information about the topic.\nMore importantly, the articles are written by humans, for human readers, and thus contain the critical human world knowledge that a computer system currently is unable to capture.\nWe leverage this world knowledge by collecting articles for each topic from the following external resources to build our Interest Corpus for each topic.\nWikipedia is a Web-based, free-content encyclopedia written collaboratively by volunteers.\nThis resource has been used by many Question Answering systems as a source of knowledge about each topic.\nWe use a snapshot of Wikipedia taken in March 2006 and include the most relevant article in the Interest Corpus.\nNewsLibrary is a searchable archive of news articles from over 100 different newspaper agencies.\nFor each topic, we download the 50 most relevant articles and include the title and first paragraph of each article in the Interest Corpus.\nGoogle Snippets are retrieved by issuing the topic as a query to the Google search engine.\nFrom the search results, we extracted the top 100 snippets.\nWhile Google snippets are not articles, we find that they provide a wide coverage of authoritative information about most topics.\nDue to their comprehensive coverage of a wide variety of topics, the above resources form the bulk of our Interest Corpus.\nWe also extracted documents from other
resources.\nHowever, as these resources are more specific in nature, we do not always find a single relevant document.\nThese resources are listed below.\nBiography.com is the website for the Biography television cable channel.\nThe channel's website contains searchable biographies on over 25,000 notable people.\nIf the topic is a person and we can find a relevant biography on the person, we include it in our Interest Corpus.\nBartleby.com contains a searchable copy of several resources including the Columbia Encyclopedia, the World Factbook, and several English dictionaries.\ns9.com is a biography dictionary on over 33,000 notable people.\nLike Biography.com, we include the most relevant biography we can find in the Interest Corpus.\nGoogle Definitions The Google search engine offers a feature called Definitions that provides the definition for a query, if it has one.\nWe use this feature and extract whatever definitions the Google search engine has found for each topic into the Interest Corpus.\nFigure 1: Human Interest Model Architecture.\nWordNet WordNet is a well-known electronic semantic lexicon for the English language.\nBesides grouping English words into sets of synonyms called synsets, it also provides a short definition of the meaning of words found in each synset.\nWe add this short definition, if there is one, into our Interest Corpus.\nWe have two major uses for this topic specific Interest Corpus: as a source of sentences containing interesting nuggets and as a unigram language model of topic terms, I. 3.2 Multiple Interesting Centroids We have seen that interesting nuggets are highly specific to a topic.\nRelevance-based approaches such as the bigram language model used by Chen et al.
[3] are focused on identifying highly relevant sentences and pick up definitional answer nuggets as an indirect consequence.\nWe believe that the use of only a single collection of centroid words has over-emphasized topic relevance and choose instead to use multiple centroids.\nSince sentences in the Interest Corpus of articles we collected from the internet are likely to contain nuggets that are of interest to human readers, we can essentially use each sentence as a pseudo-centroid.\nEach sentence in the Interest Corpus essentially raises a different aspect of the topic for consideration as a sentence of interest to human readers.\nBy performing a pairwise sentence comparison between sentences in the Interest Corpus and candidate sentences retrieved from the AQUAINT corpus, we increase the number of sentence comparisons from O(n) to O(nm).\nHere, n is the number of potential candidate sentences and m is the number of sentences in the Interest Corpus.\nIn return, we obtain a diverse ranked list of answers that are individually similar to various sentences found in the topic's Interest Corpus.\nAn answer can only be highly ranked if it is strongly similar to a sentence in the Interest Corpus, and is also strongly relevant to the topic.\n3.3 Implementation Figure 1 shows the system architecture for the proposed Human Interest-based definitional QA system.\nThe AQUAINT Retrieval module shown in Figure 1 reuses a document retrieval module of a current Factoid and List Question Answering system we have implemented.\nGiven a set of words describing the topic, the AQUAINT Retrieval module does query expansion using Google and searches an index of AQUAINT documents to retrieve the 800 most relevant documents for consideration.\nThe Web Retrieval module, on the other hand, searches the online resources described in Section 3.1 for interesting documents in order to populate the Interest Corpus.\nThe HIM Ranker, or Human Interest Model Ranking module, is the implementation of
what is described in this paper.\nThe module first builds the unigram language model, I, from the collected web documents.\nThis language model will be used to weight the importance of terms within sentences.\nNext, a sentence chunker is used to segment all 800 retrieved documents into individual sentences.\nEach of these sentences can be a potential answer sentence that will be independently ranked by interestingness.\nWe rank sentences by interestingness using sentences from both the Interest Corpus of external documents as well as the unigram language model we built earlier, which we use to weight terms.\nA candidate sentence in our top 800 relevant AQUAINT documents is considered interesting if it is highly similar in content to a sentence found in our collection of external web documents.\nTo achieve this, we perform a pairwise similarity comparison between a candidate sentence and sentences in our external documents using a weighted-term edit distance algorithm.\nTerm weights are used to adjust the relative importance of each unique term found in the Interest Corpus.\nWhen both sentences share the same term, the similarity score is incremented by two times the term's weight, and every dissimilar term decrements the similarity score by that term's weight.\nWe choose the highest achieved similarity score for a candidate sentence as the Human Interest Model score for the candidate sentence.\nIn this manner, every candidate sentence is ranked by interestingness.\nFinally, to obtain the answer set, we select the top 12 highest ranked and non-redundant sentences as definitional answers for the topic.\n4.\nINITIAL EXPERIMENTS The Human Interest-based system described in the previous section is designed to identify only interesting nuggets and not informative nuggets.\nThus, it can be described as a handicapped system that only deals with half the problem in definitional question answering.\nThis is done in order to explore how interestingness plays a
factor in definitional answers.\nIn order to compare and contrast the differences between informative and interesting nuggets, we also implemented the soft-pattern bigram model proposed by Cui et al. [4, 11].\nIn order to ensure comparable results, both systems are provided identical input data.\nSince both systems require the use of external resources, they are both provided the same web articles retrieved by our Web Retrieval module.\nBoth systems also rank the same set of candidate sentences in the form of the 800 most relevant documents as retrieved by our AQUAINT Retrieval module.\nFor the experiments, we used the TREC 2004 question set to tune any system parameters and the TREC 2005 question set to test both systems.\nBoth systems are evaluated using the standard scoring methodology for TREC definitions.\nTREC provides a list of vital and okay nuggets for each question topic.\nEvery question is scored on nugget recall (NR) and nugget precision (NP) and a single final score is computed using F-Measure (see Equation 1) with β = 3 to emphasize nugget recall.\nHere, NR is the number of vital nuggets returned divided by the total number of vital nuggets, while NP is computed using a minimum allowed character length function defined in [12].\nThe evaluation is automatically conducted using Pourpre v1.0c [10].\nF-Score = ((β² + 1) × NP × NR) \/ (β² × NP + NR) (1)\nSystem F3-Score\nBest TREC 2005 System 0.2480\nSoft-Pattern (SP) 0.2872\nHuman Interest Model (HIM) 0.3031\nTable 1: Performance on TREC 2005 Question Set\nFigure 2: Performance by entity types.\n4.1 Informativeness vs Interestingness Our first experiment compares the performance of solely identifying interesting nuggets against solely identifying informative nuggets.\nWe compare the results attained by the Human Interest Model that only identifies interesting nuggets with the results of the syntactic pattern finding Soft-Pattern model as well as the result of the top performing
definitional system in TREC 2005 [13].\nTable 1 shows the F3 scores of the three systems for the TREC 2005 question set.\nThe Human Interest Model clearly outperforms both the soft-pattern model and the best TREC 2005 system with an F3 score of 0.303.\nThe result is also comparable with the result of a human manual run, which attained an F3 score of 0.299 on the same question set [9].\nThis result is confirmation that interesting nuggets do indeed play a significant role in picking up definitional answers, and may be more vital than using information finding lexical patterns.\nIn order to get a better perspective of how well the Human Interest Model performs for different types of topics, we manually divided the TREC 2005 topics into four broad categories of PERSON, ORGANIZATION, THING and EVENT as listed in Table 3.\nThese categories conform to TREC's general division of question topics into 4 main entity types [13].\nThe performance of the Human Interest Model and the Soft Pattern Bigram Model for each entity type can be seen in Figure 2.\nBoth systems exhibit consistent behavior across entity types, with the best performance coming from PERSON and ORGANIZATION topics and the worst performance from THING and EVENT topics.\nThis can mainly be attributed to our selection of web-based resources for the definitional corpus used by both systems.\nIn general, it is harder to locate a single web article that describes an event or a general object.\nHowever, given the same set of web-based information, the Human Interest Model consistently outperforms the soft-pattern model for all four entity types.\nThis suggests that the Human Interest Model is better able to leverage the information found in web resources to identify definitional answers.\n5.\nREFINEMENTS Encouraged by the initial experimental results, we explored two further optimizations of the basic algorithm.\n5.1 Weighting Interesting Terms The word trivia refers to tidbits of unimportant or uncommon information.\nAs we have noted,
interesting nuggets often have a trivia-like quality that makes them of interest to human beings.\nFrom this description of interesting nuggets and trivia, we hypothesize that interesting nuggets are likely to occur rarely in a text corpus.\nThere is a possibility that some low-frequency terms may actually be important in identifying interesting nuggets.\nA standard unigram language model would not capture these low-frequency terms as important terms.\nTo explore this possibility, we experimented with three different term weighting schemes that can provide more weight to certain low-frequency terms.\nThe weighting schemes we considered include the commonly used TFIDF, as well as the information theoretic Kullback-Leibler divergence and Jensen-Shannon divergence [8].\nTFIDF, or Term Frequency × Inverse Document Frequency, is a standard Information Retrieval weighting scheme that balances the importance of a term in a document and in a corpus.\nFor our experiments, we compute the weight of each term as tf × log(N \/ nt), where tf is the term frequency, nt is the number of sentences in the Interest Corpus having the term and N is the total number of sentences in the Interest Corpus.\nKullback-Leibler Divergence (Equation 2), also called KL Divergence or relative entropy, can be viewed as measuring the dissimilarity between two probability distributions.\nHere, we treat the AQUAINT corpus as a unigram language model of general English [15], A, and the Interest Corpus as a unigram language model consisting of topic specific terms and general English terms, I. General English words are likely to have similar distributions in both language models I and A.
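The sentence-level TFIDF weighting just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: whitespace tokenization, lowercasing, and the toy corpus are our own simplifying assumptions.

```python
import math
from collections import Counter

def tfidf_weights(interest_corpus_sentences):
    """Sentence-level TFIDF over the Interest Corpus:
    weight(t) = tf_t * log(N / n_t), where N is the number of sentences and
    n_t is the number of sentences containing term t."""
    sentences = [s.lower().split() for s in interest_corpus_sentences]
    N = len(sentences)
    tf = Counter(t for s in sentences for t in s)        # corpus-wide term frequency
    n_t = Counter(t for s in sentences for t in set(s))  # sentence "document" frequency
    return {t: tf[t] * math.log(N / n_t[t]) for t in tf}

# Toy Interest Corpus (illustrative only).
corpus = ["george foreman won the title",
          "foreman lent his name to a grill",
          "the foreman grill sold millions"]
w = tfidf_weights(corpus)
# "foreman" occurs in every sentence, so its IDF factor log(3/3) drives
# its weight to 0, while a rare term like "millions" keeps a positive weight.
```

This also illustrates the behavior discussed later in this section: with tf mostly 1 at the sentence level, the IDF factor dominates and frequent topic terms are heavily down-weighted.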
Thus using KL Divergence as a term weighting scheme will cause strong weights to be given to topic-specific terms, because they occur significantly more often or less often in the Interest Corpus than in general English.\nIn this way, high frequency centroid terms as well as low frequency rare but topic-specific terms are both identified and highly weighted using KL Divergence.\nDKL(I ‖ A) = Σt I(t) log(I(t) \/ A(t)) (2)\nDue to the power law distribution of terms in natural language, there are only a small number of very frequent terms and a large number of rare terms in both I and A.\nWhile the common terms in English consist of stop words, the common terms in the topic specific corpus, I, consist of both stop words and relevant topic words.\nThese high frequency topic specific words occur very much more frequently in I than in A.\nAs a result, we found that KL Divergence has a bias towards highly frequent topic terms, as we are measuring direct dissimilarity against a model of general English where such topic terms are very rare.\nFor this reason, we explored another divergence measure as a possible term weighting scheme.\nJensen-Shannon Divergence or JS Divergence extends upon KL Divergence as seen in Equation 3.\nAs with KL Divergence, we also use JS divergence to measure the dissimilarity between our two language models, I and A.
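Both divergence-based weightings can be sketched per term, treating each term's contribution to the divergence between I and A as its weight. This is an illustrative sketch only: whitespace tokenization, the floor-probability smoothing for unseen terms, and the toy corpora are our own assumptions, not details specified by the paper.

```python
import math
from collections import Counter

def unigram_model(sentences):
    """Maximum-likelihood unigram model over whitespace tokens (a simplifying
    assumption; the paper does not specify its tokenization)."""
    counts = Counter(t for s in sentences for t in s.lower().split())
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def kl_term_weight(t, I, A, floor=1e-9):
    """Per-term contribution I(t) * log(I(t) / A(t)) to D_KL(I || A).
    Terms unseen in a model get a small floor probability (our smoothing)."""
    i, a = I.get(t, floor), A.get(t, floor)
    return i * math.log(i / a)

def js_term_weight(t, I, A, floor=1e-9):
    """Per-term contribution to the symmetric Jensen-Shannon divergence,
    computed against the averaged model M = (I + A) / 2."""
    i, a = I.get(t, floor), A.get(t, floor)
    m = (i + a) / 2
    return 0.5 * (i * math.log(i / m) + a * math.log(a / m))

# Toy models: I stands in for the Interest Corpus, A for general English.
I = unigram_model(["foreman won the boxing title", "foreman sold a grill"])
A = unigram_model(["the market fell today", "the rain fell on the city"])
# A topic term like "foreman" is frequent in I and absent from A, so both
# schemes weight it far above a stop word like "the".
```

Note the design difference the text describes: the KL weight compares I directly against A, so frequent topic terms dominate, while the JS weight's averaged model M tempers that bias and is symmetric in I and A.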
DJS(I ‖ A) = (1\/2) [DKL(I ‖ (I+A)\/2) + DKL(A ‖ (I+A)\/2)] (3)\nFigure 3: Performance by various term weighting schemes on the Human Interest Model.\nHowever, JS Divergence has the additional properties1 of being symmetric and non-negative, as seen in Equation 4.\nThe symmetric property gives a more balanced measure of dissimilarity and avoids the bias that KL divergence has.\nDJS(I ‖ A) = DJS(A ‖ I), with DJS(I ‖ A) = 0 if I = A and DJS(I ‖ A) > 0 if I ≠ A (4)\nWe conducted another experiment, substituting the unigram language model weighting scheme we used in the initial experiments with the three term weighting schemes described above.\nAs a lower bound reference, we included a term weighting scheme consisting of a constant 1 for all terms.\nFigure 3 shows the result of applying the five different term weighting schemes on the Human Interest Model.\nTFIDF performed the worst, as we had anticipated.\nThe reason is that most terms only appear once within each sentence, resulting in a term frequency of 1 for most terms.\nThis causes the IDF component to be the main factor in scoring sentences.\nAs we are computing the Inverse Document Frequency for terms in the Interest Corpus collected from web resources, IDF heavily down-weights highly frequent topic terms and relevant terms.\nThis results in TFIDF favoring all low frequency terms over high frequency terms in the Interest Corpus.\nDespite this, the TFIDF weighting scheme only scored a slight 0.0085 lower than our lower bound reference of constant weights.\nWe view this as a positive indication that low frequency terms can indeed be useful in finding interesting nuggets.\nBoth KL and JS divergence performed marginally better than the unigram language model probabilistic scheme that we used in our initial experiments.\nFrom inspection of the weighted list of terms, we observed that while low frequency relevant terms were boosted in strength, high frequency relevant terms still dominate the top of the weighted term list.\nOnly a handful of low frequency terms were
weighted as strongly as topic keywords and, combined with their low frequency, this may have limited the impact of re-weighting such terms.\nHowever, we feel that despite this, Jensen-Shannon divergence does provide a small but measurable increase in the performance of our Human Interest Model.\n1 JS divergence also has the property of being bounded, allowing the results to be treated as a probability if required.\nHowever, the bounded property is not required here as we are only treating the divergence computed by JS divergence as term weights.\n5.2 Selecting Web Resources In one of our initial experiments, we observed that the quality of web resources included in the Interest Corpus may have a direct impact on the results we obtain.\nWe wanted to determine what impact the choice of web resources has on the performance of our Human Interest Model.\nFor this reason, we split our collection of web resources into four major groups listed here:\nN - News: Title and first paragraph of the top 50 most relevant articles found in NewsLibrary.\nW - Wikipedia: Text from the most relevant article found in Wikipedia.\nS - Snippets: Snippets extracted from the top 100 most relevant links after querying Google.\nM - Miscellaneous sources: Combination of content (when available) from secondary sources including biography.com, s9.com, bartleby.com articles, Google definitions and WordNet definitions.\nWe conducted a gamut of runs on the TREC 2005 question set using all possible combinations of the above four groups of web resources to identify the best possible combination.\nAll runs were conducted on the Human Interest Model using JS divergence as the term weighting scheme.\nThe runs were sorted in descending F3-Score and the top 3 best performing runs for each entity class are listed in Table 2, together with earlier reported F3-scores from Figure 2 as a baseline reference.\nA consistent trend can be observed for each entity class.\nFor PERSON and EVENT topics, NewsLibrary articles are the main
source of interesting nuggets, with Google snippets and miscellaneous articles offering additional supporting evidence. This seems intuitive for events, as newspapers predominantly focus on reporting breaking newsworthy events and are thus excellent sources of interesting nuggets. We had expected Wikipedia rather than news articles to be a better source of interesting facts about people, and were surprised to discover that news articles outperformed Wikipedia. We believe this is because the people selected as topics thus far have been celebrities or well-known public figures, and human readers are likely to be interested in news events that spotlight these personalities. Conversely, for ORGANIZATION and THING topics, the best source of interesting nuggets is Wikipedia's most relevant article on the topic, with Google snippets again providing additional information for organizations. With an oracle that can classify topics by entity class with 100% accuracy, and by using the best web resources for each entity class as shown in Table 2, we can attain an F3-score of 0.3158.

Table 2: Top 3 runs using different web resources for each entity class

Rank      PERSON            ORG               THING             EVENT
Baseline  N+W+S+M (0.3279)  N+W+S+M (0.3630)  N+W+S+M (0.2551)  N+W+S+M (0.2644)
1         N+S+M (0.3584)    W+S (0.3709)      W+M (0.2688)      N+M (0.2905)
2         N+S (0.3469)      N+W+S (0.3702)    W+S+M (0.2665)    N+S+M (0.2745)
3         N+M (0.3431)      N+W+S+M (0.3680)  W+S (0.2616)      N+S (0.2690)
(Baseline: unigram weighting scheme over all four resource groups.)

6. UNIFYING INFORMATIVENESS WITH INTERESTINGNESS

We have thus far been comparing the Human Interest Model against the Soft-Pattern model in order to understand the differences between interesting and informative nuggets. However, from the perspective of a human reader, both informative and interesting nuggets are useful and definitional. Informative nuggets present a general overview of the topic, while interesting nuggets give readers added depth and insight by providing novel and unique aspects of the topic. We believe that a good definitional question answering system should provide the reader with a mixture of both nugget types as a definitional answer set.

We now have two very different experts at identifying definitions. The Soft Pattern Bigram Model proposed by Cui et al. is an expert at identifying informative nuggets. The Human Interest Model we have described in this paper, on the other hand, is an expert at finding interesting nuggets. We had initially hoped to unify the two definitional question answering systems by applying an ensemble learning method [5], such as voting or boosting, in order to attain a good mixture of informative and interesting nuggets in our answer set. However, none of the ensemble learning methods we attempted could outperform our Human Interest Model. The reason is that the two systems pick up very different sentences as definitional answers; in essence, our two experts disagree on which sentences are definitional. Of the top 10 sentences from the two systems, only 4.4% appeared in both answer sets; the remaining answers were completely different. Even when we examined the top 500 sentences generated by the two systems, the agreement rate was still an extremely low 5.3%. Yet, despite the low agreement rate, each individual system is still able to attain a relatively high F3-score. There is a distinct possibility that the two systems select sentences with different syntactic structures but the same or similar semantic content. This could result in both systems having the same nuggets marked as correct even though the source answer sentences are structurally different. Unfortunately, we are unable to verify this automatically, as the evaluation software we use does not report correctly identified answer nuggets. To verify whether both systems select the same answer nuggets, we randomly selected a subset of 10 topics from the TREC 2005 question set and manually identified correct
answer nuggets (as defined by TREC assessors) from both systems. When we compared the answer nuggets found by the two systems for this subset of topics, we found that the nugget agreement rate between them was 16.6%. While the nugget agreement rate is higher than the sentence agreement rate, the two systems still generally pick up different answer nuggets. We view this as further indication that definitions are indeed made up of a mixture of informative and interesting nuggets, and that interesting and informative nuggets are, in general, quite different in nature. There are thus rational reasons and practical motivation for unifying answers from both the pattern-based and corpus-based approaches. However, the differences between the two systems also cause problems when we attempt to combine their answer sets. Currently, the best approach we have found for combining the two answer sets is to merge and re-rank them while boosting agreements. We first normalize the top 1,000 ranked sentences from each system to obtain the normalized Human Interest Model score, s_him(s), and the normalized Soft Pattern Bigram Model score, s_sp(s), for every unique sentence s.
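The normalize-and-merge step just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes min-max normalization over each system's top-ranked sentences (the paper does not specify the exact normalization used) and applies the combination rule given in Equation 5, under which a sentence returned by only one system keeps that system's normalized score, while a sentence both systems agree on is boosted.

```python
def minmax_normalize(scores):
    """Min-max normalize a dict of sentence -> raw score into [0, 1].

    Assumption: the paper only says scores are 'normalized'; min-max
    over the top-ranked list is one plausible choice.
    """
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against a constant score list
    return {s: (v - lo) / span for s, v in scores.items()}

def combine(him_scores, sp_scores):
    """Merge two systems' answer lists per Equation 5:

        Score(s) = max(s_him, s_sp) ** (1 - min(s_him, s_sp))

    A sentence found by only one system is treated as having score 0
    in the other, so the exponent is 1 and that system's score is
    retained unchanged; when both systems score the sentence highly,
    the exponent shrinks and the combined score is boosted.
    """
    him = minmax_normalize(him_scores)
    sp = minmax_normalize(sp_scores)
    combined = {}
    for s in set(him) | set(sp):
        a, b = him.get(s, 0.0), sp.get(s, 0.0)
        combined[s] = max(a, b) ** (1.0 - min(a, b))
    return combined
```

In the paper, the merged list is then further re-ranked with Maximal Marginal Relevance [2] to keep the answer set diverse; that step is omitted from this sketch.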
For each sentence, the two separate scores are then unified into a single score using Equation 5. When only one system believes that the sentence is definitional, we simply retain that system's normalized score as the unified score. When both systems agree that the sentence is definitional, the sentence's score is boosted by the degree of agreement between the two systems.

Score(s) = max(s_him(s), s_sp(s))^(1 - min(s_him(s), s_sp(s)))    (5)

In order to maintain a diverse set of answers, and to ensure that similar sentences are not ranked closely together, we further re-rank the combined list of answers using Maximal Marginal Relevance, or MMR [2]. Using the approach described here, we achieve an F3-score of 0.3081. This score is comparable to the initial Human Interest Model score of 0.3031, but fails to outperform the optimized Human Interest Model.

7. CONCLUSION

This paper has presented a novel perspective for answering definitional questions through the identification of interesting nuggets. Interesting nuggets are uncommon pieces of information about the topic that can evoke a human reader's curiosity. The notion of an average human reader is an important consideration in our approach. This is very different from the lexico-syntactic pattern approach, where the context of a human reader is not considered at all when finding answers for definitional question answering. Using this perspective, we have shown that by combining a carefully selected external corpus, matching against multiple centroids, and taking into consideration rare but highly topic-specific terms, we can build a definitional question answering module that is more focused on identifying nuggets that are of interest to human readers. Experimental results have shown that this approach can significantly outperform state-of-the-art definitional question answering systems. We further showed that at least two different types of answer nuggets are required to form a more thorough set
of definitional answers. A good set of definitional answers seems to be general information that provides a quick informative overview, mixed together with some novel or interesting aspects of the topic. Thus we feel that a good definitional question answering system needs to pick up both informative and interesting nugget types in order to provide complete definitional coverage of all important aspects of the topic. While we have attempted to build such a system by combining our proposed Human Interest Model with Cui et al.'s Soft Pattern Bigram Model, the inherent differences between the two nugget types, evidenced by the low agreement rates between the two models, have made this a difficult task. Indeed, this is natural, as the two models were designed to identify two very different types of definitional answers using very different types of features. As a result, we are currently only able to achieve a hybrid system with the same level of performance as our proposed Human Interest Model.

We approached the problem of definitional question answering from a novel perspective, with the notion that an interest factor plays a role in identifying definitional answers. Although the methods we used are simple, they have been shown experimentally to be effective. Our approach may also provide some insight into a few anomalies in past definitional question answering evaluations. For instance, the top definitional system at the recent TREC 2006 evaluation was able to significantly outperform all other systems using relatively simple unigram probabilities extracted from Google snippets. We suspect the main contributor to that system's performance is Google's PageRank algorithm: by primarily considering the number of linkages, it has the indirect effect of ranking web documents by their degree of human interest. In our future work, we seek to further improve the combined system by incorporating more evidence in support of correct definitional answers, or by filtering away obviously wrong answers.

Table 3: TREC 2005 Topics Grouped by Entity Type

ORGANIZATION: DePauw University, Merck & Co., Norwegian Cruise Lines (NCL), United Parcel Service (UPS), Little League Baseball, Cliffs Notes, American Legion, Sony Pictures Entertainment (SPE), Telefonica of Spain, Lions Club International, AMWAY, McDonald's Corporation, Harley-Davidson, U.S. Naval Academy, OPEC, NATO, International Bureau of Universal Postal Union (UPU), Organization of Islamic Conference (OIC), PBGC

PERSON: Bing Crosby, George Foreman, Akira Kurosawa, Sani Abacha, Enrico Fermi, Arnold Palmer, Woody Guthrie, Sammy Sosa, Michael Weiss, Paul Newman, Jesse Ventura, Rose Crumb, Rachel Carson, Paul Revere, Vicente Fox, Rocky Marciano, Enrico Caruso, Pope Pius XII, Kim Jong Il

THING: F16, Bollywood, Viagra, Howdy Doody Show, Louvre Museum, meteorites, Virginia wine, Counting Crows, Boston Big Dig, Chunnel, Longwood Gardens, Camp David, kudzu, U.S. Medal of Honor, tsunami, genome, Food-for-Oil Agreement, Shiite, Kinmen Island

EVENT: Russian submarine Kursk sinks, Miss Universe 2000 crowned, Port Arthur Massacre, France wins World Cup in soccer, Plane clips cable wires in Italian resort, Kip Kinkel school shooting, Crash of EgyptAir Flight 990, Preakness 1998, first 2000 Bush-Gore presidential debate, 1998 indictment and trial of Susan McDougal, return of Hong Kong to Chinese sovereignty, 1998 Nagano Olympic Games, Super Bowl XXXIV, 1999 North American International Auto Show, 1980 Mount St. Helens eruption, 1998 Baseball World Series, Hindenburg disaster, Hurricane Mitch

8. REFERENCES

[1] S. Blair-Goldensohn, K. R. McKeown, and A. H. Schlaikjer. A hybrid approach for QA track definitional questions. In TREC '03: Proceedings of the 12th Text REtrieval Conference, Gaithersburg, Maryland, 2003.

[2] J. G. Carbonell and J.
Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Research and Development in Information Retrieval, pages 335-336, 1998.

[3] Y. Chen, M. Zhou, and S. Wang. Reranking answers for definitional QA using language modeling. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 1081-1088, Sydney, Australia, July 2006. Association for Computational Linguistics.

[4] H. Cui, M.-Y. Kan, and T.-S. Chua. Generic soft pattern models for definitional question answering. In SIGIR '05: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 384-391, New York, NY, USA, 2005. ACM Press.

[5] T. G. Dietterich. Ensemble methods in machine learning. Lecture Notes in Computer Science, 1857:1-15, 2000.

[6] S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, A. Hickl, and P. Wang. Employing two question answering systems at TREC 2005. In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005.

[7] M. Kaisser, S. Scheible, and B. Webber. Experiments at the University of Edinburgh for the TREC 2006 QA track. In TREC '06 Notebook: Proceedings of the 15th Text REtrieval Conference, Gaithersburg, Maryland, 2006. National Institute of Standards and Technology.

[8] J. Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145-151, Jan 1991.

[9] J. Lin, E. Abels, D. Demner-Fushman, D. W. Oard, P. Wu, and Y. Wu. A menagerie of tracks at Maryland: HARD, Enterprise, QA, and Genomics, oh my! In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005.

[10] J. Lin and D. Demner-Fushman. Automatically evaluating answers to definition questions. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 931-938, Vancouver, British Columbia, Canada, October 2005. Association for Computational Linguistics.

[11] R. Sun, J. Jiang, Y. F. Tan, H. Cui, T.-S. Chua, and M.-Y. Kan. Using syntactic and semantic relation analysis in question answering. In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005.

[12] E. M. Voorhees. Overview of the TREC 2003 question answering track. In Text REtrieval Conference 2003, Gaithersburg, Maryland, 2003. National Institute of Standards and Technology.

[13] E. M. Voorhees. Overview of the TREC 2005 question answering track. In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005. National Institute of Standards and Technology.

[14] J. Xu, A. Licuanan, and R. Weischedel. TREC 2003 QA at BBN: Answering definitional questions. In TREC '03: Proceedings of the 12th Text REtrieval Conference, Gaithersburg, Maryland, 2003.

[15] D. Zhang and W. S.
Lee. A language modeling approach to passage question answering. In TREC '03: Proceedings of the 12th Text REtrieval Conference, Gaithersburg, Maryland, 2003.
about the topic that tend to strongly cooccur with direct mention of topic keywords.\nThis may explain why relevance-based method can perform competitively in definitional question answering.\nHowever, simply comparing against a single centroid vector or set of centroid words may have over emphasized topic relevance and has only identified interesting definitional nuggets in an indirect manner.\nStill, relevance based retrieval methods can be used as a starting point in identifying interesting nuggets.\nWe will describe how we expand upon such methods to identify interesting nuggets in the next section.\n7.\nCONCLUSION\nThis paper has presented a novel perspective for answering definitional questions through the identification of interesting nuggets.\nInteresting nuggets are uncommon pieces of information about the topic that can evoke a human reader's curiosity.\nThe notion of an\" average human reader\" is an important consideration in our approach.\nThis is very different from the lexico-syntactic pattern approach where the context of a human reader is not even considered when finding answers for definitional question answering.\nUsing this perspective, we have shown that using a combination of a carefully selected external corpus, matching against multiple centroids and taking into consideration rare but highly topic specific terms, we can build a definitional question answering module that is more focused on identifying nuggets that are of interest to human beings.\nExperimental results has shown this approach can significantly outperform state-of-the-art definitional question answering systems.\nWe further showed that at least two different types of answer nuggets are required to form a more thorough set of definitional answers.\nWhat seems to be a good set of definition answers is some general information that provides a quick informative overview mixed together with some novel or interesting aspects about the topic.\nThus we feel that a good definitional 
question answering system would need to pick up both informative and interesting nugget types in order to provide a complete definitional coverage on all important aspects of the topic.\nIndeed, this is natural as the two models have been designed to identify two very different types of definition answers using very different types of features.\nAs a result, we are currently only able to achieve a hybrid system that has the same level of performance as our proposed Human Interest Model.\nWe approached the problem of definitional question answering from a novel perspective, with the notion that interest factor plays a role in identifying definitional answers.\nAlthough the methods we used are simple, they have been shown experimentally to be effective.\nOur approach may also provide some insight into a few anomalies in past definitional question answering's trials.\nFor instance, the top definitional system at the recent TREC 2006 evaluation was able to significantly outperform all other systems using relatively simple unigram probabilities extracted from Google snippets.\nWe suspect the main contributor to the system's performance\nTable 3: TREC 2005 Topics Grouped by Entity Type\nIn our future work, we seek to further improve on the combined system by incorporating more evidence in support of correct definitional answers or to filter away obviously wrong answers.","lvl-2":"Interesting Nuggets and Their Impact on Definitional Question Answering\nABSTRACT\nCurrent approaches to identifying definitional sentences in the context of Question Answering mainly involve the use of linguistic or syntactic patterns to identify informative nuggets.\nThis is insufficient as they do not address the novelty factor that a definitional nugget must also possess.\nThis paper proposes to address the deficiency by building a \"Human Interest Model\" from external knowledge.\nIt is hoped that such a model will allow the computation of human interest in the sentence with respect to the 
topic. We compare and contrast our model with current definitional question answering models to show that interestingness is an important factor in definitional question answering.

1. DEFINITIONAL QUESTION ANSWERING

Definitional Question Answering was first introduced to the Text REtrieval Conference (TREC) Question Answering Track main task in 2003. The Definition questions, also called Other questions in recent years, are defined as follows. Given a question topic X, the task of a definitional QA system is akin to answering the question "What is X?" or "Who is X?". The definitional QA system is to search through a news corpus and return a set of answers that best describes the question topic. Each answer should be a unique topic-specific nugget that makes up one facet in the definition of the question topic.

1.1 The Two Aspects of Topic Nuggets

Officially, topic-specific answer nuggets, or simply topic nuggets, are described as "informative nuggets". Each informative nugget is a sentence fragment that describes some factual information about the topic. Depending on the topic type and domain, this can include topic properties, relationships the topic has with some closely related entity, or events that happened to the topic. From observation of the answer sets for definitional question answering from TREC 2003 to 2005, it seems that a significant number of topic nuggets cannot simply be described as informative nuggets. Rather, these topic nuggets have a trivia-like quality associated with them. Typically, these are out-of-the-ordinary pieces of information about a topic that can pique a human reader's interest. For this reason, we decided to define answer nuggets that can evoke human interest as "interesting nuggets". In essence, interesting nuggets answer the questions "What is X famous for?", "What defines X?" or "What is extraordinary about X?". We now have two very different perspectives as to what constitutes an answer to Definition questions. An answer can be some important factual information about the topic or some novel and interesting aspect of the topic. This duality of informativeness and interestingness can be clearly observed in the five vital answer nuggets for the TREC 2005 topic "George Foreman". Certain answer nuggets are more informative while other nuggets are more interesting in nature.

Informative Nuggets
- Was graduate of Job Corps.
- Became oldest world champion in boxing history.

Interesting Nuggets
- Has lent his name to line of food preparation products.
- Waved American flag after winning 1968 Olympics championship.
- Returned to boxing after 10 yr hiatus.

Since Foreman is an African-American professional heavyweight boxer, an average human reader would find the last three nuggets about him interesting, because boxers do not usually lend their names to food preparation products, nor do boxers retire for 10 years before returning to the ring to become the world's oldest boxing champion. Foreman's waving of the American flag at the Olympics is interesting because the innocent action caused some African-Americans to accuse Foreman of being an Uncle Tom. As seen here, interesting nuggets have a surprise factor or unique quality that makes them interesting to human readers.

1.2 Identifying Interesting Nuggets

Since the original official description of definitions comprises identifying informative nuggets, most research has focused entirely on identifying informative nuggets. In this paper, we focus on exploring the properties of interesting nuggets and develop ways of identifying such interesting nuggets. A "Human Interest Model" definitional question answering system is developed with an emphasis on identifying interesting nuggets, in order to evaluate the impact of interesting nuggets on the performance of a definitional question answering system. We further experimented with combining the Human Interest Model with a lexical pattern-based
definitional question answering system in order to capture both informative and interesting nuggets.

2. RELATED WORK

There are currently two general methods for Definitional Question Answering. The more common method uses a lexical pattern-based approach, first proposed by Blair-Goldensohn et al. [1] and Xu et al. [14]. Both groups predominantly used patterns such as copulas and appositives, as well as manually crafted lexico-syntactic patterns, to identify sentences that contain informative nuggets. For example, Xu et al. used 40 manually defined "structured patterns" in their 2003 definitional question answering system. Since then, in an attempt to capture a wider class of informational nuggets, many such systems of increasing complexity have been created. A recent system by Harabagiu et al. [6] combines the use of 150 manually defined positive and negative patterns, named entity relations and specially crafted information extraction templates for 33 target domains. Here, a musician template may contain lexical patterns that identify information such as the musician's musical style, songs sung by the musician and the band, if any, that the musician belongs to. As one can imagine, this is a knowledge-intensive approach that requires an expert linguist to manually define all possible lexical or syntactic patterns required to identify specific types of information. This process requires a great deal of manual labor and expertise, and is not scalable. This led to the development of the soft-pattern approach by Cui et al. [4, 11]. Instead of manually encoding patterns, answers from previous definitional question answering evaluations were converted into generic patterns, and a probabilistic model was trained to identify such patterns in sentences. Given a potential answer sentence, the probabilistic model outputs a probability that indicates how likely the sentence is to match one or more patterns that the model has seen in training. Such lexico-syntactic pattern approaches have been shown to be adept at identifying factual informative nuggets such as a person's birthdate, or the name of a company's CEO. However, these patterns are either globally applicable to all topics or to a specific set of entities such as musicians or organizations. This is in direct contrast to interesting nuggets, which are highly specific to individual topics and not to a set of entities. For example, the interesting nuggets for George Foreman are specific only to George Foreman and to no other boxer or human being. Topic specificity, or topic relevance, is thus an important criterion that helps identify interesting nuggets. This leads to the exploration of the second, relevance-based approach that has been used in definitional question answering. Predominantly, this approach has been used as a backup method for identifying definitional sentences when the primary method of lexico-syntactic patterns failed to find a sufficient number of informative nuggets [1]. A similar approach has also been used as a baseline system for TREC 2003 [14]. More recently, Chen et al. [3] adapted a bi-gram or bi-term language model for definitional Question Answering. Generally, the relevance-based approach requires a "definitional corpus" that contains documents highly relevant to the topic. The baseline system in TREC 2003 simply uses the topic words as its definitional corpus. Blair-Goldensohn et al. [1] use a machine learner to include in the definitional corpus sentences that are likely to be definitional. Chen et al.
[3] collect snippets from Google to build their definitional corpus. From the definitional corpus, a definitional centroid vector is built or a set of centroid words is selected. This centroid vector or set of centroid words is taken to be highly indicative of the topic. Systems can then use this centroid to identify definitional answers by using a variety of distance metrics to compare it against sentences found in the set of retrieved documents for the topic. Blair-Goldensohn et al. [1] use cosine similarity to rank sentences by "centrality". Chen et al. [3] build a bigram language model using the 350 most frequently occurring Google snippet terms, described in their paper as an ordered centroid, to estimate the probability that a sentence is similar to the ordered centroid. As described here, the relevance-based approach is highly specific to individual topics due to its dependence on a topic-specific definitional corpus. However, if individual sentences are viewed as documents, then relevance-based approaches essentially use the collected topic-specific centroid words as a form of document retrieval with automated query expansion to identify strongly relevant sentences. Thus such methods identify relevant sentences, not sentences containing definitional nuggets. Yet, the TREC 2003 baseline system [14] outperformed all but one other system. The bi-term language model [3] is able to report results that are highly competitive with state-of-the-art results using this retrieval-based approach. At TREC 2006, a simple weighted sum-of-all-terms model, with terms weighted using solely Google snippets, outperformed all other systems by a significant margin [7]. We believe that interesting nuggets often come in the form of trivia, novel or rare facts about the topic that tend to strongly co-occur with direct mention of topic keywords. This may explain why relevance-based methods can perform competitively in definitional question answering. However, simply comparing against a single centroid vector or set of centroid words may have over-emphasized topic relevance, identifying interesting definitional nuggets only in an indirect manner. Still, relevance-based retrieval methods can be used as a starting point for identifying interesting nuggets. We will describe how we expand upon such methods to identify interesting nuggets in the next section.

3. HUMAN INTEREST MODEL

Getting a computer system to identify sentences that a human reader would find interesting is a tall order. However, there are many documents on the World Wide Web that contain concise, human-written summaries on just about any topic. What's more, these documents are written explicitly for human beings and will contain information about the topic that most human readers would be interested in. Assuming we can identify such relevant documents on the web, we can leverage them to assist in identifying definitional answers for such topics. We make the assumption that most sentences found within these web documents will contain interesting facets of the topic at hand. This greatly simplifies the problem to that of finding, within the AQUAINT corpus, sentences similar to those found in web documents. This approach has been successfully used in several factoid and list Question Answering systems [11], and we feel the use of such an approach for definitional or "Other" question answering is justified. Identifying interesting nuggets requires computing machinery to understand world knowledge and human insight. This is still a very challenging task, and the use of human-written documents dramatically simplifies its complexity. In this paper, we report on such an approach by experimenting with a simple word-level, edit-distance-based weighted term comparison algorithm. We use the edit distance algorithm to score the similarity of a pair of sentences, with one sentence coming from web resources and the other sentence selected
from the AQUAINT corpus. Through a series of experiments, we will show that even such a simple approach can be very effective at definitional question answering.

3.1 Web Resources

There exist on the internet articles on just about any topic a human can think of. What's more, many such articles are centrally located on several prominent websites, making them an easily accessible source of world knowledge. For our work on identifying interesting nuggets, we focused on finding short one- or two-page articles on the internet that are highly relevant to our desired topic. Such articles are useful as they contain concise information about the topic. More importantly, the articles are written by humans, for human readers, and thus contain the critical human world knowledge that a computer system is currently unable to capture. We leverage this world knowledge by collecting articles for each topic from the following external resources to build our "Interest Corpus" for each topic.

Wikipedia is a Web-based, free-content encyclopedia written collaboratively by volunteers. This resource has been used by many Question Answering systems as a source of knowledge about each topic. We use a snapshot of Wikipedia taken in March 2006 and include the most relevant article in the Interest Corpus.

NewsLibrary is a searchable archive of news articles from over 100 different newspaper agencies. For each topic, we download the 50 most relevant articles and include the title and first paragraph of each article in the Interest Corpus.

Google Snippets are retrieved by issuing the topic as a query to the Google search engine. From the search results, we extracted the top 100 snippets. While Google snippets are not articles, we find that they provide wide coverage of authoritative information about most topics.

Due to their comprehensive coverage of a wide variety of topics, the above resources form the bulk of our Interest Corpus. We also extracted documents from other resources. However, as these resources are more specific in nature, we do not always obtain a relevant document from them. These resources are listed below.

Biography.com is the website for the Biography television cable channel. The channel's website contains searchable biographies on over 25,000 notable people. If the topic is a person and we can find a relevant biography on the person, we include it in our Interest Corpus.

Bartleby.com contains a searchable copy of several resources, including the Columbia Encyclopedia, the World Factbook, and several English dictionaries.

s9.com is a biographical dictionary covering over 33,000 notable people. Like Biography.com, we include the most relevant biography we can find in the Interest Corpus.

Google Definitions: the Google search engine offers a feature called "Definitions" that provides the definition for a query, if it has one. We use this feature and extract whatever definitions the Google search engine has found for each topic into the Interest Corpus.

Figure 1: Human Interest Model Architecture.

WordNet is a well-known electronic semantic lexicon for the English language. Besides grouping English words into sets of synonyms called synsets, it also provides a short definition of the meaning of the words in each synset. We add this short definition, if there is one, to our Interest Corpus.

We have two major uses for this topic-specific Interest Corpus: as a source of sentences containing interesting nuggets, and as a unigram language model of topic terms, I.

3.2 Multiple Interesting Centroids

We have seen that interesting nuggets are highly specific to a topic. Relevance-based approaches such as the bigram language model used by Chen et al.
[3] are focused on identifying highly relevant sentences, and pick up definitional answer nuggets as an indirect consequence. We believe that the use of only a single collection of centroid words has over-emphasized topic relevance, and choose instead to use multiple "centroids". Since sentences in the Interest Corpus of articles we collected from the internet are likely to contain nuggets that are of interest to human readers, we can essentially use each such sentence as a "pseudo-centroid". Each sentence in the Interest Corpus thus raises a different aspect of the topic for consideration as a sentence of interest to human readers. By performing a pairwise sentence comparison between sentences in the Interest Corpus and candidate sentences retrieved from the AQUAINT corpus, we increase the number of sentence comparisons from O(n) to O(nm). Here, n is the number of potential candidate sentences and m is the number of sentences in the Interest Corpus. In return, we obtain a diverse ranked list of answers that are individually similar to various sentences found in the topic's Interest Corpus. An answer can only be highly ranked if it is strongly similar to a sentence in the Interest Corpus and is also strongly relevant to the topic.

3.3 Implementation

Figure 1 shows the system architecture of the proposed Human Interest-based definitional QA system. The AQUAINT Retrieval module shown in Figure 1 reuses the document retrieval module of a current Factoid and List Question Answering system we have implemented. Given a set of words describing the topic, the AQUAINT Retrieval module performs query expansion using Google and searches an index of AQUAINT documents to retrieve the 800 most relevant documents for consideration. The Web Retrieval module, on the other hand, searches the online resources described in Section 3.1 for "interesting" documents in order to populate the Interest Corpus. The HIM Ranker, or Human Interest Model Ranking module, is the implementation of what is described in this paper. The module first builds the unigram language model, I, from the collected web documents. This language model is used to weight the importance of terms within sentences. Next, a sentence chunker is used to segment all 800 retrieved documents into individual sentences. Each of these sentences is a potential answer sentence that will be independently ranked by interestingness. We rank sentences by interestingness using both the sentences from the Interest Corpus of external documents and the unigram language model we built earlier, which we use to weight terms. A candidate sentence in our top 800 relevant AQUAINT documents is considered interesting if it is highly similar in content to a sentence found in our collection of external web documents. To achieve this, we perform a pairwise similarity comparison between a candidate sentence and sentences in our external documents using a weighted-term edit distance algorithm. Term weights are used to adjust the relative importance of each unique term found in the Interest Corpus. When both sentences share the same term, the similarity score is incremented by twice the term's weight, and every dissimilar term decrements the similarity score by that term's weight. We take the highest similarity score achieved by a candidate sentence as its Human Interest Model score. In this manner, every candidate sentence is ranked by interestingness. Finally, to obtain the answer set, we select the top 12 highest-ranked, non-redundant sentences as definitional answers for the topic.

4. INITIAL EXPERIMENTS

The Human Interest-based system described in the previous section is designed to identify only interesting nuggets and not informative nuggets. Thus, it can be described as a handicapped system that only deals with half the problem in definitional question answering. This is done in order to explore how
interestingness plays a role in definitional answers. In order to compare and contrast the differences between informative and interesting nuggets, we also implemented the soft-pattern bigram model proposed by Cui et al. [4, 11]. To ensure comparable results, both systems are provided identical input data. Since both systems require the use of external resources, they are both provided the same web articles retrieved by our Web Retrieval module. Both systems also rank the same set of candidate sentences, in the form of the 800 most relevant documents retrieved by our AQUAINT Retrieval module. For the experiments, we used the TREC 2004 question set to tune any system parameters and the TREC 2005 question set to test both systems. Both systems are evaluated using the standard scoring methodology for TREC definitions. TREC provides a list of vital and okay nuggets for each question topic. Every question is scored on nugget recall (NR) and nugget precision (NP), and a single final score is computed using the F-measure (Equation 1) with β = 3 to emphasize nugget recall.

F(β) = ((β² + 1) × NP × NR) / (β² × NP + NR)    (1)

Here, NR is the number of vital nuggets returned divided by the total number of vital nuggets, while NP is computed using a minimum allowed character length function defined in [12]. The evaluation is conducted automatically using Pourpre v1.0c [10].

Table 1: Performance on TREC 2005 Question Set

Figure 2: Performance by entity types.

4.1 Informativeness vs Interestingness

Our first experiment compares the performance of solely identifying interesting nuggets against solely identifying informative nuggets. We compare the results attained by the Human Interest Model, which only identifies interesting nuggets, with the results of the syntactic pattern-finding Soft-Pattern model, as well as the results of the top-performing definitional system in TREC 2005 [13]. Table 1 shows the F3 scores of the three systems on the TREC 2005 question set. The Human Interest Model clearly outperforms both the Soft-Pattern model and the best TREC 2005 system, with an F3 score of 0.303. The result is also comparable with that of a human manual run, which attained an F3 score of 0.299 on the same question set [9]. This result is confirmation that interesting nuggets do indeed play a significant role in picking up definitional answers, and may be more vital than using information-finding lexical patterns. In order to get a better perspective of how well the Human Interest Model performs for different types of topics, we manually divided the TREC 2005 topics into four broad categories of PERSON, ORGANIZATION, THING and EVENT, as listed in Table 3. These categories conform to TREC's general division of question topics into four main entity types [13]. The performance of the Human Interest Model and the Soft Pattern Bigram Model for each entity type can be seen in Figure 2. Both systems exhibit consistent behavior across entity types, with the best performance coming from PERSON and ORGANIZATION topics and the worst performance from THING and EVENT topics. This can mainly be attributed to our selection of web-based resources for the definitional corpus used by both systems. In general, it is harder to locate a single web article that describes an event or a general object. However, given the same set of web-based information, the Human Interest Model consistently outperforms the Soft-Pattern model for all four entity types. This suggests that the Human Interest Model is better able to leverage the information found in web resources to identify definitional answers.

5. REFINEMENTS

Encouraged by the initial experimental results, we explored two further optimizations of the basic algorithm.

5.1 Weighting Interesting Terms

The word trivia refers to tidbits of unimportant or uncommon information. As we have noted, interesting nuggets often have a trivia-like quality that makes them of interest to human beings. From this description of interesting nuggets and
trivia, we hypothesize that interesting nuggets are likely to occur rarely in a text corpus. There is thus a possibility that some low-frequency terms may actually be important in identifying interesting nuggets. A standard unigram language model would not capture these low-frequency terms as important terms. To explore this possibility, we experimented with three different term weighting schemes that can give more weight to certain low-frequency terms. The weighting schemes we considered include the commonly used TFIDF, as well as the information-theoretic Kullback-Leibler divergence and Jensen-Shannon divergence [8]. TFIDF, or Term Frequency × Inverse Document Frequency, is a standard Information Retrieval weighting scheme that balances the importance of a term in a document and in a corpus. For our experiments, we compute the weight of each term as tf × log(N/nt), where tf is the term frequency, nt is the number of sentences in the Interest Corpus containing the term, and N is the total number of sentences in the Interest Corpus. Kullback-Leibler Divergence (Equation 2), also called KL Divergence or relative entropy, can be viewed as measuring the dissimilarity between two probability distributions.

KL(I || A) = Σt I(t) × log(I(t) / A(t))    (2)

Here, we treat the AQUAINT corpus as a unigram language model of general English [15], A, and the Interest Corpus as a unigram language model consisting of topic-specific terms and general English terms, I. General English words are likely to have similar distributions in both language models I and A.
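As a concrete illustration, the TFIDF and KL-based term weighting schemes above, together with the weighted-term similarity scoring of Section 3.3 that consumes such weights, could be sketched as follows. This is a minimal sketch, not the authors' implementation: the function names and the smoothing constant for unseen terms are our own assumptions.

```python
import math
from collections import Counter

def tfidf_weights(interest_sentences):
    """TFIDF over the Interest Corpus, treating each sentence as a document:
    weight(t) = tf(t) * log(N / n_t), where N is the number of sentences in
    the Interest Corpus and n_t the number of sentences containing term t."""
    N = len(interest_sentences)
    tf = Counter(t for s in interest_sentences for t in s)
    df = Counter(t for s in interest_sentences for t in set(s))
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

def kl_term_weights(interest_counts, english_counts, eps=1e-9):
    """Per-term contribution to KL(I || A): I(t) * log(I(t) / A(t)), where I
    is the Interest Corpus unigram model and A the general-English (AQUAINT)
    model. The eps smoothing for terms unseen in A is our own assumption."""
    i_total = sum(interest_counts.values())
    a_total = sum(english_counts.values())
    weights = {}
    for t, c in interest_counts.items():
        p_i = c / i_total
        p_a = (english_counts.get(t, 0) + eps) / (a_total + eps)
        weights[t] = p_i * math.log(p_i / p_a)
    return weights

def similarity(candidate, interest_sentence, weights):
    """Weighted term overlap from Section 3.3: each shared term adds twice
    its weight; each term present in only one sentence subtracts its weight."""
    cand, ref = set(candidate), set(interest_sentence)
    score = sum(2 * weights.get(t, 0.0) for t in cand & ref)
    score -= sum(weights.get(t, 0.0) for t in cand ^ ref)
    return score

def him_score(candidate, interest_sentences, weights):
    """Human Interest Model score: the best similarity achieved against any
    sentence (pseudo-centroid) in the Interest Corpus."""
    return max(similarity(candidate, s, weights) for s in interest_sentences)
```

Candidate sentences from the retrieved AQUAINT documents would then be ranked by `him_score`, and the top non-redundant sentences returned as answers.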
Thus, using KL Divergence as a term weighting scheme will cause strong weights to be given to topic-specific terms, because they occur significantly more often or less often in the Interest Corpus than in general English.\nIn this way, high-frequency centroid terms as well as rare but topic-specific low-frequency terms are both identified and highly weighted using KL Divergence.\nDue to the power law distribution of terms in natural language, there are only a small number of very frequent terms and a large number of rare terms in both I and A.\nWhile the common terms in English consist of stop words, the common terms in the topic-specific corpus, I, consist of both stop words and relevant topic words.\nThese high-frequency topic-specific words occur much more frequently in I than in A.\nAs a result, we found that KL Divergence has a bias towards highly frequent topic terms, as we are measuring direct dissimilarity against a model of general English in which such topic terms are very rare.\nFor this reason, we explored another divergence measure as a possible term weighting scheme.\nJensen-Shannon Divergence, or JS Divergence, extends upon KL Divergence as seen in Equation 3.\nAs with KL Divergence, we also use JS divergence to measure the dissimilarity between our two language models, I and A.\nFigure 3: Performance by various term weighting schemes on the Human Interest Model.\nHowever, JS Divergence has the additional properties1 of being symmetric and non-negative, as seen in Equation 4.\nThe symmetric property gives a more balanced measure of dissimilarity and avoids the bias that KL divergence has.\nWe conducted another experiment, substituting the unigram language model weighting scheme we used in the initial experiments with the three term weighting schemes described above.\nAs a lower-bound reference, we included a term weighting scheme consisting of a constant 1 for all terms.\nFigure 3 shows the result of applying the five different term weighting 
schemes on the Human Interest Model.\nTFIDF performed the worst, as we had anticipated.\nThe reason is that most terms appear only once within each sentence, resulting in a term frequency of 1 for most terms.\nThis causes the IDF component to be the main factor in scoring sentences.\nAs we are computing the Inverse Document Frequency for terms in the Interest Corpus collected from web resources, IDF heavily down-weights highly frequent topic terms and relevant terms.\nThis results in TFIDF favoring all low-frequency terms over high-frequency terms in the Interest Corpus.\nDespite this, the TFIDF weighting scheme scored only 0.0085 lower than our lower-bound reference of constant weights.\nWe view this as a positive indication that low-frequency terms can indeed be useful in finding interesting nuggets.\nBoth KL and JS divergence performed marginally better than the unigram language model probability scheme that we used in our initial experiments.\nFrom inspection of the weighted list of terms, we observed that while low-frequency relevant terms were boosted in strength, high-frequency relevant terms still dominate the top of the weighted term list.\nOnly a handful of low-frequency terms were weighted as strongly as topic keywords, which, combined with their low frequency, may have limited the impact of re-weighting such terms.\nHowever, we feel that despite this, Jensen-Shannon divergence does provide a small but measurable increase in the performance of our Human Interest Model.\n1 JS divergence also has the property of being bounded, allowing the results to be treated as a probability if required.\nHowever, the bounded property is not required here, as we are only treating the divergence computed by JS divergence as term weights.\n5.2 Selecting Web Resources\nIn one of our initial experiments, we observed that the quality of web resources included in the Interest Corpus may have a direct impact on the results we obtain.\nWe wanted to determine what impact the 
choice of web resources has on the performance of our Human Interest Model.\nFor this reason, we split our collection of web resources into the four major groups listed here:\nN - News: Title and first paragraph of the top 50 most relevant articles found in NewsLibrary.\nW - Wikipedia: Text from the most relevant article found in Wikipedia.\nS - Snippets: Snippets extracted from the top 100 most relevant links after querying Google.\nM - Miscellaneous sources: Combination of content (when available) from secondary sources including biography.com, s9.com, bartleby.com articles, Google definitions and WordNet definitions.\nWe conducted a gamut of runs on the TREC 2005 question set using all possible combinations of the above four groups of web resources to identify the best possible combination.\nAll runs were conducted on the Human Interest Model using JS divergence as the term weighting scheme.\nThe runs were sorted in descending F3-score, and the top 3 best performing runs for each entity class are listed in Table 2 together with the earlier reported F3-scores from Figure 2 as a baseline reference.\nA consistent trend can be observed for each entity class.\nFor PERSON and EVENT topics, NewsLibrary articles are the main source of interesting nuggets, with Google snippets and miscellaneous articles offering additional supporting evidence.\nThis seems intuitive for events, as newspapers predominantly focus on reporting breaking newsworthy events and are thus excellent sources of interesting nuggets.\nWe had expected Wikipedia rather than news articles to be a better source of interesting facts about people, and were surprised to discover that news articles outperformed Wikipedia.\nWe believe the reason is that the people selected as topics thus far have been celebrities or well-known public figures.\nHuman readers are likely to be interested in news events that spotlight these personalities.\nConversely, for ORGANIZATION and THING topics, the best source of interesting nuggets 
comes from Wikipedia's most relevant article on the topic, with Google snippets again providing additional information for organizations.\nWith an oracle that can classify topics by entity class with 100% accuracy, and by using the best web resources for each entity class as shown in Table 2, we can attain an F3-score of 0.3158.\n6.\nUNIFYING INFORMATIVENESS WITH INTERESTINGNESS\nWe have thus far been comparing the Human Interest Model against the Soft-Pattern model in order to understand the differences between interesting and informative nuggets.\nHowever, from the perspective of a human reader, both informative and interesting nuggets are useful and definitional.\nInformative nuggets present a general overview of the topic, while interesting nuggets give readers added depth and insight by providing novel and unique aspects of the topic.\nWe believe that a good definitional question answering system should provide the reader with a combined mixture of both nugget types as a definitional answer set.\nTable 2: Top 3 runs using different web resources for each entity class\nWe now have two very different \"experts\" at identifying definitions.\nThe Soft Pattern Bigram Model proposed by Cui et al. 
is an expert in identifying informative nuggets.\nThe Human Interest Model we have described in this paper, on the other hand, is an expert in finding interesting nuggets.\nWe had initially hoped to unify the two separate definitional question answering systems by applying an ensemble learning method [5] such as voting or boosting in order to attain a good mixture of informative and interesting nuggets in our answer set.\nHowever, none of the ensemble learning methods we attempted could outperform our Human Interest Model.\nThe reason is that both systems are picking up very different sentences as definitional answers.\nIn essence, our two experts are disagreeing on which sentences are definitional.\nIn the top 10 sentences from both systems, only 4.4% of these sentences appeared in both answer sets.\nThe remaining answers were completely different.\nEven when we examined the top 500 sentences generated by both systems, the agreement rate was still an extremely low 5.3%.\nYet, despite the low agreement rate between both systems, each individual system is still able to attain a relatively high F3 score.\nThere is a distinct possibility that each system may be selecting different sentences with different syntactic structures that actually have the same or similar semantic content.\nThis could result in both systems having the same nuggets marked as correct even though the source answer sentences are structurally different.\nUnfortunately, we are unable to automatically verify this, as the evaluation software we are using does not report correctly identified answer nuggets.\nTo verify whether both systems are selecting the same answer nuggets, we randomly selected a subset of 10 topics from the TREC 2005 question set and manually identified correct answer nuggets (as defined by TREC assessors) from both systems.\nWhen we compared the answer nuggets found by both systems for this subset of topics, we found that the nugget agreement rate between both systems was 16.6%.\nWhile the 
nugget agreement rate is higher than the sentence agreement rate, both systems are generally still picking up different answer nuggets.\nWe view this as further indication that definitions are indeed made up of a mixture of informative and interesting nuggets.\nIt is also an indication that, in general, interesting and informative nuggets are quite different in nature.\nThere are thus rational reasons and practical motivation for unifying answers from both the pattern-based and corpus-based approaches.\nHowever, the differences between the two systems also cause issues when we attempt to combine both answer sets.\nCurrently, the best approach we found for combining both answer sets is to merge and re-rank both answer sets while boosting agreements.\nWe first normalize the top 1,000 ranked sentences from each system to obtain the Normalized Human Interest Model score, him(s), and the Normalized Soft Pattern Bigram Model score, sp(s), for every unique sentence, s.\nFor each sentence, the two separate scores are then unified into a single score using Equation 5.\nWhen only one system believes that the sentence is definitional, we simply retain that system's normalized score as the unified score.\nWhen both systems agree that the sentence is definitional, the sentence's score is boosted by the degree of agreement between both systems.\nIn order to maintain a diverse set of answers, as well as to ensure that similar sentences are not given similar rankings, we further re-rank our combined list of answers using Maximal Marginal Relevance, or MMR [2].\nUsing the approach described here, we achieve an F3 score of 0.3081.\nThis score is equivalent to the initial Human Interest Model score of 0.3031 but fails to outperform the optimized Human Interest Model.\n7.\nCONCLUSION\nThis paper has presented a novel perspective for answering definitional questions through the identification of interesting nuggets.\nInteresting nuggets are uncommon pieces of 
information about the topic that can evoke a human reader's curiosity.\nThe notion of an \"average human reader\" is an important consideration in our approach.\nThis is very different from the lexico-syntactic pattern approach, where the context of a human reader is not even considered when finding answers for definitional question answering.\nUsing this perspective, we have shown that with a combination of a carefully selected external corpus, matching against multiple centroids, and taking into consideration rare but highly topic-specific terms, we can build a definitional question answering module that is more focused on identifying nuggets that are of interest to human beings.\nExperimental results have shown that this approach can significantly outperform state-of-the-art definitional question answering systems.\nWe further showed that at least two different types of answer nuggets are required to form a more thorough set of definitional answers.\nWhat seems to be a good set of definitional answers is some general information that provides a quick informative overview, mixed together with some novel or interesting aspects of the topic.\nThus, we feel that a good definitional question answering system would need to pick up both informative and interesting nugget types in order to provide complete definitional coverage of all important aspects of the topic.\nWhile we have attempted to build such a system by combining our proposed Human Interest Model with Cui et al.'s Soft Pattern Bigram Model, the inherent differences between the two types of nuggets, evidenced by the low agreement rates between the two models, have made this a difficult task.\nIndeed, this is natural, as the two models have been designed to identify two very different types of definition answers using very different types of features.\nAs a result, we are currently only able to achieve a hybrid system that has the same level of performance as our proposed Human Interest Model.\nWe approached the 
problem of definitional question answering from a novel perspective, with the notion that the interest factor plays a role in identifying definitional answers.\nAlthough the methods we used are simple, they have been shown experimentally to be effective.\nOur approach may also provide some insight into a few anomalies in past definitional question answering trials.\nFor instance, the top definitional system at the recent TREC 2006 evaluation was able to significantly outperform all other systems using relatively simple unigram probabilities extracted from Google snippets.\nWe suspect the main contributor to the system's performance is Google's PageRank algorithm, which, by mainly considering the number of linkages, has the indirect effect of ranking web documents by their degree of human interest.\nTable 3: TREC 2005 Topics Grouped by Entity Type\nIn our future work, we seek to further improve on the combined system by incorporating more evidence in support of correct definitional answers, or by filtering away obviously wrong answers.","keyphrases":["interest","interest nugget","definit question answer","inform nugget","human interest","extern knowledg","linguist us","human interest comput","new corpu","question topic","sentenc fragment","human reader","uniqu qualiti","surpris factor","lexic pattern","manual labor","baselin system"],"prmu":["P","P","P","P","P","P","M","R","U","R","M","M","U","M","M","U","U"]} {"id":"H-26","title":"A Support Vector Method for Optimizing Average Precision","abstract":"Machine learning is commonly used to improve ranked retrieval systems. Due to computational difficulties, few learning techniques have been developed to directly optimize for mean average precision (MAP), despite its widespread use in evaluating such systems. Existing approaches optimizing MAP either do not find a globally optimal solution, or are computationally expensive. 
In contrast, we present a general SVM learning algorithm that efficiently finds a globally optimal solution to a straightforward relaxation of MAP. We evaluate our approach using the TREC 9 and TREC 10 Web Track corpora (WT10g), comparing against SVMs optimized for accuracy and ROCArea. In most cases we show our method to produce statistically significant improvements in MAP scores.","lvl-1":"A Support Vector Method for Optimizing Average Precision Yisong Yue Cornell University Ithaca, NY, USA yyue@cs.cornell.edu Thomas Finley Cornell University Ithaca, NY, USA tomf@cs.cornell.edu Filip Radlinski Cornell University Ithaca, NY, USA filip@cs.cornell.edu Thorsten Joachims Cornell University Ithaca, NY, USA tj@cs.cornell.edu ABSTRACT Machine learning is commonly used to improve ranked retrieval systems.\nDue to computational difficulties, few learning techniques have been developed to directly optimize for mean average precision (MAP), despite its widespread use in evaluating such systems.\nExisting approaches optimizing MAP either do not find a globally optimal solution, or are computationally expensive.\nIn contrast, we present a general SVM learning algorithm that efficiently finds a globally optimal solution to a straightforward relaxation of MAP.\nWe evaluate our approach using the TREC 9 and TREC 10 Web Track corpora (WT10g), comparing against SVMs optimized for accuracy and ROCArea.\nIn most cases we show our method to produce statistically significant improvements in MAP scores.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval Models General Terms Algorithm, Theory, Experimentation 1.\nINTRODUCTION State of the art information retrieval systems commonly use machine learning techniques to learn ranking functions.\nHowever, most current approaches do not optimize for the evaluation measure most often used, namely Mean Average Precision (MAP).\nInstead, current algorithms tend to take one of two general approaches.\nThe first 
approach is to learn a model that estimates the probability of a document being relevant given a query (e.g., [18, 14]).\nIf solved effectively, the ranking with best MAP performance can easily be derived from the probabilities of relevance.\nHowever, achieving high MAP only requires finding a good ordering of the documents.\nAs a result, finding good probabilities requires solving a more difficult problem than necessary, likely requiring more training data to achieve the same MAP performance.\nThe second common approach is to learn a function that maximizes a surrogate measure.\nPerformance measures optimized include accuracy [17, 15], ROCArea [1, 5, 10, 11, 13, 21] or modifications of ROCArea [4], and NDCG [2, 3].\nLearning a model to optimize for such measures might result in suboptimal MAP performance.\nIn fact, although some previous systems have obtained good MAP performance, it is known that neither achieving optimal accuracy nor ROCArea can guarantee optimal MAP performance[7].\nIn this paper, we present a general approach for learning ranking functions that maximize MAP performance.\nSpecifically, we present an SVM algorithm that globally optimizes a hinge-loss relaxation of MAP.\nThis approach simplifies the process of obtaining ranking functions with high MAP performance by avoiding additional intermediate steps and heuristics.\nThe new algorithm also makes it conceptually just as easy to optimize SVMs for MAP as was previously possible only for accuracy and ROCArea.\nIn contrast to recent work directly optimizing for MAP performance by Metzler & Croft [16] and Caruana et al. 
[6], our technique is computationally efficient while finding a globally optimal solution.\nLike [6, 16], our method learns a linear model, but is much more efficient in practice and, unlike [16], can handle many thousands of features.\nWe now describe the algorithm in detail and provide proof of correctness.\nFollowing this, we provide an analysis of running time.\nWe finish with empirical results from experiments on the TREC 9 and TREC 10 Web Track corpus.\nWe have also developed a software package implementing our algorithm that is available for public use1 .\n1 http:\/\/svmrank.yisongyue.com\n2.\nTHE LEARNING PROBLEM\nFollowing the standard machine learning setup, our goal is to learn a function $h : X \to Y$ between an input space $X$ (all possible queries) and output space $Y$ (rankings over a corpus).\nIn order to quantify the quality of a prediction, $\hat{y} = h(x)$, we will consider a loss function $\Delta : Y \times Y \to \mathbb{R}$.\n$\Delta(y, \hat{y})$ quantifies the penalty for making prediction $\hat{y}$ if the correct output is $y$.\nThe loss function allows us to incorporate specific performance measures, which we will exploit for optimizing MAP.\nWe restrict ourselves to the supervised learning scenario, where input\/output pairs $(x, y)$ are available for training and are assumed to come from some fixed distribution $P(x, y)$.\nThe goal is to find a function $h$ such that the risk (i.e., expected loss), $R^{\Delta}_{P}(h) = \int_{X \times Y} \Delta(y, h(x))\, dP(x, y)$, is minimized.\nOf course, $P(x, y)$ is unknown.\nBut given a finite set of training pairs, $S = \{(x_i, y_i) \in X \times Y : i = 1, \ldots, n\}$, the performance of $h$ on $S$ can be measured by the empirical risk, $R^{\Delta}_{S}(h) = \frac{1}{n} \sum_{i=1}^{n} \Delta(y_i, h(x_i))$.\nIn the case of learning a ranked retrieval function, $X$ denotes a space of queries, and $Y$ the space of (possibly weak) rankings over some corpus of documents C = {d1, ... 
, d|C|}.\nWe can define average precision loss as $\Delta_{map}(y, \hat{y}) = 1 - \mathrm{MAP}(\mathrm{rank}(y), \mathrm{rank}(\hat{y}))$, where $\mathrm{rank}(y)$ is a vector of the rank values of each document in $C$.\nFor example, for a corpus of two documents, $\{d_1, d_2\}$, with $d_1$ having higher rank than $d_2$, $\mathrm{rank}(y) = (1, 0)$.\nWe assume true rankings have two rank values, where relevant documents have rank value 1 and non-relevant documents rank value 0.\nWe further assume that all predicted rankings are complete rankings (no ties).\nLet $p = \mathrm{rank}(y)$ and $\hat{p} = \mathrm{rank}(\hat{y})$.\nThe average precision score is defined as $\mathrm{MAP}(p, \hat{p}) = \frac{1}{rel} \sum_{j : p_j = 1} \mathrm{Prec@}j$, where $rel = |\{i : p_i = 1\}|$ is the number of relevant documents, and $\mathrm{Prec@}j$ is the percentage of relevant documents in the top $j$ documents of the predicted ranking $\hat{y}$.\nMAP is the mean of the average precision scores of a group of queries.\n2.1 MAP vs ROCArea\nMost learning algorithms optimize for accuracy or ROCArea.\nWhile optimizing for these measures might achieve good MAP performance, we use two simple examples to show it can also be suboptimal in terms of MAP.\nROCArea assigns equal penalty to each misordering of a relevant\/non-relevant pair.\nIn contrast, MAP assigns greater penalties to misorderings higher up in the predicted ranking.\nUsing our notation, ROCArea can be defined as $\mathrm{ROC}(p, \hat{p}) = \frac{1}{rel \cdot (|C| - rel)} \sum_{i : p_i = 1} \sum_{j : p_j = 0} \mathbf{1}[\hat{p}_i > \hat{p}_j]$, where $p$ is the true (weak) ranking, $\hat{p}$ is the predicted ranking, and $\mathbf{1}[b]$ is the indicator function conditioned on $b$. 
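Both measures are easy to compute from a binary relevance vector listed in predicted rank order. The sketch below uses our own helper names; the assertions reproduce the MAP and ROCArea values the paper reports for the two toy hypotheses of Table 2.

```python
def average_precision(rels):
    """AP of one query: rels is the binary relevance vector of the
    documents listed in predicted rank order (best first)."""
    hits, total = 0, 0.0
    for j, r in enumerate(rels, start=1):
        if r:
            hits += 1
            total += hits / j  # Prec@j at each relevant position
    return total / hits

def roc_area(rels):
    """Fraction of relevant/non-relevant pairs ordered correctly."""
    rel = sum(rels)
    nonrel = len(rels) - rel
    correct, seen_rel = 0, 0
    for r in rels:
        if r:
            seen_rel += 1
        else:
            correct += seen_rel  # relevant docs ranked above this non-relevant one
    return correct / (rel * nonrel)

# Relevance vectors induced by the two toy hypotheses:
# h1 ranks the eight documents in the order 1..8, h2 in the reverse order.
h1 = [1, 0, 0, 0, 0, 1, 1, 0]
h2 = [0, 1, 1, 0, 0, 0, 0, 1]
```

Rounded to two decimals, `average_precision`/`roc_area` give 0.59/0.47 for h1 and 0.51/0.53 for h2, so the two measures disagree about which hypothesis is better.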
Doc ID: 1 2 3 4 5 6 7 8\np: 1 0 0 0 0 1 1 0\nrank(h1(x)): 8 7 6 5 4 3 2 1\nrank(h2(x)): 1 2 3 4 5 6 7 8\nTable 1: Toy Example and Models\nSuppose we have a hypothesis space with only two hypothesis functions, h1 and h2, as shown in Table 1.\nThese two hypotheses predict a ranking for query x over a corpus of eight documents.\nHypothesis MAP ROCArea\nh1(x) 0.59 0.47\nh2(x) 0.51 0.53\nTable 2: Performance of Toy Models\nTable 2 shows the MAP and ROCArea scores of h1 and h2.\nHere, a learning method which optimizes for ROCArea would choose h2 since that results in a higher ROCArea score, but this yields a suboptimal MAP score.\n2.2 MAP vs Accuracy\nUsing a very similar example, we now demonstrate how optimizing for accuracy might result in suboptimal MAP.\nModels which optimize for accuracy are not directly concerned with the ranking.\nInstead, they learn a threshold such that documents scoring higher than the threshold are classified as relevant and documents scoring lower as non-relevant.\nDoc ID: 1 2 3 4 5 6 7 8 9 10 11\np: 1 0 0 0 0 1 1 1 1 0 0\nrank(h1(x)): 11 10 9 8 7 6 5 4 3 2 1\nrank(h2(x)): 1 2 3 4 5 6 7 8 9 10 11\nTable 3: Toy Example and Models\nWe consider again a hypothesis space with two hypotheses.\nTable 3 shows the predictions of the two hypotheses on a single query x. 
Hypothesis MAP Best Acc.\nh1(q) 0.70 0.64\nh2(q) 0.64 0.73\nTable 4: Performance of Toy Models\nTable 4 shows the MAP and best accuracy scores of h1(q) and h2(q).\nThe best accuracy refers to the highest achievable accuracy on that ranking when considering all possible thresholds.\nFor instance, with h1(q), a threshold between documents 1 and 2 gives 4 errors (documents 6-9 incorrectly classified as non-relevant), yielding an accuracy of 0.64.\nSimilarly, with h2(q), a threshold between documents 5 and 6 gives 3 errors (documents 10-11 incorrectly classified as relevant, and document 1 as non-relevant), yielding an accuracy of 0.73.\nA learning method which optimizes for accuracy would choose h2 since that results in a higher accuracy score, but this yields a suboptimal MAP score.\n3.\nOPTIMIZING AVERAGE PRECISION\nWe build upon the approach used by [13] for optimizing ROCArea.\nUnlike ROCArea, however, MAP does not decompose linearly in the examples and requires a substantially extended algorithm, which we describe in this section.\nRecall that the true ranking is a weak ranking with two rank values (relevant and non-relevant).\nLet $C^{x}$ and $C^{\bar{x}}$ denote the set of relevant and non-relevant documents of $C$ for query $x$, respectively.\nWe focus on functions which are parametrized by a weight vector $w$, and thus wish to find $w$ to minimize the empirical risk, $R^{\Delta}_{S}(w) \equiv R^{\Delta}_{S}(h(\cdot\,; w))$.\nOur approach is to learn a discriminant function $F : X \times Y \to \mathbb{R}$ over input-output pairs.\nGiven query $x$, we can derive a prediction by finding the ranking $y$ that maximizes the discriminant function: $h(x; w) = \operatorname{argmax}_{y \in Y} F(x, y; w)$. (1)\nWe assume $F$ to be linear in some combined feature representation of inputs and outputs $\Psi(x, y) \in \mathbb{R}^N$, i.e., $F(x, y; w) = w^T \Psi(x, y)$. (2)\nThe combined feature function we use is $\Psi(x, y) = \frac{1}{|C^{x}| \cdot |C^{\bar{x}}|} \sum_{i : d_i \in C^{x}} \sum_{j : d_j \in C^{\bar{x}}} \left[ y_{ij} \left( \phi(x, d_i) - \phi(x, d_j) \right) \right]$, where $\phi$ 
$: X \times C \to \mathbb{R}^N$ is a feature mapping function from a query\/document pair to a point in $N$-dimensional space2 .\nWe represent rankings as a matrix of pairwise orderings, $Y \subset \{-1, 0, +1\}^{|C| \times |C|}$.\nFor any $y \in Y$, $y_{ij} = +1$ if $d_i$ is ranked ahead of $d_j$, $y_{ij} = -1$ if $d_j$ is ranked ahead of $d_i$, and $y_{ij} = 0$ if $d_i$ and $d_j$ have equal rank.\nWe consider only matrices which correspond to valid rankings (i.e., obeying antisymmetry and transitivity).\nIntuitively, $\Psi$ is a summation over the vector differences of all relevant\/non-relevant document pairings.\nSince we assume predicted rankings to be complete rankings, $y_{ij}$ is either $+1$ or $-1$ (never 0).\nGiven a learned weight vector $w$, predicting a ranking (i.e., solving equation (1)) given query $x$ reduces to picking each $y_{ij}$ to maximize $w^T \Psi(x, y)$.\nAs is also discussed in [13], this is attained by sorting the documents by $w^T \phi(x, d)$ in descending order.\nWe will discuss later the choices of $\phi$ we used for our experiments.\n3.1 Structural SVMs\nThe above formulation is very similar to learning a straightforward linear model while training on the pairwise difference of relevant\/non-relevant document pairings.\nMany SVM-based approaches optimize over these pairwise differences (e.g., [5, 10, 13, 4]), although these methods do not optimize for MAP during training.\nPreviously, it was not clear how to incorporate non-linear multivariate loss functions such as MAP loss directly into global optimization problems such as SVM training.\nWe now present a method based on structural SVMs [19] to address this problem.\nWe use the structural SVM formulation, presented in Optimization Problem 1, to learn a $w \in \mathbb{R}^N$.\nOptimization Problem 1.\n(Structural SVM) $\min_{w, \xi \geq 0} \; \frac{1}{2} \|w\|^2 + \frac{C}{n} \sum_{i=1}^{n} \xi_i$ (3) s.t. 
$\forall i, \forall y \in Y \setminus y_i : \; w^T \Psi(x_i, y_i) \geq w^T \Psi(x_i, y) + \Delta(y_i, y) - \xi_i$ (4)\nThe objective function to be minimized (3) is a tradeoff between model complexity, $\|w\|^2$, and a hinge loss relaxation of MAP loss, $\sum \xi_i$.\nAs is usual in SVM training, $C$ is a parameter that controls this tradeoff and can be tuned to achieve good performance in different training tasks.\n2 For example, one dimension might be the number of times the query words appear in the document.\nAlgorithm 1 Cutting plane algorithm for solving OP 1 within tolerance $\epsilon$.\n1: Input: $(x_1, y_1), \ldots, (x_n, y_n)$, $C$, $\epsilon$\n2: $W_i \leftarrow \emptyset$ for all $i = 1, \ldots, n$\n3: repeat\n4: for $i = 1, \ldots, n$ do\n5: $H(y; w) \equiv \Delta(y_i, y) + w^T \Psi(x_i, y) - w^T \Psi(x_i, y_i)$\n6: compute $\hat{y} = \operatorname{argmax}_{y \in Y} H(y; w)$\n7: compute $\xi_i = \max\{0, \max_{y \in W_i} H(y; w)\}$\n8: if $H(\hat{y}; w) > \xi_i + \epsilon$ then\n9: $W_i \leftarrow W_i \cup \{\hat{y}\}$\n10: $w \leftarrow$ optimize (3) over $W = \bigcup_i W_i$\n11: end if\n12: end for\n13: until no $W_i$ has changed during iteration\nFor each $(x_i, y_i)$ in the training set, a set of constraints of the form in equation (4) is added to the optimization problem.\nNote that $w^T \Psi(x, y)$ is exactly our discriminant function $F(x, y; w)$ (see equation (2)).\nDuring prediction, our model chooses the ranking which maximizes the discriminant (1).\nIf the discriminant value for an incorrect ranking $y$ is greater than for the true ranking $y_i$ (e.g., $F(x_i, y; w) > F(x_i, y_i; w)$), then the corresponding slack variable, $\xi_i$, must be at least $\Delta(y_i, y)$ for that constraint to be satisfied.\nTherefore, the sum of slacks, $\sum \xi_i$, upper bounds the MAP loss.\nThis is stated formally in Proposition 1.\nProposition 1.\nLet $\xi^*(w)$ be the optimal solution of the slack variables for OP 1 for a given weight vector $w$.\nThen $\frac{1}{n} \sum_{i=1}^{n} \xi_i^*$ is an upper bound on the empirical risk $R^{\Delta}_{S}(w)$.\n(see [19] for proof)\nProposition 1 shows that OP 1 learns a ranking function that optimizes 
an upper bound on MAP error on the training set.\nUnfortunately there is a problem: a constraint is required for every possible wrong output $y$, and the number of possible wrong outputs is exponential in the size of $C$.\nFortunately, we may employ Algorithm 1 to solve OP 1.\nAlgorithm 1 is a cutting plane algorithm, iteratively introducing constraints until we have solved the original problem within a desired tolerance $\epsilon$ [19].\nThe algorithm starts with no constraints, and iteratively finds for each example $(x_i, y_i)$ the output $\hat{y}$ associated with the most violated constraint.\nIf the corresponding constraint is violated by more than $\epsilon$, we introduce $\hat{y}$ into the working set $W_i$ of active constraints for example $i$, and re-solve (3) using the updated $W$.\nIt can be shown that Algorithm 1's outer loop is guaranteed to halt within a polynomial number of iterations for any desired precision $\epsilon$.\nTheorem 1.\nLet $\bar{R} = \max_i \max_y \|\Psi(x_i, y_i) - \Psi(x_i, y)\|$, $\bar{\Delta} = \max_i \max_y \Delta(y_i, y)$; then for any $\epsilon > 0$, Algorithm 1 terminates after adding at most $\max\left\{ \frac{2n\bar{\Delta}}{\epsilon}, \frac{8C\bar{\Delta}\bar{R}^2}{\epsilon^2} \right\}$ constraints to the working set $W$.\n(see [19] for proof)\nHowever, within the inner loop of this algorithm we have to compute $\operatorname{argmax}_{y \in Y} H(y; w)$, where $H(y; w) = \Delta(y_i, y) + w^T \Psi(x_i, y) - w^T \Psi(x_i, y_i)$, or equivalently, $\operatorname{argmax}_{y \in Y} \; \Delta(y_i, y) + w^T \Psi(x_i, y)$, since $w^T \Psi(x_i, y_i)$ is constant with respect to $y$. 
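The interaction between constraint generation and the argmax H step can be illustrated at toy scale. The sketch below is our own simplification, not the paper's implementation: it brute-forces the most violated constraint over all rankings of a four-document corpus (feasible only at this size; Section 3.2 gives the efficient method), and it replaces the QP re-optimization in line 10 of Algorithm 1 with a simple perceptron-style update on the violated constraint. All feature values and helper names are illustrative assumptions.

```python
import itertools
import numpy as np

# Toy corpus: documents 0,1 are relevant, 2,3 non-relevant; phi(x, d) is a
# 2-d feature vector per document (values are made up for illustration).
PHI = np.array([[1.0, 0.2], [0.9, 0.1], [0.2, 0.9], [0.1, 0.8]])
REL, NONREL = [0, 1], [2, 3]

def psi(order):
    """Joint feature map: normalized sum of feature differences over all
    relevant/non-relevant pairs, signed by their relative order."""
    pos = {d: r for r, d in enumerate(order)}
    total = np.zeros(2)
    for i in REL:
        for j in NONREL:
            y_ij = 1.0 if pos[i] < pos[j] else -1.0
            total += y_ij * (PHI[i] - PHI[j])
    return total / (len(REL) * len(NONREL))

def map_loss(order):
    """Delta_map = 1 - average precision of the ranking."""
    hits, ap = 0, 0.0
    for rank, d in enumerate(order, start=1):
        if d in REL:
            hits += 1
            ap += hits / rank
    return 1.0 - ap / len(REL)

def train(iters=10, eta=0.5, eps=1e-3):
    y_star = tuple(REL + NONREL)  # one perfect ranking
    w = np.zeros(2)
    for _ in range(iters):
        # Most violated constraint: brute force over all 4! candidate rankings.
        y_hat = max(itertools.permutations(range(4)),
                    key=lambda y: map_loss(y) + w @ (psi(y) - psi(y_star)))
        if map_loss(y_hat) + w @ (psi(y_hat) - psi(y_star)) <= eps:
            break  # no constraint violated by more than eps
        w += eta * (psi(y_star) - psi(y_hat))  # stand-in for re-solving the QP
    return w

w = train()
predicted = sorted(range(4), key=lambda d: -(w @ PHI[d]))
```

On this separable toy data the loop terminates after the first violated constraint is added, and sorting by `w @ PHI[d]` ranks both relevant documents on top.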
Though closely related to the classification procedure, this has the substantial complication that we must contend with the additional $\Delta(y_i, y)$ term.\nWithout the ability to efficiently find the most violated constraint (i.e., solve $\operatorname{argmax}_{y \in Y} H(y; w)$), the constraint generation procedure is not tractable.\n3.2 Finding the Most Violated Constraint\nUsing OP 1 and optimizing for ROCArea loss ($\Delta_{roc}$), the problem of finding the most violated constraint, or solving $\operatorname{argmax}_{y \in Y} H(y; w)$ (henceforth argmax H), is addressed in [13].\nSolving argmax H for $\Delta_{map}$ is more difficult.\nThis is primarily because ROCArea decomposes nicely into a sum of scores computed independently on each relative ordering of a relevant\/non-relevant document pair.\nMAP, on the other hand, does not decompose in the same way as ROCArea.\nThe main algorithmic contribution of this paper is an efficient method for solving argmax H for $\Delta_{map}$.\nOne useful property of $\Delta_{map}$ is that it is invariant to swapping two documents with equal relevance.\nFor example, if documents $d_a$ and $d_b$ are both relevant, then swapping the positions of $d_a$ and $d_b$ in any ranking does not affect $\Delta_{map}$.\nBy extension, $\Delta_{map}$ is invariant to any arbitrary permutation of the relevant documents amongst themselves and of the non-relevant documents amongst themselves.\nHowever, this reshuffling will affect the discriminant score, $w^T \Psi(x, y)$.\nThis leads us to Observation 1.\nObservation 1.\nConsider rankings which are constrained by fixing the relevance at each position in the ranking (e.g., the 3rd document in the ranking must be relevant).\nEvery ranking which satisfies the same set of constraints will have the same $\Delta_{map}$.\nIf the relevant documents are sorted by $w^T \phi(x, d)$ in descending order, and the non-relevant documents are likewise sorted by $w^T \phi(x, d)$, then the interleaving of the two sorted lists which satisfies the constraints will maximize H for that constrained set 
of rankings.

Observation 1 implies that in the ranking which maximizes H, the relevant documents will be sorted by $w^T \phi(x, d)$, and the non-relevant documents will also be sorted likewise. By first sorting the relevant and non-relevant documents, the problem is simplified to finding the optimal interleaving of two sorted lists. For the rest of our discussion, we assume that the relevant documents and non-relevant documents are both sorted by descending $w^T \phi(x, d)$. For convenience, we also refer to the relevant documents as $\{d^x_1, \ldots, d^x_{|\mathcal{C}^x|}\} = \mathcal{C}^x$, and the non-relevant documents as $\{d^{\bar{x}}_1, \ldots, d^{\bar{x}}_{|\mathcal{C}^{\bar{x}}|}\} = \mathcal{C}^{\bar{x}}$.

We define $\delta_j(i_1, i_2)$, with $i_1 < i_2$, as the change in H from when the highest ranked relevant document ranked after $d^{\bar{x}}_j$ is $d^x_{i_1}$ to when it is $d^x_{i_2}$. For $i_2 = i_1 + 1$, we have
$$\delta_j(i, i+1) = \frac{1}{|\mathcal{C}^x|}\left( \frac{j}{j+i} - \frac{j-1}{j+i-1} \right) - 2 \cdot (s^x_i - s^{\bar{x}}_j), \quad (5)$$
where $s_i = w^T \phi(x, d_i)$. The first term in (5) is the change in $\Delta_{map}$ when the $i$th relevant document has $j$ non-relevant documents ranked before it, as opposed to $j-1$. The second term is the change in the discriminant score, $w^T \Psi(x, y)$, when $y_{ij}$ changes from $+1$ to $-1$.

Figure 1: Example for $\delta_j(i, i+1)$ — top ranking $(\ldots, d^x_i, d^{\bar{x}}_j, d^x_{i+1}, \ldots)$; bottom ranking $(\ldots, d^{\bar{x}}_j, d^x_i, d^x_{i+1}, \ldots)$.

Figure 1 gives a conceptual example for $\delta_j(i, i+1)$. The bottom ranking differs from the top only in that $d^{\bar{x}}_j$ slides up one rank. The difference in the value of H for these two rankings is exactly $\delta_j(i, i+1)$.

For any $i_1 < i_2$, we can then define $\delta_j(i_1, i_2)$ as
$$\delta_j(i_1, i_2) = \sum_{k=i_1}^{i_2 - 1} \delta_j(k, k+1), \quad (6)$$
or equivalently,
$$\delta_j(i_1, i_2) = \sum_{k=i_1}^{i_2 - 1} \left[ \frac{1}{|\mathcal{C}^x|}\left( \frac{j}{j+k} - \frac{j-1}{j+k-1} \right) - 2 \cdot (s^x_k - s^{\bar{x}}_j) \right].$$
Let $o_1, \ldots$
, $o_{|\mathcal{C}^{\bar{x}}|}$ encode the positions of the non-relevant documents, where $d^x_{o_j}$ is the highest ranked relevant document ranked after the $j$th non-relevant document. Due to Observation 1, this encoding uniquely identifies a complete ranking. We can recover the ranking as
$$y_{ij} = \begin{cases} 0 & \text{if } i = j \\ \operatorname{sign}(s_i - s_j) & \text{if } d_i, d_j \text{ of equal relevance} \\ \operatorname{sign}(o_j - i - 0.5) & \text{if } d_i = d^x_i, \; d_j = d^{\bar{x}}_j \\ \operatorname{sign}(j - o_i + 0.5) & \text{if } d_i = d^{\bar{x}}_i, \; d_j = d^x_j \end{cases} \quad (7)$$
We can now reformulate H into a new objective function,
$$H'(o_1, \ldots, o_{|\mathcal{C}^{\bar{x}}|} \mid w) = H(\bar{y} \mid w) + \sum_{k=1}^{|\mathcal{C}^{\bar{x}}|} \delta_k(o_k, |\mathcal{C}^x| + 1),$$
where $\bar{y}$ is the true (weak) ranking. Conceptually, $H'$ starts with a perfect ranking $\bar{y}$, and adds the change in H when each successive non-relevant document slides up the ranking. We can then reformulate the argmax H problem as
$$\operatorname{argmax} H = \operatorname{argmax}_{o_1, \ldots, o_{|\mathcal{C}^{\bar{x}}|}} \sum_{k=1}^{|\mathcal{C}^{\bar{x}}|} \delta_k(o_k, |\mathcal{C}^x| + 1) \quad (8)$$
$$\text{s.t.} \quad o_1 \leq \ldots \leq o_{|\mathcal{C}^{\bar{x}}|}. \quad (9)$$
Algorithm 2 describes the algorithm used to solve equation (8). Conceptually, Algorithm 2 starts with a perfect ranking. Then for each successive non-relevant document, the algorithm modifies the solution by sliding that document up the ranking to locally maximize H while keeping the positions of the other non-relevant documents constant.

3.2.1 Proof of Correctness

Algorithm 2 is greedy in the sense that it finds the best position of each non-relevant document independently from the other non-relevant documents. In other words, the algorithm maximizes H for each non-relevant document $d^{\bar{x}}_j$

Algorithm 2 Finding the Most Violated Constraint (argmax H) for Algorithm 1 with $\Delta_{map}$
1: Input: $w$, $\mathcal{C}^x$, $\mathcal{C}^{\bar{x}}$
2: sort $\mathcal{C}^x$ and $\mathcal{C}^{\bar{x}}$ in descending order of $w^T \phi(x, d)$
3: $s^x_i \leftarrow w^T \phi(x, d^x_i)$, $i = 1, \ldots, |\mathcal{C}^x|$
4: $s^{\bar{x}}_i \leftarrow w^T \phi(x, d^{\bar{x}}_i)$, $i = 1, \ldots, |\mathcal{C}^{\bar{x}}|$
5: for $j = 1, \ldots$
, $|\mathcal{C}^{\bar{x}}|$ do
6: $\quad opt_j \leftarrow \operatorname{argmax}_k \; \delta_j(k, |\mathcal{C}^x| + 1)$
7: end for
8: encode $\hat{y}$ according to (7)
9: return $\hat{y}$

without considering the positions of the other non-relevant documents, and thus ignores the constraints of (9). In order for the solution to be feasible, the $j$th non-relevant document must be ranked after the first $j-1$ non-relevant documents, thus satisfying
$$opt_1 \leq opt_2 \leq \ldots \leq opt_{|\mathcal{C}^{\bar{x}}|}. \quad (10)$$
If the solution is feasible, then it clearly solves (8). Therefore, it suffices to prove that Algorithm 2 satisfies (10). We first prove that $\delta_j(\cdot, \cdot)$ is monotonically decreasing in $j$.

Lemma 1. For any $1 \leq i_1 < i_2 \leq |\mathcal{C}^x| + 1$ and $1 \leq j < |\mathcal{C}^{\bar{x}}|$, it must be the case that
$$\delta_{j+1}(i_1, i_2) \leq \delta_j(i_1, i_2).$$
Proof. Recall from (6) that both $\delta_j(i_1, i_2)$ and $\delta_{j+1}(i_1, i_2)$ are summations of $i_2 - i_1$ terms. We will show that each term in the summation of $\delta_{j+1}(i_1, i_2)$ is no greater than the corresponding term in $\delta_j(i_1, i_2)$, or $\delta_{j+1}(k, k+1) \leq \delta_j(k, k+1)$ for $k = i_1, \ldots$
, $i_2 - 1$.

Each term in $\delta_j(k, k+1)$ and $\delta_{j+1}(k, k+1)$ can be further decomposed into two parts (see (5)). We will show that each part of $\delta_{j+1}(k, k+1)$ is no greater than the corresponding part in $\delta_j(k, k+1)$. In other words, we will show that both
$$\frac{j+1}{j+k+1} - \frac{j}{j+k} \leq \frac{j}{j+k} - \frac{j-1}{j+k-1} \quad (11)$$
and
$$-2 \cdot (s^x_k - s^{\bar{x}}_{j+1}) \leq -2 \cdot (s^x_k - s^{\bar{x}}_j) \quad (12)$$
are true for the aforementioned values of $j$ and $k$. It is easy to see that (11) is true by observing that for any two positive integers $1 \leq a < b$,
$$\frac{a+1}{b+1} - \frac{a}{b} \leq \frac{a}{b} - \frac{a-1}{b-1},$$
and choosing $a = j$ and $b = j + k$. The second inequality (12) holds because Algorithm 2 first sorts the $d^{\bar{x}}$ in descending order of $s^{\bar{x}}$, implying $s^{\bar{x}}_{j+1} \leq s^{\bar{x}}_j$. Thus we see that each term in $\delta_{j+1}$ is no greater than the corresponding term in $\delta_j$, which completes the proof.

The result of Lemma 1 leads directly to our main correctness result:

Theorem 2. In Algorithm 2, the computed values of $opt_j$ satisfy (10), implying that the solution returned by Algorithm 2 is feasible and thus optimal.

Proof. We will prove that $opt_j \leq opt_{j+1}$ holds for any $1 \leq j < |\mathcal{C}^{\bar{x}}|$, thus implying (10). Since Algorithm 2 computes $opt_j$ as
$$opt_j = \operatorname{argmax}_k \; \delta_j(k, |\mathcal{C}^x| + 1), \quad (13)$$
then by the definition of $\delta_j$ (6), for any $1 \leq i < opt_j$,
$$\delta_j(i, opt_j) = \delta_j(i, |\mathcal{C}^x| + 1) - \delta_j(opt_j, |\mathcal{C}^x| + 1) < 0.$$
Using Lemma 1, we know that
$$\delta_{j+1}(i, opt_j) \leq \delta_j(i, opt_j) < 0,$$
which implies that for any $1 \leq i < opt_j$,
$$\delta_{j+1}(i, |\mathcal{C}^x| + 1) - \delta_{j+1}(opt_j, |\mathcal{C}^x| + 1) < 0.$$
Suppose for contradiction that $opt_{j+1} < opt_j$. Then
$$\delta_{j+1}(opt_{j+1}, |\mathcal{C}^x| + 1) < \delta_{j+1}(opt_j, |\mathcal{C}^x| + 1),$$
which contradicts (13). Therefore, it must be the case that $opt_j \leq opt_{j+1}$, which completes the proof.

3.2.2 Running Time

The running time of Algorithm 2 can be
split into two parts. The first part is the sort by $w^T \phi(x, d)$, which requires $O(n \log n)$ time, where $n = |\mathcal{C}^x| + |\mathcal{C}^{\bar{x}}|$. The second part computes each $opt_j$, which requires $O(|\mathcal{C}^x| \cdot |\mathcal{C}^{\bar{x}}|)$ time. Though in the worst case this is $O(n^2)$, the number of relevant documents, $|\mathcal{C}^x|$, is often very small (e.g., constant with respect to $n$), in which case the running time for the second part is simply $O(n)$. For most real-world datasets, Algorithm 2 is dominated by the sort and has complexity $O(n \log n)$. Algorithm 1 is guaranteed to halt in a polynomial number of iterations [19], and each iteration runs Algorithm 2. Virtually all well-performing models were trained in a reasonable amount of time (usually less than one hour). Once training is complete, making predictions on query $x$ using the resulting hypothesis $h(x \mid w)$ requires only sorting by $w^T \phi(x, d)$.

We developed our software using a Python interface³ to SVMstruct, since the Python language greatly simplified the coding process. To improve performance, it is advisable to use the standard C implementation⁴ of SVMstruct.

³ http://www.cs.cornell.edu/~tomf/svmpython/
⁴ http://svmlight.joachims.org/svm_struct.html

4. EXPERIMENT SETUP

The main goal of our experiments is to evaluate whether directly optimizing MAP leads to improved MAP performance compared to conventional SVM methods that optimize a substitute loss such as accuracy or ROCArea. We empirically evaluate our method using two sets of TREC Web Track queries, one each from TREC 9 and TREC 10 (topics 451-500 and 501-550), both of which used the WT10g corpus. For each query, TREC provides the relevance judgments of the documents. We generated our features using the scores of existing retrieval functions on these queries. While our method is agnostic to the meaning of the features, we chose to use existing retrieval functions as a simple yet effective way of acquiring useful features. As such, our experiments essentially test our method's ability to re-rank the highly ranked documents (e.g., re-combine the scores of the retrieval functions) to improve MAP.

Table 5: Dataset Statistics
Dataset               Base Funcs   Features
TREC 9 Indri          15           750
TREC 10 Indri         15           750
TREC 9 Submissions    53           2650
TREC 10 Submissions   18           900

We compare our method against the best of the retrieval functions we trained on (henceforth base functions), as well as against previously proposed SVM methods. Comparing with the best base functions tests our method's ability to learn a useful combination. Comparing with previous SVM methods allows us to test whether optimizing directly for MAP (as opposed to accuracy or ROCArea) achieves a higher MAP score in practice. The rest of this section describes the base functions and the feature generation method in detail.

4.1 Choosing Retrieval Functions

We chose two sets of base functions for our experiments. For the first set, we generated three indices over the WT10g corpus using Indri⁵. The first index was generated using default settings, the second used Porter stemming, and the last used Porter stemming and Indri's default stopwords. For both TREC 9 and TREC 10, we used the description portion of each query and scored the documents using five of Indri's built-in retrieval methods: Cosine Similarity, TF-IDF, Okapi, Language Model with Dirichlet Prior, and Language Model with Jelinek-Mercer Prior. All parameters were kept at their defaults. We computed the scores of these five retrieval methods over the three indices, giving 15 base functions in total. For each query, we considered the scores of documents found in the union of the top 1000 documents of each base function.

For our second set of base functions, we used scores from the TREC 9 [8] and TREC 10 [9] Web Track submissions. We used only the non-manual, non-short submissions from both years. For TREC 9 and TREC 10, there were 53 and 18 such submissions, respectively. A typical submission contained
scores of its top 1000 documents.

⁵ http://www.lemurproject.org

4.2 Generating Features

In order to generate input examples for our method, a concrete instantiation of $\phi$ must be provided. For each document $d$ scored by a set of retrieval functions $F$ on query $x$, we generate the features as a vector
$$\phi(x, d) = \left( 1[f(d|x) > k] : \forall f \in F, \; \forall k \in K_f \right),$$
where $f(d|x)$ denotes the score that retrieval function $f$ assigns to document $d$ for query $x$, and each $K_f$ is a set of real values. From a high level, we are expressing the score of each retrieval function using $|K_f| + 1$ bins. Since we are using linear kernels, one can think of the learning problem as finding a good piecewise-constant combination of the scores of the retrieval functions.

Figure 2: Example Feature Binning (the piecewise-constant mapping from a score $f(d|x)$ to $w^T \phi(x, d)$ with thresholds $a$, $b$, $c$).

Figure 2 shows an example of our feature mapping method. In this example we have a single feature, $F = \{f\}$. Here, $K_f = \{a, b, c\}$, and the weight vector is $w = (w_a, w_b, w_c)$. For any document $d$ and query $x$, we have
$$w^T \phi(x, d) = \begin{cases} 0 & \text{if } f(d|x) < a \\ w_a & \text{if } a \leq f(d|x) < b \\ w_a + w_b & \text{if } b \leq f(d|x) < c \\ w_a + w_b + w_c & \text{if } c \leq f(d|x) \end{cases}$$
This is expressed qualitatively in Figure 2, where $w_a$ and $w_b$ are positive, and $w_c$ is negative.

Table 6: Comparison with Indri Functions
              TREC 9             TREC 10
Model         MAP    W/L         MAP    W/L
SVMΔmap       0.242  -           0.236  -
Best Func.    0.204  39/11 **    0.181  37/13 **
2nd Best      0.199  38/12 **    0.174  43/7 **
3rd Best      0.188  34/16 **    0.174  38/12 **

We ran our main experiments using four choices of $F$: the set of aforementioned Indri retrieval functions for TREC 9 and TREC 10, and the Web Track submissions for TREC 9 and TREC 10. For each $F$ and each function $f \in F$, we chose 50 values for $K_f$ which are reasonably spaced and capture the sensitive region of $f$.
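The indicator-binning scheme above can be sketched directly. The following is our own illustration (function name, dictionary layout, and threshold values are hypothetical), not the paper's feature-generation code:

```python
def binned_features(scores, thresholds):
    """Indicator-feature mapping phi(x, d) = (1[f(d|x) > k]).

    `scores` maps each retrieval function name to its score f(d|x) for
    one document; `thresholds` maps the same names to their bin
    boundaries K_f. With a linear kernel, the learned weight vector w
    then encodes a piecewise-constant combination of the raw scores.
    """
    features = []
    for name in sorted(scores):  # fixed feature order across documents
        features.extend(
            1 if scores[name] > k else 0 for k in thresholds[name]
        )
    return features

# Single function f with K_f = {a, b, c} = {0.2, 0.5, 0.8}:
print(binned_features({"f": 0.6}, {"f": [0.2, 0.5, 0.8]}))  # [1, 1, 0]
```

A score of 0.6 clears the first two thresholds but not the third, so under the learned model it would contribute $w_a + w_b$, matching the piecewise-constant case analysis above.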
Using the four choices of $F$, we generated four datasets for our main experiments. Table 5 contains statistics of the generated datasets. There are many ways to generate features, and we are not advocating our method over others; this was simply an efficient means to normalize the outputs of different functions and allow for a more expressive model.

5. EXPERIMENTS

For each dataset in Table 5, we performed 50 trials. For each trial, we train on 10 randomly selected queries, and select another 5 queries at random for a validation set. Models were trained using a wide range of C values. The model which performed best on the validation set was selected and tested on the remaining 35 queries. All queries were selected to be in the training, validation and test sets the same number of times. Using this setup, we performed the same experiments using our method (SVMΔmap), an SVM optimizing for ROCArea (SVMΔroc) [13], and a conventional classification SVM (SVMacc) [20]. All SVM methods used a linear kernel. We report the average performance of all models over the 50 trials.

5.1 Comparison with Base Functions

In analyzing our results, the first question to answer is: can SVMΔmap learn a model which outperforms the best base functions?

Table 7: Comparison with TREC Submissions
              TREC 9             TREC 10
Model         MAP    W/L         MAP    W/L
SVMΔmap       0.290  -           0.287  -
Best Func.    0.280  28/22       0.283  29/21
2nd Best      0.269  30/20       0.251  36/14 **
3rd Best      0.266  30/20       0.233  36/14 **

Table 8: Comparison with TREC Submissions (w/o best)
              TREC 9             TREC 10
Model         MAP    W/L         MAP    W/L
SVMΔmap       0.284  -           0.288  -
Best Func.    0.280  27/23       0.283  31/19
2nd Best      0.269  30/20       0.251  36/14 **
3rd Best      0.266  30/20       0.233  35/15 **

Table 6 presents the comparison of SVMΔmap with the best Indri base functions. Each column group contains the macro-averaged MAP performance of SVMΔmap or a base function. The W/L columns show the number of queries where SVMΔmap achieved a
higher MAP score. Significance tests were performed using the two-tailed Wilcoxon signed rank test; two stars indicate a significance level of 0.95. All tables displaying our experimental results are structured identically. Here, we find that SVMΔmap significantly outperforms the best base functions.

Table 7 shows the comparison when trained on TREC submissions. While achieving a higher MAP score than the best base functions, the performance difference between SVMΔmap and the base functions is not significant. Given that many of these submissions use scoring functions which are carefully crafted to achieve high MAP, it is possible that the best performing submissions use techniques which subsume the techniques of the other submissions. As a result, SVMΔmap would not be able to learn a hypothesis which can significantly outperform the best submission. Hence, we ran the same experiments using a modified dataset where the features computed using the best submission were removed. Table 8 shows the results (note that we are still comparing against the best submission, though we are not using it for training). Notice that while the performance of SVMΔmap degraded slightly, it was still comparable with that of the best submission.

5.2 Comparison with Previous SVM Methods

The next question to answer is: does SVMΔmap produce higher MAP scores than previous SVM methods? Tables 9 and 10 present the results of SVMΔmap, SVMΔroc, and SVMacc when trained on the Indri retrieval functions and TREC submissions, respectively. Table 11 contains the corresponding results when trained on the TREC submissions without the best submission. To start with, our results indicate that SVMacc was not competitive with SVMΔmap and SVMΔroc, and at times underperformed dramatically. As such, we tried several approaches to improve the performance of SVMacc.

5.2.1 Alternate SVMacc Methods

One issue which may cause SVMacc to
underperform is the severe imbalance between relevant and non-relevant documents. The vast majority of the documents are not relevant. SVMacc2 addresses this problem by assigning more penalty to false negative errors. For each dataset, the ratio of the false negative to false positive penalties is equal to the ratio of the number of non-relevant and relevant documents in that dataset. Tables 9, 10 and 11 indicate that SVMacc2 still performs significantly worse than SVMΔmap.

Table 9: Trained on Indri Functions
              TREC 9             TREC 10
Model         MAP    W/L         MAP    W/L
SVMΔmap       0.242  -           0.236  -
SVMΔroc       0.237  29/21       0.234  24/26
SVMacc        0.147  47/3 **     0.155  47/3 **
SVMacc2       0.219  39/11 **    0.207  43/7 **
SVMacc3       0.113  49/1 **     0.153  45/5 **
SVMacc4       0.155  48/2 **     0.155  48/2 **

Table 10: Trained on TREC Submissions
              TREC 9             TREC 10
Model         MAP    W/L         MAP    W/L
SVMΔmap       0.290  -           0.287  -
SVMΔroc       0.282  29/21       0.278  35/15 **
SVMacc        0.213  49/1 **     0.222  49/1 **
SVMacc2       0.270  34/16 **    0.261  42/8 **
SVMacc3       0.133  50/0 **     0.182  46/4 **
SVMacc4       0.233  47/3 **     0.238  46/4 **

Another possible issue is that SVMacc attempts to find just one discriminating threshold $b$ that is query-invariant. It may be that different queries require different values of $b$.
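The SVMacc2 reweighting can be made concrete with a class-weighted hinge loss. The sketch below is our own illustration of the penalty-ratio idea (function and variable names are ours, and the objective is simplified), not the authors' actual training code:

```python
def weighted_hinge_loss(scores, labels, threshold=0.0):
    """Class-weighted hinge loss in the style of SVMacc2 (a sketch).

    The false-negative penalty is scaled by the non-relevant/relevant
    document ratio, so the minority (relevant) class contributes as much
    to the objective as the majority class. Assumes at least one
    relevant document (label 1); all other labels are treated as 0.
    """
    n_rel = sum(1 for y in labels if y == 1)
    n_non = len(labels) - n_rel
    fn_weight = n_non / n_rel  # penalty ratio described in the paper
    loss = 0.0
    for s, y in zip(scores, labels):
        margin = (1 if y == 1 else -1) * (s - threshold)
        penalty = fn_weight if y == 1 else 1.0
        loss += penalty * max(0.0, 1.0 - margin)
    return loss
```

For example, with one relevant and two non-relevant documents, a relevant document sitting exactly on the threshold incurs hinge loss 1 scaled by the 2:1 penalty ratio, while confidently separated documents incur no loss.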
Having the learning method try to find a good $b$ value (when one does not exist) may be detrimental. We took two approaches to address this issue. The first method, SVMacc3, converts the retrieval function scores into percentiles. For example, for document $d$, query $q$ and retrieval function $f$, if the score $f(d|q)$ is in the top 90% of the scores $f(\cdot|q)$ for query $q$, then the converted score is $f'(d|q) = 0.9$. Each $K_f$ contains 50 evenly spaced values between 0 and 1. Tables 9, 10 and 11 show that the performance of SVMacc3 was also not competitive with SVMΔmap. The second method, SVMacc4, normalizes the scores given by $f$ for each query. For example, assume for query $q$ that $f$ outputs scores in the range 0.2 to 0.7. Then for document $d$, if $f(d|q) = 0.6$, the converted score would be $f'(d|q) = (0.6 - 0.2)/(0.7 - 0.2) = 0.8$. Each $K_f$ contains 50 evenly spaced values between 0 and 1. Again, Tables 9, 10 and 11 show that SVMacc4 was not competitive with SVMΔmap.

5.2.2 MAP vs ROCArea

SVMΔroc performed much better than SVMacc in our experiments. When trained on the Indri retrieval functions (see Table 9), the performance of SVMΔroc was slightly, though not significantly, worse than that of SVMΔmap. However, Table 10 shows that SVMΔmap did significantly outperform SVMΔroc when trained on the TREC submissions. Table 11 shows the performance of the models when trained on the TREC submissions with the best submission removed. The performance of most models degraded by a small amount, with SVMΔmap still having the best performance.

Table 11: Trained on TREC Submissions (w/o Best)
              TREC 9             TREC 10
Model         MAP    W/L         MAP    W/L
SVMΔmap       0.284  -           0.288  -
SVMΔroc       0.274  31/19 **    0.272  38/12 **
SVMacc        0.215  49/1 **     0.211  50/0 **
SVMacc2       0.267  35/15 **    0.258  44/6 **
SVMacc3       0.133  50/0 **     0.174  46/4 **
SVMacc4       0.228  46/4 **     0.234  45/5 **

6. CONCLUSIONS AND FUTURE WORK

We have presented an SVM method
that directly optimizes MAP. It provides a principled approach and avoids difficult-to-control heuristics. We formulated the optimization problem and presented an algorithm which provably finds the solution in polynomial time. We have shown empirically that our method is generally superior to or competitive with conventional SVM methods. Our new method makes it conceptually just as easy to optimize SVMs for MAP as was previously possible only for Accuracy and ROCArea. The computational cost of training is very reasonable in practice. Since other methods typically require tuning multiple heuristics, we also expect to train fewer models before finding one which achieves good performance. The learning framework used by our method is fairly general. A natural extension of this framework would be to develop methods to optimize for other important IR measures, such as Normalized Discounted Cumulative Gain [2, 3, 4, 12] and Mean Reciprocal Rank.

7. ACKNOWLEDGMENTS

This work was funded under NSF Award IIS-0412894, NSF CAREER Award 0237381, and a gift from Yahoo! Research. The third author was also partly supported by a Microsoft Research Fellowship.

8. REFERENCES

[1] B. T. Bartell, G. W. Cottrell, and R. K. Belew. Automatic combination of multiple ranked retrieval systems. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 1994.
[2] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 2005.
[3] C. J. C. Burges, R. Ragno, and Q. Le. Learning to rank with non-smooth cost functions. In Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS), 2006.
[4] Y. Cao, J. Xu, T.-Y. Liu, H. Li, Y. Huang, and H.-W. Hon.
Adapting ranking SVM to document retrieval. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2006.
[5] B. Carterette and D. Petkova. Learning a ranking from pairwise preferences. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2006.
[6] R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes. Ensemble selection from libraries of models. In Proceedings of the International Conference on Machine Learning (ICML), 2004.
[7] J. Davis and M. Goadrich. The relationship between precision-recall and ROC curves. In Proceedings of the International Conference on Machine Learning (ICML), 2006.
[8] D. Hawking. Overview of the TREC-9 web track. In Proceedings of TREC-2000, 2000.
[9] D. Hawking and N. Craswell. Overview of the TREC-2001 web track. In Proceedings of TREC-2001, Nov. 2001.
[10] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, 2000.
[11] A. Herschtal and B. Raskutti. Optimising area under the ROC curve using gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 2004.
[12] K. Jarvelin and J. Kekalainen. IR evaluation methods for retrieving highly relevant documents. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2000.
[13] T. Joachims. A support vector method for multivariate performance measures. In Proceedings of the International Conference on Machine Learning (ICML), pages 377-384, New York, NY, USA, 2005. ACM Press.
[14] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), pages 111-119, 2001.
[15] Y. Lin, Y. Lee, and G.
Wahba. Support vector machines for classification in nonstandard situations. Machine Learning, 46:191-202, 2002.
[16] D. Metzler and W. B. Croft. A markov random field model for term dependencies. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), pages 472-479, 2005.
[17] K. Morik, P. Brockhausen, and T. Joachims. Combining statistical learning with a knowledge-based approach. In Proceedings of the International Conference on Machine Learning (ICML), 1999.
[18] S. Robertson. The probability ranking principle in IR. Journal of Documentation, 33(4):294-304, 1977.
[19] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research (JMLR), 6(Sep):1453-1484, 2005.
[20] V. Vapnik. Statistical Learning Theory. Wiley and Sons Inc., 1998.
[21] L. Yan, R. Dodier, M. Mozer, and R. Wolniewicz. Optimizing classifier performance via approximation to the Wilcoxon-Mann-Whitney statistic. In Proceedings of the International Conference on Machine Learning (ICML), 2003.
have obtained good MAP performance, it is known that neither achieving optimal accuracy nor ROCArea can guarantee optimal MAP performance [7].\nIn this paper, we present a general approach for learning ranking functions that maximize MAP performance.\nSpecifically, we present an SVM algorithm that globally optimizes a hinge-loss relaxation of MAP.\nThis approach simplifies the process of obtaining ranking functions with high MAP performance by avoiding additional intermediate steps and heuristics.\nThe new algorithm also makes it conceptually just as easy to optimize SVMs for MAP as was previously possible only for accuracy and ROCArea.\nIn contrast to recent work directly optimizing for MAP performance by Metzler & Croft [16] and Caruana et al. [6], our technique is computationally efficient while finding a globally optimal solution.\nLike [6, 16], our method learns a linear model, but is much more efficient in practice and, unlike [16], can handle many thousands of features.\nWe now describe the algorithm in detail and provide proof of correctness.\nFollowing this, we provide an analysis of running time.\nWe finish with empirical results from experiments on the TREC 9 and TREC 10 Web Track corpora.\nWe have also developed a software package implementing our algorithm that is available for public use.\n2.\nTHE LEARNING PROBLEM\nFollowing the standard machine learning setup, our goal is to learn a function h: X \u2192 Y between an input space X (all possible queries) and output space Y (rankings over a corpus).\nIn order to quantify the quality of a prediction, y\u02c6 = h (x), we will consider a loss function \u0394: Y \u00d7 Y \u2192 R. 
\u0394 (y, \u02c6y) quantifies the penalty for making prediction y\u02c6 if the correct output is y.\nThe loss function allows us to incorporate specific performance measures, which we will exploit for optimizing MAP.\nWe restrict ourselves to the supervised learning scenario, where input\/output pairs (x, y) are available for training and are assumed to come from some fixed distribution P (x, y).\nThe goal is to find a function h such that the risk (i.e., expected loss), R\u0394P (h) = \u222b \u0394 (y, h (x)) dP (x, y), is minimized.\nOf course, P (x, y) is unknown.\nBut given a finite set of training pairs, S = {(xi, yi) \u2208 X \u00d7 Y: i = 1,..., n}, the performance of h on S can be measured by the empirical risk, R\u0394S (h) = (1 \/ n) \u03a3i \u0394 (yi, h (xi)).\nIn the case of learning a ranked retrieval function, X denotes a space of queries, and Y the space of (possibly weak) rankings over some corpus of documents C = {d1,..., d | C |}.\nWe can define average precision loss as \u0394map (y, \u02c6y) = 1 \u2212 AvgPrec (rank (\u02c6y)), where rank (y) is a vector of the rank values of each document in C. For example, for a corpus of two documents, {d1, d2}, with d1 having higher rank than d2, rank (y') = (1, 0).\nWe assume true rankings have two rank values, where relevant documents have rank value 1 and non-relevant documents rank value 0.\nWe further assume that all predicted rankings are complete rankings (no ties).\nLet p = rank (y) and p\u02c6 = rank (\u02c6y).\nThe average precision score is defined as AvgPrec (\u02c6p) = (1 \/ rel) \u03a3j: pj = 1 Prec@j, where rel = | {i: pi = 1} | is the number of relevant documents, and Prec@j is the percentage of relevant documents in the top j documents in predicted ranking \u02c6y.\nMAP is the mean of the average precision scores of a group of queries.\n2.1 MAP vs ROCArea\nMost learning algorithms optimize for accuracy or ROCArea.\nWhile optimizing for these measures might achieve good MAP performance, we use two simple examples to show it can also be suboptimal in terms of MAP.\nROCArea assigns equal penalty to each misordering of a relevant\/non-relevant pair.\nIn contrast, MAP assigns greater penalties to misorderings higher up in the predicted ranking.\nUsing our notation, ROCArea can be defined as ROCArea (\u02c6p, p) = (1 \/ (rel \u00b7 (| C | \u2212 rel))) \u03a3i: pi = 1 \u03a3j: pj = 0 1 [\u02c6pi > \u02c6pj], where p is the true (weak) ranking, p\u02c6 is the predicted ranking, and 1 [b] is the indicator function conditioned on b.\nTable 1: Toy Example and Models\nSuppose we have a hypothesis space with only two hypothesis functions, h1 and h2, as shown in Table 1.\nThese two hypotheses predict a ranking for query x over a corpus of eight documents.\nTable 2: Performance of Toy Models\nTable 2 shows the MAP and ROCArea scores of h1 and h2.\nHere, a learning method which optimizes for ROCArea would choose h2 since that results in a higher ROCArea score, but this yields a suboptimal MAP score.\n2.2 MAP vs Accuracy\nUsing a very similar example, we now demonstrate how optimizing for accuracy might result in suboptimal MAP.\nModels which optimize for accuracy are not directly concerned with the ranking.\nInstead, they learn a threshold such that documents scoring higher than the threshold can be classified as relevant and documents scoring lower as nonrelevant.\nTable 4: Performance of Toy Models\nTable 4 shows the MAP and best accuracy scores of h1 (q) and h2 (q).\nThe best accuracy refers to the highest achievable accuracy on that ranking when considering all possible thresholds.\nFor instance, with h1 (q), a threshold between documents 1 and 2 gives 4 errors (documents 6-9 incorrectly classified as non-relevant), yielding an accuracy of 0.64.\nSimilarly, with h2 (q), a threshold between documents 5 and 6 gives 3 errors (documents 10-11 incorrectly classified as relevant, and document 1 as non-relevant), yielding an accuracy of 0.73.\nA learning method which optimizes for accuracy would choose h2 since that results in a higher accuracy score, but this yields a suboptimal MAP score.\n3.\nOPTIMIZING AVERAGE PRECISION\nWe build upon the approach used by [13] for optimizing ROCArea.\nUnlike ROCArea, however, MAP does not decompose linearly in the examples and requires a 
substantially extended algorithm, which we describe in this section.\nRecall that the true ranking is a weak ranking with two rank values (relevant and non-relevant).\nLet Cx and C \u00af x denote the set of relevant and non-relevant documents of C for query x, respectively.\nWe focus on functions which are parametrized by a weight vector w, and thus wish to find w to minimize the empirical risk, R\u0394S (w) \u2261 R\u0394S (h (\u00b7; w)).\nOur approach is to learn a discriminant function F: X \u00d7 Y \u2192 R.\n[Algorithm 1 (cutting plane training), final steps: 8: if H (\u02c6y; w) > \u03bei + \u03b5 then 9: Wi \u2190 Wi \u222a {\u02c6y} 10: w \u2190 optimize (3) over W = \u222ai Wi 11: end if 12: end for 13: until no Wi has changed during iteration]\nC is a parameter that controls this tradeoff and can be tuned to achieve good performance in different training tasks.\nFor each (xi, yi) in the training set, a set of constraints of the form in equation (4) is added to the optimization problem.\nNote that wT \u03a8 (x, y) is exactly our discriminant function F (x, y; w) (see equation (2)).\nDuring prediction, our model chooses the ranking which maximizes the discriminant (1).\nIf the discriminant value for an incorrect ranking y is greater than for the true ranking yi (e.g., F (xi, y; w) > F (xi, yi; w)), then the corresponding slack variable, \u03bei, must be at least \u0394 (yi, y) for that constraint to be satisfied.\nTherefore, the sum of slacks, \u03a3i \u03bei, upper bounds the MAP loss.\nThis is stated formally in Proposition 1.\nPROPOSITION 1.\nLet \u03be * (w) be the optimal solution of the slack variables for OP 1 for a given weight vector w. Then R\u0394S (w) \u2264 (1 \/ n) \u03a3i \u03bei * (w).\n(see [19] for proof) Proposition 1 shows that OP 1 learns a ranking function that optimizes an upper bound on MAP error on the training set.\nUnfortunately there is a problem: a constraint is required for every possible wrong output y, and the number of possible wrong outputs is exponential in the size of C. 
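As a concrete reference for the loss being bounded, \u0394map can be computed directly from a list of relevance labels taken in predicted rank order; a minimal Python sketch (function names are ours, not from the paper or its software package):

```python
def average_precision(ranked_relevance):
    """AvgPrec = (1/rel) * sum of Prec@j over ranks j holding a relevant document."""
    rel = sum(ranked_relevance)
    hits, ap = 0, 0.0
    for j, is_rel in enumerate(ranked_relevance, start=1):
        if is_rel:
            hits += 1
            ap += hits / j  # Prec@j = (#relevant in top j) / j
    return ap / rel if rel else 0.0


def map_loss(ranked_relevance):
    # Loss of one query: Delta_map(y, y-hat) = 1 - AvgPrec of the predicted ranking
    return 1.0 - average_precision(ranked_relevance)
```

For a perfect ranking the loss is 0; every relevant document pushed below a non-relevant one increases it, with mistakes near the top of the ranking penalized most, which is exactly the asymmetry that distinguishes MAP from ROCArea.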
Fortunately, we may employ Algorithm 1 to solve OP 1.\nAlgorithm 1 is a cutting plane algorithm, iteratively introducing constraints until we have solved the original problem within a desired tolerance \u03b5 [19].\nThe algorithm starts with no constraints, and iteratively finds for each example (xi, yi) the output y\u02c6 associated with the most violated constraint.\nIf the corresponding constraint is violated by more than \u03b5, we introduce y\u02c6 into the working set Wi of active constraints for example i, and re-solve (3) using the updated W.\nIt can be shown that Algorithm 1's outer loop is guaranteed to halt within a polynomial number of iterations for any desired precision \u03b5.\nTHEOREM 1.\nLet R \u00af = maxi maxy \u2016\u03a8 (xi, yi) \u2212 \u03a8 (xi, y)\u2016, \u0394 \u00af = maxi maxy \u0394 (yi, y), and for any \u03b5 > 0, Algorithm 1 terminates after adding at most max {2n\u0394\u00af \/ \u03b5, 8C\u0394\u00af R\u00af 2 \/ \u03b52} constraints to the working set W. (see [19] for proof) However, within the inner loop of this algorithm we have to compute argmaxy \u2208 Y H (y; w), where H (y; w) = \u0394 (yi, y) + wT \u03a8 (xi, y) \u2212 wT \u03a8 (xi, yi), or equivalently, argmaxy \u2208 Y \u0394 (yi, y) + wT \u03a8 (xi, y), since wT \u03a8 (xi, yi) is constant with respect to y. 
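On a toy corpus, the most violated constraint, argmax over y of \u0394(yi, y) + wT\u03a8(xi, y), can be found by brute force over all orderings, which makes the quantity H concrete. The sketch below uses a simplified pairwise \u03a8 and our own helper names; it is an illustration only, not the paper's efficient algorithm:

```python
from itertools import permutations


def avg_prec(order, relevant):
    # Average precision of an ordering (tuple of doc ids) given the relevant set
    hits, ap = 0, 0.0
    for j, d in enumerate(order, start=1):
        if d in relevant:
            hits += 1
            ap += hits / j
    return ap / len(relevant)


def discriminant(order, scores, relevant):
    # w^T Psi(x, y): normalized sum over relevant/non-relevant pairs of
    # y_ab * (score_a - score_b), with y_ab = +1 if a is ranked above b, else -1
    nonrel = [d for d in scores if d not in relevant]
    pos = {d: i for i, d in enumerate(order)}
    total = 0.0
    for a in relevant:
        for b in nonrel:
            y_ab = 1 if pos[a] < pos[b] else -1
            total += y_ab * (scores[a] - scores[b])
    return total / (len(relevant) * len(nonrel))


def most_violated(scores, relevant):
    # argmax_y H(y; w); the constant w^T Psi(xi, yi) term is dropped
    return max(permutations(scores),
               key=lambda order: (1 - avg_prec(order, relevant))
                                 + discriminant(order, scores, relevant))
```

With a handful of documents this enumerates every ranking; the point of Section 3.2 is that sorting the relevant and non-relevant documents and interleaving the two sorted lists finds the same maximizer without enumeration.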
Though closely related to the classification procedure, this has the substantial complication that we must contend with the additional \u0394 (yi, y) term.\nWithout the ability to efficiently find the most violated constraint (i.e., solve argmaxy \u2208 Y H (y, w)), the constraint generation procedure is not tractable.\n3.2 Finding the Most Violated Constraint\nUsing OP 1 and optimizing for ROCArea loss (\u0394roc), the problem of finding the most violated constraint, or solving argmaxy \u2208 Y H (y, w) (henceforth argmax H), is addressed in [13].\nSolving argmax H for \u0394map is more difficult.\nThis is primarily because ROCArea decomposes nicely into a sum of scores computed independently on each relative ordering of a relevant\/non-relevant document pair.\nMAP, on the other hand, does not decompose in the same way as ROCArea.\nThe main algorithmic contribution of this paper is an efficient method for solving argmax H for \u0394map.\nOne useful property of \u0394map is that it is invariant to swapping two documents with equal relevance.\nFor example, if documents da and db are both relevant, then swapping the positions of da and db in any ranking does not affect \u0394map.\nBy extension, \u0394map is invariant to any arbitrary permutation of the relevant documents amongst themselves and of the non-relevant documents amongst themselves.\nHowever, this reshuffling will affect the discriminant score, wT \u03a8 (x, y).\nThis leads us to Observation 1.\nOBSERVATION 1.\nConsider rankings which are constrained by fixing the relevance at each position in the ranking (e.g., the 3rd document in the ranking must be relevant).\nEvery ranking which satisfies the same set of constraints will have the same \u0394map.\nIf the relevant documents are sorted by wT \u03c6 (x, d) in descending order, and the non-relevant documents are likewise sorted by wT \u03c6 (x, d), then the interleaving of the two sorted lists which satisfies the constraints will maximize H for that 
constrained set of rankings.\nObservation 1 implies that in the ranking which maximizes H, the relevant documents will be sorted by wT \u03c6 (x, d), and the non-relevant documents will also be sorted likewise.\nBy first sorting the relevant and non-relevant documents, the problem is simplified to finding the optimal interleaving of two sorted lists.\nFor the rest of our discussion, we assume that the relevant documents and non-relevant documents are both sorted by descending wT \u03c6 (x, d).\nFor convenience, we also refer to relevant documents as {dx1,..., dx | Cx |} = Cx, and non-relevant documents as {d \u00af x1,..., d \u00af x | C \u00af x |} = C \u00af x.\nWe define \u03b4j (i1, i2), with i1 < i2, ...\n\u03c6 (x, d) = (1 [f (d | x) > k]: \u2200f \u2208 F, \u2200k \u2208 Kf), where f (d | x) denotes the score that retrieval function f assigns to document d for query x, and each Kf is a set of real values.\nFrom a high level, we are expressing the score of each retrieval function using | Kf | + 1 bins.\nSince we are using linear kernels, one can think of the learning problem as finding a good piecewise-constant combination of the scores of the retrieval functions.\nFigure 2 shows an example of our feature mapping method.\nIn this example we have a single feature F = {f}.\nHere, Kf = {a, b, c}, and the weight vector is w = (wa, wb, wc).\nFor any document d and query x, we have wT \u03c6 (x, d) = wa 1 [f (d | x) > a] + wb 1 [f (d | x) > b] + wc 1 [f (d | x) > c].\nThis is expressed qualitatively in Figure 2, where wa and wb are positive, and wc is negative.\nWe ran our main experiments using four choices of F: the set of aforementioned Indri retrieval functions for TREC 9 and TREC 10, and the Web Track submissions for TREC 9 and TREC 10.\nFor each F and each function f \u2208 F, we chose 50 values for Kf which are reasonably spaced and capture the sensitive region of f. 
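The threshold binning in this feature mapping is straightforward to express in code; a small sketch with illustrative names (the actual experiments used 50 thresholds per retrieval function):

```python
def binned_features(scores, thresholds):
    """phi(x, d): one indicator feature 1[f(d|x) > k] for each retrieval
    function f and each threshold k in K_f (functions taken in sorted order).

    scores:     dict mapping function name -> raw score f(d|x)
    thresholds: dict mapping function name -> list of thresholds K_f
    """
    return [1 if scores[f] > k else 0
            for f in sorted(thresholds)
            for k in thresholds[f]]
```

A linear model over these indicators is a piecewise-constant function of each raw score, which is what lets the learner weight different score regions of each retrieval function independently.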
Using the four choices of F, we generated four datasets for our main experiments.\nTable 5 contains statistics of the generated datasets.\nThere are many ways to generate features, and we are not advocating our method over others.\nThis was simply an efficient means to normalize the outputs of different functions and allow for a more expressive model.\n5.\nEXPERIMENTS\nFor each dataset in Table 5, we performed 50 trials.\nFor each trial, we train on 10 randomly selected queries, and select another 5 queries at random for a validation set.\nModels were trained using a wide range of C values.\nThe model which performed best on the validation set was selected and tested on the remaining 35 queries.\nAll queries were selected to be in the training, validation and test sets the same number of times.\nUsing this setup, we performed the same experiments while using our method (SVM\u0394map), an SVM optimizing for ROCArea (SVM\u0394 roc) [13], and a conventional classification SVM (SVMacc) [20].\nAll SVM methods used a linear kernel.\nWe reported the average performance of all models over the 50 trials.\n5.1 Comparison with Base Functions\nIn analyzing our results, the first question to answer is, can SVM\u0394map learn a model which outperforms the best base\nTable 7: Comparison with TREC Submissions\nTable 8: Comparison with TREC Subm.\n(w\/o best)\nfunctions?\nTable 6 presents the comparison of SVM\u0394 map with the best Indri base functions.\nEach column group contains the macro-averaged MAP performance of SVM\u0394map or a base function.\nThe W\/L columns show the number of queries where SVM\u0394 map achieved a higher MAP score.\nSignificance tests were performed using the two-tailed Wilcoxon signed rank test.\nTwo stars indicate a significance level of 0.95.\nAll tables displaying our experimental results are structured identically.\nHere, we find that SVM\u0394 map significantly outperforms the best base functions.\nTable 7 shows the comparison when trained on 
TREC submissions.\nWhile achieving a higher MAP score than the best base functions, the performance difference between SVM\u0394map and the base functions is not significant.\nGiven that many of these submissions use scoring functions which are carefully crafted to achieve high MAP, it is possible that the best performing submissions use techniques which subsume the techniques of the other submissions.\nAs a result, SVM\u0394map would not be able to learn a hypothesis which can significantly outperform the best submission.\nHence, we ran the same experiments using a modified dataset where the features computed using the best submission were removed.\nTable 8 shows the results (note that we are still comparing against the best submission though we are not using it for training).\nNotice that while the performance of SVM\u0394 map degraded slightly, the performance was still comparable with that of the best submission.\n5.2 Comparison w \/ Previous SVM Methods\nThe next question to answer is, does SVM\u0394map produce higher MAP scores than previous SVM methods?\nTables 9 and 10 present the results of SVM\u0394 map, SVM\u0394 roc, and SVMacc when trained on the Indri retrieval functions and TREC submissions, respectively.\nTable 11 contains the corresponding results when trained on the TREC submissions without the best submission.\nTo start with, our results indicate that SVMacc was not competitive with SVM\u0394map and SVM\u0394 roc, and at times underperformed dramatically.\nAs such, we tried several approaches to improve the performance of SVMacc.\n5.2.1 Alternate SVMacc Methods\nOne issue which may cause SVMacc to underperform is the severe imbalance between relevant and non-relevant documents.\nTable 9: Trained on Indri Functions\nTable 10: Trained on TREC Submissions\nThe vast majority of the documents are not relevant.\nSVMacc2 addresses this problem by assigning more penalty to false negative errors.\nFor each dataset, the ratio of the false negative to false 
positive penalties is equal to the ratio of the number of non-relevant to relevant documents in that dataset.\nTables 9, 10 and 11 indicate that SVMacc2 still performs significantly worse than SVM\u0394 map.\nAnother possible issue is that SVMacc attempts to find just one discriminating threshold b that is query-invariant.\nIt may be that different queries require different values of b. Having the learning method try to find a good b value (when one does not exist) may be detrimental.\nWe took two approaches to address this issue.\nThe first method, SVMacc3, converts the retrieval function scores into percentiles.\nFor example, for document d, query q and retrieval function f, if the score f (d | q) is in the top 90% of the scores f (\u00b7 | q) for query q, then the converted score is f' (d | q) = 0.9.\nEach Kf contains 50 evenly spaced values between 0 and 1.\nTables 9, 10 and 11 show that the performance of SVMacc3 was also not competitive with SVM\u0394map.\nThe second method, SVMacc4, normalizes the scores given by f for each query.\nFor example, assume for query q that f outputs scores in the range 0.2 to 0.7.\nThen for document d, if f (d | q) = 0.6, the converted score would be f' (d | q) = (0.6 - 0.2) \/ (0.7 - 0.2) = 0.8.\nEach Kf contains 50 evenly spaced values between 0 and 1.\nAgain, Tables 9, 10 and 11 show that SVMacc4 was not competitive with SVM\u0394map.\n5.2.2 MAP vs ROCArea\nSVM\u0394 roc performed much better than SVMacc in our experiments.\nWhen trained on Indri retrieval functions (see Table 9), the performance of SVM\u0394 roc was slightly, though not significantly, worse than the performance of SVM\u0394map.\nHowever, Table 10 shows that SVM\u0394 map did significantly outperform SVM\u0394 roc when trained on the TREC submissions.\nTable 11 shows the performance of the models when trained on the TREC submissions with the best submission removed.\nThe performance of most models degraded by a small amount, with SVM\u0394 map still having the best 
performance.\nTable 11: Trained on TREC Subm.\n(w\/o Best)\n6.\nCONCLUSIONS AND FUTURE WORK\nWe have presented an SVM method that directly optimizes MAP.\nIt provides a principled approach and avoids difficult-to-control heuristics.\nWe formulated the optimization problem and presented an algorithm which provably finds the solution in polynomial time.\nWe have shown empirically that our method is generally superior to or competitive with conventional SVM methods.\nOur new method makes it conceptually just as easy to optimize SVMs for MAP as was previously possible only for Accuracy and ROCArea.\nThe computational cost for training is very reasonable in practice.\nSince other methods typically require tuning multiple heuristics, we also expect to train fewer models before finding one which achieves good performance.\nThe learning framework used by our method is fairly general.\nA natural extension of this framework would be to develop methods to optimize for other important IR measures, such as Normalized Discounted Cumulative Gain [2, 3, 4, 12] and Mean Reciprocal Rank.","keyphrases":["machin learn","rank retriev system","rank","learn techniqu","mean averag precis","optim solut","map relax","inform retriev system","probabl","surrog measur","loss function","supervis learn","machin learn for inform retriev","support vector machin"],"prmu":["P","P","P","P","P","P","R","M","U","U","U","M","M","R"]} {"id":"J-2","title":"Worst-Case Optimal Redistribution of VCG Payments","abstract":"For allocation problems with one or more items, the well-known Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategy-proof, individually rational, and does not incur a deficit. However, the VCG mechanism is not (strongly) budget balanced: generally, the agents' payments will sum to more than 0. If there is an auctioneer who is selling the items, this may be desirable, because the surplus payment corresponds to revenue for the auctioneer. 
However, if the items do not have an owner and the agents are merely interested in allocating the items efficiently among themselves, any surplus payment is undesirable, because it will have to flow out of the system of agents. In 2006, Cavallo [3] proposed a mechanism that redistributes some of the VCG payment back to the agents, while maintaining efficiency, strategy-proofness, individual rationality, and the non-deficit property. In this paper, we extend this result in a restricted setting. We study allocation settings where there are multiple indistinguishable units of a single good, and agents have unit demand. (For this specific setting, Cavallo's mechanism coincides with a mechanism proposed by Bailey in 1997 [2].) Here we propose a family of mechanisms that redistribute some of the VCG payment back to the agents. All mechanisms in the family are efficient, strategy-proof, individually rational, and never incur a deficit. The family includes the Bailey-Cavallo mechanism as a special case. We then provide an optimization model for finding the optimal mechanism -- that is, the mechanism that maximizes redistribution in the worst case -- inside the family, and show how to cast this model as a linear program. We give both numerical and analytical solutions of this linear program, and the (unique) resulting mechanism shows significant improvement over the Bailey-Cavallo mechanism (in the worst case). 
Finally, we prove that the obtained mechanism is optimal among all anonymous deterministic mechanisms that satisfy the above properties.","lvl-1":"Worst-Case Optimal Redistribution of VCG Payments Mingyu Guo Duke University Department of Computer Science Durham, NC, USA mingyu@cs.duke.edu Vincent Conitzer Duke University Department of Computer Science Durham, NC, USA conitzer@cs.duke.edu ABSTRACT For allocation problems with one or more items, the wellknown Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategy-proof, individually rational, and does not incur a deficit.\nHowever, the VCG mechanism is not (strongly) budget balanced: generally, the agents'' payments will sum to more than 0.\nIf there is an auctioneer who is selling the items, this may be desirable, because the surplus payment corresponds to revenue for the auctioneer.\nHowever, if the items do not have an owner and the agents are merely interested in allocating the items efficiently among themselves, any surplus payment is undesirable, because it will have to flow out of the system of agents.\nIn 2006, Cavallo [3] proposed a mechanism that redistributes some of the VCG payment back to the agents, while maintaining efficiency, strategy-proofness, individual rationality, and the non-deficit property.\nIn this paper, we extend this result in a restricted setting.\nWe study allocation settings where there are multiple indistinguishable units of a single good, and agents have unit demand.\n(For this specific setting, Cavallo``s mechanism coincides with a mechanism proposed by Bailey in 1997 [2].)\nHere we propose a family of mechanisms that redistribute some of the VCG payment back to the agents.\nAll mechanisms in the family are efficient, strategyproof, individually rational, and never incur a deficit.\nThe family includes the Bailey-Cavallo mechanism as a special case.\nWe then provide an optimization model for finding the optimal mechanism-that is, the mechanism that maximizes redistribution in 
the worst case-inside the family, and show how to cast this model as a linear program.\nWe give both numerical and analytical solutions of this linear program, and the (unique) resulting mechanism shows significant improvement over the Bailey-Cavallo mechanism (in the worst case).\nFinally, we prove that the obtained mechanism is optimal among all anonymous deterministic mechanisms that satisfy the above properties.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics; I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION Many important problems in computer science and electronic commerce can be modeled as resource allocation problems.\nIn such problems, we want to allocate the resources (or items) to the agents that value them the most.\nUnfortunately, agents'' valuations are private knowledge, and self-interested agents will lie about their valuations if this is to their benefit.\nOne solution is to auction off the items, possibly in a combinatorial auction where agents can bid on bundles of items.\nThere exist ways of determining the payments that the agents make in such an auction that incentivizes the agents to report their true valuations-that is, the payments make the auction strategy-proof.\nOne very general way of doing so is to use the VCG mechanism [23, 4, 12].\n(The VCG mechanism is also known as the Clarke mechanism or, in the specific context of auctions, the Generalized Vickrey Auction.)\nBesides strategy-proofness, the VCG mechanism has several other nice properties in the context of resource allocation problems.\nIt is efficient: the chosen allocation always maximizes the sum of the agents'' valuations.\nIt is also (expost) individually rational: participating in the mechanism never makes an agent worse off than not participating.\nFinally, it has a no-deficit property: the sum of the agents'' payments is always 
nonnegative.\nIn many settings, another property that would be desirable is (strong) budget balance, meaning that the payments sum to exactly 0.\nSuppose the agents are trying to distribute some resources among themselves that do not have a previous owner.\nFor example, the agents may be trying to allocate the right to use a shared good on a given day.\nOr, the agents may be trying to allocate a resource that they have collectively constructed, discovered, or otherwise obtained.\nIf the agents use an auction to allocate these resources, and the sum of the agents' payments in the auction is positive, then this surplus payment must leave the system 
amount of the total VCG payment while maintaining all of the other desirable properties of the VCG mechanism.\nFor example, in a single-item auction (where the VCG mechanism coincides with the second-price sealed-bid auction), the amount redistributed to bidder i by Cavallo``s mechanism is 1\/n times the second-highest bid among bids other than i``s bid.\nThe total redistributed is at most the second-highest bid overall, and the redistribution to agent i does not affect i``s incentives because it does not depend on i``s own bid.\nIn this paper, we restrict our attention to a limited setting, and in this setting we extend Cavallo``s result.\nWe study allocation settings where there are multiple indistinguishable units of a single good, and all agents have unit demand, i.e. they want only a single unit.\nFor this specific setting, Cavallo``s mechanism coincides with a mechanism proposed by Bailey in 1997 [2].\nHere we propose the family of linear VCG redistribution mechanisms.\nAll mechanisms in this family are efficient, strategy-proof, individually rational, and never incur a deficit.\nThe family includes the Bailey-Cavallo mechanism as a special case (with the caveat that we only study allocation settings with multiple indistinguishable units of a single good and unit demand, while Bailey``s and Cavallo``s mechanisms can be applied outside these settings as well).\nWe then provide an optimization model for finding the optimal mechanism inside the family, based on worst-case analysis.\nBoth numerical and analytical solutions of this model are provided, and the resulting mechanism shows significant improvement over the BaileyCavallo mechanism (in the worst case).\nFor example, for the problem of allocating a single unit, when the number of agents is 10, our mechanism always redistributes more than 98% of the total VCG payment back to the agents (whereas the Bailey-Cavallo mechanism redistributes only 80% in the worst case).\nFinally, we prove that our mechanism is 
in fact optimal among all anonymous deterministic mechanisms (even nonlinear ones) that satisfy the desirable properties. Around the same time, the same mechanism has been independently derived by Moulin [19].¹ Moulin actually pursues a different objective (also based on worst-case analysis): whereas our objective is to maximize the percentage of VCG payments that are redistributed, Moulin tries to minimize the overall payments from agents as a percentage of efficiency. It turns out that the resulting mechanisms are the same. Towards the end of this paper, we consider dropping the individual rationality requirement, and show that this does not change the optimal mechanism for our objective. For Moulin's objective, dropping individual rationality does change the optimal mechanism (but only if there are multiple units).

2. PROBLEM DESCRIPTION

Let n denote the number of agents, and let m denote the number of units. We only consider the case where m < n (otherwise the problem becomes trivial). We also assume that m and n are always known. (This assumption is not harmful: in environments where anyone can join the auction, running a redistribution mechanism is typically not a good idea anyway, because everyone would want to join to collect part of the redistribution.) Let the set of agents be {a_1, a_2, ..., a_n}, where a_i is the agent with the ith highest reported value v̂_i; that is, we have v̂_1 ≥ v̂_2 ≥ ... ≥ v̂_n ≥ 0. Let v_i denote the true value of a_i. Given that the mechanism is strategy-proof, we can assume v_i = v̂_i. Under the VCG mechanism, each agent among a_1, ...
, a_m wins a unit, and pays v̂_{m+1} for this unit. Thus, the total VCG payment equals m·v̂_{m+1}. When m = 1, this is the second-price or Vickrey auction. We modify the mechanism as follows. After running the original VCG mechanism, the center returns to each agent a_i some amount z_i, agent a_i's redistribution payment. We do not allow z_i to depend on v̂_i; because of this, a_i's incentives are unaffected by this redistribution payment, and the mechanism remains strategy-proof.

3. LINEAR VCG REDISTRIBUTION MECHANISMS

We are now ready to introduce the family of linear VCG redistribution mechanisms. Such a mechanism is defined by a vector of constants c_0, c_1, ..., c_{n−1}. The amount that the mechanism returns to agent a_i is

z_i = c_0 + c_1 v̂_1 + c_2 v̂_2 + ... + c_{i−1} v̂_{i−1} + c_i v̂_{i+1} + ... + c_{n−1} v̂_n.

That is, an agent receives c_0, plus c_1 times the highest bid other than the agent's own bid, plus c_2 times the second-highest other bid, etc. The mechanism is strategy-proof, because for all i, z_i is independent of v̂_i. Also, the mechanism is anonymous. It is helpful to see the entire list of redistribution payments:

z_1 = c_0 + c_1 v̂_2 + c_2 v̂_3 + c_3 v̂_4 + ... + c_{n−2} v̂_{n−1} + c_{n−1} v̂_n
z_2 = c_0 + c_1 v̂_1 + c_2 v̂_3 + c_3 v̂_4 + ... + c_{n−2} v̂_{n−1} + c_{n−1} v̂_n
z_3 = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_4 + ... + c_{n−2} v̂_{n−1} + c_{n−1} v̂_n
z_4 = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_3 + ... + c_{n−2} v̂_{n−1} + c_{n−1} v̂_n
...
z_i = c_0 + c_1 v̂_1 + c_2 v̂_2 + ... + c_{i−1} v̂_{i−1} + c_i v̂_{i+1} + ... + c_{n−1} v̂_n
...
z_{n−2} = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_3 + ... + c_{n−2} v̂_{n−1} + c_{n−1} v̂_n
z_{n−1} = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_3 + ... + c_{n−2} v̂_{n−2} + c_{n−1} v̂_n
z_n = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_3 + ... + c_{n−2} v̂_{n−2} + c_{n−1} v̂_{n−1}

¹ We thank Rakesh Vohra for pointing us to Moulin's working paper.

Not all choices of the constants c_0, ..., c_{n−1} produce a mechanism that is individually rational, and not all choices of the constants produce a mechanism that never incurs a deficit. Hence, to obtain these properties, we need to place some constraints on the constants. To satisfy the individual rationality criterion, each agent's utility should always be non-negative. An agent that does not win a unit obtains a utility that is equal to the agent's redistribution payment. An agent that wins a unit obtains a utility that is equal to the agent's valuation for the unit, minus the VCG payment v̂_{m+1}, plus the agent's redistribution payment. Consider agent a_n, the agent with the lowest bid. Since this agent does not win an item (m < n), her utility is just her redistribution payment z_n. Hence, for the mechanism to be individually rational, the c_i must be such that z_n is always nonnegative. If the c_i have this property, then it actually follows that z_i is nonnegative for every i, for the following reason. Suppose there exists some i < n and some vector of bids v̂_1 ≥ v̂_2 ≥ ...
≥ v̂_n ≥ 0 such that z_i < 0. Then, consider the bid vector that results from replacing v̂_j by v̂_{j+1} for all j ≥ i, and letting v̂_n = 0. Omitting v̂_n from this new vector yields the same vector as omitting v̂_i from the original vector. Therefore, a_n's redistribution payment under the new vector is the same as a_i's redistribution payment under the old vector; but this payment is negative, contradicting the assumption that z_n is always nonnegative. If all redistribution payments are always nonnegative, then the mechanism must be individually rational (because the VCG mechanism is individually rational, and the redistribution payment only increases an agent's utility). Therefore, the mechanism is individually rational if and only if for any bid vector, z_n ≥ 0. To satisfy the non-deficit criterion, the sum of the redistribution payments should be less than or equal to the total VCG payment. So for any bid vector v̂_1 ≥ v̂_2 ≥ ... ≥ v̂_n ≥ 0, the constants c_i should make z_1 + z_2 + ... + z_n ≤ m·v̂_{m+1}. We define the family of linear VCG redistribution mechanisms to be the set of all redistribution mechanisms corresponding to constants c_i that satisfy the above constraints (so that the mechanisms will be individually rational and have the no-deficit property). We now give two examples of mechanisms in this family.

Example 1 (Bailey-Cavallo mechanism): Consider the mechanism corresponding to c_{m+1} = m/n and c_i = 0 for all other i. Under this mechanism, each agent receives a redistribution payment of m/n times the (m+1)th highest bid from another agent. Hence, a_1, ...
, a_{m+1} receive a redistribution payment of (m/n)·v̂_{m+2}, and the others receive (m/n)·v̂_{m+1}. Thus, the total redistribution payment is (m+1)(m/n)·v̂_{m+2} + (n−m−1)(m/n)·v̂_{m+1}. This redistribution mechanism is individually rational, because all the redistribution payments are nonnegative, and never incurs a deficit, because (m+1)(m/n)·v̂_{m+2} + (n−m−1)(m/n)·v̂_{m+1} ≤ n(m/n)·v̂_{m+1} = m·v̂_{m+1}. (We note that for this mechanism to make sense, we need n ≥ m + 2.)

Example 2: Consider the mechanism corresponding to c_{m+1} = m/(n−m−1), c_{m+2} = −m(m+1)/((n−m−1)(n−m−2)), and c_i = 0 for all other i. In this mechanism, each agent receives a redistribution payment of m/(n−m−1) times the (m+1)th highest reported value from other agents, minus m(m+1)/((n−m−1)(n−m−2)) times the (m+2)th highest reported value from other agents. Thus, the total redistribution payment is m·v̂_{m+1} − [m(m+1)(m+2)/((n−m−1)(n−m−2))]·v̂_{m+3}. If n ≥ 2m+3 (which is equivalent to m/(n−m−1) ≥ m(m+1)/((n−m−1)(n−m−2))), then each agent always receives a nonnegative redistribution payment, thus the mechanism is individually rational. Also, the mechanism never incurs a deficit, because the total VCG payment is m·v̂_{m+1}, which is greater than the amount m·v̂_{m+1} − [m(m+1)(m+2)/((n−m−1)(n−m−2))]·v̂_{m+3} that is redistributed.

Which of these two mechanisms is better? Is there another mechanism that is even better? This is what we study in the next section.

4. OPTIMAL REDISTRIBUTION MECHANISMS

Among all linear VCG redistribution mechanisms, we would like to be able to identify the one that redistributes the greatest percentage of the total VCG payment.² This is not a well-defined notion: it may be that one mechanism redistributes more on some bid vectors, and another more on other bid vectors. We
emphasize that we do not assume that a prior distribution over bidders' valuations is available, so we cannot compare them based on expected redistribution. Below, we study three well-defined ways of comparing redistribution mechanisms: best-case performance, dominance, and worst-case performance.

Best-case performance. One way of evaluating a mechanism is by considering the highest redistribution percentage that it achieves. Consider the previous two examples. For the first example, the total redistribution payment is (m+1)(m/n)·v̂_{m+2} + (n−m−1)(m/n)·v̂_{m+1}. When v̂_{m+2} = v̂_{m+1}, this is equal to the total VCG payment m·v̂_{m+1}. Thus, this mechanism redistributes 100% of the total VCG payment in the best case. For the second example, the total redistribution payment is m·v̂_{m+1} − [m(m+1)(m+2)/((n−m−1)(n−m−2))]·v̂_{m+3}. When v̂_{m+3} = 0, this is equal to the total VCG payment m·v̂_{m+1}. Thus, this mechanism also redistributes 100% of the total VCG payment in the best case. Moreover, there are actually infinitely many mechanisms that redistribute 100% of the total VCG payment in the best case; for example, any convex combination of the above two will redistribute 100% if both v̂_{m+2} = v̂_{m+1} and v̂_{m+3} = 0.

Dominance. Inside the family of linear VCG redistribution mechanisms, we say one mechanism dominates another mechanism if the first one redistributes at least as much as the other for any bid vector. For the previous two examples, neither dominates the other, because they each redistribute 100% in different cases. It turns out that there is no mechanism in the family that dominates all other mechanisms in the family. For suppose such a mechanism exists. Then, it should dominate both examples above. Consider the remaining VCG payment (the VCG payment that fails to be redistributed). The remaining VCG payment of the dominant mechanism should be 0 whenever v̂_{m+2} =
v̂_{m+1} or v̂_{m+3} = 0. Now, the remaining VCG payment is a linear function of the v̂_i (the redistribution is linear), and therefore also a polynomial function. The above implies that this function can be written as (v̂_{m+2} − v̂_{m+1})(v̂_{m+3})·P(v̂_1, v̂_2, ..., v̂_n), where P is a polynomial function. But since the function must be linear (it has degree at most 1), it follows that P = 0. Thus, a dominant mechanism would always redistribute all of the VCG payment, which is not possible. (If it were possible, then our worst-case optimal redistribution mechanism would also always redistribute all of the VCG payment, and we will see later that it does not.)

² The percentage redistributed seems the natural criterion to use, among other things because it is scale-invariant: if we multiply all bids by the same positive constant (for example, if we change the units by re-expressing the bids in euros instead of dollars), we would not want the behavior of our mechanism to change.

Worst-case performance. Finally, we can evaluate a mechanism by considering the lowest redistribution percentage that it guarantees. For the first example, the total redistribution payment is (m+1)(m/n)·v̂_{m+2} + (n−m−1)(m/n)·v̂_{m+1}, which is greater than or equal to (n−m−1)(m/n)·v̂_{m+1}. So in the worst case, which is when v̂_{m+2} = 0, the percentage redistributed is (n−m−1)/n. For the second example, the total redistribution payment is m·v̂_{m+1} − [m(m+1)(m+2)/((n−m−1)(n−m−2))]·v̂_{m+3}, which is greater than or equal to m·v̂_{m+1}·(1 − (m+1)(m+2)/((n−m−1)(n−m−2))). So in the worst case, which is when v̂_{m+3} = v̂_{m+1}, the percentage redistributed is 1 − (m+1)(m+2)/((n−m−1)(n−m−2)). Since we assume that the number of agents n and the number of units m are known, we can determine which example mechanism has better worst-case performance by
comparing the two quantities. When n = 6 and m = 1, for the first example (Bailey-Cavallo mechanism), the percentage redistributed in the worst case is 2/3, and for the second example, this percentage is 1/2, which implies that for this pair of n and m, the first mechanism has better worst-case performance. On the other hand, when n = 12 and m = 1, for the first example, the percentage redistributed in the worst case is 5/6, and for the second example, this percentage is 14/15, which implies that this time the second mechanism has better worst-case performance. Thus, it seems most natural to compare mechanisms by the percentage of total VCG payment that they redistribute in the worst case. This percentage is undefined when the total VCG payment is 0. To deal with this, technically, we define the worst-case redistribution percentage as the largest k so that the total amount redistributed is at least k times the total VCG payment, for all bid vectors. (Hence, as long as the total amount redistributed is at least 0 when the total VCG payment is 0, these cases do not affect the worst-case percentage.) This corresponds to the following optimization problem:

Maximize k (the percentage redistributed in the worst case)
Subject to: for every bid vector v̂_1 ≥ v̂_2 ≥ ... ≥ v̂_n ≥ 0,
  z_n ≥ 0 (individual rationality)
  z_1 + z_2 + ... + z_n ≤ m·v̂_{m+1} (non-deficit)
  z_1 + z_2 + ... + z_n ≥ k·m·v̂_{m+1} (worst-case constraint)

We recall that z_i = c_0 + c_1 v̂_1 + c_2 v̂_2 + ... + c_{i−1} v̂_{i−1} + c_i v̂_{i+1} + ... + c_{n−1} v̂_n.

5. TRANSFORMATION TO LINEAR PROGRAMMING

The optimization problem given in the previous section can be rewritten as a linear program, based on the following observations.

Claim 1. If c_0, c_1, ..., c_{n−1} satisfy both the individual rationality and the non-deficit constraints, then c_i = 0 for i = 0, ..., m.
Proof. First, let us prove that c_0 = 0. Consider the bid vector in which v̂_i = 0 for all i. To obtain individual rationality, we must have c_0 ≥ 0. To satisfy the non-deficit constraint, we must have c_0 ≤ 0. Thus we know c_0 = 0. Now, if c_i = 0 for all i, there is nothing to prove. Otherwise, let j = min{i : c_i ≠ 0}. Assume that j ≤ m. We recall that we can write the individual rationality constraint as follows: z_n = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_3 + ... + c_{n−2} v̂_{n−2} + c_{n−1} v̂_{n−1} ≥ 0 for any bid vector. Let us consider the bid vector in which v̂_i = 1 for i ≤ j and v̂_i = 0 for the rest. In this case z_n = c_j, so we must have c_j ≥ 0. The non-deficit constraint can be written as follows: z_1 + z_2 + ... + z_n ≤ m·v̂_{m+1} for any bid vector. Consider the same bid vector as above. We have z_i = 0 for i ≤ j, because for these bids, the jth highest other bid has value 0, so all the c_i that are nonzero are multiplied by 0. For i > j, we have z_i = c_j, because the jth highest other bid has value 1, and all lower bids have value 0. So the non-deficit constraint tells us that c_j(n − j) ≤ m·v̂_{m+1}. Because j ≤ m, v̂_{m+1} = 0, so the right-hand side is 0. We also have n − j > 0 because j ≤ m < n. So c_j ≤ 0. Because we have already established that c_j ≥ 0, it follows that c_j = 0; but this is contrary to assumption. So j > m.

Incidentally, this claim also shows that if m = n − 1, then c_i = 0 for all i. Thus, we are stuck with the VCG mechanism. From here on, we only consider the case where m < n − 1.

Claim 2. The individual rationality constraint can be written as follows: Σ_{i=m+1}^{j} c_i ≥ 0 for j = m+1, ..., n−1.

Before proving this claim, we introduce the following lemma.

Lemma 1. Given a positive integer k and a set of real constants s_1, s_2, ..., s_k, (s_1 t_1 + s_2 t_2 + ...
+ s_k t_k ≥ 0 for any t_1 ≥ t_2 ≥ ... ≥ t_k ≥ 0) if and only if (Σ_{i=1}^{j} s_i ≥ 0 for j = 1, 2, ..., k).

Proof. Let d_i = t_i − t_{i+1} for i = 1, 2, ..., k−1, and d_k = t_k. Then (s_1 t_1 + s_2 t_2 + ... + s_k t_k ≥ 0 for any t_1 ≥ t_2 ≥ ... ≥ t_k ≥ 0) is equivalent to ((Σ_{i=1}^{1} s_i)·d_1 + (Σ_{i=1}^{2} s_i)·d_2 + ... + (Σ_{i=1}^{k} s_i)·d_k ≥ 0 for any set of arbitrary non-negative d_j). When Σ_{i=1}^{j} s_i ≥ 0 for j = 1, 2, ..., k, the above inequality is obviously true. If Σ_{i=1}^{j} s_i < 0 for some j, then setting d_j > 0 and d_i = 0 for all i ≠ j makes the above inequality false. So Σ_{i=1}^{j} s_i ≥ 0 for j = 1, 2, ..., k is both necessary and sufficient.

We are now ready to present the proof of Claim 2.

Proof. The individual rationality constraint can be written as z_n = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_3 + ... + c_{n−2} v̂_{n−2} + c_{n−1} v̂_{n−1} ≥ 0 for any bid vector v̂_1 ≥ v̂_2 ≥ ... ≥ v̂_{n−1} ≥ v̂_n ≥ 0. We have already shown that c_i = 0 for i ≤ m. Thus, the above can be simplified to z_n = c_{m+1} v̂_{m+1} + c_{m+2} v̂_{m+2} + ... + c_{n−2} v̂_{n−2} + c_{n−1} v̂_{n−1} ≥ 0 for any bid vector. By the above lemma, this is equivalent to Σ_{i=m+1}^{j} c_i ≥ 0 for j = m+1, ..., n−1.

Claim 3. The non-deficit constraint and the worst-case constraint can also be written as linear inequalities involving only the c_i and k.

Proof. The non-deficit constraint requires that for any bid vector, z_1 + z_2 + ... + z_n ≤ m·v̂_{m+1}, where z_i = c_0 + c_1 v̂_1 + c_2 v̂_2 + ... + c_{i−1} v̂_{i−1} + c_i v̂_{i+1} + ... + c_{n−1} v̂_n for i = 1, 2, ..., n. Because c_i = 0 for i ≤ m, we can simplify this inequality to

q_{m+1} v̂_{m+1} + q_{m+2} v̂_{m+2} + ... + q_n v̂_n ≥ 0, where
q_{m+1} = m − (n−m−1)·c_{m+1}
q_i = −(i−1)·c_{i−1} − (n−i)·c_i, for i = m+2, ...
, n−1 (when m+2 > n−1, this set of equalities is empty)
q_n = −(n−1)·c_{n−1}

By the above lemma, this is equivalent to Σ_{i=m+1}^{j} q_i ≥ 0 for j = m+1, ..., n. So, we can simplify further as follows:

q_{m+1} ≥ 0 ⟺ (n−m−1)·c_{m+1} ≤ m
q_{m+1} + ... + q_{m+i} ≥ 0 ⟺ n·Σ_{j=m+1}^{m+i−1} c_j + (n−m−i)·c_{m+i} ≤ m, for i = 2, ..., n−m−1
q_{m+1} + ... + q_n ≥ 0 ⟺ n·Σ_{j=m+1}^{n−1} c_j ≤ m

So, the non-deficit constraint can be written as a set of linear inequalities involving only the c_i. The worst-case constraint can also be written as a set of linear inequalities, by the following reasoning. The worst-case constraint requires that for any bid input, z_1 + z_2 + ... + z_n ≥ k·m·v̂_{m+1}, where z_i = c_0 + c_1 v̂_1 + c_2 v̂_2 + ... + c_{i−1} v̂_{i−1} + c_i v̂_{i+1} + ... + c_{n−1} v̂_n for i = 1, 2, ..., n. Because c_i = 0 for i ≤ m, we can simplify this inequality to

Q_{m+1} v̂_{m+1} + Q_{m+2} v̂_{m+2} + ... + Q_n v̂_n ≥ 0, where
Q_{m+1} = (n−m−1)·c_{m+1} − km
Q_i = (i−1)·c_{i−1} + (n−i)·c_i, for i = m+2, ..., n−1
Q_n = (n−1)·c_{n−1}

By the above lemma, this is equivalent to Σ_{i=m+1}^{j} Q_i ≥ 0 for j = m+1, ..., n. So, we can simplify further as follows:

Q_{m+1} ≥ 0 ⟺ (n−m−1)·c_{m+1} ≥ km
Q_{m+1} + ... + Q_{m+i} ≥ 0 ⟺ n·Σ_{j=m+1}^{m+i−1} c_j + (n−m−i)·c_{m+i} ≥ km, for i = 2, ..., n−m−1
Q_{m+1} + ... + Q_n ≥ 0 ⟺ n·Σ_{j=m+1}^{n−1} c_j ≥ km

So, the worst-case constraint can also be written as a set of linear inequalities involving only the c_i and k. Combining all the claims, we see that the original optimization problem can be transformed into the following linear program.

Variables: c_{m+1}, c_{m+2}, ...
, c_{n−1}, k
Maximize k (the percentage redistributed in the worst case)
Subject to:
  Σ_{i=m+1}^{j} c_i ≥ 0, for j = m+1, ..., n−1
  km ≤ (n−m−1)·c_{m+1} ≤ m
  km ≤ n·Σ_{j=m+1}^{m+i−1} c_j + (n−m−i)·c_{m+i} ≤ m, for i = 2, ..., n−m−1
  km ≤ n·Σ_{j=m+1}^{n−1} c_j ≤ m

6. NUMERICAL RESULTS

For selected values of n and m, we solved the linear program using GLPK (the GNU Linear Programming Kit). In the table below, we present the results for a single unit (m = 1). We present 1−k (the percentage of the total VCG payment that is not redistributed by the worst-case optimal mechanism in the worst case) instead of k in the second column because writing k would require too many significant digits. Correspondingly, the third column displays the percentage of the total VCG payment that is not redistributed by the Bailey-Cavallo mechanism in the worst case (which is equal to 2/n).

n     1 − k        Bailey-Cavallo mechanism
3     66.7%        66.7%
4     42.9%        50.0%
5     26.7%        40.0%
6     16.1%        33.3%
7     9.52%        28.6%
8     5.51%        25.0%
9     3.14%        22.2%
10    1.76%        20.0%
20    3.62e−5      10.0%
30    5.40e−8      6.67e−2
40    7.09e−11     5.00e−2

[Figure 1: A comparison of the worst-case optimal mechanism (WO) and the Bailey-Cavallo mechanism (BC): worst-case redistribution percentage as a function of the number of agents, for 1, 2, 3, and 4 units.]

The worst-case optimal mechanism significantly outperforms the Bailey-Cavallo mechanism in the worst case. Perhaps more surprisingly, the worst-case optimal mechanism sometimes does better in the worst case than the Bailey-Cavallo mechanism does on average, as the following example shows. Recall that the total redistribution payment of the Bailey-Cavallo mechanism is (m+1)(m/n)·v̂_{m+2} + (n−m−1)(m/n)·v̂_{m+1}. For the single-unit case, this simplifies to (2/n)·v̂_3 +
((n−2)/n)·v̂_2. Hence the percentage of the total VCG payment that is not redistributed is (v̂_2 − (2/n)·v̂_3 − ((n−2)/n)·v̂_2) / v̂_2 = 2/n − (2/n)·(v̂_3/v̂_2), which has an expected value of E(2/n − (2/n)·(v̂_3/v̂_2)) = 2/n − (2/n)·E(v̂_3/v̂_2). Suppose the bid values are drawn from a uniform distribution over [0, 1]. The theory of order statistics tells us that the joint probability density function of v̂_2 and v̂_3 is f(v̂_3, v̂_2) = n(n−1)(n−2)·v̂_3^{n−3}·(1 − v̂_2) for v̂_2 ≥ v̂_3. Now, E(v̂_3/v̂_2) = ∫_0^1 ∫_0^{v̂_2} (v̂_3/v̂_2)·f(v̂_3, v̂_2) dv̂_3 dv̂_2 = (n−2)/(n−1). So, the expected value of the remaining percentage is 2/n − (2/n)·(n−2)/(n−1) = 2/(n(n−1)). For n = 20, this is 5.26e−3, whereas the remaining percentage for the worst-case optimal mechanism is 3.62e−5 even in the worst case.

Let us present the optimal solution for the case n = 5 in detail. By solving the above linear program, we find that the optimal values for the c_i are c_2 = 11/45, c_3 = −1/9, and c_4 = 1/15. That is, the redistribution payment received by each agent is: 11/45 times the second-highest bid among the other agents, minus 1/9 times the third-highest bid among the other agents, plus 1/15 times the fourth-highest bid among the other agents. The total amount redistributed is (11/15)·v̂_2 + (4/15)·v̂_3 − (4/15)·v̂_4 + (4/15)·v̂_5; in the worst case, (11/15)·v̂_2 is redistributed. Hence, the percentage of the total VCG payment that is not redistributed is never more than 4/15 = 26.7%.

Finally, we compare the worst-case optimal mechanism to the Bailey-Cavallo mechanism for m = 1, 2, 3, 4, n = m+2, ...
, 30. These results are in Figure 1. We see that for any m, when n = m+2, the worst-case optimal mechanism has the same worst-case performance as the Bailey-Cavallo mechanism (actually, in this case, the worst-case optimal mechanism is identical to the Bailey-Cavallo mechanism). When n > m+2, the worst-case optimal mechanism outperforms the Bailey-Cavallo mechanism (in the worst case).

7. ANALYTICAL CHARACTERIZATION OF THE WORST-CASE OPTIMAL MECHANISM

We recall that our linear program has the following form:

Variables: c_{m+1}, c_{m+2}, ..., c_{n−1}, k
Maximize k (the percentage redistributed in the worst case)
Subject to:
  Σ_{i=m+1}^{j} c_i ≥ 0, for j = m+1, ..., n−1
  km ≤ (n−m−1)·c_{m+1} ≤ m
  km ≤ n·Σ_{j=m+1}^{m+i−1} c_j + (n−m−i)·c_{m+i} ≤ m, for i = 2, ..., n−m−1
  km ≤ n·Σ_{j=m+1}^{n−1} c_j ≤ m

A linear program has no solution if and only if either the objective is unbounded, or the constraints are contradictory (there is no feasible solution). It is easy to see that k is bounded above by 1 (redistributing more than 100% violates the non-deficit constraint). Also, a feasible solution always exists, for example, k = 0 and c_i = 0 for all i. So an optimal solution always exists. Observe that the linear program model depends only on the number of agents n and the number of units m.
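Because the program is small, any particular instance of it can be checked directly. As a sanity check (a sketch of ours, not part of the original analysis), the following verifies with exact rational arithmetic that the n = 5, m = 1 solution reported in Section 6 (c_2 = 11/45, c_3 = −1/9, c_4 = 1/15, with k = 11/15) satisfies every constraint of this linear program:

```python
from fractions import Fraction as F

n, m = 5, 1
# Optimal constants reported for n = 5, m = 1, and the corresponding k = 1 - 4/15.
c = {2: F(11, 45), 3: F(-1, 9), 4: F(1, 15)}
k = F(11, 15)

# Individual rationality: sum_{i=m+1}^{j} c_i >= 0 for j = m+1, ..., n-1.
partial = [sum(c[i] for i in range(m + 1, j + 1)) for j in range(m + 1, n)]
assert all(p >= 0 for p in partial)

# Combined non-deficit / worst-case constraints, each of the form km <= expr <= m:
#   (n-m-1) c_{m+1},
#   n * sum_{j=m+1}^{m+i-1} c_j + (n-m-i) c_{m+i}   for i = 2, ..., n-m-1,
#   n * sum_{j=m+1}^{n-1} c_j.
exprs = [(n - m - 1) * c[m + 1]]
for i in range(2, n - m):
    exprs.append(n * sum(c[j] for j in range(m + 1, m + i)) + (n - m - i) * c[m + i])
exprs.append(n * sum(c[j] for j in range(m + 1, n)))
assert all(k * m <= e <= m for e in exprs)

print([str(e) for e in exprs])  # → ['11/15', '1', '11/15', '1']
```

Note that the constraints alternate between being tight at km and at m, which is exactly the structure exploited in the analytical characterization below.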
Hence the optimal solution is a function of n and m. It turns out that this optimal solution can be analytically characterized as follows.

Theorem 1. For any m and n with n ≥ m+2, the worst-case optimal mechanism (among linear VCG redistribution mechanisms) is unique. For this mechanism, the percentage redistributed in the worst case is

k* = 1 − C(n−1, m) / Σ_{j=m}^{n−1} C(n−1, j),

where C(·, ·) denotes the binomial coefficient. The worst-case optimal mechanism is characterized by the following values for the c_i:

c*_i = (−1)^{i+m−1} · [(n−m)·C(n−1, m−1)] / [i·Σ_{j=m}^{n−1} C(n−1, j)] · [Σ_{j=i}^{n−1} C(n−1, j)] / C(n−1, i), for i = m+1, ..., n−1.

It should be noted that we have proved c_i = 0 for i ≤ m in Claim 1.

Proof. We first rewrite the linear program as follows. We introduce new variables x_{m+1}, x_{m+2}, ..., x_{n−1}, defined by x_j = Σ_{i=m+1}^{j} c_i for j = m+1, ..., n−1. The linear program then becomes:

Variables: x_{m+1}, x_{m+2}, ..., x_{n−1}, k
Maximize k
Subject to:
  km ≤ (n−m−1)·x_{m+1} ≤ m
  km ≤ (m+i)·x_{m+i−1} + (n−m−i)·x_{m+i} ≤ m, for i = 2, ..., n−m−1
  km ≤ n·x_{n−1} ≤ m
  x_i ≥ 0, for i = m+1, m+2, ..., n−1

We will prove that for any optimal solution to this linear program, k = k*. Moreover, we will prove that when k = k*, x_j = Σ_{i=m+1}^{j} c*_i for j = m+1, ...
, n−1. This will prove the theorem. We first make the following observations (using Σ_{j=m+1}^{n−1} C(n−1, j) = Σ_{j=m}^{n−1} C(n−1, j) − C(n−1, m), m·C(n−1, m) = (n−m)·C(n−1, m−1), and m·(1−k*) = m·C(n−1, m) / Σ_{j=m}^{n−1} C(n−1, j)):

(n−m−1)·c*_{m+1}
= (n−m−1) · [(n−m)·C(n−1, m−1)] / [(m+1)·Σ_{j=m}^{n−1} C(n−1, j)] · [Σ_{j=m+1}^{n−1} C(n−1, j)] / C(n−1, m+1)
= (n−m−1) · [m/(n−m−1)] − (n−m−1) · [m·C(n−1, m)] / [(n−m−1)·Σ_{j=m}^{n−1} C(n−1, j)]
= m − (1−k*)·m = k*·m.

For i = m+1, ..., n−2, using (n−i−1)·C(n−1, i) = (i+1)·C(n−1, i+1):

i·c*_i + (n−i−1)·c*_{i+1}
= (−1)^{i+m−1} · [(n−m)·C(n−1, m−1)] / [Σ_{j=m}^{n−1} C(n−1, j)] · [Σ_{j=i}^{n−1} C(n−1, j)] / C(n−1, i)
  + (n−i−1) · (−1)^{i+m} · [(n−m)·C(n−1, m−1)] / [(i+1)·Σ_{j=m}^{n−1} C(n−1, j)] · [Σ_{j=i+1}^{n−1} C(n−1, j)] / C(n−1, i+1)
= (−1)^{i+m−1} · [(n−m)·C(n−1, m−1)] / [Σ_{j=m}^{n−1} C(n−1, j)] · [Σ_{j=i}^{n−1} C(n−1, j) − Σ_{j=i+1}^{n−1} C(n−1, j)] / C(n−1, i)
= (−1)^{i+m−1} · [(n−m)·C(n−1, m−1)] / [Σ_{j=m}^{n−1} C(n−1, j)]
= (−1)^{i+m−1} · m·(1−k*).

Finally,

(n−1)·c*_{n−1} = (n−1) · (−1)^{n+m} · [(n−m)·C(n−1, m−1)] / [(n−1)·Σ_{j=m}^{n−1} C(n−1, j)] · C(n−1, n−1) / C(n−1, n−1) = (−1)^{m+n} · m·(1−k*).

Summarizing the above, we have:

(n−m−1)·c*_{m+1} = k*·m
(m+1)·c*_{m+1} + (n−m−2)·c*_{m+2} = m·(1−k*)
(m+2)·c*_{m+2} + (n−m−3)·c*_{m+3} = −m·(1−k*)
(m+3)·c*_{m+3} + (n−m−4)·c*_{m+4} = m·(1−k*)
...
(n−3)·c*_{n−3} + 2·c*_{n−2} = (−1)^{m+n−2} · m·(1−k*)
(n−2)·c*_{n−2} + c*_{n−1} = (−1)^{m+n−1} · m·(1−k*)
(n−1)·c*_{n−1} = (−1)^{m+n} · m·(1−k*)

Let x*_j = Σ_{i=m+1}^{j} c*_i for j = m+1, m+2, ..., n−1. The first equation in the above tells us that (n−m−1)·x*_{m+1} = k*·m. By adding the first two equations, we get (m+2)·x*_{m+1} + (n−m−2)·x*_{m+2} = m. By adding the first three equations, we get (m+3)·x*_{m+2} + (n−m−3)·x*_{m+3} = k*·m. By adding the first i equations, where i = 2, ..., n−m−1, we get

(m+i)·x*_{m+i−1} + (n−m−i)·x*_{m+i} = m, if i is even;
(m+i)·x*_{m+i−1} + (n−m−i)·x*_{m+i} = k*·m, if i is odd.

Finally, by adding all the equations, we get n·x*_{n−1} = m if n−m is even, and n·x*_{n−1} = k*·m if n−m is odd. Thus, for all of the constraints other than the nonnegativity constraints, we have shown that they are satisfied by setting x_j = x*_j = Σ_{i=m+1}^{j} c*_i and k = k*. We next show that the nonnegativity constraints are satisfied by these settings as well. For m+1 ≤ i, i+1 ≤ n−1, we have

(1/i) · [Σ_{j=i}^{n−1} C(n−1, j)] / C(n−1, i)
= (1/i) · Σ_{j=i}^{n−1} [i!(n−1−i)!] / [j!(n−1−j)!]
≥ (1/(i+1)) · Σ_{j=i}^{n−2} [i!(n−1−i)!] / [j!(n−1−j)!]
≥ (1/(i+1)) · Σ_{j=i}^{n−2} [(i+1)!(n−2−i)!] / [(j+1)!(n−2−j)!]
= (1/(i+1)) · [Σ_{j=i+1}^{n−1} C(n−1, j)] / C(n−1, i+1).

This implies that the absolute value of c*_i is decreasing as i increases (when there is more than one such constant). We further observe that the sign of c*_i alternates, with the first element c*_{m+1} positive. So x*_j = Σ_{i=m+1}^{j} c*_i ≥ 0 for all j.
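As an aside, the closed-form expressions of Theorem 1 are easy to cross-check computationally against the numerical results of Section 6. The following sketch (a check of ours, with function names of our own choosing) does so with exact rational arithmetic:

```python
from fractions import Fraction as F
from math import comb

def k_star(n, m):
    # Worst-case redistributed fraction from Theorem 1:
    # k* = 1 - C(n-1, m) / sum_{j=m}^{n-1} C(n-1, j).
    return 1 - F(comb(n - 1, m), sum(comb(n - 1, j) for j in range(m, n)))

def c_star(n, m, i):
    # Closed-form optimal constants c*_i, for i = m+1, ..., n-1.
    total = sum(comb(n - 1, j) for j in range(m, n))
    tail = sum(comb(n - 1, j) for j in range(i, n))
    sign = (-1) ** (i + m - 1)
    return sign * F((n - m) * comb(n - 1, m - 1) * tail,
                    i * total * comb(n - 1, i))

# Reproduces the n = 5, m = 1 solution found by the linear program in Section 6.
assert (c_star(5, 1, 2), c_star(5, 1, 3), c_star(5, 1, 4)) == (F(11, 45), F(-1, 9), F(1, 15))
assert 1 - k_star(5, 1) == F(4, 15)

# Matches the n = 10 table entry: 1 - k = 9/511, about 1.76%.
assert abs(float(1 - k_star(10, 1)) - 0.0176) < 1e-4

# The first equation of the summarized system, (n-m-1) c*_{m+1} = k* m,
# holds exactly for various (n, m).
for n, m in [(6, 1), (10, 2), (12, 3)]:
    assert (n - m - 1) * c_star(n, m, m + 1) == k_star(n, m) * m
```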
Thus, we have shown that these x_i = x*_i together with k = k* form a feasible solution of the linear program. We proceed to show that it is in fact the unique optimal solution. First we prove the following claim:

Claim 4. If k̂ and x̂_i, i = m+1, m+2, ..., n−1, satisfy the following inequalities:

k̂m ≤ (n−m−1)·x̂_{m+1} ≤ m
k̂m ≤ (m+i)·x̂_{m+i−1} + (n−m−i)·x̂_{m+i} ≤ m, for i = 2, ..., n−m−1
k̂m ≤ n·x̂_{n−1} ≤ m
k̂ ≥ k*

then we must have x̂_i = x*_i and k̂ = k*.

Proof of claim. Consider the first inequality. We know that (n−m−1)·x*_{m+1} = k*·m, so (n−m−1)·x̂_{m+1} ≥ k̂m ≥ k*m = (n−m−1)·x*_{m+1}. It follows that x̂_{m+1} ≥ x*_{m+1} (since n−m−1 ≠ 0). Now, consider the next inequality, for i = 2. We know that (m+2)·x*_{m+1} + (n−m−2)·x*_{m+2} = m. It follows that (n−m−2)·x̂_{m+2} ≤ m − (m+2)·x̂_{m+1} ≤ m − (m+2)·x*_{m+1} = (n−m−2)·x*_{m+2}, so x̂_{m+2} ≤ x*_{m+2} (i = 2 ≤ n−m−1 implies n−m−2 ≠ 0). Now consider the next inequality, for i = 3. We know that (m+3)·x*_{m+2} + (n−m−3)·x*_{m+3} = k*·m. It follows that (n−m−3)·x̂_{m+3} ≥ k̂m − (m+3)·x̂_{m+2} ≥ k*m − (m+3)·x*_{m+2} = (n−m−3)·x*_{m+3}, so x̂_{m+3} ≥ x*_{m+3} (i = 3 ≤ n−m−1 implies n−m−3 ≠ 0). Proceeding like this all the way up to i = n−m−1, we get that x̂_{m+i} ≥ x*_{m+i} if i is odd, and x̂_{m+i} ≤ x*_{m+i} if i is even. Moreover, if one inequality is strict, then all subsequent inequalities are strict. Now, if we can prove x̂_{n−1} = x*_{n−1}, it would follow that the
$x^*_i$ are equal to the $\hat{x}_i$ (which also implies that $\hat{k} = k^*$). We consider two cases:

Case 1: $n-m$ is even. Then $n-m-1$ is odd, so $\hat{x}_{n-1} \ge x^*_{n-1}$. We also have: $n-m$ even $\Rightarrow nx^*_{n-1} = m$. Combining these two, we get $m = nx^*_{n-1} \le n\hat{x}_{n-1} \le m$, hence $\hat{x}_{n-1} = x^*_{n-1}$.

Case 2: $n-m$ is odd. In this case, we have $\hat{x}_{n-1} \le x^*_{n-1}$ and $nx^*_{n-1} = k^*m$. Then $k^*m \le \hat{k}m \le n\hat{x}_{n-1} \le nx^*_{n-1} = k^*m$, hence $\hat{x}_{n-1} = x^*_{n-1}$.

This completes the proof of the claim.

It follows that if $\hat{k}$, $\hat{x}_i$, $i = m+1, m+2, \ldots, n-1$ is a feasible solution and $\hat{k} \ge k^*$, then since all the inequalities in Claim 4 are satisfied, we must have $\hat{x}_i = x^*_i$ and $\hat{k} = k^*$. Hence no other feasible solution is as good as the one described in the theorem.

Knowing the analytical characterization of the worst-case optimal mechanism provides us with at least two major benefits. First, using these formulas is computationally more efficient than solving the linear program with a general-purpose solver. Second, we can derive the following corollary.

Corollary 1. If the number of units $m$ is fixed, then as the number of agents $n$ increases, the worst-case percentage redistributed linearly converges to 1, with a rate of convergence $\frac{1}{2}$. (That is, $\lim_{n\to\infty} \frac{1-k^*_{n+1}}{1-k^*_n} = \frac{1}{2}$: in the limit, the percentage that is not redistributed halves for every additional agent.)

We note that this is consistent with the experimental data for the single-unit case, where the worst-case remaining percentage roughly halves each time we add another agent. The worst-case percentage that is redistributed under the Bailey-Cavallo mechanism also converges to 1
as the number of agents goes to infinity, but the convergence is much slower: it is not linear (that is, letting $k^C_n$ be the percentage redistributed by the Bailey-Cavallo mechanism in the worst case for $n$ agents, $\lim_{n\to\infty} \frac{1-k^C_{n+1}}{1-k^C_n} = \lim_{n\to\infty} \frac{n}{n+1} = 1$). We now present the proof of the corollary.

Proof. When the number of agents is $n$, the worst-case percentage redistributed is $k^*_n = 1 - \frac{\binom{n-1}{m}}{\sum_{j=m}^{n-1}\binom{n-1}{j}}$. When the number of agents is $n+1$, the percentage becomes $k^*_{n+1} = 1 - \frac{\binom{n}{m}}{\sum_{j=m}^{n}\binom{n}{j}}$. For $n$ sufficiently large, we will have $2^n - mn^{m-1} > 0$, and hence

$$\frac{1-k^*_{n+1}}{1-k^*_n} = \frac{\binom{n}{m}\sum_{j=m}^{n-1}\binom{n-1}{j}}{\binom{n-1}{m}\sum_{j=m}^{n}\binom{n}{j}} = \frac{n}{n-m}\cdot\frac{2^{n-1} - \sum_{j=0}^{m-1}\binom{n-1}{j}}{2^n - \sum_{j=0}^{m-1}\binom{n}{j}},$$

and

$$\frac{n}{n-m}\cdot\frac{2^{n-1} - m(n-1)^{m-1}}{2^n} \le \frac{1-k^*_{n+1}}{1-k^*_n} \le \frac{n}{n-m}\cdot\frac{2^{n-1}}{2^n - mn^{m-1}}$$

(because $\binom{n}{j} \le n^i$ if $j \le i$). Since $\lim_{n\to\infty} \frac{n}{n-m}\cdot\frac{2^{n-1} - m(n-1)^{m-1}}{2^n} = \frac{1}{2}$ and $\lim_{n\to\infty} \frac{n}{n-m}\cdot\frac{2^{n-1}}{2^n - mn^{m-1}} = \frac{1}{2}$, it follows that $\lim_{n\to\infty} \frac{1-k^*_{n+1}}{1-k^*_n} = \frac{1}{2}$.

8. WORST-CASE OPTIMALITY OUTSIDE THE FAMILY

In this section, we prove that the worst-case optimal redistribution mechanism among linear VCG redistribution mechanisms is in fact optimal (in the worst case) among all redistribution mechanisms that are deterministic, anonymous, strategy-proof, efficient and satisfy the non-deficit constraint. Thus, restricting our attention to linear VCG redistribution mechanisms did not come at a loss. To prove this theorem, we need the following lemma. This lemma is not new: it was informally stated by Cavallo [3]. For completeness, we present it here with a detailed proof.

Lemma 2. A VCG redistribution mechanism is deterministic, anonymous and
strategy-proof if and only if there exists a function $f : \mathbb{R}^{n-1} \to \mathbb{R}$ such that the redistribution payment $z_i$ received by $a_i$ satisfies $z_i = f(\hat{v}_1, \hat{v}_2, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n)$ for all $i$ and all bid vectors.

Proof. First, let us prove the "only if" direction: if a VCG redistribution mechanism is deterministic, anonymous and strategy-proof, then there exists a deterministic function $f : \mathbb{R}^{n-1} \to \mathbb{R}$ such that $z_i = f(\hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n)$ for all $i$ and all bid vectors. If a VCG redistribution mechanism is deterministic and anonymous, then for any bid vector $\hat{v}_1 \ge \hat{v}_2 \ge \ldots \ge \hat{v}_n$, the mechanism outputs a unique redistribution payment list $z_1, z_2, \ldots, z_n$. Let $G : \mathbb{R}^n \to \mathbb{R}^n$ be the function that maps $\hat{v}_1, \ldots, \hat{v}_n$ to $z_1, \ldots, z_n$ for all bid vectors. Let $H(i, x_1, \ldots, x_n)$ be the $i$th element of $G(x_1, \ldots, x_n)$, so that $z_i = H(i, \hat{v}_1, \ldots, \hat{v}_n)$ for all bid vectors and all $1 \le i \le n$. Because the mechanism is anonymous, two agents should receive the same redistribution payment if their bids are the same. So, if $\hat{v}_i = \hat{v}_j$, then $H(i, \hat{v}_1, \ldots, \hat{v}_n) = H(j, \hat{v}_1, \ldots, \hat{v}_n)$. Hence, if we let $j = \min\{t \mid \hat{v}_t = \hat{v}_i\}$, then $H(i, \hat{v}_1, \ldots, \hat{v}_n) = H(j, \hat{v}_1, \ldots, \hat{v}_n)$. Let us define $K : \mathbb{R}^n \to \mathbb{N} \times \mathbb{R}^n$ as follows: $K(y, x_1, \ldots, x_{n-1}) = [j, w_1, \ldots, w_n]$, where $w_1, \ldots, w_n$ are $y, x_1, \ldots, x_{n-1}$ sorted in descending order, and $j = \min\{t \mid w_t = y\}$ (the set $\{t \mid w_t = y\}$ is nonempty because $y \in \{w_1, \ldots, w_n\}$). Also, let us define $F : \mathbb{R}^n \to \mathbb{R}$ by $F(\hat{v}_i, \hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n) = H \circ K(\hat{v}_i, \hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n)$
$= H(\min\{t \mid \hat{v}_t = \hat{v}_i\}, \hat{v}_1, \ldots, \hat{v}_n) = H(i, \hat{v}_1, \ldots, \hat{v}_n) = z_i$. That is, $F$ is the redistribution payment to an agent that bids $\hat{v}_i$ when the other bids are $\hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n$. Since our mechanism is required to be strategy-proof, and the space of valuations is unrestricted, $z_i$ should be independent of $\hat{v}_i$ by Lemma 1 in Cavallo [3]. Hence, we can simply ignore the first input to $F$: let $f(x_1, x_2, \ldots, x_{n-1}) = F(0, x_1, x_2, \ldots, x_{n-1})$. So, for all bid vectors and all $i$, we have $z_i = f(\hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n)$. This completes the proof of the "only if" direction.

For the "if" direction: if the redistribution payment received by $a_i$ satisfies $z_i = f(\hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n)$ for all bid vectors and all $i$, then the mechanism is clearly deterministic and anonymous. To prove strategy-proofness, we observe that because an agent's redistribution payment is not affected by her own bid, her incentives are the same as in the VCG mechanism, which is strategy-proof.

Now we are ready to introduce the next theorem.

Theorem 2. For any $m$ and $n$ with $n \ge m+2$, the worst-case optimal mechanism among the family of linear VCG redistribution mechanisms is worst-case optimal among all mechanisms that are deterministic, anonymous, strategy-proof, efficient and satisfy the non-deficit constraint.

While we needed individual rationality earlier in the paper, this theorem does not mention it; that is, we cannot find a mechanism with better worst-case performance even if we sacrifice individual rationality. (The worst-case optimal linear VCG redistribution mechanism is, of course, individually rational.)

Proof. Suppose there is a redistribution mechanism (when the number of units is $m$ and the number of agents is $n$) that satisfies all of the above properties
and has a better worst-case performance than the worst-case optimal linear VCG redistribution mechanism; that is, its worst-case redistribution percentage $\hat{k}$ is strictly greater than $k^*$. By Lemma 2, for this mechanism there is a function $f : \mathbb{R}^{n-1} \to \mathbb{R}$ such that $z_i = f(\hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n)$ for all $i$ and all bid vectors. We first prove that $f$ has the following properties.

Claim 5. $f(1, 1, \ldots, 1, 0, 0, \ldots, 0) = 0$ if the number of 1s is less than or equal to $m$.

Proof of claim. We assumed that for this mechanism, the worst-case redistribution percentage satisfies $\hat{k} > k^* \ge 0$. If the total VCG payment is $x$, the total redistribution payment must lie in $[\hat{k}x, x]$ (by the worst-case and non-deficit constraints). Consider the case where all agents bid 0, so that the total VCG payment is also 0. Then the total redistribution payment must lie in $[\hat{k} \cdot 0, 0]$; that is, it must be 0. Hence every agent's redistribution payment $f(0, 0, \ldots, 0)$ must be 0. Now, let $t_i = f(1, 1, \ldots, 1, 0, 0, \ldots, 0)$, where the number of 1s equals $i$. We just proved that $t_0 = 0$. If $t_{n-1} = 0$, consider the bid vector where everyone bids 1: the total VCG payment is $m$ and the total redistribution payment is $nf(1, 1, \ldots, 1) = nt_{n-1} = 0$. This corresponds to 0% redistribution, which is contrary to our assumption that $\hat{k} > k^* \ge 0$. Hence $t_{n-1} \ne 0$. Now, consider $j = \min\{i \mid t_i \ne 0\}$ (which is well-defined because $t_{n-1} \ne 0$). If $j > m$, the property is satisfied. If $j \le m$, consider the bid vector where $\hat{v}_i = 1$ for $i \le j$ and $\hat{v}_i = 0$ for all other $i$.
Under this bid vector, the first $j$ agents each get redistribution payment $t_{j-1} = 0$, and the remaining $n-j$ agents each get $t_j$. Thus, the total redistribution payment is $(n-j)t_j$. Because the total VCG payment for this bid vector is 0, we must have $(n-j)t_j = 0$, so $t_j = 0$ (as $j \le m < n$). But this is contrary to the definition of $j$. Hence $f(1, 1, \ldots, 1, 0, 0, \ldots, 0) = 0$ if the number of 1s is less than or equal to $m$.

Claim 6. $f$ satisfies the following inequalities, where $t_i$ is defined as in the proof of Claim 5:

$\hat{k}m \le (n-m-1)t_{m+1} \le m$

$\hat{k}m \le (m+i)t_{m+i-1} + (n-m-i)t_{m+i} \le m$ for $i = 2, 3, \ldots, n-m-1$

$\hat{k}m \le nt_{n-1} \le m$

Proof of claim. For $j = m+1, \ldots, n$, consider the bid vectors where $\hat{v}_i = 1$ for $i \le j$ and $\hat{v}_i = 0$ for all other $i$. These bid vectors, together with the non-deficit constraint and the worst-case constraint, produce the above set of inequalities. For example, when $j = m+1$, we consider the bid vector in which $\hat{v}_i = 1$ for $i \le m+1$ and $\hat{v}_i = 0$ for all other $i$. The first $m+1$ agents each receive a redistribution payment of $t_m = 0$, and all other agents each receive $t_{m+1}$. Thus, the total redistribution is $(n-m-1)t_{m+1}$. The non-deficit constraint gives $(n-m-1)t_{m+1} \le m$ (because the total VCG payment is $m$), and the worst-case constraint gives $(n-m-1)t_{m+1} \ge \hat{k}m$. Combining these two, we get the first inequality. The other inequalities can be obtained in the same way.

We now observe that the inequalities in Claim 6, together with $\hat{k} \ge k^*$, are the same as those in Claim 4 (with the $t_i$ replacing the $\hat{x}_i$). Thus, we can conclude that $\hat{k} = k^*$, which is contrary to our assumption $\hat{k} > k^*$. Hence no mechanism satisfying all the listed properties has a redistribution percentage greater than $k^*$ in the worst
case.

So far we have only talked about the case where $n \ge m+2$. For completeness, we provide the following claim for the $n = m+1$ case.

Claim 7. For any $m$ and $n$ with $n = m+1$, the original VCG mechanism (that is, redistributing nothing) is (uniquely) worst-case optimal among all redistribution mechanisms that are deterministic, anonymous, strategy-proof, efficient and satisfy the non-deficit constraint.

We recall that when $n = m+1$, Claim 1 tells us that the only mechanism inside the family of linear redistribution mechanisms is the original VCG mechanism, so this mechanism is automatically worst-case optimal inside the family. However, to prove the above claim, we need to show that it is worst-case optimal among all redistribution mechanisms that have the desired properties.

Proof. Suppose there exists a redistribution mechanism that satisfies all of the above properties and has a worst-case performance at least as good as that of the original VCG mechanism; that is, its worst-case redistribution percentage is greater than or equal to 0. This implies that the total redistribution payment of this mechanism is always nonnegative. By Lemma 2, for this mechanism there is a function $f : \mathbb{R}^{n-1} \to \mathbb{R}$ such that $z_i = f(\hat{v}_1, \ldots, \hat{v}_{i-1}, \hat{v}_{i+1}, \ldots, \hat{v}_n)$ for all $i$ and all bid vectors. We will prove that $f(x_1, x_2, \ldots, x_{n-1}) = 0$ for all $x_1 \ge x_2 \ge \ldots \ge x_{n-1} \ge 0$. First, consider the bid vector where $\hat{v}_i = 0$ for all $i$. Here, each agent receives a redistribution payment $f(0, 0, \ldots, 0)$. The total redistribution payment is then $nf(0, 0, \ldots, 0)$, which must be both greater than or equal to 0 (by the above observation) and less than or equal to 0 (by the non-deficit criterion and the fact that the total VCG payment is 0). It follows that $f(0, 0, \ldots, 0) = 0$. Now, let us consider the bid vector where $\hat{v}_1 = x_1 \ge 0$ and $\hat{v}_i = 0$ for all other $i$.
For this bid vector, the agent with the highest bid receives a redistribution payment of $f(0, 0, \ldots, 0) = 0$, and the other $n-1$ agents each receive $f(x_1, 0, \ldots, 0)$. By the same reasoning as above, the total redistribution payment must be both greater than or equal to 0 and less than or equal to 0; hence $f(x_1, 0, \ldots, 0) = 0$ for all $x_1 \ge 0$. Proceeding by induction, assume that $f(x_1, x_2, \ldots, x_k, 0, \ldots, 0) = 0$ for all $x_1 \ge x_2 \ge \ldots \ge x_k \ge 0$, for some $k < n-1$. Consider the bid vector where $\hat{v}_i = x_i$ for $i \le k+1$ and $\hat{v}_i = 0$ for all other $i$, where the $x_i$ are arbitrary numbers satisfying $x_1 \ge x_2 \ge \ldots \ge x_k \ge x_{k+1} \ge 0$. For the agents with the highest $k+1$ bids, the redistribution payment is given by $f$ acting on an input with only $k$ nonzero variables, so they all receive 0 by the induction assumption. The other $n-k-1$ agents each receive $f(x_1, x_2, \ldots, x_k, x_{k+1}, 0, \ldots, 0)$. The total redistribution payment is then $(n-k-1)f(x_1, x_2, \ldots, x_k, x_{k+1}, 0, \ldots, 0)$, which must be both greater than or equal to 0 and less than or equal to the total VCG payment. Now, in this bid vector, the lowest bid is 0 because $k+1 < n$. But since $n = m+1$, the total VCG payment is $m\hat{v}_n = 0$. So we have $f(x_1, x_2, \ldots, x_k, x_{k+1}, 0, \ldots, 0) = 0$ for all $x_1 \ge x_2 \ge \ldots \ge x_k \ge x_{k+1} \ge 0$. By induction, this statement holds for all $k < n-1$; when $k+1 = n-1$, we have $f(x_1, x_2, \ldots, x_{n-2}, x_{n-1}) = 0$ for all $x_1 \ge x_2 \ge \ldots$
$\ge x_{n-2} \ge x_{n-1} \ge 0$. Hence, in this mechanism, the redistribution payment is always 0; that is, the mechanism is just the original VCG mechanism.

Incidentally, we obtain the following corollary.

Corollary 2. No VCG redistribution mechanism satisfies all of the following properties: determinism, anonymity, strategy-proofness, efficiency, and (strong) budget balance. This holds for any $n \ge m+1$.

Proof. For the case $n \ge m+2$: if such a mechanism existed, its worst-case performance would be better than that of the worst-case optimal linear VCG redistribution mechanism, which by Theorem 1 obtains a redistribution percentage strictly less than 1. But Theorem 2 shows that it is impossible to outperform this mechanism in the worst case. For the case $n = m+1$: if such a mechanism existed, it would perform as well as the original VCG mechanism in the worst case, which by Claim 7 implies that it is identical to the VCG mechanism. But the VCG mechanism is not (strongly) budget balanced.

9. CONCLUSIONS

For allocation problems with one or more items, the well-known Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategy-proof, individually rational, and does not incur a deficit. However, the VCG mechanism is not (strongly) budget balanced: generally, the agents' payments will sum to more than 0. If there is an auctioneer who is selling the items, this may be desirable, because the surplus payment corresponds to revenue for the auctioneer. However, if the items do not have an owner and the agents are merely interested in allocating the items efficiently among themselves, any surplus payment is undesirable, because it will have to flow out of the system of agents. In 2006, Cavallo [3] proposed a mechanism that redistributes some of the VCG payment back to the agents, while maintaining efficiency, strategy-proofness, individual rationality, and the non-deficit property. In this paper, we extended this result in a restricted setting. We
studied allocation settings where there are multiple indistinguishable units of a single good and agents have unit demand. (For this specific setting, Cavallo's mechanism coincides with a mechanism proposed by Bailey in 1997 [2].) Here we proposed a family of mechanisms that redistribute some of the VCG payment back to the agents. All mechanisms in the family are efficient, strategy-proof, individually rational, and never incur a deficit. The family includes the Bailey-Cavallo mechanism as a special case. We then provided an optimization model for finding the optimal mechanism inside the family, that is, the mechanism that maximizes redistribution in the worst case, and showed how to cast this model as a linear program. We gave both numerical and analytical solutions of this linear program, and the (unique) resulting mechanism shows significant improvement over the Bailey-Cavallo mechanism (in the worst case). Finally, we proved that the obtained mechanism is optimal among all anonymous deterministic mechanisms that satisfy the above properties.

One important direction for future research is to try to extend these results beyond multi-unit auctions with unit demand. However, it turns out that in sufficiently general settings, the worst-case optimal redistribution percentage is 0. In such settings, the worst-case criterion provides no guidance in determining a good redistribution mechanism (even redistributing nothing achieves the optimal worst-case percentage), so it becomes necessary to pursue other criteria. Alternatively, one can try to identify other special settings in which positive redistribution in the worst case is possible. Another direction for future research is to consider whether this mechanism has applications to collusion. For example, in a typical collusive scheme, there is a bidding ring consisting of a number of colluders, who submit only a single bid [10, 17]. If this bid wins, the colluders must allocate the item amongst
themselves, perhaps using payments; but of course they do not want payments to flow out of the ring.

This work is part of a growing literature on designing mechanisms that obtain good results in the worst case. Traditionally, economists have mostly focused either on designing mechanisms that always obtain certain properties (such as the VCG mechanism), or on designing mechanisms that are optimal with respect to some prior distribution over the agents' preferences (such as the Myerson auction [20] and the Maskin-Riley auction [18] for maximizing expected revenue). Some more recent papers have focused on designing mechanisms for profit maximization using worst-case competitive analysis (e.g., [9, 1, 15, 8]). There has also been growing interest in the design of online mechanisms [7], where the agents arrive over time and decisions must be made before all the agents have arrived. Such work often also takes a worst-case competitive analysis approach [14, 13]. It does not appear that there are direct connections between our work and these other works that focus on designing mechanisms that perform well in the worst case. Nevertheless, it seems likely that future research will continue to investigate mechanism design for the worst case, and hopefully a coherent framework will emerge.

10. REFERENCES

[1] G. Aggarwal, A. Fiat, A. Goldberg, J. Hartline, N. Immorlica, and M. Sudan. Derandomization of auctions. STOC, 619-625, 2005.

[2] M. J. Bailey. The demand revealing process: to distribute the surplus. Public Choice, 91:107-126, 1997.

[3] R. Cavallo. Optimal decision-making with minimal waste: Strategyproof redistribution of VCG payments. AAMAS, 882-889, 2006.

[4] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.

[5] B. Faltings. A budget-balanced, incentive-compatible scheme for social choice. AMEC, 30-43, 2005.

[6] J. Feigenbaum, C. Papadimitriou, and S.
Shenker. Sharing the cost of multicast transmissions. JCSS, 63:21-41, 2001.

[7] E. Friedman and D. Parkes. Pricing WiFi at Starbucks: Issues in online mechanism design. EC, 240-241, 2003.

[8] A. Goldberg, J. Hartline, A. Karlin, M. Saks, and A. Wright. Competitive auctions. Games and Economic Behavior, 2006.

[9] A. Goldberg, J. Hartline, and A. Wright. Competitive auctions and digital goods. SODA, 735-744, 2001.

[10] D. A. Graham and R. C. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95(6):1217-1239, 1987.

[11] J. Green and J.-J. Laffont. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica, 45:427-438, 1977.

[12] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.

[13] M. T. Hajiaghayi, R. Kleinberg, M. Mahdian, and D. C. Parkes. Online auctions with re-usable goods. EC, 165-174, 2005.

[14] M. T. Hajiaghayi, R. Kleinberg, and D. C. Parkes. Adaptive limited-supply online auctions. EC, 71-80, 2004.

[15] J. Hartline and R. McGrew. From optimal limited to unlimited supply auctions. EC, 175-182, 2005.

[16] L. Hurwicz. On the existence of allocation systems whose manipulative Nash equilibria are Pareto optimal, 1975. Presented at the 3rd World Congress of the Econometric Society.

[17] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. AAAI, 373-378, 2002.

[18] E. Maskin and J. Riley. Optimal multi-unit auctions. In F. Hahn, editor, The Economics of Missing Markets, Information, and Games, chapter 14, 312-335. Clarendon Press, Oxford, 1989.

[19] H. Moulin. Efficient and strategy-proof assignment with a cheap residual claimant. Working paper, March 2007.

[20] R. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.

[21] R. Myerson and M.
Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 28:265-281, 1983.

[22] D. Parkes, J. Kalagnanam, and M. Eso. Achieving budget-balance with Vickrey-based payment schemes in exchanges. IJCAI, 1161-1168, 2001.

[23] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.

Worst-Case Optimal Redistribution of VCG Payments in Heterogeneous-Item Auctions with Unit Demand

ABSTRACT

Many important problems in multiagent systems involve the allocation of multiple resources among the agents. For resource allocation problems, the well-known VCG mechanism satisfies a list of desired properties, including efficiency, strategy-proofness, individual rationality, and the non-deficit property. However, VCG is generally not budget-balanced. Under VCG, agents pay the VCG payments, which reduces social welfare. To offset the loss of social welfare due to the VCG payments, VCG redistribution mechanisms were introduced. These mechanisms aim to redistribute as much of the VCG payments back to the agents as possible, while maintaining the aforementioned desired properties of the VCG mechanism. We continue the search for worst-case optimal VCG redistribution mechanisms: mechanisms that maximize the fraction of total VCG payment redistributed in the worst case. Previously, a worst-case optimal VCG redistribution mechanism (denoted by WCO) was characterized for multi-unit auctions with nonincreasing marginal values [7]. Later, WCO was generalized to settings involving heterogeneous items [4], resulting in the HETERO mechanism. [4] conjectured that HETERO is feasible and worst-case optimal for heterogeneous-item auctions with unit demand. In this paper, we propose a more natural way to generalize the WCO mechanism. We prove that our generalized mechanism, though represented differently, actually coincides with HETERO. Based on this new representation of HETERO, we prove that
HETERO is indeed feasible and worst-case optimal in heterogeneous-item auctions with unit demand. Finally, we conjecture that HETERO remains feasible and worst-case optimal in the even more general setting of combinatorial auctions with gross substitutes.

1. INTRODUCTION

1.1 VCG Redistribution Mechanisms

Many important problems in multiagent systems involve the allocation of multiple resources among the agents. For resource allocation problems, the well-known VCG mechanism satisfies the following list of desired properties:

• Efficiency: the allocation maximizes the agents' total valuation (without considering payments).

• Strategy-proofness: for any agent, reporting truthfully is a dominant strategy, regardless of the other agents' types.

• (Ex post) individual rationality: every agent's final utility (after deducting her payment) is always nonnegative.

• Non-deficit: the total payment from the agents is nonnegative.

However, VCG is generally not budget-balanced. Under VCG, agents pay the VCG payments, which reduces social welfare. To offset the loss of social welfare due to the VCG payments, VCG redistribution mechanisms were introduced. These mechanisms still allocate the resources using VCG. On top of VCG, these mechanisms try to redistribute as much of the VCG payments back to the agents as possible. We require that an agent's redistribution be independent of her own type. This is sufficient for maintaining strategy-proofness and efficiency (an agent has no control over her own redistribution). For smoothly connected domains (including multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand), the above requirement is also necessary for maintaining strategy-proofness and efficiency [8]. A VCG redistribution mechanism is feasible if it maintains all the desired properties of the VCG mechanism. That is, we also require that the redistribution process maintain individual rationality and
the non-deficit property.

Let $n$ be the number of agents. Since all VCG redistribution mechanisms start by allocating according to the VCG mechanism, a VCG redistribution mechanism is characterized by its redistribution scheme $\vec{r} = (r_1, r_2, \ldots, r_n)$. Under VCG redistribution mechanism $\vec{r}$, agent $i$'s redistribution equals $r_i(\theta_1, \ldots, \theta_{i-1}, \theta_{i+1}, \ldots, \theta_n)$, where $\theta_j$ is agent $j$'s type. (We do not have to differentiate between an agent's true type and her reported type, since all VCG redistribution mechanisms are strategy-proof.) For the mechanism design objective studied in this paper, it is without loss of generality to consider only VCG redistribution mechanisms that are anonymous (we defer the proof of this claim to the appendix). An anonymous VCG redistribution mechanism is characterized by a single function $r$. Under (anonymous) VCG redistribution mechanism $r$, agent $i$'s redistribution equals $r(\theta_{-i})$, where $\theta_{-i}$ is the multiset of the types of the agents other than $i$. We use $\vec{\theta}$ to denote the type profile. Let $VCG(\vec{\theta})$ be the total VCG payment for this type profile. A VCG redistribution mechanism $r$ satisfies the non-deficit property if the total redistribution never exceeds the total VCG payment; that is, for any type profile $\vec{\theta}$, $\sum_i r(\theta_{-i}) \le VCG(\vec{\theta})$. A VCG redistribution mechanism $r$ is (ex post) individually rational if every agent's final utility is always nonnegative. Since VCG is individually rational, a sufficient condition for $r$ to be individually rational is that $r(\theta_{-i}) \ge 0$ for any $\vec{\theta}$ and any $i$ (on top of VCG, every agent also receives a redistribution amount that is always nonnegative). On the other hand, when agent $i$ is not interested in any item (her valuation for any item bundle equals 0), her utility under VCG always equals 0. After redistribution, agent $i$'s utility is exactly her redistribution $r$
$(\theta_{-i})$; that is, $r(\theta_{-i}) \ge 0$ for all $\theta_{-i}$ (hence for all $\vec{\theta}$ and all $i$) is also necessary for individual rationality.

We want to find VCG redistribution mechanisms that maximize the fraction of total VCG payment redistributed in the worst case. This mechanism design problem is equivalent to a functional optimization model: choose $r$ to maximize the worst-case fraction of total VCG payment redistributed, subject to the non-deficit and individual rationality conditions above.

In this paper, we will analytically characterize one worst-case optimal VCG redistribution mechanism for heterogeneous-item auctions with unit demand. (We organize existing results by their settings.) We conclude this subsection with an example VCG redistribution mechanism in the simplest setting of single-item auctions. In a single-item auction, an agent's type is a nonnegative real number representing her utility for winning the item. Without loss of generality, we assume that $\theta_1 \ge \theta_2 \ge \ldots \ge \theta_n \ge 0$. In single-item auctions, the Bailey-Cavallo VCG redistribution mechanism [2, 3] works as follows:

• Allocate the item according to VCG: agent 1 wins the item and pays $\theta_2$. The other agents win nothing and do not pay.

• Every agent receives a redistribution that equals $\frac{1}{n}$ times the second-highest type among the other agents: agents 1 and 2 each receive $\frac{1}{n}\theta_3$, and the other agents each receive $\frac{1}{n}\theta_2$.

The above mechanism obviously maintains strategy-proofness and efficiency (an agent's redistribution does not depend on her own type). It also maintains individual rationality, because all redistributions are nonnegative. The total redistribution equals $\frac{2}{n}\theta_3 + \frac{n-2}{n}\theta_2$, which never exceeds the total VCG payment $\theta_2$; hence the above mechanism maintains the non-deficit property. Finally, the total redistribution is at least $\frac{n-2}{n}\theta_2$, so for single-item auctions, this example mechanism's worst-case redistribution fraction is $\frac{n-2}{n}$.

1.2 Previous Research on Worst-Case Optimal VCG Redistribution Mechanisms

In this subsection, we review existing results on worst-case optimal VCG redistribution mechanisms. Besides high-level discussions, we also choose to include a certain level
of technical details, as they are needed in later sections.

Worst-Case Optimal Redistribution in Multi-Unit Auctions with Unit Demand [7, 12]: In multi-unit auctions with unit demand, the items for sale are identical, and each agent wants at most one copy of the item. (Single-item auctions are special cases of multi-unit auctions with unit demand.) Let $m$ be the number of items. Throughout this paper, we only consider cases where $m \le n-2$. Here, an agent's type is a nonnegative real number representing her valuation for winning one copy of the item. It is without loss of generality to assume that $\theta_1 \ge \theta_2 \ge \ldots \ge \theta_n \ge 0$.

[7] showed that for multi-unit auctions with unit demand, any VCG redistribution mechanism's worst-case redistribution fraction is at most $\alpha^* = 1 - \binom{n-1}{m} / \sum_{j=m}^{n-1}\binom{n-1}{j}$. If we switch to a more general setting, then $\alpha^*$ is still an upper bound: if there existed a VCG redistribution mechanism whose worst-case redistribution fraction were strictly larger than $\alpha^*$ in a more general setting, then this mechanism, applied to multi-unit auctions with unit demand, would have a worst-case redistribution fraction strictly larger than $\alpha^*$, contradicting the meaning of $\alpha^*$.

[7] also characterized a VCG redistribution mechanism for multi-unit auctions with unit demand, called the WCO mechanism. WCO's worst-case redistribution fraction is exactly $\alpha^*$; that is, it is worst-case optimal. WCO was obtained by optimizing within the family of linear VCG redistribution mechanisms. A linear VCG redistribution mechanism $r$ takes the form $r(\theta_{-i}) = \sum_{j} c_j [\theta_{-i}]_j$, where the $c_j$ are constants (we only consider values of the $c_j$ that correspond to feasible VCG redistribution mechanisms) and $[\theta_{-i}]_j$ is the $j$-th highest type among $\theta_{-i}$.
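As a numerical sanity check, the upper bound $\alpha^*$ is easy to evaluate exactly. The sketch below (the function name `alpha_star` is ours, not from [7]) computes $\alpha^* = 1 - \binom{n-1}{m} / \sum_{j=m}^{n-1}\binom{n-1}{j}$ with rational arithmetic and illustrates the corollary proved earlier: for fixed $m$, the fraction that is not redistributed roughly halves with each additional agent.

```python
from fractions import Fraction
from math import comb

def alpha_star(n: int, m: int) -> Fraction:
    """Upper bound on the worst-case redistribution fraction for m units
    and n unit-demand agents (n >= m + 2):
    alpha* = 1 - C(n-1, m) / sum_{j=m}^{n-1} C(n-1, j)."""
    total = sum(comb(n - 1, j) for j in range(m, n))  # j = m, ..., n-1
    return 1 - Fraction(comb(n - 1, m), total)

# Single-item sanity check: with n = 3 agents, alpha* = 1/3.
print(alpha_star(3, 1))  # -> 1/3

# For fixed m, the unredistributed fraction 1 - alpha* roughly halves
# per additional agent: the ratio below tends to 1/2 as n grows.
for n in (10, 20, 40):
    ratio = (1 - alpha_star(n + 1, 1)) / (1 - alpha_star(n, 1))
    print(n, float(ratio))
```

Using `Fraction` avoids any floating-point error in the binomial sums, so the printed ratios reflect the exact closed form.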
Linear mechanism r is characterized by the values of the ci.\nThe optimal values of the ci are as follows: c \u2217 i = 0 for i = 1, 2,..., m, and for i = m + 1,..., n \u2212 1, the c \u2217 i are given by a closed-form expression derived in [7].\nThe characterization of WCO then follows: WCO is the linear mechanism with ci = c \u2217 i.\nWorst-Case Optimal Redistribution in Multi-Unit Auctions with Nonincreasing Marginal Values [7]: Multi-unit auctions with nonincreasing marginal values are more general than multi-unit auctions with unit demand.\n(Footnote 2: [7] showed that for multi-unit auctions with unit demand, when m = n \u2212 1, the worst-case redistribution fraction (of any feasible VCG redistribution mechanism) is at most 0.\nSince the setting studied in this paper is more general (heterogeneous-item auctions with unit demand), we also have that the worst-case redistribution fraction is at most 0 when m = n \u2212 1.\nSince heterogeneous-item auctions with x items are special cases of heterogeneous-item auctions with x + 1 items, we have that for our setting the worst-case redistribution fraction is at most 0 when m \u2265 n \u2212 1.\nThat is, not redistributing anything is worst-case optimal when m \u2265 n \u2212 1.)\n(Footnote 3: WCO has also been independently derived in [12], under a slightly different objective of maximizing the worst-case efficiency ratio.\nAlso, for [12]'s objective, the optimal mechanism coincides with WCO only when the individual rationality constraint is enforced.)\nIn this more general setting, the items are still identical, but an agent may demand more than one copy of the item.\nAn agent's valuation for winning the first copy of the item is called her initial\/first marginal value.\nSimilarly, an agent's additional valuation for winning the i-th copy of the item is called her i-th marginal value.\nAn agent's type contains m nonnegative real numbers (i-th marginal value for i = 1,..., m).\nIn this setting, it is further assumed that the marginal values are nonincreasing.\nAs discussed earlier, in this more general setting, any VCG redistribution mechanism's worst-case
redistribution fraction is still bounded above by \u03b1 *.\n[7] generalized WCO to this setting, and proved that its worst-case redistribution fraction remains the same.\nTherefore, WCO (after generalization) is also worst-case optimal for multi-unit auctions with nonincreasing marginal values.\nThe original definition of WCO does not directly generalize to multi-unit auctions with nonincreasing marginal values.\nWhen it comes to multi-unit auctions with nonincreasing marginal values, an agent's type is no longer a single value, which means that there is no such thing as \"the j-th highest type among \u03b8 \u2212 i\".\nTo address this, [7] replaced [\u03b8 \u2212 i] j by (1\/m) R (\u03b8 \u2212 i, j \u2212 m \u2212 1) for j = m +1,..., n \u2212 1.\nBasically, (1\/m) R (\u03b8 \u2212 i, j \u2212 m \u2212 1) is the generalization of [\u03b8 \u2212 i] j: it is identical to [\u03b8 \u2212 i] j in the unit demand setting, and it remains well-defined for multi-unit auctions with nonincreasing marginal values.\nWe abuse notation by not differentiating the agents and their types.\nFor example, \u03b8 \u2212 i is equivalent to the set of agents other than i.
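Anticipating the recursive definition of R given next, this substitution can be checked numerically in the unit demand setting (a minimal sketch with our own helper names; R follows the recursion from [7]):

```python
def vcg_total_payment(types, m):
    # Multi-unit auction with unit demand: the m highest bidders win one copy
    # each, and each winner pays the (m+1)-th highest bid (VCG).
    s = sorted(types, reverse=True)
    return m * s[m] if len(s) > m else 0.0

def R(S, i, m):
    # Recursive quantity from [7]; S is a multiset of unit-demand types.
    if i == 0:
        return vcg_total_payment(S, m)  # R(S, 0) = VCG(S)
    s = sorted(S, reverse=True)
    # Average R(U(S, j), i - 1) over removing each of the m + i highest agents.
    return sum(R(s[:j] + s[j + 1:], i - 1, m) for j in range(m + i)) / (m + i)

# In the unit demand setting, (1/m) R(S, j - m - 1) recovers the j-th highest
# type in S, as the text claims:
S, m = [5.0, 4.0, 3.0, 2.0], 1
assert R(S, 0, m) / m == 4.0  # j = m + 1 = 2: second-highest type
assert R(S, 1, m) / m == 3.0  # j = 3: third-highest type
assert R(S, 2, m) / m == 2.0  # j = 4: fourth-highest type
```

For i = 0 the identity is immediate: with unit demand, VCG(S) is m times the (m+1)-th highest type in S, so dividing by m recovers that type.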
Let S be a set of agents.\nR (S, i) is formally defined as follows (this definition is included for completeness; we will not use it directly):\n\u2022 R (S, 0) = VCG (S) (the total VCG payment when only those in S participate in the auction).\n\u2022 For i = 1,..., | S | \u2212 m \u2212 1, R (S, i) = (1\/(m + i)) \u2211 j = 1,..., m + i R (U (S, j), i \u2212 1).\nHere, U (S, j) is the new set of agents, after removing the agent with the j-th highest initial marginal value in S from S.\nThe general form of WCO is as follows: each agent i receives a redistribution of \u2211 j = m +1,..., n \u2212 1 c \u2217 j (1\/m) R (\u03b8 \u2212 i, j \u2212 m \u2212 1).\nWorst-Case Optimal Redistribution in Heterogeneous-Item Auctions with Unit Demand [4]: In heterogeneous-item auctions with unit demand, the items for sale are different.\nEach agent demands at most one item.\nHere, an agent's type consists of m nonnegative real numbers (her valuation for winning item i for i = 1,..., m).\nHeterogeneous-item auctions with unit demand are the main focus of this paper.\nSince heterogeneous-item auctions with unit demand are more general than multi-unit auctions with unit demand, \u03b1 * is still an upper bound on the worst-case redistribution fraction.\n[4] proposed the HETERO mechanism, by generalizing WCO.\nThe authors conjectured that HETERO is feasible and has a worst-case redistribution fraction that equals \u03b1 *.\nThat is, the authors conjectured that HETERO is worst-case optimal in this setting.\nThe main contribution of this paper is a proof of this conjecture.\nRedistribution in Combinatorial Auctions with Gross Substitutes [6]: The gross substitutes condition was first proposed in [9].\nLike unit demand, the gross substitutes condition is a condition on an agent's type (does not depend on the mechanism under discussion).\nIn words, an agent's type satisfies the gross substitutes condition if her demand for an item does not decrease when the prices of the other items increase.\nBoth multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand are special cases of
combinatorial auctions with gross substitutes [5, 9].\n[6] showed that for this setting, the worst-case redistribution fraction of the Bailey-Cavallo mechanism [2, 3] is exactly (n \u2212 m \u2212 1)\/n (when n \u2265 m + 1), and it is possible to construct mechanisms with even higher worst-case redistribution fractions.\nThe authors did not find a worst-case optimal mechanism for this setting.\nAt the end of this paper, we conjecture that HETERO is optimal for combinatorial auctions with gross substitutes.\nFinally, Naroditskiy et al. [13] proposed a numerical technique for designing worst-case optimal redistribution mechanisms.\nThe proposed technique only works for single-parameter domains.\nIt does not apply to our setting (multi-parameter domain).\n1.3 Our contribution\nWe generalize WCO to heterogeneous-item auctions with unit demand.\nWe prove that the generalized mechanism, though represented differently, coincides with the HETERO mechanism proposed in [4].\nThat is, what we propose is not a new mechanism, but a new representation of an existing mechanism.\nBased on our new representation of HETERO, we prove that HETERO is indeed feasible and worst-case optimal when applied to heterogeneous-item auctions with unit demand, thus confirming the conjecture raised in [4].\nWe conclude with a new conjecture that HETERO remains feasible and worst-case optimal in the even more general setting of combinatorial auctions with gross substitutes.\n2.\nNEW REPRESENTATION OF HETERO\n3.\nFEASIBILITY AND WORST-CASE OPTIMALITY OF HETERO\n4.\nCONCLUSION\nWe conclude our paper with the following conjecture: CONJECTURE 1.\nGross substitutes implies redistribution monotonicity.\nThat is, HETERO remains feasible and worst-case optimal in combinatorial auctions with gross substitutes.\nThe idea is that both multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand satisfy redistribution monotonicity.\nA natural conjecture is that the \"most restrictive
joint\" of these two settings also satisfies redistribution monotonicity.\nThere are many well-studied auction settings that contain both multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand (a list of which can be found in [10]).\nAmong these well-studied settings, combinatorial auctions with gross substitutes is the most restrictive.\nTo prove the conjecture, we need to prove that gross substitutes implies that for any set of agents S, R (S, 0)> R (S, 1)>...> R (S, ISI--m--1)> 0.\nSo far, we have only proved R (S, 0)> R (S, 1)> 0.","lvl-4":"Worst-Case Optimal Redistribution of VCG Payments in Heterogeneous-Item Auctions with Unit Demand\nABSTRACT\nMany important problems in multiagent systems involve the allocation of multiple resources among the agents.\nFor resource allocation problems, the well-known VCG mechanism satisfies a list of desired properties, including efficiency, strategy-proofness, individual rationality, and the non-deficit property.\nHowever, VCG is generally not budget-balanced.\nUnder VCG, agents pay the VCG payments, which reduces social welfare.\nTo offset the loss of social welfare due to the VCG payments, VCG redistribution mechanisms were introduced.\nThese mechanisms aim to redistribute as much VCG payments back to the agents as possible, while maintaining the aforementioned desired properties of the VCG mechanism.\nWe continue the search for worst-case optimal VCG redistribution mechanisms--mechanisms that maximize the fraction of total VCG payment redistributed in the worst case.\nPreviously, a worst-case optimal VCG redistribution mechanism (denoted by WCO) was characterized for multi-unit auctions with nonincreasing marginal values [7].\nLater, WCO was generalized to settings involving heterogeneous items [4], resulting in the HETERO mechanism.\n[4] conjectured that HETERO is feasible and worst-case optimal for heterogeneous-item auctions with unit demand.\nIn this paper, we propose 
a more natural way to generalize the WCO mechanism.\nWe prove that our generalized mechanism, though represented differently, actually coincides with HETERO.\nBased on this new representation of HETERO, we prove that HETERO is indeed feasible and worst-case optimal in heterogeneous-item auctions with unit demand.\nFinally, we conjecture that HETERO remains feasible and worst-case optimal in the even more general setting of combinatorial auctions with gross substitutes.\n1.\nINTRODUCTION\n1.1 VCG Redistribution Mechanisms\nMany important problems in multiagent systems involve the allocation of multiple resources among the agents.\nFor resource allocation problems, the well-known VCG mechanism satisfies the following list of desired properties:\n\u2022 Efficiency: the allocation maximizes the agents' total valuation (without considering payments).\n\u2022 Strategy-proofness: for any agent, reporting truthfully is a dominant strategy, regardless of the other agents' types.\n\u2022 (Ex post) individual rationality: Every agent's final utility (after deducting her payment) is always nonnegative.\n\u2022 Non-deficit: the total payment from the agents is nonnegative.\nHowever, VCG is generally not budget-balanced.\nUnder VCG, agents pay the VCG payments, which reduces social welfare.\nTo offset the loss of social welfare due to the VCG payments, VCG redistribution mechanisms were introduced.\nThese mechanisms still allocate the resources using VCG.\nOn top of VCG, these mechanisms try to redistribute as much VCG payments back to the agents as possible.\nWe require that an agent's redistribution be independent of her own type.\nThis is sufficient for maintaining strategy-proofness and efficiency (an agent has no control over her own redistribution).\nFor smoothly connected domains (including multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand), the above requirement is also necessary for maintaining strategy-proofness and
efficiency [8].\nA VCG redistribution mechanism is feasible if it maintains all the desired properties of the VCG mechanism.\nThat is, we also require that the redistribution process maintains individual rationality and the non-deficit property.\nLet n be the number of agents.\nSince all VCG redistribution mechanisms start by allocating according to the VCG mechanism, a VCG redistribution mechanism is characterized by its redistribution scheme ~ r = (r1, r2,..., rn).\nUnder VCG redistribution mechanism ~ r, agent i's redistribution equals ri (\u03b81,..., \u03b8i \u2212 1, \u03b8i +1,..., \u03b8n), where \u03b8j is agent j's type.\n(We do not have to differentiate between an agent's true type and her reported type, since all VCG redistribution mechanisms are strategy-proof.)\nAn anonymous VCG redistribution mechanism is characterized by a single function r. Under (anonymous) VCG redistribution mechanism r, agent i's redistribution equals r (\u03b8 \u2212 i), where \u03b8 \u2212 i is the multiset of the types of the agents other than i.\nWe use \u03b8 ~ to denote the type profile.\nLet VCG (~ \u03b8) be the total VCG payment for this type profile.\nA VCG redistribution mechanism r satisfies the non-deficit property if the total redistribution never exceeds the total VCG payment.\nA VCG redistribution mechanism r is (ex post) individually rational if every agent's final utility is always nonnegative.\nAfter redistribution, agent i's utility is exactly her redistribution r (\u03b8 \u2212 i).\nWe want to find VCG redistribution mechanisms that maximize the fraction of total VCG payment redistributed in the worst case.\nThis mechanism design problem is equivalent to the following functional optimization model: choose r to maximize the worst-case ratio of total redistribution to total VCG payment, subject to the non-deficit and individual rationality constraints.\nIn this paper, we will analytically characterize one worst-case optimal VCG redistribution mechanism for heterogeneous-item auctions with unit demand.\nWe conclude this subsection with an example VCG redistribution mechanism in the simplest
setting of single-item auctions.\nIn a single-item auction, an agent's type is a nonnegative real number representing her utility for winning the item.\nIn single-item auctions, the Bailey-Cavallo VCG redistribution mechanism [2, 3] works as follows:\n\u2022 Allocate the item according to VCG: Agent 1 wins the item and pays \u03b82.\nThe other agents win nothing and do not pay.\n\u2022 Every agent receives a redistribution that equals 1\/n times the second highest type among the other agents: Agents 1 and 2 each receive (1\/n) \u03b83.\nThe other agents each receive (1\/n) \u03b82.\nThe above mechanism obviously maintains strategy-proofness and efficiency (an agent's redistribution does not depend on her own type).\nIt also maintains individual rationality because all redistributions are nonnegative.\nThe total redistribution equals (2\/n) \u03b83 + ((n \u2212 2)\/n) \u03b82, which never exceeds the total VCG payment \u03b82, so the above mechanism maintains the non-deficit property.\nFinally, the total redistribution is always at least ((n \u2212 2)\/n) \u03b82.\nFor single-item auctions, this example mechanism's worst-case redistribution fraction is (n \u2212 2)\/n.\n1.2 Previous Research on Worst-Case Optimal VCG Redistribution Mechanisms\nIn this subsection, we review existing results on worst-case optimal VCG redistribution mechanisms.\nWorst-Case Optimal Redistribution in Multi-Unit Auctions with Unit Demand [7, 12]: In multi-unit auctions with unit demand, the items for sale are identical.\nEach agent wants at most one copy of the item.\n(Single-item auctions are special cases of multi-unit auctions with unit demand.)\nLet m be the number of items.\nThroughout this paper, we only consider cases where m \u2264 n \u2212 2 (see footnote 2).\nHere, an agent's type is a nonnegative real number representing her valuation for winning one copy of the item.\n[7] also characterized a VCG redistribution mechanism for multi-unit auctions with unit demand, called the WCO mechanism.\nWCO's worst-case redistribution fraction is exactly \u03b1 \u2217.\nThat is, it is worst-case optimal.\nWCO was obtained by optimizing within the family of linear VCG
redistribution mechanisms.\nA linear VCG redistribution mechanism r takes the following form: r (\u03b8 \u2212 i) = \u2211 j = 1,..., n \u2212 1 cj [\u03b8 \u2212 i] j. Here, the ci are constants.\n(We only consider the ci that correspond to feasible VCG redistribution mechanisms.)\n[\u03b8 \u2212 i] j is the j-th highest type among \u03b8 \u2212 i. Linear mechanism r is characterized by the values of the ci.\nThe optimal values of the ci are as follows: c \u2217 i = 0 for i = 1, 2,..., m, with the remaining c \u2217 i given by a closed-form expression in [7].\nThe characterization of WCO then follows: WCO is the linear mechanism with ci = c \u2217 i.\nWorst-Case Optimal Redistribution in Multi-Unit Auctions with Nonincreasing Marginal Values [7]: Multi-unit auctions with nonincreasing marginal values are more general than multi-unit auctions with unit demand.\n(Footnote 2: [7] showed that for multi-unit auctions with unit demand, when m = n \u2212 1, the worst-case redistribution fraction (of any feasible VCG redistribution mechanism) is at most 0.\nSince the setting studied in this paper is more general (heterogeneous-item auctions with unit demand), we also have that the worst-case redistribution fraction is at most 0 when m = n \u2212 1.\nSince heterogeneous-item auctions with x items are special cases of heterogeneous-item auctions with x + 1 items, we have that for our setting the worst-case redistribution fraction is at most 0 when m \u2265 n \u2212 1.\nThat is, not redistributing anything is worst-case optimal when m \u2265 n \u2212 1.)\n(Also, for [12]'s objective, the optimal mechanism coincides with WCO only when the individual rationality constraint is enforced.)\nIn this more general setting, the items are still identical, but an agent may demand more than one copy of the item.\nAn agent's valuation for winning the first copy of the item is called her initial\/first marginal value.\nSimilarly, an agent's additional valuation for winning the i-th copy of the item is called her i-th marginal value.\nAn agent's type contains m nonnegative real numbers (i-th marginal value for i = 1,..., m).\nIn this setting, it is further assumed that the marginal values are nonincreasing.\nAs discussed earlier, in
this more general setting, any VCG redistribution mechanism's worst-case redistribution fraction is still bounded above by \u03b1 *.\n[7] generalized WCO to this setting, and proved that its worst-case redistribution fraction remains the same.\nTherefore, WCO (after generalization) is also worst-case optimal for multi-unit auctions with nonincreasing marginal values.\nThe original definition of WCO does not directly generalize to multi-unit auctions with nonincreasing marginal values.\nWhen it comes to multi-unit auctions with nonincreasing marginal values, an agent's type is no longer a single value, which means that there is no such thing as \"the j-th highest type among \u03b8 \u2212 i\".\nWe abuse notation by not differentiating the agents and their types.\nFor example, \u03b8 \u2212 i is equivalent to the set of agents other than i. Let S be a set of agents.\nU (S, j) denotes the new set of agents, after removing the agent with the j-th highest initial marginal value in S from S.\nThe general form of WCO is as follows:\nWorst-Case Optimal Redistribution in Heterogeneous-Item Auctions with Unit Demand [4]: In heterogeneous-item auctions with unit demand, the items for sale are different.\nEach agent demands at most one item.\nHere, an agent's type consists of m nonnegative real numbers (her valuation for winning item i for i = 1,..., m).\nHeterogeneous-item auctions with unit demand are the main focus of this paper.\nSince heterogeneous-item auctions with unit demand are more general than multi-unit auctions with unit demand, \u03b1 * is still an upper bound on the worst-case redistribution fraction.\n[4] proposed the HETERO mechanism, by generalizing WCO.\nThe authors conjectured that HETERO is feasible and has a worst-case redistribution fraction that equals \u03b1 *.\nThat is, the authors conjectured that HETERO is worst-case optimal in this setting.\nThe main contribution of this paper is a proof of this conjecture.\nRedistribution in Combinatorial Auctions
with Gross Substitutes [6]: The gross substitutes condition was first proposed in [9].\nLike unit demand, the gross substitutes condition is a condition on an agent's type (does not depend on the mechanism under discussion).\nIn words, an agent's type satisfies the gross substitutes condition if her demand for an item does not decrease when the prices of the other items increase.\nBoth multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand are special cases of combinatorial auctions with gross substitutes [5, 9].\nThe authors did not find a worst-case optimal mechanism for this setting.\nAt the end of this paper, we conjecture that HETERO is optimal for combinatorial auctions with gross substitutes.\nFinally, Naroditskiy et al. [13] proposed a numerical technique for designing worst-case optimal redistribution mechanisms.\nThe proposed technique only works for single-parameter domains.\nIt does not apply to our setting (multi-parameter domain).\n1.3 Our contribution\nWe generalize WCO to heterogeneous-item auctions with unit demand.\nWe prove that the generalized mechanism, though represented differently, coincides with the HETERO mechanism proposed in [4].\nThat is, what we propose is not a new mechanism, but a new representation of an existing mechanism.\nBased on our new representation of HETERO, we prove that HETERO is indeed feasible and worst-case optimal when applied to heterogeneous-item auctions with unit demand, thus confirming the conjecture raised in [4].\nWe conclude with a new conjecture that HETERO remains feasible and worst-case optimal in the even more general setting of combinatorial auctions with gross substitutes.\n4.\nCONCLUSION\nWe conclude our paper with the following conjecture: CONJECTURE 1.\nGross substitutes implies redistribution monotonicity.\nThat is, HETERO remains feasible and worst-case optimal in combinatorial auctions with gross substitutes.\nThe idea is that both multi-unit auctions
with nonincreasing marginal values and heterogeneous-item auctions with unit demand satisfy redistribution monotonicity.\nA natural conjecture is that the \"most restrictive joint\" of these two settings also satisfies redistribution monotonicity.\nThere are many well-studied auction settings that contain both multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand (a list of which can be found in [10]).\nAmong these well-studied settings, combinatorial auctions with gross substitutes is the most restrictive.","lvl-2":"Worst-Case Optimal Redistribution of VCG Payments in Heterogeneous-Item Auctions with Unit Demand\nABSTRACT\nMany important problems in multiagent systems involve the allocation of multiple resources among the agents.\nFor resource allocation problems, the well-known VCG mechanism satisfies a list of desired properties, including efficiency, strategy-proofness, individual rationality, and the non-deficit property.\nHowever, VCG is generally not budget-balanced.\nUnder VCG, agents pay the VCG payments, which reduces social welfare.\nTo offset the loss of social welfare due to the VCG payments, VCG redistribution mechanisms were introduced.\nThese mechanisms aim to redistribute as much VCG payments back to the agents as possible, while maintaining the aforementioned desired properties of the VCG mechanism.\nWe continue the search for worst-case optimal VCG redistribution mechanisms--mechanisms that maximize the fraction of total VCG payment redistributed in the worst case.\nPreviously, a worst-case optimal VCG redistribution mechanism (denoted by WCO) was characterized for multi-unit auctions with nonincreasing marginal values [7].\nLater, WCO was generalized to settings involving heterogeneous items [4], resulting in the HETERO mechanism.\n[4] conjectured that HETERO is feasible and worst-case optimal for heterogeneous-item auctions with unit demand.\nIn this paper, we propose a more natural way to 
generalize the WCO mechanism.\nABSTRACT\nMany important problems in multiagent systems involve the allocation of multiple resources among the agents.\nFor resource allocation problems, the well-known VCG mechanism satisfies a list of desired properties, including efficiency, strategy-proofness, individual rationality, and the non-deficit property.\nHowever, VCG is generally not budget-balanced.\nUnder VCG, agents pay the VCG payments, which reduces social welfare.\nTo offset the loss of social welfare due to the VCG payments, VCG redistribution mechanisms were introduced.\nThese mechanisms aim to redistribute as much VCG payments back to the agents as possible, while maintaining the aforementioned desired properties of the VCG mechanism.\nWe continue the search for worst-case optimal VCG redistribution mechanisms--mechanisms that maximize the fraction of total VCG payment redistributed in the worst case.\nPreviously, a worst-case optimal VCG redistribution mechanism (denoted by WCO) was characterized for multi-unit auctions with nonincreasing marginal values [7].\nLater, WCO was generalized to settings involving heterogeneous items [4], resulting in the HETERO mechanism.\n[4] conjectured that HETERO is feasible and worst-case optimal for heterogeneous-item auctions with unit demand.\nIn this paper, we propose a more natural way to generalize the WCO mechanism.\nWe prove that our generalized mechanism, though represented differently, actually coincides with HETERO.\nBased on this new representation of HETERO, we prove that HETERO is indeed feasible and worst-case optimal in heterogeneous-item auctions with unit demand.\nFinally, we conjecture that HETERO remains feasible and worst-case optimal in the even more general setting of combinatorial auctions with gross substitutes.\n1.\nINTRODUCTION\n1.1 VCG Redistribution Mechanisms\nMany important problems in multiagent systems involve the allocation of multiple resources among the agents.\nFor resource allocation problems, the well-known VCG mechanism satisfies the following list of desired properties:\n\u2022 Efficiency: the allocation maximizes the agents' total valuation (without considering payments).\n\u2022 Strategy-proofness: for any agent, reporting truthfully is a dominant strategy, regardless of the other agents' types.\n\u2022 (Ex post) individual rationality: Every agent's final utility (after deducting her payment) is always nonnegative.\n\u2022 Non-deficit: the total payment from the agents is nonnegative.\nHowever, VCG is generally not budget-balanced.\nUnder VCG, agents pay the VCG payments, which reduces social welfare.\nTo offset the loss of social welfare due to the VCG payments, VCG redistribution mechanisms were introduced.\nThese mechanisms still allocate the resources using VCG.\nOn top of VCG, these mechanisms try to redistribute as much VCG payments back to the agents as possible.\nWe require that an agent's redistribution be independent of her own type.\nThis is sufficient for maintaining strategy-proofness and efficiency (an agent has no control over her own redistribution).\nFor smoothly connected domains (including multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand), the above requirement is also necessary for maintaining strategy-proofness and efficiency [8].\nA VCG
redistribution mechanism is feasible if it maintains all the desired properties of the VCG mechanism.\nThat is, we also require that the redistribution process maintains individual rationality and the non-deficit property.\nLet n be the number of agents.\nSince all VCG redistribution mechanisms start by allocating according to the VCG mechanism, a VCG redistribution mechanism is characterized by its redistribution scheme ~ r = (r1, r2,..., rn).\nUnder VCG redistribution mechanism ~ r, agent i's redistribution equals ri (\u03b81,..., \u03b8i \u2212 1, \u03b8i +1,..., \u03b8n), where \u03b8j is agent j's type.\n(We do not have to differentiate between an agent's true type and her reported type, since all VCG redistribution mechanisms are strategy-proof.)\nFor the mechanism design objective studied in this paper, it is without loss of generality to only consider VCG redistribution mechanisms that are anonymous (we defer the proof of this claim to the appendix).\nAn anonymous VCG redistribution mechanism is characterized by a single function r.
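The feasibility requirements and the worst-case objective formalized below can be checked numerically for any candidate anonymous scheme r. A minimal sketch (our own helper names; the Bailey-Cavallo scheme from Section 1.1 serves as the candidate): it samples random type profiles, asserts nonnegative redistributions (individual rationality) and the non-deficit property, and records the redistributed fraction in the worst sampled case.

```python
import random

def worst_case_fraction(r, vcg_total, n, trials=2000, seed=0):
    # Empirically estimate the fraction of total VCG payment redistributed in
    # the worst sampled case, for an anonymous redistribution scheme r (a
    # function of the multiset of the other agents' types), while checking
    # individual rationality and the non-deficit property along the way.
    rng = random.Random(seed)
    worst = 1.0
    for _ in range(trials):
        theta = sorted((rng.random() for _ in range(n)), reverse=True)
        total_vcg = vcg_total(theta)
        if total_vcg == 0:
            continue
        redist = [r(theta[:i] + theta[i + 1:]) for i in range(n)]
        assert all(x >= 0 for x in redist)       # individual rationality
        assert sum(redist) <= total_vcg + 1e-9   # non-deficit
        worst = min(worst, sum(redist) / total_vcg)
    return worst

# Candidate: the Bailey-Cavallo scheme for a single-item auction with n = 4.
n = 4
bc = lambda others: sorted(others, reverse=True)[1] / n  # 1/n of 2nd-highest other type
vcg_single_item = lambda theta: sorted(theta, reverse=True)[1]  # winner pays theta2
frac = worst_case_fraction(bc, vcg_single_item, n)
# The estimate stays between the guaranteed worst case (n - 2)/n = 0.5 and 1.
assert 0.5 <= frac <= 1.0
```

Sampling only approaches the true worst case from above, so such a check can refute but never prove a worst-case guarantee; the paper's analysis is what establishes the exact bound.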
Under (anonymous) VCG redistribution mechanism r, agent i's redistribution equals r (\u03b8 \u2212 i), where \u03b8 \u2212 i is the multiset of the types of the agents other than i.\nWe use \u03b8 ~ to denote the type profile.\nLet VCG (~ \u03b8) be the total VCG payment for this type profile.\nA VCG redistribution mechanism r satisfies the non-deficit property if the total redistribution never exceeds the total VCG payment.\nThat is, for any type profile ~ \u03b8, \u2211 i r (\u03b8 \u2212 i) \u2264 VCG (~ \u03b8).\nA VCG redistribution mechanism r is (ex post) individually rational if every agent's final utility is always nonnegative.\nSince VCG is individually rational, we have that a sufficient condition for r to be individually rational is for any \u03b8 ~ and any i, r (\u03b8 \u2212 i) \u2265 0 (on top of VCG, every agent also receives a redistribution amount that is always nonnegative).\nOn the other hand, when agent i is not interested in any item (her valuation on any item bundle equals 0), under VCG, i's utility always equals 0.\nAfter redistribution, agent i's utility is exactly her redistribution r (\u03b8 \u2212 i).\nThat is, r (\u03b8 \u2212 i) \u2265 0 for all \u03b8 \u2212 i (hence for all \u03b8 ~ and all i) is also necessary for individual rationality.\nWe want to find VCG redistribution mechanisms that maximize the fraction of total VCG payment redistributed in the worst case.\nThis mechanism design problem is equivalent to the following functional optimization model: choose r to maximize the worst-case ratio of total redistribution to total VCG payment, subject to the non-deficit and individual rationality constraints.\nIn this paper, we will analytically characterize one worst-case optimal VCG redistribution mechanism for heterogeneous-item auctions with unit demand.\nWe conclude this subsection with an example VCG redistribution mechanism in the simplest setting of single-item auctions.\nIn a single-item auction, an agent's type is a nonnegative real number representing her utility for winning the item.\nWithout loss of generality, we assume that \u03b81
\u2265 \u03b82 \u2265...\u2265 \u03b8n \u2265 0.\nIn single-item auctions, the Bailey-Cavallo VCG redistribution mechanism [2, 3] works as follows:\n\u2022 Allocate the item according to VCG: Agent 1 wins the item and pays \u03b82.\nThe other agents win nothing and do not pay.\n\u2022 Every agent receives a redistribution that equals 1\/n times the second highest type among the other agents: Agents 1 and 2 each receive (1\/n) \u03b83.\nThe other agents each receive (1\/n) \u03b82.\nThe above mechanism obviously maintains strategy-proofness and efficiency (an agent's redistribution does not depend on her own type).\nIt also maintains individual rationality because all redistributions are nonnegative.\nThe total redistribution equals (2\/n) \u03b83 + ((n \u2212 2)\/n) \u03b82, which never exceeds the total VCG payment \u03b82 (since \u03b83 \u2264 \u03b82), so the above mechanism maintains the non-deficit property.\nFinally, the total redistribution (2\/n) \u03b83 + ((n \u2212 2)\/n) \u03b82 is always at least ((n \u2212 2)\/n) \u03b82.\nFor single-item auctions, this example mechanism's worst-case redistribution fraction is (n \u2212 2)\/n.\n1.2 Previous Research on Worst-Case Optimal VCG Redistribution Mechanisms\nIn this subsection, we review existing results on worst-case optimal VCG redistribution mechanisms.\nBesides high-level discussions, we also choose to include a certain level of technical details, as they are needed for later sections.\nWorst-Case Optimal Redistribution in Multi-Unit Auctions with Unit Demand [7, 12]: In multi-unit auctions with unit demand, the items for sale are identical.\nEach agent wants at most one copy of the item.\n(Single-item auctions are special cases of multi-unit auctions with unit demand.)\nLet m be the number of items.\nThroughout this paper, we only consider cases where m \u2264 n \u2212 2 (see footnote 2).\nHere, an agent's type is a nonnegative real number representing her valuation for winning one copy of the item.\nIt is without loss of generality to assume that \u03b81 \u2265 \u03b82 \u2265...\u2265 \u03b8n \u2265 0.\n[7] showed that for multi-unit auctions with unit demand, any VCG redistribution mechanism's worst-case redistribution fraction is at most a certain bound \u03b1 \u2217 (a closed-form function of n and m, derived in [7]).\nIf we switch to a
more general setting, then \u03b1 \u2217 is still an upper bound: if there existed a VCG redistribution mechanism whose worst-case redistribution fraction is strictly larger than \u03b1 \u2217 in a more general setting, then this mechanism, when applied to multi-unit auctions with unit demand, would have a worst-case redistribution fraction strictly larger than \u03b1 \u2217, which contradicts the definition of \u03b1 \u2217.\n[7] also characterized a VCG redistribution mechanism for multi-unit auctions with unit demand, called the WCO mechanism (see footnote 3).\nWCO's worst-case redistribution fraction is exactly \u03b1 \u2217.\nThat is, it is worst-case optimal.\nWCO was obtained by optimizing within the family of linear VCG redistribution mechanisms.\nA linear VCG redistribution mechanism r takes the following form: r (\u03b8 \u2212 i) = \u2211 j = 1,..., n \u2212 1 cj [\u03b8 \u2212 i] j. Here, the ci are constants.\n(We only consider the ci that correspond to feasible VCG redistribution mechanisms.)\n[\u03b8 \u2212 i] j is the j-th highest type among \u03b8 \u2212 i. Linear mechanism r is characterized by the values of the ci.\nThe optimal values of the ci are as follows: c \u2217 i = 0 for i = 1, 2,..., m, and for i = m + 1,..., n \u2212 1, the c \u2217 i are given by a closed-form expression derived in [7].\nThe characterization of WCO then follows: WCO is the linear mechanism with ci = c \u2217 i.\nWorst-Case Optimal Redistribution in Multi-Unit Auctions with Nonincreasing Marginal Values [7]: Multi-unit auctions with nonincreasing marginal values are more general than multi-unit auctions with unit demand.\n(Footnote 2: [7] showed that for multi-unit auctions with unit demand, when m = n \u2212 1, the worst-case redistribution fraction (of any feasible VCG redistribution mechanism) is at most 0.\nSince the setting studied in this paper is more general (heterogeneous-item auctions with unit demand), we also have that the worst-case redistribution fraction is at most 0 when m = n \u2212 1.\nSince heterogeneous-item auctions with x items are special cases of heterogeneous-item auctions with x + 1 items, we have that for our setting the worst-case redistribution fraction is at most 0 when m \u2265 n \u2212 1.\nThat is, not redistributing anything is worst-case optimal when m \u2265 n \u2212 1.)\n(Footnote 3: WCO has also been independently derived in [12], under a slightly different objective of maximizing the worst-case efficiency ratio.\nAlso, for [12]'s objective, the optimal mechanism coincides with WCO only when the individual rationality constraint is enforced.)\nIn this more general setting, the items are still identical, but an agent may demand more than one copy of the item.\nAn agent's valuation for winning the first copy of the item is called her initial\/first marginal value.\nSimilarly, an agent's additional valuation for winning the i-th copy of the item is called her i-th marginal value.\nAn agent's type contains m nonnegative real numbers (i-th marginal value for i = 1,..., m).\nIn this setting, it is further assumed that the marginal values are nonincreasing.\nAs discussed earlier, in this more general setting, any VCG redistribution mechanism's worst-case redistribution fraction is still bounded above by \u03b1 *.\n[7] generalized WCO to this setting, and proved that its worst-case redistribution fraction remains the same.\nTherefore, WCO (after generalization) is also worst-case optimal for multi-unit auctions with nonincreasing marginal values.\nThe original definition of WCO does not directly generalize to multi-unit auctions with nonincreasing marginal values.\nWhen it comes to multi-unit auctions with nonincreasing marginal values, an agent's type is no longer a single value, which means that there is no such thing as \"the j-th highest type among \u03b8 \u2212 i\".\nTo address this, [7] replaced [\u03b8 \u2212 i] j by (1\/m) R (\u03b8 \u2212 i, j \u2212 m \u2212 1) for j = m +1,..., n \u2212 1.\nBasically, (1\/m) R (\u03b8 \u2212 i, j \u2212 m \u2212 1) is the generalization of [\u03b8 \u2212 i] j: it is identical to [\u03b8 \u2212 i] j in the unit demand setting, and it remains well-defined for multi-unit auctions with nonincreasing marginal values.\nWe abuse notation by not
differentiating the agents and their types. For example, θ−i is equivalent to the set of agents other than i. Let S be a set of agents. R(S, i) is formally defined as follows (this definition is included for completeness; we will not use it anywhere):
• R(S, 0) = VCG(S) (the total VCG payment when only those in S participate in the auction).
• For i = 1, ..., |S| − m − 1, R(S, i) = (1/(m + i)) Σ_{j=1}^{m+i} R(U(S, j), i − 1). Here, U(S, j) is the new set of agents, after removing from S the agent with the j-th highest initial marginal value in S.
The general form of WCO is as follows: agent i receives a redistribution equal to (1/m) Σ_{j=m+1}^{n−1} c∗_j R(θ−i, j − m − 1).
Worst-Case Optimal Redistribution in Heterogeneous-Item Auctions with Unit Demand [4]: In heterogeneous-item auctions with unit demand, the items for sale are different. Each agent demands at most one item. Here, an agent's type consists of m nonnegative real numbers (her valuation for winning item i, for i = 1, ..., m). Heterogeneous-item auctions with unit demand are the main focus of this paper. Since heterogeneous-item auctions with unit demand are more general than multi-unit auctions with unit demand, α∗ is still an upper bound on the worst-case redistribution fraction. [4] proposed the HETERO mechanism, by generalizing WCO. The authors conjectured that HETERO is feasible and has a worst-case redistribution fraction that equals α∗. That is, the authors conjectured that HETERO is worst-case optimal in this setting. The main contribution of this paper is a proof of this conjecture.
Redistribution in Combinatorial Auctions with Gross Substitutes [6]: The gross substitutes condition was first proposed in [9]. Like unit demand, the gross substitutes condition is a condition on an agent's type (it does not depend on the mechanism under discussion). In words, an agent's type satisfies the gross substitutes condition if her demand for an item does not decrease when the prices of the other items increase. Both multi-unit auctions
with nonincreasing marginal values and heterogeneous-item auctions with unit demand are special cases of combinatorial auctions with gross substitutes [5, 9]. [6] showed that for this setting, the worst-case redistribution fraction of the Bailey-Cavallo mechanism [2, 3] is exactly (n − m − 1)/n (when n ≥ m + 1), and that it is possible to construct mechanisms with even higher worst-case redistribution fractions. The authors did not find a worst-case optimal mechanism for this setting. At the end of this paper, we conjecture that HETERO is optimal for combinatorial auctions with gross substitutes.
Finally, Naroditskiy et al. [13] proposed a numerical technique for designing worst-case optimal redistribution mechanisms. The proposed technique only works for single-parameter domains. It does not apply to our setting (a multi-parameter domain).
1.3 Our Contribution
We generalize WCO to heterogeneous-item auctions with unit demand. We prove that the generalized mechanism, though represented differently, coincides with the HETERO mechanism proposed in [4]. That is, what we propose is not a new mechanism, but a new representation of an existing mechanism. Based on our new representation of HETERO, we prove that HETERO is indeed feasible and worst-case optimal when applied to heterogeneous-item auctions with unit demand, thus confirming the conjecture raised in [4]. We conclude with a new conjecture that HETERO remains feasible and worst-case optimal in the even more general setting of combinatorial auctions with gross substitutes.
2. NEW REPRESENTATION OF HETERO
We recall that WCO was obtained by optimizing within the family of linear VCG redistribution mechanisms. The original representation of HETERO was obtained using a similar approach [4]. The authors focused on a family of mechanisms in which each agent's redistribution is a linear combination, with constant coefficients βi, of the quantities t(θ−i, j). Here, t(S, j) is the expected total VCG payment when we remove j agents uniformly at random from S, and allocate all the items to the
remaining agents. It is easy to see that all member mechanisms of the above family are well-defined for general combinatorial auctions. Not every member mechanism is feasible, though. [4] did not attempt to optimize over this family. Instead, the βi were chosen so that the corresponding mechanism coincides with WCO when it comes to multi-unit auctions with unit demand. It turns out that this choice is unique, and the corresponding mechanism is called HETERO. [4] conjectured that HETERO is feasible and worst-case optimal for heterogeneous-item auctions with unit demand.
In this section, we propose another way to generalize WCO. We will show that the generalized WCO actually coincides with HETERO. That is, what we derive is a new representation of HETERO. This new representation will prove itself useful in later discussions.
We recall that the characterization of WCO for multi-unit auctions with nonincreasing marginal values is based on a series of functions R(S, i). These functions do not directly generalize to settings involving heterogeneous items, because, for i > 0, R(S, i) is defined explicitly based on the agents' initial marginal values. Fortunately, there is an easy way to rewrite R(S, i), so that it becomes well-defined for settings involving heterogeneous items. Based on Equation 1, WCO can be rewritten into the following form (the only changes are that for i > 0, R(S, i)'s definition no longer mentions "initial marginal values"):
Definition 1. Heterogeneous WCO (new representation of HETERO): agent i receives a redistribution equal to (1/m) Σ_{j=m+1}^{n−1} c∗_j R(θ−i, j − m − 1), where
• R(S, 0) = VCG(S)
• For i = 1, ..., |S| − m − 1, R(S, i) equals (1/(m + i)) (Σ_{a∈S} R(S − a, i − 1) − (|S| − m − i) R(S, i − 1))
Heterogeneous WCO is well-defined for general combinatorial auctions, so we can directly apply it to heterogeneous-item auctions with unit demand. Of course, we still have the burden of proving that it remains feasible and worst-case optimal. We will do so in the next section. Heterogeneous WCO is not a
new mechanism. It turns out that it coincides with HETERO for general combinatorial auctions. That is, Definition 1 is a new representation of the existing mechanism HETERO.
Similarly, R(S, 2) = [S]4, R(S, 3) = [S]5, ..., and finally R(S, |S| − m − 1) = R(S, |S| − 2) = [S]|S| (the lowest type among the agents in S). It is clear that redistribution monotonicity holds here. More generally, redistribution monotonicity holds for multi-unit auctions with nonincreasing marginal values: Claim 17 of [7] proved that R(S, i) is nonincreasing in i for multi-unit auctions with nonincreasing marginal values; R(S, i)'s original definition as described in Subsection 1.2 makes it clear that the R(S, i) are nonnegative.
The following proposition greatly simplifies our task:
PROPOSITION 2. If the setting satisfies redistribution monotonicity, then HETERO is feasible (strategy-proof, efficient, individually rational, and non-deficit), and its worst-case redistribution fraction is at least α∗. If the setting is also more general than multi-unit auctions with unit demand, then HETERO is worst-case optimal.
PROOF. We first prove that HETERO is feasible given redistribution monotonicity. According to Definition 1, under HETERO, an agent's redistribution does not depend on her own type. That is, HETERO is strategy-proof and efficient in all settings. We only need to prove that HETERO is individually rational and non-deficit given redistribution monotonicity.
Individual rationality: As discussed in Subsection 1.1, individual rationality is equivalent to redistributions being nonnegative. We recall that for multi-unit auctions with unit demand, under WCO, agent i's redistribution equals Σ_{j=m+1}^{n−1} c∗_j [θ−i]_j. WCO is known to be individually rational. That is, for all θ−i, Σ_{j=m+1}^{n−1} c∗_j [θ−i]_j ≥ 0. (2) Under HETERO, agent i's redistribution equals (1/m) Σ_{j=m+1}^{n−1} c∗_j R(θ−i, j − m − 1). (3)
PROPOSITION 1. Heterogeneous WCO coincides with HETERO for general combinatorial auctions.
Proof omitted since it is
based on pure algebraic manipulation.
3. FEASIBILITY AND WORST-CASE OPTIMALITY OF HETERO
In this section, we prove that HETERO, as represented in Definition 1, is feasible and worst-case optimal for heterogeneous-item auctions with unit demand. We first define the redistribution monotonicity condition:
Definition 2. An auction setting satisfies redistribution monotonicity if for any set of agents S, we have that R(S, 0) ≥ R(S, 1) ≥ ... ≥ R(S, |S| − m − 1) ≥ 0, where R was defined in Definition 1. That is, R(S, 0) = VCG(S), and for i = 1, ..., |S| − m − 1, R(S, i) equals (1/(m + i)) (Σ_{a∈S} R(S − a, i − 1) − (|S| − m − i) R(S, i − 1)).
For example, the setting of single-item auctions satisfies redistribution monotonicity. In a single-item auction, R(S, 0) = VCG(S) = [S]2 ([S]i is the i-th highest type from the agents in S).
Inequality (2) is equivalent to: for all x0 ≥ ... ≥ x_{n−m−2} ≥ 0, Σ_{j=m+1}^{n−1} c∗_j x_{j−m−1} ≥ 0. (4)
Based on (2) and (4) (substituting R(θ−i, j) for xj for all j), we have that (3) is nonnegative. Therefore, redistribution monotonicity implies individual rationality.
Non-deficit and worst-case optimality: For multi-unit auctions with unit demand, under WCO, the total VCG payment is mθ_{m+1}. The total redistribution is Σ_i Σ_{j=m+1}^{n−1} c∗_j [θ−i]_j; since WCO is non-deficit and its worst-case redistribution fraction is α∗, this total lies between α∗ mθ_{m+1} and mθ_{m+1} for every nonincreasing nonnegative type profile. (5)
Under HETERO, the total redistribution is (1/m) Σ_i Σ_{j=m+1}^{n−1} c∗_j R(θ−i, j − m − 1). (6)
The total VCG payment equals VCG(θ⃗) = R(θ⃗, 0). Redistribution monotonicity implies that R(θ⃗, 0) ≥ R(θ⃗, 1) ≥ ... ≥ 0. (7)
Given (5) and (7) (substituting R(θ⃗, j) for xj for all j), we have that (6) is between α∗ times the total VCG payment and the total VCG payment. Therefore, redistribution monotonicity implies the non-deficit property and also worst-case optimality.
We want to prove that for heterogeneous-item auctions with unit demand, the following redistribution monotonicity condition holds: R(S, 0) ≥ R(S, 1) ≥ ... ≥ R(S, |S| − m − 1) ≥ 0. By Proposition 3, it suffices to prove
that for all j, Rj(S, 0) ≥ Rj(S, 1) ≥ ... ≥ Rj(S, |S| − m − 1) ≥ 0. Without loss of generality, we will prove this chain for j = 1.
To prove the above inequality, we need the following definitions and propositions. From now on to the end of this section, the setting by default is heterogeneous-item auctions with unit demand, unless specified otherwise. We use E(T, S) to denote the efficient total valuation when we allocate all the items in T to the agents in S.
In the remainder of this section, we prove that heterogeneous-item auctions with unit demand satisfy redistribution monotonicity, which would then imply that HETERO is feasible and worst-case optimal for heterogeneous-item auctions with unit demand. We define Rj(S, i) by modifying the definition of R(S, i) in Definition 1.
• Rj(S, 0) = VCGj(S). VCGj(S) is the VCG price of item j (the VCG payment from the agent winning item j) when we allocate all the items to the agents in S using VCG.
• For i = 1, ..., |S| − m − 1, Rj(S, i) equals (1/(m + i)) (Σ_{a∈S} Rj(S − a, i − 1) − (|S| − m − i) Rj(S, i − 1)).
[14] showed that the proposition is true when the gross substitutes condition holds. Heterogeneous-item auctions with unit demand satisfy gross substitutes. We use {1} ⊕ {1, ..., m} to denote the item set that contains not only items 1 to m, but also an additional duplicate of item 1.
PROPOSITION 5. Let S be any set of agents. Let a be the agent who wins item 1 when we allocate the items {1, ..., m} to the agents in S. We have that E({1} ⊕ {1, ..., m}, S) = E({1}, a) + E({1, ..., m}, S − a). That is, after we add an additional duplicate of item 1 to the auction, there exists an efficient allocation under which agent a still wins item 1. The above proposition was proved in [11].
PROOF. We prove by induction. When i = 0, by definition, for any S,
PROOF. Let w1 be the winner of item 1 when we allocate the items {1, ..., m} to the agents in S using VCG. VCG1(S) = E({1, ..., m}, S − w1) − E({2, ..., m}, S
− w1). Here a could be either w1 or some other agent. We proceed case by case.
Case a = w1: Let w′1 be the new winner of item 1 when we allocate the items {1, ..., m} to the agents in S − w1 using VCG. VCG1(S − w1) = E({1, ..., m}, S − w1 − w′1) − E({2, ..., m}, S − w1 − w′1). We need to prove that E({1, ..., m}, S − w1) − E({2, ..., m}, S − w1) ≥ E({1, ..., m}, S − w1 − w′1) − E({2, ..., m}, S − w1 − w′1). We construct a new agent x. Let x's valuation for item 1 be extremely high, so that she wins item 1. The above inequality can be rewritten as E({1, ..., m}, S − w1) − E({2, ..., m}, S − w1) − E({1}, x) ≥ E({1, ..., m}, S − w1 − w′1) − E({2, ..., m}, S − w1 − w′1) − E({1}, x). That is, E({1, ..., m}, S − w1) − E({1, ..., m}, S − w1 + x) ≥ E({1, ..., m}, S − w1 − w′1) − E({1, ..., m}, S − w1 − w′1 + x). We rearrange the terms, and get E({1, ..., m}, S − w1) + E({1, ..., m}, S − w1 − w′1 + x) ≥ E({1, ..., m}, S − w1 + x) + E({1, ..., m}, S − w1 − w′1).
Case a ≠ w1: Let w′1 be the new winner of item 1 when we allocate all the items {1, ..., m} to the agents in S − a using VCG.
PROPOSITION 7. Winners still win after we remove some other agents [4, 6]:⁴ For any set of agents S and any set of items T, we use W to denote the set of winners when we allocate the items in T to the agents in S using VCG. After we remove some agents in S, those in W that have not been removed remain winners, provided that a consistent tie-breaking rule exists.
It should be noted that there may not exist a consistent tie-breaking rule that satisfies the above proposition. Fortunately, we are able to prove that tie-breaking is irrelevant for the goal of proving redistribution monotonicity. We say that a type profile is tie-free if it satisfies the following: Let T1 = {1} ⊕ {1, ..., m}. Let T2 = {1, ...,
m}. Basically, T1 and T2 are the only item sets that we will ever mention. A type profile is tie-free if for any set of agents S, when we allocate the items in T1 (or T2) to S, the set of VCG winners is unique. If we only consider tie-free type profiles, then we do not need to be bothered by tie-breaking. We notice that the set of tie-free type profiles is a dense subset of the set of all type profiles: any type profile can be perturbed infinitesimally to become a tie-free type profile. Our ultimate goal is to prove that for any set of agents S, R(S, 0) ≥ R(S, 1) ≥ ... ≥ R(S, |S| − m − 1) ≥ 0. We notice that the R(S, j) are continuous in the agents' types. Therefore, it suffices to prove the above inequality for tie-free type profiles only. From now on, we simply assume that the set of VCG winners is always unique.
Definition 3. For any set of agents S with |S| ≥ m + 1, let D(S) be the set of m + 1 winners when we allocate {1} ⊕ {1, ..., m} to the agents in S. D(S) is called the determination set of S.
PROPOSITION 8. For any a ∈ S − D(S), VCG1(S − a) = VCG1(S).
The above proposition says that for the purpose of calculating item 1's VCG price, only those agents in D(S) are relevant.
⁴ The proposition was originally introduced in [4]. A more rigorous proof of a more general claim was also given in [6].
PROOF. Let w1 be the winner of item 1 when we allocate {1, ..., m} to the agents in S.
VCG1(S) = E({1, ..., m}, S − w1) − E({2, ..., m}, S − w1) = E({1, ..., m}, S − w1) + E({1}, w1) − E({1, ..., m}, S) = E({1} ⊕ {1, ..., m}, S) − E({1, ..., m}, S) (the last step is due to Proposition 5). The first term only depends on those in D(S). The second term also only depends on those in D(S), for the following reason: Let S′ be the set of VCG winners when we allocate {1, ..., m} to the agents in S. The second term only depends on those in S′. We introduce an agent x whose valuation for item 1 is extremely high, so that she wins item 1. When we allocate {1} ⊕ {1, ..., m} to the agents in S + x, the set of VCG winners is then S′ + x. D(S) is the new set of VCG winners after we remove x. By Proposition 7, those in S′ must still remain in D(S). Overall, VCG1(S) only depends on those agents in D(S). Similarly, VCG1(S − a) only depends on those agents in D(S − a). For a ∈ S − D(S), D(S) = D(S − a). Therefore, we must have VCG1(S) = VCG1(S − a).
Definition 4. Let S be any set of agents. Let k be any integer from 1 to |S|. Let a1 ≺ a2 ≺ ... ≺ ak be a sequence of k distinct agents in S. We say these k agents form a winner sequence with respect to S if ai ∈ D(S − a1 − ... − a_{i−1}) for i = 1, ..., k.
Let S′ be a subset of S of size k. We say that S′ forms a winner sequence with respect to S if there exists an ordering of the agents in S′ that forms a winner sequence with respect to S. When S′ forms a winner sequence with respect to S, we call S′ a size-|S′| winner sequence set with respect to S.
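The identity VCG1(S) = E({1} ⊕ {1, ..., m}, S) − E({1, ..., m}, S) and the fact that VCG1(S) depends only on the determination set D(S) can be spot-checked by brute force on small random (hence, almost surely tie-free) instances. The following sketch is illustrative only; the helper names are ours, not from the papers cited above:

```python
import itertools
import random

random.seed(0)
n, m = 6, 3
v = [[random.random() for _ in range(m)] for _ in range(n)]  # v[a][j]: agent a's value for item column j
ITEMS = tuple(range(m))  # "item 1" is column 0

def efficient(items, agents):
    """Return (E(items, agents), winners): the efficient total valuation and the
    winner tuple (winners[p] receives items[p]), by brute force over assignments."""
    best, best_who = 0.0, ()
    for who in itertools.permutations(sorted(agents), len(items)):
        total = sum(v[a][j] for a, j in zip(who, items))
        if total > best:
            best, best_who = total, who
    return best, best_who

def E(items, agents):
    return efficient(items, agents)[0]

def vcg1(agents):
    """VCG price of item 1: E({1..m}, S - w1) - E({2..m}, S - w1)."""
    w1 = efficient(ITEMS, agents)[1][0]   # winner of item 1 (unique, tie-free)
    others = frozenset(agents) - {w1}
    return E(ITEMS, others) - E(ITEMS[1:], others)

def D(agents):
    """Determination set: the m+1 winners once a duplicate of item 1 is added."""
    dup = (ITEMS[0],) + ITEMS             # {1} ⊕ {1, ..., m}
    return frozenset(efficient(dup, agents)[1])

S = frozenset(range(n))
# VCG1(S) = E({1} ⊕ {1..m}, S) - E({1..m}, S)
assert abs(vcg1(S) - (E((ITEMS[0],) + ITEMS, S) - E(ITEMS, S))) < 1e-9
# Removing an agent outside D(S) leaves item 1's VCG price unchanged
for a in S - D(S):
    assert abs(vcg1(S - {a}) - vcg1(S)) < 1e-9
```

The brute force over permutations is exponential; for larger instances one would compute E(T, S) via max-weight bipartite matching instead (e.g., scipy.optimize.linear_sum_assignment).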
Let H(S′, S) = 1 if S′ forms a winner sequence with respect to S, and let H(S′, S) = 0 otherwise. For presentation purposes, we say that the empty set forms a winner sequence (of size 0) with respect to any set S. That is, H(∅, S) = 1. Now we are ready to prove that heterogeneous-item auctions with unit demand satisfy redistribution monotonicity. We recall that it suffices to prove that for any set of agents S, R1(S, 0) ≥ R1(S, 1) ≥ ... ≥ R1(S, |S| − m − 1) ≥ 0.
Now let us analyze the expression Σ_{a∈S} R1(S − a, k). By the induction assumption, it can be rewritten as a sum over choices of a and of size-k winner sequence sets with respect to S − a.
The following lemmas are needed for the proof of the above proposition. All these lemmas are implications of "winners still win after we remove some other agents". The proofs are omitted due to space constraints. Now we are ready to prove the proposition.
PROOF. We prove by induction. Initial step: We have R1(S, 0) = VCG1(S). When k = 0, the claim holds by definition.
We need to prove that the results hold for k + 1. That is, for any S (|S| ≥ k + m + 2), the claimed expression holds. By the induction assumption, the above expression is the sum of |S| · C(m + k, k) terms, where C(·, ·) denotes the binomial coefficient. Each term corresponds to one choice of a among S and one choice of S′ among S − a. We divide these |S| · C(m + k, k) terms into two groups:
Group A, terms with a ∉ D(S − S′): By Lemma 2, S′ must also form a winner sequence with respect to S. That is, there are at most C(m + k, k) choices of S′. For each choice of S′, there are at most |S − S′ − D(S − S′)| = |S| − k − m − 1 choices of a. Overall, there are at most C(m + k, k)(|S| − k − m − 1) terms in Group A.
On the other hand, for any S′ that forms a winner sequence with respect to S, S′ must also form a winner sequence with respect to S − a by Lemma 1. For any a ∉ D(S − S′), there must be a term in Group A that is characterized by a and S′. That is, there are at least C(m + k, k)(|S| − k − m − 1) terms in Group A. Hence, there are exactly C(m + k, k)(|S| − k − m − 1) terms in Group A. For a ∉ D(S − S′), we have that VCG1(S − a − S′) = VCG1(S − S′) by Proposition 8. Therefore, the sum of all the terms in Group A equals (|S| − k − m − 1) Σ_{S′} VCG1(S − S′), where S′ ranges over the size-k winner sequence sets with respect to S.
Group B, the remaining terms: Let X be the set of size-(k + 1) winner sequence sets with respect to S. According to Lemma 3 and Lemma 4, every term in Group B must correspond to an element in X, and every element in X must correspond to exactly k + 1 terms in Group B (e.g., a size-(k + 1) winner sequence set Y = {x1, ..., x_{k+1}} corresponds to the following k + 1 terms: a = xi and S′ = Y − xi for all i). Therefore, the total number of elements in X must be C(m + k + 1, m). The sum of the terms in Group B equals (k + 1) Σ_{Y∈X} VCG1(S − Y).
Proposition 9 implies that the function R1 is always nonnegative. We still need to prove that R1(S, 0) ≥ R1(S, 1) ≥ ... ≥ R1(S, |S| − m − 1). Due to space constraints, we only present an outline of the proof of R1(S, 3) ≥ R1(S, 4), which highlights the main idea behind the full proof. We recall that R1(S, 4) = (1/(m + 4)) (Σ_{a∈S} R1(S − a, 3) − (|S| − m − 4) R1(S, 3)). To prove that R1(S, 3) ≥ R1(S, 4), it suffices to prove that R1(S, 3) ≥ R1(S − a, 3) for any a ∈ S.
Let a be an arbitrary agent in S. According to Proposition 9, we need to prove that the sum characterizing R1(S, 3) is at least the sum characterizing R1(S − a, 3) (both sums are obtained via Proposition 9). Every term is characterized by a size-3 winner sequence set S′.
• For every term on the right-hand side, we map it to a corresponding term on the left-hand side. The corresponding term on the left-hand side is larger or the same.
• We prove that the mapping is injective. That is, different terms on the right-hand side are mapped to different terms on the left-hand side.
• Therefore, the left-hand side must be greater than or equal to the right-hand side.
4. CONCLUSION
We conclude our paper with the following conjecture:
CONJECTURE 1. Gross substitutes implies redistribution monotonicity. That is, HETERO remains feasible and worst-case optimal in combinatorial auctions with gross substitutes.
The idea is that both multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand satisfy redistribution monotonicity. A natural conjecture is that the "most restrictive joint" of these two settings also satisfies redistribution monotonicity. There are many well-studied auction settings that contain both multi-unit auctions with nonincreasing marginal values and heterogeneous-item auctions with unit demand (a list of which can be found in [10]). Among these well-studied settings, combinatorial auctions with gross substitutes is the most restrictive. To prove the conjecture, we need to prove that gross substitutes implies that for any set of agents S, R(S, 0) ≥ R(S, 1) ≥ ... ≥ R(S, |S| − m − 1) ≥ 0. So far, we have only proved R(S, 0) ≥ R(S, 1) ≥ 0.","keyphrases":["mechan","mechan design","vickrei-clark-grove","redistribut payment","effici mechan","strategi-proof","individu ration mechan","linear vcg redistribut mechan","transform to linear program","analyt character","worst-case optim mechan","vickrei-clark-grove mechan","payment redistribut"],"prmu":["P","M","U","R","R","U","R","R","M","M","M","M","R"]}
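As a numerical companion to the paper above: the chain R(S, 0) ≥ R(S, 1) ≥ ... ≥ R(S, |S| − m − 1) ≥ 0 of Definition 2 can be spot-checked on small heterogeneous-item instances with unit demand. The sketch below pairs a brute-force VCG oracle with the recursive form of R from Definition 1 (as reconstructed here from the unit-demand special case); all helper names are ours:

```python
import itertools
import random
from functools import lru_cache

random.seed(1)
n, m = 6, 2
v = [[random.random() for _ in range(m)] for _ in range(n)]  # v[a][j]
ITEMS = tuple(range(m))

def efficient(agents):
    """Best one-to-one assignment of the m items to unit-demand agents."""
    best, best_who = 0.0, ()
    for who in itertools.permutations(sorted(agents), m):
        total = sum(v[a][j] for a, j in zip(who, ITEMS))
        if total > best:
            best, best_who = total, who
    return best, best_who

def total_vcg(agents):
    """R(S, 0): the total VCG payment when only `agents` participate."""
    val, who = efficient(agents)
    payment = 0.0
    for j, w in enumerate(who):
        others = frozenset(agents) - {w}
        # w's VCG payment: the externality w imposes on the other agents
        payment += efficient(others)[0] - (val - v[w][ITEMS[j]])
    return payment

@lru_cache(maxsize=None)
def R(agents, i):
    # Recursion of Definition 1 (reconstructed); in the multi-unit unit-demand
    # special case it reproduces m times the (m+i+1)-th highest type.
    if i == 0:
        return total_vcg(agents)
    s = sum(R(agents - {a}, i - 1) for a in agents)
    return (s - (len(agents) - m - i) * R(agents, i - 1)) / (m + i)

S = frozenset(range(n))
chain = [R(S, i) for i in range(n - m)]  # R(S, 0), ..., R(S, n - m - 1)
# Redistribution monotonicity: nonincreasing and nonnegative
assert all(chain[i] >= chain[i + 1] - 1e-9 for i in range(len(chain) - 1))
assert chain[-1] >= -1e-9
```

Memoizing R on frozensets keeps the recursion tractable for small n; the same harness can be pointed at random gross-substitutes valuations to probe Conjecture 1.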
{"id":"H-30","title":"Latent Concept Expansion Using Markov Random Fields","abstract":"Query expansion, in the form of pseudo-relevance feedback or relevance feedback, is a common technique used to improve retrieval effectiveness. Most previous approaches have ignored important issues, such as the role of features and the importance of modeling term dependencies. In this paper, we propose a robust query expansion technique based on the Markov random field model for information retrieval. The technique, called latent concept expansion, provides a mechanism for modeling term dependencies during expansion. Furthermore, the use of arbitrary features within the model provides a powerful framework for going beyond simple term occurrence features that are implicitly used by most other expansion techniques. We evaluate our technique against relevance models, a state-of-the-art language modeling query expansion technique. Our model demonstrates consistent and significant improvements in retrieval effectiveness across several TREC data sets. We also describe how our technique can be used to generate meaningful multi-term concepts for tasks such as query suggestion\/reformulation.","lvl-1":"Latent Concept Expansion Using Markov Random Fields Donald Metzler metzler@cs.umass.edu W. 
Bruce Croft croft@cs.umass.edu Center for Intelligent Information Retrieval Department of Computer Science University of Massachusetts Amherst, MA 01003 ABSTRACT Query expansion, in the form of pseudo-relevance feedback or relevance feedback, is a common technique used to improve retrieval effectiveness.\nMost previous approaches have ignored important issues, such as the role of features and the importance of modeling term dependencies.\nIn this paper, we propose a robust query expansion technique based on the Markov random field model for information retrieval.\nThe technique, called latent concept expansion, provides a mechanism for modeling term dependencies during expansion.\nFurthermore, the use of arbitrary features within the model provides a powerful framework for going beyond simple term occurrence features that are implicitly used by most other expansion techniques.\nWe evaluate our technique against relevance models, a state-of-the-art language modeling query expansion technique.\nOur model demonstrates consistent and significant improvements in retrieval effectiveness across several TREC data sets.\nWe also describe how our technique can be used to generate meaningful multi-term concepts for tasks such as query suggestion\/reformulation.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval General Terms Algorithms, Experimentation, Theory 1.\nINTRODUCTION Users of information retrieval systems are required to express complex information needs in terms of Boolean expressions, a short list of keywords, a sentence, a question, or possibly a longer narrative.\nA great deal of information is lost during the process of translating from the information need to the actual query.\nFor this reason, there has been a strong interest in query expansion techniques.\nSuch techniques are used to augment the original query to produce a representation that better reflects the underlying information need.\nQuery 
expansion techniques have been well studied for various models in the past and have shown to significantly improve effectiveness in both the relevance feedback and pseudo-relevance feedback setting [12, 21, 28, 29].\nRecently, a Markov random field (MRF) model for information retrieval was proposed that goes beyond the simplistic bag of words assumption that underlies BM25 and the (unigram) language modeling approach to information retrieval [20, 22].\nThe MRF model generalizes the unigram, bigram, and other various dependence models [14].\nMost past term dependence models have failed to show consistent, significant improvements over unigram baselines, with few exceptions [8].\nThe MRF model, however, has been shown to be highly effective across a number of tasks, including ad hoc retrieval [14, 16], named-page finding [16], and Japanese language web search [6].\nUntil now, the model has been solely used for ranking documents in response to a given query.\nIn this work, we show how the model can be extended and used for query expansion using a technique that we call latent concept expansion (LCE).\nThere are three primary contributions of our work.\nFirst, LCE provides a mechanism for combining term dependence with query expansion.\nPrevious query expansion techniques are based on bag of words models.\nTherefore, by performing query expansion using the MRF model, we are able to study the dynamics between term dependence and query expansion.\nNext, as we will show, the MRF model allows arbitrary features to be used within the model.\nQuery expansion techniques in the past have implicitly only made use of term occurrence features.\nBy using more robust feature sets, it is possible to produce better expansion terms that discriminate between relevant and non-relevant documents better.\nFinally, our proposed approach seamlessly provides a mechanism for generating both single and multi-term concepts.\nMost previous techniques, by default, generate terms 
independently. There have been several approaches that make use of generalized concepts; however, such approaches were somewhat heuristic and done outside of the model [19, 28]. Our approach is both formally motivated and a natural extension of the underlying model.
The remainder of this paper is laid out as follows. In Section 2 we describe related query expansion approaches. Section 3 provides an overview of the MRF model and details our proposed latent concept expansion technique. In Section 4 we evaluate our proposed model and analyze the results. Finally, Section 5 concludes the paper and summarizes the major results.
2. RELATED WORK
One of the classic and most widely used approaches to query expansion is the Rocchio algorithm [21]. Rocchio's approach, which was developed within the vector space model, reweights the original query vector by moving the weights towards the set of relevant or pseudo-relevant documents and away from the non-relevant documents. Unfortunately, it is not possible to formally apply Rocchio's approach to a statistical retrieval model, such as language modeling for information retrieval. A number of formalized query expansion techniques have been developed for the language modeling framework, including Zhai and Lafferty's model-based feedback and Lavrenko and Croft's relevance models [12, 29]. Both approaches attempt to use pseudo-relevant or relevant documents to estimate a better query model. Model-based feedback finds the model that best describes the relevant documents while taking a background (noise) model into consideration. This separates the content model from the background model. The content model is then interpolated with the original query model to form the expanded query. The other technique, relevance models, is more closely related to our work. Therefore, we go into the details of the model. Much like model-based feedback, relevance models estimate an improved query model. The only difference
between the two approaches is that relevance models do not explicitly model the relevant or pseudo-relevant documents. Instead, they model a more generalized notion of relevance, as we now show. Given a query Q, a relevance model is a multinomial distribution, P(·|Q), that encodes the likelihood of each term given the query as evidence. It is computed as:
P(w|Q) = Σ_D P(w|D) P(D|Q) ≈ (Σ_{D∈R_Q} P(w|D) P(Q|D) P(D)) / (Σ_{w′} Σ_{D∈R_Q} P(w′|D) P(Q|D) P(D))   (1)
where R_Q is the set of documents that are relevant or pseudo-relevant to query Q. In the pseudo-relevant case, these are the top ranked documents for query Q. Furthermore, it is assumed that P(D) is uniform over this set. These mild assumptions make computing the Bayesian posterior more practical. After the model is estimated, documents are ranked by clipping the relevance model, choosing the k most likely terms from P(·|Q). This clipped distribution is then interpolated with the original, maximum likelihood query model [1]. This can be thought of as expanding the original query by k weighted terms. Throughout the remainder of this work, we refer to this instantiation of relevance models as RM3. There has been relatively little work done in the area of query expansion in the context of dependence models [9]. However, there have been several attempts to expand using multi-term concepts. Xu and Croft's local context analysis (LCA) method combined passage-level retrieval with concept expansion, where concepts were single terms and phrases [28]. Expansion concepts were chosen and weighted using a metric based on co-occurrence statistics. However, it is not clear based on the analysis done how much the phrases helped over the single terms alone. Papka and Allan investigate using relevance feedback to perform multi-term concept expansion for document routing [19]. The concepts used in their work are more general than those used in LCA, and include InQuery query language structures, such as
#UW50(white house), which corresponds to the concept that the terms white and house occur, in any order, within 50 terms of each other. Results showed that combining single term and large window multi-term concepts significantly improved effectiveness. However, it is unclear whether the same approach is also effective for ad hoc retrieval, due to the differences in the tasks.

3. MODEL

This section details our proposed latent concept expansion technique. As mentioned previously, the technique is an extension of the MRF model for information retrieval [14]. Therefore, we begin by providing an overview of the MRF model and our proposed extensions.

3.1 MRFs for IR

3.1.1 Basics

Markov random fields, which are undirected graphical models, provide a compact, robust way of modeling a joint distribution. Here, we are interested in modeling the joint distribution over a query Q = q_1, ..., q_n and a document D. It is assumed the underlying distribution over pairs of documents and queries is a relevance distribution. That is, sampling from the distribution gives pairs of documents and queries such that the document is relevant to the query.

A MRF is defined by a graph G and a set of non-negative potential functions over the cliques in G. The nodes in the graph represent the random variables and the edges define the independence semantics of the distribution. A MRF satisfies the Markov property, which states that a node is independent of all of its non-neighboring nodes given observed values for its neighbors. Given a graph G, a set of potentials \psi_i, and a parameter vector \Lambda, the joint distribution over Q and D is given by:

P_{G,\Lambda}(Q, D) = \frac{1}{Z_\Lambda} \prod_{c \in C(G)} \psi(c; \Lambda)

where Z_\Lambda is a normalizing constant. We follow common convention and parameterize the potentials as \psi_i(c; \Lambda) = \exp[\lambda_i f_i(c)], where f_i(c) is a real-valued feature function.

3.1.2 Constructing G

Given a query Q, the graph G can be constructed in a number of
ways. However, following previous work, we consider three simple variants [14]. These variants are full independence, where each query term is independent of the others given a document; sequential dependence, which assumes a dependence exists between adjacent query terms; and full dependence, which makes no independence assumptions.

3.1.3 Parameterization

MRFs are commonly parameterized based on the maximal cliques of G. However, such a parameterization is too coarse for our needs. We need a parameterization that allows us to associate feature functions with cliques at a finer-grained level, while keeping the number of features, and thus the number of parameters, reasonable. Therefore, we allow cliques to share feature functions and parameters based on clique sets. That is, all of the cliques within a clique set are associated with the same feature function and share a single parameter. This effectively ties together the parameters of the features associated with each set, which significantly reduces the number of parameters while still providing a mechanism for fine-tuning at the level of clique sets.

We propose seven clique sets for use with information retrieval. The first three clique sets consist of cliques that contain one or more query terms and the document node. Features over these cliques should encode how well the terms in the clique configuration describe the document. These sets are:

• T_D - set of cliques containing the document node and exactly one query term.
• O_D - set of cliques containing the document node and two or more query terms that appear in sequential order within the query.
• U_D - set of cliques containing the document node and two or more query terms that appear in any order within the query.

Note that U_D is a superset of O_D. By tying the parameters among the cliques within each set we can control how much influence each type gets. This also avoids the problem of trying to determine how to estimate
weights for each clique within the sets. Instead, we now must only estimate a single parameter per set.

Next, we consider cliques that only contain query term nodes. These cliques, which were not considered in [14], are defined in an analogous way to those just defined, except that the cliques are made up only of query term nodes and do not contain the document node. Feature functions over these cliques should capture how compatible query terms are to one another. These clique features may take on the form of language models that impose well-formedness of the terms. Therefore, we define the following query-dependent clique sets:

• T_Q - set of cliques containing exactly one query term.
• O_Q - set of cliques containing two or more query terms that appear in sequential order within the query.
• U_Q - set of cliques containing two or more query terms that appear in any order within the query.

Finally, there is the clique that only contains the document node. Features over this node can be used as a type of document prior, encoding document-centric properties. This trivial clique set is then:

• D - clique set containing only the singleton node D.

We note that our clique sets form a set cover over the cliques of G, but are not a partition, since some cliques appear in multiple clique sets. After tying the parameters in our clique sets together and using the exponential potential function form, we end up with the following simplified form of the joint distribution:

\log P_{G,\Lambda}(Q, D) = \underbrace{\lambda_{T_D} \sum_{c \in T_D} f_{T_D}(c) + \lambda_{O_D} \sum_{c \in O_D} f_{O_D}(c) + \lambda_{U_D} \sum_{c \in U_D} f_{U_D}(c)}_{F_{DQ}(D,Q) \text{ - document and query dependent}}
    + \underbrace{\lambda_{T_Q} \sum_{c \in T_Q} f_{T_Q}(c) + \lambda_{O_Q} \sum_{c \in O_Q} f_{O_Q}(c) + \lambda_{U_Q} \sum_{c \in U_Q} f_{U_Q}(c)}_{F_Q(Q) \text{ - query dependent}}
    + \underbrace{\lambda_D f_D(D)}_{F_D(D) \text{ - document dependent}}
    - \underbrace{\log Z_\Lambda}_{\text{document + query independent}}

where F_{DQ}, F_Q, and F_D are convenience functions defined by the document and query dependent, query dependent, and document dependent components
of the joint distribution, respectively. These will be used to simplify and clarify expressions derived throughout the remainder of the paper.

3.1.4 Features

Any arbitrary feature function over clique configurations can be used in the model. The correct choice of features depends largely on the retrieval task and the evaluation metric. Therefore, there is unlikely to be a single, universally applicable set of features. To provide an idea of the range of features that can be used, we now briefly describe possible types of features. Possible query term dependent features include tf, idf, named entities, term proximity, and text style, to name a few. Many types of document dependent features can be used as well, including document length, PageRank, readability, and genre, among others. Since it is not our goal here to find optimal features, we use a simple, fixed set of features that have been shown to be effective in previous work [14]. See Table 1 for a list of the features used. These features attempt to capture term occurrence and term proximity. Better feature selection in the future will likely lead to improved effectiveness.

3.1.5 Ranking

Given a query Q, we wish to rank documents in descending order according to P_{G,\Lambda}(D|Q). After dropping document independent expressions from \log P_{G,\Lambda}(Q, D), we derive the following ranking function:

P_{G,\Lambda}(D|Q) \stackrel{rank}{=} F_{DQ}(D, Q) + F_D(D)    (2)

which is a simple weighted linear combination of feature functions that can be computed efficiently for reasonable graphs.

3.1.6 Parameter Estimation

Now that the model has been fully specified, the final step is to estimate the model parameters. Although MRFs are generative models, it is inappropriate to train them using conventional likelihood-based approaches because of metric divergence [17]. That is, the maximum likelihood estimate is unlikely to be the estimate that maximizes our evaluation metric. For this reason, we discriminatively train our model to directly maximize the evaluation metric under consideration [14, 15, 25]. Since our parameter space is small, we make use of a simple hill climbing strategy, although other more sophisticated approaches are possible [10].

Feature                                   Value
f_{T_D}(q_i, D)                           \log[(1-\alpha) \frac{tf_{q_i,D}}{|D|} + \alpha \frac{cf_{q_i}}{|C|}]
f_{O_D}(q_i, q_{i+1}, ..., q_{i+k}, D)    \log[(1-\beta) \frac{tf_{#1(q_i...q_{i+k}),D}}{|D|} + \beta \frac{cf_{#1(q_i...q_{i+k})}}{|C|}]
f_{U_D}(q_i, ..., q_j, D)                 \log[(1-\beta) \frac{tf_{#uw(q_i...q_j),D}}{|D|} + \beta \frac{cf_{#uw(q_i...q_j)}}{|C|}]
f_{T_Q}(q_i)                              -\log \frac{cf_{q_i}}{|C|}
f_{O_Q}(q_i, q_{i+1}, ..., q_{i+k})       -\log \frac{cf_{#1(q_i...q_{i+k})}}{|C|}
f_{U_Q}(q_i, ..., q_j)                    -\log \frac{cf_{#uw(q_i...q_j)}}{|C|}
f_D(D)                                    0

Table 1: Feature functions used in the Markov random field model. Here, tf_{w,D} is the number of times term w occurs in document D, tf_{#1(q_i...q_{i+k}),D} denotes the number of times the exact phrase q_i...q_{i+k} occurs in document D, tf_{#uw(q_i...q_j),D} is the number of times the terms q_i, ..., q_j appear ordered or unordered within a window of N terms, and |D| is the length of document D. The cf and |C| values are analogously defined at the collection level. Finally, \alpha and \beta are model hyperparameters that control smoothing for single term and phrase features, respectively.

3.2 Latent Concept Expansion

In this section we describe how this extended MRF model can be used in a novel way to generate single and multi-term concepts that are topically related to some original query. As we will show, the concepts generated using our technique can be used for query expansion or other tasks, such as suggesting alternative query formulations.

We assume that when a user formulates their original query, they have some set of concepts in mind, but are only able to express a small number of them in the form of a query. We treat the concepts that the user has in mind, but did not explicitly express in the
query, as latent concepts. These latent concepts can consist of a single term, multiple terms, or some combination of the two. It is, therefore, our goal to recover these latent concepts given some original query.

This can be accomplished within our framework by first expanding the original graph G to include the type of concept we are interested in generating. We call this expanded graph H. In Figure 1, the middle graph provides an example of how to construct an expanded graph that can generate single term concepts. Similarly, the graph on the right illustrates an expanded graph that generates two term concepts. Although these two examples make use of the sequential dependence assumption (i.e., dependencies between adjacent query terms), it is important to note that both the original query and the expansion concepts can use any independence structure.

After H is constructed, we compute P_{H,\Lambda}(E|Q), a probability distribution over latent concepts, according to:

P_{H,\Lambda}(E|Q) = \frac{\sum_{D \in R} P_{H,\Lambda}(Q, E, D)}{\sum_{D \in R} \sum_E P_{H,\Lambda}(Q, E, D)}

where R is the universe of all possible documents and E is some latent concept that may consist of one or more terms. Since it is not practical to compute this summation, we must approximate it. We notice that P_{H,\Lambda}(Q, E, D) is likely to be peaked around those documents D that are highly ranked according to query Q.
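This pseudo-relevance approximation can be sketched in code. The following is a minimal, simplified illustration rather than the paper's implementation: it scores candidate single-term concepts using only the unigram (term occurrence) features and omits the ordered and unordered window features; the function name, parameter names, and default values (`alpha`, `lam_term`, `lam_idf`) are assumptions for illustration.

```python
import math
from collections import Counter

def lce_single_term_scores(top_docs, coll_tf, coll_len,
                           alpha=0.5, lam_term=1.0, lam_idf=0.1):
    """Score candidate single-term expansion concepts from
    pseudo-relevant documents (simplified unigram sketch).

    top_docs: list of (tokens, query_log_score) pairs for the
              top-ranked documents retrieved for the query.
    coll_tf:  Counter of collection term frequencies (cf).
    coll_len: total number of terms in the collection (|C|).
    """
    scores = Counter()
    for tokens, q_log_score in top_docs:
        tf = Counter(tokens)
        dlen = len(tokens)
        for e, f in tf.items():
            # smoothed p(e|D): mix document and collection evidence
            p_e_d = (1 - alpha) * f / dlen + alpha * coll_tf[e] / coll_len
            # each document contributes exp(query score + concept score)
            scores[e] += math.exp(q_log_score + lam_term * math.log(p_e_d))
    # document-independent idf-like prior (cf_e / |C|)^(-lam_idf)
    for e in scores:
        scores[e] *= (coll_tf[e] / coll_len) ** (-lam_idf)
    return scores
```

Each document's contribution combines the original query's log-score for the document with the candidate concept's smoothed log-likelihood, and the final factor is the document-independent concept prior; stopword removal is assumed to have happened upstream.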
Therefore, we approximate P_{H,\Lambda}(E|Q) by only summing over a small subset of relevant or pseudo-relevant documents for query Q. This is computed as follows:

P_{H,\Lambda}(E|Q) \approx \frac{\sum_{D \in R_Q} P_{H,\Lambda}(Q, E, D)}{\sum_{D \in R_Q} \sum_E P_{H,\Lambda}(Q, E, D)}    (3)
                   \propto \sum_{D \in R_Q} \exp[ F_{DQ}(Q, D) + F_D(D) + F_{DQ}(E, D) + F_Q(E) ]

where R_Q is a set of relevant or pseudo-relevant documents for query Q and all clique sets are constructed using H. As we see, the likelihood contribution for each document in R_Q is a combination of the original query's score for the document (see Equation 2), concept E's score for the document, and E's document-independent score. Therefore, this equation can be interpreted as measuring how well Q and E account for the top ranked documents and the goodness of E, independent of the documents. For maximum robustness, we use a different set of parameters for F_{DQ}(Q, D) and F_{DQ}(E, D), which allows us to weight the term, ordered, and unordered window features differently for the original query and the candidate expansion concept.

3.2.1 Query Expansion

To use this framework for query expansion, we first choose an expansion graph H that encodes the latent concept structure we are interested in expanding the query with. We then select the k latent concepts with the highest likelihood given by Equation 3. A new graph G' is constructed by augmenting the original graph G with the k expansion concepts E_1, ..., E_k. Finally, documents are ranked according to P_{G',\Lambda}(D|Q, E_1, ...
, E_k) using Equation 2.

3.2.2 Comparison to Relevance Models

Inspecting Equations 1 and 3 reveals the close connection that exists between LCE and relevance models. Both models essentially compute the likelihood of a term (or concept) in the same manner. It is easy to see that just as the MRF model can be viewed as a generalization of language modeling, so too can LCE be viewed as a generalization of relevance models.

Figure 1: Graphical model representations of relevance modeling (left), latent concept expansion using single term concepts (middle), and latent concept expansion using two term concepts (right) for a three term query.

There are important differences between MRFs/LCE and unigram language models/relevance models. See Figure 1 for graphical model representations of both models. Unigram language models and relevance models are based on the multinomial distribution. This distributional assumption locks the model into the bag of words representation and the implicit use of term occurrence features. However, the distribution underlying the MRF model allows us to move beyond both of these assumptions, by modeling dependencies between query terms and allowing arbitrary features to be used explicitly. Moving beyond the simplistic bag of words assumption in this way results in a general, robust model and, as we show in the next section, translates into significant improvements in retrieval effectiveness.

4. EXPERIMENTAL RESULTS

In order to better understand the strengths and weaknesses of our technique, we evaluate it on a wide range of data sets. Table 2 provides a summary of the TREC data sets considered. The WSJ, AP, and ROBUST collections are smaller and consist entirely of newswire articles, whereas WT10g and GOV2 are large web collections. For each data set, we split the available topics into a training and a test set, where the training set is used solely for parameter estimation and the test set is used for evaluation purposes. All
experiments were carried out using a modified version of Indri, which is part of the Lemur Toolkit [18, 23]. All collections were stopped using a standard list of 418 common terms and stemmed using a Porter stemmer. In all cases, only the title portion of the TREC topics is used to construct queries. We construct G using the sequential dependence assumption for all data sets [14].

Name     Description                 # Docs      Train Topics  Test Topics
WSJ      Wall St. Journal 87-92      173,252     51-150        151-200
AP       Assoc. Press 88-90          242,918     51-150        151-200
ROBUST   Robust 2004 data            528,155     301-450       601-700
WT10g    TREC Web collection         1,692,096   451-500       501-550
GOV2     2004 crawl of .gov domain   25,205,179  701-750       751-800

Table 2: Overview of TREC collections and topics.

4.1 Ad Hoc Retrieval Results

We now investigate how well our model performs in practice in a pseudo-relevance feedback setting. We compare unigram language modeling (with Dirichlet smoothing), the MRF model (without expansion), relevance models, and LCE to better understand how each model performs across the various data sets. For the unigram language model, the smoothing parameter was trained. For the MRF model, we train the model parameters (i.e., \Lambda) and model hyperparameters (i.e., \alpha, \beta). For RM3 and LCE, we also train the number of pseudo-relevant feedback documents used and the number of expansion terms.

4.1.1 Expansion with Single Term Concepts

We begin by evaluating how well our model performs when expanding using only single terms. Before we describe and analyze the results, we explicitly state how expansion term likelihoods are computed under this setup (i.e.
using the sequential dependence assumption, expanding with single term concepts, and using our feature set). The expansion term likelihoods are computed as follows:

P_{H,\Lambda}(e|Q) \propto \sum_{D \in R_Q} \exp\Big[ \lambda_{T_D} \sum_{w \in Q} \log\Big((1-\alpha)\frac{tf_{w,D}}{|D|} + \alpha\frac{cf_w}{|C|}\Big)
    + \lambda_{O_D} \sum_{b \in Q} \log\Big((1-\beta)\frac{tf_{#1(b),D}}{|D|} + \beta\frac{cf_{#1(b)}}{|C|}\Big)
    + \lambda_{U_D} \sum_{b \in Q} \log\Big((1-\beta)\frac{tf_{#uw(b),D}}{|D|} + \beta\frac{cf_{#uw(b)}}{|C|}\Big) \Big]
    \cdot \Big((1-\alpha)\frac{tf_{e,D}}{|D|} + \alpha\frac{cf_e}{|C|}\Big)^{\lambda'_{T_D}} \cdot \Big(\frac{cf_e}{|C|}\Big)^{-\lambda'_{T_Q}}    (4)

where b \in Q denotes the set of bigrams in Q. This equation clearly shows how LCE differs from relevance models. When we set \lambda_{T_D} = \lambda'_{T_D} = 1 and all other parameters to 0, we obtain the exact formula that is used to compute term likelihoods in the relevance modeling framework. Therefore, LCE adds two very important factors to the equation. First, it adds the ordered and unordered window features that are applied to the original query. Second, it applies an intuitive tf.idf-like form to the candidate expansion term e. The idf factor, which is not present in relevance models, plays an important role in expansion term selection.

Figure 2: Histograms that demonstrate and compare the robustness of relevance models (RM3) and latent concept expansion (LCE) with respect to the query likelihood model (QL) for the AP, ROBUST, and WT10G data sets.

The results, evaluated using mean average precision, are given in Table 3. As we see, the MRF
model, relevance models, and LCE always significantly outperform the unigram language model. In addition, LCE shows significant improvements over relevance models across all data sets. The relative improvements over relevance models are 6.9% for AP, 12.9% for WSJ, 6.5% for ROBUST, 16.7% for WT10G, and 7.3% for GOV2. Furthermore, LCE shows small, but not significant, improvements over relevance modeling for metrics such as precision at 5, 10, and 20. However, both relevance modeling and LCE show statistically significant improvements in such metrics over the unigram language model.

Another interesting result is that the MRF model is statistically equivalent to relevance models on the two web data sets. In fact, the MRF model outperforms relevance models on the WT10g data set. This reiterates the importance of non-unigram, proximity-based features for content-based web search observed previously [14, 16]. Although our model has more free parameters than relevance models, there is surprisingly little overfitting. Instead, the model exhibits good generalization properties.

4.1.2 Expansion with Multi-Term Concepts

We also investigated expanding using both single and two word concepts. For each query, we expanded using a set of single term concepts and a set of two term concepts. The sets were chosen independently. Unfortunately, only negligible increases in mean average precision were observed. This result may be due to the fact that strong correlations exist between the single term expansion concepts. We found that the two word concepts chosen often consisted of two highly correlated terms that were also chosen as single term concepts. For example, the two term concept stock market was chosen while the single term concepts stock and market were also chosen. Therefore, many two word concepts are unlikely to increase the discriminative power of the expanded query. This result suggests that concepts should be chosen according to some criteria that also
takes novelty, diversity, or term correlations into account. Another potential issue is the feature set used. Other feature sets may ultimately yield different results, especially if they reduce the correlation among the expansion concepts. Therefore, our experiments yield no conclusive results with regard to expansion using multi-term concepts. Instead, the results introduce interesting open questions and directions for future exploration.

         LM      MRF       RM3       LCE
WSJ      .3258   .3425α    .3493α    .3943αβγ
AP       .2077   .2147α    .2518αβ   .2692αβγ
ROBUST   .2920   .3096α    .3382αβ   .3601αβγ
WT10g    .1861   .2053α    .1944α    .2269αβγ
GOV2     .3234   .3520α    .3656α    .3924αβγ

Table 3: Test set mean average precision for language modeling (LM), Markov random field (MRF), relevance models (RM3), and latent concept expansion (LCE). The superscripts α, β, and γ indicate statistically significant improvements (p < 0.05) over LM, MRF, and RM3, respectively.

4.2 Robustness

As we have shown, relevance models and latent concept expansion can significantly improve retrieval effectiveness over the baseline query likelihood model. In this section we analyze the robustness of these two methods. Here, we define robustness as the number of queries whose effectiveness is improved/hurt (and by how much) as a result of applying these methods. A highly robust expansion technique will significantly improve many queries and only minimally hurt a few.

Figure 2 provides an analysis of the robustness of relevance modeling and latent concept expansion for the AP, ROBUST, and WT10G data sets. The analysis for the two data sets not shown is similar. The histograms provide, for various ranges of relative decreases/increases in mean average precision, the number of queries that were hurt/improved with respect to the query likelihood baseline. As the results show, LCE exhibits
strong robustness for each data set. For AP, relevance models improve 38 queries and hurt 11, whereas LCE improves 35 and hurts 14. Although relevance models improve the effectiveness of 3 more queries than LCE, the relative improvement exhibited by LCE is significantly larger. For the ROBUST data set, relevance models improve 67 queries and hurt 32, and LCE improves 77 and hurts 22. Finally, for the WT10G collection, relevance models improve 32 queries and hurt 16, and LCE improves 35 and hurts 14. As with AP, the amount of improvement exhibited by LCE versus relevance models is significantly larger for both the ROBUST and WT10G data sets. In addition, when LCE does hurt performance, it is less likely to hurt as much as relevance modeling, which is a desirable property.

1 word concepts   2 word concepts        3 word concepts
telescope         hubble telescope       hubble space telescope
hubble            space telescope        hubble telescope space
space             hubble space           space telescope hubble
mirror            telescope mirror       space telescope NASA
NASA              telescope hubble       hubble telescope astronomy
launch            mirror telescope       NASA hubble space
astronomy         telescope NASA         space telescope mirror
shuttle           telescope space        telescope space NASA
test              hubble mirror          hubble telescope mission
new               NASA hubble            mirror mirror mirror
discovery         telescope astronomy    space telescope launch
time              telescope optical      space telescope discovery
universe          hubble optical         shuttle space telescope
optical           telescope discovery    hubble telescope flaw
light             telescope shuttle      two hubble space

Table 4: Fifteen most likely one, two, and three word concepts constructed using the top 25 documents retrieved for the query hubble telescope achievements on the ROBUST collection.

Overall, LCE improves effectiveness for 65%-80% of queries, depending on the data set. When used in combination with a highly accurate query performance prediction system, it may be possible to selectively expand queries and minimize the loss associated with sub-baseline
performance.

4.3 Multi-Term Concept Generation

Although we found that expansion using multi-term concepts failed to produce conclusive improvements in effectiveness, there are other potential tasks that these concepts may be useful for, such as query suggestion/reformulation, summarization, and concept mining. For example, for a query suggestion task, the original query could be used to generate a set of latent concepts which correspond to alternative query formulations. Although evaluating our model on these tasks is beyond the scope of this work, we wish to show an illustrative example of the types of concepts generated using our model.

In Table 4, we present the most likely one, two, and three term concepts generated using LCE for the query hubble telescope achievements, using the top 25 ranked documents from the ROBUST collection. It is well known that generating multi-term concepts using a unigram-based model produces unsatisfactory results, since it fails to consider term dependencies. This is not the case when generating multi-term concepts using our model. Instead, a majority of the concepts generated are well-formed and meaningful. There are several cases where the concepts are less coherent, such as mirror mirror mirror. In this case, the likelihood of the term mirror appearing in a pseudo-relevant document outweighs the language modeling features (e.g.
f_{O_Q}), which causes this non-coherent concept to have a high likelihood. Such examples are in the minority, however. Not only are the concepts generated well-formed and meaningful, but they are also topically relevant to the original query. As we see, all of the concepts generated are on topic and in some way related to the Hubble telescope. It is interesting to see that the concept hubble telescope flaw is one of the most likely three term concepts, given that it is somewhat contradictory to the original query. Despite this contradiction, documents that discuss the telescope's flaws are also likely to describe its successes, and therefore this is likely to be a meaningful concept.

One important thing to note is that the concepts LCE generates are of a different nature than those that would be generated using a bigram relevance model. For example, a bigram model would be unlikely to generate the concept telescope space NASA, since none of the bigrams that make up the concept have high likelihood. However, since our model is based on a number of different features over various types of cliques, it is more general and robust than a bigram model. Although we only provided the concepts generated for a single query, we note that the same analysis and conclusions generalize across other data sets, with coherent, topically related concepts being consistently generated using LCE.

4.4 Discussion

Our latent concept expansion technique captures two semi-orthogonal types of dependence. In information retrieval, there has been a long-standing interest in understanding the role of term dependence. Out of this research, two broad types of dependencies have been identified. The first type of dependence is syntactic dependence. This type of dependence covers phrases, term proximity, and term co-occurrence [2, 4, 5, 7, 26]. These methods capture the fact that queries implicitly or explicitly impose a certain set of positional dependencies. The second type is
semantic dependence. Examples of semantic dependence are relevance feedback, pseudo-relevance feedback, synonyms, and, to some extent, stemming [3]. These techniques have been explored on both the query and document side. On the query side, this is typically done using some form of query expansion, such as relevance models or LCE. On the document side, this is done as document expansion or document smoothing [11, 13, 24]. Although there may be some overlap between syntactic and semantic dependencies, they are mostly orthogonal.

Our model uses both types of dependencies. The use of phrase and proximity features within the model captures syntactic dependencies, whereas LCE captures query-side semantic dependence. This explains why the initial improvement in effectiveness achieved by using the MRF model is not lost after query expansion. If the same types of dependencies were captured by both the syntactic and the semantic components, LCE would be expected to perform about equally as well as relevance models. Therefore, by modeling both types of dependencies we see an additive effect, rather than an absorbing effect. An interesting area of future work is to determine whether or not modeling document-side semantic dependencies can add anything to the model. Previous results that have combined query- and document-side semantic dependencies have shown mixed results [13, 27].

5. CONCLUSIONS

In this paper we proposed a robust query expansion technique called latent concept expansion. The technique was shown to be a natural extension of the Markov random field model for information retrieval and a generalization of relevance models. LCE is novel in that it performs single or multi-term expansion within a framework that allows the modeling of term dependencies and the use of arbitrary features, whereas previous work has been based on the bag of words assumption and term occurrence features. We showed that the technique can be used to produce high quality, well
formed, topically relevant multi-term expansion concepts. The concepts generated can be used in an alternative query suggestion module. We also showed that the model is highly effective. In fact, it achieves significant improvements in mean average precision over relevance models across a selection of TREC data sets. It was also shown that the MRF model itself, without any query expansion, outperforms relevance models on large web data sets. This reconfirms previous observations that modeling dependencies via the use of proximity features within the MRF has more of an impact on larger, noisier collections than on smaller, well-behaved ones. Finally, we reiterated the importance of choosing expansion terms that model relevance, rather than the relevant documents, and showed how LCE captures both syntactic and query-side semantic dependencies. Future work will look at incorporating document-side dependencies as well.

Acknowledgments

This work was supported in part by the Center for Intelligent Information Retrieval, in part by NSF grant #CNS-0454018, in part by ARDA and NSF grant #CCF-0205575, and in part by Microsoft Live Labs. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the sponsor.

6. REFERENCES

[1] N. Abdul-Jaleel, J. Allan, W. B. Croft, F. Diaz, L. Larkey, X. Li, M. D. Smucker, and C. Wade. UMass at TREC 2004: Novelty and HARD. In Online Proceedings of the 2004 Text REtrieval Conference, 2004.
[2] C. L. A. Clarke and G. V. Cormack. Shortest-substring retrieval and ranking. ACM Transactions on Information Systems, 18(1):44-78, 2000.
[3] K. Collins-Thompson and J. Callan. Query expansion using random walk models. In Proceedings of the 14th International Conference on Information and Knowledge Management, pages 704-711, 2005.
[4] W. B.
Croft. Boolean queries and term dependencies in probabilistic retrieval models. Journal of the American Society for Information Science, 37(4):71-77, 1986.
[5] W. B. Croft, H. Turtle, and D. Lewis. The use of phrases and structured queries in information retrieval. In Proceedings of the 14th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 32-45, 1991.
[6] K. Eguchi. NTCIR-5 query expansion experiments using term dependence models. In Proceedings of the Fifth NTCIR Workshop Meeting on Evaluation of Information Access Technologies, pages 494-501, 2005.
[7] J. Fagan. Automatic phrase indexing for document retrieval: An examination of syntactic and non-syntactic methods. In Proceedings of the 10th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 91-101, 1987.
[8] J. Gao, J. Nie, G. Wu, and G. Cao. Dependence language model for information retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 170-177, 2004.
[9] D. Harper and C. J. van Rijsbergen. An evaluation of feedback in document retrieval using co-occurrence data. Journal of Documentation, 34(3):189-216, 1978.
[10] T. Joachims. A support vector method for multivariate performance measures. In Proceedings of the International Conference on Machine Learning, pages 377-384, 2005.
[11] O. Kurland and L. Lee. Corpus structure, language models, and ad hoc information retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 194-201, 2004.
[12] V. Lavrenko and W. B. Croft. Relevance-based language models. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 120-127, 2001.
[13] X. Liu and W. B. Croft. Cluster-based retrieval using language models. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 186-193, 2004.
[14] D.
Metzler and W. B. Croft.\nA Markov random field model for term dependencies.\nIn Proc.\n28th Ann.\nIntl..\nACM SIGIR Conf.\non Research and Development in Information Retrieval, pages 472-479, 2005.\n[15] D. Metzler and W. B. Croft.\nLinear feature based models for information retrieval.\nInformation Retrieval, to appear, 2006.\n[16] D. Metzler, T. Strohman, Y. Zhou, and W. B. Croft.\nIndri at terabyte track 2005.\nIn Online proceedings of the 2005 Text Retrieval Conf., 2005.\n[17] W. Morgan, W. Greiff, and J. Henderson.\nDirect maximization of average precision by hill-climbing with a comparison to a maximum entropy approach.\nTechnical report, MITRE, 2004.\n[18] P. Ogilvie and J. P. Callan.\nExperiments using the lemur toolkit.\nIn Proc.\nof the Text REtrieval Conf., 2001.\n[19] R. Papka and J. Allan.\nWhy bigger windows are better than smaller ones.\nTechnical report, University of Massachusetts, Amherst, 1997.\n[20] S. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford.\nOkapi at trec-3.\nIn Online proceedings of the Third Text Retrieval Conf., pages 109-126, 1995.\n[21] J. J. Rocchio.\nRelevance Feedback in Information Retrieval, pages 313-323.\nPrentice-Hall, 1971.\n[22] F. Song and W. B. Croft.\nA general language model for information retrieval.\nIn Proc.\neighth international conference on Information and knowledge management (CIKM 99), pages 316-321, 1999.\n[23] T. Strohman, D. Metzler, H. Turtle, and W. B. Croft.\nIndri: A language model-based serach engine for complex queries.\nIn Proc.\nof the International Conf.\non Intelligence Analysis, 2004.\n[24] T. Tao, X. Wang, Q. Mei, and C. Zhai.\nLanguage model information retrieval with document expansion.\nIn Proc.\nof HLT\/NAACL, pages 407-414, 2006.\n[25] B. Taskar, C. Guestrin, and D. Koller.\nMax-margin markov networks.\nIn Proc.\nof Advances in Neural Information Processing Systems (NIPS 2003), 2003.\n[26] C. J. 
van Rijsbergen. A theoretical basis for the use of co-occurrence data in information retrieval. Journal of Documentation, 33(2):106-119, 1977.
[27] X. Wei and W. B. Croft. LDA-based document models for ad-hoc retrieval. In Proc. 29th Ann. Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 178-185, 2006.
[28] J. Xu and W. B. Croft. Improving the effectiveness of information retrieval with local context analysis. ACM Trans. Inf. Syst., 18(1):79-112, 2000.
[29] C. Zhai and J. Lafferty. Model-based feedback in the language modeling approach to information retrieval. In Proc. 10th Intl. Conf. on Information and Knowledge Management, pages 403-410, 2001.
Latent Concept Expansion Using Markov Random Fields

ABSTRACT

Query expansion, in the form of pseudo-relevance feedback or relevance feedback, is a common technique used to improve retrieval effectiveness. Most previous approaches have ignored important issues, such as the role of features and the importance of modeling term dependencies. In this paper, we propose a robust query expansion technique based on the Markov random field model for information retrieval. The technique, called latent concept expansion, provides a mechanism for modeling term dependencies during expansion. Furthermore, the use of arbitrary features within the model provides a powerful framework for going beyond simple term occurrence features that are implicitly used by most other expansion techniques. We evaluate our technique against relevance models, a state-of-the-art language modeling query expansion technique. Our model demonstrates consistent and significant improvements in retrieval effectiveness across several TREC data sets. We also describe how our technique can be used to generate meaningful multi-term concepts for tasks such as query suggestion/reformulation.

1. INTRODUCTION

Users of information retrieval systems are required to express complex information needs in terms of Boolean expressions, a short list of keywords, a sentence, a question, or possibly a longer narrative. A great deal of information is lost during the process of translating from the information need to the actual query. For this reason, there has been a strong
interest in query expansion techniques. Such techniques are used to augment the original query to produce a representation that better reflects the underlying information need. Query expansion techniques have been well studied for various models in the past and have been shown to significantly improve effectiveness in both the relevance feedback and pseudo-relevance feedback settings [12, 21, 28, 29].

Recently, a Markov random field (MRF) model for information retrieval was proposed that goes beyond the simplistic bag-of-words assumption that underlies BM25 and the (unigram) language modeling approach to information retrieval [20, 22]. The MRF model generalizes the unigram, bigram, and various other dependence models [14]. Most past term dependence models have failed to show consistent, significant improvements over unigram baselines, with few exceptions [8]. The MRF model, however, has been shown to be highly effective across a number of tasks, including ad hoc retrieval [14, 16], named-page finding [16], and Japanese language web search [6]. Until now, the model has been used solely for ranking documents in response to a given query. In this work, we show how the model can be extended and used for query expansion using a technique that we call latent concept expansion (LCE).

There are three primary contributions of our work. First, LCE provides a mechanism for combining term dependence with query expansion. Previous query expansion techniques are based on bag-of-words models. Therefore, by performing query expansion using the MRF model, we are able to study the dynamics between term dependence and query expansion. Next, as we will show, the MRF model allows arbitrary features to be used within the model. Query expansion techniques in the past have implicitly made use of only term occurrence features. By using more robust feature sets, it is possible to produce better expansion terms that discriminate between relevant and non-relevant documents
better. Finally, our proposed approach seamlessly provides a mechanism for generating both single and multi-term concepts. Most previous techniques, by default, generate terms independently. There have been several approaches that make use of generalized concepts; however, such approaches were somewhat heuristic and carried out outside of the model [19, 28]. Our approach is both formally motivated and a natural extension of the underlying model.

The remainder of this paper is laid out as follows. In Section 2 we describe related query expansion approaches. Section 3 provides an overview of the MRF model and details our proposed latent concept expansion technique. In Section 4 we evaluate our proposed model and analyze the results. Finally, Section 5 concludes the paper and summarizes the major results.

2. RELATED WORK

One of the classic and most widely used approaches to query expansion is the Rocchio algorithm [21]. Rocchio's approach, which was developed within the vector space model, reweights the original query vector by moving the weights towards the set of relevant or pseudo-relevant documents and away from the non-relevant documents. Unfortunately, it is not possible to formally apply Rocchio's approach to a statistical retrieval model, such as language modeling for information retrieval.

A number of formalized query expansion techniques have been developed for the language modeling framework, including Zhai and Lafferty's model-based feedback and Lavrenko and Croft's relevance models [12, 29]. Both approaches attempt to use pseudo-relevant or relevant documents to estimate a better query model. Model-based feedback finds the model that best describes the relevant documents while taking a background (noise) model into consideration. This separates the content model from the background model. The content model is then interpolated with the original query model to form the expanded query. The other technique, relevance models, is more closely
related to our work. Therefore, we go into the details of the model. Much like model-based feedback, relevance models estimate an improved query model. The only difference between the two approaches is that relevance models do not explicitly model the relevant or pseudo-relevant documents. Instead, they model a more generalized notion of relevance, as we now show.

Given a query Q, a relevance model is a multinomial distribution, P(· | Q), that encodes the likelihood of each term given the query as evidence. It is computed as:

P(w | Q) = Σ_{D ∈ R_Q} P(w | D) P(D | Q) ∝ Σ_{D ∈ R_Q} P(w | D) P(Q | D) P(D)

where R_Q is the set of documents that are relevant or pseudo-relevant to query Q. In the pseudo-relevant case, these are the top ranked documents for query Q. Furthermore, it is assumed that P(D) is uniform over this set. These mild assumptions make computing the Bayesian posterior more practical.

After the model is estimated, documents are ranked by clipping the relevance model, keeping the k most likely terms from P(· | Q). This clipped distribution is then interpolated with the original, maximum likelihood query model [1]. This can be thought of as expanding the original query by k weighted terms. Throughout the remainder of this work, we refer to this instantiation of relevance models as RM3.

There has been relatively little work done in the area of query expansion in the context of dependence models [9]. However, there have been several attempts to expand using multi-term concepts. Xu and Croft's local context analysis (LCA) method combined passage-level retrieval with concept expansion, where concepts were single terms and phrases [28]. Expansion concepts were chosen and weighted using a metric based on co-occurrence statistics. However, it is not clear from the analysis how much the phrases helped over the single terms alone. Papka and Allan investigated using relevance feedback to perform multi-term concept expansion for document routing [19]. The concepts used in their work are more
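The relevance model estimate and RM3 interpolation described above can be sketched in Python. This is a minimal illustration rather than the authors' implementation: the add-one smoothing, the uniform document prior, and all function and variable names are our own assumptions.

```python
from collections import Counter

def relevance_model(query, docs, k=10, lam=0.5):
    """Sketch of a relevance model estimate with RM3 interpolation.

    P(w|Q) is approximated by summing P(w|D) over the pseudo-relevant
    documents D, weighted by the query likelihood P(Q|D) under a uniform
    document prior.  Smoothing scheme and names are illustrative only.
    """
    vocab = len({w for d in docs for w in d})
    # P(Q|D) for each pseudo-relevant document (add-one smoothing)
    weights = []
    for doc in docs:
        tf, dl = Counter(doc), len(doc)
        ql = 1.0
        for q in query:
            ql *= (tf[q] + 1.0) / (dl + vocab)
        weights.append(ql)
    z = sum(weights)
    # P(w|Q) proportional to sum over D of P(w|D) * P(Q|D)
    pw = Counter()
    for doc, w in zip(docs, weights):
        tf, dl = Counter(doc), len(doc)
        for term, count in tf.items():
            pw[term] += (w / z) * (count / dl)
    # clip to the k most likely terms and renormalize
    top = dict(pw.most_common(k))
    s = sum(top.values())
    clipped = {t: v / s for t, v in top.items()}
    # RM3: interpolate with the maximum likelihood query model
    qml, qn = Counter(query), len(query)
    return {t: lam * qml[t] / qn + (1 - lam) * clipped.get(t, 0.0)
            for t in set(qml) | set(clipped)}
```

On a small set of pseudo-relevant documents, the resulting distribution keeps most of its mass on the original query terms while promoting terms that frequently co-occur with them, which is the intended effect of the k weighted expansion terms.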
general than those used in LCA, and include InQuery query language structures, such as #UW 50 (white house), which corresponds to the concept "the terms white and house occur, in any order, within 50 terms of each other". Results showed that combining single term and large window multi-term concepts significantly improved effectiveness. However, it is unclear whether the same approach is also effective for ad hoc retrieval, due to the differences between the tasks.

3. MODEL

This section details our proposed latent concept expansion technique. As mentioned previously, the technique is an extension of the MRF model for information retrieval [14]. Therefore, we begin by providing an overview of the MRF model and our proposed extensions.

3.1 MRFs for IR

3.1.1 Basics

Markov random fields, which are undirected graphical models, provide a compact, robust way of modeling a joint distribution. Here, we are interested in modeling the joint distribution over a query Q = q1, ..., qn and a document D. It is assumed that the underlying distribution over pairs of documents and queries is a relevance distribution. That is, sampling from the distribution gives pairs of documents and queries such that the document is relevant to the query.

An MRF is defined by a graph G and a set of non-negative potential functions over the cliques in G. The nodes in the graph represent the random variables, and the edges define the independence semantics of the distribution. An MRF satisfies the Markov property, which states that a node is independent of all of its non-neighboring nodes given observed values for its neighbors. Given a graph G, a set of potentials φ_i, and a parameter vector Λ, the joint distribution over Q and D is given by:

P_{G,Λ}(Q, D) = (1 / Z_Λ) Π_{c ∈ C(G)} φ(c; Λ)

where C(G) is the set of cliques in G and Z_Λ is a normalizing constant. We follow common convention and parameterize the potentials as φ_i(c; Λ) = exp[λ_i f_i(c)], where f_i(c) is a real-valued feature function.

3.1.2 Constructing G

Given a query Q, the graph G can be
constructed in a number of ways. However, following previous work, we consider three simple variants [14]. These variants are full independence, where each query term is independent of every other term given a document; sequential dependence, which assumes a dependence exists between adjacent query terms; and full dependence, which makes no independence assumptions.

3.1.3 Parameterization

MRFs are commonly parameterized based on the maximal cliques of G. However, such a parameterization is too coarse for our needs. We need a parameterization that allows us to associate feature functions with cliques at a more fine-grained level, while keeping the number of features, and thus the number of parameters, reasonable. Therefore, we allow cliques to share feature functions and parameters based on clique sets. That is, all of the cliques within a clique set are associated with the same feature function and share a single parameter. This effectively ties together the parameters of the features associated with each set, which significantly reduces the number of parameters while still providing a mechanism for fine-tuning at the level of clique sets.

We propose seven clique sets for use with information retrieval. The first three clique sets consist of cliques that contain one or more query terms and the document node. Features over these cliques should encode how well the terms in the clique configuration describe the document. These sets are:

• T_D -- set of cliques containing the document node and exactly one query term.
• O_D -- set of cliques containing the document node and two or more query terms that appear in sequential order within the query.
• U_D -- set of cliques containing the document node and two or more query terms that appear in any order within the query.

Note that U_D is a superset of O_D. By tying the parameters among the cliques within each set, we can control how much influence each type gets. This also avoids the problem of trying to
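The document-dependent clique sets just defined can be enumerated directly from a query. The sketch below is our own illustration (names and data representation are invented); the document node, which belongs to every clique in these sets, is left implicit.

```python
from itertools import combinations

def clique_sets(query):
    """Sketch: enumerate the query-term components of the T_D, O_D,
    and U_D clique sets for a query.  The document node, present in
    every clique of these sets, is left implicit."""
    n = len(query)
    # T_D: exactly one query term per clique
    T = [(t,) for t in query]
    # O_D: contiguous subsequences of length >= 2 (sequential order)
    O = [tuple(query[i:j]) for i in range(n) for j in range(i + 2, n + 1)]
    # U_D: all subsets of size >= 2, any order; a superset of O_D
    U = [c for r in range(2, n + 1) for c in combinations(query, r)]
    return T, O, U
```

For a three-term query this yields three single-term cliques, three ordered cliques, and four unordered cliques, and every ordered clique also appears among the unordered ones, matching the superset relationship noted above.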
determine how to estimate weights for each clique within the sets. Instead, we now must only estimate a single parameter per set.

Next, we consider cliques that only contain query term nodes. These cliques, which were not considered in [14], are defined in a way analogous to those just defined, except that the cliques are made up only of query term nodes and do not contain the document node. Feature functions over these cliques should capture how compatible query terms are with one another. These clique features may take the form of language models that impose well-formedness on the terms. Therefore, we define the following query-dependent clique sets:

• T_Q -- set of cliques containing exactly one query term.
• O_Q -- set of cliques containing two or more query terms that appear in sequential order within the query.
• U_Q -- set of cliques containing two or more query terms that appear in any order within the query.

Finally, there is the clique that only contains the document node. Features over this node can be used as a type of document prior, encoding document-centric properties. This trivial clique set is then:

• D -- clique set containing only the singleton node D.

We note that our clique sets form a set cover over the cliques of G, but not a partition, since some cliques appear in multiple clique sets. After tying the parameters in our clique sets together and using the exponential potential function form, we end up with the following simplified form of the joint distribution:

P_{G,Λ}(Q, D) = (1 / Z_Λ) exp[ F_{DQ}(D, Q) + F_Q(Q) + F_D(D) ]

where F_{DQ}, F_Q, and F_D are convenience functions defined by the document- and query-dependent, query-dependent, and document-dependent components of the joint distribution, respectively. These will be used to simplify and clarify expressions derived throughout the remainder of the paper.

3.1.4 Features

Any arbitrary feature function over clique configurations can be used in the model. The correct choice of features depends largely on the retrieval task and
the evaluation metric.\nTherefore, there is unlikely to be a single, universally applicable set of features.\nTo provide an idea of the range of features available, we now briefly describe several possible types.\nPossible query term dependent features include tf, idf, named entities, term proximity, and text style, to name a few.\nMany types of document dependent features can be used as well, including document length, PageRank, readability, and genre, among others.\nSince it is not our goal here to find optimal features, we use a simple, fixed set of features that have been shown to be effective in previous work [14].\nSee Table 1 for a list of the features used.\nThese features attempt to capture term occurrence and term proximity.\nBetter feature selection in the future will likely lead to improved effectiveness.\nTable 1: Feature functions used in the Markov random field model.\nHere, tfw,D is the number of times term w occurs in document D, tf#1(qi...qi+k),D denotes the number of times the exact phrase qi...qi+k occurs in document D, tf#uw(qi...qj),D is the number of times the terms qi,...,qj appear ordered or unordered within a window of N terms, and |D| is the length of document D.\nThe cf and |C| values are analogously defined on the collection level.\nFinally, \u03b1 and \u03b2 are model hyperparameters that control smoothing for single term and phrase features, respectively.\n3.1.5 Ranking\nGiven a query Q, we wish to rank documents in descending order according to PG,\u039b(D | Q).\nAfter dropping document-independent expressions from log PG,\u039b(Q, D), we derive the following ranking function:\nPG,\u039b(D | Q) rank= FDQ(D, Q) + FD(D) (2)\nwhich is a simple weighted linear combination of feature functions that can be computed efficiently for reasonable graphs.\n3.1.6 Parameter Estimation\nNow that the model has been fully specified, the final step is to estimate the model parameters.\nAlthough MRFs are generative models, it is inappropriate to train them using conventional likelihood-based approaches because of metric divergence [17].\nThat is, the maximum likelihood estimate is unlikely to be the estimate that maximizes our evaluation metric.\nFor this reason, we discriminatively train our model to directly maximize the evaluation metric under consideration [14, 15, 25].\nSince our parameter space is small, we make use of a simple hill climbing strategy, although other more sophisticated approaches are possible [10].\n3.2 Latent Concept Expansion\nIn this section we describe how this extended MRF model can be used in a novel way to generate single and multi-term concepts that are topically related to some original query.\nAs we will show, the concepts generated using our technique can be used for query expansion or other tasks, such as suggesting alternative query formulations.\nWe assume that when a user formulates their original query, they have some set of concepts in mind, but are only able to express a small number of them in the form of a query.\nWe treat the concepts that the user has in mind, but did not explicitly express in the query, as latent concepts.\nThese latent concepts can consist of a single term, multiple terms, or some combination of the two.\nIt is, therefore, our goal to recover these latent concepts given some original query.\nThis can be accomplished within our framework by first expanding the original graph G to include the type of concept we are interested in generating.\nWe call this expanded graph H.\nIn Figure 1, the middle graph provides an example of how to construct an expanded graph that can generate single term concepts.\nSimilarly, the graph on the right illustrates an expanded graph that generates two term concepts.\nAlthough these two examples make use of the sequential dependence assumption (i.e.
dependencies between adjacent query terms), it is important to note that both the original query and the expansion concepts can use any independence structure.\nAfter H is constructed, we compute PH,\u039b(E | Q), a probability distribution over latent concepts, according to:\nPH,\u039b(E | Q) = \u03a3_{D \u2208 R} PH,\u039b(E, D | Q)\nwhere R is the universe of all possible documents and E is some latent concept that may consist of one or more terms.\nSince it is not practical to compute this summation, we must approximate it.\nWe notice that PH,\u039b(Q, E, D) is likely to be peaked around those documents D that are highly ranked according to query Q. Therefore, we approximate PH,\u039b(E | Q) by only summing over a small subset of relevant or pseudo-relevant documents for query Q.\nThis is computed as follows:\nPH,\u039b(E | Q) \u2248 \u03a3_{D \u2208 RQ} exp[ FDQ(Q, D) + FD(D) + FDQ(E, D) + FQ(E) ] (3)\nwhere RQ is a set of relevant or pseudo-relevant documents for query Q and all clique sets are constructed using H.\nAs we see, the likelihood contribution for each document in RQ is a combination of the original query's score for the document (see Equation 2), concept E's score for the document, and E's document-independent score.\nTherefore, this equation can be interpreted as measuring how well Q and E account for the top ranked documents and the \"goodness\" of E, independent of the documents.\nFor maximum robustness, we use a different set of parameters for FDQ(Q, D) and FDQ(E, D), which allows us to weight the term, ordered, and unordered window features differently for the original query and the candidate expansion concept.\n3.2.1 Query Expansion\nTo use this framework for query expansion, we first choose an expansion graph H that encodes the latent concept structure we are interested in expanding the query with.\nWe then select the k latent concepts with the highest likelihood given by Equation 3.\nA new graph G' is constructed by augmenting the original graph G with the k expansion concepts E1,..., Ek.\nFinally, documents are ranked according to PG',\u039b(D | Q, E1,..., Ek) using Equation
2.\n3.2.2 Comparison to Relevance Models\nInspecting Equations 1 and 3 reveals the close connection that exists between LCE and relevance models.\nBoth models essentially compute the likelihood of a term (or concept) in the same manner.\nFigure 1: Graphical model representations of relevance modeling (left), latent concept expansion using single term concepts (middle), and latent concept expansion using two term concepts (right) for a three term query.\nIt is easy to see that just as the MRF model can be viewed as a generalization of language modeling, so too can LCE be viewed as a generalization of relevance models.\nThere are important differences between MRFs\/LCE and unigram language models\/relevance models.\nSee Figure 1 for graphical model representations of both models.\nUnigram language models and relevance models are based on the multinomial distribution.\nThis distributional assumption locks the model into the bag of words representation and the implicit use of term occurrence features.\nHowever, the distribution underlying the MRF model allows us to move beyond both of these assumptions, by modeling both dependencies between query terms and allowing arbitrary features to be explicitly used.\nMoving beyond the simplistic bag of words assumption in this way results in a general, robust model and, as we show in the next section, translates into significant improvements in retrieval effectiveness.\n4.\nEXPERIMENTAL RESULTS\nIn order to better understand the strengths and weaknesses of our technique, we evaluate it on a wide range of data sets.\nTable 2 provides a summary of the TREC data sets considered.\nThe WSJ, AP, and ROBUST collections are smaller and consist entirely of newswire articles, whereas WT10g and GOV2 are large web collections.\nFor each data set, we split the available topics into a training and test set, where the training set is used solely for parameter estimation and the test set is used for evaluation purposes.\nAll experiments were
carried out using a modified version of Indri, which is part of the Lemur Toolkit [18, 23].\nAll collections were stopped using a standard list of 418 common terms and stemmed using a Porter stemmer.\nIn all cases, only the title portion of the TREC topics is used to construct queries.\nWe construct G using the sequential dependence assumption for all data sets [14].\n4.1 Ad Hoc Retrieval Results\nWe now investigate how well our model performs in practice in a pseudo-relevance feedback setting.\nWe compare unigram language modeling (with Dirichlet smoothing), the MRF model (without expansion), relevance models, and LCE to better understand how each model performs across the various data sets.\nFor the unigram language model, the smoothing parameter was trained.\nFor the MRF model, we train the model parameters (i.e. \u039b) and model hyperparameters (i.e. \u03b1, \u03b2).\nFor RM3 and LCE, we also train the number of pseudo-relevant feedback documents used and the number of expansion terms.\nTable 2: Overview of TREC collections and topics.\n4.1.1 Expansion with Single Term Concepts\nWe begin by evaluating how well our model performs when expanding using only single terms.\nBefore we describe and analyze the results, we explicitly state how expansion term likelihoods are computed under this setup (i.e.
using the sequential dependence assumption, expanding with single term concepts, and using our feature set).\nThe expansion term likelihoods are computed as follows:\nPH,\u039b(w | Q) \u2248 \u03a3_{D \u2208 RQ} exp[ \u03bbTD \u03a3_{q \u2208 Q} fTD(q, D) + \u03bbOD \u03a3_{b \u2208 Q} fOD(b, D) + \u03bbUD \u03a3_{b \u2208 Q} fUD(b, D) + \u03bb~TD fTD(w, D) + \u03bb~TQ fTQ(w) ]\nwhere b \u2208 Q denotes the set of bigrams in Q.\nThis equation clearly shows how LCE differs from relevance models.\nWhen we set \u03bbTD = \u03bb~TD = 1 and all other parameters to 0, we obtain the exact formula that is used to compute term likelihoods in the relevance modeling framework.\nTherefore, LCE adds two very important factors to the equation.\nFirst, it adds the ordered and unordered window features that are applied to the original query.\nSecond, it applies an intuitive tf.idf-like form to the candidate expansion term w.\nThe idf factor, which is not present in relevance models, plays an important role in expansion term selection.\nFigure 2: Histograms that demonstrate and compare the robustness of relevance models (RM3) and latent concept expansion (LCE) with respect to the query likelihood model (QL) for the AP, ROBUST, and WT10G data sets.\nThe results, evaluated using mean average precision, are given in Table 3.\nAs we see, the MRF model, relevance models, and LCE always significantly outperform the unigram language model.\nIn addition, LCE shows significant improvements over relevance models across all data sets.\nThe relative improvement over relevance models is 6.9% for AP, 12.9% for WSJ, 6.5% for ROBUST, 16.7% for WT10G, and 7.3% for GOV2.\nFurthermore, LCE shows small, but not significant, improvements over relevance modeling for metrics such as precision at 5, 10, and 20.\nHowever, both relevance modeling and LCE show statistically significant improvements in such metrics over the unigram language model.\nAnother interesting result is that the MRF model is statistically equivalent to relevance models on the two web data sets.\nIn fact, the MRF model outperforms relevance models on the WT10g data set.\nThis reiterates the importance of non-unigram, proximity-based
features for content-based web search observed previously [14, 16].\nAlthough our model has more free parameters than relevance models, there is surprisingly little overfitting.\nInstead, the model exhibits good generalization properties.\n4.1.2 Expansion with Multi-Term Concepts\nWe also investigated expanding using both single and two word concepts.\nFor each query, we expanded using a set of single term concepts and a set of two term concepts.\nThe sets were chosen independently.\nUnfortunately, only negligible increases in mean average precision were observed.\nThis result may be due to the fact that strong correlations exist between the single term expansion concepts.\nWe found that the two word concepts chosen often consisted of two highly correlated terms that are also chosen as single term concepts.\nFor example, the two term concept \"stock market\" was chosen while the single term concepts \"stock\" and \"market\" were also chosen.\nTherefore, many two word concepts are unlikely to increase the discriminative power of the expanded query.\nThis result suggests that concepts should be chosen according to some criterion that also takes novelty, diversity, or term correlations into account.\nAnother potential issue is the feature set used.\nOther feature sets may ultimately yield different results, especially if they reduce the correlation among the expansion concepts.\nTherefore, our experiments yield no conclusive results with regard to expansion using multi-term concepts.\nInstead, the results introduce interesting open questions and directions for future exploration.\nTable 3: Test set mean average precision for language modeling (LM), Markov random field (MRF), relevance models (RM3), and latent concept expansion (LCE).\nThe superscripts \u03b1, \u03b2, and \u03b3 indicate statistically significant improvements (p <0.05) over LM, MRF, and RM3, respectively.\n4.2 Robustness\nAs we have shown, relevance models and latent concept expansion can
significantly improve retrieval effectiveness over the baseline query likelihood model.\nIn this section we analyze the robustness of these two methods.\nHere, we define robustness as the number of queries whose effectiveness is improved\/hurt (and by how much) as the result of applying these methods.\nA highly robust expansion technique will significantly improve many queries and only minimally hurt a few.\nFigure 2 provides an analysis of the robustness of relevance modeling and latent concept expansion for the AP, ROBUST, and WT10G data sets.\nThe analysis for the two data sets not shown is similar.\nThe histograms provide, for various ranges of relative decreases\/increases in mean average precision, the number of queries that were hurt\/improved with respect to the query likelihood baseline.\nAs the results show, LCE exhibits strong robustness for each data set.\nFor AP, relevance models improve 38 queries and hurt 11, whereas LCE improves 35 and hurts 14.\nAlthough relevance models improve the effectiveness of 3 more queries than LCE, the relative improvement exhibited by LCE is significantly larger.\nFor the ROBUST data set, relevance models improve 67 queries and hurt 32, and LCE improves 77 and hurts 22.\nFinally, for the WT10G collection, relevance models improve 32 queries and hurt 16, and LCE improves 35 and hurts 14.\nAs with AP, the amount of improvement exhibited by LCE versus relevance models is significantly larger for both the ROBUST and WT10G data sets.\nIn addition, when LCE does hurt performance, it is less likely to hurt as much as relevance modeling, which is a desirable property.\n1 word concepts 2 word concepts 3 word concepts telescope hubble telescope hubble space telescope hubble telescope space space hubble space space telescope hubble mirror telescope mirror space telescope NASA NASA telescope hubble hubble telescope astronomy launch mirror telescope NASA hubble space astronomy telescope NASA space telescope mirror shuttle telescope space telescope space NASA test hubble mirror hubble telescope mission new NASA hubble mirror mirror mirror discovery telescope astronomy space telescope launch time telescope optical space telescope discovery universe hubble optical shuttle space telescope optical telescope discovery hubble telescope flaw light telescope shuttle two hubble space\nTable 4: Fifteen most likely one, two, and three word concepts constructed using the top 25 documents retrieved for the query hubble telescope achievements on the ROBUST collection.\nOverall, LCE improves effectiveness for 65%-80% of queries, depending on the data set.\nWhen used in combination with a highly accurate query performance prediction system, it may be possible to selectively expand queries and minimize the loss associated with sub-baseline performance.\n4.3 Multi-Term Concept Generation\nAlthough we found that expansion using multi-term concepts failed to produce conclusive improvements in effectiveness, there are other potential tasks that these concepts may be useful for, such as query suggestion\/reformulation, summarization, and concept mining.\nFor example, for a query suggestion task, the original query could be used to generate a set of latent concepts which correspond to alternative query formulations.\nAlthough evaluating our model on these tasks is beyond the scope of this work, we wish to show an illustrative example of the types of concepts generated using our model.\nIn Table 4, we present the most likely one, two, and three term concepts generated using LCE for the query hubble telescope achievements using the top 25 ranked documents from the ROBUST collection.\nIt is well known that generating multi-term concepts using a unigram-based model produces unsatisfactory results, since it fails to consider term dependencies.\nThis is not the case when generating multi-term concepts using our model.\nInstead, a majority of the concepts generated are well-formed and
meaningful.\nThere are several cases where the concepts are less coherent, such as mirror mirror mirror.\nIn this case, the likelihood of the term mirror appearing in a pseudo-relevant document outweighs the \"language modeling\" features (e.g. fOQ), which causes this non-coherent concept to have a high likelihood.\nSuch examples are in the minority, however.\nNot only are the concepts generated well-formed and meaningful, but they are also topically relevant to the original query.\nAs we see, all of the concepts generated are on topic and in some way related to the Hubble telescope.\nIt is interesting to see that the concept hubble telescope flaw is one of the most likely three term concepts, given that it is somewhat contradictory to the original query.\nDespite this contradiction, documents that discuss the telescope flaws are likely to describe the successes as well, and therefore this is likely to be a meaningful concept.\nOne important thing to note is that the concepts LCE generates are of a different nature than those that would be generated using a bigram relevance model.\nFor example, a bigram model would be unlikely to generate the concept telescope space NASA, since none of the bigrams that make up the concept have high likelihood.\nHowever, since our model is based on a number of different features over various types of cliques, it is more general and robust than a bigram model.\nAlthough we only provided the concepts generated for a single query, we note that the same analysis and conclusions generalize across other data sets, with coherent, topically related concepts being consistently generated using LCE.\n4.4 Discussion\nOur latent concept expansion technique captures two semi-orthogonal types of dependence.\nIn information retrieval, there has been a long-term interest in understanding the role of term dependence.\nOut of this research, two broad types of dependencies have been identified.\nThe first type of dependence is syntactic
dependence.\nThis type of dependence covers phrases, term proximity, and term co-occurrence [2, 4, 5, 7, 26].\nThese methods capture the fact that queries implicitly or explicitly impose a certain set of positional dependencies.\nThe second type is semantic dependence.\nExamples of semantic dependence are relevance feedback, pseudo-relevance feedback, synonyms, and to some extent stemming [3].\nThese techniques have been explored on both the query and document side.\nOn the query side, this is typically done using some form of query expansion, such as relevance models or LCE.\nOn the document side, this is done as document expansion or document smoothing [11, 13, 24].\nAlthough there may be some overlap between syntactic and semantic dependencies, they are mostly orthogonal.\nOur model uses both types of dependencies.\nThe use of phrase and proximity features within the model captures syntactic dependencies, whereas LCE captures query-side semantic dependence.\nThis explains why the initial improvement in effectiveness achieved by using the MRF model is not lost after query expansion.\nIf syntactic and semantic features captured the same types of dependencies, LCE would be expected to perform about as well as relevance models.\nTherefore, by modeling both types of dependencies we see an additive effect, rather than an absorbing effect.\nAn interesting area of future work is to determine whether or not modeling document-side semantic dependencies can add anything to the model.\nPrevious results that have combined query- and document-side semantic dependencies have shown mixed results [13, 27].\n5.\nCONCLUSIONS\nIn this paper we proposed a robust query expansion technique called latent concept expansion.\nThe technique was shown to be a natural extension of the Markov random field model for information retrieval and a generalization of relevance models.\nLCE is novel in that it performs single or multi-term expansion within a framework that
allows the modeling of term dependencies and the use of arbitrary features, whereas previous work has been based on the bag of words assumption and term occurrence features.\nWe showed that the technique can be used to produce high-quality, well-formed, topically relevant multi-term expansion concepts.\nThe concepts generated can be used in an alternative query suggestion module.\nWe also showed that the model is highly effective.\nIn fact, it achieves significant improvements in mean average precision over relevance models across a selection of TREC data sets.\nIt was also shown that the MRF model itself, without any query expansion, outperforms relevance models on large web data sets.\nThis reconfirms previous observations that modeling dependencies via the use of proximity features within the MRF has more of an impact on larger, noisier collections than smaller, well-behaved ones.\nFinally, we reiterated the importance of choosing expansion terms that model relevance, rather than the relevant documents, and showed how LCE captures both syntactic and query-side semantic dependencies.\nFuture work will look at incorporating document-side dependencies, as well."} {"id":"H-24","title":"Investigating the Querying and Browsing Behavior of Advanced Search Engine Users","abstract":"One way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone.
In this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search. The results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced. Our findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.","lvl-1":"Investigating the Querying and Browsing Behavior of Advanced Search Engine Users Ryen W. White Microsoft Research One Microsoft Way Redmond, WA 98052 ryenw@microsoft.com Dan Morris Microsoft Research One Microsoft Way Redmond, WA 98052 dan@microsoft.com ABSTRACT One way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone.\nIn this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search.\nThe results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced.\nOur findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: query formulation, search process, relevance feedback.\nGeneral Terms Experimentation, Human Factors.\n1.\nINTRODUCTION The 
formulation of query statements that capture both the salient aspects of information needs and are meaningful to Information Retrieval (IR) systems poses a challenge for many searchers [3].\nCommercial Web search engines such as Google, Yahoo!, and Windows Live Search offer users the ability to improve the quality of their queries using query operators such as quotation marks, plus and minus signs, and modifiers that restrict the search to a particular site or type of file.\nThese techniques can be useful in improving result precision, yet, other than via log analyses (e.g., [15][27]), they have generally been overlooked by the research community in attempts to improve the quality of search results.\nIR research has generally focused on alternative ways for users to specify their needs rather than increasing the uptake of advanced syntax.\nResearch on practical techniques to supplement existing search technology and support users has been intensifying in recent years (e.g. [18][34]).\nHowever, it is challenging to implement such techniques at large scale with tolerable latencies.\nTypical queries submitted to Web search engines take the form of a series of tokens separated by spaces.\nThere is generally an implied Boolean AND operator between tokens that restricts search results to documents containing all query terms.\nDe Lima and Pedersen [7] investigated the effect of parsing, phrase recognition, and expansion on Web search queries.\nThey showed that the automatic recognition of phrases in queries can improve result precision in Web search.\nHowever, the value of advanced syntax for typical searchers has generally been limited, since most users do not know about advanced syntax or do not understand how to use it [15].\nSince it appears operators can help retrieve relevant documents, further investigation of their use is warranted.\nIn this paper we explore the use of query operators in more detail and propose alternative applications that do not require all users
to use advanced syntax explicitly.\nWe hypothesize that searchers who use advanced query syntax demonstrate a degree of search expertise that the majority of the user population does not; an assertion supported by previous research [13].\nStudying the behavior of these advanced search engine users may yield important insights about searching and result browsing from which others may benefit.\nUsing logs gathered from a large number of consenting users, we investigate differences between the search behavior of those that use advanced syntax and those that do not, and differences in the information those users target.\nWe are interested in answering three research questions: (i) Is there a relationship between the use of advanced syntax and other characteristics of a search?\n(ii) Is there a relationship between the use of advanced syntax and post-query navigation behaviors?\n(iii) Is there a relationship between the use of advanced syntax and measures of search success?\nThrough an experimental study and analysis, we offer potential answers for each of these questions.\nA relationship between the use of advanced syntax and any of these features could support the design of systems tailored to advanced search engine users, or use advanced users' interactions to help non-advanced users be more successful in their searches.\nWe describe related work in Section 2, the data we used in this log-based study in Section 3, the search characteristics on which we focus our analysis in Section 4, and the findings of this analysis in Section 5.\nIn Section 6 we discuss the implications of this research, and we conclude in Section 7.\n2.\nRELATED WORK Factors such as lack of domain knowledge, poor understanding of the document collection being searched, and a poorly developed information need can all influence the quality of the queries that users submit to IR systems ([24],[28]).\nThere has been a variety of research into different ways of helping users specify their information
needs more effectively.\nBelkin et al. [4] experimented with providing additional space for users to type a more verbose description of their information needs.\nA similar approach was attempted by Kelly et al. [18], who used clarification forms to elicit additional information about the search context from users.\nThese approaches have been shown to be effective in best-match retrieval systems where longer queries generally lead to more relevant search results [4].\nHowever, in Web search, where many of the systems are based on an extended Boolean retrieval model, longer queries may actually hurt retrieval performance, leading to a small number of potentially irrelevant results being retrieved.\nIt is not simply sufficient to request more information from users; this information must be of better quality.\nRelevance Feedback (RF) [22] and interactive query expansion [9] are popular techniques that have been used to improve the quality of information that users provide to IR systems regarding their information needs.\nIn the case of RF, the user presents the system with examples of relevant information that are then used to formulate an improved query or retrieve a new set of documents.\nIt has proven difficult to get users to use RF in the Web domain due to difficulty in conveying the meaning and the benefit of RF to typical users [17].\nQuery suggestions offered based on query logs have the potential to improve retrieval performance with limited user burden.\nThis approach is limited to re-executing popular queries, and searchers often ignore the suggestions presented to them [1].\nIn addition, both of these techniques do not help users learn to produce more effective queries.\nMost commercial search engines provide advanced query syntax that allows users to specify their information needs in more detail.\nQuery modifiers such as '+' (plus), '\u2212' (minus), and '\"' (double quotes) can be used to emphasize, de-emphasize, and group query terms.\nBoolean
operators (AND, OR, and NOT) can join terms and phrases, and modifiers such as site: and link: can be used to restrict the search space.\nQueries created with these techniques can be powerful.\nHowever, this functionality is often hidden from the immediate view of the searcher, and unless she knows the syntax, she must use text fields, pull-down menus and combo boxes available via a dedicated advanced search interface to access these features.\nLog-based analysis of users' interactions with the Excite and AltaVista search engines has shown that only 10-20% of queries contained any advanced syntax [14][25].\nThis analysis can be a useful way of capturing characteristics of users interacting with IR systems.\nResearch in user modeling [6] and personalization [30] has shown that gathering more information about users can improve the effectiveness of searches, but this requires more information about users than is typically available from interaction logs alone.\nUnless coupled with a qualitative technique, such as a post-session questionnaire [23], it can be difficult to associate interactions with user characteristics.\nIn our study we conjecture that given the difficulty in locating advanced search features within the typical search interface, and the potential problems in understanding the syntax, those users that do use advanced syntax regularly represent a distinct class of searchers who will exhibit other common search behaviors.\nOther studies of advanced searchers' search behaviors have attempted to better understand the strategic knowledge they have acquired.\nHowever, such studies are generally limited in size (e.g., [13][19]) or focus on domain expertise in areas such as healthcare or e-commerce (e.g., [5]).\nNonetheless, they can give valuable insight about the behaviors of users with domain, system, or search expertise that exceeds that of the average user.\nQuerying behavior in particular has been studied extensively to better understand users [31] and
support other users [16].

In this paper we study other search characteristics of users of advanced syntax, in an attempt to determine whether there is anything different about how these users search, and whether their searches can be used to benefit those who do not make use of the advanced features of search engines. To do this we use interaction logs gathered from a large set of consenting users over a prolonged period. In the next section we describe the data we use to study the behavior of users who use advanced syntax, relative to those who do not.

3. DATA

To perform this study we required a description of the querying and browsing behavior of many searchers, preferably over a period of time, to allow patterns in user behavior to be analyzed. To obtain these data we mined the interaction logs of consenting Web users over a period of 13 weeks, from January to April 2006. When downloading a partner client-side application, users were invited to consent to their interaction with Web pages being anonymously recorded (with a unique identifier assigned to each user) and used to improve the performance of future systems.¹ The information contained in these log entries included a unique identifier for the user, a timestamp for each page view, a unique browser window identifier (to resolve ambiguities in determining the browser in which a page was viewed), and the URL of the Web page visited. This provided us with sufficient data on querying behavior (from interaction with search engines) and browsing behavior (from interaction with the pages that follow a search) to investigate search behavior more broadly.

In addition to the data gathered during the course of this study, we also had relevance judgments for documents that users examined for 10,680 unique query statements present in the interaction logs. These judgments were assigned on a six-point scale by trained human judges at the time the data were collected. We use these
judgments in this analysis to assess the relevance of sites users visited on their browse trails away from search result pages.

We studied the interaction logs of 586,029 unique users, who submitted millions of queries to three popular search engines (Google, Yahoo!, and MSN Search) over the 13-week duration of the study. To limit the effect of search engine bias, we used as advanced syntax four operators common to all three search engines: + (plus), − (minus), " " (double quotes), and site: (to restrict the search to a domain or Web page). 1.12% of the queries submitted contained at least one of these four operators, and 51,080 users (8.72%) used query operators in at least one of their queries. In the remainder of this paper, we refer to these users as advanced searchers. We acknowledge that the direct relationship between query syntax usage and search expertise has been studied (and shown) in only a few studies (e.g., [13]), but we feel that it is a reasonable criterion for a log-based investigation. We conjecture that these advanced searchers do possess a high level of search expertise, and we will show later in the paper that they demonstrate behavioral characteristics consistent with such expertise.

¹ It is worth noting that if users did not provide their consent, their interaction was not recorded or analyzed in this study.

To handle potential outliers that might skew our data analysis, we removed users who submitted fewer than 50 queries in the study's 13-week duration. This left us with 188,405 users (37,795, or 20.1%, advanced and 150,610, or 79.9%, non-advanced), whose interactions we study in more detail. If significant differences emerge between these groups, it is conceivable that these interactions could be used to automatically classify users and adjust a search system's interface and result weighting to better match the current user. The privacy of our volunteers was maintained throughout the entire course of the
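The classification criterion described here (any use of +, −, quotes, or site: in at least one query, combined with the 50-query activity threshold) can be sketched as follows. This is a minimal illustration: the query-log format and the helper names are assumptions, not the study's actual code.

```python
import re

# The four operators common to all three engines (Section 3):
# double quotes, a +/- term prefix, and a site: restriction.
OPERATOR_PATTERN = re.compile(r'"|(?:^|\s)[+-]\S|(?:^|\s)site:\S')

def uses_advanced_syntax(query):
    """True if the query contains +, -, "", or site: syntax."""
    return bool(OPERATOR_PATTERN.search(query))

def classify_users(queries_by_user, min_queries=50):
    """Split users into advanced / non-advanced searchers.

    `queries_by_user` maps a user id to that user's list of queries.
    Users below the activity threshold are discarded, as in the study;
    a remaining user is 'advanced' if any query used operator syntax.
    """
    advanced, non_advanced = set(), set()
    for user, queries in queries_by_user.items():
        if len(queries) < min_queries:
            continue  # outlier removal: too few queries to characterize
        if any(uses_advanced_syntax(q) for q in queries):
            advanced.add(user)
        else:
            non_advanced.add(user)
    return advanced, non_advanced
```

Note that requiring a non-space character after + or − avoids counting mid-word hyphens (e.g., "e-commerce") or isolated dashes as operators.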
study: no personal information was elicited about participants, each was assigned a unique anonymous identifier that could not be traced back to them, and we made no attempt to identify a particular user or to study individual behavior in any way. All findings were aggregated over multiple users, and no information other than consent for logging was elicited.

To find out more about these users, we studied whether those using advanced syntax exhibited other search behaviors not observed in those who did not use this syntax. We focused on querying, navigation, and overall search success to compare the user groups. In the next section we describe in more detail the search features that we used.

4. SEARCH FEATURES

We chose features that describe a variety of aspects of the search process: queries, result clicks, post-query browsing, and search success. The query and result-click characteristics we examine are described in more detail in Table 1.

Table 1. Query and result-click features (per user).

  Feature                      Meaning
  Queries Per Second (QPS)     Avg. number of queries per second between initial query and end of session
  Query Repeat Rate (QRR)      Fraction of queries that are repeats
  Query Word Length (QWL)      Avg. number of words per query
  Queries Per Day (QPD)        Avg. number of queries per day
  Avg. Click Position (ACP)    Avg. rank of clicked results
  Click Probability (CP)       Ratio of result clicks to queries
  Avg. Seconds To Click (ASC)  Avg. interval between search and result click

These seven features give us a useful overview of users' direct interactions with search engines, but not of how users look for relevant information beyond the result page or how successful they are in locating relevant information. Therefore, in addition to these characteristics, we also studied relevant aspects of users' post-query browsing behavior. To do this, we extracted search trails from the interaction logs described in the previous section. A search
trail is a series of visited Web pages connected via a hyperlink trail, initiated with a search result page and terminating on one of the following events: navigation to any page not linked from the current page, closing of the active browser window, or a session inactivity timeout of 30 minutes. More detail on the extraction of these search trails is provided in previous work [33]. In total, around 12.5 million search trails (containing around 60 million documents) were extracted from the logs for all users. The median number of search trails per user was 30, and the median number of steps per trail was 3. All search trails contained one search result page and at least one page on a hyperlink trail leading from the result page.

The extraction of these trails allowed us to study aspects of post-query browsing behavior, namely: the average duration of users' search sessions, the average duration of users' search trails, the average display time of each document, the average number of steps in users' search trails, the number of branches in users' navigation patterns, and the number of back operations in users' search trails. All search trails contain at least one branch, representing any forward motion on the browse path. A trail can have additional branches if the user clicks the browser's back button and immediately proceeds forward to another page prior to the next (if any) back operation. The post-query browsing features are described further in Table 2.

Table 2. Post-query browsing features (per trail).

  Feature               Meaning
  Session Seconds (SS)  Average session length (in seconds)
  Trail Seconds (TS)    Average trail length (in seconds)
  Display Seconds (DS)  Average display time for each page on the trail (in seconds)
  Num. Steps (NS)       Average number of steps from the page following the results page to the end of the trail
  Num. Branches (NB)    Average number of branches
  Num. Backs (NBA)      Average number of back operations

As well as using these attributes of
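The trail-termination rules above translate into a simple log-segmentation routine. The sketch below is a minimal illustration, assuming per-window, time-ordered events with hypothetical field names (`is_result_page`, `linked_from_prev`); the authors' actual extraction [33] is more involved.

```python
from datetime import timedelta

TIMEOUT = timedelta(minutes=30)  # session inactivity timeout

def extract_trails(events):
    """Segment one browser window's page views into search trails.

    `events` is a time-ordered list of dicts with keys:
      'time' (datetime), 'url' (str),
      'is_result_page' (bool)   - page is a search-engine result page,
      'linked_from_prev' (bool) - page was reached via a link on the
                                  previously viewed page.
    A trail starts at a result page and ends on navigation to a page
    not linked from the current one, or on a 30-minute inactivity gap
    (window closes simply end the event stream).
    """
    trails, current, prev_time = [], None, None
    for ev in events:
        timed_out = prev_time is not None and ev['time'] - prev_time > TIMEOUT
        if current is not None and (timed_out or not ev['linked_from_prev']):
            trails.append(current)       # close the active trail
            current = None
        if current is None:
            if ev['is_result_page']:
                current = [ev['url']]    # a new trail begins at a result page
        else:
            current.append(ev['url'])
        prev_time = ev['time']
    if current is not None:
        trails.append(current)
    # keep only trails with at least one post-result step, as in the paper
    return [t for t in trails if len(t) >= 2]
```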
users' interactions, we also used the relevance judgments described earlier in the paper to measure the degree of search success, based on the judgments assigned to pages that lie on the search trail. Given that we did not have access to relevance assessments from our users, we approximated these assessments using judgments collected as part of ongoing research into search engine performance.² These judgments were created by trained human assessors for 10,680 unique queries. Of the 1,420,625 steps on search trails that started with any one of these queries, we have relevance judgments for 802,160 (56.4%). We use these judgments to approximate search success for a given trail in a number of ways; in Table 3 we list these measures.

² Our assessment of search success is fairly crude compared with what would have been possible had we been able to contact our subjects. We address this problem in a manner similar to that used by the Text REtrieval Conference (TREC) [21]: since we cannot determine perceived search success, we approximate search success based on the assigned relevance scores of visited documents.

Table 3. Relevance judgment measures (per trail).

  Measure   Meaning
  First     Judgment assigned to the first page in the trail
  Last      Judgment assigned to the last page in the trail
  Average   Average judgment across all pages in the trail
  Maximum   Maximum judgment across all pages in the trail

These measures are used during our analysis to estimate the relevance of the pages viewed at different stages in the trails, and allow us to estimate search success in different ways. We chose multiple measures because users may encounter relevant information in many ways and at different points in the trail (e.g., in a single highly relevant document, or gradually over the course of the trail). The features described in this section allowed us to analyze important attributes of the search process that must be better understood if we are to support users in their
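Given the judged pages on one trail, the four measures in Table 3 reduce to a few lines. A minimal sketch follows; treating only the judged pages as input (unjudged steps simply omitted from the list) is an assumption for illustration.

```python
def trail_success(judgments):
    """Per-trail success measures from Table 3.

    `judgments` is the list of six-point relevance judgments (1-6)
    for the judged pages on one trail, in visit order.
    Returns (first, last, average, maximum).
    """
    if not judgments:
        return None  # no judged pages on this trail
    return (judgments[0],
            judgments[-1],
            sum(judgments) / len(judgments),
            max(judgments))
```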
searching. In the next section we present the findings of the analysis.

5. FINDINGS

Our analysis is divided into three parts: analysis of query behavior and interaction with the results page, analysis of post-query navigation behavior, and analysis of search success in terms of locating judged-relevant documents. Parametric statistical testing is used, and the level of significance for the statistical tests is set to .05.

5.1 Query and result-click behavior

We were interested in comparing the query and result-click behaviors of our advanced and non-advanced users. In Table 4 we show the mean values of each of the seven search features for our users. We use padvanced to denote the percentage of a user's queries that contain advanced syntax (i.e., padvanced = 0% means a user never used advanced syntax). The table shows values for users who did not use query operators (0%), users who submitted at least one query with operators (> 0%), through to users whose queries contained operators at least three-quarters of the time (≥ 75%).

Table 4. Query and result-click features (per user).

           padvanced
  Feature  0%      > 0%    ≥ 25%   ≥ 50%   ≥ 75%
  QPS      .028    .010    .012    .013    .015
  QRR      .53     .57     .58     .61     .62
  QWL      2.02    2.83    3.40    3.66    4.04
  QPD      2.01    3.52    2.70    2.66    2.31
  ACP      6.83    9.12    10.09   10.17   11.37
  CP       .57     .51     .47     .47     .47
  ASC      87.71   88.16   112.44  102.12  79.13
  %Users   79.90%  20.10%  .79%    .18%    .04%

We compared the query and result-click features of users who did not use any advanced syntax in any of their queries (the 0% column in Table 4) with those who used advanced syntax in at least one query (the > 0% column). We performed an independent-measures t-test between these groups for each of the features. Since this analysis involved many features, we used a Bonferroni correction to control the experiment-wise error rate, setting the alpha level (α) to .007, i.e., .05 divided by the number of
features. This correction reduces the number of Type I errors, i.e., rejections of null hypotheses that are true. All differences between the groups were statistically significant (all t(188403) ≥ 2.81, all p ≤ .002). However, given the large sample sizes, all differences in the means were likely to be statistically significant, so we also applied Cohen's d to determine the effect size of each comparison between the advanced and non-advanced user groups. Ordered by descending effect size, the main findings are that, relative to non-advanced users, advanced search engine users:

• Query less frequently in a session (d = 1.98)
• Compose longer queries (d = .69)
• Click further down the result list (d = .67)
• Submit more queries per day (d = .49)
• Are less likely to click on a result (d = .32)
• Repeat queries more often (d = .16)

The increased likelihood that advanced search engine users will click further down the result list implies that they may be less trusting of search engines' ability to rank the most relevant document first, that they are more willing to explore beyond the most popular pages for a given query, that they may be submitting different types of queries (e.g., informational rather than navigational), or that they may have customized their search settings to display more than the default top-10 results. Many of these findings are consistent with those identified in other studies of advanced searchers' querying and result-click behaviors [13][34]. Given that the only criterion we employed to classify a user as an advanced searcher was the use of advanced syntax, it is promising that this criterion seems to identify users who interact in a way consistent with that reported previously for those with more search expertise.

As mentioned earlier, the advanced search engine users for whom the average values shown in Table 4 are computed are those who submit 50 or
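The testing procedure used here (an independent-measures t-test with a Bonferroni-corrected α, plus Cohen's d on a pooled standard deviation) can be sketched as follows. The normal approximation to the t distribution and the tiny synthetic samples in the test are illustrative assumptions; the approximation is reasonable only at sample sizes like those in this study.

```python
import math
from statistics import mean, stdev, NormalDist

def compare_feature(advanced, non_advanced, n_features=7):
    """Independent-measures comparison of one feature between groups.

    Returns (t, two_sided_p, significant, cohens_d). The p-value uses a
    normal approximation to the t distribution; alpha is
    Bonferroni-corrected (.05 / 7 ≈ .007, as in Section 5.1).
    """
    alpha = 0.05 / n_features
    n1, n2 = len(advanced), len(non_advanced)
    m1, m2 = mean(advanced), mean(non_advanced)
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * stdev(advanced) ** 2 +
                    (n2 - 1) * stdev(non_advanced) ** 2) / (n1 + n2 - 2))
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    p = 2 * (1 - NormalDist().cdf(abs(t)))  # two-sided
    d = (m1 - m2) / sp                       # Cohen's d (pooled-SD form)
    return t, p, p < alpha, d
```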
more queries in the 13-week duration of the data collection and submit at least one query containing advanced query operators; in other words, we consider users whose percentage of queries containing advanced syntax, padvanced, is greater than zero. The use of query operators in any queries, regardless of frequency, suggests that a user knows of the operators' existence, and implies a greater degree of familiarity with the search system. We further hypothesized that users whose queries more frequently contained advanced syntax may be more advanced search engine users. To test this, we varied the query threshold required to qualify for advanced status: we incremented padvanced one percentage point at a time and recorded the values of the seven query and result-click features at each point. The values of the features at four milestones (> 0%, ≥ 25%, ≥ 50%, and ≥ 75%) are shown in Table 4. As can be seen in the table, as padvanced increases, the differences in the features between those using advanced syntax and those not using it become more substantial. However, it is interesting to note that as padvanced increases, the number of queries submitted per day actually falls (Pearson's R = −.512, t(98) = 5.98, p < .0001); more advanced users may need to pose fewer queries to find relevant information.

To study the patterns of relationship among these dependent variables (including padvanced), we applied factor analysis [26]. Table 5 shows the intercorrelation matrix between the features and the percentage of queries with operators (padvanced). Each cell in the table contains the Pearson's correlation coefficient between the two features for a given row-column pair.

Table 5. Intercorrelation matrix (query / result-click features).

         padv.  QPS    QRR    QWL    QPD    ACP    CP     ASC
  padv.  1.00   .946   .970   .987   −.512  .930   −.746  −.583
  QPS           1.00   .944   .943   −.643  .860   −.594  −.712
  QRR                  1.00   .934   −.462  .919   −.621  −.667
  QWL                         1.00   −.392  .612   −.445  .735
  QPD                                1.00   .676   .780   .943
  ACP                                       1.00   .838   .711
  CP                                               1.00   .654
  ASC                                                     1.00

Only the first data column and row reflect the correlations between padvanced and the other query and result-click features; the remaining columns show the inter-correlations among the other features. There are strong positive correlations between some of the features (e.g., between the number of words in the query (QWL) and the average rank of clicked results (ACP)). However, there are also fairly strong negative correlations between some features (e.g., between the average length of the queries (QWL) and the probability of clicking on a search result (CP)).

The factor analysis revealed the presence of two factors that together account for 83.6% of the variance. As is standard practice in factor analysis, all features with an absolute factor loading of .30 or less were removed. The two factors that emerged, with their respective loadings, can be expressed as:

  Factor A = .98(QRR) + .97(padv) + .97(QPS) + .71(ACP) + .69(QWL)
  Factor B = .96(CP) + .90(QPD) + .67(ACP) + .52(ASC)

Variance in the query and result-click behavior of our advanced search engine users can be expressed using these two constructs. Factor A is the more powerful, contributing 50.5% of the variance. It appears to represent a basic dimension of variance that covers query attributes and querying behavior, and suggests a relationship between query properties (length, frequency, complexity, and repetition) and the position of users' clicks in the result list. The dimension underlying Factor B accounts for 33.1% of the variance, and describes attributes of result-click behavior, including a strong correlation between result clicks and the number of queries submitted each day.

Summary: In this section we have shown that there are marked differences in aspects of the querying and result-clickthrough behaviors of advanced users relative to
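A factor solution of this kind is typically obtained by extracting (and usually rotating) eigenvectors of the intercorrelation matrix. As a rough, dependency-free illustration, the sketch below extracts first-factor loadings as unrotated principal-component loadings via power iteration; the small correlation matrix in the test is invented, and the paper does not specify its extraction or rotation method.

```python
def power_iteration(matrix, steps=200):
    """Dominant eigenvalue and unit eigenvector of a symmetric matrix."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(steps):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * mv[i] for i in range(n)) / sum(x * x for x in v)
    length = sum(x * x for x in v) ** 0.5
    return lam, [x / length for x in v]

def first_factor_loadings(corr):
    """Loadings on the first factor: eigenvector scaled by sqrt(eigenvalue)."""
    lam, vec = power_iteration(corr)
    return [round(x * lam ** 0.5, 2) for x in vec]
```

The eigenvalue divided by the number of variables gives the proportion of variance that the factor explains, which is how figures such as "50.5% of the variance" are obtained.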
non-advanced users. We have also shown that the greater the proportion of queries containing advanced syntax, the larger the differences in query and clickthrough behaviors become. A factor analysis revealed the presence of two dimensions that adequately characterize variance in the query and result-click features. In the querying dimension, query attributes (such as query length and the proportion of queries containing advanced syntax) and querying behavior (such as the number of queries submitted per day) both affect result-click position. In the result-click dimension, daily querying frequency appears to influence result-click features such as the likelihood that a user will click on a search result and the time between result presentation and the result click.

The features used in this section capture only interactions with search engines in the form of queries and result clicks; they do not address how users searched for information beyond the result page. In the next section we use the search trails described in Section 4 to analyze the post-query browsing behavior of users.

5.2 Post-query browsing behavior

In this section we look at several attributes of the search trails users followed beyond the results page, in an attempt to discern whether the use of advanced search syntax can be used as a predictor of aspects of post-query interaction behavior. As before, we first describe the mean values of each of the browsing features across all advanced users (i.e.,
padvanced > 0%), all non-advanced users (i.e., padvanced = 0%), and all users regardless of their estimated search expertise level. We then look at the effect on the browsing features of increasing the value of padvanced required to be considered advanced from 1% to 100%. In Table 6 we present the average value of each of these features for the two groups of users, along with the percentage of search trails (%Trails) and the percentage of users (%Users) used to compute the averages.

Table 6. Post-query browsing features (per trail).

                 padvanced
  Feature        0%       > 0%     ≥ 25%    ≥ 50%    ≥ 75%
  Session secs.  701.10   706.21   792.65   903.01   1114.71
  Trail secs.    205.39   159.56   156.45   147.91   136.79
  Display secs.  36.95    32.94    34.91    33.11    30.67
  Num. steps     4.88     4.72     4.40     4.40     4.39
  Num. backs     1.20     1.02     1.03     1.03     1.02
  Num. branches  1.55     1.51     1.50     1.47     1.44
  %Trails        72.14%   27.86%   .83%     .23%     .05%
  %Users         79.90%   20.10%   .79%     .18%     .04%

As can be seen from Table 6, there are differences in the post-query interaction behaviors of advanced users (padvanced > 0%) relative to those who do not use query operators in any of their queries (padvanced = 0%); once again, the columns of interest in this comparison are the 0% and > 0% columns. As in Section 5.1, we performed an independent-measures t-test between the values reported for each of the post-query browsing features. The results suggest that the differences between those who use advanced syntax and those who do not are significant (t(12495029) ≥ 3.09, p ≤ .002, α = .008). Given the sample sizes, all of the differences between means in the two groups were significant, so we once again applied Cohen's d to determine effect size. The findings, ranked in descending order of effect size, show that relative to non-advanced users, advanced search engine users:

• Revisit pages in the trail less often (d = .45)
• Spend less time traversing each
search trail (d = .38)
• Spend less time viewing each document (d = .28)
• Branch (i.e., proceed to new pages following a back operation) less often (d = .18)
• Follow search trails with fewer steps (d = .16)

It seems that advanced users employ a more directed searching style than non-advanced users. They spend less time following search trails and view the documents that lie on those trails for less time. This accords with our earlier proposition that advanced users seem able to discern document relevance in less time. Advanced users also tend to deviate less from a direct path as they search, with fewer revisits to previously visited pages and less branching.

As in the previous section, we increased the padvanced threshold one point at a time. With the exception of the number of back operations (NBA), the values of the features change as padvanced increases, and the differences noted earlier between non-advanced users and those who use any advanced syntax become more pronounced. As in the previous section, we conducted a factor analysis of these features and padvanced; Table 7 shows the intercorrelation matrix for all these variables.

Table 7. Intercorrelation matrix (post-query browsing).

        padv  SS     TS     DS     NS     NB     NBA
  padv  1.00  .977   −.843  −.867  −.395  −.339  −.249
  SS          1.00   −.765  −.875  −.374  −.335  −.237
  TS                 1.00   .948   .387   .281   .250
  DS                        1.00   .392   .344   .257
  NS                               1.00   .891   .934
  NB                                      1.00   .918
  NBA                                            1.00

As the proportion of queries containing advanced syntax increases, the values of many of the post-query browsing features decrease; only the average session time (SS) exhibits a strong positive correlation with padvanced. The factor analysis revealed the presence of two factors that together account for 89.8% of the variance. Once again, all features with an absolute factor loading of .30 or less were removed. The two factors
that emerged, with their respective loadings, can be expressed as:

  Factor A = .95(DS) + .88(TS) − .91(SS) − .95(padv)
  Factor B = .99(NBA) + .93(NS) + .91(NB)

Variance in the post-query browsing behavior of those who use query operators can be expressed using these two constructs. Factor A is the more powerful, contributing 50.1% of the variance. It appears to represent a basic temporal dimension covering timing and the percentage of queries with advanced syntax, and suggests a negative relationship between time spent searching and overall session time, and a negative relationship between time spent searching and padvanced. The navigation dimension underlying Factor B accounts for 39.7% of the variance, and describes attributes of post-query navigation, all of which seem to be strongly correlated with each other but not with padvanced or timing.

Summary: In this section we have shown that advanced users' post-query browsing behavior appears more directed than that of non-advanced users. Although their search sessions are longer, advanced users follow fewer search trails during their sessions (i.e., they submit fewer queries), their search trails are shorter, and their trails exhibit fewer deviations or regressions to previously encountered pages. We also showed that as padvanced increases, session time increases (perhaps more advanced users multitask between search and other operations), and search interaction becomes more focused, perhaps because advanced users are able to target relevant information more effectively, with less need for regressions or deviations in their search trails.

As well as interaction behaviors such as queries, result clicks, and post-query browsing, another important aspect of the search process is the attainment of information relevant to the query. In the next section we analyze the success of advanced and non-advanced users in obtaining relevant information.

5.3 Search success

As described earlier, we used
six-level relevance judgments assigned to query-document pairs as an approximate measure of search success, based on documents encountered on search trails. However, the queries for which we have judgments generally did not contain advanced operators. To maximize the likelihood of coverage, we removed advanced operators from all queries when retrieving the relevance judgments. The mean relevance judgment values for each of the four metrics (first, last, average, and maximum) are shown in Table 8 for non-advanced users (0%) and advanced users (> 0%).

Table 8. Search success (min. = 1, max. = 6) (per trail).

              padvanced
  Feature     0%     > 0%   ≥ 25%  ≥ 50%  ≥ 75%
  First   M   4.03   4.19   4.24   4.26   4.57
          SD  1.58   1.56   1.34   1.38   1.27
  Last    M   3.79   3.92   4.00   4.13   4.35
          SD  1.60   1.57   1.29   1.25   .89
  Max.    M   4.04   4.20   4.19   4.19   4.46
          SD  1.63   1.51   1.28   1.37   1.25
  Avg.    M   3.93   4.06   4.08   4.08   4.26
          SD  1.57   1.51   1.23   1.32   1.14

The findings suggest that users who used advanced syntax at all (padvanced > 0%) were more successful, across all four measures, than those who never used advanced syntax (padvanced = 0%). Not only were these users more successful in their searching, they were also more consistently successful: the standard deviation in relevance scores is lower for advanced users and continues to drop as padvanced increases. The differences between the two user groups in the mean relevance scores for each metric were significant under independent-measures t-tests (all t(516765) ≥ 3.29, p ≤ .001, α = .0125). As we increase the value of padvanced as in previous sections, the average relevance score across all metrics also increases (all Pearson's R ≥ .654), suggesting that more advanced users are also more likely to succeed in their searching. Searchers who use advanced operators may have additional skills in locating relevant information, or may know where this information resides based on previous experience.³ Despite
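Stripping operators from queries before looking up judgments can be done with a small normalizer. This is a hypothetical sketch; the paper does not describe its exact normalization rules.

```python
import re

def strip_operators(query):
    """Remove +, -, quote, and site: operators, keeping the plain terms."""
    query = re.sub(r'\bsite:\S+', ' ', query)          # drop site: restrictions
    query = query.replace('"', ' ')                     # unquote phrases
    query = re.sub(r'(?:^|\s)[+-](?=\S)', ' ', query)   # drop +/- term prefixes
    return ' '.join(query.split())                      # collapse whitespace
```

Requiring the + or − to follow whitespace (or start the query) leaves intra-word hyphens such as "e-commerce" untouched.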
the fact that the four metrics targeted different parts of the search trail (e.g., first vs. last) or different ways of gathering relevant information (e.g., average vs. maximum), the differences between groups, and within the advanced group, were consistent.

³ Although in our logs there was no obvious indication of more revisitation by advanced search engine users.

To see whether there were any differences in the nature of the queries submitted by advanced search engine users, we studied the distribution of the four advanced operators: quotation marks, plus, minus, and site:. In Table 9 we show how these operators were distributed in all queries submitted by these users.

Table 9. Distribution of query operators (%).

                       padvanced
  Feature              > 0%   ≥ 25%  ≥ 50%  ≥ 75%
  Quotes (" ")         71.08  77.09  70.33  70.00
  Plus (+)             6.84   13.31  19.21  33.90
  Minus (−)            6.62   2.88   1.96   2.42
  Site:                21.55  12.72  13.04  9.86
  Avg. num. operators  1.08   1.14   1.28   1.49

The distribution of the quotes, plus, and minus operators is similar amongst the four levels of padvanced, with quotes being the most popular of the four operators used. However, it appears that the plus operator is the main differentiator between the padvanced user groups. This operator, which forces the search engine to include query terms that are usually excluded by default (e.g.,
the, a), may account for some portion of the difference in observed search success.⁴ However, this does not capture the contribution that each of these operators makes to the increase in relevance compared with excluding the operator. To gain some insight into this, we examined the impact that each operator had on the relevance of retrieved results. We focused on queries in padvanced > 0% where the same user had issued a query without operators and the same query with operators, either before or afterwards. Although there were few matching query pairs, and almost all of them contained quotes, there was a small (approximately 10%) increase in the average relevance judgment score assigned to documents on the trail when the initial query contained quotes. It may be the case that quoted queries led to the retrieval of more relevant documents, or that they better matched the perceived needs of relevance judges and therefore led to judged documents receiving higher scores. More analysis, similar to [8], is required to test these propositions further.

Summary: In this section we have used several measures to study the search success of advanced and non-advanced users. The findings of our analysis suggest that advanced search engine users are more successful and more consistent in the relevance of the pages they visit. Their additional search expertise may make them better able to decide which documents to view, meaning they encounter consistently more relevant information during their searches. In addition, within the group of advanced users there is a strong correlation between padvanced and the degree of search success. Advanced search engine users may be more adept at combining query operators to formulate powerful query statements. We now discuss the findings from all three subsections and their implications for the design of improved Web search systems.

⁴ It is worth noting that there were no significant differences in the
distribution of usage of the three search engines (Google, Yahoo!, and Windows Live Search) amongst advanced search engine users, or between advanced and non-advanced users.

6. DISCUSSION AND IMPLICATIONS

Our findings indicate significant differences in the querying, result-click, post-query navigation, and search success of those who use advanced syntax versus those who do not. Many of these findings mirror those already reported in previous studies with groups of self-identified novices and experts [13][19]. There are several ways in which a commercial search engine might benefit from a quantitative indication of searcher expertise. It might serve as yet another feature available to a ranking engine; it may be the case that expert searchers sometimes prefer different pages than novice searchers. The user interface to a search engine might be tailored to a user's expertise level; perhaps more advanced features such as term weighting and query expansion suggestions could be presented to more experienced searchers, while preserving the simplicity of the basic interface for novices. Result presentation might also be customized based on search skill level; future work might re-evaluate the benefits of content snippets, thumbnails, etc.,
in a manner that allows different outcomes for different expertise levels.\nAdditionally, if browsing histories are available, the destinations of advanced searchers could be used as suggested results for queries, bypassing and potentially improving upon the traditional search process [10].\nThe use of the interactions of advanced search engine users to guide others with less expertise is an attractive proposition for the designers of search systems.\nIn part, these searchers may have more post-query browsing expertise that allows them to overcome the shortcomings of search systems [29].\nTheir interactions can be used to point users to places that advanced search engine users visit [32] or simply to train less experienced searchers how to search more effectively.\nHowever, if expert users are going to be used in this way, issues of data sparsity will need to be overcome.\nOur advanced users only accounted for 20.1% of the users whose interactions we studied.\nWhilst these may be amongst the most active users, it is unlikely that they will view documents that cover a large number of subject areas.\nHowever, rather than focusing on where they go (which is perhaps more appropriate for those with domain knowledge), advanced search engine users may use moves, tactics and strategies [2] that inexperienced users can learn from.\nEncouraging users to use advanced syntax helps them learn how to formulate better search queries; leveraging the searching style of expert searchers could help them learn more successful post-query interactions.\nOne potential limitation of the results we report is that in prior research, it has been shown that query operators do not significantly improve the effectiveness of Web search results [8], and that searchers may be able to perform just as well without them [27].\nIt could therefore be argued that the users who do not use query operators are in fact more advanced, since they do not waste time using potentially redundant syntax in their query
statements.\nHowever, this seems unlikely given that those who use advanced syntax exhibited search behaviors typical of users with expertise [13], and are more successful in their searching.\nNevertheless, in future work we will expand our definition of advanced user beyond attributes of the query to also include other interaction behaviors, some of which we have defined in this study, and other avenues of research such as eye-tracking [12].\n7.\nCONCLUSIONS In this paper we have described a log-based study of search behavior on the Web that has demonstrated that the use of advanced search syntax is correlated with other aspects of search behavior such as querying, result clickthrough, post-query navigation, and search success.\nThose that use this syntax are active online for longer, spend less time querying and traversing search trails, exhibit less deviation in their trails, are more likely to explore search results, take less time to click on results, and are more successful in their searching.\nThese are all traits that we would expect expert searchers to exhibit.\nCrude classification of users based on just one feature that is easily extractable from the query stream yields remarkable results about the interaction behavior of users that do not use the syntax and those that do.\nAs we have suggested, search systems may leverage the interactions of these users for improved document ranking, page recommendation, or even user training.\nFuture work will include the development of search interfaces and modified retrieval engines that make use of these information-rich features, and further investigation into the use of these features as indicators of search expertise, including a cross-correlation analysis between result click and post-query behavior.\n8.\nACKNOWLEDGEMENTS The authors are grateful to Susan Dumais for her thoughtful and constructive comments on a draft of this paper.\n9.\nREFERENCES [1] Anick, P.
(2003).\nUsing terminological feedback for Web search refinement: A log-based study.\nIn Proc.\nACM SIGIR, pp. 88-95.\n[2] Bates, M. (1990).\nWhere should the person stop and the information search interface start?\nInf.\nProc.\nManage.\n26, 5, 575-591.\n[3] Belkin, N.J. (2000).\nHelping people find what they don't know.\nComm.\nACM, 43, 8, 58-61.\n[4] Belkin, N.J. et al. (2003).\nQuery length in interactive information retrieval.\nIn Proc.\nACM SIGIR, pp. 205-212.\n[5] Bhavnani, S.K. (2001).\nDomain-specific search strategies for the effective retrieval of healthcare and shopping information.\nIn Proc.\nACM SIGCHI, pp. 610-611.\n[6] Chi, E.H., Pirolli, P.L., Chen, K. & Pitkow, J.E. (2001).\nUsing information scent to model user information needs and actions on the Web.\nIn Proc.\nACM SIGCHI, pp. 490-497.\n[7] De Lima, E.F. & Pedersen, J.O. (1999).\nPhrase recognition and expansion for short, precision-biased queries based on a query log.\nIn Proc.\nACM SIGIR, pp. 145-152.\n[8] Eastman, C.M. & Jansen, B.J. (2003).\nCoverage, relevance, and ranking: The impact of query operators on Web search engine results.\nACM TOIS, 21, 4, 383-411.\n[9] Efthimiadis, E.N. (1996).\nQuery expansion.\nAnnual Review of Information Science and Technology, 31, 121-187.\n[10] Furnas, G. (1985).\nExperience with an adaptive indexing scheme.\nIn Proc.\nACM SIGCHI, pp. 131-135.\n[11] Furnas, G.W., Landauer, T.K., Gomez, L.M. & Dumais, S.T. (1987).\nThe vocabulary problem in human-system communication: An analysis and a solution.\nComm.\nACM, 30, 11, 964-971.\n[12] Granka, L., Joachims, T. & Gay, G. (2004).\nEye-tracking analysis of user behavior in WWW search.\nIn Proc.\nACM SIGIR, pp. 478-479.\n[13] H\u00f6lscher, C. & Strube, G. (2000).\nWeb search behavior of internet experts and newbies.\nIn Proc.\nWWW, pp. 337-346.\n[14] Jansen, B.J. (2000).\nAn investigation into the use of simple queries on Web IR systems.\nInf.\nRes.\n6, 1.\n[15] Jansen, B.J., Spink, A. & Saracevic, T.
(2000).\nReal life, real users, and real needs: A study and analysis of user queries on the Web.\nInf.\nProc.\nManage.\n36, 2, 207-227.\n[16] Jones, R., Rey, B., Madani, O. & Greiner, W. (2006).\nGenerating query substitutions.\nIn Proc.\nWWW, pp. 387-396.\n[17] Kaski, S., Myllym\u00e4ki, P. & Kojo, I. (2005).\nUser models from implicit feedback for proactive information retrieval.\nIn Workshop at UM Conference; Machine Learning for User Modeling: Challenges.\n[18] Kelly, D., Dollu, V.D. & Fu, X. (2005).\nThe loquacious user: a document-independent source of terms for query expansion.\nIn Proc.\nACM SIGIR, pp. 457-464.\n[19] Lazonder, A.W., Biemans, H.J.A. & Wopereis, I.G.J.H. (2000).\nDifferences between novice and experienced users in searching for information on the World Wide Web.\nJ. ASIST, 51, 6, 576-581.\n[20] Morita, M. & Shinoda, Y. (1994).\nInformation filtering based on user behavior analysis and best match text retrieval.\nIn Proc.\nACM SIGIR, pp. 272-281.\n[21] NIST Special Publication 500-266: The Fourteenth Text Retrieval Conference Proceedings (TREC 2005).\n[22] Oddy, R. (1977).\nInformation retrieval through man-machine dialogue.\nJ. Doc.\n33, 1, 1-14.\n[23] Rose, D.E. & Levinson, D. (2004).\nUnderstanding user goals in Web search.\nIn Proc.\nWWW, pp. 13-19.\n[24] Salton, G. & Buckley, C. (1990).\nImproving retrieval performance by relevance feedback.\nJ. ASIST, 41, 4, 288-297.\n[25] Silverstein, C., Marais, H., Henzinger, M. & Moricz, M. (1999).\nAnalysis of a very large web search engine query log.\nSIGIR Forum, 33, 1, 6-12.\n[26] Spearman, C. (1904).\nGeneral intelligence, objectively determined and measured.\nAmer.\nJ. Psy.\n15, 201-292.\n[27] Spink, A., Bateman, J. & Jansen, B.J. (1998).\nSearching heterogeneous collections on the Web: Behavior of Excite users.\nInf.\nRes.\n4, 2, 317-328.\n[28] Spink, A., Griesdorf, H. & Bateman, J.
(1998).\nFrom highly relevant to not relevant: examining different regions of relevance.\nInf.\nProc.\nManage.\n34, 5, 599-621.\n[29] Teevan, J. et al. (2004).\nThe perfect search engine is not enough: A study of orienteering behavior in directed search.\nIn Proc.\nACM SIGCHI, pp. 415-422.\n[30] Teevan, J., Dumais, S.T. & Horvitz, E. (2005).\nPersonalizing search via automated analysis of interests and activities.\nIn Proc.\nACM SIGIR, pp. 449-456.\n[31] Wang, P., Berry, M. & Yang, Y. (2003).\nMining longitudinal Web queries: Trends and patterns.\nJ. ASIST, 54, 3, 742-758.\n[32] White, R.W., Bilenko, M. & Cucerzan, S. (2007).\nStudying the use of popular destinations to enhance Web search interaction.\nIn Proc.\nACM SIGIR, in press.\n[33] White, R.W. & Drucker, S. (2007).\nInvestigating behavioral variability in Web search.\nIn Proc.\nWWW, in press.\n[34] White, R.W., Ruthven, I. & Jose, J.M. (2002).\nFinding relevant documents using top-ranking sentences: An evaluation of two alternative schemes.\nIn Proc.\nACM SIGIR, pp. 57-64.\n[35] Wildemuth, B.M., de Bliek, R., Friedman, C.P. & File, D.D. (1995).\nMedical students' personal knowledge, search proficiency, and database use in problem solving.\nJ.
ASIST, 46, 590-607.","lvl-3":"Investigating the Querying and Browsing Behavior of Advanced Search Engine Users\nABSTRACT\nOne way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone.\nIn this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search.\nThe results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced.\nOur findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.\n1.\nINTRODUCTION\nThe formulation of query statements that capture both the salient aspects of information needs and are meaningful to Information Retrieval (IR) systems poses a challenge for many searchers [3].\nCommercial Web search engines such as Google, Yahoo!, and Windows Live Search offer users the ability to improve the quality of their queries using query operators such as quotation marks, plus and minus signs, and modifiers that restrict the search to a particular site or type of file.\nThese techniques can be useful in improving result precision yet, other than via log analyses (e.g., [15] [27]), they have generally been overlooked by the research community in attempts to improve the quality of search results.\nIR research has generally focused on alternative ways for users to specify their needs rather than increasing the uptake of advanced syntax.\nResearch on practical techniques to supplement existing\nsearch technology and support users has been intensifying in recent 
years (e.g. [18] [34]).\nHowever, it is challenging to implement such techniques at large scale with tolerable latencies.\nTypical queries submitted to Web search engines take the form of a series of tokens separated by spaces.\nThere is generally an implied Boolean AND operator between tokens that restricts search results to documents containing all query terms.\nDe Lima and Pedersen [7] investigated the effect of parsing, phrase recognition, and expansion on Web search queries.\nThey showed that the automatic recognition of phrases in queries can improve result precision in Web search.\nHowever, the value of advanced syntax for typical searchers has generally been limited, since most users do not know about advanced syntax or do not understand how to use it [15].\nSince it appears operators can help retrieve relevant documents, further investigation of their use is warranted.\nIn this paper we explore the use of query operators in more detail and propose alternative applications that do not require all users to use advanced syntax explicitly.\nWe hypothesize that searchers who use advanced query syntax demonstrate a degree of search expertise that the majority of the user population does not; an assertion supported by previous research [13].\nStudying the behavior of these advanced search engine users may yield important insights about searching and result browsing from which others may benefit.\nUsing logs gathered from a large number of consenting users, we investigate differences between the search behavior of those that use advanced syntax and those that do not, and differences in the information those users target.\nWe are interested in answering three research questions:\n(i) Is there a relationship between the use of advanced syntax and other characteristics of a search?\n(ii) Is there a relationship between the use of advanced syntax and post-query navigation behaviors?\n(iii) Is there a relationship between the use of advanced syntax and measures of 
search success?\nThrough an experimental study and analysis, we offer potential answers for each of these questions.\nA relationship between the use of advanced syntax and any of these features could support the design of systems tailored to advanced search engine users, or use advanced users' interactions to help non-advanced users be more successful in their searches.\nWe describe related work in Section 2, the data we used in this log-based study in Section 3, the search characteristics on which we focus our analysis in Section 4, and the findings of this analysis in Section 5.\nIn Section 6 we discuss the implications of this research, and we conclude in Section 7.\n2.\nRELATED WORK\nFactors such as lack of domain knowledge, poor understanding of the document collection being searched, and a poorly developed information need can all influence the quality of the queries that users submit to IR systems ([24], [28]).\nThere has been a variety of research into different ways of helping users specify their information needs more effectively.\nBelkin et al. [4] experimented with providing additional space for users to type a more verbose description of their information needs.\nA similar approach was attempted by Kelly et al. 
[18], who used clarification forms to elicit additional information about the search context from users.\nThese approaches have been shown to be effective in best-match retrieval systems where longer queries generally lead to more relevant search results [4].\nHowever, in Web search, where many of the systems are based on an extended Boolean retrieval model, longer queries may actually hurt retrieval performance, leading to a small number of potentially irrelevant results being retrieved.\nIt is not simply sufficient to request more information from users; this information must be of better quality.\nRelevance Feedback (RF) [22] and interactive query expansion [9] are popular techniques that have been used to improve the quality of information that users provide to IR systems regarding their information needs.\nIn the case of RF, the user presents the system with examples of relevant information that are then used to formulate an improved query or retrieve a new set of documents.\nIt has proven difficult to get users to use RF in the Web domain due to difficulty in conveying the meaning and the benefit of RF to typical users [17].\nQuery suggestions offered based on query logs have the potential to improve retrieval performance with limited user burden.\nThis approach is limited to re-executing popular queries, and searchers often ignore the suggestions presented to them [1].\nIn addition, neither of these techniques helps users learn to produce more effective queries.\nMost commercial search engines provide advanced query syntax that allows users to specify their information needs in more detail.\nQuery modifiers such as '+' (plus), '\u2212' (minus), and '\"\"' (double quotes) can be used to emphasize, deemphasize, and group query terms.\nBoolean operators (AND, OR, and NOT) can join terms and phrases, and modifiers such as \"site:\" and \"link:\" can be used to restrict the search space.\nQueries created with these techniques can be powerful.\nHowever, this
functionality is often hidden from the immediate view of the searcher, and unless she knows the syntax, she must use text fields, pull-down menus and combo boxes available via a dedicated \"advanced search\" interface to access these features.\nLog-based analysis of users' interactions with the Excite and AltaVista search engines has shown that only 10-20% of queries contained any advanced syntax [14] [25].\nThis analysis can be a useful way of capturing characteristics of users interacting with IR systems.\nResearch in user modeling [6] and personalization [30] has shown that gathering more information about users can improve the effectiveness of searches, but such approaches require more information about users than is typically available from interaction logs alone.\nUnless coupled with a qualitative technique, such as a post-session questionnaire [23], it can be difficult to associate interactions with user characteristics.\nIn our study we conjecture that given the difficulty in locating advanced search features within the typical search interface, and the potential problems in understanding the syntax, those users that do use advanced syntax regularly represent a distinct class of searchers who will exhibit other common search behaviors.\nOther studies of advanced searchers' search behaviors have attempted to better understand the strategic knowledge they have acquired.\nHowever, such studies are generally limited in size (e.g., [13] [19]) or focus on domain expertise in areas such as healthcare or e-commerce (e.g., [5]).\nNonetheless, they can give valuable insight about the behaviors of users with domain, system, or search expertise that exceeds that of the average user.\nQuerying behavior in particular has been studied extensively to better understand users [31] and support other users [16].\nIn this paper we study other search characteristics of users of advanced syntax in an attempt to determine whether there is anything different about how these search engine users
search, and whether their searches can be used to benefit those who do not make use of the advanced features of search engines.\nTo do this we use interaction logs gathered from a large set of consenting users over a prolonged period.\nIn the next section we describe the data we use to study the behavior of the users who use advanced syntax, relative to those that do not use this syntax.\n3.\nDATA\nSIGIR 2007 Proceedings Session 11: Interaction\n4.\nSEARCH FEATURES\n5.\nFINDINGS\n5.1 Query and result-click behavior\n5.2 Post-query browsing behavior\n5.3 Search success\n6.\nDISCUSSION AND IMPLICATIONS\n7.\nCONCLUSIONS\nbehavior such as querying, result clickthrough, post-query navigation, and search success.\nThose that use this syntax are active online for longer, spend less time querying and traversing search trails, exhibit less deviation in their trails, are more likely to explore search results, take less time to click on results, and are more successful in their searching.\nThese are all traits that we would expect expert searchers to exhibit.\nCrude classification of users based on just one feature that is easily extractable from the query stream yields remarkable results about the interaction behavior of users that do not use the syntax and those that do.\nAs we have suggested, search systems may leverage the interactions of these users for improved document ranking, page recommendation, or even user training.\nFuture work will include the development of search interfaces and modified retrieval engines that make use of these information-rich features, and further investigation into the use of these features as indicators of search expertise, including a cross-correlation analysis between result click and post-query behavior.","lvl-4":"Investigating the Querying and Browsing Behavior of Advanced Search Engine
Users\nABSTRACT\nOne way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone.\nIn this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search.\nThe results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced.\nOur findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.\n1.\nINTRODUCTION\nThe formulation of query statements that capture both the salient aspects of information needs and are meaningful to Information Retrieval (IR) systems poses a challenge for many searchers [3].\nThese techniques can be useful in improving result precision yet, other than via log analyses (e.g., [15] [27]), they have generally been overlooked by the research community in attempts to improve the quality of search results.\nIR research has generally focused on alternative ways for users to specify their needs rather than increasing the uptake of advanced syntax.\nResearch on practical techniques to supplement existing\nsearch technology and support users has been intensifying in recent years (e.g. 
[18] [34]).\nHowever, it is challenging to implement such techniques at large scale with tolerable latencies.\nTypical queries submitted to Web search engines take the form of a series of tokens separated by spaces.\nThere is generally an implied Boolean AND operator between tokens that restricts search results to documents containing all query terms.\nDe Lima and Pedersen [7] investigated the effect of parsing, phrase recognition, and expansion on Web search queries.\nThey showed that the automatic recognition of phrases in queries can improve result precision in Web search.\nHowever, the value of advanced syntax for typical searchers has generally been limited, since most users do not know about advanced syntax or do not understand how to use it [15].\nIn this paper we explore the use of query operators in more detail and propose alternative applications that do not require all users to use advanced syntax explicitly.\nWe hypothesize that searchers who use advanced query syntax demonstrate a degree of search expertise that the majority of the user population does not; an assertion supported by previous research [13].\nStudying the behavior of these advanced search engine users may yield important insights about searching and result browsing from which others may benefit.\nUsing logs gathered from a large number of consenting users, we investigate differences between the search behavior of those that use advanced syntax and those that do not, and differences in the information those users target.\nWe are interested in answering three research questions:\n(i) Is there a relationship between the use of advanced syntax and other characteristics of a search?\n(ii) Is there a relationship between the use of advanced syntax and post-query navigation behaviors?\n(iii) Is there a relationship between the use of advanced syntax and measures of search success?\nThrough an experimental study and analysis, we offer potential answers for each of these questions.\nA 
relationship between the use of advanced syntax and any of these features could support the design of systems tailored to advanced search engine users, or use advanced users' interactions to help non-advanced users be more successful in their searches.\nWe describe related work in Section 2, the data we used in this log-based study in Section 3, the search characteristics on which we focus our analysis in Section 4, and the findings of this analysis in Section 5.\n2.\nRELATED WORK\nFactors such as lack of domain knowledge, poor understanding of the document collection being searched, and a poorly developed information need can all influence the quality of the queries that users submit to IR systems ([24], [28]).\nThere has been a variety of research into different ways of helping users specify their information needs more effectively.\nBelkin et al. [4] experimented with providing additional space for users to type a more verbose description of their information needs.\nA similar approach was attempted by Kelly et al. 
[18], who used clarification forms to elicit additional information about the search context from users.\nThese approaches have been shown to be effective in best-match retrieval systems where longer queries generally lead to more relevant search results [4].\nHowever, in Web search, where many of the systems are based on an extended Boolean retrieval model, longer queries may actually hurt retrieval performance, leading to a small number of potentially irrelevant results being retrieved.\nIt is not simply sufficient to request more information from users; this information must be of better quality.\nRelevance Feedback (RF) [22] and interactive query expansion [9] are popular techniques that have been used to improve the quality of information that users provide to IR systems regarding their information needs.\nIn the case of RF, the user presents the system with examples of relevant information that are then used to formulate an improved query or retrieve a new set of documents.\nIt has proven difficult to get users to use RF in the Web domain due to difficulty in conveying the meaning and the benefit of RF to typical users [17].\nQuery suggestions offered based on query logs have the potential to improve retrieval performance with limited user burden.\nThis approach is limited to re-executing popular queries, and searchers often ignore the suggestions presented to them [1].\nIn addition, neither of these techniques helps users learn to produce more effective queries.\nMost commercial search engines provide advanced query syntax that allows users to specify their information needs in more detail.\nBoolean operators (AND, OR, and NOT) can join terms and phrases, and modifiers such as \"site:\" and \"link:\" can be used to restrict the search space.\nQueries created with these techniques can be powerful.\nLog-based analysis of users' interactions with the Excite and AltaVista search engines has shown that only 10-20% of queries contained any advanced syntax [14]
[25].\nThis analysis can be a useful way of capturing characteristics of users interacting with IR systems.\nResearch in user modeling [6] and personalization [30] has shown that gathering more information about users can improve the effectiveness of searches, but such approaches require more information about users than is typically available from interaction logs alone.\nUnless coupled with a qualitative technique, such as a post-session questionnaire [23], it can be difficult to associate interactions with user characteristics.\nIn our study we conjecture that given the difficulty in locating advanced search features within the typical search interface, and the potential problems in understanding the syntax, those users that do use advanced syntax regularly represent a distinct class of searchers who will exhibit other common search behaviors.\nOther studies of advanced searchers' search behaviors have attempted to better understand the strategic knowledge they have acquired.\nNonetheless, they can give valuable insight about the behaviors of users with domain, system, or search expertise that exceeds that of the average user.\nQuerying behavior in particular has been studied extensively to better understand users [31] and support other users [16].\nIn this paper we study other search characteristics of users of advanced syntax in an attempt to determine whether there is anything different about how these search engine users search, and whether their searches can be used to benefit those who do not make use of the advanced features of search engines.\nTo do this we use interaction logs gathered from a large set of consenting users over a prolonged period.\nIn the next section we describe the data we use to study the behavior of the users who use advanced syntax, relative to those that do not use this syntax.\nbehavior such as querying, result clickthrough, post-query navigation, and search success.\nCrude classification of users
based on just one feature that is easily extractable from the query stream yields remarkable results about the interaction behavior of users that do not use the syntax and those that do.\nAs we have suggested, search systems may leverage the interactions of these users for improved document ranking, page recommendation, or even user training.","lvl-2":"Investigating the Querying and Browsing Behavior of Advanced Search Engine Users\nABSTRACT\nOne way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone.\nIn this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search.\nThe results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced.\nOur findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.\n1.\nINTRODUCTION\nThe formulation of query statements that capture both the salient aspects of information needs and are meaningful to Information Retrieval (IR) systems poses a challenge for many searchers [3].\nCommercial Web search engines such as Google, Yahoo!, and Windows Live Search offer users the ability to improve the quality of their queries using query operators such as quotation marks, plus and minus signs, and modifiers that restrict the search to a particular site or type of file.\nThese techniques can be useful in improving result precision yet, other than via log analyses (e.g., [15] [27]), they have generally been overlooked by the research community 
in attempts to improve the quality of search results.\nIR research has generally focused on alternative ways for users to specify their needs rather than increasing the uptake of advanced syntax.\nResearch on practical techniques to supplement existing\nsearch technology and support users has been intensifying in recent years (e.g. [18] [34]).\nHowever, it is challenging to implement such techniques at large scale with tolerable latencies.\nTypical queries submitted to Web search engines take the form of a series of tokens separated by spaces.\nThere is generally an implied Boolean AND operator between tokens that restricts search results to documents containing all query terms.\nDe Lima and Pedersen [7] investigated the effect of parsing, phrase recognition, and expansion on Web search queries.\nThey showed that the automatic recognition of phrases in queries can improve result precision in Web search.\nHowever, the value of advanced syntax for typical searchers has generally been limited, since most users do not know about advanced syntax or do not understand how to use it [15].\nSince it appears operators can help retrieve relevant documents, further investigation of their use is warranted.\nIn this paper we explore the use of query operators in more detail and propose alternative applications that do not require all users to use advanced syntax explicitly.\nWe hypothesize that searchers who use advanced query syntax demonstrate a degree of search expertise that the majority of the user population does not; an assertion supported by previous research [13].\nStudying the behavior of these advanced search engine users may yield important insights about searching and result browsing from which others may benefit.\nUsing logs gathered from a large number of consenting users, we investigate differences between the search behavior of those that use advanced syntax and those that do not, and differences in the information those users target.\nWe are interested in 
answering three research questions:\n(i) Is there a relationship between the use of advanced syntax and other characteristics of a search?\n(ii) Is there a relationship between the use of advanced syntax and post-query navigation behaviors?\n(iii) Is there a relationship between the use of advanced syntax and measures of search success?\nThrough an experimental study and analysis, we offer potential answers for each of these questions.\nA relationship between the use of advanced syntax and any of these features could support the design of systems tailored to advanced search engine users, or allow advanced users' interactions to be used to help non-advanced users be more successful in their searches.\nWe describe related work in Section 2, the data we used in this log-based study in Section 3, the search characteristics on which we focus our analysis in Section 4, and the findings of this analysis in Section 5.\nIn Section 6 we discuss the implications of this research, and we conclude in Section 7.\n2.\nRELATED WORK\nFactors such as lack of domain knowledge, poor understanding of the document collection being searched, and a poorly developed information need can all influence the quality of the queries that users submit to IR systems ([24], [28]).\nThere has been a variety of research into different ways of helping users specify their information needs more effectively.\nBelkin et al. [4] experimented with providing additional space for users to type a more verbose description of their information needs.\nA similar approach was attempted by Kelly et al. 
[18], who used clarification forms to elicit additional information about the search context from users.\nThese approaches have been shown to be effective in best-match retrieval systems where longer queries generally lead to more relevant search results [4].\nHowever, in Web search, where many of the systems are based on an extended Boolean retrieval model, longer queries may actually hurt retrieval performance, leading to a small number of potentially irrelevant results being retrieved.\nIt is not simply sufficient to request more information from users; this information must be of better quality.\nRelevance Feedback (RF) [22] and interactive query expansion [9] are popular techniques that have been used to improve the quality of information that users provide to IR systems regarding their information needs.\nIn the case of RF, the user presents the system with examples of relevant information that are then used to formulate an improved query or retrieve a new set of documents.\nIt has proven difficult to get users to use RF in the Web domain due to the difficulty of conveying the meaning and the benefit of RF to typical users [17].\nQuery suggestions based on query logs have the potential to improve retrieval performance with limited user burden.\nHowever, this approach is limited to re-executing popular queries, and searchers often ignore the suggestions presented to them [1].\nMoreover, neither of these techniques helps users learn to produce more effective queries.\nMost commercial search engines provide advanced query syntax that allows users to specify their information needs in more detail.\nQuery modifiers such as '+' (plus), '\u2212' (minus), and '\"\"' (double quotes) can be used to emphasize, deemphasize, and group query terms.\nBoolean operators (AND, OR, and NOT) can join terms and phrases, and modifiers such as \"site:\" and \"link:\" can be used to restrict the search space.\nQueries created with these techniques can be powerful.\nHowever, this 
functionality is often hidden from the immediate view of the searcher, and unless she knows the syntax, she must use text fields, pull-down menus, and combo boxes available via a dedicated \"advanced search\" interface to access these features.\nLog-based analysis of users' interactions with the Excite and AltaVista search engines has shown that only 10-20% of queries contained any advanced syntax [14] [25].\nThis analysis can be a useful way of capturing characteristics of users interacting with IR systems.\nResearch in user modeling [6] and personalization [30] has shown that gathering more information about users can improve the effectiveness of searches, but such approaches require more information about users than is typically available from interaction logs alone.\nUnless coupled with a qualitative technique, such as a post-session questionnaire [23], it can be difficult to associate interactions with user characteristics.\nIn our study we conjecture that given the difficulty in locating advanced search features within the typical search interface, and the potential problems in understanding the syntax, those users that do use advanced syntax regularly represent a distinct class of searchers who will exhibit other common search behaviors.\nOther studies of advanced searchers' search behaviors have attempted to better understand the strategic knowledge they have acquired.\nHowever, such studies are generally limited in size (e.g., [13] [19]) or focus on domain expertise in areas such as healthcare or e-commerce (e.g., [5]).\nNonetheless, they can give valuable insight about the behaviors of users with domain, system, or search expertise that exceeds that of the average user.\nQuerying behavior in particular has been studied extensively to better understand users [31] and support other users [16].\nIn this paper we study other search characteristics of users of advanced syntax in an attempt to determine whether there is anything different about how these search engine users 
search, and whether their searches can be used to benefit those who do not make use of the advanced features of search engines.\nTo do this we use interaction logs gathered from a large set of consenting users over a prolonged period.\nIn the next section we describe the data we use to study the behavior of the users who use advanced syntax, relative to those that do not use this syntax.\n3.\nDATA\nTo perform this study we required a description of the querying and browsing behavior of many searchers, preferably over a period of time to allow patterns in user behavior to be analyzed.\nTo obtain these data we mined the interaction logs of consenting Web users over a period of 13 weeks, from January to April 2006.\nWhen downloading a partner client-side application, the users were invited to consent to their interaction with Web pages being anonymously recorded (with a unique identifier assigned to each user) and used to improve the performance of future systems.1 The information contained in these log entries included a unique identifier for the user, a timestamp for each page view, a unique browser window identifier (to resolve ambiguities in determining in which browser window a page was viewed), and the URL of the Web page visited.\nThis provided us with sufficient data on querying behavior (from interaction with search engines) and browsing behavior (from interaction with the pages that follow a search) to more broadly investigate search behavior.\nIn addition to the data gathered during the course of this study, we also had relevance judgments of documents that users examined for 10,680 unique query statements present in the interaction logs.\nThese judgments were assigned on a six-point scale by trained human judges at the time the data were collected.\nWe use these judgments in this analysis to assess the relevance of sites users visited on their browse trail away from search result pages.\nWe studied the interaction logs of 586,029 unique users, who submitted millions 
of queries to three popular search engines--Google, Yahoo!, and MSN Search--over the 13-week duration of the study.\nTo limit the effect of search engine bias, we used four operators common to all three search engines: + (plus), \u2212 (minus), \"\" (double quotes), and \"site:\" (to restrict the search to a domain or Web page) as advanced syntax.\n1.12% of the queries submitted contained at least one of these four operators.\n51,080 users (8.72%) used query operators in at least one of their queries.\nIn the remainder of this paper, we will refer to these users as \"advanced\" searchers.\nSIGIR 2007 Proceedings Session 11: Interaction\nWe acknowledge that the direct relationship between query syntax usage and search expertise has only been studied (and shown) in a few studies (e.g., [13]), but we feel that this is a reasonable criterion for a log-based investigation.\nWe conjecture that these \"advanced\" searchers do possess a high level of search expertise, and will show later in the paper that they demonstrate behavioral characteristics consistent with search expertise.\nTo handle potential outlier users who may skew our data analysis, we removed users who submitted fewer than 50 queries in the study's 13-week duration.\nThis left us with 188,405 users \u2212 37,795 (20.1%) advanced users and 150,610 (79.9%) non-advanced users \u2212 whose interactions we study in more detail.\nIf significant differences emerge between these groups, it is conceivable that these interactions could be used to automatically classify users and adjust a search system's interface and result weighting to better match the current user.\nThe privacy of our volunteers was maintained throughout the entire course of the study: no personal information was elicited about them, participants were assigned a unique anonymous identifier that could not be traced back to them, and we made no attempt to identify a particular user or study individual behavior in any way.\nAll findings were aggregated 
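The user-classification step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the regular expression and function names are our own assumptions, and a production classifier would need to handle tokenization edge cases more carefully.

```python
import re

# Matches the four operators studied: "+term" or "-term" at a token
# boundary, a quoted phrase, or the site: modifier. The pattern is an
# illustrative assumption, not the paper's actual implementation.
OPERATOR_RE = re.compile(r'(?:^|\s)[+-]\S|"[^"]+"|(?:^|\s)site:\S')

def is_advanced(query: str) -> bool:
    """True if the query contains any of the four advanced operators."""
    return OPERATOR_RE.search(query) is not None

def p_advanced(queries: list[str]) -> float:
    """Fraction of a user's queries that contain advanced syntax."""
    if not queries:
        return 0.0
    return sum(is_advanced(q) for q in queries) / len(queries)
```

A user would then be labeled advanced when p_advanced is greater than zero (and, as above, only if they submitted 50 or more queries over the study period).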
over multiple users, and no information other than consent for logging was elicited.\nTo find out more about these users we studied whether those using advanced syntax exhibited other search behaviors that were not observed in those who did not use this syntax.\nWe focused on querying, navigation, and overall search success to compare the user groups.\nIn the next section we describe in more detail the search features that we used.\n4.\nSEARCH FEATURES\nWe chose features that described a variety of aspects of the search process: queries, result clicks, post-query browsing, and search success.\nThe query and result-click characteristics we chose to examine are described in more detail in Table 1.\nTable 1.\nQuery and result-click features (per user).\nThese seven features give us a useful overview of users' direct interactions with search engines, but not of how users are looking for relevant information beyond the result page or how successful they are in locating relevant information.\nTherefore, in addition to these characteristics we also studied some relevant aspects of users' post-query browsing behavior.\nTo do this, we extracted search trails from the interaction logs described in the previous section.\nA search trail is a series of visited Web pages connected via a hyperlink trail, initiated with a search result page and terminating on one of the following events: navigation to any page not linked from the current page, closing of the active browser window, or a session inactivity timeout of 30 minutes.\nMore detail on the extraction of the search trails is provided in previous work [33].\nIn total, around 12.5 million search trails (containing around 60 million documents) were extracted from the logs for all users.\nThe median number of search trails per user was 30.\nThe median number of steps in the trails was 3.\nAll search trails contained one search result page and at least one page on a hyperlink trail leading from the result page.\nThe 
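The inactivity-timeout part of the trail definition above can be sketched as follows. This is a simplified, hypothetical reconstruction that cuts trails only on the 30-minute gap; detecting window closes and navigation to unlinked pages would require richer log fields than shown here.

```python
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)

def split_trails(page_views):
    """Split one user's time-ordered (timestamp, url) page views into
    trails, starting a new trail whenever the gap between consecutive
    views exceeds the 30-minute inactivity timeout."""
    trails, current = [], []
    for ts, url in page_views:
        if current and ts - current[-1][0] > TIMEOUT:
            trails.append(current)
            current = []
        current.append((ts, url))
    if current:
        trails.append(current)
    return trails
```

In the full pipeline, each resulting segment would additionally be required to begin with a search result page before being counted as a trail.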
extraction of these trails allowed us to study aspects of post-query browsing behavior, namely the average duration of users' search sessions, the average duration of users' search trails, the average display time of each document, the average number of steps in users' search trails, the number of branches in users' navigation patterns, and the number of \"back\" operations in users' search trails.\nAll search trails contain at least one \"branch\" representing any forward motion on the browse path.\nA trail can have additional branches if the user clicks the browser's \"back\" button and immediately proceeds forward to another page prior to the next (if any) back operation.\nThe post-query browsing features are described further in Table 2.\nTable 2.\nPost-query browsing features (per trail).\nSession Seconds (SS): average session length (in seconds).\nTrail Seconds (TS): average trail length (in seconds).\nDisplay Seconds (DS): average display time for each page on the trail (in seconds).\nNum. Steps (NS): average number of steps from the page following the results page to the end of the trail.\nNum. Branches (NB): average number of branches.\nNum. Backs (NBA): average number of \"back\" operations.\nAs well as using these attributes of users' interactions, we also used the relevance judgments described earlier in the paper to measure the degree of search success, based on the judgments assigned to pages that lie on the search trail.\nGiven that we did not have access to relevance assessments from our users, we approximated these assessments using judgments collected as part of ongoing research into search engine performance.2 These judgments were created by trained human assessors for 10,680 unique queries.\nOf the 1,420,625 steps on search trails that started with any one of these queries, we have relevance judgments for 802,160 (56.4%).\nWe use these judgments to approximate search success for a given trail in a number of ways.\nIn Table 3 we list 
these measures.\n2 Our assessment of search success is fairly crude compared to what would have been possible if we had been able to contact our subjects.\nWe address this problem in a manner similar to that used by the Text Retrieval Conference (TREC) [21], in that since we cannot determine perceived search success, we approximate search success based on assigned relevance scores of visited documents.\nTable 3.\nRelevance judgment measures (per trail).\nThese measures are used during our analysis to estimate the relevance of the pages viewed at different stages in the trails, and allow us to estimate search success in different ways.\nWe chose multiple measures, as users may encounter relevant information in many ways and at different points in the trail (e.g., a single highly relevant document, or relevant content found gradually over the course of the trail).\nThe features described in this section allowed us to analyze important attributes of the search process that must be better understood if we are to support users in their searching.\nIn the next section we present the findings of the analysis.\n5.\nFINDINGS\nOur analysis is divided into three parts: analysis of query behavior and interaction with the results page, analysis of post-query navigation behavior, and search success in terms of locating judged-relevant documents.\nParametric statistical testing is used, and the level of significance for the statistical tests is set to .05.\n5.1 Query and result-click behavior\nWe were interested in comparing the query and result-click behaviors of our advanced and non-advanced users.\nIn Table 4 we show the mean average values for each of the seven search features for our users.\nWe use padvanced to denote the percentage of all queries from each user that contains advanced syntax (i.e., padvanced = 0% means a user never used advanced syntax).\nThe table shows values for users that do not use query operators (0%), users who submitted at least one query with operators (> 0%), through to 
users whose queries contained operators at least three-quarters of the time (> 75%).\nTable 4.\nQuery and result-click features (per user).\nWe compared the query and result-click features of users who did not use any advanced syntax (padvanced = 0%) in any of their queries with those who used advanced syntax in at least one query (padvanced > 0%).\nThe columns corresponding to these two groups are bolded in Table 4.\nWe performed an independent measures t-test between these groups for each of the features.\nSince this analysis involved many features, we use a Bonferroni correction to control the experiment-wise error rate and set the alpha level (\u03b1) to .007, i.e., .05 divided by the number of features.\nThis correction reduces the number of Type I errors, i.e., rejecting null hypotheses that are true.\nAll differences between the groups were statistically significant (all t(188403) > 2.81, all p < .002).\nHowever, given the large sample sizes, all differences in the means were likely to be statistically significant.\nWe therefore applied Cohen's d to determine the effect size for each of the comparisons between the advanced and non-advanced user groups.\nOrdered by descending effect size, the main findings are that relative to non-advanced users, advanced search engine users:\n\u2022 Query less frequently in a session (d = 1.98)\n\u2022 Compose longer queries (d = .69)\n\u2022 Click further down the result list (d = .67)\n\u2022 Submit more queries per day (d = .49)\n\u2022 Are less likely to click on a result (d = .32)\n\u2022 Repeat queries more often (d = .16)\nThe increased likelihood that advanced search engine users will click further down the result list implies that they may be less trusting of the search engines' ability to rank the most relevant document first, that they are more willing to explore beyond the most popular pages for a given query, that they may be submitting different types of queries (e.g., informational rather than navigational), or 
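The statistical procedure above (Bonferroni-corrected t-tests followed by effect sizes) is standard; a minimal sketch of the effect-size step, with illustrative data, is shown below. The pooled-standard-deviation formula is the conventional one; the paper does not spell out its exact variant.

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled
    (sample) standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                          / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

# Bonferroni correction: divide the .05 alpha by the 7 features tested.
ALPHA = 0.05 / 7  # ~ .007, as in the paper
```

With large samples, almost any mean difference is statistically significant, which is exactly why the paper reports d values rather than p values alone.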
that they may have customized their search settings to display more than only the default top-10 results.\nMany of the findings listed are consistent with those identified in other studies of advanced searchers' querying and result-click behaviors [13] [34].\nGiven that the only criterion we employed to classify a user as an advanced searcher was their use of advanced syntax, it is certainly promising that this criterion seems to identify users that interact in a way consistent with that reported previously for those with more search expertise.\nAs mentioned earlier, the advanced search engine users for whom the average values shown in Table 4 are computed are those who submit 50 or more queries in the 13-week duration of the data collection and submit at least one query containing advanced query operators.\nIn other words, we consider users whose percentage of queries containing advanced syntax, padvanced, is greater than zero.\nThe use of query operators in any queries, regardless of frequency, suggests that a user knows about the existence of the operators, and implies a greater degree of familiarity with the search system.\nWe further hypothesized that users whose queries more frequently contained advanced syntax may be more advanced search engine users.\nTo test this we investigated varying the query threshold required to qualify for advanced status (padvanced).\nWe incremented padvanced one percentage point at a time, and recorded the values of the seven query and result-click features at each point.\nThe values of the features at four milestones (> 0%, > 25%, > 50%, and > 75%) are shown in Table 4.\nAs can be seen in the table, as padvanced increases, differences in the features between those using advanced syntax and those not using advanced syntax become more substantial.\nHowever, it is interesting to note that as padvanced increases, the number of queries submitted per day actually falls (Pearson's R = \u2212.512, t(98) = 5.98, p < .0001).\nMore advanced 
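The threshold sweep just described can be sketched as a small aggregation; the user identifiers and feature values below are hypothetical stand-ins for the log data, not figures from the paper.

```python
def feature_mean_by_threshold(users, thresholds=(0.0, 0.25, 0.50, 0.75)):
    """For each padvanced threshold, average one feature over the users
    whose fraction of operator queries exceeds that threshold.
    `users` maps a user id to a (p_advanced, feature_value) pair."""
    out = {}
    for t in thresholds:
        vals = [f for p, f in users.values() if p > t]
        out[t] = sum(vals) / len(vals) if vals else None
    return out
```

Sweeping the threshold one percentage point at a time, as the paper does, is the same computation with a finer `thresholds` grid.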
users may need to pose fewer queries to find relevant information.\nTo study the patterns of relationship among these dependent variables (including padvanced), we applied factor analysis [26].\nTable 5 shows the intercorrelation matrix between the features and the percentage of queries with operators (padvanced).\nEach cell in the table contains the Pearson's correlation coefficient between the two features for a given row-column pair.\nTable 5.\nIntercorrelation matrix (query \/ result-click features).\nIt is only the first data column and row that reflect the correlations between padvanced and the other query and result-click features.\nColumns 2--8 show the intercorrelations between the other features.\nThere are strong positive correlations between some of the features (e.g., the number of words in the query (QWL) and the average probability of clicking on a search result (ACP)).\nHowever, there were also fairly strong negative correlations between some features (e.g., the average length of the queries (QWL) and the probability of clicking on a search result (CP)).\nThe factor analysis revealed the presence of two factors that account for 83.6% of the variance.\nAs is standard practice in factor analysis, all features with an absolute factor loading of .30 or less were removed.\nThe two factors that emerged, with their respective loadings, can be expressed as:\nVariance in the query and result-click behavior of our advanced search engine users can be expressed using these two constructs.\nFactor A is the most powerful, contributing 50.5% of the variance.\nIt appears to represent a very basic dimension of variance that covers query attributes and querying behavior, and suggests a relationship between query properties (length, frequency, complexity, and repetition) and the position of users' clicks in the result list.\nThe dimension underlying Factor B accounts for 33.1% of the variance, and describes 
attributes of result-click behavior, and a strong correlation between result clicks and the number of queries submitted each day.\nSummary: In this section we have shown that there are marked differences in aspects of the querying and result-clickthrough behaviors of advanced users relative to non-advanced users.\nWe have also shown that the greater the proportion of queries that contain advanced syntax, the larger the differences in query and clickthrough behaviors become.\nA factor analysis revealed the presence of two dimensions that adequately characterize variance in the query and result-click features.\nIn the querying dimension, query attributes (such as query length and the proportion of queries containing advanced syntax) and querying behavior (such as the number of queries submitted per day) both affect result-click position.\nIn addition, in the result-click dimension, it appears that daily querying frequency influences result-click features such as the likelihood that a user will click on a search result and the amount of time between result presentation and the search result click.\nThe features used in this section capture only direct interactions with search engines, in the form of queries and result clicks.\nWe did not address how users searched for information beyond the result page.\nIn the next section we use the search trails described in Section 4 to analyze the post-query browsing behavior of users.\n5.2 Post-query browsing behavior\nIn this section we look at several attributes of the search trails users followed beyond the results page in an attempt to discern whether the use of advanced search syntax can be used as a predictor of aspects of post-query interaction behavior.\nAs we did previously, we first describe the mean average values for each of the browsing features, across all advanced users (i.e. 
padvanced > 0%), all non-advanced users (i.e., padvanced = 0%), and all users regardless of their estimated search expertise level.\nWe then look at the effect on the browsing features of increasing the value of padvanced required to be considered \"advanced\" from 1% to 100%.\nIn Table 6 we present the average values for each of these features for the two groups of users.\nAlso shown are the percentage of search trails (% Trails) and the percentage of users (% Users) used to compute the averages.\nTable 6.\nPost-query browsing features (per trail).\nAs can be seen from Table 6, there are differences in the post-query interaction behaviors of advanced users (padvanced > 0%) relative to users who do not use query operators in any of their queries (padvanced = 0%).\nOnce again, the columns of interest in this comparison are bolded.\nAs we did in Section 5.1 for query and result-click behavior, we performed an independent measures t-test between the values reported for each of the post-query browsing features.\nThe results of this test suggest that differences between those that use advanced syntax and those that do not are significant (t(12495029) > 3.09, p \u2264 .002, \u03b1 = .008).\nGiven the sample sizes, all of the differences between means in the two groups were significant.\nHowever, we once again applied Cohen's d to determine the effect size.\nThe findings (ranked in descending order of effect size) show that relative to non-advanced users, advanced search engine users:\n\u2022 Revisit pages in the trail less often (d = .45)\n\u2022 Spend less time traversing each search trail (d = .38)\n\u2022 Spend less time viewing each document (d = .28)\n\u2022 \"Branch\" (i.e., proceed to new pages following a back operation) less often (d = .18)\n\u2022 Follow search trails with fewer steps (d = .16)\nIt seems that advanced users use a more directed searching style than non-advanced users.\nThey spend less time 
following search trails and view the documents that lie on those trails for less time.\nThis is in accordance with our earlier proposition that advanced users seem able to discern document relevance in less time.\nAdvanced users also tend to deviate less from a direct path as they search, with fewer revisits to previously-visited pages and less branching during their searching.\nAs we did in the previous section, we increased the padvanced threshold one point at a time.\nWith the exception of the number of back operations (NB), the values attributable to each of the features change as padvanced increased.\nIt seems that the differences noted earlier between non-advanced users and those that use any advanced syntax become more pronounced as padvanced increases.\nAs in the previous section, we conducted a factor analysis of these features and padvanced.\nTable 7 shows the intercorrelation matrix for all these variables.\nTable 7.\nIntercorrelation matrix (post-query browsing).\nAs the proportion of queries containing advanced syntax increases, the values of many of the post-query browsing features decrease.\nOnly the average session time (SS) exhibits a strong positive correlation with padvanced.\nThe factor analysis revealed the presence of two factors that account for 89.8% of the variance.\nOnce again, all features with an absolute factor loading of .30 or less were removed.\nThe two factors that emerged, with their respective loadings, can be expressed as:\nFactor A = .95 (DS) + .88 (TS) - .91 (SS) - .95 (padv)\nFactor B = .99 (NBA) + .93 (NS) + .91 (NB)\nVariance in the post-query browsing behavior of those who use query operators can be expressed using these two constructs.\nFactor A is the most powerful, contributing 50.1% of the variance.\nIt appears to represent a very basic temporal dimension that covers timing and percentage of queries with advanced syntax, and suggests a negative relationship between time spent searching and overall session time, and a 
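As a sanity check on loadings like those above, the share of total variance a factor explains can be recovered as the sum of its squared loadings divided by the number of observed variables. This is a standard factor-analysis identity; the values tested below are illustrative, not the paper's.

```python
def pct_variance_explained(loadings, n_variables):
    """Percent of total variance captured by one factor: the sum of
    squared loadings divided by the number of observed variables."""
    return 100.0 * sum(l * l for l in loadings) / n_variables

# E.g., a factor loading 1.0 on one of two variables explains 50%
# of the total variance.
```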
negative relationship between time spent searching and padvanced.\nThe navigation dimension underlying Factor B accounts for 39.7% of the variance, and describes attributes of post-query navigation, all of which seem to be strongly correlated with each other but not padvanced or timing.\nSummary: In this section we have shown that advanced users' post-query browsing behavior appears more directed than that of non-advanced users.\nAlthough their search sessions are longer, advanced users follow fewer search trails during their sessions (i.e., submit fewer queries), their search trails are shorter, and their trails exhibit fewer deviations or regressions to previously encountered pages.\nWe also showed that as padvanced increases, session time increases (perhaps more advanced users are multitasking between search and other operations), and search interaction becomes more focused, perhaps because advanced users are able to target relevant information more effectively, with less need for regressions or deviations in their search trails.\nAs well as interaction behaviors such as queries, result clicks, and post-query browsing behavior, another important aspect of the search process is the attainment of information relevant to the query.\nIn the next section we analyze the success of advanced and non-advanced users in obtaining relevant information.\n5.3 Search success\nAs described earlier, we used six-level relevance judgments assigned to query-document pairs as an approximate measure of search success based on documents encountered on search trails.\nHowever, the queries for which we have judgments generally did not contain advanced operators.\nTo maximize the likelihood of coverage we removed advanced operators from all queries when retrieving the relevance judgments.\nThe mean average relevance judgment values for each of the four metrics--first, last, average, and maximum--are shown in Table 8 for non-advanced users (0%) and advanced users (> 0%).\nTable 8.\nSearch 
success (min. = 1, max. = 6) (per trail).\nThe findings suggest that users who use advanced syntax at all (padvanced > 0%) were more successful--across all four measures--than those who never used advanced syntax (padvanced = 0%).\nNot only were these users more successful in their searching, but they were consistently more successful (i.e., the standard deviation in relevance scores is lower for advanced users and continues to drop as padvanced increases).\nThe differences in the four mean average relevance scores for each metric between these two user groups were significant with independent measures t-tests (all t(516765) > 3.29, p < .001, \u03b1 = .0125).\nAs we increase the value of padvanced as in previous sections, the average relevance score across all metrics also increases (all Pearson's R > .654), suggesting that more advanced users are also more likely to succeed in their searching.\nThe searchers that use advanced operators may have additional skills in locating relevant information, or may know where this information resides based on previous experience.3 Despite the fact that the four metrics targeted different parts of the search trail (e.g., first vs. last) or different ways to gather relevant information (e.g., average vs. 
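The four per-trail success measures (first, last, average, and maximum relevance) can be sketched as below. Judged scores on the six-point scale are assumed to be given; pages without judgments are simply skipped, which is one plausible handling among several and is our assumption rather than the paper's stated procedure.

```python
def trail_success(scores):
    """First, last, average, and maximum relevance score (1-6 scale)
    over the judged pages of one search trail. `None` entries mark
    pages with no relevance judgment and are ignored."""
    judged = [s for s in scores if s is not None]
    if not judged:
        return None
    return {
        'first': judged[0],
        'last': judged[-1],
        'average': sum(judged) / len(judged),
        'maximum': max(judged),
    }
```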
maximum), the differences between groups and within the advanced group were consistent.\nTo see whether there were any differences in the nature of the queries submitted by advanced search engine users, we studied the distribution of the four advanced operators: quotation marks, plus, minus, and \"site:\".\nIn Table 9 we show how these operators were distributed in all queries submitted by these users.\nTable 9.\nDistribution of query operators.\nThe distribution of the quotes, plus, and minus operators is similar amongst the four levels of padvanced, with quotes being the most popular of the four operators used.\nHowever, it appears that the plus operator is the main differentiator between the padvanced user groups.\nThis operator, which forces the search engine to include query terms that are usually excluded by default (e.g., \"the\", \"a\"), may account for some portion of the difference in observed search success.4 However, this does not capture the contribution each operator makes to relevance relative to the same query issued without it.\nTo gain some insight into this, we examined the impact that each of the operators had on the relevance of retrieved results.\nWe focused on queries from users with padvanced > 0% where the same user had issued a query without operators and the same query with operators either before or afterwards.\nAlthough there were few queries with matching pairs--and almost all of them contained quotes--there was a small (approximately 10%) increase in the average relevance judgment score assigned to documents on trails whose initial query contained quotes.\nIt may be the case that quoted queries led to retrieval of more relevant documents, or that they better match the perceived needs of relevance judges and therefore lead to judged documents receiving higher scores.\nMore analysis similar to [8] is required to test these propositions further.\nSummary: In this section we have used several measures to study the search 
success of advanced and non-advanced users. The findings of our analysis suggest that advanced search engine users are more successful and more consistent in the relevance of the pages they visit. Their additional search expertise may make them better able to decide which documents to view, meaning they encounter consistently more relevant information during their searches. In addition, within the group of advanced users there is a strong correlation between padvanced and the degree of search success. Advanced search engine users may be more adept at combining query operators to formulate powerful query statements. We now discuss the findings from all three subsections and their implications for the design of improved Web search systems. 4 It is worth noting that there were no significant differences in the distribution of usage of the three search engines--Google, Yahoo!, or Windows Live Search--amongst advanced search engine users, or between advanced and non-advanced users. 6. DISCUSSION AND IMPLICATIONS Our findings indicate significant differences in the querying, result-click, post-query navigation, and search success of those that use advanced syntax versus those that do not. Many of these findings mirror those already reported in previous studies with groups of self-identified novices and experts [13][19]. There are several ways in which a commercial search engine might benefit from a quantitative indication of searcher expertise. This might be yet another feature available to a ranking engine; i.e.
it may be the case that expert searchers in some cases prefer different pages than novice searchers. The user interface to a search engine might be tailored to a user's expertise level; perhaps even more advanced features such as term weighting and query expansion suggestions could be presented to more experienced searchers while preserving the simplicity of the basic interface for novices. Result presentation might also be customized based on search skill level; future work might re-evaluate the benefits of content snippets, thumbnails, etc. in a manner that allows different outcomes for different expertise levels. Additionally, if browsing histories are available, the destinations of advanced searchers could be used as suggested results for queries, bypassing and potentially improving upon the traditional search process [10]. Using the interactions of advanced search engine users to guide others with less expertise is an attractive proposition for the designers of search systems. In part, these searchers may have more post-query browsing expertise that allows them to overcome the shortcomings of search systems [29]. Their interactions can be used to point users to places that advanced search engine users visit [32] or simply to train less experienced searchers how to search more effectively. However, if expert users are going to be used in this way, issues of data sparsity will need to be overcome. Our advanced users only accounted for 20.1% of the users whose interactions we studied. Whilst these may be amongst the most active users, it is unlikely that they will view documents that cover a large number of subject areas. However, rather than focusing on where they go (which is perhaps more appropriate for those with domain knowledge), advanced search engine users may use moves, tactics, and strategies [2] that inexperienced users can learn from. Encouraging users to use advanced syntax helps them learn how to formulate better search queries;
leveraging the searching style of expert searchers could help them learn more successful post-query interactions. One potential limitation of the results we report is that prior research has shown that query operators do not significantly improve the effectiveness of Web search results [8], and that searchers may be able to perform just as well without them [27]. It could therefore be argued that the users who do not use query operators are in fact more advanced, since they do not waste time using potentially redundant syntax in their query statements. However, this seems unlikely given that those who use advanced syntax exhibited search behaviors typical of users with expertise [13], and are more successful in their searching. In future work we will expand our definition of "advanced user" beyond attributes of the query to also include other interaction behaviors, some of which we have defined in this study, and other avenues of research such as eye-tracking [12]. SIGIR 2007 Proceedings Session 11: Interaction 7. CONCLUSIONS In this paper we have described a log-based study of search behavior on the Web that has demonstrated that the use of advanced search syntax is correlated with other aspects of search behavior such as querying, result clickthrough, post-query navigation, and search success. Those that use this syntax are active online for longer, spend less time querying and traversing search trails, exhibit less deviation in their trails, are more likely to explore search results, take less time to click on results, and are more successful in their searching. These are all traits that we would expect expert searchers to exhibit. Crude classification of users based on just one feature that is easily extractable from the query stream yields remarkable results about the interaction behavior of users that do and do not use the syntax. As we have suggested, search systems may leverage the interactions of these users
for improved document ranking, page recommendation, or even user training.\nFuture work will include the development of search interfaces and modified retrieval engines that make use of these information-rich features, and further investigation into the use of these features as indicators of search expertise, including a cross-correlation analysis between result click and post-query behavior.","keyphrases":["queri","search engin","search success","relev inform","relev","search strategi","toler latenc","advanc syntax","navig behavior","search behavior","relev feedback","queri syntax","advanc search featur","expert search"],"prmu":["P","P","P","P","P","P","U","M","M","R","M","M","M","M"]} {"id":"H-18","title":"Topic Segmentation with Shared Topic Detection and Alignment of Multiple Documents","abstract":"Topic detection and tracking [26] and topic segmentation [15] play an important role in capturing the local and sequential information of documents. Previous work in this area usually focuses on single documents, although similar multiple documents are available in many domains. In this paper, we introduce a novel unsupervised method for shared topic detection and topic segmentation of multiple similar documents based on mutual information (MI) and weighted mutual information (WMI) that is a combination of MI and term weights. The basic idea is that the optimal segmentation maximizes MI (or WMI). Our approach can detect shared topics among documents. It can find the optimal boundaries in a document, and align segments among documents at the same time. It also can handle single-document segmentation as a special case of the multi-document segmentation and alignment. Our methods can identify and strengthen cue terms that can be used for segmentation and partially remove stop words by using term weights based on entropy learned from multiple documents. 
Our experimental results show that our algorithm works well for the tasks of single-document segmentation, shared topic detection, and multi-document segmentation. Utilizing information from multiple documents can tremendously improve the performance of topic segmentation, and using WMI is even better than using MI for the multi-document segmentation.\",\"lvl-1\":\"Topic Segmentation with Shared Topic Detection and Alignment of Multiple Documents Bingjun Sun*, Prasenjit Mitra*†, Hongyuan Zha‡, C. Lee Giles*†, John Yen*† *Department of Computer Science and Engineering †College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802 ‡College of Computing, The Georgia Institute of Technology, Atlanta, GA 30332 *bsun@cse.psu.edu, †{pmitra,giles,jyen}@ist.psu.edu, ‡zha@cc.gatech.edu ABSTRACT Topic detection and tracking [26] and topic segmentation [15] play an important role in capturing the local and sequential information of documents. Previous work in this area usually focuses on single documents, although similar multiple documents are available in many domains. In this paper, we introduce a novel unsupervised method for shared topic detection and topic segmentation of multiple similar documents based on mutual information (MI) and weighted mutual information (WMI), which is a combination of MI and term weights. The basic idea is that the optimal segmentation maximizes MI (or WMI). Our approach can detect shared topics among documents. It can find the optimal boundaries in a document, and align segments among documents at the same time. It also can handle single-document segmentation as a special case of the multi-document segmentation and alignment. Our methods can identify and strengthen cue terms that can be used for segmentation and partially remove stop words by using term weights based on entropy learned from multiple documents. Our experimental results show that our
algorithm works well for the tasks of single-document segmentation, shared topic detection, and multi-document segmentation. Utilizing information from multiple documents can tremendously improve the performance of topic segmentation, and using WMI is even better than using MI for the multi-document segmentation. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Clustering; H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing - Linguistic processing; I.2.7 [Artificial Intelligence]: Natural Language Processing - Text analysis; I.5.3 [Pattern Recognition]: Clustering - Algorithms; Similarity measures General Terms Algorithms, Design, Experimentation 1. INTRODUCTION Many researchers have worked on topic detection and tracking (TDT) [26] and topic segmentation during the past decade. Topic segmentation intends to identify the boundaries in a document with the goal of capturing its latent topical structure. Topic segmentation tasks usually fall into two categories [15]: text stream segmentation, where topic transitions are identified, and coherent document segmentation, in which documents are split into sub-topics. The former category has applications in automatic speech recognition, while the latter has more applications, such as partial-text query of long documents in information retrieval, text summarization, and quality measurement of multiple documents. Previous research in connection with TDT falls into the former category, targeted at topic tracking of broadcast speech data and newswire text, while the latter category has not been studied very well. Traditional approaches perform topic segmentation on documents one at a time [15, 25, 6]. Most of them perform badly in subtle tasks like coherent document segmentation [15]. Often, end-users seek documents that have similar content. Search engines, like Google, provide links to obtain similar pages. At a finer granularity, users may
actually be looking to obtain sections of a document similar to a particular section that presumably discusses a topic of the user's interest. Thus, the extension of topic segmentation from single documents to identifying similar segments from multiple similar documents on the same topic is a natural and necessary direction, and multi-document topic segmentation is expected to perform better since more information is utilized. Traditional approaches using similarity measurement based on term frequency generally share the assumption that similar vocabulary tends to be in a coherent topic segment [15, 25, 6]. However, they usually suffer from the problem of identifying stop words. For example, additional document-dependent stop words are removed together with the generic stop words in [15]. There are two reasons that we do not remove stop words directly. First, identifying stop words is another issue [12] that requires estimation in each domain. Removing common stop words may result in the loss of useful information in a specific domain. Second, even though stop words can be identified, hard classification of stop words and non-stop words cannot represent the gradually changing amount of information content of each word. We employ a soft classification using term weights. In this paper, we view the problem of topic segmentation as an optimization problem, using information-theoretic techniques to find the optimal boundaries of a document, given the number of text segments, so as to minimize the loss of mutual information (MI) (or a weighted mutual information (WMI)) after segmentation and alignment. This is equal to maximizing the MI (or WMI). The MI focuses on measuring the difference among segments, whereas previous research focused on finding the similarity (e.g.
cosine distance) of segments [15, 25, 6]. Topic alignment of multiple similar documents can be achieved by clustering sentences on the same topic into the same cluster. Single-document topic segmentation is just a special case of the multi-document topic segmentation and alignment problem. Terms can be co-clustered as in [10] at the same time, given the number of clusters, but our experimental results show that this method results in a worse segmentation (see Tables 1, 4, and 6). Usually, human readers can identify topic transitions based on cue words, and can ignore stop words. Inspired by this, we give each term (or term cluster) a weight based on entropy among different documents and different segments of documents. Not only can this approach increase the contribution of cue words, but it can also decrease the effect of common stop words, noisy words, and document-dependent stop words. These words are common in a document. Many methods based on sentence similarity require that these words be removed before topic segmentation can be performed [15]. Our results in Figure 3 show that term weights are useful for multi-document topic segmentation and alignment. The major contribution of this paper is that it introduces a novel method for topic segmentation using MI and shows that this method performs better than previously used criteria. Also, we have addressed the problem of topic segmentation and alignment across multiple documents, whereas most existing research focused on segmentation of single documents. Multi-document segmentation and alignment can utilize information from similar documents and improve the performance of topic segmentation greatly. Obviously, our approach can handle single documents as a special case when multiple documents are unavailable. It can detect shared topics among documents to judge whether they are multiple documents on the same topic. We also introduce the new criterion of WMI based on term weights learned from multiple
similar documents, which can improve the performance of topic segmentation further. We propose an iterative greedy algorithm based on dynamic programming and show that it works well in practice. Some of our prior work is in [24]. The rest of this paper is organized as follows: In Section 2, we review related work. Section 3 contains a formulation of the problem of topic segmentation and alignment of multiple documents with term co-clustering, a review of the criterion of MI for clustering, and finally an introduction to WMI. In Section 4, we first propose the iterative greedy algorithm of topic segmentation and alignment with term co-clustering, and then describe how the algorithm can be optimized by using dynamic programming. (Figure 1: Illustration of multi-document segmentation and alignment.) In Section 5, experiments on single-document segmentation, shared topic detection, and multi-document segmentation are described, and results are presented and discussed to evaluate the performance of our algorithm. Conclusions and some future directions of the research work are discussed in Section 6. 2. PREVIOUS WORK Generally, the existing approaches to text segmentation fall into two categories: supervised learning [19, 17, 23] and unsupervised learning [3, 27, 5, 6, 15, 25, 21]. Supervised learning usually has good performance, since it learns functions from labelled training sets. However, getting large training sets with manual labels on document sentences is often prohibitively expensive, so unsupervised approaches are desired. Some models consider dependence between sentences and sections, such as Hidden Markov Models [3, 27], Maximum Entropy Markov Models [19], and Conditional Random Fields [17], while many other approaches are based on lexical cohesion or similarity of sentences [5, 6, 15, 25, 21]. Some approaches also focus on cue words as hints of topic transitions [11]. While some existing methods only consider information in single documents [6,
15], others utilize multiple documents [16, 14]. There is not much work in the latter category, even though the performance of segmentation is expected to be better when information from multiple documents is utilized. Previous research studied methods to find shared topics [16] and topic segmentation and summarization between just a pair of documents [14]. Text classification and clustering is a related research area which categorizes documents into groups using supervised or unsupervised methods. Topical classification or clustering is an important direction in this area, especially co-clustering of documents and terms, such as LSA [9], PLSA [13], and approaches based on distances and bipartite graph partitioning [28], maximum MI [2, 10], or maximum entropy [1, 18]. The criteria of these approaches can be utilized for topic segmentation. Some of those methods have been extended to topic segmentation, such as PLSA [5] and maximum entropy [7], but to the best of our knowledge, using MI for topic segmentation has not been studied. 3. PROBLEM FORMULATION Our goal is to segment documents and align the segments across documents (Figure 1). Let T be the set of terms {t_1, t_2, ..., t_l} which appear in the unlabelled set of documents D = {d_1, d_2, ..., d_m}. Let S_d be the set of sentences for document d ∈ D, i.e. {s_1, s_2, ..., s_{n_d}}. We have a 3D matrix of term frequency, in which the three dimensions are random variables of D, S_d, and T. S_d actually is a random vector including a random variable for each d ∈ D. The term frequency can be used to estimate the joint probability distribution P(D, S_d, T), which is p(t, d, s) = T(t, d, s)/N_D, where T(t, d, s) is the number of occurrences of t in d's sentence s and N_D is the total number of terms in D.
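The estimation of the joint distribution above is straightforward to sketch in code. The following is a minimal illustration, not the authors' code: the container layout (documents as lists of sentences, sentences as lists of already-stemmed terms) and the function name are our own assumptions.

```python
from collections import Counter

def estimate_joint(docs):
    """Estimate the joint distribution P(D, S_d, T) from raw counts.

    `docs` is a list of documents, each a list of sentences, each a list
    of already-stemmed terms (this layout is an assumption for the sketch).
    Returns p(t, d, s) = T(t, d, s) / N_D.
    """
    counts = Counter()
    total = 0  # N_D, the total number of term occurrences in D
    for d, sentences in enumerate(docs):
        for s, sentence in enumerate(sentences):
            for t in sentence:
                counts[(t, d, s)] += 1  # T(t, d, s)
                total += 1
    return {key: c / total for key, c in counts.items()}

# Tiny example: two documents, already sentence-split and stemmed.
docs = [[["topic", "segment"], ["topic", "align"]],
        [["segment", "align", "align"]]]
p = estimate_joint(docs)  # probabilities sum to 1 over all (t, d, s)
```

The resulting dictionary plays the role of the 3D term-frequency matrix; marginalizing it over one or two dimensions gives the conditional distributions used later.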
\hat{S} represents the set of segments {\hat{s}_1, \hat{s}_2, ..., \hat{s}_p} after segmentation and alignment among multiple documents, where the number of segments |\hat{S}| = p. A segment \hat{s}_i of document d is a sequence of adjacent sentences in d. Since for different documents s_i may discuss different sub-topics, our goal is to cluster the adjacent sentences in each document into segments, and to align similar segments among documents, so that for different documents \hat{s}_i is about the same sub-topic. The goal is to find the optimal topic segmentation and alignment mappings Seg_d(s_i) : {s_1, s_2, ..., s_{n_d}} → {\hat{s}_1, \hat{s}_2, ..., \hat{s}_p} and Ali_d(\hat{s}_i) : {\hat{s}_1, \hat{s}_2, ..., \hat{s}_p} → {\hat{s}_1, \hat{s}_2, ..., \hat{s}_p}, for all d ∈ D, where \hat{s}_i is the ith segment, with the constraint that only adjacent sentences can be mapped to the same segment, i.e. for d, {s_i, s_{i+1}, ..., s_j} → {\hat{s}_q}, where q ∈ {1, ..., p}, p is the number of segments, and if i > j, then \hat{s}_q is missing for d. After segmentation and alignment, the random vector S_d becomes an aligned random variable \hat{S}. Thus, P(D, S_d, T) becomes P(D, \hat{S}, T). Term co-clustering is a technique that has been employed [10] to improve the accuracy of document clustering. We evaluate its effect on topic segmentation. A term t is mapped to exactly one term cluster. Term co-clustering involves simultaneously finding the optimal term clustering mapping Clu(t) : {t_1, t_2, ..., t_l} → {\hat{t}_1, \hat{t}_2, ..., \hat{t}_k}, where k ≤ l, l is the total number of words in all the documents, and k is the number of clusters. 4. METHODOLOGY We now describe a novel algorithm which can handle single-document segmentation, shared topic detection, and multi-document segmentation and alignment based on MI or WMI. 4.1 Mutual Information The MI I(X; Y) is a quantity that measures the amount of information contained in two or more random variables [8,
10]. For the case of two random variables, we have

I(X; Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log \frac{p(x, y)}{p(x) p(y)}.   (1)

Obviously, when the random variables X and Y are independent, I(X; Y) = 0. Thus, intuitively, the value of MI depends on how dependent the random variables are on each other. The optimal co-clustering is the mapping Clu_x : X → \hat{X} and Clu_y : Y → \hat{Y} that minimizes the loss I(X; Y) − I(\hat{X}; \hat{Y}), which is equal to maximizing I(\hat{X}; \hat{Y}). This is the criterion of MI for clustering. In the case of topic segmentation, the two random variables are the term variable T and the segment variable S, and each sample is an occurrence of a term T = t in a particular segment S = s. I(T; S) is used to measure how dependent T and S are. However, I(T; S) cannot be computed for documents before segmentation, since we do not have a set of S: the sentences of document d, s_i ∈ S_d, are not aligned with other documents. Thus, instead of minimizing the loss of MI, we can maximize MI after topic segmentation, computed as:

I(\hat{T}; \hat{S}) = \sum_{\hat{t} \in \hat{T}} \sum_{\hat{s} \in \hat{S}} p(\hat{t}, \hat{s}) \log \frac{p(\hat{t}, \hat{s})}{p(\hat{t}) p(\hat{s})},   (2)

where p(\hat{t}, \hat{s}) is estimated from the term frequency tf of term cluster \hat{t} and segment \hat{s} in the training set D.
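Equation (2) can be sketched directly; the following minimal illustration (our own, not the paper's code) assumes the joint distribution over (term, segment) pairs is given as a dictionary, and computes MI in nats.

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """I(T; S) as in Equation (2): sum of p(t,s) * log[p(t,s) / (p(t)p(s))].

    `joint` maps (term, segment) pairs to probabilities summing to 1
    (a dictionary representation chosen for this sketch).
    """
    p_t = defaultdict(float)  # marginal p(t)
    p_s = defaultdict(float)  # marginal p(s)
    for (t, s), p in joint.items():
        p_t[t] += p
        p_s[s] += p
    return sum(p * math.log(p / (p_t[t] * p_s[s]))
               for (t, s), p in joint.items() if p > 0)

# Fully dependent term/segment variables give I = log 2 (in nats);
# independent ones give I = 0.
dependent = {("a", 0): 0.5, ("b", 1): 0.5}
independent = {("a", 0): 0.25, ("a", 1): 0.25,
               ("b", 0): 0.25, ("b", 1): 0.25}
```

The same function evaluates the objective for any candidate segmentation, since a segmentation only changes how occurrences are pooled into the (term, segment) cells.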
Note that here a segment \hat{s} includes the sentences about the same topic from all documents. The optimal solution is the mapping Clu_t : T → \hat{T}, Seg_d : S_d → \hat{S}, and Ali_d : \hat{S} → \hat{S} which maximizes I(\hat{T}; \hat{S}). 4.2 Weighted Mutual Information In topic segmentation and alignment of multiple documents, if P(D, \hat{S}, T) is known, then based on the marginal distributions P(D|T) and P(\hat{S}|T) for each term t ∈ T, we can categorize the terms in the data set into four types:
• Common stop words are common along both the document and segment dimensions.
• Document-dependent stop words, which depend on personal writing style, are common only along the segment dimension for some documents.
• Cue words are the most important elements for segmentation. They are common along the document dimension only for the same segment, and are not common along the segment dimension.
• Noisy words are the remaining words, which are not common along either dimension.
Entropy based on P(D|T) and P(\hat{S}|T) can be used to identify these different types of terms. To reinforce the contribution of cue words in the MI computation, and simultaneously reduce the effect of the other three types of words, similar to the idea of the tf-idf weight [22], we use the entropies of each term along the document dimension D and the segment dimension \hat{S}, i.e.
E_D(\hat{t}) and E_{\hat{S}}(\hat{t}), to compute the weight. A cue word usually has a large value of E_D(\hat{t}) but a small value of E_{\hat{S}}(\hat{t}). We introduce term weights (or term cluster weights)

w_{\hat{t}} = \left( \frac{E_D(\hat{t})}{\max_{\hat{t}' \in \hat{T}} E_D(\hat{t}')} \right)^a \left( 1 - \frac{E_{\hat{S}}(\hat{t})}{\max_{\hat{t}' \in \hat{T}} E_{\hat{S}}(\hat{t}')} \right)^b,   (3)

where

E_D(\hat{t}) = \sum_{d \in D} p(d|\hat{t}) \log_{|D|} \frac{1}{p(d|\hat{t})},  E_{\hat{S}}(\hat{t}) = \sum_{\hat{s} \in \hat{S}} p(\hat{s}|\hat{t}) \log_{|\hat{S}|} \frac{1}{p(\hat{s}|\hat{t})},

and a > 0 and b > 0 are powers to adjust the term weights. Usually a = 1 and b = 1 by default, and \max_{\hat{t}' \in \hat{T}} E_D(\hat{t}') and \max_{\hat{t}' \in \hat{T}} E_{\hat{S}}(\hat{t}') are used to normalize the entropy values. The term cluster weights are used to adjust p(\hat{t}, \hat{s}):

p_w(\hat{t}, \hat{s}) = \frac{w_{\hat{t}} p(\hat{t}, \hat{s})}{\sum_{\hat{t} \in \hat{T}; \hat{s} \in \hat{S}} w_{\hat{t}} p(\hat{t}, \hat{s})},   (4)

and

I_w(\hat{T}; \hat{S}) = \sum_{\hat{t} \in \hat{T}} \sum_{\hat{s} \in \hat{S}} p_w(\hat{t}, \hat{s}) \log \frac{p_w(\hat{t}, \hat{s})}{p_w(\hat{t}) p_w(\hat{s})},   (5)

where p_w(\hat{t}) and p_w(\hat{s}) are the marginal distributions of p_w(\hat{t}, \hat{s}). However, since we know neither the term weights nor P(D, \hat{S}, T), we need to estimate them; but w_{\hat{t}} depends on p(\hat{s}|t) and \hat{S}, while \hat{S} and p(\hat{s}|t) in turn depend on the still-unknown w_{\hat{t}}. Thus, an iterative algorithm is required to estimate the term weights w_{\hat{t}} and find the best segmentation and alignment that optimize the objective function I_w concurrently.

Input: Joint probability distribution P(D, S_d, T), number of text segments p ∈ {2, 3, ..., max(s_d)}, number of term clusters k ∈ {2, 3, ..., l} (if k = l, no term co-clustering is required), and weight type w ∈ {0, 1}, indicating whether to use I or I_w, respectively.
Output: Mappings Clu, Seg, Ali, and term weights w_{\hat{t}}.
Initialization:
0. i = 0. Initialize Clu_t^{(0)}, Seg_d^{(0)}, and Ali_d^{(0)}; initialize w_{\hat{t}}^{(0)} using Equation (6) if w = 1;
Stage 1:
1. If |D| = 1, k = l, and w = 0, check all sequential segmentations of d into p segments and find the best one, Seg_d(s) = argmax_{\hat{s}} I(\hat{T}; \hat{S}), and return Seg_d; otherwise, if w = 1 and k = l, go to 3.1;
Stage 2:
2.1 If k < l, for each term t, find the best cluster \hat{t} as Clu^{(i+1)}(t) = argmax_{\hat{t}} I(\hat{T}; \hat{S}^{(i)}) based on Seg^{(i)} and Ali^{(i)};
2.2 For each d, check all sequential segmentations of d into p segments with mapping s → \hat{s} → \hat{s}, and find the best one, Ali_d^{(i+1)}(Seg_d^{(i+1)}(s)) = argmax_{\hat{s}} I(\hat{T}^{(i+1)}; \hat{S}), based on Clu^{(i+1)}(t) if k < l or Clu^{(0)}(t) if k = l;
2.3 i++. If Clu, Seg, or Ali changed, go to 2.1; otherwise, if w = 0, return Clu^{(i)}, Seg^{(i)}, and Ali^{(i)}; else set j = 0 and go to 3.1;
Stage 3:
3.1 Update w_{\hat{t}}^{(i+j+1)} based on Seg^{(i+j)}, Ali^{(i+j)}, and Clu^{(i)} using Equation (3);
3.2 For each d, check all sequential segmentations of d into p segments with mapping s → \hat{s} → \hat{s}, and find the best one, Ali_d^{(i+j+1)}(Seg_d^{(i+j+1)}(s)) = argmax_{\hat{s}} I_w(\hat{T}^{(i)}; \hat{S}), based on Clu^{(i)} and w_{\hat{t}}^{(i+j+1)};
3.3 j++. If I_w(\hat{T}; \hat{S}) changed, go to Step 3.1; otherwise, stop and return Clu^{(i)}, Seg^{(i+j)}, Ali^{(i+j)}, and w_{\hat{t}}^{(i+j)};
Figure 2: Algorithm: topic segmentation and alignment based on MI or WMI.

After a document is segmented into sentences and each sentence is segmented into words, each word is stemmed. Then the joint probability distribution P(D, S_d, T) can be estimated. Finally, this distribution can be used to compute the MI in our algorithm. 4.3 Iterative Greedy Algorithm Our goal is to maximize the objective function, I(\hat{T}; \hat{S}) or I_w(\hat{T}; \hat{S}), which measures the dependence of term occurrences in the different segments. Generally, we do not know the term weights at first, since they depend on the optimal topic segmentation and alignment, and on the term clusters. Moreover, this
problem is NP-hard [10], even if we know the term weights. Thus, an iterative greedy algorithm is desired to find the best solution, even though probably only local maxima are reached. We present the iterative greedy algorithm in Figure 2, which finds a local maximum of I(\hat{T}; \hat{S}) or I_w(\hat{T}; \hat{S}) with simultaneous term weight estimation. This algorithm is iterative and greedy for multi-document cases, and for single-document cases with term weight estimation and/or term co-clustering. Otherwise, since it is just a one-step algorithm for the task of single-document segmentation [6, 15, 25], the global maximum of MI is guaranteed. We will show later that term co-clustering reduces the accuracy of the results and is not necessary, and that for single-document segmentation, term weights are also not required. 4.3.1 Initialization In Step 0, the initial term clustering Clu_t and the initial topic segmentation and alignment Seg_d and Ali_d are important to avoid local maxima and to reduce the number of iterations. First, a good guess of the term weights can be made by using the distributions of term frequency along the sentences of each document and averaging them to get the initial values of w_{\hat{t}}:

w_t = \left( \frac{E_D(t)}{\max_{t' \in T} E_D(t')} \right) \left( 1 - \frac{E_S(t)}{\max_{t' \in T} E_S(t')} \right),   (6)

where

E_S(t) = \frac{1}{|D_t|} \sum_{d \in D_t} \left( 1 - \sum_{s \in S_d} p(s|t) \log_{|S_d|} \frac{1}{p(s|t)} \right),

and D_t is the set of documents which contain Term t.
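As a concrete sketch of Equation (6), the following computes the initial term weights from raw text. This is our own minimal illustration, not the authors' code: the document layout (lists of sentences of stemmed terms), the function name, and the convention that single-sentence documents contribute zero within-document entropy are all assumptions, and the sketch requires |D| > 1 because of the log base |D|.

```python
import math
from collections import Counter

def initial_term_weights(docs):
    """Initial weights per Equation (6): a term scores high when it is
    spread across documents (high E_D) yet concentrated within each
    document's sentences (low within-document entropy).

    `docs`: list of documents, each a list of sentences (lists of stemmed
    terms); requires more than one document.
    """
    n_docs = len(docs)
    doc_counts, sent_counts = {}, {}
    for d, sentences in enumerate(docs):
        for s, sentence in enumerate(sentences):
            for t in sentence:
                doc_counts.setdefault(t, Counter())[d] += 1
                sent_counts.setdefault(t, Counter())[(d, s)] += 1

    E_D, E_S = {}, {}
    for t, dc in doc_counts.items():
        total = sum(dc.values())
        # E_D(t) = sum_d p(d|t) log_{|D|} 1/p(d|t)
        E_D[t] = sum((c / total) * math.log(total / c, n_docs)
                     for c in dc.values())
        # E_S(t) = (1/|D_t|) sum_{d in D_t} (1 - sum_s p(s|t) log_{|S_d|} 1/p(s|t))
        acc = 0.0
        for d in dc:
            n_sents = len(docs[d])
            in_d = {s: c for (dd, s), c in sent_counts[t].items() if dd == d}
            tot = sum(in_d.values())
            ent = (sum((c / tot) * math.log(tot / c, n_sents)
                       for c in in_d.values()) if n_sents > 1 else 0.0)
            acc += 1.0 - ent
        E_S[t] = acc / len(dc)

    max_ed = max(E_D.values()) or 1.0  # guard against all-zero entropies
    max_es = max(E_S.values()) or 1.0
    return {t: (E_D[t] / max_ed) * (1.0 - E_S[t] / max_es) for t in E_D}

docs = [[["alpha", "beta"], ["alpha", "gamma"]],
        [["alpha", "beta"], ["delta", "beta"]]]
weights = initial_term_weights(docs)
```

In this toy corpus, "alpha" (spread over both documents) receives a higher initial weight than "gamma" (confined to one document), matching the intended behavior of Equation (6).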
Then, for the initial segmentation Seg^{(0)}, we can simply segment the documents equally by sentences. Or we can find the optimal segmentation for each document d alone which maximizes the WMI, Seg_d^{(0)} = argmax_{\hat{s}} I_w(T; \hat{S}), where w = w_{\hat{t}}^{(0)}. For the initial alignment Ali^{(0)}, we can first assume that the order of segments is the same for each d. For the initial term clustering Clu^{(0)}, the cluster labels can first be set randomly, and after the first pass of Step 3, a good initial term clustering is obtained. 4.3.2 Different Cases After initialization, there are three stages for the different cases. In total there are eight cases: |D| = 1 or |D| > 1, k = l or k < l, w = 0 or w = 1. Single-document segmentation without term clustering and term weight estimation (|D| = 1, k = l, w = 0) only requires Stage 1 (Step 1). If term clustering is required (k < l), Stage 2 (Steps 2.1, 2.2, and 2.3) is executed iteratively. If term weight estimation is required (w = 1), Stage 3 (Steps 3.1, 3.2, and 3.3) is executed iteratively. If both are required (k < l, w = 1), Stages 2 and 3 run one after the other. For multi-document segmentation without term clustering and term weight estimation (|D| > 1, k = l, w = 0), only the iteration of Steps 2.2 and 2.3 is required. At Stage 1, the global maximum can be found based on I(\hat{T}; \hat{S}) using dynamic programming, as described in Section 4.4. Simultaneously finding a good term clustering and estimated term weights is impossible, since when moving a term to a new term cluster to maximize I_w(\hat{T}; \hat{S}), we do not know whether the weight of this term should be that of the new cluster or of the old cluster. Thus, we first do term clustering at Stage 2, and then estimate term weights at Stage 3. At Stage 2, Step 2.1 finds the best term clustering and Step 2.2 finds the best segmentation. This cycle is repeated until it converges to a local maximum based on the MI I. The two steps are: (1) based on the current term clustering Clu_{\hat{t}}, for each document d, the algorithm segments all the sentences S_d into p segments sequentially (some segments may be empty), and puts them into the p segments \hat{S} of the whole training set D (all possible cases of different segmentations Seg_d and alignments Ali_d are checked) to find the optimal case, and (2) based on the current segmentation and alignment, for each term t, the algorithm finds the best term cluster of t based on the current segmentation Seg_d and alignment Ali_d. After finding a good term clustering, the term weights are estimated if w = 1. At Stage 3, similar to Stage 2, Step 3.1 re-estimates the term weights and Step 3.2 finds a better segmentation. They are repeated until convergence to a local maximum based on the WMI I_w. However, if the term clustering in Stage 2 is not accurate, then the term weight estimation at Stage 3 may give a bad result. Finally, at Step 3.3, the algorithm converges and returns the output. This algorithm can handle both single-document and multi-document segmentation. It can also detect shared topics among documents by checking the proportion of overlapping sentences on the same topics, as described in Section 5.2. 4.4 Algorithm Optimization In many previous works on segmentation, dynamic programming is a technique used to maximize the objective function. Similarly, at Steps 1, 2.2, and 3.2 of our algorithm, we can use dynamic programming. For Stage 1, dynamic programming can still find the global optimum, but for Stages 2 and 3, we can only find the optimum for each step of topic segmentation and alignment of a document. Here we only show the dynamic programming for Step 3.2 using WMI (Steps 1 and 2.2 are similar, but they can use either I or I_w). There are two cases that are not shown in the algorithm in Figure 2: (a) single-document segmentation or multi-document segmentation with the same sequential order of segments, where alignment is not required, and (b) multi-document segmentation with
different sequential orders of segments, where alignment is necessary.\nThe alignment mapping function of the former case is simply Ali_d(ŝ_i) = ŝ_i, while for the latter case it is Ali_d(ŝ_i) = ŝ_j, where i and j may be different.\nThe computational steps for the two cases are listed below:\nCase 1 (no alignment): For each document d: (1) Compute p_w(t̂), partial p_w(t̂, ŝ), and partial p_w(ŝ) without counting sentences from d.\nThen put sentences s_i through s_j into Segment k and compute the partial WMI PI_w(T̂; ŝ_k(s_i, s_{i+1}, ..., s_j)) = Σ_{t̂∈T̂} p_w(t̂, ŝ_k) log [p_w(t̂, ŝ_k) / (p_w(t̂) p_w(ŝ_k))], where Ali_d(s_i, s_{i+1}, ..., s_j) = k, k ∈ {1, 2, ..., p}, 1 ≤ i ≤ j ≤ n_d, and Seg_d(s_q) = ŝ_k for all i ≤ q ≤ j.\n(2) Let M(s_m, 1) = PI_w(T̂; ŝ_1(s_1, s_2, ..., s_m)).\nThen M(s_m, L) = max_i [M(s_{i−1}, L − 1) + PI_w(T̂; ŝ_L(s_i, ..., s_m))], where 0 ≤ m ≤ n_d, 1 < L < p, and 1 ≤ i ≤ m + 1; when i > m, no sentences are put into ŝ_L when computing PI_w (note that PI_w(T̂; ŝ(s_i, ..., s_m)) = 0 for single-document segmentation).\n(3) Finally, M(s_{n_d}, p) = max_i [M(s_{i−1}, p − 1) + PI_w(T̂; ŝ_p(s_i, ..., s_{n_d}))], where 1 ≤ i ≤ n_d + 1.\nThe optimal I_w is found, and the corresponding segmentation is the best.\nCase 2 (alignment required): For each document d: (1) Compute p_w(t̂), partial p_w(t̂, ŝ), partial p_w(ŝ), and PI_w(T̂; ŝ_k(s_i, s_{i+1}, ..., s_j)) as in Case 1.\n(2) Let M(s_m, 1, k) = PI_w(T̂; ŝ_k(s_1, s_2, ..., s_m)), where k ∈ {1, 2, ..., p}.\nThen M(s_m, L, k_L) = max_{i,j} [M(s_{i−1}, L − 1, k_L/j) + PI_w(T̂; ŝ_{Ali_d(ŝ_L)=j}(s_i, s_{i+1}, ..., s_m))], where 0 ≤ m ≤ n_d, 1 < L < p, 1 ≤ i ≤ m + 1, and k_L ∈ Set(p, L), the set of all p!/(L!(p − L)!) combinations of L segments chosen from the p segments; j ∈ k_L; and k_L/j is the combination of the L − 1 segments in k_L excluding Segment j.\n(3) Finally, M(s_{n_d}, p, k_p) = max_{i,j} [M(s_{i−1}, p − 1, k_p/j) + PI_w(T̂; ŝ_{Ali_d(ŝ_p)=j}(s_i, s_{i+1}, ..., s_{n_d}))], where k_p is simply the combination of all p segments and 1 ≤ i ≤ n_d + 1.\nThis yields the optimal I_w, and the corresponding segmentation is the best.\nThe steps of Cases 1 and 2 are similar, except that in Case 2 alignment is considered in addition to segmentation.\nFirst, the basic probabilities for computing I_w are computed excluding document d, and then the partial WMI is computed by putting every possible sequential segment (including the empty segment) of d into every segment of the set.\nSecond, the optimal sum of PI_w for L segments and the leftmost m sentences, M(s_m, L), is found.\nFinally, the maximal WMI is found among the different sums of M(s_m, p − 1) and PI_w for Segment p.\n5.\nEXPERIMENTS\nIn this section, single-document segmentation, shared topic detection, and multi-document segmentation are tested.\nDifferent hyper-parameters of our method are studied.\nFor convenience, we refer to the method using I as MIk when w = 0, and to the method using I_w as WMIk when w = 1 or w = 2, where k is the number of term clusters; if k = l, where l is the total number of terms, then no term clustering is required, i.e.
MIl and WMIl.\n5.1 Single-document Segmentation\n5.1.1 Test Data and Evaluation\nThe first data set we tested is a synthetic one used in previous research [6, 15, 25] and many other papers.\nIt has 700 samples.\nEach is a concatenation of ten segments, and each segment consists of the first n sentences of a document selected randomly from the Brown corpus, so the segments are assumed to have different topics.\nCurrently, the best results on this data set are achieved by Ji et al. [15].\nTo compare the performance of our methods, the criterion used widely in previous research is applied, instead of the unbiased criterion introduced in [20].\nIt chooses a pair of words randomly.\nIf they are in different segments (diff) in the real segmentation (real) but predicted (pred) to be in the same segment, it is a miss.\nIf they are in the same segment (same) but predicted to be in different segments, it is a false alarm.\nThus, the error rate is computed using the following equation: p(err|real, pred) = p(miss|real, pred, diff) p(diff|real) + p(false alarm|real, pred, same) p(same|real).\n5.1.2 Experiment Results\nWe tested the case when the number of segments is known.\nTable 1 shows the results of our methods with different hyper-parameter values and three previous approaches, C99 [25], U00 [6], and ADDP03 [15], on this data set when the segment number is known.\nIn WMI for single-document segmentation, the term weights are computed as follows: w_t̂ = 1 − E_Ŝ(t̂) / max_{t̂′∈T̂} E_Ŝ(t̂′).\nFor this case, our methods MIl and WMIl both outperform all the previous approaches.\nWe compared our methods with ADDP03 using a one-sample one-sided t-test; p-values are shown in Table 2.\nFrom the p-values, we can see that most of the differences are very significant.\n\nTable 1: Average Error Rates of Single-document Segmentation Given Segment Numbers Known\nRange of n  | 3-11   | 3-5    | 6-8   | 9-11\nSample size | 400    | 100    | 100   | 100\nC99         | 12%    | 11%    | 10%   | 9%\nU00         | 10%    | 9%     | 7%    | 5%\nADDP03      | 6.0%   | 6.8%   | 5.2%  | 4.3%\nMIl         | 4.68%  | 5.57%  | 2.59% | 1.59%\nWMIl        | 4.94%  | 6.33%  | 2.76% | 1.62%\nMI100       | 9.62%  | 12.92% | 8.66% | 6.67%\n\nTable 2: Single-document Segmentation: P-values of T-test on Error Rates\nRange of n    | 3-11  | 3-5   | 6-8   | 9-11\nADDP03, MIl   | 0.000 | 0.000 | 0.000 | 0.000\nADDP03, WMIl  | 0.000 | 0.099 | 0.000 | 0.000\nMIl, WMIl     | 0.061 | 0.132 | 0.526 | 0.898\n\nWe also compared the error rates of our two methods using a two-sample two-sided t-test of the hypothesis that they are equal.\nWe cannot reject this hypothesis, so the differences are not significant, even though all the error rates for MIl are smaller than those for WMIl.\nThus, we conclude that term weights contribute little in single-document segmentation.\nThe results also show that MI with term co-clustering (k = 100) decreases the performance.\nWe tested different numbers of term clusters, and found that performance improves as the cluster number increases toward l.\n5.2 Shared Topic Detection\n5.2.1 Test Data and Evaluation\nTwo documents d and d′ are judged to share a topic if the proportion of their overlapping sentences on the same topics exceeds a threshold θ, where S_d is the set of sentences of d and |S_d| is the number of sentences of d; in that case, d and d′ have the shared topic.\nFor a pair of documents selected randomly, the error rate is computed using the following equation: p(err|real, pred) = p(miss|real, pred, same) p(same|real) + p(false alarm|real, pred, diff) p(diff|real), where a miss means the two documents have the same topic (same) in the real case (real) but are predicted (pred) to be on different topics, and a false alarm means they are on different topics (diff) but are predicted to be on the same topic.\n5.2.2 Experiment Results\nThe results are shown in Table 3.\nIf most documents have different topics, the estimation of term weights in Equation (3), used by WMIl, is not correct.\nThus, WMIl is not expected to outperform MIl when most documents have different topics.\nWhen there are fewer documents in a subset with the same number of topics, more documents have different topics, so WMIl performs worse than MIl.\nWe can see that in most cases MIl has a better (or at least similar)
performance than LDA.\nAfter shared topic detection, multi-document segmentation of the documents with shared topics can be executed.\n5.3 Multi-document Segmentation\n5.3.1 Test Data and Evaluation\nFor multi-document segmentation and alignment, our goal is to identify the segments about the same topic among multiple similar documents with shared topics.\nUsing I_w is expected to perform better than I, since without term weights the result is seriously affected by document-dependent stop words and noisy words, which depend on personal writing style.\nUnder the effect of document-dependent stop words and noisy words, the same segments of different documents are more likely to be treated as different segments.\nTerm weights can reduce this effect by giving cue terms more weight.\nThe data set for multi-document segmentation and alignment has 102 samples and 2264 sentences in total.\nEach is the introduction part of a lab report selected from the course Biol 240W at Pennsylvania State University.\nEach sample has two segments: an introduction to plant hormones and the content of the lab.\nSample lengths range from two to 56 sentences.\nSome samples have only one part, and some have these two segments in reverse order.\nIt is not hard for a human to identify the boundary between the two segments.\nWe labelled each sentence manually for evaluation.\nThe evaluation criterion is simply the proportion of sentences with wrongly predicted segment labels among all sentences in the whole training set, used as the error rate: p(error|predicted, real) = Σ_{d∈D} Σ_{s∈S_d} 1(predicted_s ≠ real_s) / Σ_{d∈D} n_d.\n\nTable 4: Average Error Rates of Multi-document Segmentation Given Segment Numbers Known\n#Doc | MIl    | WMIl   | k   | MIk    | WMIk\n102  | 3.14%  | 2.78%  | 300 | 4.68%  | 6.58%\n51   | 4.17%  | 3.63%  | 300 | 17.83% | 22.84%\n34   | 5.06%  | 4.12%  | 300 | 18.75% | 20.95%\n20   | 7.08%  | 5.42%  | 250 | 20.40% | 21.83%\n10   | 10.38% | 7.89%  | 250 | 21.42% | 21.91%\n5    | 15.77% | 11.64% | 250 | 21.89% | 22.59%\n2    | 25.90% | 23.18% | 50  | 25.44% | 25.49%\n1    | 23.90% | 24.82% | 25  | 25.75% | 26.15%\n\nTable 5: Multi-document Segmentation: P-values of T-test on Error Rates for MIl and WMIl\n#Doc    | 51   | 34    | 20    | 10    | 5     | 2\nP-value | 0.19 | 0.101 | 0.025 | 0.001 | 0.000 | 0.002\n\nIn order to show the benefits of multi-document segmentation and alignment, we compared our method with different parameters on different partitions of the same training set.\nExcept for the cases where the number of documents is 102 or one (the special cases of using the whole set and of pure single-document segmentation), we randomly divided the training set into m partitions of 51, 34, 20, 10, 5, or 2 document samples each.\nWe then applied our methods to each partition and calculated the error rate over the whole training set.\nEach case was repeated 10 times to compute the average error rates.\nFor different partitions of the training set, different k values are used, since the number of terms increases with the number of documents in each partition.\n5.3.2 Experiment Results\nFrom the experiment results in Table 4, we can make the following observations: (1) When the number of documents increases, all methods perform better.\nOnly from one to two documents does MIl decrease a little.\nWe can observe this in Figure 3 at document number = 2.\nMost curves even have their worst results at this point.\nThere are two reasons.\nFirst, samples vote for the best multi-document segmentation and alignment, but if only two documents are compared with each other, one with missing segments or a totally different sequence will affect the correct segmentation and alignment of the other.\nSecond, as noted at the beginning of this section, if two documents have more document-dependent stop words or noisy words than cue words, then the algorithm may view the same segment as two different segments and treat the other segment as missing.\nGenerally, we can only expect better performance when the
number of documents is larger than the number of segments.\n(2) Except for single-document segmentation, WMIl is always better than MIl, and when the number of documents approaches one or grows very large, their performances become closer.\nTable 5 shows p-values of a two-sample one-sided t-test between MIl and WMIl.\nWe can also see this trend from the p-values.\nAt document number = 5, we reach the smallest p-value and the largest difference between the error rates of MIl and WMIl.\nFor single-document segmentation, WMIl is even a little worse than MIl, similar to the results of single-document segmentation on the first data set.\nThe reason is that for single-document segmentation we cannot estimate term weights accurately, since multiple documents are unavailable.\n(3) Using term clustering usually gives worse results than MIl and WMIl.\n(4) Using term clustering in WMIk is even worse than in MIk, since in WMIk term clusters are found first using I before using I_w.\nIf the term clusters are not correct, the term weights are estimated poorly, which may mislead the algorithm to even worse results.\nFrom the results we also found that in multi-document segmentation and alignment, most documents with missing segments or a reverse order are identified correctly.\n\nTable 6: Multi-document Segmentation: Average Error Rate for Document Number = 5 in Each Subset with Different Numbers of Term Clusters\n#Cluster | 75     | 100    | 150    | 250    | l\nMIk      | 24.67% | 24.54% | 23.91% | 22.59% | 15.77%\n\nTable 6 illustrates the experiment results for the case of 20 partitions of the training set (each with five document samples) and topic segmentation and alignment using MIk with different numbers of term clusters k.
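As a minimal illustration (our own sketch; the function and variable names are not from the paper), the sentence-level error rate used in these multi-document experiments, p(error|predicted, real) = Σ_{d∈D} Σ_{s∈S_d} 1(predicted_s ≠ real_s) / Σ_{d∈D} n_d, can be computed as follows:

```python
def segmentation_error_rate(predicted, real):
    """Fraction of sentences whose predicted segment label differs from
    the manually assigned label, pooled over all documents.

    predicted, real: lists of per-document label sequences, i.e.
    predicted[d][s] is the segment label of sentence s in document d.
    """
    # Count sentence-level disagreements across all documents.
    mismatches = sum(
        p != r
        for pred_d, real_d in zip(predicted, real)
        for p, r in zip(pred_d, real_d)
    )
    # Normalize by the total number of sentences, sum over d of n_d.
    total_sentences = sum(len(labels_d) for labels_d in real)
    return mismatches / total_sentences

# Two documents, 7 sentences in total, 1 mislabeled sentence -> 1/7.
predicted = [[1, 1, 2, 2], [1, 2, 2]]
real = [[1, 1, 1, 2], [1, 2, 2]]
print(round(segmentation_error_rate(predicted, real), 4))  # 0.1429
```

Note that `zip` truncates silently, so `predicted` and `real` must have matching shapes for the count to be meaningful.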
Notice that when the number of term clusters increases, the error rate becomes smaller.\nWithout term clustering, we obtain the best result.\nWe do not show results for WMIk with term clustering, but they are similar.\nWe also tested WMIl with different hyper-parameters a and b that adjust the term weights.\nThe results are presented in Figure 3.\nThe default case WMIl: a = 1, b = 1 gave the best results for different partitions of the training set.\nWe can see the trend that when the document number is very small or very large, the difference between MIl: a = 0, b = 0 and WMIl: a = 1, b = 1 becomes quite small.\nWhen the document number is not large (roughly 2 to 10), all cases using term weights perform better than MIl: a = 0, b = 0 without term weights, but when the document number becomes larger, the cases WMIl: a = 1, b = 0 and WMIl: a = 2, b = 1 become worse than MIl: a = 0, b = 0.\nWhen the document number becomes very large, they are even worse than cases with small document numbers.\nThis means that a proper way to estimate term weights for the WMI criterion is very important.\nFigure 4 shows the term weights learned from the whole training set.\nFour types of words can be roughly categorized, even though the transitions among them are subtle.\nFigure 5 illustrates the change in (weighted) mutual information for MIl and WMIl.\nAs expected, mutual information for MIl increases monotonically with the number of steps, while that for WMIl does not.\nFinally, MIl and WMIl are scalable, with convergence times shown in Figure 6.\nOne advantage of our MI-based approach is that removing stop words is not required.\nAnother important advantage is that no hyper-parameters need to be adjusted.\nIn single-document segmentation, the performance based on MI is even better than that based on WMI, so no extra hyper-parameter is required.\nIn multi-document segmentation, our experiments show that a = 1 and b = 1 is the best.\n\nFigure 3: Error rates for different hyper parameters of term weights.\nFigure 4: Term weights learned from the whole training set (normalized document entropy vs. normalized segment entropy; regions: noisy words, cue words, common stop words, document-dependent stop words).\nFigure 5: Change in (weighted) MI for MIl and WMIl.\nFigure 6: Time to converge for MIl and WMIl.\n\nOur method gives more weight to cue terms.\nHowever, cue terms or sentences usually appear at the beginning of a segment, while the end of the segment may be quite noisy.\nOne possible solution is giving more weight to terms at the beginning of each segment.\nMoreover, when the lengths of segments are quite different, long segments have much higher term frequencies, so they may dominate the segmentation boundaries.\nNormalizing term frequencies by segment length may be useful.\n6.\nCONCLUSIONS AND FUTURE WORK\nWe proposed a novel method for multi-document topic segmentation and alignment based on weighted mutual information, which can also handle single-document cases.\nWe used dynamic programming to optimize our algorithm.\nOur approach outperforms all the previous methods on single-document cases.\nMoreover, we also showed that segmentation among multiple documents can improve the performance tremendously.\nOur results also illustrated that using weighted mutual information can exploit the information in multiple documents to reach better performance.\nWe only tested our method on limited data sets.\nMore data sets, especially complicated ones,
should be tested.\nMore previous methods should be compared against ours.\nMoreover, natural segmentations like paragraphs are hints that can be used to find the optimal boundaries.\nSupervised learning can also be considered.\n7.\nACKNOWLEDGMENTS\nThe authors want to thank Xiang Ji and Prof. J. Scott Payne for their help.\n8.\nREFERENCES\n[1] A. Banerjee, I. Dhillon, J. Ghosh, S. Merugu, and D. Modha.\nA generalized maximum entropy approach to Bregman co-clustering and matrix approximation.\nIn Proceedings of SIGKDD, 2004.\n[2] R. Bekkerman, R. El-Yaniv, and A. McCallum.\nMulti-way distributional clustering via pairwise interactions.\nIn Proceedings of ICML, 2005.\n[3] D. M. Blei and P. J. Moreno.\nTopic segmentation with an aspect hidden Markov model.\nIn Proceedings of SIGIR, 2001.\n[4] D. M. Blei, A. Ng, and M. Jordan.\nLatent Dirichlet allocation.\nJournal of Machine Learning Research, 3:993-1022, 2003.\n[5] T. Brants, F. Chen, and I. Tsochantaridis.\nTopic-based document segmentation with probabilistic latent semantic analysis.\nIn Proceedings of CIKM, 2002.\n[6] F. Choi.\nAdvances in domain independent linear text segmentation.\nIn Proceedings of NAACL, 2000.\n[7] H. Christensen, B. Kolluru, Y. Gotoh, and S. Renals.\nMaximum entropy segmentation of broadcast news.\nIn Proceedings of ICASSP, 2005.\n[8] T. Cover and J. Thomas.\nElements of Information Theory.\nJohn Wiley and Sons, New York, USA, 1991.\n[9] S. Deerwester, S. Dumais, G. Furnas, T. Landauer, and R. Harshman.\nIndexing by latent semantic analysis.\nJournal of the American Society for Information Science, 1990.\n[10] I. Dhillon, S. Mallela, and D. Modha.\nInformation-theoretic co-clustering.\nIn Proceedings of SIGKDD, 2003.\n[11] M. Hajime, H. Takeo, and O. Manabu.\nText segmentation with multiple surface linguistic cues.\nIn Proceedings of COLING-ACL, 1998.\n[12] T. K. Ho.\nStop word location and identification for adaptive text recognition.\nInternational Journal of Document Analysis and Recognition, 3(1), August 2000.\n[13] T. Hofmann.\nProbabilistic latent semantic analysis.\nIn Proceedings of UAI '99, 1999.\n[14] X. Ji and H. Zha.\nCorrelating summarization of a pair of multilingual documents.\nIn Proceedings of RIDE, 2003.\n[15] X. Ji and H. Zha.\nDomain-independent text segmentation using anisotropic diffusion and dynamic programming.\nIn Proceedings of SIGIR, 2003.\n[16] X. Ji and H. Zha.\nExtracting shared topics of multiple documents.\nIn Proceedings of the 7th PAKDD, 2003.\n[17] J. Lafferty, A. McCallum, and F. Pereira.\nConditional random fields: Probabilistic models for segmenting and labeling sequence data.\nIn Proceedings of ICML, 2001.\n[18] T. Li, S. Ma, and M. Ogihara.\nEntropy-based criterion in categorical clustering.\nIn Proceedings of ICML, 2004.\n[19] A. McCallum, D. Freitag, and F. Pereira.\nMaximum entropy Markov models for information extraction and segmentation.\nIn Proceedings of ICML, 2000.\n[20] L. Pevzner and M. Hearst.\nA critique and improvement of an evaluation metric for text segmentation.\nComputational Linguistics, 28(1):19-36, 2002.\n[21] J. C. Reynar.\nStatistical models for topic segmentation.\nIn Proceedings of ACL, 1999.\n[22] G. Salton and M. McGill.\nIntroduction to Modern Information Retrieval.\nMcGraw Hill, 1983.\n[23] B. Sun, Q. Tan, P. Mitra, and C. L. Giles.\nExtraction and search of chemical formulae in text documents on the web.\nIn Proceedings of WWW, 2007.\n[24] B. Sun, D. Zhou, H. Zha, and J. Yen.\nMulti-task text segmentation and alignment based on weighted mutual information.\nIn Proceedings of CIKM, 2006.\n[25] M. Utiyama and H. Isahara.\nA statistical model for domain-independent text segmentation.\nIn Proceedings of the 39th ACL, 2001.\n[26] C.
Wayne.\nMultilingual topic detection and tracking: Successful research enabled by corpora and evaluation.\nIn Proceedings of LREC, 2000.\n[27] J. Yamron, I. Carp, L. Gillick, S. Lowe, and P. van Mulbregt.\nA hidden Markov model approach to text segmentation and event tracking.\nIn Proceedings of ICASSP, 1998.\n[28] H. Zha and X. Ji.\nCorrelating multilingual documents via bipartite graph modeling.\nIn Proceedings of SIGIR, 2002.","lvl-3":"Topic Segmentation with Shared Topic Detection and Alignment of Multiple Documents\nABSTRACT\nTopic detection and tracking [26] and topic segmentation [15] play an important role in capturing the local and sequential information of documents.\nPrevious work in this area usually focuses on single documents, although similar multiple documents are available in many domains.\nIn this paper, we introduce a novel unsupervised method for shared topic detection and topic segmentation of multiple similar documents based on mutual information (MI) and weighted mutual information (WMI), a combination of MI and term weights.\nThe basic idea is that the optimal segmentation maximizes MI (or WMI).\nOur approach can detect shared topics among documents.\nIt can find the optimal boundaries in a document and align segments among documents at the same time.\nIt can also handle single-document segmentation as a special case of multi-document segmentation and alignment.\nOur methods can identify and strengthen cue terms that can be used for segmentation, and can partially remove stop words by using term weights based on entropy learned from multiple documents.\nOur experimental results show that our algorithm works well for the tasks of single-document segmentation, shared topic detection, and multi-document segmentation.\nUtilizing information from multiple documents can tremendously improve the performance of topic segmentation, and using WMI is even better than using MI for multi-document segmentation.\n1.\nINTRODUCTION\nMany
researchers have worked on topic detection and tracking (TDT) [26] and topic segmentation during the past decade.\nTopic segmentation aims to identify the boundaries in a document with the goal of capturing its latent topical structure.\nTopic segmentation tasks usually fall into two categories [15]: text stream segmentation, where topic transitions are identified, and coherent document segmentation, in which documents are split into sub-topics.\nThe former category has applications in automatic speech recognition, while the latter has more applications, such as partial-text query of long documents in information retrieval, text summarization, and quality measurement of multiple documents.\nPrevious research in connection with TDT falls into the former category, targeted at topic tracking of broadcast speech data and newswire text, while the latter category has not been studied very well.\nTraditional approaches perform topic segmentation on documents one at a time [15, 25, 6].\nMost of them perform badly on subtle tasks like coherent document segmentation [15].\nOften, end-users seek documents that have similar content.\nSearch engines, like Google, provide links to similar pages.\nAt a finer granularity, users may actually be looking for sections of a document similar to a particular section that presumably discusses a topic of the user's interest.\nThus, extending topic segmentation from single documents to identifying similar segments in multiple similar documents on the same topic is a natural and necessary direction, and multi-document topic segmentation is expected to perform better since more information is utilized.\nTraditional approaches using similarity measurement based on term frequency generally share the assumption that similar vocabulary tends to be in a coherent topic segment [15, 25, 6].\nHowever, they usually suffer from the issue of identifying stop words.\nFor example, additional document-dependent stop words are removed together with the generic stop words in [15].\nThere are two reasons why we do not remove stop words directly.\nFirst, identifying stop words is another issue [12] that requires estimation in each domain.\nRemoving common stop words may result in the loss of useful information in a specific domain.\nSecond, even though stop words can be identified, a hard classification into stop words and non-stop words cannot represent the gradually changing amount of information content of each word.\nWe employ a soft classification using term weights.\nIn this paper, we view the problem of topic segmentation as an optimization issue using information-theoretic techniques: find the optimal boundaries of a document, given the number of text segments, so as to minimize the loss of mutual information (MI) (or a weighted mutual information (WMI)) after segmentation and alignment.\nThis is equivalent to maximizing the MI (or WMI).\nThe MI focuses on measuring the difference among segments, whereas previous research focused on finding the similarity (e.g., cosine distance) of segments [15, 25, 6].\nTopic alignment of multiple similar documents can be achieved by clustering sentences on the same topic into the same cluster.\nSingle-document topic segmentation is just a special case of the multi-document topic segmentation and alignment problem.\nTerms can be co-clustered at the same time, as in [10], given the number of clusters, but our experimental results show that this results in a worse segmentation (see Tables 1, 4, and 6).\nUsually, human readers can identify topic transitions based on cue words, and can ignore stop words.\nInspired by this, we give each term (or term cluster) a weight based on entropy among different documents and different segments of documents.\nNot only can this approach increase the contribution of cue words, but it can also decrease the effect of common stop words, noisy words, and document-dependent stop words.\nThese words are common in a document.\nMany methods based on sentence similarity require that these words be removed before topic segmentation can be performed [15].\nOur results in Figure 3 show that term weights are useful for multi-document topic segmentation and alignment.\nThe major contribution of this paper is that it introduces a novel method for topic segmentation using MI and shows that this method performs better than previously used criteria.\nAlso, we have addressed the problem of topic segmentation and alignment across multiple documents, whereas most existing research focused on segmentation of single documents.\nMulti-document segmentation and alignment can utilize information from similar documents and greatly improves the performance of topic segmentation.\nObviously, our approach can handle single documents as a special case when multiple documents are unavailable.\nIt can detect shared topics among documents to judge whether they are multiple documents on the same topic.\nWe also introduce the new criterion of WMI based on term weights learned from multiple similar documents, which can improve the performance of topic segmentation further.\nWe propose an iterative greedy algorithm based on dynamic programming and show that it works well in practice.\nSome of our prior work is in [24].\nThe rest of this paper is organized as follows: In Section 2, we review related work.\nSection 3 contains a formulation of the problem of topic segmentation and alignment of multiple documents with term co-clustering, a review of the criterion of MI for clustering, and finally an introduction to WMI.\nIn Section 4, we first propose the iterative greedy algorithm of topic segmentation and alignment with term co-clustering, and then describe how the algorithm can be optimized by using dynamic programming.\nFigure 1: Illustration of multi-document segmentation and alignment.\nIn Section 5, experiments on single-document segmentation, shared topic detection, and multi-document segmentation are described, and results are presented and discussed to evaluate the performance of our algorithm.\nConclusions and some future directions of the research work are discussed in Section 6.\n2.\nPREVIOUS WORK\nGenerally, the existing approaches to text segmentation fall into two categories: supervised learning [19, 17, 23] and unsupervised learning [3, 27, 5, 6, 15, 25, 21].\nSupervised learning usually has good performance, since it learns functions from labelled training sets.\nHowever, getting large training sets with manual labels on document sentences is often prohibitively expensive, so unsupervised approaches are desired.\nSome models consider dependence between sentences and sections, such as Hidden Markov Models [3, 27], Maximum Entropy Markov Models [19], and Conditional Random Fields [17], while many other approaches are based on lexical cohesion or similarity of sentences [5, 6, 15, 25, 21].\nSome approaches also focus on cue words as hints of topic transitions [11].\nWhile some existing methods only consider information in single documents [6,
15], others utilize multiple documents [16, 14].\nThere are not many works in the latter category, even though the performance of segmentation is expected to be better when information from multiple documents is utilized.\nPrevious research studied methods to find shared topics [16] and topic segmentation and summarization between just a pair of documents [14].\nText classification and clustering is a related research area, which categorizes documents into groups using supervised or unsupervised methods.\nTopical classification or clustering is an important direction in this area, especially co-clustering of documents and terms, such as LSA [9], PLSA [13], and approaches based on distances and bipartite graph partitioning [28], maximum MI [2, 10], or maximum entropy [1, 18].\nThe criteria of these approaches can be utilized for topic segmentation.\nSome of those methods have been extended to topic segmentation, such as PLSA [5] and maximum entropy [7], but to the best of our knowledge, using MI for topic segmentation has not been studied.\n3.\nPROBLEM FORMULATION\n4.\nMETHODOLOGY\n4.1 Mutual Information\n4.2 Weighted Mutual Information\n4.3 Iterative Greedy Algorithm\n4.3.1 Initialization\n4.3.2 Different Cases\n4.4 Algorithm Optimization\n5.\nEXPERIMENTS\n5.1 Single-document Segmentation\n5.1.1 Test Data and Evaluation\n5.1.2 Experiment Results\n5.2 Shared Topic Detection\n5.2.1 Test Data and Evaluation\n5.2.2 Experiment Results\n5.3 Multi-document Segmentation\n5.3.1 Test Data and Evaluation\n5.3.2 Experiment Results\n6.\nCONCLUSIONS AND FUTURE WORK\nWe proposed a novel method for multi-document topic segmentation and alignment based on weighted mutual information, which can also handle single-document cases.\nWe used dynamic programming to optimize our algorithm.\nOur approach outperforms all the previous methods on single-document cases.\nMoreover, we also showed that doing segmentation
among multiple documents can improve the performance tremendously.\nOur results also illustrated that using weighted mutual information can utilize the information of multiple documents to reach a better performance.\nWe only tested our method on limited data sets.\nMore data sets especially complicated ones should be tested.\nMore previous methods should be compared with.\nMoreover, natural segmentations like paragraphs are hints that can be used to find the optimal boundaries.\nSupervised learning also can be considered.","lvl-4":"Topic Segmentation with Shared Topic Detection and Alignment of Multiple Documents\nABSTRACT\nTopic detection and tracking [26] and topic segmentation [15] play an important role in capturing the local and sequential information of documents.\nPrevious work in this area usually focuses on single documents, although similar multiple documents are available in many domains.\nIn this paper, we introduce a novel unsupervised method for shared topic detection and topic segmentation of multiple similar documents based on mutual information (MI) and weighted mutual information (WMI) that is a combination of MI and term weights.\nThe basic idea is that the optimal segmentation maximizes MI (or WMI).\nOur approach can detect shared topics among documents.\nIt can find the optimal boundaries in a document, and align segments among documents at the same time.\nIt also can handle single-document segmentation as a special case of the multi-document segmentation and alignment.\nOur methods can identify and strengthen cue terms that can be used for segmentation and partially remove stop words by using term weights based on entropy learned from multiple documents.\nOur experimental results show that our algorithm works well for the tasks of single-document segmentation, shared topic detection, and multi-document segmentation.\nUtilizing information from multiple documents can tremendously improve the performance of topic segmentation, and using WMI is 
even better than using MI for the multi-document segmentation.\n1.\nINTRODUCTION\nMany researchers have worked on topic detection and tracking (TDT) [26] and topic segmentation during the past decade.\nTopic segmentation aims to identify the boundaries in a document in order to capture its latent topical structure.\nTopic segmentation tasks usually fall into two categories [15]: text stream segmentation, where topic transitions are identified, and coherent document segmentation, in which documents are split into sub-topics.\nTraditional approaches perform topic segmentation on documents one at a time [15, 25, 6].\nMost of them perform badly in subtle tasks like coherent document segmentation [15].\nOften, end-users seek documents that have similar content.\nAt a finer granularity, users may actually be looking to obtain sections of a document similar to a particular section that presumably discusses a topic of the user's interest.\nThus, the extension of topic segmentation from single documents to identifying similar segments from multiple similar documents with the same topic is a natural and necessary direction, and multi-document topic segmentation is expected to perform better since more information is utilized.\nTraditional approaches using similarity measurement based on term frequency generally share the assumption that similar vocabulary tends to be in a coherent topic segment [15, 25, 6].\nHowever, they usually suffer from the issue of identifying stop words.\nFor example, additional document-dependent stop words are removed together with the generic stop words in [15].\nThere are two reasons why we do not remove stop words directly.\nFirst, identifying stop words is another issue [12] that requires estimation in each domain.\nRemoving common stop words may result in the loss of useful information in a specific domain.\nWe employ a soft classification using term weights.\nTopic alignment of multiple similar documents can be achieved by 
clustering sentences on the same topic into the same cluster.\nSingle-document topic segmentation is just a special case of the multi-document topic segmentation and alignment problem.\nUsually, human readers can identify topic transitions based on cue words, and can ignore stop words.\nInspired by this, we give each term (or term cluster) a weight based on entropy among different documents and different segments of documents.\nNot only can this approach increase the contribution of cue words, but it can also decrease the effect of common stop words, noisy words, and document-dependent stop words.\nSuch words are common within a document.\nMany methods based on sentence similarity require that these words are removed before topic segmentation can be performed [15].\nOur results in Figure 3 show that term weights are useful for multi-document topic segmentation and alignment.\nThe major contribution of this paper is that it introduces a novel method for topic segmentation using MI and shows that this method performs better than previously used criteria.\nAlso, we have addressed the problem of topic segmentation and alignment across multiple documents, whereas most existing research focused on segmentation of single documents.\nMulti-document segmentation and alignment can utilize information from similar documents and thus greatly improve the performance of topic segmentation.\nObviously, our approach can handle single documents as a special case when multiple documents are unavailable.\nIt can detect shared topics among documents to judge if they are multiple documents on the same topic.\nWe also introduce the new criterion of WMI based on term weights learned from multiple similar documents, which can further improve the performance of topic segmentation.\nWe propose an iterative greedy algorithm based on dynamic programming and show that it works well in practice.\nSome of our prior work is in [24].\nThe rest of this paper is organized as follows: In Section 2, we review 
related work.\nSection 3 contains a formulation of the problem of topic segmentation and alignment of multiple documents with term co-clustering, a review of the criterion of MI for clustering, and finally an introduction to WMI.\nIn Section 4, we first propose the iterative greedy algorithm of topic segmentation and alignment with term co-clustering, and then describe how the algorithm can be optimized by using dynamic programming.\nFigure 1: Illustration of multi-document segmentation and alignment.\nIn Section 5, experiments about single-document segmentation, shared topic detection, and multi-document segmentation are described, and results are presented and discussed to evaluate the performance of our algorithm.\nConclusions and some future directions of the research work are discussed in Section 6.\n2.\nPREVIOUS WORK\nSupervised learning usually has good performance, since it learns functions from labelled training sets.\nHowever, getting large training sets with manual labels on document sentences is often prohibitively expensive, so unsupervised approaches are desired.\nSome approaches also focus on cue words as hints of topic transitions [11].\nWhile some existing methods only consider information in single documents [6, 15], others utilize multiple documents [16, 14].\nThere is not much work in the latter category, even though segmentation performance is expected to improve when information from multiple documents is utilized.\nPrevious research studied methods to find shared topics [16] and topic segmentation and summarization between just a pair of documents [14].\nText classification and clustering is a related research area which categorizes documents into groups using supervised or unsupervised methods.\nThe criteria of these approaches can be utilized for topic segmentation.\nSome of those methods have been extended into the area of topic segmentation, such as PLSA [5] and maximum entropy [7], but to the best of our knowledge, using 
MI for topic segmentation has not been studied.\n6.\nCONCLUSIONS AND FUTURE WORK\nWe proposed a novel method for multi-document topic segmentation and alignment based on weighted mutual information, which can also handle single-document cases.\nWe used dynamic programming to optimize our algorithm.\nOur approach outperforms all the previous methods on single-document cases.\nMoreover, we also showed that doing segmentation among multiple documents can improve the performance tremendously.\nOur results also illustrated that using weighted mutual information can exploit the information in multiple documents to reach a better performance.\nWe have only tested our method on limited data sets; more data sets, especially complicated ones, should be tested, and more previous methods should be compared against.\nMoreover, natural segmentations such as paragraphs are hints that can be used to find the optimal boundaries.\nSupervised learning can also be considered.","lvl-2":"Topic Segmentation with Shared Topic Detection and Alignment of Multiple Documents\nABSTRACT\nTopic detection and tracking [26] and topic segmentation [15] play an important role in capturing the local and sequential information of documents.\nPrevious work in this area usually focuses on single documents, although similar multiple documents are available in many domains.\nIn this paper, we introduce a novel unsupervised method for shared topic detection and topic segmentation of multiple similar documents based on mutual information (MI) and weighted mutual information (WMI), which is a combination of MI and term weights.\nThe basic idea is that the optimal segmentation maximizes MI (or WMI).\nOur approach can detect shared topics among documents.\nIt can find the optimal boundaries in a document, and align segments among documents at the same time.\nIt can also handle single-document segmentation as a special case of multi-document segmentation and alignment.\nOur methods can identify and strengthen cue terms that 
can be used for segmentation and partially remove stop words by using term weights based on entropy learned from multiple documents.\nOur experimental results show that our algorithm works well for the tasks of single-document segmentation, shared topic detection, and multi-document segmentation.\nUtilizing information from multiple documents can tremendously improve the performance of topic segmentation, and using WMI is even better than using MI for the multi-document segmentation.\n1.\nINTRODUCTION\nMany researchers have worked on topic detection and tracking (TDT) [26] and topic segmentation during the past decade.\nTopic segmentation aims to identify the boundaries in a document in order to capture its latent topical structure.\nTopic segmentation tasks usually fall into two categories [15]: text stream segmentation, where topic transitions are identified, and coherent document segmentation, in which documents are split into sub-topics.\nThe former category has applications in automatic speech recognition, while the latter one has more applications, such as partial-text queries of long documents in information retrieval, text summarization, and quality measurement of multiple documents.\nPrevious research in connection with TDT falls into the former category, targeting topic tracking of broadcast speech data and newswire text, while the latter category has not been studied very well.\nTraditional approaches perform topic segmentation on documents one at a time [15, 25, 6].\nMost of them perform badly in subtle tasks like coherent document segmentation [15].\nOften, end-users seek documents that have similar content.\nSearch engines, like Google, provide links to obtain similar pages.\nAt a finer granularity, users may actually be looking to obtain sections of a document similar to a particular section that presumably discusses a topic of the user's interest.\nThus, the extension of topic segmentation from single documents to identifying similar segments from 
multiple similar documents with the same topic is a natural and necessary direction, and multi-document topic segmentation is expected to perform better since more information is utilized.\nTraditional approaches using similarity measurement based on term frequency generally share the assumption that similar vocabulary tends to be in a coherent topic segment [15, 25, 6].\nHowever, they usually suffer from the issue of identifying stop words.\nFor example, additional document-dependent stop words are removed together with the generic stop words in [15].\nThere are two reasons why we do not remove stop words directly.\nFirst, identifying stop words is another issue [12] that requires estimation in each domain.\nRemoving common stop words may result in the loss of useful information in a specific domain.\nSecond, even though stop words can be identified, a hard classification into stop words and non-stop words cannot represent the gradually changing amount of information content of each word.\nWe employ a soft classification using term weights.\nIn this paper, we view the problem of topic segmentation as an optimization issue using information-theoretic techniques to find the optimal boundaries of a document given the number of text segments, so as to minimize the loss of mutual information (MI) (or a weighted mutual information (WMI)) after segmentation and alignment.\nThis is equivalent to maximizing the MI (or WMI).\nThe MI focuses on measuring the difference among segments, whereas previous research focused on finding the similarity (e.g. 
cosine distance) of segments [15, 25, 6].\nTopic alignment of multiple similar documents can be achieved by clustering sentences on the same topic into the same cluster.\nSingle-document topic segmentation is just a special case of the multi-document topic segmentation and alignment problem.\nTerms can be co-clustered as in [10] at the same time, given the number of clusters, but our experimental results show that this method results in a worse segmentation (see Tables 1, 4, and 6).\nUsually, human readers can identify topic transitions based on cue words, and can ignore stop words.\nInspired by this, we give each term (or term cluster) a weight based on entropy among different documents and different segments of documents.\nNot only can this approach increase the contribution of cue words, but it can also decrease the effect of common stop words, noisy words, and document-dependent stop words.\nSuch words are common within a document.\nMany methods based on sentence similarity require that these words are removed before topic segmentation can be performed [15].\nOur results in Figure 3 show that term weights are useful for multi-document topic segmentation and alignment.\nThe major contribution of this paper is that it introduces a novel method for topic segmentation using MI and shows that this method performs better than previously used criteria.\nAlso, we have addressed the problem of topic segmentation and alignment across multiple documents, whereas most existing research focused on segmentation of single documents.\nMulti-document segmentation and alignment can utilize information from similar documents and thus greatly improve the performance of topic segmentation.\nObviously, our approach can handle single documents as a special case when multiple documents are unavailable.\nIt can detect shared topics among documents to judge if they are multiple documents on the same topic.\nWe also introduce the new criterion of WMI based on term weights learned from multiple 
similar documents, which can further improve the performance of topic segmentation.\nWe propose an iterative greedy algorithm based on dynamic programming and show that it works well in practice.\nSome of our prior work is in [24].\nThe rest of this paper is organized as follows: In Section 2, we review related work.\nSection 3 contains a formulation of the problem of topic segmentation and alignment of multiple documents with term co-clustering, a review of the criterion of MI for clustering, and finally an introduction to WMI.\nIn Section 4, we first propose the iterative greedy algorithm of topic segmentation and alignment with term co-clustering, and then describe how the algorithm can be optimized by using dynamic programming.\nFigure 1: Illustration of multi-document segmentation and alignment.\nIn Section 5, experiments about single-document segmentation, shared topic detection, and multi-document segmentation are described, and results are presented and discussed to evaluate the performance of our algorithm.\nConclusions and some future directions of the research work are discussed in Section 6.\n2.\nPREVIOUS WORK\nGenerally, the existing approaches to text segmentation fall into two categories: supervised learning [19, 17, 23] and unsupervised learning [3, 27, 5, 6, 15, 25, 21].\nSupervised learning usually has good performance, since it learns functions from labelled training sets.\nHowever, getting large training sets with manual labels on document sentences is often prohibitively expensive, so unsupervised approaches are desired.\nSome models consider dependence between sentences and sections, such as Hidden Markov Models [3, 27], Maximum Entropy Markov Models [19], and Conditional Random Fields [17], while many other approaches are based on lexical cohesion or similarity of sentences [5, 6, 15, 25, 21].\nSome approaches also focus on cue words as hints of topic transitions [11].\nWhile some existing methods only consider information in single documents [6, 
15], others utilize multiple documents [16, 14].\nThere is not much work in the latter category, even though segmentation performance is expected to improve when information from multiple documents is utilized.\nPrevious research studied methods to find shared topics [16] and topic segmentation and summarization between just a pair of documents [14].\nText classification and clustering is a related research area which categorizes documents into groups using supervised or unsupervised methods.\nTopical classification or clustering is an important direction in this area, especially co-clustering of documents and terms, such as LSA [9], PLSA [13], and approaches based on distances and bipartite graph partitioning [28], maximum MI [2, 10], or maximum entropy [1, 18].\nThe criteria of these approaches can be utilized for topic segmentation.\nSome of those methods have been extended into the area of topic segmentation, such as PLSA [5] and maximum entropy [7], but to the best of our knowledge, using MI for topic segmentation has not been studied.\n3.\nPROBLEM FORMULATION\nOur goal is to segment documents and align the segments across documents (Figure 1).\nLet T be the set of terms {t1, t2,..., tl}, which appear in the unlabelled set of documents D = {d1, d2,..., dm}.\nLet Sd be the set of sentences for document d \u2208 D, i.e. {s1, s2,..., snd}.\nWe have a 3D matrix of term frequency, in which the three dimensions are the random variables D, Sd, and T. Sd is actually a random vector including a random variable for each d \u2208 D.\nThe term frequency can be used to estimate the joint probability distribution P (D, Sd, T), which is p (t, d, s) = T (t, d, s) \/ ND, where T (t, d, s) is the number of occurrences of t in sentence s of d and ND is the total number of terms in D. 
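As a concrete illustration, the estimate p (t, d, s) = T (t, d, s) / ND can be computed directly from token counts. This sketch is ours, not the authors' code, and all names and the toy corpus are illustrative:

```python
from collections import Counter

def joint_distribution(docs):
    """Estimate p(t, d, s) = T(t, d, s) / N_D from tokenized documents.

    docs: list of documents; each document is a list of sentences;
    each sentence is a list of term strings.
    Returns a dict mapping (term, doc_index, sent_index) -> probability.
    """
    counts = Counter()
    for d, sentences in enumerate(docs):
        for s, sentence in enumerate(sentences):
            for t in sentence:
                counts[(t, d, s)] += 1  # T(t, d, s)
    n_total = sum(counts.values())      # N_D: total number of terms in D
    return {key: c / n_total for key, c in counts.items()}

# two toy documents: d0 has two sentences, d1 has one
docs = [[["a", "b"], ["b"]], [["a"]]]
p = joint_distribution(docs)
# the estimated joint distribution sums to 1 over all (t, d, s) triples
assert abs(sum(p.values()) - 1.0) < 1e-12
```

Marginalizing this dictionary over documents and sentences yields the marginal term and segment distributions used later in the MI computation.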
S\" represents the set of segments {\"s1, s \"2,..., \"sp} after segmentation and alignment among multiple documents, where the number of segments | \"S | = p.\nA segment \"si of document d is a sequence of adjacent sentences in d.\nSince for different documents si may discuss different sub-topics, our goal is to cluster adjacent sentences in each document into segments, and align similar segments among documents, so that for different documents \"si is about the same sub-topic.\nThe goal is to find the optimal topic segmentation and alignment mapping\nand Alid (\"s ~ i): {\"s ~ 1, \"s ~ 2,..., \"s ~ p} \u2192 {\"s1, \"s2,..., \"sp}, for all d \u2208 D, where \"si is ith segment with the constraint that only adjacent sentences can be mapped to the same segment, i.e. for d, {si, si +1,..., sj} \u2192 {\"s ~ q}, where q \u2208 {1,..., p}, where p is the segment number, and if i> j, then for d, \"sq is missing.\nAfter segmentation and alignment, random vector Sd becomes an aligned random variable \"S. 
Thus, P (D, Sd, T) becomes P (D, \u02c6S, T).\nTerm co-clustering is a technique that has been employed [10] to improve the accuracy of document clustering.\nWe evaluate its effect on topic segmentation.\nA term t is mapped to exactly one term cluster.\nTerm co-clustering involves simultaneously finding the optimal term clustering mapping Clu (t): {t1, t2,..., tl} \u2192 {\u02c6t1, \u02c6t2,..., \u02c6tk}, where k \u2264 l, l is the total number of words in all the documents, and k is the number of clusters.\n4.\nMETHODOLOGY\nWe now describe a novel algorithm which can handle single-document segmentation, shared topic detection, and multi-document segmentation and alignment based on MI or WMI.\n4.1 Mutual Information\nMI I (X; Y) is a quantity that measures the amount of information contained in two or more random variables [8, 10].\nFor the case of two random variables, we have I (X; Y) = \u2211 x \u2211 y p (x, y) log (p (x, y) \/ (p (x) p (y))).\nObviously, when random variables X and Y are independent, I (X; Y) = 0.\nThus, intuitively, the value of MI depends on how random variables are dependent on each other.\nThe optimal co-clustering is the mapping Clux: X \u2192 \u02c6X and Cluy: Y \u2192 \u02c6Y that minimizes the loss: I (X; Y) \u2212 I (\u02c6X; \u02c6Y), which is equivalent to maximizing I (\u02c6X; \u02c6Y).\nThis is the criterion of MI for clustering.\nIn the case of topic segmentation, the two random variables are the term variable T and the segment variable S, and each sample is an occurrence of a term T = t in a particular segment S = s. I (T; S) is used to measure how dependent T and S are.\nHowever, I (T; S) cannot be computed for documents before segmentation, since we do not have a set of S due to the fact that the sentences of Document d, si \u2208 Sd, are not aligned with other documents.\nThus, instead of minimizing the loss of MI, we can maximize MI after topic segmentation and alignment: I (T\u02c6; \u02c6S) = \u2211 \u02c6t \u2211 \u02c6s p (\u02c6t, \u02c6s) log (p (\u02c6t, \u02c6s) \/ (p (\u02c6t) p (\u02c6s))), where p (\u02c6t, \u02c6s) are estimated by the term frequency tf of Term Cluster \u02c6t and Segment \u02c6s in the training set D. 
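The MI criterion above can be computed directly from a term-by-segment contingency matrix of frequencies. A minimal self-contained sketch of ours (function name and toy matrices are illustrative):

```python
import math

def mutual_information(counts):
    """I(T; S) from a term-by-segment count matrix.

    counts[t][s] = frequency of term (cluster) t in segment s.
    p(t, s) is estimated as counts[t][s] / total.
    """
    total = sum(sum(row) for row in counts)
    p_t = [sum(row) / total for row in counts]                      # marginal p(t)
    p_s = [sum(counts[t][s] for t in range(len(counts))) / total
           for s in range(len(counts[0]))]                          # marginal p(s)
    mi = 0.0
    for t, row in enumerate(counts):
        for s, c in enumerate(row):
            if c > 0:
                p_ts = c / total
                mi += p_ts * math.log(p_ts / (p_t[t] * p_s[s]))
    return mi

# perfectly dependent: each term occurs in exactly one segment -> I = log 2
assert abs(mutual_information([[2, 0], [0, 2]]) - math.log(2)) < 1e-12
# independent term and segment variables -> I = 0
assert abs(mutual_information([[1, 1], [1, 1]])) < 1e-12
```

Maximizing this quantity over candidate boundary placements is exactly the segmentation criterion: boundaries that concentrate each term in few segments raise I (T̂; Ŝ).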
Note that here a segment \u02c6s includes sentences about the same topic among all documents.\nThe optimal solution is the mapping Clut: T \u2192 T\u02c6, Segd: Sd \u2192 \u02c6S ~, and Alid: \u02c6S ~ \u2192 \u02c6S, which maximizes I (T\u02c6; \u02c6S).\n4.2 Weighted Mutual Information\nIn topic segmentation and alignment of multiple documents, if P (D, \u02c6S, T) is known, based on the marginal distributions P (D | T) and P (\u02c6S | T) for each term t \u2208 T, we can categorize terms into four types in the data set:\n\u2022 Common stop words are common along both the dimension of documents and the dimension of segments.\n\u2022 Document-dependent stop words, which depend on the personal writing style, are common only along the dimension of segments for some documents.\n\u2022 Cue words are the most important elements for segmentation.\nThey are common along the dimension of documents only for the same segment, and they are not common along the dimension of segments.\n\u2022 Noisy words are other words which are not common along either dimension.\nEntropy based on P (D | T) and P (\u02c6S | T) can be used to identify the different types of terms.\nTo reinforce the contribution of cue words in the MI computation, and simultaneously reduce the effect of the other three types of words, similar to the idea of the tf-idf weight [22], we use the entropies of each term along the dimensions of document D and segment \u02c6S, i.e. 
ED (\u02c6t) and E\u02c6S (\u02c6t), to compute the weight.\nA cue word usually has a large value of ED (\u02c6t) but a small value of E\u02c6S (\u02c6t).\nWe introduce term weights (or term cluster weights) w\u02c6t = (ED (\u02c6t) \/ max\u02c6t ~ \u2208 T\u02c6 (ED (\u02c6t ~))) ^ a \u00b7 (1 \u2212 E\u02c6S (\u02c6t) \/ max\u02c6t ~ \u2208 T\u02c6 (E\u02c6S (\u02c6t ~))) ^ b, where a and b are powers to adjust term weights.\nUsually a = 1 and b = 1 as default, and max\u02c6t ~ \u2208 T\u02c6 (ED (\u02c6t ~)) and max\u02c6t ~ \u2208 T\u02c6 (E\u02c6S (\u02c6t ~)) are used to normalize the entropy values.\nTerm cluster weights are used to adjust p (\u02c6t, \u02c6s) into pw (\u02c6t, \u02c6s) = w\u02c6t p (\u02c6t, \u02c6s) \/ \u2211 \u02c6t ~ \u2211 \u02c6s ~ w\u02c6t ~ p (\u02c6t ~, \u02c6s ~), where pw (\u02c6t) and pw (\u02c6s) are marginal distributions of pw (\u02c6t, \u02c6s).\nHowever, since we do not know either the term weights or P (D, \u02c6S, T), we need to estimate them, but w\u02c6t depends on p (\u02c6s | t) and \u02c6S, while \u02c6S and p (\u02c6s | t) also depend on w\u02c6t, which is still unknown.\nThus, an iterative algorithm is required to estimate the term weights w\u02c6t and find the best segmentation and alignment to optimize the objective function Iw concurrently.\nFigure 2: Algorithm: Topic segmentation and alignment based on MI or WMI.\nAfter a document is segmented into sentences and each sentence is segmented into words, each word is stemmed.\nThen the joint probability distribution P (D, Sd, T) can be estimated.\nFinally, this distribution can be used to compute MI in our algorithm.\n4.3 Iterative Greedy Algorithm\nOur goal is to maximize the objective function, I (T\u02c6; \u02c6S) or Iw (T\u02c6; \u02c6S), which can measure the dependence of term occurrences in different segments.\nGenerally, at first we do not know the term weights, which depend on the optimal topic segmentation and alignment, and the term clusters.\nMoreover, this problem is NP-hard [10], even if we know the term weights.\nThus, an iterative greedy algorithm is desired to find the best solution, even though probably only local maxima are reached.\nWe present the iterative greedy algorithm in Figure 2 to find a local maximum of I (T\u02c6; \u02c6S) or Iw (T\u02c6; \u02c6S) with simultaneous term weight estimation.\nThis algorithm is iterative and greedy for multi-document cases or single-document cases with term weight estimation and\/or term co-clustering.\nOtherwise, since it is just a one-step algorithm to solve the task of single-document segmentation [6, 15, 25], the global maximum of MI is guaranteed.\nWe will show later that term co-clustering reduces the accuracy of the results and is not necessary, and for single-document segmentation, term weights are also not required.\n4.3.1 Initialization\nIn Step 0, the initial term clustering Clut and topic segmentation and alignment Segd and Alid are important to avoid local maxima and reduce the number of iterations.\nFirst, a good guess of the term weights can be made by using the distributions of term frequency along sentences for each document and averaging them to get the initial values of w\u02c6t, where Dt is the set of documents which contain Term t.\nThen, for the initial segmentation Seg (0), we can simply segment documents equally by sentences.\nOr we can find the optimal segmentation Seg (0) just for each document d which maximizes the WMI, assuming that the order of segments for each d is the same.\nFor the initial term clustering Clu (0), first cluster labels can be set randomly, and after the first time of Step 3, a good initial term clustering is obtained.\n4.3.2 Different Cases\nAfter initialization, there are three stages for different cases.\nIn total there are eight cases: | D | = 1 or | D |> 1, k = l or k < l, and w = 0 or w = 1.\nFor the case (| D |> 1, k = l, w = 0), only iteration of Steps 2.2 and 2.3 is required.\nAt Stage 1, the global maximum can be found based on I (T\u02c6; \u02c6S) using dynamic programming in Section 4.4.\nSimultaneously finding a good term clustering and estimating term weights is impossible, since when moving a term to a new term cluster to maximize Iw (T\u02c6; \u02c6S), we do not know whether the weight of this term should be that of the new cluster or the old cluster.\nThus, we first do term clustering at Stage 2, and then estimate term weights 
at Stage 3.\nAt Stage 2, Step 2.1 is to find the best term clustering and Step 2.2 is to find the best segmentation.\nThis cycle is repeated to find a local maximum based on MI I until it converges.\nThe two steps are: (1) based on the current term clustering Clu\u02c6t, for each document d, the algorithm segments all the sentences Sd into p segments sequentially (some segments may be empty), and puts them into the p segments S\u02c6 of the whole training set D (all possible cases of different segmentation Segd and alignment Alid are checked) to find the optimal case, and (2) based on the current segmentation Segd and alignment Alid, for each term t, the algorithm finds the best term cluster of t.\nAfter finding a good term clustering, term weights are estimated if w = 1.\nAt Stage 3, similar to Stage 2, Step 3.1 is term weight re-estimation and Step 3.2 is to find a better segmentation.\nThey are repeated to find a local maximum based on WMI Iw until it converges.\nHowever, if the term clustering in Stage 2 is not accurate, then the term weight estimation at Stage 3 may have a bad result.\nFinally, at Step 3.3, this algorithm converges and returns the output.\nThis algorithm can handle both single-document and multi-document segmentation.\nIt can also detect shared topics among documents by checking the proportion of overlapped sentences on the same topics, as described in Section 5.2.\n4.4 Algorithm Optimization\nIn many previous works on segmentation, dynamic programming is a technique used to maximize the objective function.\nSimilarly, at Steps 1, 2.2, and 3.2 of our algorithm, we can use dynamic programming.\nFor Stage 1, using dynamic programming can still find the global optimum, but for Stage 2 and Stage 3, we can only find the optimum for each step of topic segmentation and alignment of a document.\nHere we only show the dynamic programming for Step 3.2 using WMI (Steps 1 and 2.2 are similar but they can use either I 
or Iw).\nThere are two cases that are not shown in the algorithm in Figure 2: (a) single-document segmentation or multi-document segmentation with the same sequential order of segments, where alignment is not required, and (b) multi-document segmentation with different sequential orders of segments, where alignment is necessary.\nThe alignment mapping function of the former case is simply Alid (\u02c6s ~ i) = \u02c6si, while for the latter case the alignment mapping function is Alid (\u02c6s ~ i) = \u02c6sj, where i and j may be different.\nThe computational steps for the two cases are listed below: Case 1 (no alignment): For each document d: (1) Compute pw (\u02c6t), partial pw (\u02c6t, \u02c6s), and partial pw (\u02c6s) without counting sentences from d.\nThen put sentences from i to j into Segment k, and compute the partial WMI PIw (T\u02c6; \u02c6sk (si, si +1,..., sj)), where Alid (si, si +1,..., sj) = k, k \u2208 {1, 2,..., p}, and 1 \u2264 i \u2264 j \u2264 m.\n(2) Let M (si, 1) = PIw (T\u02c6; \u02c6s1 (s1, s2,..., si)); when i> m, no sentences are put into \u02c6sk when computing PIw (note PIw (T\u02c6; \u02c6s (si,..., sm)) = 0 for single-document segmentation).\n(3) Finally, M (snd, p) = maxi [M (si \u2212 1, p \u2212 1) + PIw (T\u02c6; \u02c6sp (si,..., snd))], the maximal Iw is found, and the corresponding segmentation is the best.\nCase 2 (alignment required): For each document d:\n(1) Compute pw (\u02c6t), partial pw (\u02c6t, \u02c6s), and partial pw (\u02c6s), and PIw (T\u02c6; \u02c6sk (si, si +1,..., sj)) similarly as in Case 1.\n(2) Let M (sm, 1, k) = PIw (T\u02c6; \u02c6sk (s1, s2,..., sm)), where k \u2208 {1, 2,..., p}.\nThen M (sm, L, kL) = maxi, kL\u22121 [M (si \u2212 1, L \u2212 1, kL\u22121) + PIw (T\u02c6; \u02c6skL (si,..., sm))].\nIf the proportion of overlapped sentences on the same topics, \u2211 s \u2208 Sd, s' \u2208 Sd' 1 (topic (s) = topic (s')) \/ min (| Sd |, | Sd' |), is larger than a threshold, where Sd is the set of sentences of d and | Sd | is the number of sentences of d, then d and d' have a shared topic.\nFor a pair of documents selected randomly, the error rate is computed using the following equation: p (err | real, pred) = p (miss | real, pred, same) p (same | real) + p (false alarm | real, pred, diff) p (diff | real), where a miss means that they have the same topic (same) in the real case (real) but are predicted (pred) as being on different topics.\nIf they are on different topics (diff), but 
predicted as on the same topic, it is a false alarm.\n5.2.2 Experiment Results\nThe results are shown in Table 3.\nIf most documents have different topics, then in WMIl the estimation of term weights in Equation (3) is not correct.\nThus, WMIl is not expected to have a better performance than MIl when most documents have different topics.\nWhen there are fewer documents in a subset with the same number of topics, more documents have different topics, so WMIl performs even worse than MIl.\nWe can see that in most cases MIl has a better (or at least similar) performance compared with LDA.\nAfter shared topic detection, multi-document segmentation of the documents with shared topics can be performed.\n5.3 Multi-document Segmentation\n5.3.1 Test Data and Evaluation\nFor multi-document segmentation and alignment, our goal is to identify the segments about the same topic among multiple similar documents with shared topics.\nUsing Iw is expected to perform better than I, since without term weights the result is seriously affected by document-dependent stop words and noisy words, which depend on the personal writing style.\nIt is more likely that the same segments of different documents are treated as different segments under the effect of document-dependent stop words and noisy words.\nTerm weights can reduce the effect of document-dependent stop words and noisy words by giving cue terms more weight.\nThe data set for multi-document segmentation and alignment has 102 samples and 2264 sentences in total.\nEach is the introduction part of a lab report selected from the course Biol 240W at Pennsylvania State University.\nEach sample has two segments, an introduction to plant hormones and the content of the lab.\nSample lengths range from two to 56 sentences.\nSome samples only have one part, and some have these two segments in reverse order.\nIt is not hard for a human to identify the boundary between the two segments.\nWe labelled each sentence manually for evaluation.\nThe criterion of 
evaluation is just the proportion of sentences with wrongly predicted segment labels among the total number of sentences in the whole training set, used as the error rate: p (error | predicted, real).\nTable 4: Average Error Rates of Multi-document Segmentation.\nTable 5: Multi-document Segmentation: P-values of two-sample one-sided t-tests between MIl and WMIl.\nIn order to show the benefits of multi-document segmentation and alignment, we compared our method with different parameters on different partitions of the same training set.\nExcept for the cases where the number of documents is 102 or one (they are the special cases of using the whole set and of pure single-document segmentation), we randomly divided the training set into m partitions, each with 51, 34, 20, 10, 5, or 2 document samples.\nThen we applied our methods on each partition and calculated the error rate on the whole training set.\nEach case was repeated 10 times to compute the average error rates.\nFor different partitions of the training set, different k values are used, since the number of terms increases when the document number in each partition increases.\n5.3.2 Experiment Results\nFrom the experiment results in Table 4, we can make the following observations: (1) When the number of documents increases, all methods have better performance.\nOnly from one to two documents does MIl decrease a little.\nWe can observe this from Figure 3 at the point of document number = 2.\nMost curves even have their worst results at this point.\nThere are two reasons.\nFirst, samples vote for the best multi-document segmentation and alignment, but if only two documents are compared with each other, the one with missing segments or a totally different sequence will affect the correct segmentation and alignment of the other.\nSecond, as noted at the beginning of this section, if two documents have more document-dependent stop words or noisy words than cue words, then the algorithm may view them as two different segments and the other segment is 
missing.\nGenerally, we can only expect a better performance when the number of documents is larger than the number of segments.\n(2) Except for single-document segmentation, WMIl is always better than MIl, and as the number of documents approaches one or grows very large, their performances become closer.\nTable 5 shows p-values of two-sample one-sided t-tests between MIl and WMIl.\nThis trend is also visible in the p-values.\nAt document number = 5, we reached the smallest p-value and the largest difference between the error rates of MIl and WMIl.\nFor single-document segmentation, WMIl is even slightly worse than MIl, similar to the results of single-document segmentation on the first data set.\nThe reason is that for single-document segmentation we cannot estimate term weights accurately, since multiple documents are unavailable.\n(3) Using term clustering usually yields worse results than MIl and WMIl.\n(4) Using term clustering in WMIk is even worse than in MIk, since in WMIk term clusters are found first using I before using Iw.\nIf the term clusters are not correct, the term weights are estimated poorly, which may mislead the algorithm toward even worse results.\nFrom the results we also found that in multi-document segmentation and alignment, most documents with missing segments or a reverse order are identified correctly.\nTable 6 illustrates the experiment results for the case of 20 partitions of the training set (each with five document samples) and topic segmentation and alignment using MIk with different numbers of term clusters k. 
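The mutual information criterion underlying MIl can be sketched in a few lines. The function below computes I(T; S) from a term-by-segment co-occurrence count matrix using the standard MI definition; the matrix layout and function name are illustrative assumptions, since this excerpt does not show the authors' exact implementation (the weighted variant WMIl additionally scales each term's contribution by a weight controlled by the hyperparameters a and b):

```python
import math

def mutual_information(counts):
    """I(T; S) for a term-by-segment count matrix:
    rows are terms, columns are candidate segments.
    Boundaries that concentrate each term into few
    segments raise this value, which the MI-based
    criterion seeks to maximize."""
    total = float(sum(sum(row) for row in counts))
    p_term = [sum(row) / total for row in counts]
    p_seg = [sum(row[j] for row in counts) / total
             for j in range(len(counts[0]))]
    mi = 0.0
    for i, row in enumerate(counts):
        for j, c in enumerate(row):
            if c > 0:
                p = c / total
                mi += p * math.log(p / (p_term[i] * p_seg[j]))
    return mi

# Terms concentrated in distinct segments give high MI ...
print(mutual_information([[4, 0], [0, 4]]))  # log 2, about 0.693
# ... while a uniform spread of terms gives zero MI.
print(mutual_information([[2, 2], [2, 2]]))  # 0.0
```

Under this criterion, the segmentation algorithm searches over candidate boundary positions (via dynamic programming, per the conclusions) for the assignment maximizing this quantity; because WMIl weights each term's contribution, poorly estimated weights, as with very few documents, can hurt it, consistent with the results above.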
Notice that when the number of term clusters increases, the error rate becomes smaller.\nWithout term clustering, we obtain the best result.\nWe did not show results for WMIk with term clustering, but they are similar.\nWe also tested WMIl with different hyperparameters a and b to adjust term weights.\nThe results are presented in Figure 3.\nThe default case WMIl: a = 1, b = 1 gave the best results for different partitions of the training set.\nWe can see the trend that when the document number is very small or very large, the difference between MIl: a = 0, b = 0 and WMIl: a = 1, b = 1 becomes quite small.\nWhen the document number is not large (roughly from 2 to 10), all cases using term weights perform better than MIl: a = 0, b = 0 without term weights, but when the document number becomes larger, the cases WMIl: a = 1, b = 0 and WMIl: a = 2, b = 1 become worse than MIl: a = 0, b = 0.\nWhen the document number becomes very large, they are even worse than cases with small document numbers.\nThis means that a proper way to estimate term weights for the WMI criterion is very important.\nFigure 4 shows the term weights learned from the whole training set.\nFour types of words can be roughly categorized, even though the transitions among them are subtle.\nFigure 5 illustrates the change in (weighted) mutual information for MIl and WMIl.\nAs expected, mutual information for MIl increases monotonically with the number of steps, while that of WMIl does not.\nFinally, MIl and WMIl are scalable, with computational complexity shown in Figure 6.\nOne advantage of our MI-based approach is that removing stop words is not required.\nAnother important advantage is that no hyperparameters must be adjusted.\nIn single-document segmentation, the performance based on MI is even better than that based on WMI, so no extra hyperparameter is required.\nIn multi-document segmentation, we showed in the experiments that a = 1 and b = 1 is the 
best.\nOur method gives more weight to cue terms.\nHowever, cue terms or sentences usually appear at the beginning of a segment, while the end of a segment may be quite noisy.\nFigure 3: Error rates for different hyperparameters of term weights.\nFigure 4: Term weights learned from the whole training set.\nFigure 5: Change in (weighted) MI for MIl and WMIl.\nFigure 6: Time to converge for MIl and WMIl.\nOne possible solution is giving more weight to terms at the beginning of each segment.\nMoreover, when segment lengths are quite different, long segments have much higher term frequencies, so they may dominate the segmentation boundaries.\nNormalizing term frequencies by segment length may be useful.\n6.\nCONCLUSIONS AND FUTURE WORK\nWe proposed a novel method for multi-document topic segmentation and alignment based on weighted mutual information, which can also handle single-document cases.\nWe used dynamic programming to optimize our algorithm.\nOur approach outperforms all previous methods on single-document cases.\nMoreover, we also showed that segmenting across multiple documents can improve performance tremendously.\nOur results also illustrate that weighted mutual information can exploit the information of multiple documents to reach better performance.\nWe have only tested our method on limited data sets; more data sets, especially complicated ones, should be tested, and more previous methods should be compared against.\nMoreover, natural segmentations like paragraphs are hints that can be used to find the optimal boundaries.\nSupervised learning can also be considered.","keyphrases":["topic segment","share topic","share topic detect","topic detect","multipl document","track","local and sequenti inform of document","singl document","mutual inform","term weight","optim boundari","cue term","stop word","wmu","singl-document segment","multi-document segment","topic segment perform","topic 
align"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","U","M","M","R","R"]} {"id":"H-2","title":"Personalized Query Expansion for the Web","abstract":"The inherent ambiguity of short keyword queries demands enhanced methods for Web retrieval. In this paper we propose to improve such Web queries by expanding them with terms collected from each user's Personal Information Repository, thus implicitly personalizing the search output. We introduce five broad techniques for generating the additional query keywords by analyzing user data at increasing granularity levels, ranging from term and compound level analysis up to global co-occurrence statistics, as well as to using external thesauri. Our extensive empirical analysis under four different scenarios shows some of these approaches to perform very well, especially on ambiguous queries, producing a very strong increase in the quality of the output rankings. Subsequently, we move this personalized search framework one step further and propose to make the expansion process adaptive to various features of each query. A separate set of experiments indicates that the adaptive algorithms bring an additional statistically significant improvement over the best static expansion approach.","lvl-1":"Personalized Query Expansion for the Web Paul-Alexandru Chirita L3S Research Center\u2217 Appelstr. 9a 30167 Hannover, Germany chirita@l3s.de Claudiu S. 
Firan L3S Research Center Appelstr. 9a 30167 Hannover, Germany firan@l3s.de Wolfgang Nejdl L3S Research Center Appelstr. 9a 30167 Hannover, Germany nejdl@l3s.de ABSTRACT The inherent ambiguity of short keyword queries demands enhanced methods for Web retrieval.\nIn this paper we propose to improve such Web queries by expanding them with terms collected from each user's Personal Information Repository, thus implicitly personalizing the search output.\nWe introduce five broad techniques for generating the additional query keywords by analyzing user data at increasing granularity levels, ranging from term and compound level analysis up to global co-occurrence statistics, as well as to using external thesauri.\nOur extensive empirical analysis under four different scenarios shows some of these approaches to perform very well, especially on ambiguous queries, producing a very strong increase in the quality of the output rankings.\nSubsequently, we move this personalized search framework one step further and propose to make the expansion process adaptive to various features of each query.\nA separate set of experiments indicates that the adaptive algorithms bring an additional statistically significant improvement over the best static expansion approach.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; H.3.5 [Information Storage and Retrieval]: Online Information Services-Web-based services General Terms Algorithms, Experimentation, Measurement 1.\nINTRODUCTION The booming popularity of search engines has established simple keyword search as the only widely accepted user interface for seeking information over the Web.\nYet keyword queries are inherently ambiguous.\nThe query canon book, for example, covers several different areas of interest: religion, photography, literature, and music.\nClearly, one would prefer search output to be aligned with the user's topic(s) of interest, rather than displaying 
a selection of popular URLs from each category.\nStudies have shown that more than 80% of the users would prefer to receive such personalized search results [33] instead of the currently generic ones.\nQuery expansion assists the user in formulating a better query, by appending additional keywords to the initial search request in order to encapsulate her interests therein, as well as to focus the Web search output accordingly.\nIt has been shown to perform very well over large data sets, especially with short input queries (see for example [19, 3]).\nThis is exactly the Web search scenario!\nIn this paper we propose to enhance Web query reformulation by exploiting the user's Personal Information Repository (PIR), i.e., the personal collection of text documents, emails, cached Web pages, etc.\nSeveral advantages arise when moving Web search personalization down to the Desktop level (note that by Desktop we refer to the PIR, and we use the two terms interchangeably).\nFirst is of course the quality of personalization: The local Desktop is a rich repository of information, accurately describing most, if not all, interests of the user.\nSecond, as all profile information is stored and exploited locally, on the personal machine, another very important benefit is privacy.\nSearch engines should not be able to know about a person's interests, i.e., they should not be able to connect a specific person with the queries she issued, or worse, with the output URLs she clicked within the search interface1 (see Volokh [35] for a discussion on privacy issues related to personalized Web search).\nOur algorithms expand Web queries with keywords extracted from the user's PIR, thus implicitly personalizing the search output.\nAfter a discussion of previous works in Section 2, we first investigate the analysis of local Desktop query context in Section 3.1.1.\nWe propose several keyword, expression, and summary based techniques for determining expansion terms from those personal documents 
matching the Web query best.\nIn Section 3.1.2 we move our analysis to the global Desktop collection and investigate expansions based on co-occurrence metrics and external thesauri.\nThe experiments presented in Section 3.2 show many of these approaches to perform very well, especially on ambiguous queries, producing NDCG [15] improvements of up to 51.28%.\nIn Section 4 we move this algorithmic framework further and propose to make the expansion process adaptive to the clarity level of the query.\nThis yields an additional improvement of 8.47% over the previously identified best algorithm.\nWe conclude and discuss further work in Section 5.\n1 Search engines can map queries at least to IP addresses, for example by using cookies and mining the query logs.\nHowever, by moving the user profile to the Desktop level we ensure such information is not explicitly associated with a particular user and stored on the search engine side.\n2.\nPREVIOUS WORK This paper brings together two IR areas: Search Personalization and Automatic Query Expansion.\nA vast number of algorithms exists for both domains.\nHowever, little work has specifically aimed at combining them.\nIn this section we thus present a separate analysis, first introducing some approaches to personalize search, as this represents the main goal of our research, and then discussing several query expansion techniques and their relationship to our algorithms.\n2.1 Personalized Search Personalized search comprises two major components: (1) User profiles, and (2) The actual search algorithm.\nThis section splits the relevant background according to the focus of each article into either one of these elements.\nApproaches focused on the User Profile.\nSugiyama et al. 
[32] analyzed surfing behavior and generated user profiles as features (terms) of the visited pages.\nUpon issuing a new query, the search results were ranked based on the similarity between each URL and the user profile.\nQiu and Cho [26] used Machine Learning on the past click history of the user in order to determine topic preference vectors and then apply Topic-Sensitive PageRank [13].\nUser profiling based on browsing history has the advantage of being rather easy to obtain and process.\nThis is probably why it is also employed by several industrial search engines (e.g., Yahoo! MyWeb2).\nHowever, it is definitely not sufficient for gathering a thorough insight into the user's interests.\nMoreover, it requires storing all personal information on the server side, which raises significant privacy concerns.\nOnly two other approaches enhanced Web search using Desktop data, yet both used different core ideas: (1) Teevan et al. [34] modified the query term weights from the BM25 weighting scheme to incorporate user interests as captured by their Desktop indexes; (2) In Chirita et al. [6], we focused on re-ranking the Web search output according to the cosine distance between each URL and a set of Desktop terms describing the user's interests.\nMoreover, none of these investigated the adaptive application of personalization.\nApproaches focused on the Personalization Algorithm.\nEffectively building the personalization aspect directly into PageRank [25] (i.e., by biasing it on a target set of pages) has received much attention recently.\nHaveliwala [13] computed a topic-oriented PageRank, in which 16 PageRank vectors biased on each of the main topics of the Open Directory were initially calculated off-line, and then combined at run-time based on the similarity between the user query and each of the 16 topics.\nMore recently, Nie et al. 
[24] modified the idea by distributing the PageRank of a page across the topics it contains in order to generate topic-oriented rankings.\nJeh and Widom [16] proposed an algorithm that avoids the massive resources needed for storing one Personalized PageRank Vector (PPV) per user by precomputing PPVs only for a small set of pages and then applying linear combination.\nAs the computation of PPVs for larger sets of pages was still quite expensive, several solutions have been investigated, the most important ones being those of Fogaras and Racz [12], and Sarlos et al. [30], the latter using rounding and count-min sketching in order to quickly obtain sufficiently accurate approximations of the personalized scores.\n2.2 Automatic Query Expansion Automatic query expansion aims at deriving a better formulation of the user query in order to enhance retrieval.\nIt is based on exploiting various social or collection-specific characteristics in order to generate additional terms, which are appended to the original input keywords before identifying the matching documents returned as output.\n2 http:\/\/myWeb2.search.yahoo.com\nIn this section we survey some of the representative query expansion works grouped according to the source employed to generate additional terms: (1) Relevance feedback, (2) Collection-based co-occurrence statistics, and (3) Thesaurus information.\nSome other approaches are also addressed at the end of the section.\nRelevance Feedback Techniques.\nThe main idea of Relevance Feedback (RF) is that useful information can be extracted from the relevant documents returned for the initial query.\nFirst approaches were manual [28] in the sense that the user was the one choosing the relevant results, and then various methods were applied to extract new terms, related to the query and the selected documents.\nEfthimiadis [11] presented a comprehensive literature review and proposed several simple methods to extract such new keywords based on term frequency, document 
frequency, etc.\nWe used some of these as inspiration for our Desktop-specific techniques.\nChang and Hsu [5] asked users to choose relevant clusters, instead of documents, thus reducing the amount of interaction necessary.\nRF has also been shown to be effectively automated by considering the top-ranked documents as relevant [37] (this is known as Pseudo RF).\nLam and Jones [21] used summarization to extract informative sentences from the top-ranked documents, and appended them to the user query.\nCarpineto et al. [4] maximized the divergence between the language model defined by the top retrieved documents and that defined by the entire collection.\nFinally, Yu et al. [38] selected the expansion terms from vision-based segments of Web pages in order to cope with the multiple topics residing therein.\nCo-occurrence Based Techniques.\nTerms highly co-occurring with the issued keywords have been shown to increase precision when appended to the query [17].\nMany statistical measures have been developed to best assess term relationship levels, either analyzing entire documents [27], lexical affinity relationships [3] (i.e., pairs of closely related words which contain exactly one of the initial query terms), etc.\nWe have also investigated three such approaches in order to identify query-relevant keywords from the rich, yet rather complex Personal Information Repository.\nThesaurus Based Techniques.\nA broadly explored method is to expand the user query with new terms whose meaning is closely related to the input keywords.\nSuch relationships are usually extracted from large-scale thesauri, such as WordNet [23], in which various sets of synonyms, hypernyms, etc. 
are predefined.\nJust as for the co-occurrence methods, initial experiments with this approach were controversial, either reporting improvements or even reductions in output quality [36].\nRecently, as the experimental collections grew larger, and as the employed algorithms became more complex, better results have been obtained [31, 18, 22].\nWe also use WordNet-based expansion terms.\nHowever, we base this process on analyzing the Desktop-level relationship between the original query and the proposed new keywords.\nOther Techniques.\nThere are many other attempts to extract expansion terms.\nThough orthogonal to our approach, two works are very relevant for the Web environment: Cui et al. [8] generated word correlations utilizing the probability of query terms appearing in each document, as computed over the search engine logs.\nKraft and Zien [19] showed that anchor text is very similar to user queries, and thus exploited it to acquire additional keywords.\n3.\nQUERY EXPANSION USING DESKTOP DATA Desktop data represents a very rich repository of profiling information.\nHowever, this information comes in a very unstructured way, covering documents which are highly diverse in format, content, and even language characteristics.\nIn this section we first tackle this problem by proposing several lexical analysis algorithms which exploit the user's PIR to extract keyword expansion terms at various granularities, ranging from term frequency within Desktop documents up to utilizing global co-occurrence statistics over the personal information repository.\nThen, in the second part of the section we empirically analyze the performance of each approach.\n3.1 Algorithms This section presents the five generic approaches for analyzing the user's Desktop data in order to provide expansion terms for Web search.\nIn the proposed algorithms we gradually increase the amount of personal information utilized.\nThus, in the first part we investigate three local analysis techniques focused 
only on those Desktop documents matching the user's query best.\nWe append to the Web query the most relevant terms, compounds, and sentence summaries from these documents.\nIn the second part of the section we move towards a global Desktop analysis, proposing to investigate term co-occurrences, as well as thesauri, in the expansion process.\n3.1.1 Expanding with Local Desktop Analysis Local Desktop Analysis is related to enhancing Pseudo Relevance Feedback to generate query expansion keywords from the PIR best hits for the user's Web query, rather than from the top-ranked Web search results.\nWe distinguish three granularity levels for this process and we investigate each of them separately.\nTerm and Document Frequency.\nAs the simplest possible measures, TF and DF have the advantage of being very fast to compute.\nPrevious experiments with small data sets have shown them to yield very good results [11].\nWe thus independently associate a score with each term, based on each of the two statistics.\nThe TF-based score is obtained by multiplying the frequency of a term with a position score that decreases as the term's first appearance moves closer to the end of the document.\nThis is necessary especially for longer documents, because more informative terms tend to appear towards their beginning [10].\nThe complete TF-based keyword extraction formula is as follows:\nTermScore = ( 1\/2 + 1\/2 \u00b7 (nrWords \u2212 pos) \/ nrWords ) \u00b7 log(1 + TF) (1)\nwhere nrWords is the total number of terms in the document and pos is the position of the first appearance of the term; TF represents the frequency of each term in the Desktop document matching the user's Web query.\nThe identification of suitable expansion terms is even simpler when using DF: Given the set of Top-K relevant Desktop documents, generate their snippets as focused on the original search request.\nThis query orientation is necessary, since the DF scores are computed at the level of the entire PIR and would produce too noisy 
suggestions otherwise.\nOnce the set of candidate terms has been identified, the selection proceeds by ordering them according to the DF scores they are associated with.\nTies are resolved using the corresponding TF scores.\nNote that a hybrid TFxIDF approach is not necessarily effective, since one Desktop term might have a high DF on the Desktop, while being quite rare in the Web.\nFor example, the term PageRank would be quite frequent on the Desktop of an IR scientist, thus achieving a low score with TFxIDF.\nHowever, as it is rather rare in the Web, it would make a good resolution of the query towards the correct topic.\nLexical Compounds.\nAnick and Tipirneni [2] defined the lexical dispersion hypothesis, according to which an expression's lexical dispersion (i.e., the number of different compounds it appears in within a document or group of documents) can be used to automatically identify key concepts over the input document set.\nAlthough several possible compound expressions are available, it has been shown that simple approaches based on noun analysis are almost as good as highly complex part-of-speech pattern identification algorithms [1].\nWe thus inspect the matching Desktop documents for all their lexical compounds of the following form: { adjective? noun+ }\nAll such compounds could be easily generated off-line, at indexing time, for all the documents in the local repository.\nMoreover, once identified, they can be further sorted depending on their dispersion within each document in order to facilitate fast retrieval of the most frequent compounds at run-time.\nSentence Selection.\nThis technique builds upon sentence-oriented document summarization: First, the set of relevant Desktop documents is identified; then, a summary containing their most important sentences is generated as output.\nSentence selection is the most comprehensive local analysis approach, as it produces the most detailed expansions (i.e., sentences).\nIts downside is that, unlike 
with the first two algorithms, its output cannot be stored efficiently, and consequently it cannot be computed off-line.\nWe generate sentence-based summaries by ranking the document sentences according to their salience score, as follows [21]:\nSentenceScore = SW\u00b2 \/ TW + PS + TQ\u00b2 \/ NQ\nThe first term is the ratio between the squared number of significant words within the sentence (SW) and the total number of words therein (TW).\nA word is significant in a document if its frequency is above the threshold TF > ms, where\nms = 7 \u2212 0.1 \u2217 (25 \u2212 NS), if NS < 25; 7, if NS \u2208 [25, 40]; 7 + 0.1 \u2217 (NS \u2212 40), if NS > 40\nwith NS being the total number of sentences in the document (see [21] for details).\nThe second term (PS) is a position score set to (Avg(NS) \u2212 SentenceIndex) \/ Avg\u00b2(NS) for the first ten sentences, and to 0 otherwise, Avg(NS) being the average number of sentences over all Desktop items.\nThis way, short documents such as emails are not affected, which is correct, since they usually do not contain a summary in the very beginning.\nHowever, as longer documents usually do include overall descriptive sentences in the beginning [10], these sentences are more likely to be relevant.\nThe final term biases the summary towards the query.\nIt is the ratio between the squared number of query terms present in the sentence (TQ) and the total number of terms in the query (NQ).\nIt is based on the belief that the more query terms a sentence contains, the more likely it is to convey information highly related to the query.\n3.1.2 Expanding with Global Desktop Analysis In contrast to the previously presented approach, global analysis relies on information from across the entire personal Desktop to infer the new relevant query terms.\nIn this section we propose two such techniques, namely term co-occurrence statistics and filtering the output of an external thesaurus.\nTerm Co-occurrence Statistics.\nFor each term, we can easily compute off-line those 
terms co-occurring with it most frequently in a given collection (i.e., PIR in our case), and then exploit this information at run-time in order to infer keywords highly correlated with the user query.\nOur generic co-occurrence based query expansion algorithm is as follows:\nAlgorithm 3.1.2.1.\nCo-occurrence based keyword similarity search.\nOff-line computation:\n1: Filter potential keywords k with DF \u2208 [10, ..., 20% \u00b7 N]\n2: For each keyword ki\n3: For each keyword kj\n4: Compute SCki,kj, the similarity coefficient of (ki, kj)\nOn-line computation:\n1: Let S be the set of keywords, potentially similar to an input expression E.\n2: For each keyword k of E:\n3: S \u2190 S \u222a TSC(k), where TSC(k) contains the Top-K terms most similar to k\n4: For each term t of S:\n5a: Let Score(t) \u2190 \u220f_{k \u2208 E} (0.01 + SCt,k)\n5b: Let Score(t) \u2190 #DesktopHits(E|t)\n6: Select Top-K terms of S with the highest scores.\nThe off-line computation needs an initial trimming phase (step 1) for optimization purposes.\nIn addition, we also restricted the algorithm to computing co-occurrence levels across nouns only, as they contain by far the largest amount of conceptual information, and as this approach reduces the size of the co-occurrence matrix considerably.\nDuring the run-time phase, having the terms most correlated with each particular query keyword already identified, one more operation is necessary, namely calculating the correlation of every output term with the entire query.\nTwo approaches are possible: (1) using a product of the correlation between the term and all keywords in the original expression (step 5a), or (2) simply counting the number of documents in which the proposed term co-occurs with the entire user query (step 5b).\nWe considered the following formulas for Similarity Coefficients [17]:\n\u2022 Cosine Similarity, defined as: CS = DFx,y \/ \u221a(DFx \u00b7 DFy) (2)\n\u2022 Mutual Information, defined as: MI = log [ (N \u00b7 DFx,y) \/ (DFx \u00b7 DFy) ] (3)\n\u2022 Likelihood 
Ratio, defined in the paragraphs below.\nDFx is the Document Frequency of term x, and DFx,y is the number of documents containing both x and y. To further increase the quality of the generated scores, we limited the latter indicator to co-occurrences within a window of W terms.\nWe set W to be the same as the maximum number of expansion keywords desired.\nDunning's Likelihood Ratio \u03bb [9] is a co-occurrence based metric similar to \u03c7\u00b2.\nIt starts by attempting to reject the null hypothesis, according to which two terms A and B would appear in text independently of each other.\nThis means that P(A|B) = P(A|\u00acB) = P(A), where P(A|\u00acB) is the probability of observing term A given that term B is not present.\nConsequently, the test for independence of A and B can be performed by checking whether the distribution of A given that B is present is the same as the distribution of A given that B is not present.\nOf course, in reality we know these terms are not independent in text, and we only use the statistical metrics to highlight terms which frequently appear together.\nWe compare the two binomial processes by using likelihood ratios of their associated hypotheses.\nFirst, let us define the likelihood ratio for one hypothesis:\n\u03bb = [ max_{\u03c9 \u2208 \u03a90} H(\u03c9; k) ] \/ [ max_{\u03c9 \u2208 \u03a9} H(\u03c9; k) ] (4)\nwhere \u03c9 is a point in the parameter space \u03a9, \u03a90 is the particular hypothesis being tested, and k is a point in the space of observations K.\nIf we assume that the two binomial distributions have the same underlying parameter, i.e., {(p1, p2) | p1 = p2}, we can write:\n\u03bb = [ max_p H(p, p; k1, k2, n1, n2) ] \/ [ max_{p1,p2} H(p1, p2; k1, k2, n1, n2) ] (5)\nwhere H(p1, p2; k1, k2, n1, n2) = p1^{k1} \u00b7 (1 \u2212 p1)^{n1 \u2212 k1} \u00b7 C(n1, k1) \u00b7 p2^{k2} \u00b7 (1 \u2212 p2)^{n2 \u2212 k2} \u00b7 C(n2, k2), with C(n, k) denoting the binomial coefficient.\nSince the maxima are obtained with p1 = k1 \/ n1, p2 = k2 \/ n2, and p = (k1 + k2) \/ (n1 + n2), we have:\n\u03bb = [ max_p L(p, k1, n1) \u00b7 L(p, k2, n2) ] \/ [ max_{p1,p2} L(p1, k1, n1) \u00b7 L(p2, k2, n2) ] (6)\nwhere L(p, k, n) = p^k \u00b7 (1 \u2212 p)^{n \u2212 k}.\nTaking the logarithm of the likelihood, we obtain:\n\u22122 \u00b7 log \u03bb = 2 \u00b7 [log L(p1, k1, n1) + log L(p2, k2, n2) \u2212 log L(p, k1, n1) \u2212 log L(p, k2, n2)]\nwhere log L(p, k, n) = k \u00b7 log p + (n \u2212 k) \u00b7 log(1 \u2212 p).\nFinally, if we write O11 = P(A \u2227 B), O12 = P(\u00acA \u2227 B), O21 = P(A \u2227 \u00acB), and O22 = P(\u00acA \u2227 \u00acB), then the co-occurrence likelihood of terms A and B becomes:\n\u22122 \u00b7 log \u03bb = 2 \u00b7 [O11 \u00b7 log p1 + O12 \u00b7 log(1 \u2212 p1) + O21 \u00b7 log p2 + O22 \u00b7 log(1 \u2212 p2) \u2212 (O11 + O21) \u00b7 log p \u2212 (O12 + O22) \u00b7 log(1 \u2212 p)]\nwhere p1 = k1 \/ n1 = O11 \/ (O11 + O12), p2 = k2 \/ n2 = O21 \/ (O21 + O22), and p = (k1 + k2) \/ (n1 + n2).\nThesaurus Based Expansion.\nLarge-scale thesauri encapsulate global knowledge about term relationships.\nThus, we first identify the set of terms closely related to each query keyword, and then we calculate the Desktop co-occurrence level of each of these possible expansion terms with the entire initial search request.\nIn the end, those suggestions with the highest frequencies are kept.\nThe algorithm is as follows:\nAlgorithm 3.1.2.2.\nFiltered thesaurus based query expansion.\n1: For each keyword k of an input query Q:\n2: Select the following sets of related terms using WordNet:\n2a: Syn: All Synonyms\n2b: Sub: All sub-concepts residing one level below k\n2c: Super: All super-concepts residing one level above k\n3: For each set Si of the above mentioned sets:\n4: For each term t of Si:\n5: Search the PIR with (Q|t), i.e., the original query, as expanded with t\n6: Let H be the number of hits of the above search (i.e., the co-occurrence level of t with Q)\n7: Return Top-K terms as ordered by their H values.\nWe observe three types of term relationships (steps 2a-2c): (1) synonyms, (2) sub-concepts, namely hyponyms (i.e., sub-classes) and meronyms (i.e., sub-parts), and (3) super-concepts, namely 
As they represent quite different types of association, we investigated them separately. We limited the output expansion set (step 7) to contain only terms appearing at least T times on the Desktop, in order to avoid noisy suggestions, with T = min(N / DocsPerTopic, MinDocs). We set DocsPerTopic = 2,500 and MinDocs = 5, the latter coping with the case of small PIRs.

3.2 Experiments

3.2.1 Experimental Setup

We evaluated our algorithms with 18 subjects (Ph.D. and Post-Doc students in different areas of computer science and education). First, they installed our Lucene based search engine (clearly, if one had already installed a Desktop search application, this overhead would not be present) and indexed all their locally stored content: files within user selected paths, emails, and Web cache. Without loss of generality, we focused the experiments on single-user machines. Then, they chose 4 queries related to their everyday activities, as follows:

• One very frequent AltaVista query, as extracted from the top 2% queries most issued to the search engine within a 7.2 million entries log from October 2001. In order to connect such a query to each user's interests, we added an off-line pre-processing phase: We generated the most frequent search requests and then randomly selected a query with at least 10 hits on each subject's Desktop. To further ensure a real life scenario, users were allowed to reject the proposed query and ask for a new one, if they considered it totally outside their areas of interest.

• One randomly selected log query, filtered using the same procedure as above.

• One self-selected specific query, which they thought to have only one meaning.

• One self-selected ambiguous query, which they thought to have at least three meanings.

The average query lengths were 2.0 and 2.3 terms for the log queries, as well as 2.9 and 1.8 for the self-selected
ones. Even though our algorithms are mainly intended to enhance search when using ambiguous query keywords, we chose to investigate their performance over a wide span of query types, in order to see how they perform in all situations. The log queries evaluate real life requests, in contrast to the self-selected ones, which rather target the identification of top and bottom performances. Note that the former were somewhat farther away from each subject's interests, thus also being more difficult to personalize on. To gain an insight into the relationship between each query type and user interests, we asked each person to rate the query itself with a score of 1 to 5, with the following interpretations: (1) never heard of it, (2) do not know it, but heard of it, (3) know it partially, (4) know it well, (5) major interest. The obtained grades were 3.11 for the top log queries, 3.72 for the randomly selected ones, 4.45 for the self-selected specific ones, and 4.39 for the self-selected ambiguous ones.

For each query, we collected the Top-5 URLs generated by 20 versions of the algorithms presented in Section 3.1 (all Desktop level parts of our algorithms were performed with Lucene using its predefined searching and ranking functions). These results were then shuffled into one set containing usually between 70 and 90 URLs. Thus, each subject had to assess about 325 documents for all four queries, being aware of neither the algorithm, nor the ranking of each assessed URL. Overall, 72 queries were issued and over 6,000 URLs were evaluated during the experiment. For each of these URLs, the testers had to give a rating ranging from 0 to 2, dividing the relevant results into two categories: (1) relevant and (2) highly relevant. Finally, the quality of each ranking was assessed using the normalized version of Discounted Cumulative Gain (DCG) [15]. DCG is a rich measure, as it gives more weight to highly ranked documents, while also incorporating different relevance levels by giving them different gain values:

    DCG(i) = G(1),                       if i = 1
    DCG(i) = DCG(i − 1) + G(i)/log(i),   otherwise.

We used G(i) = 1 for relevant results, and G(i) = 2 for highly relevant ones. As queries having more relevant output documents will have a higher DCG, we also normalized its value to a score between 0 (the worst possible DCG given the ratings) and 1 (the best possible DCG given the ratings) to facilitate averaging over queries. All results were tested for statistical significance using T-tests.

Algorithmic specific aspects. The main parameter of our algorithms is the number of generated expansion keywords. For this experiment we set it to 4 terms for all techniques, leaving an analysis at this level for a subsequent investigation. In order to optimize the run-time computation speed, we chose to limit the number of output keywords per Desktop document to the number of expansion keywords desired (i.e., four). For all algorithms we also investigated stricter limitations. This allowed us to observe that the Lexical Compounds method would perform better if at most one compound per document were selected. We therefore chose to experiment with this new approach as well. For all other techniques, considering less than four terms per document did not seem to consistently yield any additional qualitative gain. We labeled the algorithms we evaluated as follows:

0. Google: the actual Google query output, as returned by the Google API;
1. TF, DF: Term and Document Frequency;
2. LC, LC[O]: regular and optimized (by considering only one top compound per document) Lexical Compounds;
3. SS: Sentence Selection;
4. TC[CS], TC[MI], TC[LR]: Term Co-occurrence Statistics, using respectively Cosine Similarity, Mutual Information, and Likelihood Ratio as similarity coefficients;
5. WN[SYN], WN[SUB], WN[SUP]: WordNet based expansion with synonyms, sub-concepts, and super-concepts, respectively.

Except for the thesaurus based expansion, in all cases we also investigated the performance of our algorithms when exploiting only the Web browser cache to represent the user's personal information.
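The DCG recursion and its min-max normalization can be sketched as follows (we assume the customary base-2 logarithm from [15], which the formula above leaves unspecified; function names are our own):

```python
import math

def dcg(gains):
    """DCG(i) = G(1) if i == 1, else DCG(i-1) + G(i)/log2(i)."""
    total = 0.0
    for i, g in enumerate(gains, start=1):
        total += g if i == 1 else g / math.log2(i)
    return total

def normalized_dcg(gains):
    """Normalize DCG between the worst (0) and best (1) orderings of the ratings."""
    best = dcg(sorted(gains, reverse=True))
    worst = dcg(sorted(gains))
    if best == worst:                    # all ratings identical
        return 1.0 if best > 0 else 0.0
    return (dcg(gains) - worst) / (best - worst)
```

With the paper's gain values (1 for relevant, 2 for highly relevant, 0 otherwise), a ranking that places the highly relevant URLs first scores 1.0 and the reversed ordering scores 0.0, which makes per-query scores comparable before averaging.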
This is motivated by the fact that other personal documents, such as emails, are known to use a somewhat different language than that residing on the World Wide Web [34]. However, as this approach performed visibly poorer than using the entire Desktop data, we omitted it from the subsequent analysis.

3.2.2 Results

Log Queries. We evaluated all variants of our algorithms using NDCG. For log queries, the best performance was achieved with TF, LC[O], and TC[LR]. The improvements they brought were up to 5.2% for top queries (p = 0.14) and 13.8% for randomly selected queries (p = 0.01, statistically significant), both obtained with LC[O]. A summary of all results is depicted in Table 1. Both TF and LC[O] yielded very good results, indicating that simple keyword and expression oriented approaches might be sufficient for the Desktop based query expansion task. LC[O] was much better than LC, improving its quality by up to 25.8% in the case of randomly selected log queries, an improvement which was also significant with p = 0.04. Thus, a selection of compounds spanning several Desktop documents is more informative about the user's interests than the general approach, in which there is no restriction on the number of compounds produced from every personal item. The more complex Desktop oriented approaches, namely sentence selection and all term co-occurrence based algorithms, showed a rather average performance, with no visible improvements, except for TC[LR]. Also, the thesaurus based expansion usually produced very few suggestions, possibly because of the many technical queries employed by our subjects. We observed however that expanding with sub-concepts is very good for everyday life terms (e.g., car), whereas the use of super-concepts is valuable for compounds having at least one
term with low technicality (e.g., document clustering). As expected, the synonym based expansion performed generally well, though in some very technical cases it yielded rather general suggestions.

Algorithm   NDCG (Top)   Signific. vs. Google   NDCG (Random)   Signific. vs. Google
Google      0.42         -                      0.40            -
TF          0.43         p = 0.32               0.43            p = 0.04
DF          0.17         -                      0.23            -
LC          0.39         -                      0.36            -
LC[O]       0.44         p = 0.14               0.45            p = 0.01
SS          0.33         -                      0.36            -
TC[CS]      0.37         -                      0.35            -
TC[MI]      0.40         -                      0.36            -
TC[LR]      0.41         -                      0.42            p = 0.06
WN[SYN]     0.42         -                      0.38            -
WN[SUB]     0.28         -                      0.33            -
WN[SUP]     0.26         -                      0.26            -
Table 1: Normalized Discounted Cumulative Gain at the first 5 results when searching for top (left) and random (right) log queries.

Algorithm   NDCG (Clear)   Signific. vs. Google   NDCG (Ambiguous)   Signific. vs. Google
Google      0.71           -                      0.39               -
TF          0.66           -                      0.52               p < 0.01
DF          0.37           -                      0.31               -
LC          0.65           -                      0.54               p < 0.01
LC[O]       0.69           -                      0.59               p < 0.01
SS          0.56           -                      0.52               p < 0.01
TC[CS]      0.60           -                      0.50               p = 0.01
TC[MI]      0.60           -                      0.47               p = 0.02
TC[LR]      0.56           -                      0.47               p = 0.03
WN[SYN]     0.70           -                      0.36               -
WN[SUB]     0.46           -                      0.32               -
WN[SUP]     0.51           -                      0.29               -
Table 2: Normalized Discounted Cumulative Gain at the first 5 results when searching for user selected clear (left) and ambiguous (right) queries.

Finally, we noticed Google to be very optimized for some top frequent queries. However, even within this harder scenario, some of our personalization algorithms (i.e., TF and LC[O]) produced statistically significant improvements over regular search.

Self-selected Queries. The NDCG values obtained with self-selected queries are depicted in Table 2. While our algorithms did not enhance Google for the clear search tasks, they did produce strong improvements of up to 52.9% (also highly significant, with p < 0.01) when utilized with ambiguous queries. In fact, almost all our algorithms resulted in statistically significant improvements over Google for this query type. In general, the relative differences between our algorithms were similar to those observed for the log based queries. As in the previous analysis, the
simple Desktop based Term Frequency and Lexical Compounds metrics performed best. Nevertheless, a very good outcome was also obtained for Desktop based sentence selection and all term co-occurrence metrics. There were no visible differences between the behavior of the three different approaches to co-occurrence calculation. Finally, for the case of clear queries, we noticed that fewer than 4 expansion terms might be less noisy and thus helpful in bringing further improvements. We thus pursued this idea with the adaptive algorithms presented in the next section.

4. INTRODUCING ADAPTIVITY

In the previous section we investigated the behavior of each technique when adding a fixed number of keywords to the user query. However, an optimal personalized query expansion algorithm should automatically adapt itself to various aspects of each query, as well as to the particularities of the person using it. In this section we first discuss the factors influencing the behavior of our expansion algorithms, which might be used as input for the adaptivity process. In the second part we then present some initial experiments with one of them, namely query clarity.

4.1 Adaptivity Factors

Several indicators could assist the algorithm to automatically tune the number of expansion terms. We start by discussing adaptation based on the query clarity level. Then, we briefly introduce an approach to model the generic query formulation process in order to tailor the search algorithm automatically, and discuss some other possible factors that might be of use for this task.

Query Clarity. Interest in analyzing query difficulty has increased only recently, and not many papers address this topic. Yet it has long been known that query disambiguation has a high potential of improving retrieval effectiveness for low recall searches with very short queries [20], which is exactly our targeted scenario. Also, the success of IR systems clearly varies across different
topics. We thus propose to use an estimate of the query clarity level in order to automatically tune the amount of personalization fed into the algorithm. The following metrics are available:

• The Query Length is expressed simply by the number of words in the user query. This solution is rather ineffective, as reported by He and Ounis [14].

• The Query Scope relates to the IDF of the entire query, as in:

    C1 = log( #DocumentsInCollection / #Hits(Query) )    (7)

This metric performs well on document collections covering a single topic, but poorly otherwise [7, 14].

• The Query Clarity [7] seems to be the best, as well as the most applied technique so far. It measures the divergence between the language model associated with the user query and the language model associated with the collection. In a simplified version (i.e., without smoothing over the terms which are not present in the query), it can be expressed as follows:

    C2 = Σ_{w ∈ Query} Pml(w|Query) · log( Pml(w|Query) / Pcoll(w) )    (8)

where Pml(w|Query) is the probability of the word w within the submitted query, and Pcoll(w) is the probability of w within the entire collection of documents.

Other solutions exist, but we consider them too computationally expensive for the huge amount of data that needs to be processed within Web applications. We thus decided to investigate only C1 and C2. First, we analyzed their performance over a large set of queries and split their clarity predictions into three categories:

• Small Scope / Clear Query: C1 ∈ [0, 12], C2 ∈ [4, ∞).
• Medium Scope / Semi-Ambiguous Query: C1 ∈ [12, 17), C2 ∈ [2.5, 4).
• Large Scope / Ambiguous Query: C1 ∈ [17, ∞), C2 ∈ [0, 2.5].

In order to limit the amount of experiments, we analyzed only the results produced when employing C1 for the PIR and C2 for the Web. As algorithmic basis we used LC[O], i.e., optimized lexical compounds, which was clearly the winning method in the previous analysis.
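The two predictors C1 and C2 can be sketched in a few lines (function names are our own; we assume the natural logarithm, which the text leaves unspecified, and maximum-likelihood term probabilities):

```python
import math
from collections import Counter

def query_scope(query_hits, collection_size):
    """C1: log of the collection size over the number of documents matching the query."""
    return math.log(collection_size / query_hits)

def query_clarity(query_terms, collection_term_probs):
    """C2: KL divergence between the (unsmoothed, maximum-likelihood) query
    language model and the collection language model."""
    counts = Counter(query_terms)
    total = len(query_terms)
    clarity = 0.0
    for w, c in counts.items():
        p_ml = c / total
        clarity += p_ml * math.log(p_ml / collection_term_probs[w])
    return clarity
```

A query made of terms that are rare in the collection diverges strongly from the collection model (high C2, i.e., clear), while a query whose terms are as common as in the collection at large scores near zero (ambiguous).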
As manual investigation showed it to slightly overfit the expansion terms for clear queries, we utilized a substitute for this particular case. Two candidates were considered: (1) TF, i.e., the second best approach, and (2) WN[SYN], as we observed that its first and second expansion terms were often very good.

Desktop Scope   Web Clarity    No. of Terms   Algorithm
Large           Ambiguous      4              LC[O]
Large           Semi-Ambig.    3              LC[O]
Large           Clear          2              LC[O]
Medium          Ambiguous      3              LC[O]
Medium          Semi-Ambig.    2              LC[O]
Medium          Clear          1              TF / WN[SYN]
Small           Ambiguous      2              TF / WN[SYN]
Small           Semi-Ambig.    1              TF / WN[SYN]
Small           Clear          0              -
Table 3: Adaptive Personalized Query Expansion.

Given the algorithms and clarity measures, we implemented the adaptivity procedure by tailoring the number of expansion terms added to the original query as a function of its ambiguity on the Web, as well as within the user's PIR. Note that the ambiguity level is related to the number of documents covering a certain query. Thus, to some extent, it has different meanings on the Web and within PIRs. While a query deemed ambiguous on a large collection such as the Web will very likely indeed have a large number of meanings, this may not be the case for the Desktop. Take for example the query PageRank. If the user is a link analysis expert, many of her documents might match this term, and the query would thus be classified as ambiguous. However, when analyzed against the Web, this is definitely a clear query. Consequently, we employed more additional terms when the query was more ambiguous on the Web, but also on the Desktop. Put another way, queries deemed clear on the Desktop were inherently not well covered within the user's PIR, and thus had fewer keywords appended to them. The number of expansion terms we utilized for each combination of scope and clarity levels is depicted in Table 3.

Query Formulation Process. Interactive query expansion has a high potential for enhancing search [29].
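Table 3 is effectively a small lookup table keyed on the two clarity buckets. The sketch below transcribes it, using the C1 and C2 thresholds from Section 4.1 (level names and function names are our own shorthand):

```python
# (Desktop scope, Web clarity) -> (number of expansion terms, algorithm),
# transcribed from Table 3; "TF/WN[SYN]" marks the substitute algorithms
# used instead of LC[O] for the poorly covered cases.
EXPANSION_POLICY = {
    ("large",  "ambiguous"): (4, "LC[O]"),
    ("large",  "semi"):      (3, "LC[O]"),
    ("large",  "clear"):     (2, "LC[O]"),
    ("medium", "ambiguous"): (3, "LC[O]"),
    ("medium", "semi"):      (2, "LC[O]"),
    ("medium", "clear"):     (1, "TF/WN[SYN]"),
    ("small",  "ambiguous"): (2, "TF/WN[SYN]"),
    ("small",  "semi"):      (1, "TF/WN[SYN]"),
    ("small",  "clear"):     (0, None),
}

def desktop_scope(c1):
    """Bucket the Desktop scope score C1: [0,12) small, [12,17) medium, [17,inf) large."""
    return "small" if c1 < 12 else "medium" if c1 < 17 else "large"

def web_clarity(c2):
    """Bucket the Web clarity score C2: [0,2.5) ambiguous, [2.5,4) semi, [4,inf) clear."""
    return "ambiguous" if c2 < 2.5 else "semi" if c2 < 4 else "clear"

def expansion_terms(c1, c2):
    return EXPANSION_POLICY[(desktop_scope(c1), web_clarity(c2))]
```

For instance, a query that is ambiguous both on the Web (low C2) and within a large-scope Desktop (high C1) receives the full four LC[O] expansion terms, while a query already clear in a small PIR receives none.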
We believe that modeling its underlying process would be very helpful in producing qualitative adaptive Web search algorithms. For example, when the user adds a new term to her previously issued query, she is basically reformulating her original request. Thus, the newly added terms are more likely to convey information about her search goals. For a general, non personalized retrieval engine, this could correspond to giving more weight to these new keywords. Within our personalized scenario, the generated expansions can similarly be biased towards these terms. Nevertheless, more investigations are necessary in order to solve the challenges posed by this approach.

Other Features. The idea of adapting the retrieval process to various aspects of the query, of the user herself, and even of the employed algorithm has received only little attention in the literature. Only some approaches have been investigated, usually indirectly. There exist studies of query behaviors at different times of day, or of the topics spanned by the queries of various classes of users, etc. However, they generally do not discuss how these features can actually be incorporated into the search process itself, and they have almost never been related to the task of Web personalization.

4.2 Experiments

We used exactly the same experimental setup as for our previous analysis, with two log-based queries and two self-selected ones (all different from before, in order to make sure there is no bias towards the new approaches), evaluated with NDCG over the Top-5 results output by each algorithm. The newly proposed adaptive personalized query expansion algorithms are denoted as A[LCO/TF] for the approach using TF with the clear Desktop queries, and as A[LCO/WN] when WN[SYN] was utilized instead of TF. The overall results were at least similar to, or better than, Google for all kinds of log queries (see Table 4).

Algorithm    NDCG (Top)   Signific. vs. Google   NDCG (Random)   Signific. vs. Google
Google       0.51         -                      0.45            -
TF           0.51         -                      0.48            p = 0.04
LC[O]        0.53         p = 0.09               0.52            p < 0.01
WN[SYN]      0.51         -                      0.45            -
A[LCO/TF]    0.56         p < 0.01               0.49            p = 0.04
A[LCO/WN]    0.55         p = 0.01               0.44            -
Table 4: Normalized Discounted Cumulative Gain at the first 5 results when using our adaptive personalized search algorithms on top (left) and random (right) log queries.

Algorithm    NDCG (Clear)   Signific. vs. Google   NDCG (Ambiguous)   Signific. vs. Google
Google       0.81           -                      0.46               -
TF           0.76           -                      0.54               p = 0.03
LC[O]        0.77           -                      0.59               p < 0.01
WN[SYN]      0.79           -                      0.44               -
A[LCO/TF]    0.81           -                      0.64               p < 0.01
A[LCO/WN]    0.81           -                      0.63               p < 0.01
Table 5: Normalized Discounted Cumulative Gain at the first 5 results when using our adaptive personalized search algorithms on user selected clear (left) and ambiguous (right) queries.

For top frequent queries, both adaptive algorithms, A[LCO/TF] and A[LCO/WN], improve by 10.8% and 7.9% respectively, both differences being statistically significant with p ≤ 0.01. They also achieve an improvement of up to 6.62% over the best performing static algorithm, LC[O] (p = 0.07). For randomly selected queries, even though A[LCO/TF] yields significantly better results than Google (p = 0.04), both adaptive approaches fall behind the static algorithms. The major reason seems to be the imperfect selection of the number of expansion terms as a function of query clarity. Thus, more experiments are needed in order to determine the optimal number of generated expansion keywords as a function of the query ambiguity level. The analysis of the self-selected queries shows that adaptivity can bring even further improvements to Web search personalization (see Table 5). For ambiguous queries, the scores given to Google search are enhanced by 40.6% through A[LCO/TF] and by 35.2% through A[LCO/WN], both strongly significant with p < 0.01. Adaptivity also brings another 8.9% improvement over the static personalization of LC[O] (p = 0.05). Even for clear queries, the newly proposed flexible
algorithms perform slightly better, improving by 0.4% and 1.0% respectively. All results are depicted graphically in Figure 1.

Figure 1: Relative NDCG gain (in %) for each algorithm overall, as well as separated per query category.

We notice that A[LCO/TF] is the overall best algorithm, performing better than Google for all types of queries, whether extracted from the search engine log or self-selected. The experiments presented in this section clearly confirm that adaptivity is a necessary further step to take in Web search personalization.

5. CONCLUSIONS AND FURTHER WORK

In this paper we proposed to expand Web search queries by exploiting the user's Personal Information Repository in order to automatically extract additional keywords related both to the query itself and to the user's interests, thus personalizing the search output. In this context, the paper includes the following contributions:

• We proposed five techniques for determining expansion terms from personal documents. Each of them produces additional query keywords by analyzing the user's Desktop at increasing granularity levels, ranging from term and expression level analysis up to global co-occurrence statistics and external thesauri.

• We provided a thorough empirical analysis of several variants of our approaches, under four different scenarios. We showed some of these approaches to perform very well, producing NDCG improvements of up to 51.28%.

• We moved this personalized search framework further and proposed to make the expansion process adaptive to features of each query, with a strong focus on its clarity level.

• Within a separate set of experiments, we showed our adaptive algorithms to provide an additional improvement of 8.47% over the previously identified best approach.

We are currently investigating the dependency between various query features and the optimal number of expansion terms. We are also analyzing other types of approaches to
identify query expansion suggestions, such as applying Latent Semantic Analysis on the Desktop data. Finally, we are designing a set of more complex combinations of these metrics in order to provide enhanced adaptivity to our algorithms.

6. ACKNOWLEDGEMENTS

We thank Ricardo Baeza-Yates, Vassilis Plachouras, Carlos Castillo and Vanessa Murdock from Yahoo! for the interesting discussions about the experimental setup and the algorithms we presented. We are grateful to Fabrizio Silvestri from CNR and to Ronny Lempel from IBM for providing us the AltaVista query log. Finally, we thank our colleagues from L3S for participating in the time consuming experiments we performed, as well as the European Commission for its funding support (project Nepomuk, 6th Framework Programme, IST contract no. 027705).

7. REFERENCES

[1] J. Allan and H. Raghavan. Using part-of-speech patterns to reduce query ambiguity. In Proc. of the 25th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2002.
[2] P. G. Anick and S. Tipirneni. The paraphrase search assistant: Terminological feedback for iterative information seeking. In Proc. of the 22nd Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 1999.
[3] D. Carmel, E. Farchi, Y. Petruschka, and A. Soffer. Automatic query refinement using lexical affinities with maximal information gain. In Proc. of the 25th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 283-290, 2002.
[4] C. Carpineto, R. de Mori, G. Romano, and B. Bigi. An information-theoretic approach to automatic query expansion. ACM TOIS, 19(1):1-27, 2001.
[5] C.-H. Chang and C.-C. Hsu. Integrating query expansion and conceptual relevance feedback for personalized web information retrieval. In Proc. of the 7th Intl. Conf. on World Wide Web, 1998.
[6] P. A. Chirita, C. Firan, and W. Nejdl. Summarizing local context to personalize global web search. In Proc. of the 15th Intl. CIKM Conf. on Information and Knowledge Management, 2006.
[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Predicting query performance. In Proc. of the 25th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2002.
[8] H. Cui, J.-R. Wen, J.-Y. Nie, and W.-Y. Ma. Probabilistic query expansion using query logs. In Proc. of the 11th Intl. Conf. on World Wide Web, 2002.
[9] T. Dunning. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19:61-74, 1993.
[10] H. P. Edmundson. New methods in automatic extracting. Journal of the ACM, 16(2):264-285, 1969.
[11] E. N. Efthimiadis. User choices: A new yardstick for the evaluation of ranking algorithms for interactive query expansion. Information Processing and Management, 31(4):605-620, 1995.
[12] D. Fogaras and B. Racz. Scaling link based similarity search. In Proc. of the 14th Intl. World Wide Web Conf., 2005.
[13] T. Haveliwala. Topic-sensitive PageRank. In Proc. of the 11th Intl. World Wide Web Conf., Honolulu, Hawaii, May 2002.
[14] B. He and I. Ounis. Inferring query performance using pre-retrieval predictors. In Proc. of the 11th Intl. SPIRE Conf. on String Processing and Information Retrieval, 2004.
[15] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In Proc. of the 23rd Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2000.
[16] G. Jeh and J. Widom. Scaling personalized web search. In Proc. of the 12th Intl. World Wide Web Conf., 2003.
[17] M.-C. Kim and K.-S. Choi. A comparison of collocation-based similarity measures in query expansion. Information Processing and Management, 35(1):19-30, 1999.
[18] S.-B. Kim, H.-C. Seo, and H.-C. Rim. Information retrieval using word senses: root sense tagging approach. In Proc. of the 27th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2004.
[19] R. Kraft and J. Zien. Mining anchor text for query refinement. In Proc. of the 13th Intl. Conf. on World Wide Web, 2004.
[20] R. Krovetz and W. B. Croft. Lexical ambiguity and information retrieval. ACM Trans. Inf. Syst., 10(2), 1992.
[21] A. M. Lam-Adesina and G. J. F. Jones. Applying summarization techniques for term selection in relevance feedback. In Proc. of the 24th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2001.
[22] S. Liu, F. Liu, C. Yu, and W. Meng. An effective approach to document retrieval via utilizing WordNet and recognizing phrases. In Proc. of the 27th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2004.
[23] G. Miller. WordNet: An electronic lexical database. Communications of the ACM, 38(11):39-41, 1995.
[24] L. Nie, B. Davison, and X. Qi. Topical link analysis for web search. In Proc. of the 29th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2006.
[25] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Univ., 1998.
[26] F. Qiu and J. Cho. Automatic identification of user interest for personalized search. In Proc. of the 15th Intl. WWW Conf., 2006.
[27] Y. Qiu and H.-P. Frei. Concept based query expansion. In Proc. of the 16th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 1993.
[28] J. Rocchio. Relevance feedback in information retrieval. The Smart Retrieval System: Experiments in Automatic Document Processing, pages 313-323, 1971.
[29] I. Ruthven. Re-examining the potential effectiveness of interactive query expansion. In Proc. of the 26th Intl. ACM SIGIR Conf., 2003.
[30] T. Sarlos, A. A. Benczur, K. Csalogany, D. Fogaras, and B. Racz. To randomize or not to randomize: Space optimal summaries for hyperlink analysis. In Proc. of the 15th Intl. WWW Conf., 2006.
[31] C. Shah and W. B. Croft. Evaluating high accuracy retrieval techniques. In Proc. of the 27th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 2-9, 2004.
[32] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web search based on user profile constructed without any effort from users. In Proc. of the 13th Intl. World Wide Web Conf., 2004.
[33] D. Sullivan. The older you are, the more you want personalized search, 2004. http://searchenginewatch.com/searchday/article.php/3385131.
[34] J. Teevan, S. Dumais, and E. Horvitz. Personalizing search via automated analysis of interests and activities. In Proc. of the 28th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2005.
[35] E. Volokh. Personalization and privacy. Commun. ACM, 43(8), 2000.
[36] E. M. Voorhees. Query expansion using lexical-semantic relations. In Proc. of the 17th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 1994.
[37] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proc. of the 19th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 1996.
[38] S. Yu, D. Cai, J.-R. Wen, and W.-Y. Ma. Improving pseudo-relevance feedback in web information retrieval using web page segmentation. In Proc. of the 12th Intl. Conf. on World Wide Web, 2003.

Personalized Query Expansion for the Web

ABSTRACT

The inherent ambiguity of short keyword queries demands enhanced methods for Web retrieval. In this paper we propose to improve such Web queries by expanding them with terms collected from each user's Personal Information Repository, thus implicitly personalizing the search output. We introduce five broad techniques for generating the additional query keywords by analyzing user data at increasing granularity levels, ranging from term and compound level analysis up to global co-occurrence statistics, as well as to using external thesauri. Our extensive empirical analysis under four different scenarios shows some of these approaches to perform very well, especially on ambiguous queries, producing a very strong increase in the quality of the output rankings. Subsequently, we move this personalized search framework one step further and propose to make the expansion process adaptive to various features of each query. A separate set of experiments indicates the adaptive algorithms to bring an additional statistically significant improvement over the best static expansion approach.

1. INTRODUCTION

The booming popularity of search engines has made simple keyword search the only widely accepted user interface for seeking information over the Web. Yet keyword queries are [* Part of this work was performed while the author was visiting Yahoo!
Research, Barcelona, Spain.] inherently ambiguous. The query "canon book", for example, covers several different areas of interest: religion, photography, literature, and music. Clearly, one would prefer the search output to be aligned with the user's topic(s) of interest, rather than displaying a selection of popular URLs from each category. Studies have shown that more than 80% of users would prefer to receive such personalized search results [33] instead of the currently generic ones. Query expansion assists the user in formulating a better query, by appending additional keywords to the initial search request in order to encapsulate her interests therein, as well as to focus the Web search output accordingly. It has been shown to perform very well over large data sets, especially with short input queries (see for example [19, 3]). This is exactly the Web search scenario! In this paper we propose to enhance Web query reformulation by exploiting the user's Personal Information Repository (PIR), i.e., the personal collection of text documents, emails, cached Web pages, etc.
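At a high level, the proposed interaction can be sketched as the following pipeline. Everything here is a hypothetical stand-in: the index structure, the plain term-frequency scoring, and all names are our own; the concrete expansion techniques are only introduced in Section 3:

```python
from collections import Counter

def expand_query(query, pir_index, top_k=4):
    """Sketch of PIR-based expansion: score candidate terms from personal
    documents matching the query, keep the top-k, and append them.

    pir_index is a toy stand-in mapping a query string to the term lists of
    the matching personal documents; real systems would use a Desktop index.
    """
    candidates = Counter()
    for doc_terms in pir_index.get(query, []):
        # count only terms not already present in the query itself
        candidates.update(t for t in doc_terms if t not in query.split())
    expansion = [t for t, _ in candidates.most_common(top_k)]
    return (query + " " + " ".join(expansion)) if expansion else query
```

For a link-analysis researcher whose personal files discuss photography, the ambiguous query "canon book" would be expanded with photography-related terms, implicitly disambiguating it towards that user's interests.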
Several advantages arise when moving Web search personalization down to the Desktop level (note that by "Desktop" we refer to the PIR, and we use the two terms interchangeably). First is of course the quality of personalization: the local Desktop is a rich repository of information, accurately describing most, if not all, interests of the user. Second, as all "profile" information is stored and exploited locally, on the personal machine, another very important benefit is privacy. Search engines should not be able to learn about a person's interests, i.e., they should not be able to connect a specific person with the queries she issued or, worse, with the output URLs she clicked within the search interface (see Volokh [35] for a discussion of privacy issues related to personalized Web search). Our algorithms expand Web queries with keywords extracted from the user's PIR, thus implicitly personalizing the search output. After a discussion of previous work in Section 2, we first investigate the analysis of local Desktop query context in Section 3.1.1. We propose several keyword, expression, and summary based techniques for determining expansion terms from those personal documents matching the Web query best. In Section 3.1.2 we move our analysis to the global Desktop collection and investigate expansions based on co-occurrence metrics and external thesauri. The experiments presented in Section 3.2 show many of these approaches to perform very well, especially on ambiguous queries, producing NDCG [15] improvements of up to 51.28%. In Section 4 we take this algorithmic framework further and propose to make the expansion process adaptive to the clarity level of the query. This yields an additional improvement of 8.47% over the previously identified best algorithm. We conclude and discuss further work in Section 5.

2. PREVIOUS WORK

This paper brings together two IR areas: Search Personalization and Automatic Query Expansion. There exists a vast amount of
work in both areas.\nHowever, little has been done specifically aimed at combining them.\nIn this section we thus present a separate analysis, first introducing some approaches to personalizing search, as this represents the main goal of our research, and then discussing several query expansion techniques and their relationship to our algorithms.\n2.1 Personalized Search\nPersonalized search comprises two major components: (1) user profiles, and (2) the actual search algorithm.\nThis section splits the relevant background according to the focus of each article on one of these two elements.\nApproaches focused on the User Profile.\nSugiyama et al. [32] analyzed surfing behavior and generated user profiles as features (terms) of the visited pages.\nUpon issuing a new query, the search results were ranked based on the similarity between each URL and the user profile.\nQiu and Cho [26] used machine learning on the past click history of the user to determine topic preference vectors and then apply Topic-Sensitive PageRank [13].\nUser profiling based on browsing history has the advantage of being rather easy to obtain and process.\nThis is probably why it is also employed by several industrial search engines (e.g., Yahoo! MyWeb2).\nHowever, it is not sufficient for gaining a thorough insight into the user's interests.\nMoreover, it requires storing all personal information at the server side, which raises significant privacy concerns.\nOnly two other approaches have enhanced Web search using Desktop data, yet both used different core ideas: (1) Teevan et al. [34] modified the query term weights from the BM25 weighting scheme to incorporate user interests as captured by their Desktop indexes; (2) in Chirita et al.
[6], we focused on re-ranking the Web search output according to the cosine distance between each URL and a set of Desktop terms describing the user's interests.\nMoreover, neither of these investigated the adaptive application of personalization.\nApproaches focused on the Personalization Algorithm.\nBuilding the personalization aspect directly into PageRank [25] (i.e., by biasing it on a target set of pages) has received much attention recently.\nHaveliwala [13] computed a topic-oriented PageRank, in which 16 PageRank vectors biased on each of the main topics of the Open Directory were initially calculated off-line, and then combined at run-time based on the similarity between the user query and each of the 16 topics.\nMore recently, Nie et al. [24] modified the idea by distributing the PageRank of a page across the topics it contains in order to generate topic-oriented rankings.\nJeh and Widom [16] proposed an algorithm that avoids the massive resources needed for storing one Personalized PageRank Vector (PPV) per user by precomputing PPVs only for a small set of pages and then applying linear combination.\nAs the computation of PPVs for larger sets of pages was still quite expensive, several solutions have been investigated, the most important ones being those of Fogaras and Racz [12], and Sarlos et al.
[30], the latter using rounding and count-min sketching to quickly obtain sufficiently accurate approximations of the personalized scores.\n2.2 Automatic Query Expansion\nAutomatic query expansion aims at deriving a better formulation of the user query in order to enhance retrieval.\nIt is based on exploiting various social or collection-specific characteristics to generate additional terms, which are appended to the original input keywords before identifying the matching documents returned as output.\nIn this section we survey some representative query expansion works, grouped according to the source employed to generate the additional terms: (1) relevance feedback, (2) collection based co-occurrence statistics, and (3) thesaurus information.\nSome other approaches are also addressed at the end of the section.\nRelevance Feedback Techniques.\nThe main idea of Relevance Feedback (RF) is that useful information can be extracted from the relevant documents returned for the initial query.\nThe first approaches were manual [28], in the sense that the user was the one choosing the relevant results, after which various methods were applied to extract new terms related to the query and the selected documents.\nEfthimiadis [11] presented a comprehensive literature review and proposed several simple methods to extract such new keywords based on term frequency, document frequency, etc.\nWe used some of these as inspiration for our Desktop specific techniques.\nChang and Hsu [5] asked users to choose relevant clusters, instead of documents, thus reducing the amount of interaction necessary.\nRF has also been shown to be effectively automated by considering the top ranked documents as relevant [37] (this is known as Pseudo RF).\nLam and Jones [21] used summarization to extract informative sentences from the top-ranked documents, and appended them to the user query.\nCarpineto et al.
[4] maximized the divergence between the language model defined by the top retrieved documents and that defined by the entire collection.\nFinally, Yu et al. [38] selected the expansion terms from vision-based segments of Web pages in order to cope with the multiple topics residing therein.\nCo-occurrence Based Techniques.\nTerms highly co-occurring with the issued keywords have been shown to increase precision when appended to the query [17].\nMany statistical measures have been developed to assess \"term relationship\" levels, analyzing entire documents [27], lexical affinity relationships [3] (i.e., pairs of closely related words which contain exactly one of the initial query terms), etc.\nWe have also investigated three such approaches in order to identify query relevant keywords from the rich, yet rather complex, Personal Information Repository.\nThesaurus Based Techniques.\nA broadly explored method is to expand the user query with new terms whose meaning is closely related to the input keywords.\nSuch relationships are usually extracted from large scale thesauri, such as WordNet [23], in which various sets of synonyms, hypernyms, etc. are predefined.\nJust as for the co-occurrence methods, initial experiments with this approach were mixed, reporting either improvements or even reductions in output quality [36].\nMore recently, as the experimental collections grew larger and the employed algorithms became more complex, better results have been obtained [31, 18, 22].\nWe also use WordNet based expansion terms.\nHowever, we base this process on analyzing the Desktop level relationship between the original query and the proposed new keywords.\nOther Techniques.\nThere have been many other attempts to extract expansion terms.\nThough orthogonal to our approach, two works are very relevant for the Web environment: Cui et al.
[8] generated word correlations utilizing the probability for query terms to appear in each document, as computed over the search engine logs.\nKraft and Zien [19] showed that anchor text is very similar to user queries, and thus exploited it to acquire additional keywords.\n3.\nQUERY EXPANSION USING DESKTOP DATA\n3.1 Algorithms\n3.1.1 Expanding with Local Desktop Analysis\n3.1.2 Expanding with Global Desktop Analysis\n3.2 Experiments\n3.2.1 Experimental Setup\n3.2.2 Results\n4.\nINTRODUCING ADAPTIVITY\n4.1 Adaptivity Factors\n4.2 Experiments\n5.\nCONCLUSIONS AND FURTHER WORK\nIn this paper we proposed to expand Web search queries by exploiting the user's Personal Information Repository in order to automatically extract additional keywords related both to the query itself and to the user's interests, thus personalizing the search output.\nIn this context, the paper makes the following contributions: \u2022 We proposed five techniques for determining expansion terms from personal documents.\nEach of them produces additional query keywords by analyzing the user's Desktop at increasing granularity levels, ranging from term and expression level analysis up to global co-occurrence statistics and external thesauri.\nFigure 1: Relative NDCG gain (in %) for each algorithm overall, as well as separated per query category.\n\u2022 We provided a thorough empirical analysis of several variants of our approaches, under four different scenarios.\nWe showed some of these approaches to perform very well, producing NDCG improvements of up to 51.28%.\n\u2022 We took this personalized search framework further and proposed to make the expansion process adaptive to features of each query, with a strong focus on its clarity level.\n\u2022 Within a separate set of experiments, we showed our adaptive algorithms to provide an additional improvement of 8.47% over the previously identified best approach.\nWe are currently investigating the dependency between various query features
and the optimal number of expansion terms.\nWe are also analyzing other types of approaches to identify query expansion suggestions, such as applying Latent Semantic Analysis on the Desktop data.\nFinally, we are designing a set of more complex combinations of these metrics in order to provide enhanced adaptivity to our algorithms.","lvl-2":"Personalized Query Expansion for the Web\nABSTRACT\nThe inherent ambiguity of short keyword queries demands enhanced methods for Web retrieval.\nIn this paper we propose to improve such Web queries by expanding them with terms collected from each user's Personal Information Repository, thus implicitly personalizing the search output.\nWe introduce five broad techniques for generating the additional query keywords by analyzing user data at increasing granularity levels, ranging from term and compound level analysis up to global co-occurrence statistics and the use of external thesauri.\nOur extensive empirical analysis under four different scenarios shows some of these approaches to perform very well, especially on ambiguous queries, producing a very strong increase in the quality of the output rankings.\nSubsequently, we move this personalized search framework one step further and propose to make the expansion process adaptive to various features of each query.\nA separate set of experiments indicates that the adaptive algorithms bring an additional statistically significant improvement over the best static expansion approach.\n1.\nINTRODUCTION\nThe booming popularity of search engines has made simple keyword search the only widely accepted user interface for seeking information over the Web.\nYet keyword queries are * Part of this work was performed while the author was visiting Yahoo!
Research, Barcelona, Spain.\ninherently ambiguous.\n3.\nQUERY EXPANSION USING DESKTOP DATA\nDesktop data represents a very rich repository of profiling information.\nHowever, this information is largely unstructured, covering documents which are highly diverse in format, content, and even language characteristics.\nIn this section we first tackle this problem by proposing several lexical analysis algorithms which exploit the user's PIR to extract keyword expansion terms at various granularities, ranging from term frequency within Desktop documents up to global co-occurrence statistics over the personal information repository.\nThen, in the second part of the section, we empirically analyze the performance of each approach.\n3.1 Algorithms\nThis section presents the five generic approaches for analyzing the user's Desktop data in order to provide expansion terms for Web search.\nIn the proposed algorithms we gradually increase the amount of personal information utilized.\nThus, in the first part we investigate three local analysis techniques focused only on those Desktop documents matching the user's query best.\nWe append to the Web query the most relevant terms, compounds, and sentence summaries from these documents.\nIn the second part of the section we move towards a global Desktop analysis, proposing to investigate term co-occurrences, as well as thesauri, in the expansion process.\n3.1.1 Expanding with Local Desktop Analysis\nLocal Desktop Analysis is related to enhancing Pseudo Relevance Feedback to generate query expansion keywords from the PIR best hits for the user's Web query, rather than from the top ranked Web search results.\nWe distinguish three granularity levels for this process and we investigate each of them
separately.\nTerm and Document Frequency.\nAs the simplest possible measures, TF and DF have the advantage of being very fast to compute.\nPrevious experiments with small data sets have shown them to yield very good results [11].\nWe thus independently associate a score with each term, based on each of the two statistics.\nThe TF based score is obtained by multiplying the actual frequency of a term with a position score, which decreases as the first appearance of the term moves closer to the end of the document.\nThis is especially necessary for longer documents, because more informative terms tend to appear towards their beginning [10].\nThe complete TF based keyword extraction formula is as follows:\n$TermScore(t) = \left[ \frac{1}{2} + \frac{1}{2} \cdot \frac{nrWords - pos(t)}{nrWords} \right] \cdot TF(t)$\nwhere nrWords is the total number of terms in the document, pos(t) is the position of the first appearance of the term, and TF(t) represents the frequency of the term in the Desktop document matching the user's Web query.\nThe identification of suitable expansion terms is even simpler when using DF: given the set of Top-K relevant Desktop documents, generate their snippets, focused on the original search request.\nThis query orientation is necessary, since the DF scores are computed at the level of the entire PIR and would otherwise produce overly noisy suggestions.\nOnce the set of candidate terms has been identified, the selection proceeds by ordering them according to their associated DF scores.\nTies are resolved using the corresponding TF scores.\nNote that a hybrid TFxIDF approach is not necessarily effective, since a term might have a high DF on the Desktop while being quite rare on the Web.\nFor example, the term \"PageRank\" would be quite frequent on the Desktop of an IR scientist, thus receiving a low TFxIDF score.\nHowever, as it is rather rare on the Web, it would steer the query well towards the correct topic.\nLexical Compounds.\nAnick and Tipirneni [2] defined the lexical dispersion hypothesis, according to which an expression's lexical
dispersion (i.e., the number of different compounds it appears in within a document or group of documents) can be used to automatically identify key concepts over the input document set.\nAlthough several possible compound expressions are available, it has been shown that simple approaches based on noun analysis are almost as good as highly complex part-of-speech pattern identification algorithms [1].\nWe thus inspect the matching Desktop documents for all their lexical compounds of the following form:\n{ adjective? noun+ }\nAll such compounds can easily be generated off-line, at indexing time, for all the documents in the local repository.\nMoreover, once identified, they can be further sorted depending on their dispersion within each document, in order to facilitate fast retrieval of the most frequent compounds at run-time.\nSentence Selection.\nThis technique builds upon sentence oriented document summarization: first, the set of relevant Desktop documents is identified; then, a summary containing their most important sentences is generated as output.\nSentence selection is the most comprehensive local analysis approach, as it produces the most detailed expansions (i.e., sentences).\nIts downside is that, unlike with the first two algorithms, its output cannot be stored efficiently, and consequently it cannot be computed off-line.\nWe generate sentence based summaries by ranking the document sentences according to their salience score, as follows [21]:\n$SentenceScore = \frac{SW^2}{TW} + PS + \frac{TQ^2}{NQ}$\nThe first term is the ratio between the squared number of significant words within the sentence (SW) and the total number of words therein (TW).\nA word is significant in a document if its frequency is above the following threshold:\n$ms = \begin{cases} 7 - 0.1 \cdot (25 - NS), & \text{if } NS < 25 \\ 7, & \text{if } NS \in [25, 40] \\ 7 + 0.1 \cdot (NS - 40), & \text{if } NS > 40 \end{cases}$\nwith NS being the total number of sentences in the document (see [21] for details).\nThe second term is the position score PS, set to $(Avg(NS) - SentenceIndex) / Avg^2(NS)$ for the first ten sentences, and to 0 otherwise, Avg(NS) being the average number of sentences over all Desktop items.\nThis way, short
documents such as emails are not affected, which is correct, since they usually do not contain a summary in the very beginning.\nHowever, as longer documents usually do include overall descriptive sentences in the beginning [10], these sentences are more likely to be relevant.\nThe final term biases the summary towards the query.\nIt is the ratio between the squared number of query terms present in the sentence (TQ) and the total number of terms in the query (NQ).\nIt is based on the belief that the more query terms a sentence contains, the more likely it is to convey information highly related to the query.\n3.1.2 Expanding with Global Desktop Analysis\nIn contrast to the previously presented approach, global analysis relies on information from across the entire personal Desktop to infer the new relevant query terms.\nIn this section we propose two such techniques, namely term co-occurrence statistics and filtering the output of an external thesaurus.\nTerm Co-occurrence Statistics.\nFor each term, we can easily compute off-line those terms co-occurring with it most frequently in a given collection (i.e., the PIR in our case), and then exploit this information at run-time in order to infer keywords highly correlated with the user query.\nOur generic co-occurrence based query expansion algorithm is as follows:\nOff-line computation:\n1: Filter potential keywords k with DF \u2208 [10, 20% \u00b7 N]\n2: For each keyword ki:\n3: For each keyword kj:\n4: Compute SC(ki, kj), the similarity coefficient of (ki, kj)\nOn-line computation:\n1: Let S be the set of keywords potentially similar to an input expression E.
2: For each keyword k of E:\n3: S \u2190 S \u222a TSC(k), where TSC(k) contains the Top-K terms most similar to k\n4: For each term t of S:\n5a: Let Score(t) \u2190 $\prod_{k \in E} (0.01 + SC(t, k))$\n5b: Let Score(t) \u2190 #DesktopHits(E | t)\n6: Select the Top-K terms of S with the highest scores.\nThe off-line computation needs an initial trimming phase (step 1) for optimization purposes.\nIn addition, we also restricted the algorithm to computing co-occurrence levels across nouns only, as they contain by far the largest amount of conceptual information, and as this approach considerably reduces the size of the co-occurrence matrix.\nDuring the run-time phase, having already identified the terms most correlated with each particular query keyword, one more operation is necessary, namely calculating the correlation of every output term with the entire query.\nTwo approaches are possible: (1) using a product of the correlations between the term and all keywords in the original expression (step 5a), or (2) simply counting the number of documents in which the proposed term co-occurs with the entire user query (step 5b).\nWe considered the following formulas for Similarity Coefficients [17]:\n\u2022 Cosine Similarity, defined as $CS(x, y) = \frac{DF_{x,y}}{\sqrt{DF_x \cdot DF_y}}$;\n\u2022 Mutual Information, defined as $MI(x, y) = \log \frac{N \cdot DF_{x,y}}{DF_x \cdot DF_y}$;\n\u2022 Likelihood Ratio, defined in the paragraphs below.\n$DF_x$ is the Document Frequency of term x, and $DF_{x,y}$ is the number of documents containing both x and y. To further increase the quality of the generated scores, we limited the latter indicator to co-occurrences within a window of W terms.\nWe set W to be the same as the maximum number of expansion keywords desired.\nDunning's Likelihood Ratio \u03bb [9] is a co-occurrence based metric similar to \u03c7\u00b2.\nIt starts by attempting to reject the null hypothesis, according to which two terms A and B would appear in text independently of each other.\nThis means that P(A | B) = P(A | \u00acB) = P(A), where P(A | \u00acB) is the probability of observing term A when term B is not present.
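As a concrete illustration of this independence test, Dunning's $-2 \log \lambda$ statistic can be computed directly from a 2x2 table of document co-occurrence counts. The following is a minimal sketch; the function names and the toy counts are illustrative, not part of the original system:

```python
import math

def log_l(p, k, n):
    # log L(p, k, n) = k*log(p) + (n - k)*log(1 - p), with 0*log(0) taken as 0
    term1 = k * math.log(p) if k > 0 else 0.0
    term2 = (n - k) * math.log(1.0 - p) if n - k > 0 else 0.0
    return term1 + term2

def likelihood_ratio(o11, o12, o21, o22):
    """Dunning's -2*log(lambda) for terms A and B, given document counts:
    o11: docs with both A and B; o12: B without A;
    o21: A without B;            o22: neither."""
    k1, n1 = o11, o11 + o12              # A among documents containing B
    k2, n2 = o21, o21 + o22              # A among documents without B
    p1, p2 = k1 / n1, k2 / n2            # per-hypothesis maximum-likelihood estimates
    p = (k1 + k2) / (n1 + n2)            # pooled estimate under the null hypothesis
    return 2.0 * (log_l(p1, k1, n1) + log_l(p2, k2, n2)
                  - log_l(p, k1, n1) - log_l(p, k2, n2))
```

Independent pairs (where P(A | B) = P(A | \u00acB)) score near zero, while strongly associated pairs receive large positive scores.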
Consequently, the test for independence of A and B can be performed by checking whether the distribution of A given that B is present is the same as the distribution of A given that B is not present.\nOf course, in reality we know these terms are not independent in text, and we only use the statistical metric to highlight terms which frequently appear together.\nWe compare the two binomial processes by using likelihood ratios of their associated hypotheses.\nFirst, let us define the likelihood ratio for one hypothesis:\n$\lambda = \frac{\max_{\omega \in \Omega_0} H(\omega; k)}{\max_{\omega \in \Omega} H(\omega; k)}$\nwhere $\omega$ is a point in the parameter space $\Omega$, $\Omega_0$ is the particular hypothesis being tested, and k is a point in the space of observations K.\nIf we assume that the two binomial distributions have the same underlying parameter, i.e., $\Omega_0 = \{(p_1, p_2) \mid p_1 = p_2\}$, we can write:\n$\lambda = \frac{\max_p H(p, p; k_1, k_2, n_1, n_2)}{\max_{p_1, p_2} H(p_1, p_2; k_1, k_2, n_1, n_2)}$\nwhere $H(p_1, p_2; k_1, k_2, n_1, n_2) = p_1^{k_1} (1 - p_1)^{n_1 - k_1} \cdot p_2^{k_2} (1 - p_2)^{n_2 - k_2}$.\nSince the maxima are obtained with $p_1 = k_1 / n_1$, $p_2 = k_2 / n_2$, and $p = (k_1 + k_2) / (n_1 + n_2)$, we have:\n$\lambda = \frac{L(p, k_1, n_1) \cdot L(p, k_2, n_2)}{L(p_1, k_1, n_1) \cdot L(p_2, k_2, n_2)}$\nwhere $L(p, k, n) = p^k (1 - p)^{n - k}$.\nTaking the logarithm of the likelihood, we obtain:\n$-2 \log \lambda = 2 \cdot [\log L(p_1, k_1, n_1) + \log L(p_2, k_2, n_2) - \log L(p, k_1, n_1) - \log L(p, k_2, n_2)]$\nwhere $\log L(p, k, n) = k \log p + (n - k) \log (1 - p)$.\nFinally, if we write $O_{11} = \#(A \wedge B)$, $O_{12} = \#(\neg A \wedge B)$, $O_{21} = \#(A \wedge \neg B)$, and $O_{22} = \#(\neg A \wedge \neg B)$ for the document co-occurrence counts, then the co-occurrence likelihood of terms A and B is obtained by setting $k_1 = O_{11}$, $n_1 = O_{11} + O_{12}$, $k_2 = O_{21}$, $n_2 = O_{21} + O_{22}$, and $p = \frac{k_1 + k_2}{n_1 + n_2}$.\nThesaurus Based Expansion.\nLarge scale thesauri encapsulate global knowledge about term relationships.\nThus, we first identify the set of terms closely related to each query keyword, and then we calculate the Desktop co-occurrence level of each of these possible expansion terms with the entire initial search request.\nIn the end, those suggestions with the highest frequencies are kept.\nThe algorithm is as follows:\nAlgorithm 3.1.2.2.\nFiltered thesaurus based query expansion.\n1: For each keyword k of an input query Q:\n2: Select the following sets of related terms using WordNet:\n2a: Syn: all synonyms\n2b: Sub: all sub-concepts residing one level below k\n2c: Super: all super-concepts residing one level above k\n3: For each set Si of the above mentioned sets:\n4: For
each term t of Si:\n5: Search the PIR with (Q | t), i.e., the original query, as expanded with t\n6: Let H be the number of hits of the above search (i.e., the co-occurrence level of t with Q)\n7: Return the Top-K terms, as ordered by their H values.\nWe observe three types of term relationships (steps 2a-2c): (1) synonyms; (2) sub-concepts, namely hyponyms (i.e., sub-classes) and meronyms (i.e., sub-parts); and (3) super-concepts, namely hypernyms (i.e., super-classes) and holonyms (i.e., super-parts).\nAs they represent quite different types of association, we investigated them separately.\nWe limited the output expansion set (step 7) to contain only terms appearing at least T times on the Desktop, in order to avoid noisy suggestions, with T = min(N / DocsPerTopic, MinDocs), N being the number of indexed Desktop documents.\nWe set DocsPerTopic = 2,500 and MinDocs = 5, the latter coping with the case of small PIRs.\n3.2 Experiments\n3.2.1 Experimental Setup\nWe evaluated our algorithms with 18 subjects (Ph.D. and PostDoc students in different areas of computer science and education).\nFirst, they installed our Lucene based search engine (clearly, if one had already installed a Desktop search application, then this overhead would not be present) and indexed all their locally stored content: files within user selected paths, emails, and Web cache.\nWithout loss of generality, we focused the experiments on single-user machines.\nThen, they chose 4 queries related to their everyday activities, as follows:\n\u2022 One very frequent AltaVista query, as extracted from the top 2% queries most issued to the search engine within a 7.2 million entries log from October 2001.\nIn order to connect such a query to each user's interests, we added an off-line preprocessing phase: we generated the most frequent search requests and then randomly selected a query with at least 10 hits on each subject's Desktop.\nTo further ensure a real life scenario, users were allowed to reject the proposed query and ask for a new one, if they
considered it totally outside their interest areas.\n\u2022 One randomly selected log query, filtered using the same procedure as above.\n\u2022 One self-selected specific query, which they thought to have only one meaning.\n\u2022 One self-selected ambiguous query, which they thought to have at least three meanings.\nThe average query lengths were 2.0 and 2.3 terms for the log queries, and 2.9 and 1.8 terms for the self-selected ones.\nEven though our algorithms are mainly intended to enhance search when using ambiguous query keywords, we chose to investigate their performance on a wide span of query types, in order to see how they perform in all situations.\nThe log queries evaluate real life requests, in contrast to the self-selected ones, which rather target the identification of top and bottom performances.\nNote that the former were somewhat farther from each subject's interests, thus also being more difficult to personalize for.\nTo gain an insight into the relationship between each query type and user interests, we asked each person to rate the query itself with a score of 1 to 5, with the following interpretations: (1) never heard of it, (2) do not know it, but heard of it, (3) know it partially, (4) know it well, (5) major interest.\nThe obtained grades were 3.11 for the top log queries, 3.72 for the randomly selected ones, 4.45 for the self-selected specific ones, and 4.39 for the self-selected ambiguous ones.\nFor each query, we collected the Top-5 URLs generated by the 20 versions of the algorithms presented in Section 3.1.\nThese results were then shuffled into one set, usually containing between 70 and 90 URLs.\nThus, each subject had to assess about 325 documents over all four queries, being aware of neither the algorithm behind each result, nor its ranking.\nOverall, 72 queries were issued and over 6,000 URLs were evaluated during the experiment.\nFor each of these URLs, the testers had to give a rating ranging from 0 to 2, dividing the
relevant results in two categories: (1) relevant and (2) highly relevant.\nFinally, the quality of each ranking was assessed using the normalized version of Discounted Cumulative Gain (DCG) [15].\nDCG is a rich measure, as it gives more weight to highly ranked documents, while also incorporating different relevance levels by giving them different gain values:\n$DCG(i) = \begin{cases} G(1), & \text{if } i = 1 \\ DCG(i - 1) + G(i) / \log_2 i, & \text{otherwise} \end{cases}$\nWe used G(i) = 1 for relevant results, and G(i) = 2 for highly relevant ones.\nAs queries having more relevant output documents will have a higher DCG, we also normalized its value to a score between 0 (the worst possible DCG given the ratings) and 1 (the best possible DCG given the ratings) to facilitate averaging over queries.\nAll results were tested for statistical significance using T-tests.\nAlgorithmic specific aspects.\nThe main parameter of our algorithms is the number of generated expansion keywords.\nFor this experiment we set it to 4 terms for all techniques, leaving an analysis of this parameter for a subsequent investigation.\nIn order to optimize the run-time computation speed, we chose to limit the number of output keywords per Desktop document to the number of expansion keywords desired (i.e., four).\nFor all algorithms we also investigated stricter limits.\nThis allowed us to observe that the Lexical Compounds method would perform better if only at most one compound per document were selected.\nWe therefore chose to experiment with this new approach as well.\nFor all other techniques, considering fewer than four terms per document did not seem to consistently yield any additional qualitative gain.\nWe labeled the algorithms we evaluated as follows:\n0. Google: the actual Google query output, as returned by the Google API;\n1. TF, DF: Term and Document Frequency;\n2. LC, LC [O]: regular and optimized (by considering only one top compound per document) Lexical Compounds;\n3. SS: Sentence Selection;\n4. TC [CS], TC [MI], TC [LR]: Term Co-occurrence Statistics using Cosine Similarity, Mutual Information, and Likelihood Ratio, respectively, as similarity coefficients;\n5. WN [SYN], WN [SUB], WN [SUP]: WordNet based expansion with synonyms, sub-concepts, and super-concepts, respectively.\nExcept for the thesaurus based expansion, in all cases we also investigated the performance of our algorithms when exploiting only the Web browser cache to represent the user's personal information.\nThis is motivated by the fact that other personal documents, such as emails, are known to use a somewhat different language than that residing on the World Wide Web [34].\nHowever, as this approach performed visibly poorer than using the entire Desktop data, we omitted it from the subsequent analysis.\n3.2.2 Results\nLog Queries.\nWe evaluated all variants of our algorithms using NDCG.\nFor log queries, the best performance was achieved with TF, LC [O], and TC [LR].\nThe improvements they brought were up to 5.2% for top queries (p = 0.14) and 13.8% for randomly selected queries (p = 0.01, statistically significant), both obtained with LC [O].\nA summary of all results is depicted in Table 1.\nBoth TF and LC [O] yielded very good results, indicating that simple keyword and expression oriented approaches might be sufficient for the Desktop based query expansion task.\nLC [O] was much better than LC, improving upon its quality by up to 25.8% in the case of randomly selected log queries, an improvement which was also significant with p = 0.04.\nThus, a selection of compounds spanning several Desktop documents is more informative about the user's interests than the general approach, in which there is no restriction on the number of compounds produced from each personal item.\nThe more complex Desktop oriented approaches, namely sentence selection and all term co-occurrence based algorithms, showed a rather average performance, with no visible improvements, except for TC [LR].\nAlso, the thesaurus based expansion usually produced very few suggestions, possibly because
of the many technical queries employed by our subjects.\nWe observed however that expanding with sub-concepts is very good for everyday life terms (e.g., \"car\"), whereas the use of super-concepts is valuable for compounds having at least one term with low technicality (e.g., \"document clustering\").\nAs expected, the synonym based expansion performed generally well, though in some very technical cases it yielded rather general suggestions.\nTable 1: Normalized Discounted Cumulative Gain at the first 5 results when searching for top (left) and random (right) log queries.\nTable 2: Normalized Discounted Cumulative Gain at the first 5 results when searching for user selected clear (left) and ambiguous (right) queries.\nFinally, we noticed Google to be very optimized for some top frequent queries.\nHowever, even within this harder scenario, some of our personalization algorithms (i.e., TF and LC [O]) produced statistically significant improvements over regular search.\nSelf-selected Queries.\nThe NDCG values obtained with self-selected queries are depicted in Table 2.\nWhile our algorithms did not enhance Google for the clear search tasks, they did produce strong improvements of up to 52.9% (which were of course also highly significant, with p \u226a 0.01) when utilized with ambiguous queries.\nIn fact, almost all our algorithms resulted in statistically significant improvements over Google for this query type.\nIn general, the relative differences between our algorithms were similar to those observed for the log based queries.\nAs in the previous analysis, the simple Desktop based Term Frequency and Lexical Compounds metrics performed best.\nNevertheless, a very good outcome was also obtained for Desktop based sentence selection and all term co-occurrence metrics.\nThere were no visible differences between the behavior of the three different approaches to co-occurrence calculation.\nFinally, for the case of clear queries, we noticed that fewer expansion terms than 4 might
be less noisy and thus helpful in bringing further improvements.\nWe thus pursued this idea with the adaptive algorithms presented in the next section.\n4.\nINTRODUCING ADAPTIVITY\nIn the previous section we investigated the behavior of each technique when adding a fixed number of keywords to the user query.\nHowever, an optimal personalized query expansion algorithm should automatically adapt itself to various aspects of each query, as well as to the particularities of the person using it.\nIn this section we discuss the factors influencing the behavior of our expansion algorithms, which might be used as input for the adaptivity process.\nThen, in the second part we present some initial experiments with one of them, namely query clarity.\n4.1 Adaptivity Factors\nSeveral indicators could assist the algorithm to automatically tune the number of expansion terms.\nWe start by discussing adaptation based on the query clarity level.\nThen, we briefly introduce an approach to model the generic query formulation process in order to automatically tailor the search algorithm, and discuss some other possible factors that might be of use for this task.\nQuery Clarity.\nInterest in analyzing query difficulty has increased only recently, and there are not many papers addressing this topic.\nYet it has long been known that query disambiguation has a high potential of improving retrieval effectiveness for low recall searches with very short queries [20], which is exactly our targeted scenario.\nAlso, the success of IR systems clearly varies across different topics.\nWe thus propose to use an estimate of the query clarity level in order to automatically tune the amount of personalization fed into the algorithm.\nThe following metrics are available:\n\u2022 The Query Length is expressed simply by the number of words in the user query.\nThis solution is rather ineffective, as reported by He and Ounis [14].\n\u2022 The Query Scope relates to the IDF of the entire query.\nThis metric performs well when used with document collections covering a single topic, but poorly otherwise [7, 14].\n\u2022 The Query Clarity [7] seems to be the best, as well as the most applied, technique so far.\nIt measures the divergence between the language model associated with the user query and the language model associated with the collection.\nIn a simplified version (i.e., without smoothing over the terms which are not present in the query), it can be expressed as follows:\n$Clarity = \sum_{w \in Query} P_{ml}(w \mid Query) \cdot \log \frac{P_{ml}(w \mid Query)}{P_{coll}(w)}$\nwhere $P_{ml}(w \mid Query)$ is the probability of the word w within the submitted query, and $P_{coll}(w)$ is the probability of w within the entire collection of documents.\nOther solutions exist, but we consider them too computationally expensive for the huge amounts of data that need to be processed within Web applications.\nWe thus decided to investigate only C1 (query scope) and C2 (query clarity).\nFirst, we analyzed their performance over a large set of queries and split their clarity predictions into three categories:\n\u2022 Small Scope \/ Clear Query: C1 \u2208 [0, 12], C2 \u2208 [4, \u221e).\n\u2022 Medium Scope \/ Semi-Ambiguous Query: C1 \u2208 [12, 17), C2 \u2208 [2.5, 4).\n\u2022 Large Scope \/ Ambiguous Query: C1 \u2208 [17, \u221e), C2 \u2208 [0, 2.5].\nIn order to limit the number of experiments, we analyzed only the results produced when employing C1 for the PIR and C2 for the Web.\nAs algorithmic basis we used LC [O], i.e., optimized lexical compounds, which was clearly the winning method in the previous analysis.\nAs manual investigation showed it to slightly overfit the expansion terms for clear queries, we utilized a substitute for this particular case.\nTwo candidates were considered: (1) TF, i.e., the second best approach, and (2) WN [SYN], as we observed that its first and second expansion terms were often very good.\nTable 3: Adaptive Personalized Query Expansion.\nGiven the algorithms and clarity measures, we implemented the adaptivity procedure by tailoring the number of expansion terms added to the original query as a function
of its ambiguity on the Web, as well as within the user's PIR.\nNote that the ambiguity level is related to the number of documents covering a certain query.\nThus, to some extent, it has different meanings on the Web and within PIRs.\nWhile a query deemed ambiguous on a large collection such as the Web will very likely indeed have a large number of meanings, this may not be the case for the Desktop.\nTake for example the query \"PageRank\".\nIf the user is a link analysis expert, many of her documents might match this term, and thus the query would be classified as ambiguous.\nHowever, when analyzed against the Web, this is definitely a clear query.\nConsequently, we employed more additional terms when the query was more ambiguous on the Web, but also on the Desktop.\nPut another way, queries deemed clear on the Desktop were inherently not well covered within the user's PIR, and thus had fewer keywords appended to them.\nThe number of expansion terms we utilized for each combination of scope and clarity levels is depicted in Table 3.\nQuery Formulation Process.\nInteractive query expansion has a high potential for enhancing search [29].\nWe believe that modeling its underlying process would be very helpful in producing qualitative adaptive Web search algorithms.\nFor example, when the user adds a new term to her previously issued query, she is basically reformulating her original request.\nThus, the newly added terms are more likely to convey information about her search goals.\nFor a general, non personalized retrieval engine, this could correspond to giving more weight to these new keywords.\nWithin our personalized scenario, the generated expansions can similarly be biased towards these terms.\nNevertheless, more investigations are necessary in order to solve the challenges posed by this approach.\nOther Features.\nThe idea of adapting the retrieval process to various aspects of the query, of the user, and even of the employed algorithm has received only
little attention in the literature.\nOnly some approaches have been investigated, usually indirectly.\nThere exist studies of query behaviors at different times of day, or of the topics spanned by the queries of various classes of users, etc.\nHowever, they generally do not discuss how these features can actually be incorporated in the search process itself, and they have almost never been related to the task of Web personalization.\n4.2 Experiments\nWe used exactly the same experimental setup as for our previous analysis, with two log-based queries and two self-selected ones (all different from before, in order to make sure there is no bias on the new approaches), evaluated with NDCG over the Top-5 results output by each algorithm.\nThe newly proposed adaptive personalized query expansion algorithms are denoted as A [LCO\/TF] for the approach using TF with the clear Desktop queries, and as A [LCO\/WN] when WN [SYN] was utilized instead of TF.\nThe overall results were at least similar to, or better than, Google for all kinds of log queries (see Table 4).\nTable 4: Normalized Discounted Cumulative Gain at the first 5 results when using our adaptive personalized search algorithms on top (left) and random (right) log queries.\nTable 5: Normalized Discounted Cumulative Gain at the first 5 results when using our adaptive personalized search algorithms on user selected clear (left) and ambiguous (right) queries.\nFor top frequent queries, both adaptive algorithms, A [LCO\/TF] and A [LCO\/WN], improve by 10.8% and 7.9% respectively, both differences being statistically significant with P \u2264 0.01.\nThey also achieve an improvement of up to 6.62% over the best performing static algorithm, LC [O] (P = 0.07).\nFor randomly selected queries, even though A [LCO\/TF] yields significantly better results than Google (P = 0.04), both adaptive approaches fall behind the static algorithms.\nThe major reason seems to be the imperfect selection of the number of expansion
terms, as a function of query clarity.\nThus, more experiments are needed in order to determine the optimal number of generated expansion keywords as a function of the query ambiguity level.\nThe analysis of the self-selected queries shows that adaptivity can bring even further improvements to Web search personalization (see Table 5).\nFor ambiguous queries, the scores given to Google search are enhanced by 40.6% through A [LCO\/TF] and by 35.2% through A [LCO\/WN], both strongly significant with P \u226a 0.01.\nAdaptivity also brings another 8.9% improvement over the static personalization of LC [O] (P = 0.05).\nEven for clear queries, the newly proposed flexible algorithms perform slightly better, improving by 0.4% and 1.0% respectively.\nAll results are depicted graphically in Figure 1.\nWe notice that A [LCO\/TF] is the overall best algorithm, performing better than Google for all types of queries, either extracted from the search engine log, or self-selected.\nThe experiments presented in this section clearly confirm that adaptivity is a necessary further step in Web search personalization.\n5.\nCONCLUSIONS AND FURTHER WORK\nIn this paper we proposed to expand Web search queries by exploiting the user's Personal Information Repository in order to automatically extract additional keywords related both to the query itself and to the user's interests, thus personalizing the search output.\nIn this context, the paper includes the following contributions:\n\u2022 We proposed five techniques for determining expansion terms from personal documents.\nEach of them produces additional query keywords by analyzing the user's Desktop at increasing granularity levels, ranging from term and expression level analysis up to global co-occurrence statistics and external thesauri.\nFigure 1: Relative NDCG gain (in %) for each algorithm overall, as well as separated per query category.\n\u2022 We provided a thorough empirical analysis of several variants of our approaches, under four
different scenarios.\nWe showed some of these approaches to perform very well, producing NDCG improvements of up to 51.28%.\n\u2022 We moved this personalized search framework further and proposed to make the expansion process adaptive to features of each query, a strong focus being put on its clarity level.\n\u2022 Within a separate set of experiments, we showed our adaptive\nalgorithms to provide an additional improvement of 8.47% over the previously identified best approach.\nWe are currently performing investigations on the dependency between various query features and the optimal number of expansion terms.\nWe are also analyzing other types of approaches to identify query expansion suggestions, such as applying Latent Semantic Analysis on the Desktop data.\nFinally, we are designing a set of more complex combinations of these metrics in order to provide enhanced adaptivity to our algorithms.","keyphrases":["queri expans","short keyword queri","web retriev","web queri","person inform repositori","search output","addit queri keyword","granular level","term and compound level analysi","global co-occurr statist","extens empir analysi","ambigu queri","qualiti","output rank","person search framework","expans process","variou featur of each queri","adapt algorithm","signific improv","static expans approach","extern thesauru","person web search","desktop profil","keyword extract","keyword co-occurr"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","U","M","R"]} {"id":"J-21","title":"A Strategic Model for Information Markets","abstract":"Information markets, which are designed specifically to aggregate traders' information, are becoming increasingly popular as a means for predicting future events. Recent research in information markets has resulted in two new designs, market scoring rules and dynamic parimutuel markets. We develop an analytic method to guide the design and strategic analysis of information markets. 
Our central contribution is a new abstract betting game, the projection game, that serves as a useful model for information markets. We demonstrate that this game can serve as a strategic model of dynamic parimutuel markets, and also captures the essence of the strategies in market scoring rules. The projection game is tractable to analyze, and has an attractive geometric visualization that makes the strategic moves and interactions more transparent. We use it to prove several strategic properties about the dynamic parimutuel market. We also prove that a special form of the projection game is strategically equivalent to the spherical scoring rule, and it is strategically similar to other scoring rules. Finally, we illustrate two applications of the model to analysis of complex strategic scenarios: we analyze the precision of a market in which traders have inertia, and a market in which a trader can profit by manipulating another trader's beliefs.","lvl-1":"A Strategic Model for Information Markets Evdokia Nikolova\u2217 MIT CSAIL Cambridge, MA nikolova@mit.edu Rahul Sami University of Michigan School of Information rsami@umich.edu ABSTRACT Information markets, which are designed specifically to aggregate traders' information, are becoming increasingly popular as a means for predicting future events.\nRecent research in information markets has resulted in two new designs, market scoring rules and dynamic parimutuel markets.\nWe develop an analytic method to guide the design and strategic analysis of information markets.\nOur central contribution is a new abstract betting game, the projection game, that serves as a useful model for information markets.\nWe demonstrate that this game can serve as a strategic model of dynamic parimutuel markets, and also captures the essence of the strategies in market scoring rules.\nThe projection game is tractable to analyze, and has an attractive geometric visualization that makes the strategic moves and interactions more
transparent.\nWe use it to prove several strategic properties about the dynamic parimutuel market.\nWe also prove that a special form of the projection game is strategically equivalent to the spherical scoring rule, and it is strategically similar to other scoring rules.\nFinally, we illustrate two applications of the model to analysis of complex strategic scenarios: we analyze the precision of a market in which traders have inertia, and a market in which a trader can profit by manipulating another trader's beliefs.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Economics, Theory 1.\nINTRODUCTION Markets have long been used as a medium for trade.\nAs a side effect of trade, the participants in a market reveal something about their preferences and beliefs.\nFor example, in a financial market, agents would buy shares which they think are undervalued, and sell shares which they think are overvalued.\nIt has long been observed that, because the market price is influenced by all the trades taking place, it aggregates the private information of all the traders.\nThus, in a situation in which future events are uncertain, and each trader might have a little information, the aggregated information contained in the market prices can be used to predict future events.\nThis has motivated the creation of information markets, which are mechanisms for aggregating the traders' information about an uncertain event.\nInformation markets can be modeled as a game in which the participants bet on a number of possible outcomes, such as the results of a presidential election, by buying shares of the outcomes and receiving payoffs when the outcome is realized.\nAs in financial markets, the participants aim to maximize their profit by buying low and selling high.\nIn this way, the players' behavior transmits their personal information and beliefs about the possible outcomes, and can be used to predict the event
more accurately.\nThe benefit of well-designed information markets goes beyond information aggregation; they can also be used as a hedging instrument, to allow traders to insure against risk.\nRecently, researchers have turned to the problem of designing market structures specifically to achieve better information aggregation properties than traditional markets.\nTwo designs for information markets have been proposed: the Dynamic Parimutuel Market (DPM) by Pennock [10] and the Market Scoring Rules (MSR) by Hanson [6].\nBoth the DPM and the MSR were designed with the goal of giving informed traders an incentive to trade, and to reveal their information as soon as possible, while also controlling the subsidy that the market designer needs to pump into the market.\nThe DPM was created as a combination of a pari-mutuel market (which is commonly used for betting on horses) and a continuous double auction, in order to simultaneously obtain the first one's infinite buy-in liquidity and the latter's ability to react continuously to new information.\nOne version of the DPM was implemented in the Yahoo!
Buzz market [8] to experimentally test the market's prediction properties.\nThe foundations of the MSR lie in the idea of a proper scoring rule, which is a technique to reward forecasters in a way that encourages them to give their best prediction.\nThe innovation in the MSR is to use these scoring rules as instruments that can be traded, thus providing traders who have new information an incentive to trade.\nThe MSR was to be used in a policy analysis market in the Middle East [15], which was subsequently withdrawn.\nInformation markets rely on informed traders trading for their own profit, so it is critical to understand the strategic properties of these markets.\nThis is not an easy task, because markets are complex, and traders can influence each other's beliefs through their trades, and hence, can potentially achieve long term gains by manipulating the market.\nFor the MSR, it has been shown that, if we exclude the possibility of achieving gain through misleading other traders, it is optimal for each trader to honestly reflect her private belief in her trades.\nFor the DPM, we are not aware of any prior strategic analysis of this nature; in fact, a strategic hole was discovered while testing the DPM in the Yahoo!
Buzz market [8].\n1.1 Our Results In this paper, we seek to develop an analytic method to guide the design and strategic analysis of information markets.\nOur central contribution is a new abstract betting game, the projection game, that serves as a useful model for information markets.\nThe projection game is conceptually simpler than the MSR and DPM, and thus it is easier to analyze.\nIn addition it has an attractive geometric visualization, which makes the strategic moves and interactions more transparent.\nWe present an analysis of the optimal strategies and profits in this game.\nWe then undertake an analysis of traders' costs and profits in the dynamic parimutuel market.\nRemarkably, we find that the cost of a sequence of trades in the DPM is identical to the cost of the corresponding moves in the projection game.\nFurther, if we assume that the traders' beliefs at the end of trading match the true probability of the event being predicted, the traders' payoffs and profits in the DPM are identical to their payoffs and profits in a corresponding projection game.\nWe use the equivalence between the DPM and the projection game to prove that the DPM is arbitrage-free, deduce profitable strategies in the DPM, and demonstrate that constraints on the agents' trades are necessary to prevent a strategic breakdown.\nWe also prove an equivalence between the projection game and the MSR: We show that play in the MSR is strategically equivalent to play in a restricted projection game, at least for myopic strategies and small trades.\nIn particular, the profitability of any move under the spherical scoring rule is exactly proportional to the profitability of the corresponding move in the projection game restricted to a circle, with slight distortion of the prior probabilities.\nThis allows us to use the projection game as a conceptual model for market scoring rules.\nWe note that while the MSR with the spherical scoring rule somewhat resembles the projection game, due to
the mathematical similarity of their profit expressions, the DPM model is markedly different and thus its equivalence to the projection game is especially striking.\nFurther, because the restricted projection game corresponds to a DPM with a natural trading constraint, this sheds light on an intriguing connection between the MSR and the DPM.\n(In an earlier version of this paper, we called this the segment game.)\nLastly, we illustrate how the projection game model can be used to analyze the potential for manipulation of information markets for long-term gain.\nWe present an example scenario in which such manipulation can occur, and suggest additional rules that might mitigate the possibility of manipulation.\nWe also illustrate another application to analyzing how a market maker can improve the prediction accuracy of a market in which traders will not trade unless their expected profit is above a threshold.\n1.2 Related Work Numerous studies have demonstrated empirically that market prices are good predictors of future events, and seem to aggregate the collected wisdom of all the traders [2, 3, 12, 1, 5, 16].\nThis effect has also been demonstrated in laboratory studies [13, 14], and has theoretical support in the literature of rational expectations [9].\nA number of recent studies have addressed the design of the market structure and trading rules for information markets, as well as the incentive to participate and other strategic issues.\nThe two papers most closely related to our work are the papers by Hanson [6] and Pennock [10].\nHowever, strategic issues in information markets have also been studied by Mangold et al.
[8] and by Hanson, Oprea and Porter [7].\nAn upcoming survey paper [11] discusses cost-function formulations of automated market makers.\nOrganization of the paper The rest of this paper is organized as follows: In Section 2, we describe the projection game, and analyze the players' costs, profits, and optimal strategies in this game.\nIn Section 3, we study the dynamic parimutuel market, and show that trade in a DPM is equivalent to a projection game.\nWe establish a connection between the projection game and the MSR in Section 4.\nIn Section 5, we illustrate how the projection game can be used to analyze non-myopic, and potentially manipulative, actions.\nWe present our conclusions, and suggestions for future work, in Section 6.\n2.\nTHE PROJECTION GAME In this section, we describe an abstract betting game, the projection game; in the following sections, we will argue that both the MSR and the DPM are strategically similar to the projection game.\nThe projection game is conceptually simpler than MSR and DPM, and hence should prove easier to analyze.\nFor clarity of exposition, here and in the rest of the paper we assume the space is two dimensional, i.e., there are only two possible events.\nOur results easily generalize to more than two dimensions.\nWe also assume throughout that players are risk-neutral.\nSuppose there are two mutually exclusive and exhaustive events, A and B. (In other words, B is the same as not A.) There are n agents who may have information about the likelihood of A and B, and we (the designers) would like to aggregate their information.\nWe invite them to play the game described below: At any point in the game, there is a current state described by a pair of parameters, (x, y), which we sometimes write in vector form as x.
(Footnote: Here, we are referring only to manipulation of the information market for later gain from the market itself; we do not consider the possibility of traders having vested interests in the underlying events.)\nIntuitively, x corresponds to the total holding of shares in A, and y corresponds to the holding of shares in B.\nIn each move of the game, one player (say i) plays an arrow (or segment) from (x, y) to (x', y').\nWe use the notation [(x, y) -> (x', y')] or [x, x'] to denote this move.\nThe game starts at (0, 0), but the market maker makes the first move; without loss of generality, we can assume the move is to (1, 1).\nAll subsequent moves are made by players, in an arbitrary (and potentially repeating) sequence.\nEach move has a cost associated with it, given by C[x, x'] = |x'| - |x|, where |.| denotes the Euclidean norm, |x| = sqrt(x^2 + y^2).\nNote that none of the variables are constrained to be nonnegative, and hence, the cost of a move can be negative.\nThe cost can be expressed in an alternative form, that is also useful.\nSuppose player i moves from (x, y) to (x', y').\nWe can write (x', y') as (x + l*e_x, y + l*e_y), such that l >= 0 and e_x^2 + e_y^2 = 1.\nWe call l the volume of the move, and (e_x, e_y) the direction of the move.\nAt any point x̂ = (x̂, ŷ), there is an instantaneous price charged, defined as follows: c(x̂, e) = (x̂*e_x + ŷ*e_y)/|(x̂, ŷ)| = x̂·e/|x̂|.\nNote that the price depends only on the angle between the vector (x̂, ŷ) and the segment [(x, y), (x', y')], and not the lengths.\nThe total cost of the move is the price integrated over the segment [(x, y) -> (x', y')], i.e., C[(x, y) -> (x', y')] = integral_{w=0}^{l} c((x + w*e_x, y + w*e_y), (e_x, e_y)) dw.\nWe assume that the game terminates after a finite number of moves.\nAt the end of the game, the true probability p of event A is determined, and the agents receive payoffs for the moves they made.\nLet q = (q_x, q_y) = (p, 1-p)/|(p, 1-p)|.\nThe payoff to agent i for a segment [(x, y) -> (x', y')] is given by: P([(x, y) -> (x', y')]) = q_x(x' - x) + q_y(y' - y) = q·(x' - x).\nWe call the line through the origin with slope (1 - p)/p = q_y/q_x the p-line.\nNote that the payoff, too, may be negative.\nOne drawback of the definition of a projection game is that implementing the payoffs requires us to know the actual probability p.\nThis is feasible if the probability can eventually be determined statistically, such as when predicting the relative frequency of different recurring events, or vote shares.\nIt is also feasible for one-off events in which there is reason to believe that the true probability is either 0 or 1.\nFor other one-off events, it cannot be implemented directly (unlike scoring rules, which can be implemented in expectation).\nHowever, we believe that even in these cases, the projection game can be useful as a conceptual and analytical tool.\nThe moves, costs and payoffs have a natural geometric representation, which is shown in Figure 1 for three players with one move each.\nThe players append directed line segments in turn, and the payoff player i finally receives for a move is the projection of her segment onto the line with slope (1 - p)/p.\nHer cost is the difference of distances of the endpoints of her move to the origin.\n2.1 Strategic properties of the projection game We begin our strategic analysis of the projection game by observing the following simple path-independence property.\nFigure 1: A projection game with three players.\nLemma 1.\n[Path-Independence] Suppose there is a sequence of moves leading from (x, y) to (x', y').\nThen, the total cost of all the moves is equal to the cost of the single move [(x, y) -> (x', y')], and the total payoff of all the moves is equal to the payoff of the single move [(x, y) -> (x', y')].\nProof.\nThe proof follows trivially from the definition of the costs and payoffs: If we consider a path from point x to point x', both the net change in the vector lengths and the net projection onto the p-line are completely determined by x and x'.\nAlthough simple, path independence of profits is vitally important, because it implies (and is implied by) the absence of arbitrage in the market.\nIn other words, there is no sequence of moves that start and end at the same point, but result in a positive profit.\nOn the other hand, if there were two paths from (x, y) to (x', y') with different profits, there would be a cyclic path with positive profit.\nFor ease of reference, we summarize some more useful properties of the cost and payoff functions in the projection game.\nLemma 2.\n1.\nThe instantaneous price for moving along a line through the origin is 1 or -1, when the move is away or toward the origin respectively.\nThe instantaneous price along a circle centered at the origin is 0.\n2.\nWhen x moves along a circle centered at the origin to point x̄ on the positive p-line, the corresponding payoff is P(x, x̄) = |x| - x·q, and the cost is C[x, x̄] = 0.\n3.\nThe two cost function formulations are equivalent: C[x, x'] = integral_{w=0}^{l} cos(x + w*e, e) dw = |x'| - |x| for all x, x', where e is the unit vector giving the direction of move.\nIn addition, when x moves along the positive p-line, the payoff is equal to the cost, P(x, x') = |x'| - |x|.\nProof.\n1.\nThe instantaneous price is c(x, e) = x·e/|x| = cos(x, e), where e is the direction of movement, and the result follows.\n2.\nSince x̄ is on the positive p-line, q·x̄ = |x̄| = |x|, hence P(x, x̄) = q·(x̄ - x) = |x| - x·q; the cost is 0 from the definition.\n3.\nFrom Part 1, the cost of moving from x to the origin is C[x, 0] = integral_{w=0}^{l} cos(x + w*e, e) dw = integral_{w=0}^{l} (-1) dw = -|x|, where l = |x|, e = x/|x|.\nBy the path-independence property, C[x, x'] = C[x, 0] + C[0, x'] = |x'| - |x|.\nFinally, a point on the positive p-line gets projected to itself, namely q·x = |x|, so when the movement is along the positive p-line, P(x, x') = q·(x' - x) = |x'| - |x| = C[x, x'].\nWe now consider the question of which moves are profitable in this game.\nThe eventual profit of a move [x, x'], where x' = x + l*(e_x, e_y), is profit[x, x'] = P[x, x'] - C[x, x'] = l*q·e - C[x, x'].\nDifferentiating with respect to l, we get d(profit)/dl = q·e - c(x + l*e, e) = q·e - ((x + l*e)/|x + l*e|)·e.\nWe observe that this is 0 if p(y + l*e_y) = (1 - p)(x + l*e_x), in other words, when the vectors q and (x + l*e) are exactly aligned.\nFurther, we observe that the price is non-decreasing with increasing l. Thus, along the direction e, the profit is maximized at the point of intersection with the p-line.\nBy Lemma 2, there is always a path from x to the positive p-line with 0 cost, which is given by an arc of the circle with center at the origin and radius |x|.\nAlso, any movement along the p-line has 0 additional profit.\nThus, for any point x, we can define the profit potential \u03c6(x, p) by \u03c6(x, p) = |x| - x·q.
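The cost, payoff, and potential defined above can be checked numerically. The following is a minimal Python sketch (our own illustration, not the paper's code; all function names are ours) that verifies the path-independence property of Lemma 1 and the profit-equals-potential-drop identity of Lemma 3 on example moves.

```python
import math

# A minimal sketch of the projection game:
#   cost C[x, x'] = |x'| - |x|, payoff q . (x' - x),
#   profit potential phi(x, p) = |x| - x . q, with q = (p, 1-p)/|(p, 1-p)|.

def unit_q(p):
    n = math.hypot(p, 1 - p)
    return (p / n, (1 - p) / n)

def cost(x, x_new):
    return math.hypot(*x_new) - math.hypot(*x)

def payoff(x, x_new, p):
    qx, qy = unit_q(p)
    return qx * (x_new[0] - x[0]) + qy * (x_new[1] - x[1])

def potential(x, p):
    qx, qy = unit_q(p)
    return math.hypot(*x) - (x[0] * qx + x[1] * qy)

def profit(x, x_new, p):
    return payoff(x, x_new, p) - cost(x, x_new)

p = 0.7
a, b, c = (1.0, 1.0), (2.5, 0.8), (0.4, 1.9)
# Lemma 1 (path independence): cost of a -> b -> c equals cost of a -> c.
assert abs((cost(a, b) + cost(b, c)) - cost(a, c)) < 1e-12
# Lemma 3: the profit of a move equals the drop in profit potential.
assert abs(profit(a, b, p) - (potential(a, p) - potential(b, p))) < 1e-12
```

Both checks are algebraic identities, so they hold for any choice of moves, not just the sample points used here.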
Note, the potential is positive for x off the positive p-line and zero for x on the line.\nNext we show that a move to a lower potential is always profitable.\nLemma 3.\nThe profit of a move [x, x'] is equal to the difference in potential \u03c6(x, p) - \u03c6(x', p).\nProof.\nDenote z = |x|q and z' = |x'|q, i.e., these are the points of intersection of the positive p-line with the circles centered at the origin with radii |x| and |x'| respectively.\nBy the path-independence property and Lemma 2, the profit of move [x, x'] is profit(x, x') = profit(x, z) + profit(z, z') + profit(z', x') = (|x| - x·q) + 0 + (x'·q - |x'|) = \u03c6(x, p) - \u03c6(x', p).\nThus, the profit of the move is equal to the change in profit potential between the endpoints.\nThis lemma offers another way of seeing that it is optimal to move to the point of lowest potential, namely to the p-line.\nFigure 2: The profit of move [x, x'] is equal to the change in profit potential from x to x'.\n3.\nDYNAMIC PARIMUTUEL MARKETS The dynamic parimutuel market (DPM) was introduced by Pennock [10] as an information market structure that encourages informed traders to trade early, has guaranteed liquidity, and requires a bounded subsidy.\nThis market structure was used in the Yahoo!
Buzz market [8].\nIn this section, we show that the dynamic parimutuel market is also remarkably similar to the projection game.\nCoupled with section 4, this also demonstrates a strong connection between the DPM and MSR.\nIn a two-event DPM, users can place bets on either event A or B at any time, by buying a share in the appropriate event.\nThe price of a share is variable, determined by the total amount of money in the market and the number of shares currently outstanding.\nFurther, existing shares can be sold at the current price.\nAfter it is determined which event really happens, the shares are liquidated for cash.\nIn the total-money-redistributed variant of DPM, which is the variant used in the Yahoo! market, the total money is divided equally among the shares of the winning event; shares of the losing event are worthless.\nNote that the payoffs are undefined if the event has zero outstanding shares; the DPM rules should preclude this possibility.\nWe use the following notation: Let x be the number of outstanding shares of A (totalled over all traders), and y be the number of outstanding shares in B. Let M denote the total money currently in the market.\nLet cA and cB denote the prices of shares in A and B respectively.\nThe price of a share in the Yahoo! 
DPM is determined by the share-ratio principle: c_A/c_B = x/y. (1)\nThe form of the prices can be fully determined by stipulating that, for any given value of M, x, and y, there must be some probability p_A such that, if a trader believes that p_A is the probability that A will occur and the market will liquidate in the current state, she cannot expect to profit from either buying or selling either share.\nThis gives us c_A = p_A[M/x] and c_B = p_B[M/y].\nSince p_A + p_B = 1, we have: x·c_A + y·c_B = M. (2)\nFinally, combining Equations 1 and 2, we get c_A = xM/(x^2 + y^2) and c_B = yM/(x^2 + y^2).\nCost of a trade in the DPM Consider a trader who comes to a DPM in state (M, x, y), and buys or sells shares such that the eventual state is (M', x', y').\nWhat is the net cost, M' - M, of her move?\nTheorem 4.\nThe cost of the move from (x, y) to (x', y') is M' - M = M0[sqrt(x'^2 + y'^2) - sqrt(x^2 + y^2)] for some constant M0.\nIn other words, it is a constant multiple of the corresponding cost in the projection game.\nProof.\nConsider the function G(x, y) = M0·sqrt(x^2 + y^2).\nThe function G is differentiable for all (x, y) != (0, 0), and its partial derivatives are: \u2202G/\u2202x = M0[x/sqrt(x^2 + y^2)] = x·G(x, y)/(x^2 + y^2) and \u2202G/\u2202y = M0[y/sqrt(x^2 + y^2)] = y·G(x, y)/(x^2 + y^2).\nNow, compare these equations to the prices in the DPM, and observe that, as a trader buys or sells in the DPM, the instantaneous price is the derivative of the money.\nIt follows that, if at any point of time the DPM is in a state (M, x, y) such that M = G(x, y), then, at all subsequent points of time, the state (M', x', y') of the DPM will satisfy M' = G(x', y').\nFinally, note that we can pick the constant M0 such that the equation is satisfied for the initial state of the DPM, and hence, it will always be satisfied.\nOne important consequence of Theorem 4 is that the dynamic parimutuel market is arbitrage-free (using Lemma 1).\nIt is interesting to note that the original Yahoo!
Buzz market used a different pricing rule, which did permit arbitrage; the price rule was changed to the share-ratio rule after traders started exploiting the arbitrage opportunities [8].\nAnother somewhat surprising consequence is that the numbers of outstanding shares x, y completely determine the total capitalization M of the DPM.\nConstraints in the DPM Although it might seem, based on the costs, that any move in the projection game has an equivalent move in the DPM, the DPM places some constraints on trades.\nFirstly, no trader is allowed to have a net negative holding in either share.\nThis is important, because it ensures that the total holdings in each share are always positive.\nHowever, this is a boundary constraint, and does not impact the strategic choices for a player with a sufficiently large positive holding in each share.\nThus, we can ignore this constraint from a first-order strategic analysis of the DPM.\nSecondly, for practical reasons a DPM will probably have a minimum unit of trade, but we assume here that arbitrarily small quantities can be traded.\nPayoffs in the DPM At some point, trading in the DPM ceases and shares are liquidated.\nWe assume here that the true probability becomes known at liquidation time, and describe the payoffs in terms of the probability; however, if the probability is not revealed, only the event that actually occurs, these payoffs can be implemented in expectation.\nSuppose the DPM terminates in a state (M, x, y), and the true probability of event A is p.\nWhen the dynamic parimutuel market is liquidated, the shares are paid off in the following way: Each owner of a share of A receives pM/x, and each owner of a share of B receives (1 - p)M/y, for each share owned.\nThe payoffs in the DPM, although given by a fairly simple form, are conceptually complex, because the payoff of a move depends on the subsequent moves before the market liquidates.\nThus, a fully rational choice of move in the DPM for player i
should take into account the actions of subsequent players, including player i himself.\nHere, we restrict the analysis to myopic, infinitesimal strategies: Given the market position is (M, x, y), in which direction should a player make an infinitesimal move in order to maximize her profit?\nWe show that the infinitesimal payoffs and profits of a DPM with true probability p correspond strategically to the infinitesimal payoffs and profits of a projection game with odds sqrt(p/(1 - p)), in the following sense: Lemma 5.\nSuppose player i is about to make a move in a dynamic parimutuel market in a state (M, x, y), and the true probability of event A is p. Then, assuming the market is liquidated after i's move, \u2022 If x/y < sqrt(p/(1 - p)), player i profits by buying shares in A, or selling shares in B. \u2022 If x/y > sqrt(p/(1 - p)), player i profits by selling shares in A, or buying shares in B.\nProof.\nConsider the cost and payoff of buying a small quantity \u0394x of shares in A.\nThe cost is C[(x, y) -> (x + \u0394x, y)] = \u0394x · xM/(x^2 + y^2), and the payoff is \u0394x · pM/x.\nThus, buying the shares is profitable iff \u0394x · xM/(x^2 + y^2) < \u0394x · pM/x \u21d4 x^2/(x^2 + y^2) < p \u21d4 (x^2 + y^2)/x^2 > 1/p \u21d4 1 + (y/x)^2 > 1/p \u21d4 y/x > sqrt((1 - p)/p) \u21d4 x/y < sqrt(p/(1 - p)).\nThus, buying A is profitable if x/y < sqrt(p/(1 - p)), and selling A is profitable if x/y > sqrt(p/(1 - p)).\nThe analysis for buying or selling B is similar, with p and (1 - p) interchanged.\nIt follows from Lemma 5 that it is myopically profitable for players to move towards the line with slope sqrt((1 - p)/p).\nNote that there is a one-to-one mapping between (1 - p)/p and sqrt((1 - p)/p) in their respective ranges, so this line is uniquely defined, and each such line also corresponds to a unique p.
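The threshold in Lemma 5 can be checked numerically. The sketch below (our own illustration, not the paper's code; function names are ours) prices shares of A by the share-ratio rule, c_A = xM/(x^2 + y^2), and compares the marginal cost of a small purchase against its liquidation value pM/x, assuming the market liquidates immediately at true probability p.

```python
import math

# Numeric check of Lemma 5: buying an infinitesimal amount of A is
# profitable iff x/y < sqrt(p/(1 - p)).

def price_A(M, x, y):
    # instantaneous price of a share of A under the share-ratio rule
    return x * M / (x**2 + y**2)

def myopic_gain_buy_A(M, x, y, p, dx=1e-6):
    # liquidation value p*M/x per share, minus the price paid
    return dx * (p * M / x - price_A(M, x, y))

M, p = 10.0, 0.8
threshold = math.sqrt(p / (1 - p))  # the x/y threshold; about 2.0 here
assert myopic_gain_buy_A(M, x=1.0, y=1.0, p=p) > 0   # x/y = 1 < 2: buy A
assert myopic_gain_buy_A(M, x=3.0, y=1.0, p=p) < 0   # x/y = 3 > 2: sell A
assert abs(myopic_gain_buy_A(M, x=2.0, y=1.0, p=p)) < 1e-9  # at threshold
```

At the threshold x/y = sqrt(p/(1 - p)) the marginal gain vanishes, which is exactly the indifference condition used to derive the share-ratio prices.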
However, because the actual payoff of a move depends on the future moves, players must base their decisions on some belief about the final state of the market.\nIn the light of Lemma 5, one natural, rational-expectation style assumption is that the final state (M, x*, y*) will satisfy x*/y* = sqrt(p/(1 - p)).\n(In other words, one might assume that the traders' beliefs will ultimately converge to the true probability p; knowing p, the traders will drive the market state to satisfy x/y = sqrt(p/(1 - p)).)\nThis is very plausible in markets (such as the Yahoo! Buzz market) in which trading is permitted right until the market is liquidated, at which point there is no remaining uncertainty about the relevant frequencies.\nUnder this assumption, we can prove an even tighter connection between payoffs in the DPM (where the true probability is p) and payoffs in the projection game, with odds sqrt(p/(1 - p)): Theorem 6.\nSuppose that the DPM ultimately terminates in a state (M, X, Y) satisfying X/Y = sqrt(p/(1 - p)).\nAssume without loss of generality that the constant M0 = 1, so M = sqrt(X^2 + Y^2).\nThen, the final payoff for any move [x -> x'] made in the course of trading is (x' - x) · (sqrt(p), sqrt(1 - p)), i.e., it is the same as the payoff in the projection game with odds sqrt(p/(1 - p)).\nProof.\nFirst, observe that X/M = sqrt(p) and Y/M = sqrt(1 - p).\nThe final payoff is the liquidation value of (x' - x) shares of A and (y' - y) shares of B, which is Payoff_DPM[x' - x] = p(M/X)(x' - x) + (1 - p)(M/Y)(y' - y) = p(1/sqrt(p))(x' - x) + (1 - p)(1/sqrt(1 - p))(y' - y) = sqrt(p)(x' - x) + sqrt(1 - p)(y' - y).\nStrategic Analysis for the DPM Theorems 4 and 6 give us a very strong equivalence between the projection game and the dynamic parimutuel market, under the assumption that the DPM converges to the optimal value for the true probability.\nA player playing in a DPM with
true odds p/(1 - p), can imagine himself playing in the projection game with odds sqrt(p/(1 - p)), because both the costs and the payoffs of any given move are identical.\nUsing this equivalence, we can transfer all the strategic properties proven for the projection game directly to the analysis of the dynamic parimutuel market.\nOne particularly interesting conclusion we can draw is as follows: In the absence of any constraint that disallows it, it is always profitable for an agent to move towards the origin, by selling shares in both A and B while maintaining the ratio x/y.\nIn the DPM, this is limited by forbidding short sales, so players can never have negative holdings in either share.\nAs a result, when their holding in one share (say A) is 0, they can't use the strategy of moving towards the origin.\nWe can conclude that a rational player should never hold shares of both A and B simultaneously, regardless of her beliefs and the market position.\nThis discussion leads us to consider a modified DPM, in which this strategic loophole is addressed directly: Instead of disallowing all short sales, we place a constraint that no agent ever reduce the total market capitalization M (or, alternatively, that any agent's total investment in the market is always non-negative).\nWe call this the nondecreasing market capitalization constraint for the DPM.\nThis corresponds to a restriction that no move in the projection game reduces the radius.\nHowever, we can conclude from the preceding discussion that players have no incentive to ever increase the radius.\nThus, the moves of the projection game would all lie on the quarter circle in the positive quadrant, with radius determined by the market maker's move.\nIn section 4, we show that the projection game on this quarter circle is strategically equivalent (at least myopically) to trade in a Market Scoring Rule.\nThus, the DPM and MSR appear to be deeply connected to each other, like different interfaces to the same
underlying game.\n4.\nMARKET SCORING RULES The Market Scoring Rule (MSR) was introduced by Hanson [6].\nIt is based on the concept of a proper scoring rule, a technique which rewards forecasters in a way that encourages them to give their best prediction.\nHanson's innovation was to turn the scoring rules into instruments that can be traded, thereby providing traders who have new information an incentive to trade.\nOne positive effect of this design is that a single trader would still have an incentive to trade, which is equivalent to updating the scoring rule report to reflect her information, thereby eliminating the problem of thin markets and illiquidity.\nIn this section, we show that, when the scoring rule used is the spherical scoring rule [4], there is a strong strategic equivalence between the projection game and the market scoring rule.\nProper scoring rules are tools used to reward forecasters who predict the probability distribution of an event.\nIn the simple setting of two exhaustive, mutually exclusive events A and B, proper scoring rules are defined as follows.\nSuppose the forecaster predicts that the probabilities of the events are r = (r_A, r_B), with r_A + r_B = 1.\nThe scoring rule is specified by functions s_A(r_A, r_B) and s_B(r_A, r_B), which are applied as follows: If the event A occurs, the forecaster is paid s_A(r_A, r_B), and if the event B occurs, the forecaster is paid s_B(r_A, r_B).\nThe key property that a proper scoring rule satisfies is that the expected payment is maximized when the report is identical to the true probability distribution.\n4.1 Equivalence with Spherical Scoring Rule In this section, we focus on one specific scoring rule: the spherical scoring rule [4].\nDefinition 1.\nThe spherical scoring rule [4] is defined by s_i(r) = r_i/||r||.\nFor two events, this can be written as: s_A(r_A, r_B) = r_A/sqrt(r_A^2 + r_B^2); s_B(r_A, r_B) = r_B/sqrt(r_A^2 + r_B^2).\nThe spherical scoring rule is
known to be a proper scoring rule.\nThe definition generalizes naturally to higher dimensions.\nWe now demonstrate a close connection between the projection game restricted to a circular arc and a market scoring rule that uses the spherical scoring rule.\nAt this point, it is convenient to use vector notation.\nLet x = (x, y) denote a position in the projection game.\nWe consider the projection game restricted to the circle |x| = 1.\nRestricted projection game Consider a move in this restricted projection game from x to x'.\nRecall that q = (p/sqrt(p^2 + (1 - p)^2), (1 - p)/sqrt(p^2 + (1 - p)^2)), where p is the true probability of the event.\nThen, the projection game profit of a move [x, x'] is q · [x' - x] (noting that |x| = |x'|).\nWe can extend this to an arbitrary collection of (not necessarily contiguous) moves X = {[x_1, x_1'], [x_2, x_2'], ..., [x_l, x_l']}: SEG-PROFIT_p(X) = \u03a3_{[x,x'] in X} q · [x' - x] = q · [\u03a3_{[x,x'] in X} (x' - x)].\nSpherical scoring rule profit We now turn our attention to the MSR with the spherical scoring rule (SSR).\nConsider a player who changes the report from r to r'.\nThen, if the true probability of A is p, her expected profit is SSR-PROFIT([r, r']) = p(s_A(r') - s_A(r)) + (1 - p)(s_B(r') - s_B(r)).\nNow, let us represent the initial and final position in terms of circular coordinates.\nFor r = (r_A, r_B), define the corresponding coordinates x = (r_A/sqrt(r_A^2 + r_B^2), r_B/sqrt(r_A^2 + r_B^2)).\nNote that the coordinates satisfy |x| = 1, and thus correspond to valid coordinates for the restricted projection game.\nNow, let p denote the vector [p, 1 - p].\nThen, expanding the spherical scoring functions s_A, s_B, the player's profit for a move from r to r' can be rewritten in terms of the corresponding coordinates x, x' as: SSR-PROFIT([x, x']) = p · (x' - x).\nFor any collection X of moves, the total payoff in the SSR market is given by: SSR-PROFIT_p(X) = \u03a3_{[x,x'] in X} p
\u00b7 [x \u2212 x] = p \u00b7 2 4 X [x,x ]\u2208X [x \u2212 x] 3 5 Finally, we note that p and q are related by q = \u03bcpp, where \u03bcp = 1\/ p p2 + (1 \u2212 p)2 is a scalar that depends only on p.\nThis immediately gives us the following strong strategic equivalence for the restricted projection game and the SSR market: Theorem 7.\nAny collection of moves X yields a positive (negative) payoff in the restricted projection game iff X yields a positive (negative) payoff in the Spherical Scoring Rule market.\nProof.\nAs derived above, SEG-PROFITp(X ) = \u03bcpSSR-PROFITp(X ).\nFor all p, 1 \u2264 \u03bcp \u2264 \u221a 2, (or more generally for an ndimensional probability vector p, 1 \u2264 \u03bcp = 1 |p| \u2264 \u221a n, by the arithmetic mean-root mean square inequality), and the result follows immediately.\n3 We allow the collection to contain repeated moves, i.e., it is a multiset.\nAlthough theorem 7 is stated in terms of the sign of the payoff, it extends to relative payoffs of two collections of moves: Corollary 8.\nConsider any two collections of moves X , X .\nThen, X yields a greater payoff than X in the projection game iff X yields a greater payment than X in the SSR market.\nProof.\nEvery move [x, x ] has a corresponding inverse move [x , x].\nIn both the projection game and the SSR, the inverse move profit is simply the negative profit of the move (the moves are reversible).\nWe can define a collection of moves X = X \u2212 X by adding the inverse of X to X .\nNote that SEG-PROFITp(X ) = SEG-PROFITp(X )\u2212SEG-PROFITp(X ) and SSR-PROFITp(X ) = SSR-PROFITp(X )\u2212SSR-PROFITp(X ); applying theorem 7 completes the proof.\nIt follows that the ex post optimality of a move (or set of moves) is the same in both the projection game and the SSR market.\nOn its own, this strong ex post equivalence is not completely satisfying, because in any non-trivial game there is uncertainty about the value of p, and the different scaling ratios for different p could 
lead to different ex ante optimal behavior. We can extend the correspondence to settings with uncertain $p$, as follows:

Theorem 9. Consider the restricted projection game with some prior probability distribution $F$ over possible values of $p$. Then, there is a probability distribution $G$ with the same support as $F$, and a strictly positive constant $c$ that depends only on $F$, such that:
(i) For any collection $\mathcal{X}$ of moves, the expected profits are related by $E_F(\text{SEG-PROFIT}(\mathcal{X})) = c\,E_G(\text{SSR-PROFIT}(\mathcal{X}))$.
(ii) For any collection $\mathcal{X}$, and any measurable information set $I \subseteq [0, 1]$, the expected profits conditioned on knowing that $p \in I$ satisfy $E_F(\text{SEG-PROFIT}(\mathcal{X}) \mid p \in I) = c\,E_G(\text{SSR-PROFIT}(\mathcal{X}) \mid p \in I)$.
The converse also holds: for any probability distribution $G$, there is a distribution $F$ such that both these statements are true.

Proof. For simplicity, assume that $F$ has a density function $f$ (the result holds even for non-continuous distributions). Then, let $c = \int_0^1 \mu_p f(p)\,dp$, and define the density function $g$ of distribution $G$ by $g(p) = \mu_p f(p)/c$. Now, for a collection of moves $\mathcal{X}$,

$$E_F(\text{SEG-PROFIT}(\mathcal{X})) = \int \text{SEG-PROFIT}_p(\mathcal{X})\,f(p)\,dp = \int \text{SSR-PROFIT}_p(\mathcal{X})\,\mu_p f(p)\,dp = \int \text{SSR-PROFIT}_p(\mathcal{X})\,c\,g(p)\,dp = c\,E_G(\text{SSR-PROFIT}(\mathcal{X}))$$

[Figure 3: Sample score curves for the log scoring rule $s_i(\mathbf{r}) = a_i + b \log r_i$ and the quadratic scoring rule $s_i(\mathbf{r}) = a_i + b(2r_i - \sum_k r_k^2)$.]

To prove part (ii), we simply restrict the integral to values in $I$. The converse follows similarly by constructing $F$ from $G$.
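Both equivalences are straightforward to check numerically. The sketch below is our own illustration, not part of the paper: it verifies the pointwise scaling identity behind Theorem 7 for an arbitrary collection of moves, and checks the Theorem 9 identity using a discrete prior F in place of the density f (the particular moves and prior weights are arbitrary choices).

```python
import math

def point(theta):
    # a position on the unit circle of the restricted projection game
    return (math.cos(theta), math.sin(theta))

def ssr_profit(p, move):
    # MSR profit under the spherical scoring rule, in circular coordinates:
    # p . (x' - x) with p = (p, 1-p)
    x0, x1 = move
    return p * (x1[0] - x0[0]) + (1 - p) * (x1[1] - x0[1])

def seg_profit(p, move):
    # projection-game profit: q . (x' - x) with q = (p, 1-p) / |(p, 1-p)|
    n = math.hypot(p, 1 - p)
    x0, x1 = move
    return (p * (x1[0] - x0[0]) + (1 - p) * (x1[1] - x0[1])) / n

mu = lambda p: 1 / math.hypot(p, 1 - p)  # the scalar mu_p = 1/|p|

# an arbitrary collection of (not necessarily contiguous) moves on the circle
X = [(point(0.2), point(0.9)), (point(1.3), point(0.4))]

# Theorem 7: SEG-PROFIT_p(X) = mu_p * SSR-PROFIT_p(X) for every p
for p in (0.1, 0.5, 0.8):
    seg = sum(seg_profit(p, m) for m in X)
    ssr = sum(ssr_profit(p, m) for m in X)
    assert abs(seg - mu(p) * ssr) < 1e-12

# Theorem 9 with a discrete prior F: c = E_F[mu_p], g(p) = mu_p f(p)/c,
# and E_F[SEG-PROFIT] = c * E_G[SSR-PROFIT]
F = {0.2: 0.5, 0.7: 0.3, 0.9: 0.2}
c = sum(mu(p) * f for p, f in F.items())
G = {p: mu(p) * f / c for p, f in F.items()}
lhs = sum(f * sum(seg_profit(p, m) for m in X) for p, f in F.items())
rhs = c * sum(g * sum(ssr_profit(p, m) for m in X) for p, g in G.items())
assert abs(lhs - rhs) < 1e-12
```

Note that the distorted prior G is again a probability distribution, and that c lies between 1 and √2, as the bound on μ_p requires.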
Analysis of MSR strategies. Theorem 9 provides the foundation for analysis of strategies in scoring rule markets. To the extent that strategies in these markets are independent of the specific scoring rule used, we can use the spherical scoring rule as the market instrument. Then, analysis of strategies in the projection game with a slightly distorted distribution over $p$ can be used to understand the strategic properties of the original market situation.

Implementation in expectation. Another important consequence of Theorem 9 is that the restricted projection game can be implemented with a small distortion in the probability distribution over values of $p$, by using a Spherical Scoring Rule to implement the payoffs. This makes the projection game valuable as a design tool; for example, we can analyze new constraints and rules in the projection game, and then implement them via the SSR. Unfortunately, the result does not extend to unrestricted projection games, because the relative profit of moving along the circle versus changing radius is not preserved through this transformation. However, it is possible to extend the transformation to projection games in which the radius $r_i$ after the $i$th move is a fixed function of $i$ (not necessarily constant), so that it is not within the strategic control of the player making the move; such games can also be strategically implemented via the spherical scoring rule (with distortion of priors).

4.2 Connection to other scoring rules

In this section, we show a weaker similarity between the projection game and the MSR with other scoring rules. We prove an infinitesimal similarity between the restricted projection game and the MSR with the log scoring rule; the result generalizes to all proper scoring rules that have a unique local and global maximum. A geometric visualization of some common scoring rules in two dimensions is depicted in Figure 3. The score curves in the figure are defined by $\{(s_1(\mathbf{r}), s_2(\mathbf{r})) \mid \mathbf{r} = (r, 1-r),\ r \in [0, 1]\}$. Similarly to the projection game, define the profit potential of a probability $\mathbf{r}$ in the MSR to be the change in profit for moving from $\mathbf{r}$ to the optimum $\mathbf{p}$: $\phi_{MSR}(s(\mathbf{r}), \mathbf{p}) = \text{profit}_{MSR}[s(\mathbf{r}), s(\mathbf{p})]$. We will show that the profit potentials in the two games have analogous roles for analyzing the optimal strategies; in particular, both potential functions have a global minimum of 0 at $\mathbf{r} = \mathbf{p}$.

Theorem 10. Consider the projection game restricted to the non-negative unit circle, where strategies $\mathbf{x}$ have the natural one-to-one correspondence to probability distributions $\mathbf{r} = (r, 1-r)$ given by $\mathbf{x} = \left(\frac{r}{|\mathbf{r}|}, \frac{1-r}{|\mathbf{r}|}\right)$. Trade in a log market scoring rule is strategically similar to trade in the projection game on the quarter-circle, in that

$$\frac{d}{dr}\phi(s(\mathbf{r}), \mathbf{p}) < 0 \text{ for } r < p, \qquad \frac{d}{dr}\phi(s(\mathbf{r}), \mathbf{p}) > 0 \text{ for } r > p,$$

both for the projection game and MSR potentials $\phi(\cdot)$.

Proof. (sketch) The derivative of the MSR potential is $\frac{d}{dr}\phi(s(\mathbf{r}), \mathbf{p}) = -\mathbf{p} \cdot \frac{d}{dr}s(\mathbf{r}) = -\sum_i p_i s_i'(\mathbf{r})$. For the log scoring rule $s_i(\mathbf{r}) = a_i + b \log r_i$ with $b > 0$,

$$\frac{d}{dr}\phi_{MSR}(s(\mathbf{r}), \mathbf{p}) = -\mathbf{p} \cdot \left(\frac{b}{r}, -\frac{b}{1-r}\right) = -b\left(\frac{p}{r} - \frac{1-p}{1-r}\right) = b\,\frac{r - p}{r(1-r)}.$$

Since $\mathbf{r} = (r, 1-r)$ is a probability distribution, this expression is positive for $r > p$ and negative for $r < p$, as desired. Now, consider the projection game on the non-negative unit circle. The potential for any $\mathbf{x} = \left(\frac{r}{|\mathbf{r}|}, \frac{1-r}{|\mathbf{r}|}\right)$ is given by $\phi(\mathbf{x}(r), \mathbf{p}) = |\mathbf{x}| - \mathbf{q} \cdot \mathbf{x}(r)$. It is easy to show that $\frac{d}{dr}\phi(\mathbf{x}(r), \mathbf{p}) < 0$ for $r < p$ and the derivative is positive for $r > p$, so the potential function along the circle is decreasing and then increasing with $r$, similarly to an energy function, with a global minimum at $r = p$, as desired.

Theorem 10 establishes that the market log-scoring rule is strategically similar to the projection game played on a circle, in the sense that the optimal direction of movement at the current state is the
same in both games. For example, if the current state is $r < p$, it is profitable to move to $r' = r + dr$, since the effective profit of that move is $\text{profit}(r, r') = \phi(s(\mathbf{r}), \mathbf{p}) - \phi(s(\mathbf{r} + dr), \mathbf{p}) > 0$. Although stated for log scoring rules, the theorem holds for any scoring rule that induces a potential with a unique local and global minimum at $\mathbf{p}$, such as the quadratic scoring rule and others.

5. USING THE PROJECTION-GAME MODEL

The chief advantages of the projection game are that it is analytically tractable, and also easy to visualize. In Section 3, we used the projection-game model of the DPM to prove the absence of arbitrage, and to infer strategic properties that might have been difficult to deduce otherwise. In this section, we provide two examples that illustrate the power of projection-game analysis for gaining insight about more complex strategic settings.

5.1 Traders with inertia

The standard analysis of trader behavior in any of the market forms we have studied asserts that traders who disagree with the market probabilities will expect to gain from changing the probability, and thus have a strict incentive to trade in the market. The expected gain may, however, be very small. A plausible model of real trader behavior might include some form of inertia or $\epsilon$-optimality: we assume that traders will trade only if their expected profit is greater than some constant $\epsilon$. We do not attempt to justify this model here; rather, we illustrate how the projection game may be used to analyze such situations, and to shed some light on how to modify the trading rules to alleviate this problem. Consider the simple projection game restricted to a circular arc with unit radius; as we have seen, this corresponds closely to the spherical market scoring rule, and to the dynamic parimutuel market under a reasonable constraint. Now, suppose the market probability is $p$, and a trader believes the true probability is $p'$. Then, his expected gain can be calculated, as
follows: Let $\mathbf{q}$ and $\mathbf{q}'$ be the unit vectors in the directions of $\mathbf{p}$ and $\mathbf{p}'$, respectively. The expected profit is given by $E = \phi(\mathbf{q}, p') = 1 - \mathbf{q} \cdot \mathbf{q}'$. Thus, the trader will trade only if $1 - \mathbf{q} \cdot \mathbf{q}' > \epsilon$. If we let $\theta$ and $\theta'$ be the angles of the $p$-line and $p'$-line respectively (from the x-axis), we get $E = 1 - \cos(\theta - \theta')$; when $\theta$ is close to $\theta'$, a Taylor series approximation gives us that $E \approx (\theta - \theta')^2/2$. Thus, we can derive a bound on the limit of the market accuracy: the market price will not change as long as $(\theta - \theta')^2 \le 2\epsilon$.

Now, suppose a market operator faced with this situation wanted to sharpen the accuracy of the market. One natural approach is simply to multiply all payoffs by a constant. This corresponds to using a larger circle in the projection game, and would indeed improve the accuracy. However, it will also increase the market-maker's exposure to loss: the market-maker would have to pump in more money to achieve this. The projection game model suggests a natural approach to improving the accuracy while retaining the same bounds on the market-maker's loss. The idea is that, instead of restricting all moves to the unit circle, we force each move to have a slightly larger radius than the previous move. Suppose we insist that, if the current radius is $r$, the next trader has to move to radius $r + 1$. Then, the trader's expected profit would be $E = r(1 - \cos(\theta - \theta'))$. Using the same approximation as above, the trader would trade as long as $(\theta - \theta')^2 > 2\epsilon/r$.
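This accuracy bound is easy to check numerically. The following sketch is our own illustration (the threshold value eps and the radius r = 10 are arbitrary assumptions): it confirms that on the unit circle the market stalls for angle gaps up to about √(2ε), and that forcing trades onto a larger radius shrinks the stall region.

```python
import math

eps = 1e-3  # assumed inertia threshold: a trader moves only if expected profit > eps

def expected_gain(d_theta, r=1.0):
    # profit potential for a belief/market angle gap d_theta on a circle of radius r
    return r * (1.0 - math.cos(d_theta))

# On the unit circle, the market price stops moving once (theta - theta')^2 <= 2*eps,
# i.e. for angle gaps up to roughly sqrt(2*eps)
stall = math.sqrt(2 * eps)
assert expected_gain(0.99 * stall) < eps   # inside the bound: no trade
assert expected_gain(1.10 * stall) > eps   # outside the bound: trade occurs

# Growing the radius shrinks the stall region to (theta - theta')^2 <= 2*eps/r:
# a disagreement too small to act on at r = 1 becomes profitable at r = 10
assert expected_gain(0.99 * stall, r=10.0) > eps
```

The last assertion is the point of the design suggestion above: each trade enlarges r, so the no-trade region tightens over the life of the market without increasing the market-maker's worst-case loss.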
Now, even if the market maker seeded the market with $r = 1$, the radius would increase with each trade, and the incentives to sharpen the estimate increase with every trade.

5.2 Analyzing long-term strategies

Up to this point, our analysis has been restricted to trader strategies that are myopic, in the sense that traders do not consider the impact of their trades on other traders' beliefs. In practice, an informed trader can potentially profit by playing a suboptimal strategy to mislead other traders, in a way that allows her to profit later. In this section, we illustrate how the projection game can be used to analyze an instance of this phenomenon, and to design market rules that mitigate this effect.

The scenario we consider is as follows. There are two traders speculating on the probability of an event E, who each get a 1-bit signal. The optimal probability for each 2-bit signal pair is as follows. If trader 1 gets signal 0 and trader 2 gets signal 0, the optimal probability is 0.3. If trader 1 got a 0, but trader 2 got a 1, the optimal probability is 0.9. If trader 1 gets a 1 and trader 2 gets signal 0, the optimal probability is 0.7. If trader 1 got a 1 and trader 2 got a 1, the optimal probability is 0.1. (Note that the impact of trader 2's signal is in a different direction, depending on trader 1's signal.) Suppose that the prior distribution of the signals is that trader 1 is equally likely to get a 0 or a 1, but trader 2 gets a 0 with probability 0.55 and a 1 with probability 0.45. The traders are playing the projection game restricted to a circular arc. This setup is depicted in Figure 4.

[Figure 4: Example illustrating non-myopic deception. The signal pairs 00, 01, 10, and 11 have optimal points C, A, B, and D, respectively.]

Suppose that, for some exogenous reason, trader 1 has the opportunity to trade, followed by trader 2. Then, trader 1 has the option of placing a last-minute trade just before the market closes. If traders were playing their
myopically optimal strategies, here is how the market should run: If trader 1 sees a 0, he would move to some point Y that is between A and C, but closer to C. Trader 2 would then infer that trader 1 received a 0 signal and move to A or C if she got 1 or 0 respectively.\nTrader 1 has no reason to move again.\nIf trader 1 had got a 1, he would move to a different point X instead, and trader 2 would move to D if she saw 1 and B if she saw 0.\nAgain, trader 1 would not want to move again.\nUsing the projection game, it is easy to show that, if traders consider non-myopic strategies, this set of strategies is not an equilibrium.\nThe exact position of the points does not matter; all we need is the relative position, and the observation that, because of the perfect symmetry in the setup, segments XY, BC, and AD are all parallel to each other.\nNow, suppose trader 1 got a 0.\nHe could move to X instead of Y , to mislead trader 2 into thinking he got a 1.\nThen, when trader 2 moved to, say, D, trader 1 could correct the rating to A. 
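This deviation can be checked numerically. The sketch below is our own construction, not part of the paper: the optimal points A, B, C, D are placed on the unit circle in the directions of their probabilities, and X and Y are assumed to be the myopically optimal posterior points for trader 1's two signals, consistent with the description above. It compares trader 1's expected profit, conditional on seeing a 0, between the honest move to Y and the deceptive move to X followed by a correction.

```python
import math

def unit(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def q(p):
    # p-line direction (unit vector) for true probability p
    return unit((p, 1.0 - p))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# optimal points for the signal pairs 00, 01, 10, 11
C, A, B, D = q(0.3), q(0.9), q(0.7), q(0.1)

# trader 2's signal is 0 w.p. 0.55 and 1 w.p. 0.45
w0, w1 = 0.55, 0.45

# assumed myopic targets: Y for trader 1's signal 0, X for signal 1
# (the point on the circle maximizing the posterior-expected projection)
Y = unit((w0 * C[0] + w1 * A[0], w0 * C[1] + w1 * A[1]))
X = unit((w0 * B[0] + w1 * D[0], w0 * B[1] + w1 * D[1]))

# Trader 1 saw a 0. Relative to the honest strategy (move to Y and stop),
# deception adds the move Y -> X plus a correcting move B -> C (if p = 0.3)
# or D -> A (if p = 0.9), evaluated under the corresponding true p-line.
YX = (X[0] - Y[0], X[1] - Y[1])
extra = (w0 * dot(q(0.3), YX) + w1 * dot(q(0.9), YX)
         + w0 * dot(q(0.3), (C[0] - B[0], C[1] - B[1]))
         + w1 * dot(q(0.9), (A[0] - D[0], A[1] - D[1])))
assert extra > 0  # the deceptive deviation is strictly profitable in expectation
```

The first two terms (the move YX) come out slightly negative, but the correcting moves along the longer parallel chords BC and DA more than compensate, matching the geometric argument below.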
To show that this is a profitable deviation, observe that this strategy is equivalent to playing two additional moves over trader 1's myopic strategy of moving to Y. The first move, YX, may either move toward or away from the optimal final position. The second move, DA or BC, is always in the correct direction. Further, because DA and BC are longer than XY, and parallel to XY, their projection on the final p-line will always be greater in absolute value than the projection of XY, regardless of what the true p-line is! Thus, the deception would result in a strictly higher expected profit for trader 1. Note that this problem is not specific to the projection game form: our equivalence results show that it could arise in the MSR or DPM (perhaps with a different prior distribution and different numerical values). Observe also that a strategy profile in which neither trader moved in the first two rounds, and trader 1 moved to either X or Y in the final round, would be a subgame-perfect equilibrium in this setup.

We suggest that one approach to mitigating this problem might be to reduce the radius at every move. This essentially provides a form of discounting that motivates trader 1 to take his profit early rather than mislead trader 2. Graphically, the right reduction factor would make the segments AD and BC shorter than XY (as they are chords on a smaller circle), thus making the myopic strategy optimal.

6. CONCLUSIONS AND FUTURE WORK

We have presented a simple geometric game, the projection game, that can serve as a model for strategic behavior in information markets, as well as a tool to guide the design of new information markets. We have used this model to analyze the cost, profit, and strategies of a trader in a dynamic parimutuel market, and shown that both the dynamic parimutuel market and the spherical market scoring rule are strategically equivalent to the restricted projection game under slight distortion of the prior probabilities. The general analysis
was based on the assumption that traders do not actively try to mislead other traders for future profit.\nIn section 5, however, we analyze a small example market without this assumption.\nWe demonstrate that the projection game can be used to analyze traders'' strategies in this scenario, and potentially to help design markets with better strategic properties.\nOur results raise several very interesting open questions.\nFirstly, the payoffs of the projection game cannot be directly implemented in situations in which the true probability is not ultimately revealed.\nIt would be very useful to have an automatic transformation of a given projection game into another game in which the payoffs can be implemented in expectation without knowing the probability, and preserves the strategic properties of the projection game.\nSecond, given the tight connection between the projection game and the spherical market scoring rule, it is natural to ask if we can find as strong a connection to other scoring rules or if not, to understand what strategic differences are implied by the form of the scoring rule used in the market.\nFinally, the existence of long-range manipulative strategies in information markets is of great interest.\nThe example we studied in section 5 merely scratches the surface of this area.\nA general study of this class of manipulations, together with a characterization of markets in which it can or cannot arise, would be very useful for the design of information markets.\n7.\nREFERENCES [1] S. Debnath, D. M. Pennock, S. Lawrence, E. J. Glover, and C. L. Giles.\nInformation incorporation in online in-game sports betting markets.\nIn Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC``03), pages 258-259, June 2003.\n[2] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright.\nAnatomy of an experimental political stock market.\nAmerican Economic Review, 82(5):1142-1161, 1992.\n[3] R. Forsythe, T. A. Rietz, and T. W. 
Ross.\nWishes, expectations, and actions: A survey on price formation in election stock markets.\nJournal of Economic Behavior and Organization, 39:83-110, 1999.\n[4] D. Friedman.\nEffective scoring rules for probabilistic forecasts.\nManagement Science, 29(4):447-454, 1983.\n[5] J. M. Gandar, W. H. Dare, C. R. Brown, and R. A. Zuber.\nInformed traders and price variations in the betting market for professional basketball games.\nJournal of Finance, LIII(1):385-401, 1998.\n[6] R. Hanson.\nCombinatorial information market design.\nInformation Systems Frontiers, 5(1):107-119, 2003.\n[7] R. Hanson, R. Oprea, and D. Porter.\nInformation aggregation and manipulation in an experimental market.\nJournal of Economic Behavior and Organization, page to appear, 2006.\n[8] B. Mangold, M. Dooley, G. W. Flake, H. Hoffman, T. Kasturi, D. M. Pennock, and R. Dornfest.\nThe tech buzz game.\nIEEE Computer, 38(7):94-97, July 2005.\n[9] J. A. Muth.\nRational expectations and the theory of price movements.\nEconometrica, 29(6):315-335, 1961.\n[10] D. Pennock.\nA dynamic parimutuel market for information aggregation.\nIn Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC ``04), June 2004.\n[11] D. Pennock and R. Sami.\nComputational aspects of prediction markets.\nIn N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, editors, Algorithmic Game Theory.\nCambridge University Press, 2007.\n(to appear).\n[12] D. M. Pennock, S. Debnath, E. J. Glover, and C. L. Giles.\nModeling information incorporation in markets, with application to detecting and explaining events.\nIn Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 405-413, 2002.\n[13] C. R. Plott and S. Sunder.\nRational expectations and the aggregation of diverse information in laboratory security markets.\nEconometrica, 56(5):1085-1118, 1988.\n[14] C. R. Plott, J. Wit, and W. C. 
Yang.\nParimutuel betting markets as information aggregation devices: Experimental results.\nTechnical Report Social Science Working Paper 986, California Institute of Technology, Apr. 1997.\n[15] C. Polk, R. Hanson, J. Ledyard, and T. Ishikida.\nPolicy analysis market: An electronic commerce application of a combinatorial information market.\nIn Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC``03), pages 272-273, June 2003.\n[16] C. Schmidt and A. Werwatz.\nHow accurate do markets predict the outcome of an event?\nthe Euro 2000 soccer championships experiment.\nTechnical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.\n325","lvl-3":"A Strategic Model for Information Markets\nABSTRACT\nInformation markets, which are designed specifically to aggregate traders' information, are becoming increasingly popular as a means for predicting future events.\nRecent research in information markets has resulted in two new designs, market scoring rules and dynamic parimutuel markets.\nWe develop an analytic method to guide the design and strategic analysis of information markets.\nOur central contribution is a new abstract betting game, the projection game, that serves as a useful model for information markets.\nWe demonstrate that this game can serve as a strategic model of dynamic parimutuel markets, and also captures the essence of the strategies in market scoring rules.\nThe projection game is tractable to analyze, and has an attractive geometric visualization that makes the strategic moves and interactions more transparent.\nWe use it to prove several strategic properties about the dynamic parimutuel market.\nWe also prove that a special form of the projection game is strategically equivalent to the spherical scoring rule, and it is strategically similar to other scoring rules.\nFinally, we illustrate two applications of the model to analysis of complex strategic scenarios: we analyze the precision of a market in 
which traders have inertia, and a market in which a trader can profit by manipulating another trader's beliefs.\n1.\nINTRODUCTION\nMarkets have long been used as a medium for trade.\nAs a side effect of trade, the participants in a market reveal something about their preferences and beliefs.\nFor example, in a financial market, agents would buy shares which they think are undervalued, and sell shares which they think are overvalued.\nIt has long been observed that, because the market price is influenced by all the trades taking place, it aggregates the private information of all the traders.\nThus, in a situation in which future events are uncertain, and each trader might have a little information, the aggregated information contained in the market prices can be used to predict future events.\nThis has motivated the creation of information markets, which are mechanisms for aggregating the traders' information about an uncertain event.\nInformation markets can be modeled as a game in which the participants bet on a number of possible outcomes, such as the results of a presidential election, by buying shares of the outcomes and receiving payoffs when the outcome is realized.\nAs in financial markets, the participants aim to maximize their profit by buying low and selling high.\nIn this way, the players' behavior transmits their personal information and beliefs about the possible outcomes, and can be used to predict the event more accurately.\nThe benefit of well-designed information markets goes beyond information aggregation; they can also be used as a hedging instrument, to allow traders to insure against risk.\nRecently, researchers have turned to the problem of designing market structures specifically to achieve better information aggregation properties than traditional markets.\nTwo designs for information markets have been proposed: the Dynamic Parimutuel Market (DPM) by Pennock [10] and the Market Scoring Rules (MSR) by Hanson [6].\nBoth the DPM and the MSR 
were designed with the goal of giving informed traders an incentive to trade, and to reveal their information as soon as possible, while also controlling the subsidy that the market designer needs to pump into the market.\nThe DPM was created as a combination of a pari-mutuel market (which is commonly used for betting on horses) and a continuous double auction, in order to simultaneously obtain the first one's infinite buy-in liquidity and the latter's ability to react continuously to new information.\nOne version of the DPM was implemented in the Yahoo! Buzz market [8] to experimentally test the market's prediction properties.\nThe foundations of the MSR lie in the idea of a proper scoring rule, which is a technique to reward forecasters in a way that encourages them to give their best prediction.\nThe innovation in the MSR is to use these scoring rules as instruments that can be traded, thus providing traders who have new information an incentive to trade.\nThe MSR was to be used in a policy analysis market in the Middle East [15], which was subsequently withdrawn.\nInformation markets rely on informed traders trading for their own profit, so it is critical to understand the strategic properties of these markets.\nThis is not an easy task, because markets are complex, and traders can influence each other's beliefs through their trades, and hence, can potentially achieve long term gains by manipulating the market.\nFor the MSR, it has been shown that, if we exclude the possibility of achieving gain through misleading other traders, it is optimal for each trader to honestly reflect her private belief in her trades.\nFor the DPM, we are not aware of any prior strategic analysis of this nature; in fact, a strategic hole was discovered while testing the DPM in the Yahoo! 
Buzz market [8].\n1.1 Our Results\nIn this paper, we seek to develop an analytic method to guide the design and strategic analysis of information markets.\nOur central contribution is a new abstract betting game, the projection 1 game, that serves as a useful model for information markets.\nThe projection game is conceptually simpler than the MSR and DPM, and thus it is easier to analyze.\nIn addition it has an attractive geometric visualization, which makes the strategic moves and interactions more transparent.\nWe present an analysis of the optimal strategies and profits in this game.\nWe then undertake an analysis of traders' costs and profits in the dynamic parimutuel market.\nRemarkably, we find that the cost of a sequence of trades in the DPM is identical to the cost of the corresponding moves in the projection game.\nFurther, if we assume that the traders beliefs at the end of trading match the true probability of the event being predicted, the traders' payoffs and profits in the DPM are identical to their payoffs and profits in a corresponding projection game.\nWe use the equivalence between the DPM and the projection game to prove that the DPM is arbitrage-free, deduce profitable strategies in the DPM, and demonstrate that constraints on the agents' trades are necessary to prevent a strategic breakdown.\nWe also prove an equivalence between the projection game and the MSR: We show that play in the MSR is strategically equivalent to play in a restricted projection game, at least for myopic strategies and small trades.\nIn particular, the profitability of any move under the spherical scoring rule is exactly proportional to the profitability of the corresponding move in the projection game restricted to a circle, with slight distortion of the prior probabilities.\nThis allows us to use the projection game as a conceptual model for market scoring rules.\nWe note that while the MSR with the spherical scoring rule somewhat resembles the projection game, due to 
the mathematical similarity of their profit expressions, the DPM model is markedly different and thus its equivalence to the projection game is especially striking.\nFurther, because the restricted projection game corresponds to a DPM with a natural trading constraint, this sheds light on an intriguing connection between the MSR and the DPM.\nLastly, we illustrate how the projection game model can be used to analyze the potential for manipulation of information markets for long-term gain .2 We present an example scenario in which such manipulation can occur, and suggest additional rules that might mitigate the possibility of manipulation.\nWe also illustrate another application to analyzing how a market maker can improve the prediction accuracy of a market in which traders will not trade unless their expected profit is above a threshold.\n1.2 Related Work\nNumerous studies have demonstrated empirically that market prices are good predictors of future events, and seem to aggregate the collected wisdom of all the traders [2, 3, 12, 1, 5, 16].\nThis effect has also been demonstrated in laboratory studies [13, 14], and has theoretical support in the literature of rational expectations [9].\nA number of recent studies have addressed the design of the market structure and trading rules for information markets, as well as the incentive to participate and other strategic issues.\nThe two papers most closely related to our work are the papers by Hanson [6] and Pennock [10].\nHowever, strategic issues in information markets have also been studied by Mangold et al. 
[8] and by Hanson, Oprea and Porter [7].\nAn upcoming survey paper [11] discusses costfunction formulations of automated market makers.\nOrganization of the paper The rest of this paper is organized as follows: In Section 2, we describe the projection game, and analyze the players' costs, profits, and optimal strategies in this game.\nIn Section 3, we study the dynamic parimutuel market, and show that trade in a DPM is equivalent to a projection game.\nWe establish a connection between the projection game and the MSR in Section 4.\nIn Section 5, we illustrate how the projection game can be used to analyze non-myopic, and potentially manipulative, actions.\nWe present our conclusions, and suggestions for future work, in Section 6.\n2.\nTHE PROJECTION GAME\n2.1 Strategic properties of the projection game\n3.\nDYNAMIC PARIMUTUEL MARKETS\n4.\nMARKET SCORING RULES\n4.1 Equivalence with Spherical Scoring Rule\nA + r2 A + r2\n4.2 Connection to other scoring rules\n5.\nUSING THE PROJECTION-GAME MODEL\n5.1 Traders with inertia\n5.2 Analyzing long-term strategies\n6.\nCONCLUSIONS AND FUTURE WORK\nWe have presented a simple geometric game, the projection game, that can serve as a model for strategic behavior in information markets, as well as a tool to guide the design of new information markets.\nWe have used this model to analyze the cost, profit, and strategies of a trader in a dynamic parimutuel market, and shown that both the dynamic parimutuel market and the spherical market scoring rule are strategically equivalent to the restricted projection game under slight distortion of the prior probabilities.\nThe general analysis was based on the assumption that traders do not actively try to mislead other traders for future profit.\nIn section 5, however, we analyze a small example market without this assumption.\nWe demonstrate that the projection game can be used to analyze traders' strategies in this scenario, and potentially to help design markets with better strategic 
properties.\nOur results raise several very interesting open questions.\nFirstly, the payoffs of the projection game cannot be directly implemented in situations in which the true probability is not ultimately revealed.\nIt would be very useful to have an automatic transformation of a given projection game into another game in which the payoffs can be implemented in expectation without knowing the probability, and preserves the strategic properties of the projection game.\nSecond, given the tight connection between the projection game and the spherical market scoring rule, it is natural to ask if we can find as strong a connection to other scoring rules or if not, to understand what strategic differences are implied by the form of the scoring rule used in the market.\nFinally, the existence of long-range manipulative strategies in information markets is of great interest.\nThe example we studied in section 5 merely scratches the surface of this area.\nA general study of this class of manipulations, together with a characterization of markets in which it can or cannot arise, would be very useful for the design of information markets.","lvl-2":"A Strategic Model for Information Markets\nABSTRACT\nInformation markets, which are designed specifically to aggregate traders' information, are becoming increasingly popular as a means for predicting future events.\nRecent research in information markets has resulted in two new designs, market scoring rules and dynamic parimutuel markets.\nWe develop an analytic method to guide the design and strategic analysis of information markets.\nOur central contribution is a new abstract betting game, the projection game, that serves as a useful model for information markets.\nWe demonstrate that this game can serve as a strategic model of dynamic parimutuel markets, and also captures the essence of the strategies in market scoring rules.\nThe projection game is tractable to analyze, and has an attractive geometric visualization that makes the strategic moves and interactions more transparent.\nWe use it 
to prove several strategic properties about the dynamic parimutuel market.\nWe also prove that a special form of the projection game is strategically equivalent to the spherical scoring rule, and it is strategically similar to other scoring rules.\nFinally, we illustrate two applications of the model to analysis of complex strategic scenarios: we analyze the precision of a market in which traders have inertia, and a market in which a trader can profit by manipulating another trader's beliefs.\n1.\nINTRODUCTION\nMarkets have long been used as a medium for trade.\nAs a side effect of trade, the participants in a market reveal something about their preferences and beliefs.\nFor example, in a financial market, agents would buy shares which they think are undervalued, and sell shares which they think are overvalued.\nIt has long been observed that, because the market price is influenced by all the trades taking place, it aggregates the private information of all the traders.\nThus, in a situation in which future events are uncertain, and each trader might have a little information, the aggregated information contained in the market prices can be used to predict future events.\nThis has motivated the creation of information markets, which are mechanisms for aggregating the traders' information about an uncertain event.\nInformation markets can be modeled as a game in which the participants bet on a number of possible outcomes, such as the results of a presidential election, by buying shares of the outcomes and receiving payoffs when the outcome is realized.\nAs in financial markets, the participants aim to maximize their profit by buying low and selling high.\nIn this way, the players' behavior transmits their personal information and beliefs about the possible outcomes, and can be used to predict the event more accurately.\nThe benefit of well-designed information markets goes beyond information aggregation; they can also be used as a hedging instrument, to allow 
traders to insure against risk.\nRecently, researchers have turned to the problem of designing market structures specifically to achieve better information aggregation properties than traditional markets.\nTwo designs for information markets have been proposed: the Dynamic Parimutuel Market (DPM) by Pennock [10] and the Market Scoring Rules (MSR) by Hanson [6].\nBoth the DPM and the MSR were designed with the goal of giving informed traders an incentive to trade, and to reveal their information as soon as possible, while also controlling the subsidy that the market designer needs to pump into the market.\nThe DPM was created as a combination of a pari-mutuel market (which is commonly used for betting on horses) and a continuous double auction, in order to simultaneously obtain the former's infinite buy-in liquidity and the latter's ability to react continuously to new information.\nOne version of the DPM was implemented in the Yahoo! Buzz market [8] to experimentally test the market's prediction properties.\nThe foundations of the MSR lie in the idea of a proper scoring rule, which is a technique to reward forecasters in a way that encourages them to give their best prediction.\nThe innovation in the MSR is to use these scoring rules as instruments that can be traded, thus providing traders who have new information an incentive to trade.\nThe MSR was to be used in a policy analysis market in the Middle East [15], which was subsequently withdrawn.\nInformation markets rely on informed traders trading for their own profit, so it is critical to understand the strategic properties of these markets.\nThis is not an easy task, because markets are complex, and traders can influence each other's beliefs through their trades, and hence, can potentially achieve long-term gains by manipulating the market.\nFor the MSR, it has been shown that, if we exclude the possibility of achieving gain through misleading other traders, it is optimal for each trader to honestly reflect 
her private belief in her trades.\nFor the DPM, we are not aware of any prior strategic analysis of this nature; in fact, a strategic hole was discovered while testing the DPM in the Yahoo! Buzz market [8].\n1.1 Our Results\nIn this paper, we seek to develop an analytic method to guide the design and strategic analysis of information markets.\nOur central contribution is a new abstract betting game, the projection game, that serves as a useful model for information markets.\nThe projection game is conceptually simpler than the MSR and DPM, and thus it is easier to analyze.\nIn addition, it has an attractive geometric visualization, which makes the strategic moves and interactions more transparent.\nWe present an analysis of the optimal strategies and profits in this game.\nWe then undertake an analysis of traders' costs and profits in the dynamic parimutuel market.\nRemarkably, we find that the cost of a sequence of trades in the DPM is identical to the cost of the corresponding moves in the projection game.\nFurther, if we assume that the traders' beliefs at the end of trading match the true probability of the event being predicted, the traders' payoffs and profits in the DPM are identical to their payoffs and profits in a corresponding projection game.\nWe use the equivalence between the DPM and the projection game to prove that the DPM is arbitrage-free, deduce profitable strategies in the DPM, and demonstrate that constraints on the agents' trades are necessary to prevent a strategic breakdown.\nWe also prove an equivalence between the projection game and the MSR: We show that play in the MSR is strategically equivalent to play in a restricted projection game, at least for myopic strategies and small trades.\nIn particular, the profitability of any move under the spherical scoring rule is exactly proportional to the profitability of the corresponding move in the projection game restricted to a circle, with slight distortion of the prior probabilities.\nThis 
allows us to use the projection game as a conceptual model for market scoring rules.\nWe note that while the MSR with the spherical scoring rule somewhat resembles the projection game, due to the mathematical similarity of their profit expressions, the DPM model is markedly different, and thus its equivalence to the projection game is especially striking.\nFurther, because the restricted projection game corresponds to a DPM with a natural trading constraint, this sheds light on an intriguing connection between the MSR and the DPM.\nLastly, we illustrate how the projection game model can be used to analyze the potential for manipulation of information markets for long-term gain.² We present an example scenario in which such manipulation can occur, and suggest additional rules that might mitigate the possibility of manipulation.\nWe also illustrate another application to analyzing how a market maker can improve the prediction accuracy of a market in which traders will not trade unless their expected profit is above a threshold.\n1.2 Related Work\nNumerous studies have demonstrated empirically that market prices are good predictors of future events, and seem to aggregate the collected wisdom of all the traders [2, 3, 12, 1, 5, 16].\nThis effect has also been demonstrated in laboratory studies [13, 14], and has theoretical support in the literature of rational expectations [9].\nA number of recent studies have addressed the design of the market structure and trading rules for information markets, as well as the incentive to participate and other strategic issues.\nThe two papers most closely related to our work are the papers by Hanson [6] and Pennock [10].\nHowever, strategic issues in information markets have also been studied by Mangold et al. 
[8] and by Hanson, Oprea and Porter [7].\nAn upcoming survey paper [11] discusses cost-function formulations of automated market makers.\nOrganization of the paper The rest of this paper is organized as follows: In Section 2, we describe the projection game, and analyze the players' costs, profits, and optimal strategies in this game.\nIn Section 3, we study the dynamic parimutuel market, and show that trade in a DPM is equivalent to a projection game.\nWe establish a connection between the projection game and the MSR in Section 4.\nIn Section 5, we illustrate how the projection game can be used to analyze non-myopic, and potentially manipulative, actions.\nWe present our conclusions, and suggestions for future work, in Section 6.\n2.\nTHE PROJECTION GAME\nIn this section, we describe an abstract betting game, the projection game; in the following sections, we will argue that both the MSR and the DPM are strategically similar to the projection game.\nThe projection game is conceptually simpler than the MSR and DPM, and hence should prove easier to analyze.\nFor clarity of exposition, here and in the rest of the paper we assume the space is two-dimensional, i.e., there are only two possible events.\nOur results easily generalize to more than two dimensions.\nWe also assume throughout that players are risk-neutral.\nSuppose there are two mutually exclusive and exhaustive events, A and B. (In other words, B is the same as \"not A\".)\nThere are n agents who may have information about the likelihood of A and B, and we (the designers) would like to aggregate their information.\nWe invite them to play the game described below: At any point in the game, there is a current state described by a pair of parameters, (x, y), which we sometimes write in vector form as x. 
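Before the formal definitions that follow, the game's bookkeeping can be sketched in code. This is a minimal illustration of our own (not code from the paper), using the cost, payoff, and profit-potential formulas that this section goes on to define: the cost of a move x → x′ is |x′| − |x|, and the payoff, once the true probability p is revealed, is q · (x′ − x) with q = (p, 1 − p)/|(p, 1 − p)|.

```python
import math

# Sketch of the projection game's bookkeeping (our own illustration).
# The state x = (x, y) holds the two share totals.

def norm(v):
    return math.hypot(v[0], v[1])

def q_vec(p):
    # q = (p, 1 - p) normalized to unit length.
    n = norm((p, 1.0 - p))
    return (p / n, (1.0 - p) / n)

def cost(x, x_new):
    # Cost of the move x -> x_new is |x_new| - |x|.
    return norm(x_new) - norm(x)

def payoff(x, x_new, p):
    # Payoff is the projection of the move onto the p-line: q . (x_new - x).
    qx, qy = q_vec(p)
    return qx * (x_new[0] - x[0]) + qy * (x_new[1] - x[1])

def profit(x, x_new, p):
    return payoff(x, x_new, p) - cost(x, x_new)

def potential(x, p):
    # Profit potential phi(x, p) = |x| - x . q; zero exactly on the p-line.
    qx, qy = q_vec(p)
    return norm(x) - (x[0] * qx + x[1] * qy)

# Path independence (Lemma 1): a two-step path costs exactly the direct move.
a, b, c = (1.0, 1.0), (2.0, 1.5), (3.0, 0.5)
assert abs(cost(a, b) + cost(b, c) - cost(a, c)) < 1e-12

# Lemma 3: the profit of a move equals the drop in profit potential.
p = 0.7
assert abs(profit(a, c, p) - (potential(a, p) - potential(c, p))) < 1e-12
```

The two assertions check, numerically, the path-independence property of Lemma 1 and the potential-difference characterization of profit in Lemma 3; both hold identically by the algebra of the definitions.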
Intuitively, x corresponds to the total holding of shares in A, and y corresponds to the holding of shares in B.\n² Here, we are referring only to manipulation of the information market for later gain from the market itself; we do not consider the possibility of traders having vested interests in the underlying events.\nIn each move of the game, one player (say i) plays an arrow (or segment) from (x, y) to (x', y').\nWe use the notation [(x, y) → (x', y')] or [x, x'] to denote this move.\nThe game starts at (0, 0), but the market maker makes the first move; without loss of generality, we can assume the move is to (1, 1).\nAll subsequent moves are made by players, in an arbitrary (and potentially repeating) sequence.\nEach move has a cost associated with it, given by\nC[(x, y) → (x', y')] = |x'| − |x|,\nwhere |·| denotes the Euclidean norm, |x| = √(x² + y²).\nNote that none of the variables are constrained to be nonnegative, and hence, the cost of a move can be negative.\nThe cost can be expressed in an alternative form, that is also useful.\nSuppose player i moves from (x, y) to (x', y').\nWe can write (x', y') as (x + l·e_x, y + l·e_y), such that l ≥ 0 and |e_x|² + |e_y|² = 1.\nWe call l the volume of the move, and (e_x, e_y) the direction of the move.\nAt any point (x̂, ŷ), there is an instantaneous price charged, defined as follows:\nc(x̂, e) = (x̂ · e) / |x̂|,\nwhere x̂ = (x̂, ŷ) and e is the direction of movement.\nNote that the price depends only on the angle between the vector (x̂, ŷ) and the segment [(x, y), (x', y')], and not on the lengths.\nThe total cost of the move is the price integrated over the segment [(x, y) → (x', y')], i.e.,\nC[x, x'] = ∫_0^l c(x + t·e, e) dt.\nWe assume that the game terminates after a finite number of moves.\nAt the end of the game, the true probability p of event A is determined, and the agents receive payoffs for the moves they made.\nLet q = (q_x, q_y) = (p, 1 − p) / |(p, 1 − p)|.\nThe payoff to agent i for a segment [(x, y) → (x', y')] is given by:\nP(x, x') = q · (x' − x).\nWe call the line through the origin with slope (1 − p) / p = q_y/q_x the 
p-line.\nNote that the payoff, too, may be negative.\nOne drawback of the definition of a projection game is that implementing the payoffs requires us to know the actual probability p.\nThis is feasible if the probability can eventually be determined statistically, such as when predicting the relative frequency of different recurring events, or vote shares.\nIt is also feasible for one-off events in which there is reason to believe that the true probability is either 0 or 1.\nFor other one-off events, it cannot be implemented directly (unlike scoring rules, which can be implemented in expectation).\nHowever, we believe that even in these cases, the projection game can be useful as a conceptual and analytical tool.\nThe moves, costs and payoffs have a natural geometric representation, which is shown in Figure 1 for three players with one move each.\nThe players append directed line segments in turn, and the payoff player i finally receives for a move is the projection of her segment onto the line with slope (1 − p) / p.\nHer cost is the difference of distances of the endpoints of her move to the origin.\n2.1 Strategic properties of the projection game\nWe begin our strategic analysis of the projection game by observing the following simple path-independence property.\nFigure 1: A projection game with three players\nLEMMA 1.\n[Path-Independence] Suppose there is a sequence of moves leading from (x, y) to (x', y').\nThen, the total cost of all the moves is equal to the cost of the single move [(x, y) → (x', y')], and the total payoff of all the moves is equal to the payoff of the single move [(x, y) → (x', y')].\nPROOF.\nThe proof follows trivially from the definition of the costs and payoffs: If we consider a path from point x to point x', both the net change in the vector lengths and the net projection onto the p-line are completely determined by x and x'.\nAlthough simple, path independence of profits is vitally important, because it implies (and 
is implied by) the absence of arbitrage in the market.\nIn other words, there is no sequence of moves that starts and ends at the same point but results in a positive profit.\nOn the other hand, if there were two paths from (x, y) to (x', y') with different profits, there would be a cyclic path with positive profit.\nFor ease of reference, we summarize some more useful properties of the cost and payoff functions in the projection game.\nLEMMA 2.\n1.\nThe instantaneous price for moving along a line through the origin is 1 or −1, when the move is away from or toward the origin respectively.\nThe instantaneous price along a circle centered at the origin is 0.\n2.\nWhen x moves along a circle centered at the origin to a point x̄ on the positive p-line, the corresponding payoff is P(x, x̄) = |x| − x · q, and the cost is C[x, x̄] = 0.\n3.\nThe two cost function formulations are equivalent:\nC[x, x + l·e] = ∫_0^l c(x + t·e, e) dt = |x + l·e| − |x|,\nwhere e is the unit vector giving the direction of the move.\nIn addition, when x moves along the positive p-line, the payoff is equal to the cost, P(x, x') = |x'| − |x|.\nPROOF.\n1.\nThe instantaneous price is c(x, e) = x · e / |x| = cos(x, e), where e is the direction of movement, and the result follows.\n2.\nSince x̄ is on the positive p-line, q · x̄ = |x̄| = |x|, hence P(x, x̄) = q · (x̄ − x) = |x| − x · q; the cost is 0 from the definition.\n3.\nFrom Part 1, the cost of moving from x to the origin is\nC[x, 0] = ∫_0^l c(x − t·e, −e) dt = −l = −|x|,\nwhere l = |x| and e = x/|x|.\nBy the path-independence property, C[x, x'] = C[x, 0] + C[0, x'] = |x'| − |x|.\nFinally, a point on the positive p-line gets projected to itself, namely q · x = |x|, so when the movement is along the positive p-line, P(x, x') = q · (x' − x) = |x'| − |x| = C[x, x'].\nWe now consider the question of which moves are profitable in this game.\nThe eventual profit of a move [x, x'], where x' = x + l · 
(e_x, e_y), is\nprofit(x, x') = P(x, x') − C[x, x'] = q · (x' − x) − (|x'| − |x|),\nand the marginal profit with respect to the volume l is q · e − c(x + l·e, e).\nWe observe that this is 0 if p(y + l·e_y) = (1 − p)(x + l·e_x), in other words, when the vectors q and (x + l·e) are exactly aligned.\nFurther, we observe that the price is non-decreasing with increasing l. Thus, along the direction e, the profit is maximized at the point of intersection with the p-line.\nBy Lemma 2, there is always a path from x to the positive p-line with 0 cost, which is given by an arc of the circle with center at the origin and radius |x|.\nAlso, any movement along the p-line has 0 additional profit.\nThus, for any point x, we can define the profit potential φ(x, p) by φ(x, p) = |x| − x · q. Note that the potential is positive for x off the positive p-line and zero for x on the line.\nNext we show that a move to a lower potential is always profitable.\nLEMMA 3.\nThe profit of a move [x, x'] is equal to the difference in potential φ(x, p) − φ(x', p).\nPROOF.\nDenote z = |x| q and z' = |x'| q, i.e., these are the points of intersection of the positive p-line with the circles centered at the origin with radii |x| and |x'| respectively.\nBy the path-independence property and Lemma 2, the profit of move [x, x'] is\nprofit[x, x'] = profit[x, z] + profit[z, z'] + profit[z', x'] = φ(x, p) + 0 − φ(x', p).\nThus, the profit of the move is equal to the change in profit potential between the endpoints.\nThis lemma offers another way of seeing that it is optimal to move to the point of lowest potential, namely to the p-line.\nFigure 2: The profit of move [x, x'] is equal to the change in profit potential from x to x'.\n3.\nDYNAMIC PARIMUTUEL MARKETS\nThe dynamic parimutuel market (DPM) was introduced by Pennock [10] as an information market structure that encourages informed traders to trade early, has guaranteed liquidity, and requires a bounded subsidy.\nThis market structure was used in the Yahoo! 
Buzz market [8].\nIn this section, we show that the dynamic parimutuel market is also remarkably similar to the projection game.\nCoupled with Section 4, this also demonstrates a strong connection between the DPM and MSR.\nIn a two-event DPM, users can place bets on either event A or B at any time, by buying a share in the appropriate event.\nThe price of a share is variable, determined by the total amount of money in the market and the number of shares currently outstanding.\nFurther, existing shares can be sold at the current price.\nAfter it is determined which event really happens, the shares are liquidated for cash.\nIn the \"total-money-redistributed\" variant of DPM, which is the variant used in the Yahoo! market, the total money is divided equally among the shares of the winning event; shares of the losing event are worthless.\nNote that the payoffs are undefined if the event has zero outstanding shares; the DPM rules should preclude this possibility.\nWe use the following notation: Let x be the number of outstanding shares of A (totalled over all traders), and y be the number of outstanding shares in B. Let M denote the total money currently in the market.\nLet c_A and c_B denote the prices of shares in A and B respectively.\nThe price of a share in the Yahoo! 
DPM is determined by the "share-ratio" principle: the form of the prices can be fully determined by stipulating that, for any given value of M, x, and y, there must be some probability pA such that, if a trader believes that pA is the probability that A will occur and that the market will liquidate in the current state, she cannot expect to profit from either buying or selling either share. This gives us the instantaneous prices cA = Mx/(x² + y²) and cB = My/(x² + y²), corresponding to the implied probability pA = x²/(x² + y²).

Cost of a trade in the DPM. Consider a trader who comes to a DPM in state (M, x, y), and buys or sells shares such that the eventual state is (M', x', y'). What is the net cost, M' − M, of her move?

THEOREM 4. The net cost of the move is M' − M = M0(√(x'² + y'²) − √(x² + y²)) for some constant M0. In other words, it is a constant multiple of the corresponding cost in the projection game.

PROOF. Consider the function G(x, y) = M0√(x² + y²). The function G is differentiable for all (x, y) ≠ (0, 0), and its partial derivatives are ∂G/∂x = M0 x/√(x² + y²) = G(x, y) · x/(x² + y²) and ∂G/∂y = M0 y/√(x² + y²) = G(x, y) · y/(x² + y²). Now, compare these equations to the prices in the DPM, and observe that, as a trader buys or sells in the DPM, the instantaneous price is the derivative of the money. It follows that, if at any point of time the DPM is in a state (M, x, y) such that M = G(x, y), then, at all subsequent points of time, the state (M', x', y') of the DPM will satisfy M' = G(x', y'). Finally, note that we can pick the constant M0 such that the equation is satisfied for the initial state of the DPM, and hence, it will always be satisfied.

One important consequence of Theorem 4 is that the dynamic parimutuel market is arbitrage-free (using Lemma 1). It is interesting to note that the original Yahoo! Buzz market used a different pricing rule, which did permit arbitrage; the price rule was changed to the share-ratio rule after traders started exploiting the arbitrage opportunities [8]. Another somewhat surprising consequence is that the numbers of outstanding shares x, y completely determine the total capitalization M of the DPM.

Constraints in the DPM. Although it might seem, based on the costs, that any move in the projection game has an equivalent move in the DPM, the DPM places some constraints on trades. First, no trader is allowed to have a net negative holding in either share. This is important, because it ensures that the total holdings in each share are always positive. However, this is a boundary constraint, and it does not impact the strategic choices of a player with a sufficiently large positive holding in each share; thus, we can ignore this constraint in a first-order strategic analysis of the DPM. Second, for practical reasons a DPM will probably have a minimum unit of trade, but we assume here that arbitrarily small quantities can be traded.

Payoffs in the DPM. At some point, trading in the DPM ceases and shares are liquidated. We assume here that the true probability becomes known at liquidation time, and describe the payoffs in terms of that probability; however, if the probability is not revealed, only the event that actually occurs, these payoffs can be implemented in expectation. Suppose the DPM terminates in a state (M, x, y), and the true probability of event A is p. When the dynamic parimutuel market is liquidated, the shares are paid off in the following way: each owner of a share of A receives p · M/x, and each owner of a share of B receives (1 − p) · M/y, for each share owned.

The payoffs in the DPM, although given by a fairly simple form, are conceptually complex, because the payoff of a move depends on the subsequent moves before the market liquidates. Thus, a fully rational choice of move in the DPM for player i should take into account the actions of subsequent players, including player i himself. Here, we restrict the analysis to myopic, infinitesimal strategies: given that the market position is (M, x, y), in which direction should a player make an infinitesimal move in order to maximize her profit? We show that the infinitesimal payoffs and profits of a DPM with true probability p correspond strategically to the infinitesimal payoffs and profits of a projection game with odds √(p/(1 − p)), in the following sense:

LEMMA 5. Given a market position (M, x, y) and true probability p:
• If x/y < √(p/(1 − p)), player i profits by buying shares in A, or selling shares in B.
• If x/y > √(p/(1 − p)), player i profits by selling shares in A, or buying shares in B.

PROOF. Consider the cost and payoff of buying a small quantity Δx of shares in A. The cost is C[(x, y) → (x + Δx, y)] = Δx · Mx/(x² + y²), and the payoff is Δx · p · M/x. Thus, buying the shares is profitable iff p · M/x > Mx/(x² + y²), i.e., iff x/y < √(p/(1 − p)); conversely, selling shares in A is profitable if x/y > √(p/(1 − p)). The analysis for buying or selling B is similar, with p and (1 − p) interchanged.

It follows from Lemma 5 that it is myopically profitable for players to move towards the line with slope √((1 − p)/p). Note that this slope is monotonic in p over its range, so this line is uniquely defined, and each such line also corresponds to a unique p. However, because the actual payoff of a move depends on the future moves, players must base their decisions on some belief about the final state of the market. In the light of Lemma 5, one natural, rational-expectation style assumption is that the final state of the market lies on the line corresponding to the true probability. (In other words, one might assume that the traders' beliefs will ultimately converge to the true probability p; knowing p, the traders will drive the market state to satisfy x/y = √(p/(1 − p)).) This is very plausible in markets (such as the Yahoo!
Buzz market) in which trading is permitted right until the market is liquidated, at which point there is no remaining uncertainty about the relevant frequencies. Under this assumption, we can prove an even tighter connection between payoffs in the DPM (where the true probability is p) and payoffs in the projection game:

THEOREM 6. Suppose the DPM liquidates in a state satisfying x/y = √(p/(1 − p)), and assume without loss of generality that the constant M0 = 1, so M = √(x² + y²). Then, the final payoff for any move [x → x'] made in the course of trading is (x' − x) · (√p, √(1 − p)), i.e., it is the same as the payoff in the projection game with odds √(p/(1 − p)).

PROOF. First, observe that x/M = √p and y/M = √(1 − p). The final payoff is the liquidation value of (x' − x) shares of A and (y' − y) shares of B, which is (x' − x) · p · M/x + (y' − y) · (1 − p) · M/y = (x' − x)√p + (y' − y)√(1 − p) = (x' − x) · (√p, √(1 − p)).

Strategic Analysis for the DPM. Theorems 4 and 6 give us a very strong equivalence between the projection game and the dynamic parimutuel market, under the assumption that the DPM converges to the optimal value for the true probability. A player playing in a DPM with true odds p/(1 − p) can imagine himself playing in the projection game with odds √(p/(1 − p)), because both the costs and the payoffs of any given move are identical. Using this equivalence, we can transfer all the strategic properties proven for the projection game directly to the analysis of the dynamic parimutuel market. One particularly interesting conclusion we can draw is as follows: in the absence of any constraint that disallows it, it is always profitable for an agent to move towards the origin, by selling shares in both A and B while maintaining the ratio x/y. In the DPM, this is limited by forbidding short sales, so players can never have negative holdings in either share. As a result, when their holding in one share (say A) is 0, they can't use the strategy of moving towards the origin. We can conclude that a rational player should never hold shares of both A and B simultaneously, regardless of her beliefs and the
market position. This discussion leads us to consider a modified DPM, in which this strategic loophole is addressed directly: instead of disallowing all short sales, we place a constraint that no agent ever reduces the total market capitalization M (or, alternatively, that any agent's total investment in the market is always non-negative). We call this the "non-decreasing market capitalization" constraint for the DPM. This corresponds to a restriction that no move in the projection game reduces the radius. However, we can conclude from the preceding discussion that players have no incentive to ever increase the radius. Thus, the moves of the projection game would all lie on the quarter circle in the positive quadrant, with radius determined by the market maker's move. In section 4, we show that the projection game on this quarter circle is strategically equivalent (at least myopically) to trade in a Market Scoring Rule. Thus, the DPM and MSR appear to be deeply connected to each other, like different interfaces to the same underlying game.

4. MARKET SCORING RULES

The Market Scoring Rule (MSR) was introduced by Hanson [6]. It is based on the concept of a proper scoring rule, a technique that rewards forecasters for giving their best predictions. Hanson's innovation was to turn the scoring rules into instruments that can be traded, thereby providing traders who have new information an incentive to trade. One positive effect of this design is that even a single trader still has an incentive to trade (which is equivalent to updating the scoring-rule report to reflect her information), thereby eliminating the problem of thin markets and illiquidity. In this section, we show that, when the scoring rule used is the spherical scoring rule [4], there is a strong strategic equivalence between the projection game and the market scoring rule. Proper scoring rules are tools used to reward forecasters who predict the probability distribution of an event. We describe
this in the simple setting of two exhaustive, mutually exclusive events A and B, where proper scoring rules are defined as follows. Suppose the forecaster predicts that the probabilities of the events are r = (rA, rB), with rA + rB = 1. The scoring rule is specified by functions sA(rA, rB) and sB(rA, rB), which are applied as follows: if the event A occurs, the forecaster is paid sA(rA, rB), and if the event B occurs, the forecaster is paid sB(rA, rB). The key property that a proper scoring rule satisfies is that the expected payment is maximized when the report is identical to the true probability distribution.

4.1 Equivalence with Spherical Scoring Rule

In this section, we focus on one specific scoring rule: the spherical scoring rule [4], given in two dimensions by sA(r) = rA/√(rA² + rB²) and sB(r) = rB/√(rA² + rB²). The spherical scoring rule is known to be a proper scoring rule, and the definition generalizes naturally to higher dimensions. We now demonstrate a close connection between the projection game restricted to a circular arc and a market scoring rule that uses the spherical scoring rule. At this point, it is convenient to use vector notation. Let x = (x, y) denote a position in the projection game. We consider the projection game restricted to the circle |x| = 1.

Restricted projection game. Consider a move in this restricted projection game from x to x'. Recall that q denotes the unit vector along the odds line of the game; for the game with odds p/(1 − p) considered here, where p is the true probability of the event, q = μp(p, 1 − p) with μp = 1/√(p² + (1 − p)²). Then, the projection game profit of a move is SEG-PROFIT([x, x']) = q · (x' − x); moves along the circle do not change the radius, and hence have no cost.

Spherical scoring rule profit. We now turn our attention to the MSR with the spherical scoring rule (SSR). Consider a player who changes the report from r to r'. Then, if the true probability of A is p, her expected profit is SSR-PROFIT([r, r']) = p(sA(r') − sA(r)) + (1 − p)(sB(r') − sB(r)). Now, let us represent the initial and final positions in terms of circular coordinates. For r = (rA, rB), define the corresponding coordinates x = (rA/√(rA² + rB²), rB/√(rA² + rB²)). Note that the coordinates satisfy |x| = 1, and thus correspond to valid coordinates for the restricted projection game. Now, let p denote the vector (p, 1 − p). Then, expanding the spherical scoring functions sA, sB, the player's profit for a move from r to r' can be rewritten in terms of the corresponding coordinates x, x' as SSR-PROFIT([x, x']) = p · (x' − x). For any collection X of moves, the total payoff in the SSR market is given by summing over the moves: SSR-PROFIT(X) = Σ of p · (x' − x) over all moves [x, x'] in X. Finally, we note that p and q are related by q = μp · p, where μp = 1/√(p² + (1 − p)²) is a scalar that depends only on p. This immediately gives us the following strong strategic equivalence for the restricted projection game and the SSR market:

THEOREM 7. For any collection X of moves, SEG-PROFITp(X) is positive (respectively zero, or negative) if and only if SSR-PROFITp(X) is positive (respectively zero, or negative).

PROOF. As derived above, SEG-PROFITp(X) = q · Σ(x' − x) = μp p · Σ(x' − x) = μp SSR-PROFITp(X). For all p, 1 ≤ μp ≤ √2 (or, more generally, for an n-dimensional probability vector p, 1 ≤ μp = 1/|p| ≤ √n); in particular, μp is strictly positive, so the two profits always have the same sign.

Although Theorem 7 is stated in terms of the sign of the payoff, it extends to relative payoffs of two collections of moves:

THEOREM 8. For any two collections X, X' of moves, SEG-PROFITp(X) ≥ SEG-PROFITp(X') if and only if SSR-PROFITp(X) ≥ SSR-PROFITp(X').

PROOF. Every move [x, x'] has a corresponding inverse move [x', x]. In both the projection game and the SSR, the profit of the inverse move is simply the negative of the profit of the move (the moves are reversible). We can define a collection of moves X'' = X − X' by adding the inverses of the moves in X' to X.
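To make the scaling relation q = μp · p concrete, here is a small numerical sketch (Python; the helper names and the random collection of moves are our own illustrative choices, not from the paper) that computes the restricted projection game profit and the SSR profit of the same collection of moves and checks that they differ exactly by the factor μp, and hence always share the same sign.

```python
import math
import random

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def ssr_profit(p, moves):
    # Expected SSR profit of a collection of moves: the probability
    # vector (p, 1-p) dotted with each displacement on the unit circle.
    pvec = (p, 1.0 - p)
    return sum(dot(pvec, (b[0] - a[0], b[1] - a[1])) for a, b in moves)

def seg_profit(p, moves):
    # Restricted projection-game profit: q is the unit vector along
    # (p, 1-p); moves along the circle have no radius cost.
    n = math.hypot(p, 1.0 - p)
    q = (p / n, (1.0 - p) / n)
    return sum(dot(q, (b[0] - a[0], b[1] - a[1])) for a, b in moves)

random.seed(1)
p = 0.3
mu_p = 1.0 / math.hypot(p, 1.0 - p)   # scaling factor, 1 <= mu_p <= sqrt(2)

# A random collection of moves along the positive quarter-circle.
angles = [random.uniform(0.0, math.pi / 2) for _ in range(6)]
points = [(math.cos(a), math.sin(a)) for a in angles]
moves = list(zip(points, points[1:]))

print(seg_profit(p, moves), mu_p * ssr_profit(p, moves))  # equal
```

Because the two profits differ only by the strictly positive scalar μp, a collection of moves is profitable in one market exactly when it is profitable in the other, which is the content of Theorem 7.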
Note that SEG-PROFITp(X'') = SEG-PROFITp(X) − SEG-PROFITp(X') and SSR-PROFITp(X'') = SSR-PROFITp(X) − SSR-PROFITp(X'); applying Theorem 7 completes the proof.

It follows that the ex post optimality of a move (or set of moves) is the same in both the projection game and the SSR market. On its own, this strong ex post equivalence is not completely satisfying, because in any non-trivial game there is uncertainty about the value of p, and the different scaling ratios for different p could lead to different ex ante optimal behavior. We can extend the correspondence to settings with uncertain p, as follows:

THEOREM 9. Consider the restricted projection game with some prior probability distribution F over possible values of p. Then, there is a probability distribution G with the same support as F, and a strictly positive constant c that depends only on F, such that:
• (i) For any collection X of moves, the expected profits are related by EF(SEG-PROFIT(X)) = c · EG(SSR-PROFIT(X)).
• (ii) For any collection X, and any measurable information set I ⊆ [0, 1], the expected profits restricted to the event p ∈ I are related by the same strictly positive constant factor.
The converse also holds: for any probability distribution G, there is a distribution F such that both these statements are true.

PROOF. For simplicity, assume that F has a density function f (the result holds even for non-continuous distributions). Then, let c = ∫₀¹ μp f(p) dp, and define the density function g of the distribution G by g(p) = μp f(p)/c. Part (i) follows because, for each fixed p, SEG-PROFITp(X) = μp SSR-PROFITp(X), so EF(SEG-PROFIT(X)) = ∫₀¹ μp SSR-PROFITp(X) f(p) dp = c ∫₀¹ SSR-PROFITp(X) g(p) dp = c · EG(SSR-PROFIT(X)). To prove part (ii), we simply restrict the integral to values in I. The converse follows similarly by constructing F from G.

Figure 3: Sample score curves for the log scoring rule.
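As a sanity check on this construction, the following sketch (Python; the uniform prior F, the fixed move, and the midpoint-rule integration are illustrative choices of ours) builds the distorted density g(p) = μp f(p)/c numerically and verifies that EF(SEG-PROFIT) = c · EG(SSR-PROFIT) for a move on the quarter-circle.

```python
import math

def mu(p):
    # mu_p = 1/|(p, 1-p)|: the pointwise ratio between projection-game
    # and SSR profits for true probability p.
    return 1.0 / math.hypot(p, 1.0 - p)

def ssr_profit(p, x1, x2):
    # SSR profit of the move x1 -> x2 on the unit circle.
    return p * (x2[0] - x1[0]) + (1.0 - p) * (x2[1] - x1[1])

def seg_profit(p, x1, x2):
    # Projection-game profit of the same move (no radius cost).
    return mu(p) * ssr_profit(p, x1, x2)

def integrate(h, n=2000):
    # Midpoint rule on [0, 1]; accurate enough for this check.
    return sum(h((i + 0.5) / n) for i in range(n)) / n

f = lambda p: 1.0                      # illustrative prior: uniform F
c = integrate(lambda p: mu(p) * f(p))  # normalizing constant
g = lambda p: mu(p) * f(p) / c         # distorted density of G

# One fixed move on the quarter-circle.
x1 = (math.cos(0.3), math.sin(0.3))
x2 = (math.cos(1.1), math.sin(1.1))

ef_seg = integrate(lambda p: seg_profit(p, x1, x2) * f(p))
eg_ssr = integrate(lambda p: ssr_profit(p, x1, x2) * g(p))

print(ef_seg, c * eg_ssr)  # equal up to floating-point error
```

Since μp lies between 1 and √2, the constant c does too, so the distortion between F and G is mild for any prior.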
Analysis of MSR strategies. Theorem 9 provides the foundation for analysis of strategies in scoring rule markets. To the extent that strategies in these markets are independent of the specific scoring rule used, we can use the spherical scoring rule as the market instrument. Then, analysis of strategies in the projection game with a slightly distorted distribution over p can be used to understand the strategic properties of the original market situation.

Implementation in expectation. Another important consequence of Theorem 9 is that the restricted projection game can be implemented with a small distortion in the probability distribution over values of p, by using a Spherical Scoring Rule to implement the payoffs. This makes the projection game valuable as a design tool; for example, we can analyze new constraints and rules in the projection game, and then implement them via the SSR. Unfortunately, the result does not extend to unrestricted projection games, because the relative profit of moving along the circle versus changing radius is not preserved through this transformation. However, it is possible to extend the transformation to projection games in which the radius ri after the ith move is a fixed function of i (not necessarily constant), so that it is not within the strategic control of the player making the move; such games can also be strategically implemented via the spherical scoring rule (with distortion of priors).

4.2 Connection to other scoring rules

In this section, we show a weaker similarity between the projection game and the MSR with other scoring rules. We prove an infinitesimal similarity between the restricted projection game and the MSR with the log scoring rule; the result generalizes to all proper scoring rules that have a unique local and global maximum. A geometric visualization of some common scoring rules in two dimensions is depicted in Figure 3. The score curves in the figure are defined by {(s1(r), s2(r)) | r = (r, 1 − r), r
∈ [0, 1]}. Similarly to the projection game, define the profit potential of a probability r in the MSR to be the change in profit for moving from r to the optimum p: φMSR(s(r), p) = profitMSR[s(r), s(p)]. We will show that the profit potentials in the two games have analogous roles for analyzing the optimal strategies; in particular, both potential functions have a global minimum of 0 at r = p.

THEOREM 10. Consider the projection game restricted to the non-negative unit circle, where strategies x have the natural one-to-one correspondence to probability distributions r = (r, 1 − r) given by x = (r/|r|, (1 − r)/|r|). Trade in a log market scoring rule is strategically similar to trade in the projection game on the quarter-circle, in that the potential is strictly decreasing for r < p and strictly increasing for r > p, both for the projection game and MSR potentials φ(·).

PROOF. (sketch) For the log scoring rule, the potential is φMSR = p log(p/r) + (1 − p) log((1 − p)/(1 − r)), and its derivative with respect to r is dφMSR/dr = −p/r + (1 − p)/(1 − r) = (r − p)/(r(1 − r)). Since r = (r, 1 − r) is a probability distribution, this expression is positive for r > p and negative for r < p, so the potential function along the circle is decreasing and then increasing with r, similarly to an energy function, with a global minimum at r = p, as desired.

Theorem 10 establishes that the market log-scoring rule is strategically similar to the projection game played on a circle, in the sense that the optimal direction of movement at the current state is the same in both games. For example, if the current state is r ≠ p, the myopically optimal infinitesimal move in both games is in the direction of p. Although stated for log-scoring rules, the theorem holds for any scoring rules that induce a potential with a unique local and global minimum at p, such as the quadratic scoring rule and others.

5. USING THE PROJECTION-GAME MODEL

The chief advantages of the projection game are that it is analytically tractable, and also easy to visualize. In Section 3, we used the projection-game model of the DPM to prove the absence of arbitrage, and to infer strategic properties that might have been difficult to deduce otherwise. In this section, we provide two examples that illustrate the power of projection-game analysis for gaining insight about more complex strategic settings.

5.1 Traders with inertia

The standard analysis of trader behavior in any of the market forms we have studied asserts that traders who disagree with the market probabilities will expect to gain from changing the probability, and thus have a strict incentive to trade in the market. The expected gain may, however, be very small. A plausible model of real trader behavior might include some form of inertia or ε-optimality: we assume that traders will trade only if their expected profit is greater than some constant ε. We do not attempt to justify this model here; rather, we illustrate how the projection game may be used to analyze such situations, and shed some light on how to modify the trading rules to alleviate this problem. Consider the simple projection game restricted to a circular arc with unit radius; as we have seen, this corresponds closely to the spherical market
scoring rule, and to the dynamic parimutuel market under a reasonable constraint. Now, suppose the market probability is p, and a trader believes the true probability is p'. Then, her expected gain can be calculated as follows: let q and q' be the unit vectors in the directions of p and p' respectively. The expected profit is given by E = φ(q, p') = 1 − q · q'. Thus, the trader will trade only if 1 − q · q' > ε. If we let θ and θ' be the angles of the p-line and p'-line respectively (from the x-axis), we get E = 1 − cos(θ − θ'); when θ is close to θ', a Taylor series approximation gives us that E ≈ (θ − θ')²/2. Thus, we can derive a bound on the limit of the market accuracy: the market price will not change as long as (θ − θ')² < 2ε.

Now, suppose a market operator faced with this situation wanted to sharpen the accuracy of the market. One natural approach is simply to multiply all payoffs by a constant. This corresponds to using a larger circle in the projection game, and would indeed improve the accuracy. However, it will also increase the market maker's exposure to loss: the market maker would have to pump in more money to achieve this. The projection game model suggests a natural approach to improving the accuracy while retaining the same bounds on the market maker's loss. The idea is that, instead of restricting all moves to being on the unit circle, we force each move to have a slightly larger radius than the previous move. Suppose we insist that, if the current radius is r, the next trader has to move to radius r + 1. Then, the trader's expected profit would be E = r(1 − cos(θ − θ')). Using the same approximation as above, the trader would trade as long as (θ − θ')² > 2ε/r. Now, even if the market maker seeded the market with r = 1, the radius would increase with each trade, and the incentives to sharpen the estimate increase with every trade.

5.2 Analyzing long-term strategies

Up to this point, our analysis has been restricted to trader
strategies that are myopic in the sense that traders do not consider the impact of their trades on other traders' beliefs. In practice, an informed trader can potentially profit by playing a suboptimal strategy to mislead other traders, in a way that allows her to profit later. In this section, we illustrate how the projection game can be used to analyze an instance of this phenomenon, and to design market rules that mitigate this effect. The scenario we consider is as follows. There are two traders speculating on the probability of an event E, who each get a 1-bit signal. The optimal probability for each 2-bit signal pair is as follows: if trader 1 gets signal 0 and trader 2 gets signal 0, the optimal probability is 0.3. If trader 1 gets signal 0 and trader 2 gets signal 1, the optimal probability is 0.9. If trader 1 gets signal 1 and trader 2 gets signal 0, the optimal probability is 0.7. If trader 1 gets signal 1 and trader 2 gets signal 1, the optimal probability is 0.1. (Note that the impact of trader 2's signal is in a different direction, depending on trader 1's signal.) Suppose that the prior distribution of the signals is that trader 1 is equally likely to get a 0 or a 1, but trader 2 gets a 0 with probability 0.55 and a 1 with probability 0.45. The traders are playing the projection game restricted to a circular arc. This setup is depicted in Figure 4.

Figure 4: Example illustrating non-myopic deception

Suppose that, for some exogenous reason, trader 1 has the opportunity to trade first, followed by trader 2. Then, trader 1 has the option of placing a last-minute trade just before the market closes. If traders were playing their myopically optimal strategies, here is how the market would run: if trader 1 sees a 0, he would move to some point Y that is between A and C, but closer to C.
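The geometry of this setup is easy to check numerically. The sketch below (Python; the embedding of probabilities onto the unit circle as (p, 1 − p)/|(p, 1 − p)| and the point labels are our illustrative choices) computes the four optimal points and trader 1's interim points, and confirms the two facts the argument relies on: the chords XY, BC, and AD are parallel, and AD and BC are longer than XY.

```python
import math

def pos(p):
    # Place probability p on the unit circle as (p, 1-p)/|(p, 1-p)|.
    # (One possible embedding for the restricted projection game.)
    n = math.hypot(p, 1.0 - p)
    return (p / n, (1.0 - p) / n)

def chord(u, v):
    # Chord vector from u to v, and its length.
    dx, dy = v[0] - u[0], v[1] - u[1]
    return (dx, dy), math.hypot(dx, dy)

# Optimal points for the four signal pairs: A = 0.9, B = 0.7, C = 0.3, D = 0.1.
A, B, C, D = pos(0.9), pos(0.7), pos(0.3), pos(0.1)

# Trader 1's myopically optimal interim points, marginalizing over
# trader 2's signal (Pr[t2 = 0] = 0.55, Pr[t2 = 1] = 0.45):
Y = pos(0.55 * 0.3 + 0.45 * 0.9)   # trader 1 saw 0: p = 0.57
X = pos(0.55 * 0.7 + 0.45 * 0.1)   # trader 1 saw 1: p = 0.43

(v_xy, len_xy) = chord(X, Y)
(v_bc, len_bc) = chord(C, B)
(v_ad, len_ad) = chord(D, A)

# Zero cross-products: the three chords are parallel (by the symmetry
# of the probabilities about 0.5, they all point along (1, -1)).
print(v_xy[0] * v_bc[1] - v_xy[1] * v_bc[0])   # ~0
print(v_xy[0] * v_ad[1] - v_xy[1] * v_ad[0])   # ~0
print(len_xy, len_bc, len_ad)                  # AD > BC > XY
```

Because the chords are parallel and AD, BC are strictly longer than XY, the correction segments always project farther onto any p-line than the deceptive first segment does.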
Trader 2 would then infer that trader 1 received a 0 signal, and move to A or C if she got a 1 or 0 respectively. Trader 1 has no reason to move again. If trader 1 had got a 1, he would move to a different point X instead, and trader 2 would move to D if she saw a 1 and B if she saw a 0. Again, trader 1 would not want to move again. Using the projection game, it is easy to show that, if traders consider non-myopic strategies, this set of strategies is not an equilibrium. The exact position of the points does not matter; all we need is their relative position, and the observation that, because of the perfect symmetry in the setup, the segments XY, BC, and AD are all parallel to each other. Now, suppose trader 1 got a 0. He could move to X instead of Y, to mislead trader 2 into thinking he got a 1. Then, when trader 2 moved to, say, D, trader 1 could correct the market estimate to A. To show that this is a profitable deviation, observe that this strategy is equivalent to playing two additional moves over trader 1's myopic strategy of moving to Y. The first move, YX, may move either toward or away from the optimal final position. The second move, DA or BC, is always in the correct direction. Further, because DA and BC are longer than XY, and parallel to XY, their projection on the final p-line will always be greater in absolute value than the projection of XY, regardless of what the true p-line is! Thus, the deception would result in a strictly higher expected profit for trader 1. Note that this problem is not specific to the projection game form: our equivalence results show that it could arise in the MSR or DPM (perhaps with a different prior distribution and different numerical values). Observe also that a strategy profile in which neither trader moved in the first two rounds, and trader 1 moved to either X or Y in the final round, would be a subgame-perfect equilibrium in this setup. We suggest that one approach to mitigating this problem might be to reduce the radius at every move. This
essentially provides a form of discounting that motivates trader 1 to take his profit early rather than mislead trader 2. Graphically, the right reduction factor would make the segments AD and BC shorter than XY (as they become chords on a smaller circle), thus making the myopic strategy optimal.

6. CONCLUSIONS AND FUTURE WORK

We have presented a simple geometric game, the projection game, that can serve as a model for strategic behavior in information markets, as well as a tool to guide the design of new information markets. We have used this model to analyze the cost, profit, and strategies of a trader in a dynamic parimutuel market, and shown that both the dynamic parimutuel market and the spherical market scoring rule are strategically equivalent to the restricted projection game under a slight distortion of the prior probabilities. The general analysis was based on the assumption that traders do not actively try to mislead other traders for future profit. In section 5, however, we analyze a small example market without this assumption. We demonstrate that the projection game can be used to analyze traders' strategies in this scenario, and potentially to help design markets with better strategic properties. Our results raise several very interesting open questions. First, the payoffs of the projection game cannot be directly implemented in situations in which the true probability is not ultimately revealed. It would be very useful to have an automatic transformation of a given projection game into another game whose payoffs can be implemented in expectation without knowing the probability, and which preserves the strategic properties of the projection game. Second, given the tight connection between the projection game and the spherical market scoring rule, it is natural to ask whether we can find as strong a connection to other scoring rules or, if not, to understand what strategic differences are implied by the form of the scoring rule used in the
market.\nFinally, the existence of long-range manipulative strategies in information markets is of great interest.\nThe example we studied in section 5 merely scratches the surface of this area.\nA general study of this class of manipulations, together with a characterization of markets in which it can or cannot arise, would be very useful for the design of information markets.","keyphrases":["inform market","market score rule","dynam parimutuel market","strateg analysi","project game","spheric score rule","project game model","predict market","long-rang manipul strategi","social and behavior scienc-econom","liquid time","dpm","msr"],"prmu":["P","P","P","P","P","P","R","R","M","M","U","U","U"]} {"id":"C-38","title":"A Framework for Architecting Peer-to-Peer Receiver-driven Overlays","abstract":"This paper presents a simple and scalable framework for architecting peer-to-peer overlays called Peer-to-peer Receiverdriven Overlay (or PRO). PRO is designed for non-interactive streaming applications and its primary design goal is to maximize delivered bandwidth (and thus delivered quality) to peers with heterogeneous and asymmetric bandwidth. To achieve this goal, PRO adopts a receiver-driven approach where each receiver (or participating peer) (i) independently discovers other peers in the overlay through gossiping, and (ii) selfishly determines the best subset of parent peers through which to connect to the overlay to maximize its own delivered bandwidth. Participating peers form an unstructured overlay which is inherently robust to high churn rate. Furthermore, each receiver leverages congestion controlled bandwidth from its parents as implicit signal to detect and react to long-term changes in network or overlay condition without any explicit coordination with other participating peers. 
Independent parent selection by individual peers dynamically converges to an efficient overlay structure.","lvl-1":"A Framework for Architecting Peer-to-Peer Receiver-driven Overlays Reza Rejaie Department of Computer Science University of Oregon reza@cs.uoregon.edu Shad Stafford Department of Computer Science University of Oregon staffors@cs.uoregon.edu ABSTRACT This paper presents a simple and scalable framework for architecting peer-to-peer overlays called Peer-to-peer Receiver-driven Overlay (or PRO). PRO is designed for non-interactive streaming applications and its primary design goal is to maximize delivered bandwidth (and thus delivered quality) to peers with heterogeneous and asymmetric bandwidth. To achieve this goal, PRO adopts a receiver-driven approach where each receiver (or participating peer) (i) independently discovers other peers in the overlay through gossiping, and (ii) selfishly determines the best subset of parent peers through which to connect to the overlay to maximize its own delivered bandwidth. Participating peers form an unstructured overlay which is inherently robust to high churn rate. Furthermore, each receiver leverages congestion controlled bandwidth from its parents as an implicit signal to detect and react to long-term changes in network or overlay condition without any explicit coordination with other participating peers. Independent parent selection by individual peers dynamically converges to an efficient overlay structure. Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems General Terms: Design, Measurement 1. INTRODUCTION Limited deployment of IP multicast has motivated a new distribution paradigm over the Internet based on overlay networks where a group of participating end-systems (or peers) form an overlay structure and actively participate in distribution of content without any special support from the network (e.g., [7]). Since overlay structures are layered over the
best-effort Internet, any approach for constructing an overlay should address the following fundamental challenges: (i) scalability with the number of participating peers, (ii) robustness to dynamics of peer participation, (iii) adaptation to variations of network bandwidth, and (iv) accommodating heterogeneity and asymmetry of bandwidth connectivity among participating peers [19]. Coping with bandwidth variations, heterogeneity, and asymmetry is particularly important in the design of peer-to-peer overlays for streaming applications, because the delivered quality to each peer is directly determined by its bandwidth connectivity to (other peers on) the overlay. This paper presents a simple framework for architecting Peer-to-peer Receiver-driven Overlay, called PRO. PRO can accommodate a spectrum of non-interactive streaming applications ranging from playback to lecture-mode live sessions. The main design philosophy in PRO is that each peer should be allowed to independently and selfishly determine the best way to connect to the overlay in order to maximize its own delivered quality. Toward this end, each peer can connect to the overlay topology at multiple points (i.e., receive content through multiple parent peers). Therefore, participating peers form an unstructured overlay that can gracefully cope with high churn rate [5]. Furthermore, having multiple parent peers accommodates bandwidth heterogeneity and asymmetry while improving resiliency against dynamics of peer participation. PRO consists of two key components: (i) Gossip-based Peer Discovery: each peer periodically exchanges messages (i.e., gossips) with other known peers to progressively learn about a subset of participating peers in the overlay that are likely to be good parents. Gossiping provides a scalable and efficient approach to peer discovery in unstructured peer-to-peer networks that can be customized to guide the direction of discovery towards peers with desired properties (e.g., peers with shorter
distance or higher bandwidth). (ii) Receiver-driven Parent Selection: given the information about other participating peers collected by the gossiping mechanism, each peer (or receiver) gradually improves its own delivered quality by dynamically selecting a proper subset of parent peers that collectively maximize the bandwidth provided to the receiver. Since the available bandwidth from different participating peers to a receiver (and possible correlation among them) can be measured only at that receiver, a receiver-driven approach is the natural solution to maximize available bandwidth to heterogeneous peers. Furthermore, the available bandwidth from parent peers serves as an implicit signal for a receiver to detect and react to changes in network or overlay condition without any explicit coordination with other participating peers. Independent parent selection by individual peers leads to an efficient overlay that maximizes delivered quality to each peer. PRO incorporates several damping functions to ensure stability of the overlay despite uncoordinated actions by different peers. PRO is part of a larger architecture that we have developed for peer-to-peer streaming. In our earlier work, we developed a mechanism called PALS [18] that enables a receiver to stream layered structured content from a given set of congestion controlled senders. Thus, PRO and PALS are both receiver-driven but complement each other. More specifically, PRO determines a proper subset of parent peers that collectively maximize delivered bandwidth to each receiver, whereas PALS coordinates in-time streaming of different segments of multimedia content from these parents despite unpredictable variations in their available bandwidth. This division of functionality provides a great deal of flexibility because it decouples overlay construction from the delivery mechanism. In this paper, we primarily focus on the overlay construction mechanism, or PRO. The rest of this paper is organized as
follows: In Section 2, we revisit the problem of overlay construction for peer-to-peer streaming, identify its two key components, and explore their design space. We illustrate the differences between PRO and previous solutions, and justify our design choices. We present our proposed framework in Section 3. In Sections 4 and 5, the key components of our framework are described in further detail. Finally, Section 6 concludes the paper and presents our future plans.

2. REVISITING THE PROBLEM

Constructing a peer-to-peer overlay for streaming applications should not only accommodate global design goals such as scalability and resilience but also satisfy the local design goal of maximizing delivered quality to individual peers 1. More specifically, delivered quality of streaming content to each peer should be proportional to its incoming access link bandwidth. Achieving these goals is particularly challenging because participating peers often exhibit heterogeneity and asymmetry in their bandwidth connectivity. Solutions for constructing peer-to-peer overlays often require two key mechanisms to be implemented at each peer: Peer Discovery (PD) and Parent Selection (PS). The PD mechanism enables each peer to learn about other participating peers in the overlay. Information about other peers is used by the PS mechanism at each peer to determine the proper parent peers through which it should connect to the overlay. The collective behavior of the PD and PS mechanisms at all participating peers leads to an overlay structure that achieves the above design goals. There has been a wealth of previous research that explored the design space of the PD and PS mechanisms, as follows: Peer Discovery: in structured peer-to-peer networks, the existing structure enables each peer to find other participating peers in a scalable fashion (e.g., [4]). However, structured peer-to-peer networks may not be robust against high churn rates [5]. In contrast, unstructured peer-to-peer networks
can gracefully accommodate high churn rates [5] but require a separate peer discovery mechanism. Mesh-first approaches (e.g., [7, 6]), which require each peer to know about all other participating peers, as well as centralized approaches to peer discovery (e.g., [16]), exhibit limited scalability. NICE [2] leverages a hierarchical structure to achieve scalability, but each peer only knows about a group of close-by peers that may not be good parents (i.e., may not provide sufficient bandwidth).

¹ It is worth clarifying that our design goal is different from the common goals in building application-level multicast trees [7] (i.e., minimizing stretch and stress).

Parent Selection: We examine two key aspects of parent selection:

(i) Selection Criteria: There are two main criteria for parent selection: relative delay and available bandwidth between two peers. The relative delay between any two peers can be estimated in a scalable fashion with one of the existing landmark-based solutions, such as Global Network Positioning (GNP) [15]. However, estimating the available bandwidth between two peers requires end-to-end measurement. Using available bandwidth as a selection criterion does not scale, for two reasons: First, to cope with the dynamics of bandwidth variations, each peer needs to periodically estimate the available bandwidth from all other peers through measurement (e.g., [6]). Second, the probability of interference among different measurements grows with the number of peers in an overlay (similar to join experiments in RLM [13]). Most of the previous solutions adopted the idea of application-level multicast and used delay as the main selection criterion. Participating peers cooperatively run a distributed algorithm to organize themselves into a source-rooted tree structure in order to minimize either the overall delay across all branches of the tree (e.g., [7]) or the delay between the source and each receiver peer (e.g., [20]). While these parent selection strategies minimize
the associated network load, they may not provide sufficient bandwidth to individual peers because delay is often not a good indicator of the available bandwidth between two peers [12, 14]. The key issue is that minimizing overall delay (a global design goal) and maximizing the delivered bandwidth to each peer (a local design goal) can easily be in conflict. More specifically, parent peers at a longer relative distance may provide higher bandwidth than close-by parents. This suggests that there might exist a tradeoff between maximizing the bandwidth provided to each peer and minimizing the overall delay across the overlay.

(ii) Single vs. Multiple Parents: A single tree structure for the overlay (where each peer has a single parent) is inherently unable to accommodate peers with heterogeneous and asymmetric bandwidth. A common approach to accommodating bandwidth heterogeneity is to use layer-structured content (either layered or multiple-description encodings) and allow each receiver to have multiple parents. This approach can accommodate heterogeneity, but it introduces several new challenges. First, the parent selection strategy should be determined based on the location of the bottleneck. If the bottleneck is at the (outgoing) access links of parent peers², then a receiver should simply look for more parents. However, when the bottleneck is elsewhere in the network, a receiver should select parents with a diverse set of paths (i.e., utilize different network paths). In practice, a combination of these cases might simultaneously exist among participating peers [1]. Second, streaming a single content from multiple senders is challenging for two reasons: 1) it requires tight coordination among senders to determine the overall delivered quality (e.g., the number of layers) and to decide which sender is responsible for the delivery of each segment; 2) delivered segments from different senders should arrive before their playout times despite uncorrelated variations in (congestion controlled) bandwidth from the different senders. This also implies that solutions that build a multi-parent overlay structure but do not explicitly ensure in-time delivery of individual segments (e.g., [3, 11]) may not be able to support streaming applications.

² If the bottleneck is at the receiver's access link, then the bandwidth provided to the receiver is already maximized.

One approach to building a multi-parent overlay is to organize participating peers into different trees, where each layer of the stream is sent to a separate tree (e.g., [4, 16]). Each peer can maximize its quality by participating in a proper number of trees. This approach raises several issues: 1) the bandwidth provided to peers in each tree is limited by the minimum uplink bandwidth among the upstream peers on that tree; in the presence of bandwidth asymmetry, this can easily limit the delivered bandwidth on each tree below the required bandwidth for a single layer; 2) it is not feasible to build separate trees that are all optimal for a single selection criterion (e.g., overall delay); 3) connections across different trees are likely to compete for available bandwidth on a single bottleneck³.

We conclude that a practical solution for peer-to-peer streaming applications should incorporate the following design properties: (i) it should use an unstructured, multi-parent peer-to-peer overlay; (ii) it should provide a scalable peer discovery mechanism that enables each peer to find its good parents efficiently; (iii) it should detect (and possibly avoid) any shared bottleneck among different connections in the overlay; and (iv) it should deploy congestion controlled connections but ensure in-time arrival of delivered segments to each receiver. In the next section, we explain how PRO incorporates all the above design properties.

3. P2P RECEIVER-DRIVEN OVERLAY

Assumptions: We assume that each peer can estimate the relative distance between any two peers using the GNP mechanism [15]. Furthermore, each peer knows the
incoming and outgoing bandwidth of its access link. Each peer uses the PALS mechanism to stream content from multiple parent peers. All connections are congestion controlled by the senders (e.g., [17]). To accommodate peer bandwidth heterogeneity, we assume that the content has a layered representation. In other words, with proper adjustment, the framework should work with both layered and multiple-description encodings. Participating peers have heterogeneous and asymmetric bandwidth connectivity. Furthermore, peers may join and leave in an arbitrary fashion.

Overview: In PRO, each peer (or receiver) progressively searches for a subset of parents that collectively maximize the delivered bandwidth and minimize the overall delay from all parents to the receiver. Such a subset of parents may change over time as some parents join (or leave) the overlay, or as the available bandwidth from current parents significantly changes. Note that each peer can be both a receiver and a parent at the same time⁴.

³ These multi-tree approaches often do not use congestion control for each connection.
⁴ Throughout this paper we use receiver and parent as short forms for receiver peer and parent peer.

Each receiver periodically exchanges messages (i.e., gossips) with other peers in the overlay to learn about those participating peers that are potentially good parents. Potentially good parents for a receiver are identified based on their relative utility for the receiver. The utility of a parent peer pi for a receiver pj is a function of their relative network distance (delij) and the outgoing access link bandwidth of the parent (outbwi), i.e., U(pi, pj) = f(delij, outbwi). Using the parents' access link bandwidth instead of their available bandwidth has several advantages: (i) the outgoing bandwidth is an upper bound on the available bandwidth from a parent; therefore, it enables the receiver to roughly classify different parents; (ii) estimating available bandwidth requires end-to-end measurement, and such
a solution does not scale with the number of peers; and, more importantly, (iii) given a utility function, this approach enables any peer in the overlay to estimate the relative utility of any two other peers. Each receiver maintains information about only a fixed (and relatively small) number of promising parent peers in its local image. The local image at each receiver is dynamically updated with new gossip messages as other peers join/leave the overlay. Each peer selects a new parent in a demand-driven fashion in order to minimize the number of end-to-end bandwidth measurements, and thus improve scalability. When a receiver needs a new parent, its PS mechanism randomly selects a peer from its local image, where the probability of selecting a peer depends directly on its utility. Then, the actual properties (i.e., available bandwidth and delay) of the selected parent are verified through passive measurement. Toward this end, the selected parent is added to the parent list, which triggers PALS to request content from this parent. Figure 1 depicts the interactions between the PD and PS mechanisms.

In PRO, each receiver leverages the congestion controlled bandwidth from its parents as an implicit signal to detect two events: (i) any measurable shared bottleneck among connections from different parents, and (ii) any change in network or overlay conditions (e.g., the departure or arrival of other close-by peers). Figure 2 shows part of an overlay to illustrate this feature. Each receiver continuously monitors the available bandwidth from all its parents. Receiver p0 initially has only p1 as a parent. When p0 adds a new parent (p2), the receiver examines the smoothed available bandwidth from p1 and p2 and any measurable correlation between them. If the available bandwidth from p1 decreases after p2 is added, the receiver can conclude that these two parents are behind the same bottleneck (i.e., link L0). We note that paths from two parents might have some overlap that does not
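The utility-biased, demand-driven parent pick described above can be sketched as follows. This is a minimal illustration under our own assumptions: the record fields are hypothetical, and `random.choices` simply stands in for whatever weighted sampling an implementation would use:

```python
import random

def pick_new_parent(local_image, utility):
    """Randomly select a candidate parent from the local image, with
    probability proportional to its utility, as the PS mechanism does.
    The chosen candidate would then be verified via passive measurement."""
    peers = list(local_image)
    weights = [utility(p) for p in peers]
    # random.choices performs weighted sampling; we draw one candidate.
    return random.choices(peers, weights=weights, k=1)[0]

# Toy example: three candidate parents with precomputed utilities.
image = [{"id": "p1", "util": 5.0},
         {"id": "p2", "util": 1.0},
         {"id": "p3", "util": 4.0}]
choice = pick_new_parent(image, utility=lambda p: p["util"])
print(choice["id"])  # one of p1/p2/p3, biased toward higher utility
```

Because the draw is randomized rather than greedy, selections spread load across reasonably good parents instead of piling onto the single highest-utility peer.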
include any bottleneck. Assume another receiver p3 selects p1 as a parent and thus competes with receiver p0 for the available bandwidth on link L1. Suppose that L1 becomes a bottleneck and the connection from p1 to p3 obtains a significantly higher share of L1's bandwidth than the connection from p1 to p0. This change in the available bandwidth from p1 serves as a signal for p0. Whenever a receiver detects such a drop in bandwidth, it waits for a random period of time (proportional to the available bandwidth) and then drops the corresponding parent if its bandwidth remains low [8].

[Figure 1: Interactions between PD and PS mechanisms through local image]

[Figure 2: Using congestion controlled bandwidth as a signal to reshape the overlay]

Therefore, the receiver with higher bandwidth connectivity (p3) is more likely to keep p1 as a parent, whereas p0 may examine other parents with higher bandwidth, including p3. The congestion controlled bandwidth signals the receiver to properly reshape the overlay. We present a summary of the key features and limitations of PRO in the next two sections. Table 1 summarizes our notation throughout this paper.

Main Features: Gossiping provides a scalable approach to peer discovery because each peer does not require global knowledge about all group members, and the traffic it generates can be controlled. The PD mechanism actively participates in peer selection by identifying peers for the local image, which limits the possible choices of parents for the PS mechanism. PRO constructs a multi-parent, unstructured overlay. But PRO does not have the same limitations that exist in multi-tree approaches
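A receiver's shared-bottleneck test can be approximated by correlating smoothed bandwidth samples from two parents: if one connection's gain is consistently the other's loss, they likely share a bottleneck. The sketch below is our own simplification (the correlation test and the threshold value are assumptions, not the paper's specified mechanism):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sample series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def shared_bottleneck(bw_parent1, bw_parent2, threshold=-0.7):
    """Heuristic: a strong negative correlation between the smoothed
    bandwidths from two parents suggests they sit behind one bottleneck.
    The -0.7 threshold is an arbitrary illustrative choice."""
    return pearson(bw_parent1, bw_parent2) < threshold

# Toy samples: as p2's connection ramps up, bandwidth from p1 drops.
bw_p1 = [900, 850, 700, 550, 450, 400]
bw_p2 = [100, 150, 300, 450, 550, 600]
print(shared_bottleneck(bw_p1, bw_p2))  # True
```

As the text notes, the samples should be smoothed over a long time scale so that transient congestion-control fluctuations do not trigger false positives.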
because it allows each receiver to independently micro-manage its parents to maximize its overall bandwidth based on local information. PRO conducts passive measurement not only to determine the available bandwidth from a parent but also to detect any shared bottleneck between the paths from different parents. Furthermore, by selecting a new parent from the local image, PRO increases the probability of finding a good parent in each selection, and thus significantly decreases the number of required measurements, which in turn improves scalability. PRO can gracefully accommodate bandwidth heterogeneity and asymmetry among peers, since PALS is able to manage the delivery of content from a group of parents with different bandwidths.

Limitations and Challenges: The main hypothesis in our framework is that the best subset of parents for each receiver is likely to be part of its local image, i.e., that the PD mechanism can find the best parents. Whenever this condition is not satisfied, either a receiver may not be able to maximize its overall bandwidth or the resulting overlay may not be efficient.

Table 1: Notation used throughout the paper

  Symbol     | Definition
  -----------|--------------------------------------------
  pi         | Peer i
  inbwi      | Incoming access link BW for pi
  outbwi     | Outgoing access link BW for pi
  min nopi   | Min. no. of parents for pi
  max nopi   | Max. no. of parents for pi
  nopi(t)    | No. of active parents for pi at time t
  img sz     | Size of local image at each peer
  sgm        | Size of gossip message
  delij      | Estimated delay between pi and pj

Clearly, the properties of the selected utility function, as well as the accuracy of the estimated parameters (in particular, using outgoing bandwidth instead of available bandwidth), determine the properties of the local image at each peer, which in turn affects the performance of the framework in some scenarios. In these cases, the utility value may not effectively guide the search process in identifying good parents, which increases the average convergence time until each peer finds a good subset of parents. Similar to many other adaptive mechanisms
(e.g., [13]), the parent selection mechanism should address the fundamental tradeoff between responsiveness and stability. Finally, the congestion controlled bandwidth from parent peers may not provide a measurable signal for detecting a shared bottleneck when the level of multiplexing at the bottleneck link is high. However, this is not a major limitation, since the negative impact of a shared bottleneck in these cases is minimal. All the above limitations are in part due to the simplicity of our framework and can adversely affect its performance. However, we believe that this is a reasonable design tradeoff, since simplicity is one of our key design goals. In the following sections, we describe the two key components of our framework in further detail.

4. GOSSIP-BASED PEER DISCOVERY

Peer discovery at each receiver is basically a search among all participating peers in the overlay for a certain number (img sz) of peers with the highest relative utility. PRO adopts a gossip-like [10] approach to peer discovery. Gossiping (or rumor spreading) has frequently been used as a scalable alternative to flooding that gradually spreads information among a group of peers. However, we use gossiping as a search mechanism [9] for finding promising parents, since it has two appealing properties: (i) the volume of exchanged messages can be controlled, and (ii) the gossip-based information exchange can be customized to leverage relative utility values to improve search efficiency.

The gossip mechanism works as follows: each peer maintains a local image that contains up to img sz records, where each record represents the following information for a previously discovered peer pi in the overlay: 1) IP address, 2) GNP coordinates, 3) number of received layers, 4) the timestamp when the record was last generated by a peer, 5) outbwi, and 6) inbwi. To bootstrap the discovery process, a new receiver needs to learn about a handful of other participating peers in the overlay. This information
can be obtained from the original server (or a well-known rendezvous point). The server should implement a strategy for selecting the initial peers that are provided to each new receiver; we call this the initial parent selection mechanism. Once the initial set of peers is known, each peer pi periodically invokes a target selection mechanism to determine a target peer (pj) from its local image for gossip. Given a utility function, peer pi uses a content selection strategy to select sgm records (or a smaller number when sgm records are not available) from its local image that are most useful for pj, and sends those records to pj. In response, pj follows the same steps and replies with a gossip message that includes the sgm records from its local image that are most useful for pi, i.e., bidirectional gossip. When a gossip message arrives at a peer, an image maintenance scheme integrates the new records into the current local image and discards excess records such that a certain property of the local image is improved (e.g., the overall utility of the peers in the image is increased).

The aggregate performance of a gossip mechanism can be characterized by two average metrics and their distribution among peers: (i) Average Convergence Time: the average number of gossip messages until all peers in an overlay reach their final images; and (ii) Average Efficiency Ratio: the average ratio of unique records to the total number of records received by each peer. We have been exploring the design space of four key components of the gossip mechanism. The frequency and size of gossip messages determine the average freshness of local images.

Initial Parent Selection: Currently, the server randomly selects the initial parents from its local image for each new peer.

Target Selection: Target selection randomly picks a peer from the current image to evenly obtain information from different areas of the overlay and speed up discovery.

Content Selection: Peer pk determines the relative utility of all the peers (pj) in its local image for a target
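The image maintenance step just described (integrate new records, then discard the excess) might look like the following sketch. The record fields follow the list above; the capacity, staleness threshold, and eviction order are our own illustrative choices:

```python
import time

IMG_SZ = 20          # capacity of the local image (img sz in the paper)
STALE_AFTER = 300.0  # seconds before a record is considered stale (our assumption)

def maintain_image(image, new_records, utility, dropped_ids, now=None):
    """Integrate gossip records into the local image, then evict records
    that are stale, belong to peers already dropped by the PS mechanism,
    or have the lowest utility, until at most IMG_SZ records remain."""
    now = time.time() if now is None else now
    merged = {r["id"]: r for r in image}
    for r in new_records:                 # per peer, the newer record wins
        old = merged.get(r["id"])
        if old is None or r["timestamp"] > old["timestamp"]:
            merged[r["id"]] = r
    kept = [r for r in merged.values()
            if r["id"] not in dropped_ids
            and now - r["timestamp"] <= STALE_AFTER]
    kept.sort(key=utility, reverse=True)  # highest utility first
    return kept[:IMG_SZ]

# Toy example: p3's record is stale at time 360 and gets evicted.
image = [{"id": "p1", "timestamp": 100.0, "outbw": 500}]
gossip = [{"id": "p2", "timestamp": 350.0, "outbw": 900},
          {"id": "p3", "timestamp": 10.0, "outbw": 100}]
kept = maintain_image(image, gossip, lambda r: r["outbw"], {"p4"}, now=360.0)
print([r["id"] for r in kept])  # ['p2', 'p1']
```

Keeping only high-utility, fresh records is what lets the image stay small while still (by hypothesis) containing the receiver's best candidate parents.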
peer pi, and then randomly selects sgm of those peers to prepare a gossip message for pi. However, the probability of selecting a peer depends directly on its utility. This approach is biased towards peers with higher utility, but its randomness tends to reduce the number of duplicate records across different gossip messages from one peer (i.e., it improves efficiency). A potential drawback of this approach is an increase in convergence time. We plan to examine more efficient information sharing schemes, such as Bloom filters [3], in our future work. PRO uses joint-ranking [15] to determine the relative utility of a parent for a receiver. Given a collection of peers in the local image of pk, the joint-ranking scheme ranks all the peers once based on their outgoing bandwidth, and then based on their estimated delay from a target peer pi. The utility of peer pj (U(pj, pi)) is inversely proportional to the sum of pj's ranks in both rankings. Values for each property (i.e., bandwidth and delay) of the various peers are divided into multiple ranges (i.e., bins), where all peers within each range are assumed to have the same value for that property. This binning scheme minimizes sensitivity to minor differences in delay or bandwidth among different peers.

Image Maintenance: The image maintenance mechanism evicts extra records (beyond img sz) that satisfy one of the following conditions: (i) they represent peers with lower utility, (ii) they represent peers that were already dropped by the PS mechanism due to poor performance, or (iii) they have a timestamp older than a threshold. This approach attempts to balance image quality (in terms of the overall utility of the peers it contains) and image freshness.

Note that the gossip mechanism can discover any peer in the overlay as long as reachability is provided through overlap among the local images at different peers. The higher the amount of overlap, the higher the efficiency of discovery, and the higher the robustness of the overlay to the dynamics of peer participation. The
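The joint-ranking utility with binning can be sketched as follows. The paper specifies only that peers are ranked once by (binned) outgoing bandwidth and once by (binned) delay to the target, with utility inversely proportional to the rank sum; the bin widths and tie-breaking below are our own choices:

```python
def binned(value, bin_width):
    """Quantize a measurement so that near-equal values fall in one bin."""
    return int(value // bin_width)

def joint_ranking_utility(peers, delay_to_target, bw_bin=100, delay_bin=10):
    """Return {peer_id: utility} with utility = 1 / (bw_rank + delay_rank).
    Higher outgoing bandwidth and lower delay to the target rank better."""
    # Rank by binned outgoing bandwidth (descending: more is better).
    bw_order = sorted(peers, key=lambda p: -binned(p["outbw"], bw_bin))
    # Rank by binned delay to the target peer (ascending: less is better).
    d_order = sorted(peers, key=lambda p: binned(delay_to_target[p["id"]], delay_bin))
    bw_rank = {p["id"]: i + 1 for i, p in enumerate(bw_order)}
    d_rank = {p["id"]: i + 1 for i, p in enumerate(d_order)}
    return {p["id"]: 1.0 / (bw_rank[p["id"]] + d_rank[p["id"]]) for p in peers}

# Toy example: p1 has both the highest bandwidth and a low delay bin.
peers = [{"id": "p1", "outbw": 950},
         {"id": "p2", "outbw": 400},
         {"id": "p3", "outbw": 420}]
delays = {"p1": 5, "p2": 8, "p3": 80}
util = joint_ranking_utility(peers, delays)
print(max(util, key=util.get))  # p1
```

Note how the binning makes p2 and p3 tie on bandwidth (bins 4 and 4) despite their raw values differing, exactly the insensitivity to minor differences described above.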
amount of overlap among images depends on both the size and the shape of the local images at each peer. The shape of the local image is a function of the deployed utility function. The joint-ranking utility gives the same weight to delay and bandwidth. Delay tends to bias selection towards nearby peers, whereas outgoing bandwidth introduces some degree of randomness in the location of selected peers. Therefore, the resulting local images should exhibit a sufficient degree of overlap.

5. PARENT SELECTION

The PS mechanism at each peer is essentially a progressive search within the local image for a subset of parent peers such that the following design goals are achieved: (i) maximizing the delivered bandwidth⁵, (ii) minimizing the total delay from all parents to the receiver, and (iii) maximizing the diversity of paths from the parents (whenever feasible). Whenever these goals are in conflict, a receiver optimizes the goal with the highest priority. Currently, our framework does not directly consider the diversity of paths from different parents as a criterion for parent selection. However, the indirect effect of a shared path among parents is addressed because of its potential impact on the available bandwidth from a parent when two or more parents are behind the same bottleneck.

The number of active parents (nopi(t)) for each receiver should be within a configured range [min nop, max nop]. Each receiver tries to maximize its delivered bandwidth with the minimum number of parents. If this goal cannot be achieved after the evaluation of a certain number of new parents, the receiver gradually increases its number of parents. This flexibility is important in order to utilize the available bandwidth from low-bandwidth parents, i.e., to cope with bandwidth heterogeneity. min nop determines the minimum degree of resilience to parent departure and the minimum level of path diversity (whenever diverse paths are available). The number of children for each peer should not be limited. Instead, each
peer only limits the maximum outgoing bandwidth that it is able (or willing) to provide to its children. This allows child peers to compete for the congestion controlled bandwidth from a parent, which motivates child peers with poor bandwidth connectivity to look for other parents (i.e., to properly reshape the overlay).

The design of a PS mechanism should address three main questions, as follows:

1) When should a new parent be selected? There is a fundamental tradeoff between the responsiveness of a receiver to changes in network conditions (or the convergence time after a change) and the stability of the overlay. PRO adopts a conservative approach, where each peer selects a new parent in a demand-driven fashion. This should significantly reduce the number of new parent selections, which in turn improves the scalability (by minimizing the interference caused by new connections) and the stability of the overlay structure. A new parent is selected in the following scenarios: (i) Initial Phase: when a new peer joins the overlay, it periodically adds a new parent until it has min nop parents. (ii) Replacing a Poorly Performing Parent: when the available bandwidth from an existing parent is significantly reduced for a long time, or a parent leaves the session, the receiver can select another peer after a random delay. Each receiver selects a random delay proportional to its available bandwidth from the parent peer [8]. This approach dampens potential oscillation in the overlay while increasing the chance for receivers with higher bandwidth connectivity to keep a parent (i.e., it properly reshapes the overlay). (iii) Improvement in Performance: when it is likely that a new parent would significantly improve a non-optimized aspect of performance (increase the bandwidth or decrease the delay). This strategy allows gradual improvement of the parent subset as new peers are discovered in (or join) the overlay. The available information for each peer in the image is used as a heuristic to predict the performance of a
new peer. Such an improvement should be examined infrequently. A hysteresis mechanism is implemented in scenarios (ii) and (iii) to dampen any potential oscillation in the overlay.

⁵ The target bandwidth is the lower of the maximum stream bandwidth and the receiver's incoming bandwidth.

2) Which peer should be selected as a new parent? At any point in time, the peers in the local image are the best known candidates to serve as parents. In PRO, each receiver randomly selects a parent from its current image, where the probability of selecting a parent is proportional to its utility. Deploying this selection strategy at all peers leads to proportional utilization of the outgoing bandwidth of all peers without making the selection heavily biased towards high-bandwidth peers. This approach (similar to [5]) leverages the heterogeneity among peers, since the number of children of each peer is proportional to its outgoing bandwidth.

3) How should a new parent be examined? Each receiver continuously monitors the available bandwidth from all parents and any potential correlation between the bandwidths of two or more connections as a signal for a shared bottleneck. The degree of such correlation also reveals the level of multiplexing at the bottleneck link, and can serve as an indicator for separating remote bottlenecks from a local one. Such monitoring should use the average bandwidth of each flow over a relatively long time scale (e.g., hundreds of RTTs) to filter out transient variations in bandwidth. To avoid selecting a poorly performing parent again in the near future, the receiver associates a timer with each parent and exponentially backs off the timer after each failed experience [13].

After the initial phase, each receiver maintains a fixed number of parents at any point in time (nopi(t)). Thus, a new parent should replace one of the active parents. However, to ensure monotonic improvement in the overall performance of the active parents, a new parent is always added before one of the
existing parents is dropped (i.e., a receiver can temporarily have one extra parent). Given the available bandwidth from all parents (including the new one) and any possible correlation among them, a receiver can use one of the following criteria to drop a parent: (i) to maximize the bandwidth, the receiver can drop the parent that contributes the minimum bandwidth; (ii) to maximize the path diversity among the connections from parents, the receiver should drop the parent that is located behind the same bottleneck as the largest number of active parents and contributes the minimum bandwidth among them. Finally, if the aggregate bandwidth from all parents remains below the required bandwidth after examining a certain number of new parents (and nopi(t) < max nop), the receiver can increase its total number of parents by one.

6. CONCLUSIONS AND FUTURE WORK

In this paper, we presented a simple receiver-driven framework for architecting peer-to-peer overlay structures called PRO. PRO allows each peer to selfishly and independently determine the best way to connect to the overlay to maximize its performance. Therefore, PRO should be able to maximize the delivered quality to peers with heterogeneous and asymmetric bandwidth connectivity. Both peer discovery and peer selection in this framework are scalable. Furthermore, PRO uses congestion controlled bandwidth as an implicit signal to detect shared bottlenecks among existing parents, as well as changes in network or overlay conditions, in order to properly reshape the structure. We described the basic framework and its key components, and sketched our strawman solutions. This is a starting point for our work on PRO. We are currently evaluating various aspects of this framework via simulation and exploring the design space of its key components. We are also prototyping this framework to conduct real-world experiments on PlanetLab in the near future.

7. REFERENCES

[1] A. Akella, S. Seshan, and A.
Shaikh. An empirical evaluation of wide-area Internet bottlenecks. In Internet Measurement Conference, 2003.
[2] S. Banerjee, B. Bhattacharjee, and C. Kommareddy. Scalable application layer multicast. In ACM SIGCOMM, 2002.
[3] J. Byers, J. Considine, M. Mitzenmacher, and S. Rost. Informed content delivery across adaptive overlay networks. In ACM SIGCOMM, 2002.
[4] M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, and A. Singh. SplitStream: High-bandwidth content distribution in a cooperative environment. In ACM SOSP, 2003.
[5] Y. Chawathe, S. Ratnasamy, L. Breslau, N. Lanham, and S. Shenker. Making Gnutella-like P2P systems scalable. In ACM SIGCOMM, 2003.
[6] Y. Chu, S. G. Rao, S. Seshan, and H. Zhang. Enabling conferencing applications on the Internet using an overlay multicast architecture. In ACM SIGCOMM, 2001.
[7] Y. Chu, S. G. Rao, and H. Zhang. A case for end system multicast. In ACM SIGMETRICS, 2000.
[8] S. Floyd, V. Jacobson, C. Liu, S. McCanne, and L. Zhang. A reliable multicast framework for light-weight sessions and application level framing. IEEE/ACM Transactions on Networking, 1997.
[9] M. Harchol-Balter, F. T. Leighton, and D. Lewin. Resource discovery in distributed networks. In Symposium on Principles of Distributed Computing, pages 229-237, 1999.
[10] S. Hedetniemi, S. Hedetniemi, and A. Liestman. A survey of gossiping and broadcasting in communication networks. Networks, 1988.
[11] D. Kostic, A. Rodriguez, J. Albrecht, and A. Vahdat. Bullet: High bandwidth data dissemination using an overlay mesh. In SOSP, 2003.
[12] K. Lakshminarayanan and V. N. Padmanabhan. Some findings on the network performance of broadband hosts. In Internet Measurement Conference, 2003.
[13] S. McCanne, V. Jacobson, and M. Vetterli. Receiver-driven layered multicast. In ACM SIGCOMM, 1996.
[14] T. S. E. Ng, Y. Chu, S. G. Rao, K. Sripanidkulchai, and H.
Zhang. Measurement-based optimization techniques for bandwidth-demanding peer-to-peer systems. In IEEE INFOCOM, 2003.
[15] T. S. E. Ng and H. Zhang. Predicting Internet network distance with coordinates-based approaches. In IEEE INFOCOM, 2002.
[16] V. N. Padmanabhan, H. J. Wang, and P. A. Chou. Resilient peer-to-peer streaming. In IEEE ICNP, 2003.
[17] R. Rejaie, M. Handley, and D. Estrin. RAP: An end-to-end rate-based congestion control mechanism for realtime streams in the Internet. In IEEE INFOCOM, 1999.
[18] R. Rejaie and A. Ortega. PALS: Peer-to-peer adaptive layered streaming. In NOSSDAV, 2003.
[19] S. Saroiu, P. K. Gummadi, and S. D. Gribble. A measurement study of peer-to-peer file sharing systems. In SPIE MMCN, 2002.
[20] D. A. Tran, K. A. Hua, and T. Do. ZIGZAG: An efficient peer-to-peer scheme for media streaming. In IEEE INFOCOM, 2003.

A Framework for Architecting Peer-to-Peer Receiver-driven Overlays

ABSTRACT

This paper presents a simple and scalable framework for architecting peer-to-peer overlays called Peer-to-peer Receiver-driven Overlay (or PRO). PRO is designed for non-interactive streaming applications, and its primary design goal is to maximize the delivered bandwidth (and thus the delivered quality) to peers with heterogeneous and asymmetric bandwidth. To achieve this goal, PRO adopts a receiver-driven approach where each receiver (or participating peer) (i) independently discovers other peers in the overlay through gossiping, and (ii) selfishly determines the best subset of parent peers through which to connect to the overlay to maximize its own delivered bandwidth. Participating peers form an unstructured overlay which is inherently robust to high churn rates. Furthermore, each receiver leverages the congestion controlled bandwidth from its parents as an implicit signal to detect and react to long-term changes in network or overlay conditions without any explicit coordination with other participating peers. Independent
parent selection by individual peers dynamically converges to an efficient overlay structure.

1. INTRODUCTION

The limited deployment of IP multicast has motivated a new distribution paradigm over the Internet based on overlay networks, where a group of participating end-systems (or peers) form an overlay structure and actively participate in the distribution of content without any special support from the network (e.g., [7]). Since overlay structures are layered over the best-effort Internet, any approach for constructing overlays should address the following fundamental challenges: (i) scalability with the number of participating peers, (ii) robustness to the dynamics of peer participation, (iii) adaptation to variations in network bandwidth, and (iv) accommodating heterogeneity and asymmetry of bandwidth connectivity among participating peers [19]. Coping with bandwidth variations, heterogeneity, and asymmetry is particularly important in the design of peer-to-peer overlays for streaming applications because the delivered quality to each peer is directly determined by its bandwidth connectivity to (other peers on) the overlay.

This paper presents a simple framework for architecting a Peer-to-peer Receiver-driven Overlay, called PRO. PRO can accommodate a spectrum of non-interactive streaming applications ranging from playback to lecture-mode live sessions. The main design philosophy in PRO is that each peer should be allowed to independently and selfishly determine the best way to connect to the overlay in order to maximize its own delivered quality. Toward this end, each peer can connect to the overlay topology at multiple points (i.e., receive content through multiple parent peers). Therefore, participating peers form an unstructured overlay that can gracefully cope with high churn rates [5]. Furthermore, having multiple parent peers accommodates bandwidth heterogeneity and asymmetry while improving resiliency against the dynamics of peer participation. PRO consists of two
key components: (i) Gossip-based Peer Discovery: Each peer periodically exchanges message (i.e., gossips) with other known peers to progressively learn about a subset of participating peers in the overlay that are likely to be good parents.\nGossiping provides a scalable and efficient approach to peer discovery in unstructured peer-to-peer networks that can be customized to guide direction of discovery towards peers with desired properties (e.g., peers with shorter distance or higher bandwidth).\n(ii) Receiver-driven Parent Selection: Given the collected information about other participating peers by gossiping mechanism, each peer (or receiver) gradually improves its own delivered quality by dynamically selecting a proper subset of parent peers that collectively maximize provided bandwidth to the receiver.\nSince the available bandwidth from different participating peers to a receiver (and possible correlation among them) can be measured only at that receiver, a receiver-driven approach is the natural solution to maximize available bandwidth to heterogeneous peers.\nFurthermore, the available bandwidth from parent peers serves as an implicit signal for a receiver to detect and react to changes in network or overlay condition without any explicit coordination with other participating peers.\nIndependent parent selection by individual peers leads to an efficient overlay that maximizes delivered quality to each peer.\nPRO incorporates\nseveral damping functions to ensure stability of the overlay despite uncoordinated actions by different peers.\nPRO is part of a larger architecture that we have developed for peer-to-peer streaming.\nIn our earlier work, we developed a mechanism called PALS [18] that enables a receiver to stream layered structured content from a given set of congestion controlled senders.\nThus, PRO and PALS are both receiver-driven but complement each other.\nMore specifically, PRO determines a proper subset of parent peers that collectively maximize 
delivered bandwidth to each receiver whereas PALS coordinates \"in-time\" streaming of different segments of multimedia content from these parents despite unpredictable variations in their available bandwidth.\nThis division of functionality provides a great deal of flexibility because it decouples overlay construction from delivery mechanism.\nIn this paper, we primarily focus on the overlay construction mechanism, or PRO.\nThe rest of this paper is organized as follows: In Section 2, we revisit the problem of overlay construction for peerto-peer streaming and identify its two key components and explore their design space.\nWe illustrate the differences between PRO and previous solutions, and justify our design choices.\nWe present our proposed framework in Section 3.\nIn Sections 4 and 5, the key components of our framework are described in further detail.\nFinally, Section 6 concludes the paper and presents our future plans.\n2.\nREVISITING THE PROBLEM\n3.\nP2P RECEIVER-DRIVEN OVERLAY\nUnknown peers in the overlayParent Known peers in the overlay\n4.\nGOSSIP-BASED PEER DISCOVERY\n5.\nPARENT SELECTION\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we presented a simple receiver-driven framework for architecting peer-to-pee overlay structures called PRO.\nPRO allows each peer to selfishly and independently determine the best way to connect to the overlay to maximize its performance.\nTherefore, PRO should be able to maximize delivered quality to peers with heterogeneous and asymmetric bandwidth connectivity.\nBoth peer discovery and peer selection in this framework are scalable.\nFurthermore, PRO uses congestion controlled bandwidth as an implicit signal to detect shared bottleneck among existing parents as well as changes in network or overlay conditions to properly reshape the structure.\nWe described the basic framework and its key components, and sketched our strawman solutions.\nThis is a starting point for our work on PRO.\nWe are currently evaluating 
various aspects of this framework via simulation, and exploring the design space of key components.\nWe are also prototyping this framework to conduct real-world experiments on the Planet-Lab in a near future.","lvl-4":"A Framework for Architecting Peer-to-Peer Receiver-driven Overlays\nABSTRACT\nThis paper presents a simple and scalable framework for architecting peer-to-peer overlays called Peer-to-peer Receiverdriven Overlay (or PRO).\nPRO is designed for non-interactive streaming applications and its primary design goal is to maximize delivered bandwidth (and thus delivered quality) to peers with heterogeneous and asymmetric bandwidth.\nTo achieve this goal, PRO adopts a receiver-driven approach where each receiver (or participating peer) (i) independently discovers other peers in the overlay through gossiping, and (ii) selfishly determines the best subset of parent peers through which to connect to the overlay to maximize its own delivered bandwidth.\nParticipating peers form an unstructured overlay which is inherently robust to high churn rate.\nFurthermore, each receiver leverages congestion controlled bandwidth from its parents as implicit signal to detect and react to long-term changes in network or overlay condition without any explicit coordination with other participating peers.\nIndependent parent selection by individual peers dynamically converge to an efficient overlay structure.\n1.\nINTRODUCTION\ning heterogeneity and asymmetry of bandwidth connectivity among participating peers [19].\nCoping with bandwidth variations, heterogeneity and asymmetry are particularly important in design of peer-to-peer overlay for streaming applications because delivered quality to each peer is directly determined by its bandwidth connectivity to (other peer (s) on) the overlay.\nThis paper presents a simple framework for architecting Peer-to-peer Receiver-driven Overlay, called PRO.\nThe main design philosophy in PRO is that each peer should be allowed to 
independently and selfishly determine the best way to connect to the overlay in order to maximize its own delivered quality.\nToward this end, each peer can connect to the overlay topology at multiple points (i.e., receive content through multiple parent peers).\nTherefore, participating peers form an unstructured overlay that can gracefully cope with high churn rate [5].\nFurthermore, having multiple parent peers accommodates bandwidth heterogeneity and asymmetry while improves resiliency against dynamics of peer participation.\nPRO consists of two key components: (i) Gossip-based Peer Discovery: Each peer periodically exchanges message (i.e., gossips) with other known peers to progressively learn about a subset of participating peers in the overlay that are likely to be good parents.\n(ii) Receiver-driven Parent Selection: Given the collected information about other participating peers by gossiping mechanism, each peer (or receiver) gradually improves its own delivered quality by dynamically selecting a proper subset of parent peers that collectively maximize provided bandwidth to the receiver.\nSince the available bandwidth from different participating peers to a receiver (and possible correlation among them) can be measured only at that receiver, a receiver-driven approach is the natural solution to maximize available bandwidth to heterogeneous peers.\nFurthermore, the available bandwidth from parent peers serves as an implicit signal for a receiver to detect and react to changes in network or overlay condition without any explicit coordination with other participating peers.\nIndependent parent selection by individual peers leads to an efficient overlay that maximizes delivered quality to each peer.\nPRO incorporates\nseveral damping functions to ensure stability of the overlay despite uncoordinated actions by different peers.\nPRO is part of a larger architecture that we have developed for peer-to-peer streaming.\nThus, PRO and PALS are both receiver-driven 
but complement each other.\nMore specifically, PRO determines a proper subset of parent peers that collectively maximize delivered bandwidth to each receiver whereas PALS coordinates \"in-time\" streaming of different segments of multimedia content from these parents despite unpredictable variations in their available bandwidth.\nThis division of functionality provides a great deal of flexibility because it decouples overlay construction from delivery mechanism.\nIn this paper, we primarily focus on the overlay construction mechanism, or PRO.\nThe rest of this paper is organized as follows: In Section 2, we revisit the problem of overlay construction for peerto-peer streaming and identify its two key components and explore their design space.\nWe present our proposed framework in Section 3.\nIn Sections 4 and 5, the key components of our framework are described in further detail.\nFinally, Section 6 concludes the paper and presents our future plans.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we presented a simple receiver-driven framework for architecting peer-to-pee overlay structures called PRO.\nPRO allows each peer to selfishly and independently determine the best way to connect to the overlay to maximize its performance.\nTherefore, PRO should be able to maximize delivered quality to peers with heterogeneous and asymmetric bandwidth connectivity.\nBoth peer discovery and peer selection in this framework are scalable.\nFurthermore, PRO uses congestion controlled bandwidth as an implicit signal to detect shared bottleneck among existing parents as well as changes in network or overlay conditions to properly reshape the structure.\nWe described the basic framework and its key components, and sketched our strawman solutions.\nThis is a starting point for our work on PRO.\nWe are currently evaluating various aspects of this framework via simulation, and exploring the design space of key components.\nWe are also prototyping this framework to conduct real-world 
experiments on the Planet-Lab in a near future.","lvl-2":"A Framework for Architecting Peer-to-Peer Receiver-driven Overlays\nABSTRACT\nThis paper presents a simple and scalable framework for architecting peer-to-peer overlays called Peer-to-peer Receiverdriven Overlay (or PRO).\nPRO is designed for non-interactive streaming applications and its primary design goal is to maximize delivered bandwidth (and thus delivered quality) to peers with heterogeneous and asymmetric bandwidth.\nTo achieve this goal, PRO adopts a receiver-driven approach where each receiver (or participating peer) (i) independently discovers other peers in the overlay through gossiping, and (ii) selfishly determines the best subset of parent peers through which to connect to the overlay to maximize its own delivered bandwidth.\nParticipating peers form an unstructured overlay which is inherently robust to high churn rate.\nFurthermore, each receiver leverages congestion controlled bandwidth from its parents as implicit signal to detect and react to long-term changes in network or overlay condition without any explicit coordination with other participating peers.\nIndependent parent selection by individual peers dynamically converge to an efficient overlay structure.\n1.\nINTRODUCTION\nLimited deployment of IP multicast has motivated a new distribution paradigm over the Internet based on overlay networks where a group of participating end-systems (or peers) form an overlay structure and actively participate in distribution of content without any special support from the network (e.g., [7]).\nSince overlay structures are layered over the best-effort Internet, any approach for constructing overlay should address the following fundamental challenges: (i) Scalability with the number of participating peers, (ii) Robustness to dynamics of peer participation, (iii) Adaptation to variations of network bandwidth, and (iv) Accommodat\ning heterogeneity and asymmetry of bandwidth connectivity among 
participating peers [19]. Coping with bandwidth variations, heterogeneity, and asymmetry is particularly important in the design of peer-to-peer overlays for streaming applications, because the delivered quality to each peer is directly determined by its bandwidth connectivity to other peer(s) on the overlay.

This paper presents a simple framework for architecting Peer-to-peer Receiver-driven Overlay, called PRO. PRO can accommodate a spectrum of non-interactive streaming applications ranging from playback to lecture-mode live sessions. The main design philosophy in PRO is that each peer should be allowed to independently and selfishly determine the best way to connect to the overlay in order to maximize its own delivered quality. Toward this end, each peer can connect to the overlay topology at multiple points (i.e., receive content through multiple parent peers). Therefore, participating peers form an unstructured overlay that can gracefully cope with high churn rates [5]. Furthermore, having multiple parent peers accommodates bandwidth heterogeneity and asymmetry while improving resiliency against dynamics of peer participation. PRO consists of two key components:

(i) Gossip-based Peer Discovery: Each peer periodically exchanges messages (i.e., gossips) with other known peers to progressively learn about a subset of participating peers in the overlay that are likely to be good parents. Gossiping provides a scalable and efficient approach to peer discovery in unstructured peer-to-peer networks that can be customized to guide the direction of discovery towards peers with desired properties (e.g., peers with shorter distance or higher bandwidth).

(ii) Receiver-driven Parent Selection: Given the information about other participating peers collected by the gossiping mechanism, each peer (or receiver) gradually improves its own delivered quality by dynamically selecting a proper subset of parent peers that collectively maximize the bandwidth provided to the receiver. Since the available bandwidth from different participating peers to a receiver (and possible correlation among them) can be measured only at that receiver, a receiver-driven approach is the natural solution for maximizing available bandwidth to heterogeneous peers. Furthermore, the available bandwidth from parent peers serves as an implicit signal for a receiver to detect and react to changes in network or overlay conditions without any explicit coordination with other participating peers. Independent parent selection by individual peers leads to an efficient overlay that maximizes delivered quality to each peer. PRO incorporates several damping functions to ensure stability of the overlay despite uncoordinated actions by different peers.

PRO is part of a larger architecture that we have developed for peer-to-peer streaming. In our earlier work, we developed a mechanism called PALS [18] that enables a receiver to stream layered content from a given set of congestion controlled senders. Thus, PRO and PALS are both receiver-driven but complement each other. More specifically, PRO determines a proper subset of parent peers that collectively maximize delivered bandwidth to each receiver, whereas PALS coordinates "in-time" streaming of different segments of multimedia content from these parents despite unpredictable variations in their available bandwidth. This division of functionality provides a great deal of flexibility because it decouples overlay construction from the delivery mechanism. In this paper, we primarily focus on the overlay construction mechanism, i.e., PRO.

The rest of this paper is organized as follows: In Section 2, we revisit the problem of overlay construction for peer-to-peer streaming, identify its two key components, and explore their design space. We illustrate the differences between PRO and previous solutions, and justify our design choices. We present our proposed framework in Section 3. In Sections 4 and 5, the key components of our framework are described in further detail. Finally, Section 6 concludes the paper and presents our future plans.

2. REVISITING THE PROBLEM

Constructing a peer-to-peer overlay for streaming applications should not only accommodate global design goals such as scalability and resilience but also satisfy the local design goal of maximizing delivered quality to individual peers. More specifically, the delivered quality of streaming content to each peer should be proportional to its incoming access link bandwidth. Achieving these goals is particularly challenging because participating peers often exhibit heterogeneity and asymmetry in their bandwidth connectivity. Solutions for constructing peer-to-peer overlays often require two key mechanisms to be implemented at each peer: Peer Discovery (PD) and Parent Selection (PS). The PD mechanism enables each peer to learn about other participating peers in the overlay. Information about other peers is used by the PS mechanism at each peer to determine the proper parent peers through which it should connect to the overlay. The collective behavior of the PD and PS mechanisms at all participating peers leads to an overlay structure that achieves the above design goals. There has been a wealth of previous research that explored the design space of the PD and PS mechanisms as follows:

Peer Discovery: In structured peer-to-peer networks, the existing structure enables each peer to find other participating peers in a scalable fashion (e.g., [4]). However, structured peer-to-peer networks may not be robust against high churn rates [5]. In contrast, unstructured peer-to-peer networks can gracefully accommodate high churn rates [5] but require a separate peer discovery mechanism. Mesh-first approaches (e.g., [7, 6]) that require each peer to know about all other participating peers, as well as centralized approaches to peer discovery (e.g., [16]), exhibit limited scalability. NICE [2] leverages a hierarchical structure to achieve scalability
but each peer only knows about a group of close-by peers who may not be good parents (i.e., may not provide sufficient bandwidth).

Parent Selection: We examine two key aspects of parent selection: (i) Selection Criteria: There are two main criteria for parent selection: relative delay and available bandwidth between two peers. Relative delay between any two peers can be estimated in a scalable fashion with one of the existing landmark-based solutions such as Global Network Positioning (GNP) [15]. However, estimating available bandwidth between two peers requires end-to-end measurement. Using available bandwidth as a criterion for parent selection does not scale for two reasons: First, to cope with the dynamics of bandwidth variations, each peer needs to periodically estimate the available bandwidth from all other peers through measurement (e.g., [6]). Second, the probability of interference among different measurements grows with the number of peers in an overlay (similar to the joint experiment in RLM [13]). Most of the previous solutions adopted the idea of application-level multicast and used delay as the main selection criterion. Participating peers cooperatively run a distributed algorithm to organize themselves into a source-rooted tree structure in order to minimize either the overall delay across all branches of the tree (e.g., [7]), or the delay between the source and each receiver peer (e.g., [20]). While these parent selection strategies minimize the associated network load, they may not provide sufficient bandwidth to individual peers because delay is often not a good indicator of available bandwidth between two peers [12, 14]. The key issue is that minimizing overall delay (a global design goal) and maximizing delivered bandwidth to each peer (a local design goal) could easily be in conflict. More specifically, parent peers with longer relative distance may provide higher bandwidth than close-by parents. This suggests that there might exist a tradeoff between maximizing the bandwidth provided to each peer and minimizing the overall delay across the overlay.

(ii) Single vs. Multiple Parents: A single tree structure for the overlay (where each peer has a single parent) is inherently unable to accommodate peers with heterogeneous and asymmetric bandwidth. A common approach to accommodating bandwidth heterogeneity is to use layered content (either layered or multiple-description encodings) and allow each receiver to have multiple parents. This approach can accommodate heterogeneity, but it introduces several new challenges. First, the parent selection strategy should be determined based on the location of the bottleneck. If the bottleneck is at the (outgoing) access links of parent peers, then a receiver should simply look for more parents (if the bottleneck is at the receiver's access link, the bandwidth provided to the receiver is already maximized). However, when the bottleneck is elsewhere in the network, a receiver should select parents with a diverse set of paths (i.e., utilize different network paths). In practice, a combination of these cases might simultaneously exist among participating peers [1]. Second, streaming a single piece of content from multiple senders is challenging for two reasons: 1) it requires tight coordination among senders to determine the overall delivered quality (e.g., number of layers) and to decide which sender is responsible for delivery of each segment; 2) delivered segments from different senders should arrive before their playout times despite uncorrelated variations in (congestion controlled) bandwidth from different senders. This also implies that solutions that build a multi-parent overlay structure but do not explicitly ensure in-time delivery of individual segments (e.g., [3, 11]) may not be able to support streaming applications. One approach to building a multi-parent overlay is to organize participating peers into different trees, where each layer of the stream is sent to a separate tree (e.g., [4, 16]). Each peer can maximize its quality by participating in a proper number of trees. This approach raises several issues: 1) the bandwidth provided to peers in each tree is limited by the minimum uplink bandwidth among upstream peers on that tree; in the presence of bandwidth asymmetry, this could easily limit the delivered bandwidth on each tree below the required bandwidth for a single layer; 2) it is not feasible to build separate trees that are all optimal for a single selection criterion (e.g., overall delay); 3) connections across different trees are likely to compete for available bandwidth on a single bottleneck.

We conclude that a practical solution for peer-to-peer streaming applications should incorporate the following design properties: (i) it should use an unstructured, multi-parent peer-to-peer overlay; (ii) it should provide a scalable peer discovery mechanism that enables each peer to find its good parents efficiently; (iii) it should detect (and possibly avoid) any shared bottleneck among different connections in the overlay; and (iv) it should deploy congestion controlled connections but ensure in-time arrival of delivered segments to each receiver. In the next section, we explain how PRO incorporates all the above design properties.

3. P2P RECEIVER-DRIVEN OVERLAY

Assumptions: We assume that each peer can estimate the relative distance between any two peers using the GNP mechanism [15]. Furthermore, each peer knows the incoming and outgoing bandwidth of its access link. Each peer uses the PALS mechanism to stream content from multiple parent peers. All connections are congestion controlled by senders (e.g., [17]). To accommodate peer bandwidth heterogeneity, we assume that the content has a layered representation. In other words, with proper adjustment, the framework should work with both layered and multiple-description encodings. Participating peers have heterogeneous and asymmetric bandwidth connectivity. Furthermore, peers
may join and leave in an arbitrary fashion.

Overview: In PRO, each peer (or receiver) progressively searches for a subset of parents that collectively maximize delivered bandwidth and minimize the overall delay from all parents to the receiver. Such a subset of parents may change over time as some parents join (or leave) the overlay, or as the available bandwidth from current parents significantly changes. Note that each peer can be both a receiver and a parent at the same time. Each receiver periodically exchanges messages (i.e., gossips) with other peers in the overlay to learn about those participating peers that are potentially good parents. Potentially good parents for a receiver are identified based on their relative utility for the receiver. The utility of a parent peer pi for a receiver pj is a function of their relative network distance (delij) and the outgoing access link bandwidth of the parent (outbwi), i.e., U(pi, pj) = f(delij, outbwi). Using parents' access link bandwidth instead of available bandwidth has several advantages: (i) outgoing bandwidth is an upper bound for the available bandwidth from a parent; therefore, it enables the receiver to roughly classify different parents; (ii) estimating available bandwidth requires end-to-end measurement, and such a solution does not scale with the number of peers; and, more importantly, (iii) given a utility function, this approach enables any peer in the overlay to estimate the relative utility of any other two peers. Each receiver only maintains information about a fixed (and relatively small) number of promising parent peers in its local image. The local image at each receiver is dynamically updated with new gossip messages as other peers join/leave the overlay. Each peer selects a new parent in a demand-driven fashion in order to minimize the number of end-to-end bandwidth measurements, and thus improve scalability. When a receiver needs a new parent, its PS mechanism randomly selects a peer from its local
image, where the probability of selecting a peer directly depends on its utility. Then, the actual properties (i.e., available bandwidth and delay) of the selected parent are verified through passive measurement. Toward this end, the selected parent is added to the parent list, which triggers PALS to request content from this parent. Figure 1 depicts the interactions between the PD and PS mechanisms.

In PRO, each receiver leverages congestion controlled bandwidth from its parents as an implicit signal to detect two events: (i) any measurable shared bottleneck among connections from different parents, and (ii) any change in network or overlay conditions (e.g., departure or arrival of other close-by peers). Figure 2 shows part of an overlay to illustrate this feature. Each receiver continuously monitors the available bandwidth from all its parents. Receiver p0 initially has only p1 as a parent. When p0 adds a new parent (p2), the receiver examines the smoothed available bandwidth from p1 and p2 and any measurable correlation between them. If the available bandwidth from p1 decreases after p2 is added, the receiver can conclude that these two parents are behind the same bottleneck (i.e., link L0). We note that paths from two parents might have some overlap that does not include any bottleneck. Assume another receiver p3 selects p1 as a parent and thus competes with receiver p0 for available bandwidth on link L1. Suppose that L1 becomes a bottleneck and the connection from p1 to p3 obtains a significantly higher share of L1's bandwidth than the connection from p1 to p0. This change in available bandwidth from p1 serves as a signal for p0. Whenever a receiver detects such a drop in bandwidth, it waits for a random period of time (proportional to the available bandwidth) and then drops the corresponding parent if its bandwidth remains low [8]. Therefore, the receiver with higher bandwidth connectivity (p3) is more likely to keep p1 as a parent, whereas p0 may examine other parents with higher bandwidth, including p3. The congestion controlled bandwidth signals the receiver to properly reshape the overlay.

Figure 1: Interactions between PD and PS mechanisms through the local image

Figure 2: Using congestion controlled bandwidth as a signal to reshape the overlay

We present a summary of the key features and limitations of PRO in the next two sections. Table 1 summarizes our notation throughout this paper.

Main Features: Gossiping provides a scalable approach to peer discovery because each peer does not require global knowledge about all group members, and its generated traffic can be controlled. The PD mechanism actively participates in peer selection by identifying peers for the local image, which limits the possible choices of parents for the PS mechanism. PRO constructs a multi-parent, unstructured overlay. However, PRO does not have the limitations that exist in multi-tree approaches, because it allows each receiver to independently micro-manage its parents to maximize its overall bandwidth based on local information. PRO conducts passive measurement not only to determine the available bandwidth from a parent but also to detect any shared bottleneck between paths from different parents. Furthermore, by selecting a new parent from the local image, PRO increases the probability of finding a good parent in each selection, and thus significantly decreases the number of required measurements, which in turn improves scalability. PRO can gracefully accommodate bandwidth heterogeneity and asymmetry among peers, since PALS is able to manage the delivery of content from a group of parents with different bandwidth.

Limitations and Challenges: The main hypothesis in our framework is that the best subset of parents for each receiver is likely to be part of its local image, i.e., that the PD mechanism can find the best parents. Whenever this condition is not satisfied, either a receiver may not be
able to maximize its overall bandwidth or the resulting overlay may not be efficient.

Table 1: Notation used throughout the paper

  pi        Peer i
  inbwi     Incoming access link bandwidth for pi
  outbwi    Outgoing access link bandwidth for pi
  min_nopi  Minimum number of parents for pi
  max_nopi  Maximum number of parents for pi
  nopi(t)   Number of active parents for pi at time t
  img_sz    Size of the local image at each peer
  sgm       Size of a gossip message
  delij     Estimated delay between pi and pj

Clearly, the properties of the selected utility function as well as the accuracy of estimated parameters (in particular, using outgoing bandwidth instead of available bandwidth) determine the properties of the local image at each peer, which in turn affects the performance of the framework in some scenarios. In these cases, the utility value may not effectively guide the search process in identifying good parents, which increases the average convergence time until each peer finds a good subset of parents. Similar to many other adaptive mechanisms (e.g., [13]), the parent selection mechanism should address the fundamental tradeoff between responsiveness and stability. Finally, the congestion controlled bandwidth from parent peers may not provide a measurable signal to detect a shared bottleneck when the level of multiplexing is high at the bottleneck link. However, this is not a major limitation, since the negative impact of a shared bottleneck in these cases is minimal. All the above limitations are in part due to the simplicity of our framework and could adversely affect its performance. However, we believe that this is a reasonable design tradeoff, since simplicity is one of our key design goals. In the following sections, we describe the two key components of our framework in further detail.

4. GOSSIP-BASED PEER DISCOVERY

Peer discovery at each receiver is basically a search among all participating peers in the overlay for a certain number (img_sz) of peers with the highest relative utility. PRO adopts a gossip-like [10] approach to peer discovery. Gossiping (or rumor spreading) has been frequently used as a scalable alternative to flooding that gradually spreads information among a group of peers. However, we use gossiping as a search mechanism [9] for finding promising parents, since it has two appealing properties: (i) the volume of exchanged messages can be controlled, and (ii) the gossip-based information exchange can be customized to leverage relative utility values to improve search efficiency.

The gossip mechanism works as follows: each peer maintains a local image that contains up to img_sz records, where each record represents the following information for a previously discovered peer pi in the overlay: 1) IP address, 2) GNP coordinates, 3) number of received layers, 4) timestamp when the record was last generated by a peer, 5) outbwi, and 6) inbwi. To bootstrap the discovery process, a new receiver needs to learn about a handful of other participating peers in the overlay. This information can be obtained from the original server (or a well-known rendezvous point). The server should implement a strategy for selecting the initial peers that are provided to each new receiver. We call this the initial parent selection mechanism. Once the initial set of peers is known, each peer pi periodically invokes a target selection mechanism to determine a target peer (pj) from its local image for gossip. Given a utility function, peer pi uses a content selection strategy to select sgm records (or a smaller number when sgm records are not available) from its local image that are most useful for pj and sends those records to pj. In response, pj follows the same steps and replies with a gossip message that includes sgm records from its local image that are most useful for pi, i.e., bidirectional gossip. When a gossip message arrives at each peer, an image maintenance scheme integrates new records into the current local image and discards excess records such that a desired property of the
local image is improved (e.g., the overall utility of peers in the image increases).\nThe aggregate performance of a gossip mechanism can be presented by two average metrics and their distribution among peers: (i) Average Convergence Time: the average number of gossip messages until all peers in an overlay reach their final images, and (ii) Average Efficiency Ratio: the average ratio of unique records to the total number of records received by each peer.\nWe have been exploring the design space of four key components of the gossip mechanism.\nThe frequency and size of gossip messages determine the average freshness of local images.\nInitial Parent Selection: currently, the server randomly selects the initial parents from its local image for each new peer.\nTarget Selection: target selection randomly picks a peer from the current image to evenly obtain information from different areas of the overlay and speed up discovery.\nContent Selection: peer pk determines the relative utility of all the peers (pj) in its local image for target peer pi, and then randomly selects sgm peers to prepare a gossip message for pi.\nHowever, the probability of selecting a peer directly depends on its utility.\nThis approach is biased towards peers with higher utility, but its randomness tends to reduce the number of duplicate records across different gossip messages from one peer (i.e., it improves efficiency).\nA potential drawback of this approach is an increase in convergence time.\nWe plan to examine more efficient information sharing schemes such as Bloom filters [3] in our future work.\nPRO uses joint-ranking [15] to determine the relative utility of a parent for a receiver.\nGiven a collection of peers in the local image of pk, the joint-ranking scheme ranks all the peers once based on their outgoing bandwidth, and then based on their estimated delay from a target peer pi.\nThe utility of peer pj (U(pj, pi)) is inversely proportional to the sum of pj's ranks in both rankings.\nValues for each property (i.e., bandwidth and delay) of various peers are
divided into multiple ranges (i.e., bins), where all peers within each range are assumed to have the same value for that property.\nThis "binning" scheme minimizes the sensitivity to minor differences in delay or bandwidth among different peers.\nImage Maintenance: the image maintenance mechanism evicts extra records (beyond img sz) that satisfy one of the following conditions: (i) they represent peers with lower utility, (ii) they represent peers that were already dropped by the PS mechanism due to poor performance, or (iii) they have a timestamp older than a threshold.\nThis approach attempts to balance image quality (in terms of the overall utility of existing peers) and its freshness.\nNote that the gossip mechanism can discover any peer in the overlay as long as reachability is provided through overlap among the local images at different peers.\nThe higher the amount of overlap, the higher the efficiency of discovery, and the higher the robustness of the overlay to the dynamics of peer participation.\nThe amount of overlap among images depends on both the size and shape of the local images at each peer.\nThe shape of the local image is a function of the deployed utility function.\nJoint-ranking utility gives the same weight to delay and bandwidth.\nDelay tends to bias selection towards nearby peers, whereas outgoing bandwidth introduces some degree of randomness in the location of selected peers.\nTherefore, the resulting local images should exhibit a sufficient degree of overlap.\n5.\nPARENT SELECTION\nThe PS mechanism at each peer is essentially a progressive search within the local image for a subset of parent peers such that the following design goals are achieved: (i) maximizing delivered bandwidth, (ii) minimizing the total delay from all parents to the receiver, and (iii) maximizing the diversity of paths from parents (whenever feasible).\nWhenever these goals are in conflict, a receiver optimizes the goal with the highest priority.\nCurrently, our framework does not directly
consider diversity of paths from different parents as a criterion for parent selection.\nHowever, the indirect effect of shared paths among parents is addressed because of its potential impact on the available bandwidth from a parent when two or more parents are behind the same bottleneck.\nThe number of active parents (nopi(t)) for each receiver should be within a configured range [min nop, max nop].\nEach receiver tries to maximize its delivered bandwidth with the minimum number of parents.\nIf this goal cannot be achieved after evaluation of a certain number of new parents, the receiver will gradually increase its number of parents.\nThis flexibility is important in order to utilize available bandwidth from low-bandwidth parents, i.e., to cope with bandwidth heterogeneity.\nmin nop determines the minimum degree of resilience to parent departure, and the minimum level of path diversity (whenever diverse paths are available).\nThe number of children for each peer should not be limited.\nInstead, each peer only limits the maximum outgoing bandwidth that it is able (or willing) to provide to its children.\nThis allows child peers to compete for congestion-controlled bandwidth from a parent, which motivates child peers with poor bandwidth connectivity to look for other parents (i.e., properly reshapes the overlay).\nThe design of a PS mechanism should address three main questions, as follows: 1) When should a new parent be selected?\nThere is a fundamental tradeoff between the responsiveness of a receiver to changes in network conditions (or convergence time after a change) and the stability of the overlay.\nPRO adopts a conservative approach where each peer selects a new parent in a demand-driven fashion.\nThis should significantly reduce the number of new parent selections, which in turn improves the scalability (by minimizing the interference caused by new connections) and stability of the overlay structure.\nA new parent is selected in the following scenarios: (i) Initial Phase: when a new peer joins the
overlay, it periodically adds a new parent until it has min nop parents.\n(ii) Replacing a Poorly-Performing Parent: when the available bandwidth from an existing parent is significantly reduced for a long time, or a parent leaves the session, the receiver can select another peer after a random delay.\nEach receiver selects a random delay proportional to its available bandwidth from the parent peer [8].\nThis approach dampens potential oscillation in the overlay while increasing the chance for receivers with higher bandwidth connectivity to keep a parent (i.e., it properly reshapes the overlay).\n(iii) Improvement in Performance: when it is likely that a new parent would significantly improve a non-optimized aspect of performance (increase the bandwidth or decrease the delay).\nThis strategy allows gradual improvement of the parent subset as new peers are discovered in (or join) the overlay.\nThe available information for each peer in the image is used as a heuristic to predict the performance of a new peer.\nSuch an improvement should be examined infrequently.\nA hysteresis mechanism is implemented in scenarios (ii) and (iii) to dampen any potential oscillation in the overlay.\n2) Which peer should be selected as a new parent?\nAt any point in time, the peers in the local image are the best known candidates to serve as parents.\nIn PRO, each receiver randomly selects a parent from its current image, where the probability of selecting a parent is proportional to its utility.\nDeploying this selection strategy at all peers leads to proportional utilization of the outgoing bandwidth of all peers without making the selection heavily biased towards high-bandwidth peers.\nThis approach (similar to [5]) leverages heterogeneity among peers, since the number of children of each peer is proportional to its outgoing bandwidth.\n3) How should a new parent be examined?\nEach receiver continuously monitors the available bandwidth from all parents and any correlation between the bandwidth of two or more
connections as a signal of a shared bottleneck.\nThe degree of such correlation also reveals the level of multiplexing at the bottleneck link, and could serve as an indicator for separating remote bottlenecks from a local one.\nSuch monitoring should use the average bandwidth of each flow over a relatively long time scale (e.g., hundreds of RTTs) to filter out transient variations in bandwidth.\nTo avoid selecting a poorly-performing parent in the near future, the receiver associates a timer with each parent and exponentially backs off the timer after each failed experience [13].\nAfter the initial phase, each receiver maintains a fixed number of parents at any point in time (nopi(t)).\nThus, a new parent should replace one of the active parents.\nHowever, to ensure monotonic improvement in the overall performance of the active parents, a new parent is always added before one of the existing parents is dropped (i.e., a receiver can temporarily have one extra parent).\nGiven the available bandwidth from all parents (including the new one) and possible correlation among them, a receiver can use one of the following criteria to drop a parent: (i) to maximize the bandwidth, the receiver can drop the parent that contributes the minimum bandwidth; (ii) to maximize path diversity among connections from parents, the receiver should drop the parent that is located behind the same bottleneck as the largest number of active parents and contributes the minimum bandwidth among them.\nFinally, if the aggregate bandwidth from all parents remains below the required bandwidth after examining a certain number of new parents (and nopi (t) L, it could contain as many as |c| violating paths.\nIn our second constraint generator, we only add one constraint for such cycles: the sum of edges in the cycle can be at most |c|(L − 1)/L.\nThis generator made the algorithm slower, so we went in the other direction in developing our final generator.\nIt adds one constraint per violating path p, and
furthermore, it adds a constraint for each path with the same interior vertices (not counting the endpoints) as p.\nThis improved the overall speed.\n4.3 Experimental Performance\nIt turned out that, even with these improvements, the edge formulation approach cannot clear a kidney exchange with 100 vertices in the time the cycle formulation (described later in Section 5) can clear one with 10,000 vertices.\nIn other words, column generation based approaches turned out to be drastically better than constraint generation based approaches.\nTherefore, in the rest of the paper, we will focus on the cycle formulation and the column generation based approaches.\n5.\nSOLUTION APPROACHES BASED ON A CYCLE FORMULATION\nIn this section, we consider a formulation of the clearing problem as an ILP with one variable for each cycle.\nThis encoding is based on the following classical algorithm for solving the directed cycle cover problem when cycles have length 2.\nGiven a market G = (V, E), construct a new graph on V with a weight-wc edge for each cycle c of length 2.\nIt is easy to see that matchings in this new graph correspond to cycle covers by length-2 cycles in the original market graph.\nHence, the market clearing problem with L = 2 can be solved in polynomial time by finding a maximum-weight matching.\nFigure 4: Maximum-weight matching encoding of the market in Figure 1.\nWe can generalize this encoding for arbitrary L.
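The length-2 encoding just described is compact enough to sketch in code. The following is a minimal illustration of our own (not the paper's implementation, and the function name `clear_market_L2` is ours): it builds one undirected edge per 2-cycle, weighted by the sum of the two directed edge weights, and finds a maximum-weight matching by brute force, which suffices only for tiny example markets; the paper's setting relies on a polynomial-time matching algorithm instead.

```python
def clear_market_L2(edges):
    """Clear a barter market restricted to length-2 cycles.

    edges: dict mapping (u, v) -> weight of the directed edge u -> v.
    A 2-cycle exists between u and v iff both (u, v) and (v, u) are
    present; its weight w_c is the sum of the two edge weights.
    Returns (total weight, matched pairs) of a maximum-weight matching,
    found here by brute force (fine for tiny examples only).
    """
    # One undirected "matching edge" per 2-cycle, as in the encoding.
    cyc = {(u, v): w + edges[(v, u)]
           for (u, v), w in edges.items() if u < v and (v, u) in edges}

    def best(avail):
        # Try every 2-cycle whose endpoints are both still uncovered.
        best_w, best_m = 0, []
        for (u, v), w in cyc.items():
            if u in avail and v in avail:
                sub_w, sub_m = best(avail - {u, v})
                if w + sub_w > best_w:
                    best_w, best_m = w + sub_w, [(u, v)] + sub_m
        return best_w, best_m

    total, pairs = best(frozenset(v for e in cyc for v in e))
    return total, sorted(pairs)
```

For example, on a four-vertex chain with reciprocal unit-weight edges between consecutive vertices (the same structure as the three 2-cycles of weight 2 in the Figure 5 example), this returns weight 4 by matching the two outer pairs, which matches the OPT(P) = 4 worked out later in the text.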
Let C(L) be the set of all cycles of G with length at most L.\nThen the following ILP finds the maximum-weight cycle cover by C(L) cycles:\n$\max \sum_{c \in C(L)} w_c c$ subject to $\sum_{c : v_i \in c} c \le 1$ for all $v_i \in V$, with $c \in \{0, 1\}$ for all $c \in C(L)$.\n5.1 Edge vs Cycle Formulation\nIn this section, we consider the merits of the edge formulation and the cycle formulation.\nThe edge formulation can be solved in polynomial time when there are no constraints on the cycle size.\nThe cycle formulation can be solved in polynomial time when the cycle size is at most 2.\nWe now consider the case of short cycles of length at most L, where L ≥ 3.\nOur tree search algorithms use the LP relaxations of these formulations to provide upper bounds on the optimal solution.\nThese bounds help prune subtrees and guide the search in the usual ways.\nTheorem 2.\nThe LP relaxation of the cycle formulation weakly dominates the LP relaxation of the edge formulation.\nProof.\nConsider an optimal solution to the LP relaxation of the cycle formulation.\nWe show how to construct an equivalent solution in the edge formulation.\nFor each edge in the graph, set its value as the sum of the values of all the cycles of which it is a member.\nAlso, define the value of a vertex in the same manner.\nBecause of the cycle constraints, the conservation and capacity constraints of the edge encoding are clearly satisfied.\nIt remains to show that none of the path constraints are violated.\nLet p be any length-L path in the graph.\nSince p has L − 1 interior vertices (not counting the endpoints), the value sum of these interior vertices is at most L − 1.\nNow, for any cycle c of length at most L, the number of edges it has in p, which we denote by $e_c(p)$, is at most the number of interior vertices it has in p, which we denote by $v_c(p)$.\nHence, $\sum_{e \in p} e = \sum_{c \in C(L)} c \cdot e_c(p) \le \sum_{c \in C(L)} c \cdot v_c(p) = \sum_{v \in p} v \le L - 1$.\nThe converse of this theorem is not
true.\nConsider a graph which is simply a cycle with n edges, where n > L.\nClearly, the LP relaxation of the cycle formulation has optimal value 0, since there are no cycles of size at most L.\nHowever, the edge formulation has a solution of size n/2, with each edge having value 1/2.\nHence, the cycle formulation is tighter than the edge formulation.\nAdditionally, for a graph with m edges, the edge formulation requires $O(m^3)$ constraints, while the cycle formulation requires only $O(m^2)$.\n5.2 Column Generation for the LP\nTable 2 shows how the number of cycles of length at most 3 grows with the size of the market.\nWith one variable per cycle in the cycle formulation, CPLEX cannot even clear markets with 1,000 patients without running out of memory (see Figure 6).\nTo address this problem, we used an incremental formulation approach.\nThe first step in LP-guided tree search is to solve the LP relaxation.\nSince the cycle formulation does not fit in memory, this LP stage would fail immediately without an incremental formulation approach.\nHowever, motivated by the observation that an exchange solution can include only a tiny fraction of the cycles, we explored the approach of using column (i.e., cycle) generation.\nThe idea of column generation is to start with a restricted LP containing only a small number of columns (variables, i.e., cycles), and then to repeatedly add columns until an optimal solution to this partially formulated LP is an optimal solution to the original (a.k.a. master) LP.\nWe explain this further by way of an example.\nConsider the market in Figure 1 with L = 2.\nFigure 5 gives the corresponding master LP, P, and its dual, D.\nPrimal P: max 2c1 + 2c2 + 2c3 s.t.
c1 ≤ 1 (v1); c1 + c2 ≤ 1 (v2); c2 + c3 ≤ 1 (v3); c3 ≤ 1 (v4); with c1, c2, c3 ≥ 0.\nDual D: min v1 + v2 + v3 + v4 s.t. v1 + v2 ≥ 2 (c1); v2 + v3 ≥ 2 (c2); v3 + v4 ≥ 2 (c3); with v1, v2, v3, v4 ≥ 0.\nFigure 5: Cycle formulation.\nLet P′ be the restriction of P containing columns c1 and c3 only.\nLet D′ be the dual of P′; that is, D′ is just D without the constraint c2.\nBecause P′ and D′ are small, we can solve them to obtain OPT(P′) = OPT(D′) = 4, with cOPT(P′) = (c1 = c3 = 1) and vOPT(D′) = (v1 = v2 = v3 = v4 = 1).\nWhile cOPT(P′) must be a feasible solution of P, it turns out (fortunately) that vOPT(D′) is feasible for D, so that OPT(D′) ≥ OPT(D).\nWe can verify this by checking that vOPT(D′) satisfies the constraints of D not already in D′, i.e., constraint c2.\nIt follows that OPT(P′) = OPT(D′) ≥ OPT(D) = OPT(P), and so cOPT(P′) is provably an optimal solution for P, even though P′ contains only a strict subset of the columns of P.\nOf course, it may turn out (unfortunately) that vOPT(D′) is not feasible for D.\nThis can happen above if vOPT(D′) = (v1 = 2, v2 = 0, v3 = 0, v4 = 2).\nAlthough we can still see that OPT(D′) = OPT(D), in general we cannot prove this, because D and P are too large to solve.\nInstead, because constraint c2 is violated, we add column c2 to P′, update D′, and repeat.\nThe problem of finding a violated constraint is called the pricing problem.\nHere, the price of a column (a cycle in our setting) is the difference between its weight and the dual-value sum of the cycle's vertices.\nIf any column of P has a positive price, its corresponding constraint is violated and we have not yet proven optimality.\nIn this case, we must continue generating columns to add to P′.\n5.2.1 Pricing Problem\nFor smaller instances, we can maintain an explicit collection of all feasible cycles.\nThis makes the pricing problem easy and efficient to solve: we simply traverse the collection of cycles, and look for cycles
with a positive price.\nWe can even find the cycles with the most positive price, which are the ones most likely to improve the objective value of the restricted LP [1].\nThis approach does not scale, however.\nA market with 5,000 patients can have as many as 400 million cycles of length at most 3 (see Table 2).\nThis is too many cycles to keep in memory.\nHence, for larger instances, we have to generate feasible cycles while looking for one with a positive price.\nWe do this using a depth-first search algorithm on the market graph (see Figure 1).\nIn order to make this search faster, we explore vertices in non-decreasing value order, as these vertices are more likely to belong to cycles with positive price.\nWe also use several pruning rules to determine whether the current search path can lead to a positive-price cycle.\nFor example, at a given vertex in the search, we can prune based on the fact that every vertex we visit from this point onwards will have value at least as great as that of the current vertex.\nEven with these pruning rules, column generation is a bottleneck.\nHence, we also implemented the following optimizations.\nWhenever the search exhaustively proves that a vertex belongs to no positive-price cycle, we mark the vertex and do not use it as the root of a depth-first search until its dual value decreases.\nIn this way, we avoid unnecessarily repeating our computational efforts from a previous column generation iteration.\nFinally, it can sometimes be beneficial for column generation to include several positive-price columns in one iteration, since it may be faster to generate a second column once the first one is found.\nHowever, we avoid this for the following reason.\nIf we attempt to find more positive-price columns than there are to be found, or if the columns are far apart in the search space, we end up having to generate and check a large part of the collection of feasible cycles.\nIn our experiments, we have seen this occur in markets with hundreds of millions
of cycles, resulting in prohibitively expensive computation costs.\n5.2.2 Column Seeding\nEven if there is only a small gap to the master LP relaxation, column generation requires many iterations to improve the objective value of the restricted LP.\nEach of these iterations is expensive, as we must solve the pricing problem and re-solve the restricted LP.\nHence, although we could begin with no columns in the restricted LP, it is much faster to seed the LP with enough columns that the optimal objective value is not too far from that of the master LP.\nOf course, we cannot include so many columns that we run out of memory.\nWe experimented with several column seeders.\nIn one class of seeder, we use a heuristic to find an exchange, and then add the cycles of that exchange to the initial restricted LP.\nWe implemented two heuristics.\nThe first is a greedy algorithm: for each vertex in a random order, if it is uncovered, we attempt to include a cycle containing it and other uncovered vertices.\nThe other heuristic uses specialized maximum-weight matching code [16] to find an optimal cover by length-2 cycles.\nThese heuristics perform extremely well, especially taking into account the fact that they only add a small number of columns.\nFor example, Table 1 shows that an optimal cover by length-2 cycles has almost as much weight as the exchange with unrestricted cycle size.\nHowever, we have enough memory to include hundreds of thousands of additional columns and thereby get closer still to the upper bound.\nOur best column seeder constructs a random collection of feasible cycles.\nSince a market with 5,000 patients can have as many as 400 million feasible cycles, it takes too long to generate and traverse all feasible cycles, and so we do not include a uniformly random collection.\nInstead, we perform a random walk on the market graph (see, for example, Figure 1), in which, after each step of the walk, we test whether there is an edge back onto our path that forms a
feasible cycle.\nIf we find a cycle, it is included in the restricted LP, and we start a new walk from a random vertex.\nIn our experiments (see Section 6), we use this algorithm to seed the LP with 400,000 cycles.\nThis last approach outperforms the heuristic seeders described above.\nHowever, in our algorithm, we use a combination that takes the union of all columns from all three seeders.\nIn Figure 6, we compare the performance of the combination seeder against the combination without the random collection seeder.\nWe do not plot the performance of the algorithm without any seeder at all, because it can take hours to clear markets we can otherwise clear in a few minutes.\n5.2.3 Proving Optimality\nRecall that our aim is to find an optimal solution to the master LP relaxation.\nUsing column generation, we can prove that a restricted-primal solution is optimal once all columns have non-positive prices.\nUnfortunately, though, our clearing problem has the so-called tailing-off effect [1, Section 6.3], in which, even though the restricted primal is optimal in hindsight, a large number of additional iterations are required in order to prove optimality (i.e., to eliminate all positive-price columns).\nThere is no good general solution to the tailing-off effect.\nHowever, to mitigate this effect, we take advantage of the following problem-specific observation.\nRecall from Section 1.1 that, almost always, a maximum-weight exchange with cycles of length at most 3 has the same weight as an unrestricted maximum-weight exchange.\n(This does not mean that the solver for the unrestricted case will find a solution with short cycles, however.)\nFurthermore, the unrestricted clearing problem can be solved in polynomial time (recall Section 4).\nHence, we can efficiently compute an upper bound on the master LP relaxation, and, whenever the restricted primal achieves this upper bound, we have proven optimality without necessarily having to eliminate all positive-price columns!\nIn
order for this to improve the running time of the overall algorithm, we need to be able to clear the unrestricted market in less time than it takes column generation to eliminate all the positive-price cycles.\nEven though the former problem is polynomial-time solvable, this is not trivial for large instances.\nFor example, for a market with 10,000 patients and 25 million edges, specialized maximum-weight matching code [16] was too slow, and CPLEX ran out of memory on the edge formulation encoding from Section 4.\nTo make this idea work, then, we used column generation to solve the edge formulation.\nThis involves starting with a small random subset of the edges, and then adding positive-price edges one by one until none remain.\nWe conduct this secondary column generation not in the original market graph G, but in the perfect matching bipartite graph of Figure 3.\nWe do this so that we only need to solve the LP, not the ILP, since the integrality gap in the perfect matching bipartite graph is 1, i.e.,
there always exists an integral solution that achieves the fractional upper bound.\nThe resulting speedup to the overall algorithm is dramatic, as can be seen in Figure 6.\n5.2.4 Column Management\nIf the optimal value of the initial restricted LP P′ is far from that of the master LP P, then a large number of columns are generated before the gap is closed.\nThis leads to memory problems on markets with as few as 4,000 patients.\nAlso, even before memory becomes an issue, the column generation iterations become slow because of the additional overhead of solving a larger LP.\nTo address these issues, we implemented a column management scheme to limit the size of the restricted LP.\nWhenever we add columns to the LP, we check to see if it contains more than a threshold number of columns.\nIf this is the case, we selectively remove columns until it is again below the threshold (based on memory size, we set the threshold at 400,000).\nAs we discussed earlier, only a tiny fraction of all the cycles will end up in the final solution.\nIt is unlikely that we delete such a cycle, and even if we do, it can always be generated again.\nOf course, we must not be too aggressive with the threshold, because doing so may offset the per-iteration performance gains by significantly increasing the number of iterations required to get a suitable column set into the LP.\nThere are some columns we never delete, for example those we have branched on (see Section 5.3.2), or those with a non-zero LP value.\nAmongst the rest, we delete those with the lowest price, since those correspond to the dual constraints that are most satisfied.\nThis column management scheme works well and has enabled us to clear markets with 10,000 patients, as seen in Figure 6.\n5.3 Branch-and-Price Search for the ILP\nGiven a large market clearing problem, we can successfully solve its LP relaxation to optimality by using the column generation enhancements described above.\nHowever, the solutions we find are usually fractional.\nThus the next step involves performing a branch-and-price tree search [1] to find an optimal integral solution.\nBriefly, the idea of branch-and-price is as follows.\nWhenever we set a fractional variable to 0 or 1 (branch), both the master LP and the restriction we are working with are changed (constrained).\nBy default, then, we need to perform column generation (go through the effort of pricing) at each node of the search tree to prove that the constrained restriction is optimal for the constrained master LP.\n(However, as discussed in Section 5.2.3, we compute the integral upper bound for the root node based on relaxing the cycle length constraint completely, and whenever any node's LP in the tree achieves that value, we do not need to continue pricing columns at that node.)\nFor the clearing problem with cycles of length at most 3, we have found that there is rarely a gap between the optimal integral and fractional solutions.\nThis means we can largely avoid the expensive per-node pricing step: whenever the constrained restricted LP has the same optimal value as its parent in the tree search, we can prove LP optimality, as in Section 5.2.3, without having to include any additional columns in the restricted LP.\nAlthough CPLEX can solve ILPs, it does not support branch-and-price (for example, because there can be problem-specific complications involving the interaction between the branching rule and the pricing problem).\nHence, we implemented our own branch-and-price algorithm, which explores the search tree in depth-first order.\nWe also experimented with the A* node selection order [7, 2].\nHowever, this search strategy requires significantly more memory, which we found was better employed in making the column generation phase faster (see Section 5.2.2).\nThe remaining major components of the algorithm are described in the next two subsections.\n5.3.1 Primal Heuristics\nBefore branching on a fractional variable, we use primal heuristics
to construct a feasible integral solution.\nThese solutions are lower bounds on the final optimal integral solution.\nHence, whenever a restricted fractional solution is no better than the best integral solution found so far, we prune the current subtree.\nA primal heuristic is effective if it is efficient and constructs tight lower bounds.\nWe experimented with two primal heuristics.\nThe first is a simple rounding algorithm [8]: include all cycles with fractional value at least 0.5, and then, ensuring feasibility, greedily add the remaining cycles.\nWhilst this heuristic is efficient, we found that the lower bounds it constructs rarely enable much pruning.\nWe also tried using CPLEX as a primal heuristic.\nAt any given node of the search tree, we can convert the restricted LP relaxation back to an ILP by reintroducing the integrality constraints.\nCPLEX has several built-in primal heuristics, which we can apply to this ILP.\nMoreover, we can use CPLEX's own tree search to find an optimal integral solution.\nIn general, this tree search is much faster than our own.\nIf CPLEX finds an integral solution that matches the fractional upper bound at the root node, we are done.\nOtherwise, either no such integral solution exists, or we don't yet have the right combination of cycles in the restricted LP.\nFor kidney-exchange markets, it is usually the second reason that applies (see Sections 5.2.2 and 5.2.4).\nHence, at some point in the tree search, once more columns have been generated as a result of branching, the CPLEX heuristic will find an optimal integral solution.\nAlthough CPLEX tree search is faster than our own, it is not so fast that we can apply it to every node in our search tree.\nHence, we make the following optimizations.\nFirstly, we add a constraint that requires the objective value of the ILP to be as large as the fractional target.\nIf this is not the case, we want to abort and proceed to generate more columns with our branch-and-price search.\nSecondly,
we limit the number of nodes in CPLEX's search tree.\nThis is because we have observed that, when no integral solution exists, CPLEX can take a very long time to prove that.\nFinally, we only apply the CPLEX heuristic at a node if it has a sufficiently different set of cycles from its parent.\nUsing CPLEX as a primal heuristic has a large impact because it makes the search tree smaller, so all the computationally expensive pricing work is avoided at nodes that are not generated in this smaller tree.\n5.3.2 Cycle Brancher\nWe experimented with two branching strategies, both of which select one variable per node.\nThe first strategy, branching by certainty, randomly selects a variable from those whose LP value is closest to 1.\nThe second strategy, branching by uncertainty, randomly selects a variable whose LP value is closest to 0.5.\nIn either case, two children of the node are generated, corresponding to two subtrees: one in which the variable is set to 0, the other in which it is set to 1.\nOur depth-first search always chooses to explore first the subtree in which the value of the variable is closest to its fractional value.\nFor our clearing problem with cycles of length at most 3, we found branching by uncertainty to be superior, rarely requiring any backtracking.\n6.\nEXPERIMENTAL RESULTS\nAll our experiments were performed in Linux (Red Hat 9.0), using a Dell PC with a 3GHz Intel Pentium 4 processor and 1GB of RAM.\nWherever we used CPLEX (e.g., in solving the LP and as a primal heuristic, as discussed in the previous sections), we used CPLEX 10.010.\nFigure 6 shows the runtime performance of four clearing algorithms.\nFor each market size listed, we randomly generated 10 markets, and attempted to clear them using each of the algorithms.\nThe first algorithm is CPLEX on the full cycle formulation.\nThis algorithm fails to clear any markets with 1,000 patients or more.\nAlso, its running time on markets smaller than this is significantly worse than that of the other
algorithms.\nThe other algorithms are variations of the incremental column generation approach described in Section 5.\nWe begin with the following settings (all optimizations are switched on):\nCategory | Setting\nColumn Seeder | Combination of greedy exchange and maximum-weight matching heuristics, and random walk seeder (400,000 cycles)\nColumn Generation | One column at a time\nColumn Management | On, with 400,000 column limit\nOptimality Prover | On\nPrimal Heuristic | Rounding & CPLEX tree search\nBranching Rule | Uncertainty\nThe combination of these optimizations allows us to easily clear markets with over 10,000 patients.\nIn each of the next two algorithms, we turn one of these optimizations off to highlight its effectiveness.\nFirst, we restrict the seeder so that it only begins with 10,000 cycles.\nThis setting is faster for smaller instances, since the LP relaxations are smaller and faster to solve.\nHowever, at 5000 vertices, this effect starts to be offset by the additional column generation that must be performed.\nFor larger instances, this restricted seeder is clearly worse.\nFinally, we restore the seeder to its optimized setting, but this time remove the optimality prover described in Section 5.2.3.\nAs in many column generation problems, the tailing-off effect is substantial.\nBy taking advantage of the properties of our problem, we manage to clear a market with 10,000 patients in about the same time it would otherwise have taken to clear a 6000-patient market.\n7.\nFIELDING THE TECHNOLOGY Our algorithm and implementation replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges, in December 2006.\nWe conduct a match run every two weeks, and the first transplants based on our solutions have already been performed.\nWhile there are (for political\/inter-personal reasons) at least four kidney exchanges in the US currently, everyone understands that a unified, unfragmented national exchange would 
save more lives.\nWe are in discussions with additional kidney exchanges that are interested in adopting our technology.\nThis way our technology (and the processes around it) will hopefully serve as a substrate that will eventually help in unifying the exchanges.\nAt least computational scalability is no longer an obstacle.\n8.\nCONCLUSION AND FUTURE RESEARCH In this work we have developed the most scalable exact algorithms for barter exchanges to date, with special focus on the upcoming national kidney-exchange market in which patients with kidney disease will be matched with compatible donors by swapping their own willing but incompatible donors.\nWith over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.\nOur work presents the first algorithm capable of clearing these markets on a nationwide scale.\nIt optimally solves the kidney exchange clearing problem with 10,000 donor-donee pairs.\nThus there is no need to resort to approximate solutions.\nThe best prior technology (vanilla CPLEX) cannot handle instances beyond about 900 donor-donee pairs because it runs out of memory.\nThe key to our improvement is incremental problem formulation.\nWe adapted two paradigms for the task: constraint generation and column generation.\nFor each, we developed a host of techniques that substantially improve both runtime and memory usage.\nSome of the techniques use domain-specific observations while others are domain independent.\nWe conclude that column generation scales dramatically better than constraint generation.\nFor column generation in the LP, our enhancements include pricing techniques, column seeding techniques, techniques for proving optimality without having to bring in all positive-price columns (and using another column-generation process in a different formulation to do so), and column removal techniques.\nFor the branch-and-price 
search in the integer program that surrounds the LP, our enhancements include primal heuristics, and we also compared branching strategies.\nUndoubtedly, further parameter tuning and perhaps additional speed improvement techniques could be used to make the algorithm even faster.\nOur algorithm also supports several generalizations, as desired by real-world kidney exchanges.\nThese include multiple alternative donors per patient, weighted edges in the market graph (to encode differences in expected life years added based on degrees of compatibility, patient age and weight, etc., as well as the probability of last-minute incompatibility), \"angel-triggered chains\" (chains of transplants triggered by altruistic donors who do not have patients associated with them, each chain ending with a left-over kidney), and additional issues (such as different scores for saving different altruistic donors or left-over kidneys for future match runs based on blood type, tissue type, and likelihood that the organ would not disappear from the market by the donor getting second thoughts).\nBecause we use an ILP methodology, we can also support a variety of side constraints, which often play an important role in markets in practice [19].\nWe can also support forcing part of the allocation, for example, \"This acutely sick teenager has to get a kidney if possible.\"\nOur work has treated the kidney exchange as a batch problem with full information (at least in the short run, kidney exchanges will most likely continue to run in batch mode every so often).\nTwo important directions for future work are to explicitly address both online and limited-information aspects of the problem.\nThe online aspect is that donees and donors will be arriving into the system over time, and it may be best not to execute the myopically optimal exchange now, but rather to save part of the current market for later matches.\nIn fact, some work has been done on this in certain restricted settings [22, 24].\nThe 
limited-information aspect is that even in batch mode, the graph provided as input is not completely correct: a number of donor-donee pairs believed to be compatible turn out to be incompatible when more expensive last-minute tests are performed.\nTherefore, it would be desirable to perform an optimization with this in mind, such as outputting a low-degree \"robust\" subgraph to be tested before the final match is produced, or to output a contingency plan in case of failure.\nWe are currently exploring a number of questions along these lines, but there is certainly much more to be done.\nAcknowledgments We thank economists Al Roth and Utku Unver, as well as kidney transplant surgeon Michael Rees, for alerting us to the fact that prior technology was inadequate for the clearing problem on a national scale, for supplying initial data sets, and for discussions on details of the kidney exchange process.\nWe also thank Don Sheehy for bringing to our attention the idea of shoe exchange.\nThis work was supported in part by the National Science Foundation under grants IIS-0427858 and CCF-0514922.\nFigure 6: Experimental results: average runtime with standard deviation bars (clearing time in seconds vs. number of patients, for our algorithm, our algorithm with restricted column seeder, our algorithm with no optimality prover, and the CPLEX cycle formulation).\n9.\nREFERENCES [1] C. Barnhart, E. L. Johnson, G. L. Nemhauser, M. W. P. Savelsbergh, and P. H. Vance.\nBranch-and-price: Column generation for solving huge integer programs.\nOperations Research, 46:316-329, May-June 1998.\n[2] R. Dechter and J. Pearl.\nGeneralized best-first search strategies and the optimality of A*.\nJournal of the ACM, 32(3):505-536, 1985.\n[3] F. L. Delmonico.\nExchanging kidneys - advances in living-donor transplantation.\nNew England Journal of Medicine, 350:1812-1814, 2004.\n[4] J. 
Edmonds.\nPaths, trees, and flowers.\nCanadian Journal of Mathematics, 17:449-467, 1965.\n[5] M. R. Garey and D. S. Johnson.\nComputers and Intractability: A Guide to the Theory of NP-Completeness.\n1990.\n[6] S. E. Gentry, D. L. Segev, and R. A. Montgomery.\nA comparison of populations served by kidney paired donation and list paired donation.\nAmerican Journal of Transplantation, 5(8):1914-1921, August 2005.\n[7] P. Hart, N. Nilsson, and B. Raphael.\nA formal basis for the heuristic determination of minimum cost paths.\nIEEE Transactions on Systems Science and Cybernetics, 4(2):100-107, 1968.\n[8] K. Hoffman and M. Padberg.\nSolving airline crew-scheduling problems by branch-and-cut.\nManagement Science, 39:657-682, 1993.\n[9] Intervac.\nhttp:\/\/intervac-online.com\/.\n[10] National odd shoe exchange.\nhttp:\/\/www.oddshoe.org\/.\n[11] Peerflix.\nhttp:\/\/www.peerflix.com.\n[12] Read it swap it.\nhttp:\/\/www.readitswapit.co.uk\/.\n[13] A. E. Roth, T. Sonmez, and M. U. Unver.\nKidney exchange.\nQuarterly Journal of Economics, 119(2):457-488, May 2004.\n[14] A. E. Roth, T. Sonmez, and M. U. Unver.\nA kidney exchange clearinghouse in New England.\nAmerican Economic Review, 95(2):376-380, May 2005.\n[15] A. E. Roth, T. Sonmez, and M. U. Unver.\nEfficient kidney exchange: Coincidence of wants in a market with compatibility-based preferences.\nAmerican Economic Review, forthcoming.\n[16] E. Rothberg.\nGabow's n3 maximum-weight matching algorithm: an implementation.\nThe First DIMACS Implementation Challenge, 1990.\n[17] S. L. Saidman, A. E. Roth, T. Sonmez, M. U. Unver, and F. L. Delmonico.\nIncreasing the opportunity of live kidney donation by matching for two and three way exchanges.\nTransplantation, 81(5):773-782, 2006.\n[18] T. Sandholm.\nOptimal winner determination algorithms.\nIn Combinatorial Auctions, Cramton, Shoham, and Steinberg, eds.\nMIT Press, 2006.\n[19] T. Sandholm and S. 
Suri.\nSide constraints and non-price attributes in markets.\nIn IJCAI-2001 Workshop on Distributed Constraint Reasoning, pages 55-61, Seattle, WA, 2001.\nTo appear in Games and Economic Behavior.\n[20] D. L. Segev, S. E. Gentry, D. S. Warren, B. Reeb, and R. A. Montgomery.\nKidney paired donation and optimizing the use of live donor organs.\nJournal of the American Medical Association, 293(15):1883-1890, April 2005.\n[21] United Network for Organ Sharing (UNOS).\nhttp:\/\/www.unos.org\/.\n[22] M. U. Unver.\nDynamic kidney exchange.\nWorking paper.\n[23] United States Renal Data System (USRDS).\nhttp:\/\/www.usrds.org\/.\n[24] S. A. Zenios.\nOptimal control of a paired-kidney exchange program.\nManagement Science, 48(3):328-342, March 2002.","lvl-3":"Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges\nABSTRACT\nIn barter-exchange markets, agents seek to swap their items with one another, in order to improve their own utilities.\nThese swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle.\nWe focus mainly on the upcoming national kidney-exchange market, where patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors.\nWith over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.\nThe clearing problem involves finding a social welfare maximizing exchange when the maximum length of a cycle is fixed.\nLong cycles are forbidden, since, for incentive reasons, all transplants in a cycle must be performed simultaneously.\nAlso, in barter-exchanges generally, more agents are affected if one drops out of a longer cycle.\nWe prove that the clearing problem with this cycle-length constraint is NP-hard.\nSolving it exactly is one of the main challenges in establishing a national kidney exchange.\nWe 
present the first algorithm capable of clearing these markets on a nationwide scale.\nThe key is incremental problem formulation.\nWe adapt two paradigms for the task: constraint generation and column generation.\nFor each, we develop techniques that dramatically improve both runtime and memory usage.\nWe conclude that column generation scales drastically better than constraint generation.\nOur algorithm also supports several generalizations, as demanded by real-world kidney exchanges.\nOur algorithm replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges.\nMatch runs are conducted every two weeks, and transplants based on our solutions have already been performed.\n1.\nINTRODUCTION\nThe role of the kidneys is to filter waste from the blood.\nKidney failure results in accumulation of this waste, which leads to death within months.\nOne treatment option is dialysis, in which the patient goes to a hospital to have his\/her blood filtered by an external machine.\nSeveral visits are required per week, and each takes several hours.\nThe quality of life on dialysis can be extremely low, and in fact many patients opt to withdraw from dialysis, leading to a natural death.\nOnly 12% of dialysis patients survive 10 years [23].\nInstead, the preferred treatment is a kidney transplant.\nKidney transplants are by far the most common transplant.\nUnfortunately, the demand for kidneys far outstrips supply.\nIn the United States in 2005, 4,052 people died waiting for a life-saving kidney transplant.\nDuring this time, almost 30,000 people were added to the national waiting list, while only 9,913 people left the list after receiving a deceased-donor kidney.\nThe waiting list currently has over 70,000 people, and the median waiting time ranges from 2 to 5 years, depending on blood type.
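To make the clearing problem stated in the abstract above concrete before the formal development, here is a small, purely illustrative brute-force sketch (our own toy code for this exposition, not the paper's branch-and-price algorithm; all function and variable names are hypothetical). It encodes a barter market as a directed graph, enumerates cycles of length at most L, and finds a maximum-weight set of vertex-disjoint cycles by exhaustive search:

```python
from itertools import combinations

# Illustrative toy sketch (NOT the paper's branch-and-price algorithm):
# clear a tiny barter market by brute force. An edge (u, v, w) means
# agent u gains utility w from receiving agent v's item.

def cycles_up_to(edges, n, L):
    """Enumerate directed simple cycles of length <= L, with weights."""
    adj = {u: {} for u in range(n)}
    for u, v, w in edges:
        adj[u][v] = w
    found = []

    def extend(path):
        u = path[-1]
        for v in adj[u]:
            if v == path[0] and len(path) >= 2:
                if path[0] == min(path):  # canonical start: skip rotations
                    weight = sum(adj[a][b]
                                 for a, b in zip(path, path[1:] + [path[0]]))
                    found.append((tuple(path), weight))
            elif v not in path and len(path) < L:
                extend(path + [v])

    for start in range(n):
        extend([start])
    return found

def clear_market(edges, n, L):
    """Maximum-weight set of vertex-disjoint cycles (exponential search)."""
    cycles = cycles_up_to(edges, n, L)
    best_weight, best_set = 0, ()
    for k in range(1, len(cycles) + 1):
        for subset in combinations(cycles, k):
            vertices = [v for cyc, _ in subset for v in cyc]
            if len(vertices) == len(set(vertices)):  # disjointness check
                weight = sum(w for _, w in subset)
                if weight > best_weight:
                    best_weight, best_set = weight, subset
    return best_weight, best_set
```

On the 5-agent example market discussed later (Figure 1, all edge weights 1, agents v1..v5 mapped to 0..4), this returns the weight-5 exchange {c4} when L = 5 and the weight-4 exchange {c1, c3} when L = 3. The exponential subset enumeration is exactly why such a direct approach cannot scale, and why the incremental ILP formulation developed in the paper is needed at national scale.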
(These waiting-list data are from the United Network for Organ Sharing [21].)\nFor many patients with kidney disease, the best option is to find a living donor, that is, a healthy person willing to donate one of his\/her two kidneys.\nAlthough there are marketplaces for buying and selling living-donor kidneys, the commercialization of human organs is almost universally regarded as unethical, and the practice is often explicitly illegal, such as in the US.\nHowever, in most countries, live donation is legal, provided it occurs as a gift with no financial compensation.\nIn 2005, there were 6,563 live donations in the US.\nThe number of live donations would have been much higher if it were not for the fact that, frequently, a potential donor and his intended recipient are blood-type or tissue-type incompatible.\nIn the past, the incompatible donor was sent home, leaving the patient to wait for a deceased-donor kidney.\nHowever, there are now a few regional kidney exchanges in the United States, in which patients can swap their incompatible donors with each other, in order to each obtain a compatible donor.\nThese markets are examples of barter exchanges.\nIn a barter-exchange market, agents (patients) seek to swap their items (incompatible donors) with each other.\nThese swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle.\nBarter exchanges are ubiquitous: examples include Peerflix (DVDs) [11], Read It Swap It (books) [12], and Intervac (holiday houses) [9].\nFor many years, there has even been a large shoe exchange in the United States [10].\nPeople with different-sized feet use this to avoid having to buy two pairs of shoes.\nLeg amputees have a separate exchange to share the cost of buying a single pair of shoes.\nWe can encode a barter exchange market as a directed graph G = (V, E) in the following way.\nConstruct one vertex for each agent.\nAdd a weighted edge e from one agent vi to another vj if vi wants the item of vj.\nThe weight 
w_e of e represents the utility to vi of obtaining vj's item.\nA cycle c in this graph represents a possible swap, with each agent in the cycle obtaining the item of the next agent.\nThe weight w_c of a cycle c is the sum of its edge weights.\nAn exchange is a collection of disjoint cycles.\nThe weight of an exchange is the sum of its cycle weights.\nA social welfare maximizing exchange is one with maximum weight.\nFigure 1 illustrates an example market with 5 agents, {v1, v2,..., v5}, in which all edges have weight 1.\nThe market has 4 cycles, c1 = (v1, v2), c2 = (v2, v3), c3 = (v3, v4) and c4 = (v1, v2, v3, v4, v5), and two (inclusion-)maximal exchanges, namely M1 = {c4} and M2 = {c1, c3}.\nExchange M1 has both maximum weight and maximum cardinality (i.e., it includes the most edges\/vertices).\nFigure 1: Example barter exchange market.\nThe clearing problem is to find a maximum-weight exchange consisting of cycles with length at most some small constant L.\nThis cycle-length constraint arises naturally for several reasons.\nFor example, in a kidney exchange, all operations in a cycle have to be performed simultaneously; otherwise a donor might back out after his incompatible partner has received a kidney.\n(One cannot write a binding contract to donate an organ.)\nThis gives rise to a logistical constraint on cycle size: even if all the donors are operated on first, and the same personnel and facilities are then used to operate on the donees, a k-cycle requires between 3k and 6k doctors, around 4k nurses, and almost 2k operating rooms.\nDue to such resource constraints, the upcoming national kidney exchange market will likely allow only cycles of length 2 and 3.\nAnother motivation for short cycles is that if the cycle fails to exchange, fewer agents are affected.\nFor example, last-minute testing in a kidney exchange often reveals new incompatibilities that were not detected in the initial testing (based on which the compatibility graph was constructed).\nMore 
generally, an agent may drop out of a cycle if his preferences have changed, or he\/she simply fails to fulfill his obligations (such as sending a book to another agent in the cycle) due to forgetfulness.\nIn Section 3, we show that (the decision version of) the clearing problem is NP-complete for L ≥ 3.\nOne approach then might be to look for a good heuristic or approximation algorithm.\nHowever, for two reasons, we aim for an exact algorithm based on an integer-linear program (ILP) formulation, which we solve using specialized tree search.\nFirst, any loss of optimality could lead to unnecessary patient deaths.\nSecond, an attractive feature of using an ILP formulation is that it allows one to easily model a number of variations on the objective, and to add additional constraints to the problem.\nFor example, if 3-cycles are believed to be more likely to fail than 2-cycles, then one can simply give them a weight that is appropriately lower than 3\/2 the weight of a 2-cycle.\nOr, if for various (e.g., ethical) reasons one requires a maximum cardinality exchange, one can at least in a second pass find the solution (out of all maximum cardinality solutions) that has the fewest 3-cycles.\nOther variations one can solve for include finding various forms of \"fault tolerant\" (non-disjoint) collections of cycles in the event that certain pairs that were thought to be compatible turn out to be incompatible after all.\nIn this paper, we present the first algorithm capable of clearing these markets on a nationwide scale.\nStraightforward ILP encodings are too large to even construct on current hardware, let alone solve.\nThe key then is incremental problem formulation.\nWe adapt two paradigms for the task: constraint generation and column generation.\nFor each, we develop a host of (mainly problem-specific) techniques that dramatically improve both runtime and memory usage.\n1.1 Prior Work\nSeveral recent papers have used simulations and market-clearing 
algorithms to explore the impact of a national kidney exchange [13, 20, 6, 14, 15, 17].\nFor example, using Edmonds's maximum-matching algorithm [4], [20] shows that a national pairwise-exchange market (using length-2 cycles only) would result in more transplants, reduced waiting time, and savings of $750 million in health care costs over 5 years.\nThose results are conservative in two ways.\nFirstly, the simulated market contained only 4,000 initial patients, with 250 patients added every 3 months.\nIt has been reported to us that the market could be almost double this size.\nSecondly, the exchanges were restricted to length-2 cycles (because that is all that can be modeled as maximum matching, and solved using Edmonds's algorithm).\nAllowing length-3 cycles leads to additional significant gains.\nThis has been demonstrated on kidney exchange markets with 100 patients by using CPLEX to solve an integer-program encoding of the clearing problem [15].\nIn this paper, we present an alternative algorithm for this integer program that can clear markets with over 10,000 patients (and that same number of willing donors).\nAllowing cycles of length more than 3 often leads to no improvement in the size of the exchange [15].\n(Furthermore, in a simplified theoretical model, any kidney exchange can be converted into one with cycles of length at most 4 [15].)\nWhilst this does not hold for general barter exchanges, or even for all kidney exchange markets, in Section 5.2.3 we make use of the observation that short cycles suffice to dramatically increase the speed of our algorithm.\nAt a high level, the clearing problem for barter exchanges is similar to the clearing problem (aka winner determination problem) in combinatorial auctions.\nIn both settings, the idea is to gather all the pertinent information about the agents into a central clearing point and to run a centralized clearing algorithm to determine the allocation.\nBoth problems are NP-hard.\nBoth are best solved using 
tree search techniques.\nSince 1999, significant work has been done in computer science and operations research on faster optimal tree search algorithms for clearing combinatorial auctions.\n(For a recent review, see [18].)\nHowever, the kidney exchange clearing problem (with a limit of 3 or more on cycle size) differs from the combinatorial auction clearing problem in significant ways.\nThe most important difference is that the natural formulations of the combinatorial auction problem tend to easily fit in memory, so time is the bottleneck in practice.\nIn contrast, the natural formulations of the kidney exchange problem (with L = 3) take at least cubic space in the number of patients to even model, and therefore memory becomes a bottleneck well before time does when using standard tree search, such as branch-and-cut in CPLEX, to tackle the problem.\n(On a 1GB computer and a realistic standard instance generator, discussed later, CPLEX 10.010 runs out of memory on five of the ten 900-patient instances and ten of the ten 1,000-patient instances that we generated.)\nTherefore, the approaches that have been developed for combinatorial auctions cannot handle the kidney exchange problem.\n1.2 Paper Outline\nThe rest of the paper is organized as follows.\nSection 2 discusses the process by which we generate realistic kidney exchange market data, in order to benchmark the clearing algorithms.\nSection 3 contains a proof that the market clearing decision problem is NP-complete.\nSections 4 and 5 each contain an ILP formulation of the clearing problem.\nWe also detail in those sections our techniques used to solve those programs on large instances.\nSection 6 presents experiments on the various techniques.\nSection 7 discusses recent fielding of our algorithm.\nFinally, we present our conclusions in Section 8, and suggest future research directions.\n2.\nMARKET CHARACTERISTICS AND INSTANCE GENERATOR\n3.\nPROBLEM COMPLEXITY\n4.\nSOLUTION APPROACHES BASED ON AN EDGE 
FORMULATION\n4.1 Constraint Seeder\n4.2 Constraint Generation\n4.3 Experimental performance\n5.\nSOLUTION APPROACHES BASED ON A CYCLE FORMULATION\n5.1 Edge vs Cycle Formulation\n5.2 Column Generation for the LP\n5.2.1 Pricing Problem\n5.2.2 Column Seeding\n5.2.3 Proving Optimality\n5.2.4 Column Management\n5.3 Branch-and-Price Search for the ILP\n5.3.1 Primal Heuristics\n5.3.2 Cycle Brancher\n6.\nEXPERIMENTAL RESULTS\n7.\nFIELDING THE TECHNOLOGY\nOur algorithm and implementation replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges, in December 2006.\nWe conduct a match run every two weeks, and the first transplants based on our solutions have already been performed.\nWhile there are (for political\/inter-personal reasons) at least four kidney exchanges in the US currently, everyone understands that a unified, unfragmented national exchange would save more lives.\nWe are in discussions with additional kidney exchanges that are interested in adopting our technology.\nThis way our technology (and the processes around it) will hopefully serve as a substrate that will eventually help in unifying the exchanges.\nAt least computational scalability is no longer an obstacle.\n8.\nCONCLUSION AND FUTURE RESEARCH In this work we have developed the most scalable exact algorithms for barter exchanges to date, with special focus on the upcoming national kidney-exchange market in which patients with kidney disease will be matched with compatible donors by swapping their own willing but incompatible donors.\nWith over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.\nOur work presents the first algorithm capable of clearing these markets on a nationwide scale.\nIt optimally solves the kidney exchange clearing problem with 10,000 donor-donee pairs.\nThus there is no need to resort to 
approximate solutions.\nThe best prior technology (vanilla CPLEX) cannot handle instances beyond about 900 donor-donee pairs because it runs out of memory.\nThe key to our improvement is incremental problem formulation.\nWe adapted two paradigms for the task: constraint generation and column generation.\nFor each, we developed a host of techniques that substantially improve both runtime and memory usage.\nSome of the techniques use domain-specific observations while others are domain independent.\nWe conclude that column generation scales dramatically better than constraint generation.\nFor column generation in the LP, our enhancements include pricing techniques, column seeding techniques, techniques for proving optimality without having to bring in all positive-price columns (and using another column-generation process in a different formulation to do so), and column removal techniques.\nFor the branch-and-price search in the integer program that surrounds the LP, our enhancements include primal heuristics, and we also compared branching strategies.\nUndoubtedly, further parameter tuning and perhaps additional speed improvement techniques could be used to make the algorithm even faster.\nOur algorithm also supports several generalizations, as desired by real-world kidney exchanges.\nThese include multiple alternative donors per patient, weighted edges in the market graph (to encode differences in expected life years added based on degrees of compatibility, patient age and weight, etc., as well as the probability of last-minute incompatibility), \"angel-triggered chains\" (chains of transplants triggered by altruistic donors who do not have patients associated with them, each chain ending with a left-over kidney), and additional issues (such as different scores for saving different altruistic donors or left-over kidneys for future match runs based on blood type, tissue type, and likelihood that the organ would not disappear from the market by the donor getting second 
thoughts).\nBecause we use an ILP methodology, we can also support a variety of side constraints, which often play an important role in markets in practice [19].\nWe can also support forcing part of the allocation, for example, \"This acutely sick teenager has to get a kidney if possible.\"\nOur work has treated the kidney exchange as a batch problem with full information (at least in the short run, kidney exchanges will most likely continue to run in batch mode every so often).\nTwo important directions for future work are to explicitly address both online and limited-information aspects of the problem.\nThe online aspect is that donees and donors will be arriving into the system over time, and it may be best to not execute the myopically optimal exchange now, but rather save part of the current market for later matches.\nIn fact, some work has been done on this in certain restricted settings [22, 24].\nThe limited-information aspect is that even in batch mode, the graph provided as input is not completely correct: a number of donor-donee pairs believed to be compatible turn out to be incompatible when more expensive last-minute tests are performed.\nTherefore, it would be desirable to perform an optimization with this in mind, such as outputting a low-degree \"robust\" subgraph to be tested before the final match is produced, or to output a contingency plan in case of failure.\nWe are currently exploring a number of questions along these lines but there is certainly much more to be done.","lvl-4":"Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges\nABSTRACT\nIn barter-exchange markets, agents seek to swap their items with one another, in order to improve their own utilities.\nThese swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle.\nWe focus mainly on the upcoming national kidney-exchange market, where patients with kidney disease can obtain compatible donors by swapping their 
own willing but incompatible donors.\nWith over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.\nThe clearing problem involves finding a social welfare maximizing exchange when the maximum length of a cycle is fixed.\nLong cycles are forbidden, since, for incentive reasons, all transplants in a cycle must be performed simultaneously.\nAlso, in barter-exchanges generally, more agents are affected if one drops out of a longer cycle.\nWe prove that the clearing problem with this cycle-length constraint is NP-hard.\nSolving it exactly is one of the main challenges in establishing a national kidney exchange.\nWe present the first algorithm capable of clearing these markets on a nationwide scale.\nThe key is incremental problem formulation.\nWe adapt two paradigms for the task: constraint generation and column generation.\nFor each, we develop techniques that dramatically improve both runtime and memory usage.\nWe conclude that column generation scales drastically better than constraint generation.\nOur algorithm also supports several generalizations, as demanded by real-world kidney exchanges.\nOur algorithm replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges.\nMatch runs are conducted every two weeks, and transplants based on our solutions have already been performed.\n1.\nINTRODUCTION\nThe role of the kidneys is to filter waste from the blood.\nKidney failure results in accumulation of this waste, which leads to death within months.\nOne treatment option is dialysis, in which the patient goes to a hospital to have his\/her blood filtered by an external machine.\nSeveral visits are required per week, and each takes several hours.\nThe quality of life on dialysis can be extremely low, and in fact many patients opt to withdraw from dialysis, leading to a natural death.\nOnly 
12% of dialysis patients survive 10 years [23].\nInstead, the preferred treatment is a kidney transplant.\nKidney transplants are by far the most common transplant.\nUnfortunately, the demand for kidneys far outstrips supply.\nIn the United States in 2005, 4,052 people died waiting for a life-saving kidney transplant.\nDuring this time, almost 30,000 people were added to the national waiting list, while only 9,913 people left the list after receiving a deceased-donor kidney.\nFor many patients with kidney disease, the best option is to find a living donor, that is, a healthy person willing to donate one of his\/her two kidneys.\nIn 2005, there were 6,563 live donations in the US.\nThe number of live donations would have been much higher if it were not for the fact that, frequently, a potential donor and his intended recipient are blood-type or tissue-type incompatible.\nIn the past, the incompatible donor was sent home, leaving the patient to wait for a deceased-donor kidney.\nHowever, there are now a few regional kidney exchanges in the United States, in which patients can swap their incompatible donors with each other, in order to each obtain a compatible donor.\nThese markets are examples of barter exchanges.\nIn a barter-exchange market, agents (patients) seek to swap their items (incompatible donors) with each other.\nThese swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle.\nBarter exchanges are ubiquitous: examples include Peerflix (DVDs) [11], Read It Swap It (books) [12], and Intervac (holiday houses) [9].\nFor many years, there has even been a large shoe exchange in the United States [10].\nPeople with different-sized feet use this to avoid having to buy two pairs of shoes.\nLeg amputees have a separate exchange to share the cost of buying a single pair of shoes.\nWe can encode a barter exchange market as a directed graph G = (V, E) in the following way.\nConstruct one vertex for each agent.\nAdd a weighted edge e from one agent vi to another vj if vi wants the item of vj.\nThe weight w_e of e represents the utility to vi of 
obtaining vj's item.\nA cycle c in this graph represents a possible swap, with each agent in the cycle obtaining the item of the next agent.\nThe weight wc of a cycle c is the sum of its edge weights.\nAn exchange is a collection of disjoint cycles.\nThe weight of an exchange is the sum of its cycle weights.\nA social welfare maximizing exchange is one with maximum weight.\nFigure 1 illustrates an example market with 5 agents, {v1, v2,..., v5}, in which all edges have weight 1.\nThe market has 4 cycles, c1 = (v1, v2), c2 = (v2, v3), c3 = (v3, v4) and c4 = (v1, v2, v3, v4, v5), and two (inclusion) maximal exchanges, namely M1 = {c4} and M2 = {c1, c3}.\nExchange M1 has both maximum weight and maximum cardinality (i.e., it includes the most edges\/vertices).\nFigure 1: Example barter exchange market.\nThe clearing problem is to find a maximum-weight exchange consisting of cycles with length at most some small constant L.\nThis cycle-length constraint arises naturally for several reasons.\nFor example, in a kidney exchange, all operations in a cycle have to be performed simultaneously; otherwise a donor might back out after his incompatible partner has received a kidney.\nDue to such resource constraints, the upcoming national kidney exchange market will likely allow only cycles of length 2 and 3.\nAnother motivation for short cycles is that if the cycle fails to exchange, fewer agents are affected.\nFor example, last-minute testing in a kidney exchange often reveals new incompatibilities that were not detected in the initial testing (based on which the compatibility graph was constructed).\nIn Section 3, we show that (the decision version of) the clearing problem is NP-complete for L> 3.\nOne approach then might be to look for a good heuristic or approximation algorithm.\nHowever, for two reasons, we aim for an exact algorithm based on an integer-linear program (ILP) formulation, which we solve using specialized tree search.\n9 First, any loss of optimality could lead 
to unnecessary patient deaths.\n9 Second, an attractive feature of using an ILP formula\ntion is that it allows one to easily model a number of variations on the objective, and to add additional constraints to the problem.\nOr, if for various (e.g., ethical) reasons one requires a maximum cardinality exchange, one can at least in a second pass find the solution (out of all maximum cardinality solutions) that has the fewest 3-cycles.\nOther variations one can solve for include finding various forms of \"fault tolerant\" (non-disjoint) collections of cycles in the event that certain pairs that were thought to be compatible turn out to be incompatible after all.\nIn this paper, we present the first algorithm capable of clearing these markets on a nationwide scale.\nStraight-forward ILP encodings are too large to even construct on current hardware--not to talk about solving them.\nThe key then is incremental problem formulation.\nWe adapt two paradigms for the task: constraint generation and column generation.\nFor each, we develop a host of (mainly problemspecific) techniques that dramatically improve both runtime and memory usage.\n1.1 Prior Work\nSeveral recent papers have used simulations and marketclearing algorithms to explore the impact of a national kidney exchange [13, 20, 6, 14, 15, 17].\nFor example, using Edmond's maximum-matching algorithm [4], [20] shows that a national pairwise-exchange market (using length-2 cycles only) would result in more transplants, reduced waiting time, and savings of $750 million in heath care costs over 5 years.\nThose results are conservative in two ways.\nFirstly, the simulated market contained only 4,000 initial patients, with 250 patients added every 3 months.\nIt has been reported to us that the market could be almost double this size.\nSecondly, the exchanges were restricted to length-2 cycles (because that is all that can be modeled as maximum matching, and solved using Edmonds's algorithm).\nAllowing length-3 cycles 
leads to additional significant gains.\nThis has been demonstrated on kidney exchange markets with 100 patients by using CPLEX to solve an integer-program encoding of the clearing problem [15].\nIn this paper, we\npresent an alternative algorithm for this integer program that can clear markets with over 10,000 patients (and that same number of willing donors).\nAllowing cycles of length more than 3 often leads to no improvement in the size of the exchange [15].\n(Furthermore, in a simplified theoretical model, any kidney exchange can be converted into one with cycles of length at most 4 [15].)\nWhilst this does not hold for general barter exchanges, or even for all kidney exchange markets, in Section 5.2.3 we make use of the observation that short cycles suffice to dramatically increase the speed of our algorithm.\nAt a high-level, the clearing problem for barter exchanges is similar to the clearing problem (aka winner determination problem) in combinatorial auctions.\nIn both settings, the idea is to gather all the pertinent information about the agents into a central clearing point and to run a centralized clearing algorithm to determine the allocation.\nBoth problems are NP-hard.\nBoth are best solved using tree search techniques.\nSince 1999, significant work has been done in computer science and operations research on faster optimal tree search algorithms for clearing combinatorial auctions.\nHowever, the kidney exchange clearing problem (with a limit of 3 or more on cycle size) is different from the combinatorial auction clearing problem in significant ways.\nThe most important difference is that the natural formulations of the combinatorial auction problem tend to easily fit in memory, so time is the bottleneck in practice.\nIn contrast, the natural formulations of the kidney exchange problem (with L = 3) take at least cubic space in the number of patients to even model, and therefore memory becomes a bottleneck much before time does when using standard tree 
search, such as branch-andcut in CPLEX, to tackle the problem.\nTherefore, the approaches that have been developed for combinatorial auctions cannot handle the kidney exchange problem.\n1.2 Paper Outline\nThe rest of the paper is organized as follows.\nSection 2 discusses the process by which we generate realistic kidney exchange market data, in order to benchmark the clearing algorithms.\nSection 3 contains a proof that the market clearing decision problem is NP-complete.\nSections 4 and 5 each contain an ILP formulation of the clearing problem.\nWe also detail in those sections our techniques used to solve those programs on large instances.\nSection 6 presents experiments on the various techniques.\nSection 7 discusses recent fielding of our algorithm.\nFinally, we present our conclusions in Section 8, and suggest future research directions.\n7.\nFIELDING THE TECHNOLOGY\nOur algorithm and implementation replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges, in December 2006.\nWe conduct a match run every two weeks, and the first transplants based on our solutions have already been conducted.\nWhile there are (for political\/inter-personal reasons) at least four kidney exchanges in the US currently, everyone understands that a unified unfragmented national exchange would save more lives.\nWe are in discussions with additional kidney exchanges that are interested in adopting our technology.\nThis way our technology (and the processes around it) will hopefully serve as a substrate that will eventually help in unifying the exchanges.\nAt least computational scalability is no longer an obstacle.\n8.\nCONCLUSION AND FUTURE RESEARCH In this work we have developed the most scalable exact algorithms for barter exchanges to date, with special focus on the upcoming national kidney-exchange market in which patients with kidney disease will be matched with compatible donors by swapping their own willing but 
incompatible donors. With over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.

Our work presents the first algorithm capable of clearing these markets on a nationwide scale. It optimally solves the kidney exchange clearing problem with 10,000 donor-donee pairs. The best prior technology (vanilla CPLEX) cannot handle instances beyond about 900 donor-donee pairs because it runs out of memory. The key to our improvement is incremental problem formulation. We adapted two paradigms for the task: constraint generation and column generation. For each, we developed a host of techniques that substantially improve both runtime and memory usage. Some of the techniques use domain-specific observations while others are domain independent. We conclude that column generation scales dramatically better than constraint generation. Undoubtedly, further parameter tuning and perhaps additional speed improvement techniques could be used to make the algorithm even faster.

Our algorithm also supports several generalizations, as desired by real-world kidney exchanges. Because we use an ILP methodology, we can also support a variety of side constraints, which often play an important role in markets in practice [19]. We can also support forcing part of the allocation, for example, "This acutely sick teenager has to get a kidney if possible."

Our work has treated the kidney exchange as a batch problem with full information (at least in the short run, kidney exchanges will most likely continue to run in batch mode every so often). Two important directions for future work are to explicitly address both the online and limited-information aspects of the problem. The online aspect is that donees and donors will be arriving into the system over time, and it may be best not to execute the myopically optimal exchange now, but rather save part of the
current market for later matches.

Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges

ABSTRACT

In barter-exchange markets, agents seek to swap their items with one another, in order to improve their own utilities. These swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle. We focus mainly on the upcoming national kidney-exchange market, where patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. With over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease. The clearing problem involves finding a social welfare maximizing exchange when the maximum length of a cycle is fixed. Long cycles are forbidden, since, for incentive reasons, all transplants in a cycle must be performed simultaneously. Also, in barter exchanges generally, more agents are affected if one drops out of a longer cycle. We prove that the clearing problem with this cycle-length constraint is NP-hard. Solving it exactly is one of the main challenges in establishing a national kidney exchange. We present the first algorithm capable of clearing these markets on a nationwide scale. The key is incremental problem formulation. We adapt two paradigms for the task: constraint generation and column generation. For each, we develop techniques that dramatically improve both runtime and memory usage. We conclude that column generation scales drastically better than constraint generation. Our algorithm also supports several generalizations, as demanded by real-world kidney exchanges. Our algorithm replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges. The match runs are conducted every two weeks and transplants based on our optimizations have already been
conducted.

1. INTRODUCTION

The role of kidneys is to filter waste from blood. Kidney failure results in accumulation of this waste, which leads to death in months. One treatment option is dialysis, in which the patient goes to a hospital to have his/her blood filtered by an external machine. Several visits are required per week, and each takes several hours. The quality of life on dialysis can be extremely low, and in fact many patients opt to withdraw from dialysis, leading to a natural death. Only 12% of dialysis patients survive 10 years [23]. Instead, the preferred treatment is a kidney transplant. Kidney transplants are by far the most common transplant.

Unfortunately, the demand for kidneys far outstrips supply. In the United States in 2005, 4,052 people died waiting for a life-saving kidney transplant. During this time, almost 30,000 people were added to the national waiting list, while only 9,913 people left the list after receiving a deceased-donor kidney. The waiting list currently has over 70,000 people, and the median waiting time ranges from 2 to 5 years, depending on blood type.
(Data from the United Network for Organ Sharing [21].)

For many patients with kidney disease, the best option is to find a living donor, that is, a healthy person willing to donate one of his/her two kidneys. Although there are marketplaces for buying and selling living-donor kidneys, the commercialization of human organs is almost universally regarded as unethical, and the practice is often explicitly illegal, such as in the US. However, in most countries, live donation is legal, provided it occurs as a gift with no financial compensation. In 2005, there were 6,563 live donations in the US. The number of live donations would have been much higher if it were not for the fact that, frequently, a potential donor and his intended recipient are blood-type or tissue-type incompatible. In the past, the incompatible donor was sent home, leaving the patient to wait for a deceased-donor kidney. However, there are now a few regional kidney exchanges in the United States, in which patients can swap their incompatible donors with each other, in order to each obtain a compatible donor.

These markets are examples of barter exchanges. In a barter-exchange market, agents (patients) seek to swap their items (incompatible donors) with each other. These swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle. Barter exchanges are ubiquitous: examples include Peerflix (DVDs) [11], Read It Swap It (books) [12], and Intervac (holiday houses) [9]. For many years, there has even been a large shoe exchange in the United States [10]. People with different-sized feet use this to avoid having to buy two pairs of shoes. Leg amputees have a separate exchange to share the cost of buying a single pair of shoes.

We can encode a barter exchange market as a directed graph G = (V, E) in the following way. Construct one vertex for each agent. Add a weighted edge e from one agent v_i to another v_j, if v_i wants the item of v_j. The weight
w_e of e represents the utility to v_i of obtaining v_j's item. A cycle c in this graph represents a possible swap, with each agent in the cycle obtaining the item of the next agent. The weight w_c of a cycle c is the sum of its edge weights. An exchange is a collection of disjoint cycles. The weight of an exchange is the sum of its cycle weights. A social welfare maximizing exchange is one with maximum weight.

Figure 1 illustrates an example market with 5 agents, {v_1, v_2, ..., v_5}, in which all edges have weight 1. The market has 4 cycles, c_1 = (v_1, v_2), c_2 = (v_2, v_3), c_3 = (v_3, v_4) and c_4 = (v_1, v_2, v_3, v_4, v_5), and two (inclusion) maximal exchanges, namely M_1 = {c_4} and M_2 = {c_1, c_3}. Exchange M_1 has both maximum weight and maximum cardinality (i.e., it includes the most edges/vertices).

Figure 1: Example barter exchange market.

The clearing problem is to find a maximum-weight exchange consisting of cycles with length at most some small constant L. This cycle-length constraint arises naturally for several reasons. For example, in a kidney exchange, all operations in a cycle have to be performed simultaneously; otherwise a donor might back out after his incompatible partner has received a kidney. (One cannot write a binding contract to donate an organ.) This gives rise to a logistical constraint on cycle size: even if all the donors are operated on first and the same personnel and facilities are used to then operate on the donees, a k-cycle requires between 3k and 6k doctors, around 4k nurses, and almost 2k operating rooms. Due to such resource constraints, the upcoming national kidney exchange market will likely allow only cycles of length 2 and 3. Another motivation for short cycles is that if the cycle fails to exchange, fewer agents are affected. For example, last-minute testing in a kidney exchange often reveals new incompatibilities that were not detected in the initial testing (based on which the compatibility graph was constructed). More
generally, an agent may drop out of a cycle if his preferences have changed, or he/she simply fails to fulfill his obligations (such as sending a book to another agent in the cycle) due to forgetfulness.

In Section 3, we show that (the decision version of) the clearing problem is NP-complete for L ≥ 3. One approach then might be to look for a good heuristic or approximation algorithm. However, for two reasons, we aim for an exact algorithm based on an integer-linear program (ILP) formulation, which we solve using specialized tree search.

• First, any loss of optimality could lead to unnecessary patient deaths.

• Second, an attractive feature of using an ILP formulation is that it allows one to easily model a number of variations on the objective, and to add additional constraints to the problem. For example, if 3-cycles are believed to be more likely to fail than 2-cycles, then one can simply give them a weight that is appropriately lower than 3/2 the weight of a 2-cycle. Or, if for various (e.g., ethical) reasons one requires a maximum cardinality exchange, one can at least in a second pass find the solution (out of all maximum cardinality solutions) that has the fewest 3-cycles. Other variations one can solve for include finding various forms of "fault tolerant" (non-disjoint) collections of cycles in the event that certain pairs that were thought to be compatible turn out to be incompatible after all.

In this paper, we present the first algorithm capable of clearing these markets on a nationwide scale. Straightforward ILP encodings are too large even to construct on current hardware, let alone solve. The key then is incremental problem formulation. We adapt two paradigms for the task: constraint generation and column generation. For each, we develop a host of (mainly problem-specific) techniques that dramatically improve both runtime and memory usage.

1.1 Prior Work

Several recent papers have used simulations and market-clearing
algorithms to explore the impact of a national kidney exchange [13, 20, 6, 14, 15, 17]. For example, using Edmonds's maximum-matching algorithm [4], [20] shows that a national pairwise-exchange market (using length-2 cycles only) would result in more transplants, reduced waiting time, and savings of $750 million in health care costs over 5 years. Those results are conservative in two ways. Firstly, the simulated market contained only 4,000 initial patients, with 250 patients added every 3 months. It has been reported to us that the market could be almost double this size. Secondly, the exchanges were restricted to length-2 cycles (because that is all that can be modeled as maximum matching, and solved using Edmonds's algorithm). Allowing length-3 cycles leads to additional significant gains. This has been demonstrated on kidney exchange markets with 100 patients by using CPLEX to solve an integer-program encoding of the clearing problem [15]. In this paper, we present an alternative algorithm for this integer program that can clear markets with over 10,000 patients (and that same number of willing donors).

Allowing cycles of length more than 3 often leads to no improvement in the size of the exchange [15]. (Furthermore, in a simplified theoretical model, any kidney exchange can be converted into one with cycles of length at most 4 [15].) Whilst this does not hold for general barter exchanges, or even for all kidney exchange markets, in Section 5.2.3 we make use of the observation that short cycles suffice to dramatically increase the speed of our algorithm.

At a high level, the clearing problem for barter exchanges is similar to the clearing problem (aka winner determination problem) in combinatorial auctions. In both settings, the idea is to gather all the pertinent information about the agents into a central clearing point and to run a centralized clearing algorithm to determine the allocation. Both problems are NP-hard. Both are best solved using
tree search techniques. Since 1999, significant work has been done in computer science and operations research on faster optimal tree search algorithms for clearing combinatorial auctions. (For a recent review, see [18].)

However, the kidney exchange clearing problem (with a limit of 3 or more on cycle size) is different from the combinatorial auction clearing problem in significant ways. The most important difference is that the natural formulations of the combinatorial auction problem tend to easily fit in memory, so time is the bottleneck in practice. In contrast, the natural formulations of the kidney exchange problem (with L = 3) take at least cubic space in the number of patients to even model, and therefore memory becomes a bottleneck long before time does when using standard tree search, such as branch-and-cut in CPLEX, to tackle the problem. (On a 1 GB computer and a realistic standard instance generator, discussed later, CPLEX 10.0 runs out of memory on five of the ten 900-patient instances and on all ten 1,000-patient instances that we generated.) Therefore, the approaches that have been developed for combinatorial auctions cannot handle the kidney exchange problem.

1.2 Paper Outline

The rest of the paper is organized as follows. Section 2 discusses the process by which we generate realistic kidney exchange market data, in order to benchmark the clearing algorithms. Section 3 contains a proof that the market clearing decision problem is NP-complete. Sections 4 and 5 each contain an ILP formulation of the clearing problem. We also detail in those sections our techniques used to solve those programs on large instances. Section 6 presents experiments on the various techniques. Section 7 discusses recent fielding of our algorithm. Finally, we present our conclusions in Section 8, and suggest future research directions.

2. MARKET CHARACTERISTICS AND INSTANCE GENERATOR

We test the algorithms on simulated kidney exchange markets, which are
generated by a process described in Saidman et al. [17]. This process is based on the extensive nationwide data maintained by the United Network for Organ Sharing (UNOS) [21], so it generates a realistic instance distribution. Several papers have used variations of this process to demonstrate the effectiveness of a national kidney exchange (extrapolating from small instances or restricting the clearing to 2-cycles) [6, 20, 14, 13, 15, 17].

Briefly, the process involves generating patients with a random blood type, sex, and probability of being tissue-type incompatible with a randomly chosen donor. These probabilities are based on actual real-world population data. Each patient is assigned a potential donor with a random blood type and relation to the patient. If the patient and potential donor are incompatible, the two are entered into the market. Blood type and tissue type information is then used to decide which patients and donors are compatible. One complication, handled by the generator, is that if the patient is female, and she has had a child with her potential donor, then the probability that the two are incompatible increases. (This is because the mother develops antibodies to her partner during pregnancy.) Finally, although our algorithms can handle more general weight functions, patients have a utility of 1 for compatible donors, since their survival probability is not affected by the choice of donor [3]. This means that the maximum-weight exchange has maximum cardinality.

Table 1 gives lower and upper bounds on the size of a maximum-cardinality exchange in the kidney-exchange market. The lower bounds were found by clearing the market with length-2 cycles only, while the upper bounds had no restriction on cycle length. For each market size, the bounds were computed over 10 randomly generated markets. Note that there can be a large amount of variability in the markets: in one 5,000-patient market, fewer than 1,000 patients were in the
maximum-cardinality exchange.

Table 1: Upper and lower bounds on exchange size.

Table 2 gives additional characteristics of the kidney-exchange market. Note that a market with 5,000 patients can already have more than 450 million cycles of length 2 and 3.

Table 2: Market characteristics.

3. PROBLEM COMPLEXITY

In this section, we prove that (the decision version of) the market clearing problem with short cycles is NP-complete.

THEOREM. The clearing problem with maximum cycle length L is NP-complete for L ≥ 3 (in its decision version).

PROOF. It is clear that this problem is in NP. For NP-hardness, we reduce from 3D-Matching, which is the problem of, given disjoint sets X, Y and Z of size q, and a set of triples T ⊆ X × Y × Z, deciding if there is a disjoint subset M of T with size q. One straightforward idea is to construct a tripartite graph with vertex sets X ∪ Y ∪ Z and directed edges (x_a, y_b), (y_b, z_c), and (z_c, x_a) for each triple t_i = {x_a, y_b, z_c} ∈ T. However, it is not too hard to see that this encoding fails, because a perfect cycle cover may include a cycle with no corresponding triple.

Instead then, we use the following reduction. Given an instance of 3D-Matching, construct one vertex for each element in X, Y and Z. For each triple t_i = {x_a, y_b, z_c}, construct the gadget in Figure 2, which is similar to one in Garey and Johnson [5, pp. 68-69]. Note that the gadgets intersect only on vertices in X ∪ Y ∪ Z. It is clear that this construction can be done in polynomial time.

Figure 2: NP-completeness gadget for triple t_i and maximum cycle length L.

Let M be a perfect 3D-Matching. We will show the construction admits a perfect cycle cover by short cycles. If t_i = {x_a, y_b, z_c} ∈ M, add from t_i's gadget the three length-L cycles containing x_a, y_b and z_c respectively. Also add the cycle (x_a^i, y_b^i, z_c^i). Otherwise, if t_i ∉ M, add the three length-L cycles containing x_a^i, y_b^i and z_c^i respectively. It is clear that all vertices are covered, since M partitions X ∪ Y ∪ Z.
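The remark above, that the naive tripartite encoding fails, can be checked mechanically. The following sketch (a hypothetical toy instance, not taken from the paper) exhibits a 3D-Matching instance with no perfect matching whose naive encoding nevertheless admits a perfect cycle cover:

```python
from itertools import combinations, permutations

# Hypothetical toy 3D-Matching instance (not from the paper): q = 2.
X, Y, Z = ["x1", "x2"], ["y1", "y2"], ["z1", "z2"]
T = [("x1", "y1", "z1"), ("x1", "y2", "z2"),
     ("x2", "y2", "z1"), ("x2", "y1", "z2")]

def has_perfect_matching(T, q):
    """True iff some q triples are pairwise disjoint (a perfect 3D matching)."""
    return any(len({v for t in combo for v in t}) == 3 * q
               for combo in combinations(T, q))

# Naive tripartite encoding: edges (x, y), (y, z), (z, x) for each triple.
edges = {e for (x, y, z) in T for e in ((x, y), (y, z), (z, x))}

def has_perfect_cycle_cover(edges):
    """Any cover by 3-cycles pairs each x-vertex with one y- and one z-vertex."""
    return any(all((x, y) in edges and (y, z) in edges and (z, x) in edges
                   for x, y, z in zip(X, ys, zs))
               for ys in permutations(Y) for zs in permutations(Z))

print(has_perfect_matching(T, q=2))   # False: no two triples are disjoint
print(has_perfect_cycle_cover(edges)) # True: x1->y1->z1->x1 and x2->y2->z2->x2
```

Here the 3-cycle on x2, y2, z2 stitches together edges contributed by three different triples, so a perfect cycle cover in the naive encoding does not certify a perfect 3D matching; this is exactly why the gadget construction of Figure 2 is needed.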
Conversely, suppose we have a perfect cover by short cycles. Note that the construction only has short cycles of lengths 3 and L, and no short cycle involves distinct vertices from two different gadgets. It is easy to see then that in a perfect cover, each gadget t_i contributes cycles according to the cases above: t_i ∈ M or t_i ∉ M. Hence, there exists a perfect 3D-Matching in the original instance.

4. SOLUTION APPROACHES BASED ON AN EDGE FORMULATION

In this section, we consider a formulation of the clearing problem as an ILP with one variable for each edge. This encoding is based on the following classical algorithm for solving the directed cycle cover problem with no cycle-length constraints. Given a market G = (V, E), construct a bipartite graph with one vertex for each agent, and one vertex for each item. Add an edge e_v with weight 0 between each agent v and its own item. At this point, the encoding is a perfect matching. Now, for each edge e = (v_i, v_j) in the original market, add an edge with weight w_e between agent v_i and the item of v_j. Perfect matchings in this encoding correspond exactly with cycle covers, since whenever an agent's item is taken, that agent must receive some other agent's item. It follows that the unrestricted clearing problem can be solved in polynomial time by finding a maximum-weight perfect matching. Figure 3 contains the bipartite graph encoding of the example market from Figure 1. The weight-0 edges are encoded by dashed lines, while the market edges are in bold.

Figure 3: Perfect matching encoding of the market in Figure 1.

Alternatively, we can solve the problem by encoding it as an ILP with one variable for each edge in the original market graph G. This ILP, given below, has the advantage that it can be extended naturally to deal with cycle-length constraints. Therefore, for the rest of this section, this is the approach we will pursue. With a binary variable x_e ∈ {0, 1} for each edge e, the ILP is to maximize

  Σ_{e ∈ E} w_e x_e

such that for all v_i ∈ V, the conservation constraint

  Σ_{e = (v_j, v_i) ∈ E} x_e = Σ_{e = (v_i, v_j) ∈ E} x_e

and capacity
constraint

  Σ_{e = (v_i, v_j) ∈ E} x_e ≤ 1

are satisfied.

If cycles are allowed to have length at most L, it is easy to see that we only need to make the following changes to the ILP. For each length-L path (throughout the paper, we do not include cycles in the definition of "path") p = (e_p1, e_p2, ..., e_pL), add a constraint

  x_{e_p1} + x_{e_p2} + ... + x_{e_pL} ≤ L − 1,

which precludes path p from being in any feasible solution. Unfortunately, in a market with only 1,000 patients, the number of length-3 paths is in excess of 400 million, and so we cannot even construct this ILP without running out of memory.

Therefore, we use a tree search with an incremental formulation approach. Specifically, we use CPLEX, though we add constraints as cutting planes during the tree search process. We begin with only a small subset of the constraints in the ILP. Since this ILP is small, CPLEX can solve its LP relaxation. We then check whether any of the missing constraints are violated by the fractional solution. If so, we generate a set of these constraints, add them to the ILP, and repeat. Even once all constraints are satisfied, there may be no integral solution matching the fractional upper bound, and even if there were, the LP solver might not find it. In these cases, CPLEX branches on a variable (we used CPLEX's default branching strategy) and generates one new search node corresponding to each of the children. At each node of the search tree that is visited, this process of solving the LP and adding constraints is repeated. Clearly, this approach yields an optimal solution once the tree search finishes.

We still need to explain the details of the constraint seeder (i.e., selecting which constraints to begin with) and the constraint generation (i.e., selecting which violated constraints to include). We describe these briefly in the next two subsections, respectively.

4.1 Constraint Seeder

The main constraint seeder we developed forbids any path of length L − 1 that does not have an edge closing the cycle from its head to its
tail. While it is computationally expensive to find these constraints, their addition focuses the search away from paths that cannot be in the final solution. We also tried seeding the LP with a random collection of constraints from the ILP.

4.2 Constraint Generation

We experimented with several constraint generators. In each, given a fractional solution, we construct the subgraph of edges with positive value. This graph is much smaller than the original graph, so we can perform the following computations efficiently.

In our first constraint generator, we simply search for length-L paths with value sum more than L − 1. For any such path, we restrict its sum to be at most L − 1. Note that if there is a cycle c with length |c| > L, it could contain as many as |c| violating paths. In our second constraint generator, we only add one constraint for such cycles: the sum of edges in the cycle can be at most ⌊|c|(L − 1)/L⌋. This generator made the algorithm slower, so we went in the other direction in developing our final generator. It adds one constraint per violating path p, and furthermore, it adds a constraint for each path with the same interior vertices (not counting the endpoints) as p. This improved the overall speed.

4.3 Experimental Performance

It turned out that even with these improvements, the edge formulation approach cannot clear a kidney exchange with 100 vertices in the time the cycle formulation (described later in Section 5) can clear one with 10,000 vertices. In other words, column generation based approaches turned out to be drastically better than constraint generation based approaches. Therefore, in the rest of the paper, we will focus on the cycle formulation and the column generation based approaches.

5. SOLUTION APPROACHES BASED ON A CYCLE FORMULATION

In this section, we consider a formulation of the clearing problem as an ILP with one variable for each cycle. This encoding is based on the following classical algorithm for
solving the directed cycle cover problem when cycles have length 2.\nGiven a market G = (V, E), construct a new graph on V with an edge of weight wc for each cycle c of length 2.\nIt is easy to see that matchings in this new graph correspond to cycle covers by length-2 cycles in the original market graph.\nHence, the market clearing problem with L = 2 can be solved in polynomial time by finding a maximum-weight matching.\nFigure 4: Maximum-weight matching encoding of the market in Figure 1.\nWe can generalize this encoding for arbitrary L. Let C(L) be the set of all cycles of G with length at most L. Then the following ILP finds the maximum-weight cycle cover by C(L) cycles: maximize Σc∈C(L) wc c, subject to Σc:v∈c c ≤ 1 for each vertex v, with c ∈ {0, 1} for each cycle c ∈ C(L).\n5.1 Edge vs Cycle Formulation\nIn this section, we consider the merits of the edge formulation and cycle formulation.\nThe edge formulation can be solved in polynomial time when there are no constraints on the cycle size.\nThe cycle formulation can be solved in polynomial time when the cycle size is at most 2.\nWe now consider the case of short cycles of length at most L, where L ≥ 3.\nOur tree search algorithms use the LP relaxation of these formulations to provide upper bounds on the optimal solution.\nThese bounds help prune subtrees and guide the search in the usual ways.\nThe LP relaxation of the cycle formulation weakly dominates the LP relaxation of the edge formulation.\nPROOF.\nConsider an optimal solution to the LP relaxation of the cycle formulation.\nWe show how to construct an equivalent solution in the edge formulation.\nFor each edge in the graph, set its value as the sum of values of all the cycles of which it is a member.\nAlso, define the value of a vertex in the same manner.\nBecause of the cycle constraints, the conservation and capacity constraints of the edge encoding are clearly satisfied.\nIt remains to show that none of the path constraints are violated.\nLet p be any length-L path in the graph.\nSince p has L-1 interior vertices (not counting the endpoints), the value sum of these interior vertices is at most L-1.\nNow, for any cycle c of length at most L, the number of
edges it has in p, which we denote by ec(p), is at most the number of interior vertices it has in p, which we denote by vc(p).\nHence, Σe∈p e = Σc∈C(L) ec(p) · c ≤ Σc∈C(L) vc(p) · c ≤ L-1, so the path constraint for p is satisfied.\nOPT (D).\nWe can verify this by checking that vOPT(D') satisfies the constraints of D not already in D', i.e., constraint c2.\nIt follows that OPT(P') = OPT(D') ≥ OPT(D) = OPT(P), and so vOPT(P') is provably an optimal solution for P, even though P' contains only a strict subset of the columns of P. Of course, it may turn out (unfortunately) that vOPT(D') is not feasible for D.\nThis can happen above if vOPT(D') = (v1 = 2, v2 = 0, v3 = 0, v4 = 2).\nAlthough we can still see that OPT(D') = OPT(D), in general we cannot prove this because D and P are too large to solve.\nInstead, because constraint c2 is violated, we add column c2 to P', update D', and repeat.\nThe problem of finding a violated constraint is called the pricing problem.\nHere, the price of a column (cycle in our setting) is the difference between its weight, and the dual-value sum of the cycle's vertices.\nIf any column of P has a positive price, its corresponding constraint is violated and we have not yet proven optimality.\nIn this case, we must continue generating columns to add to P'.\n5.2.1 Pricing Problem\nFor smaller instances, we can maintain an explicit collection of all feasible cycles.\nThis makes the pricing problem easy and efficient to solve: we simply traverse the collection of cycles, and look for cycles with positive price.\nWe can even find cycles with the most positive price, which are the ones most likely to improve the objective value of the restricted LP [1].\nThis approach does not scale, however.\nA market with 5000 patients can have as many as 400 million cycles of length at most 3 (see Table 2).\nThis is too many cycles to keep in memory.\nHence, for larger instances, we have to generate feasible cycles while looking for one with a positive price.\nWe do this using a depth-first search algorithm on the
market graph (see Figure 1).\nIn order to make this search faster, we explore vertices in non-decreasing value order, as these vertices are more likely to belong to cycles with positive weight.\nWe also use several pruning rules to determine if the current search path can lead to a positive weight cycle.\nFor example, at a given vertex in the search, we can prune based on the fact that every vertex we visit from this point onwards will have value at least as great as that of the current vertex.\nEven with these pruning rules, column generation is a bottleneck.\nHence, we also implemented the following optimizations.\nWhenever the search exhaustively proves that a vertex belongs to no positive price cycle, we mark the vertex and do not use it as the root of a depth-first search until its dual value decreases.\nIn this way, we avoid unnecessarily repeating our computational efforts from a previous column generation iteration.\nFinally, it can sometimes be beneficial for column generation to include several positive-price columns in one iteration, since it may be faster to generate a second column once the first one is found.\nHowever, we avoid this for the following reason.\nIf we attempt to find more positive-price columns than there are to be found, or if the columns are far apart in the search space, we end up having to generate and check a large part of the collection of feasible cycles.\nIn our experiments, we have seen this occur in markets with hundreds of millions of cycles, resulting in prohibitively expensive computation costs.\n5.2.2 Column Seeding\nEven if there is only a small gap to the master LP relaxation, column generation requires many iterations to improve the objective value of the restricted LP.\nEach of these iterations is expensive, as we must solve the pricing problem, and re-solve the restricted LP.\nHence, although we could begin with no columns in the restricted LP, it is much faster to seed the LP with enough columns that the optimal objective
value is not too far from the master LP.\nOf course, we cannot include so many columns that we run out of memory.\nWe experimented with several column seeders.\nIn one class of seeder, we use a heuristic to find an exchange, and then add the cycles of that exchange to the initial restricted LP.\nWe implemented two heuristics.\nThe first is a greedy algorithm: for each vertex in a random order, if it is uncovered, we attempt to include a cycle containing it and other uncovered vertices.\nThe other heuristic uses specialized maximum-weight matching code [16] to find an optimal cover by length-2 cycles.\nThese heuristics perform extremely well, especially taking into account the fact that they only add a small number of columns.\nFor example, Table 1 shows that an optimal cover by length-2 cycles has almost as much weight as the exchange with unrestricted cycle size.\nHowever, we have enough memory to include hundreds-of-thousands of additional columns and thereby get closer still to the upper bound.\nOur best column seeder constructs a random collection of feasible cycles.\nSince a market with 5000 patients can have as many as 400 million feasible cycles, it takes too long to generate and traverse all feasible cycles, and so we do not include a uniformly random collection.\nInstead, we perform a random walk on the market graph (see, for example, Figure 1), in which, after each step of the walk, we test whether there is an edge back onto our path that forms a feasible cycle.\nIf we find a cycle, it is included in the restricted LP, and we start a new walk from a random vertex.\nIn our experiments (see Section 6), we use this algorithm to seed the LP with 400,000 cycles.\nThis last approach outperforms the heuristic seeders described above.\nHowever, in our algorithm, we use a combination that takes the union of all columns from all three seeders.\nIn Figure 6, we compare the performance of the combination seeder against the combination without the random collection 
seeder.\nWe do not plot the performance of the algorithm without any seeder at all, because it can take hours to clear markets we can otherwise clear in a few minutes.\n5.2.3 Proving Optimality\nRecall that our aim is to find an optimal solution to the master LP relaxation.\nUsing column generation, we can prove that a restricted-primal solution is optimal once all columns have non-positive prices.\nUnfortunately though, our clearing problem has the so-called tailing-off effect [1, Section 6.3], in which, even though the restricted primal is optimal in hindsight, a large number of additional iterations are required in order to prove optimality (i.e., eliminate all positive-price columns).\nThere is no good general solution to the tailing-off effect.\nHowever, to mitigate this effect, we take advantage of the following problem-specific observation.\nRecall from Section 1.1 that, almost always, a maximum-weight exchange with cycles of length at most 3 has the same weight as an unrestricted maximum-weight exchange.\n(This does not mean that the solver for the unrestricted case will find a solution with short cycles, however.)\nFurthermore, the unrestricted clearing problem can be solved in polynomial time (recall Section 4).\nHence, we can efficiently compute an upper bound on the master LP relaxation, and, whenever the restricted primal achieves this upper bound, we have proven optimality without necessarily having to eliminate all positive-price columns!\nIn order for this to improve the running time of the overall algorithm, we need to be able to clear the unrestricted market in less time than it takes column generation to eliminate all the positive-price cycles.\nEven though the first problem is polynomial-time solvable, this is not trivial for large instances.\nFor example, for a market with 10,000 patients and 25 million edges, specialized maximum-weight matching code [16] was too slow, and CPLEX ran out of memory on the edge formulation encoding from Section 
4.\nTo make this idea work then, we used column generation to solve the edge formulation.\nThis involves starting with a small random subset of the edges, and then adding positive price edges one-by-one until none remain.\nWe conduct this secondary column generation not in the original market graph G, but in the perfect matching bipartite graph of Figure 3.\nWe do this so that we only need to solve the LP, not the ILP, since the integrality gap in the perfect matching bipartite graph is 1, i.e., there always exists an integral solution that achieves the fractional upper bound.\nThe resulting speedup to the overall algorithm is dramatic, as can be seen in Figure 6.\n5.2.4 Column Management\nIf the optimal value of the initial restricted LP P' is far from the master LP P, then a large number of columns are generated before the gap is closed.\nThis leads to memory problems on markets with as few as 4,000 patients.\nAlso, even before memory becomes an issue, the column generation iterations become slow because of the additional overhead of solving a larger LP.\nTo address these issues, we implemented a column management scheme to limit the size of the restricted LP.\nWhenever we add columns to the LP, we check to see if it contains more than a threshold number of columns.\nIf this is the case, we selectively remove columns until it is again below the threshold.\nAs we discussed earlier, only a tiny fraction of all the cycles will end up in the final solution.\nIt is unlikely that we delete such a cycle, and even if we do, it can always be generated again.\nOf course, we must not be too aggressive with the threshold, because doing so may offset the per-iteration performance gains by significantly increasing the number of iterations required to get a suitable column set in the LP at the same time.\nThere are some columns we never delete, for example those we have branched on (see Section 5.3.2), or those with a non-zero LP value.\nAmongst the rest, we delete those
with the lowest price, since those correspond to the dual constraints that are most satisfied.\nThis column management scheme works well and has enabled us to clear markets with 10,000 patients, as seen in Figure 6.\n5.3 Branch-and-Price Search for the ILP\nGiven a large market clearing problem, we can successfully solve its LP relaxation to optimality by using the column generation enhancements described above.\nHowever, the solutions we find are usually fractional.\nThus the next step involves performing a branch-and-price tree search [1] to find an optimal integral solution.\nBriefly, the idea of branch-and-price is as follows.\nWhenever we set a fractional variable to 0 or 1 (branch), both the master LP, and the restriction we are working with, are changed (constrained).\nBy default then, we need to perform column generation (go through the effort of pricing) at each node of the search tree to prove that the constrained restriction is optimal for the constrained master LP.\n(However, as discussed in Section 5.2.3, we compute the integral upper bound for the root node based on relaxing the cycle length constraint completely, and whenever any node's LP in the tree achieves that value, we do not need to continue pricing columns at that node.)\nFor the clearing problem with cycles of length at most 3, we have found that there is rarely a gap between the optimal integral and fractional solutions.\nThis means we can largely avoid the expensive per-node pricing step: whenever the constrained restricted LP has the same optimal value as its parent in the tree search, we can prove LP optimality, as in Section 5.2.3, without having to include any additional columns in the restricted LP.\nAlthough CPLEX can solve ILPs, it does not support branch-and-price (for example, because there can be problem-specific complications involving the interaction between the branching rule and the pricing problem).\nHence, we implemented our own branch-and-price algorithm, which explores the search
tree in depth-first order.\nWe also experimented with the A* node selection order [7, 2].\nHowever, this search strategy requires significantly more memory, which we found was better employed in making the column generation phase faster (see Section 5.2.2).\nThe remaining major components of the algorithm are described in the next two subsections.\n5.3.1 Primal Heuristics\nBefore branching on a fractional variable, we use primal heuristics to construct a feasible integral solution.\nThese solutions are lower bounds on the final optimal integral solution.\nHence, whenever a restricted fractional solution is no better than the best integral solution found so far, we prune the current subtree.\nA primal heuristic is effective if it is efficient and constructs tight lower bounds.\nWe experimented with two primal heuristics.\nThe first is a simple rounding algorithm [8]: include all cycles with fractional value at least 0.5, and then, ensuring feasibility, greedily add the remaining cycles.\nWhilst this heuristic is efficient, we found that the lower bounds it constructs rarely enable much pruning.\nWe also tried using CPLEX as a primal heuristic.\nAt any given node of the search tree, we can convert the restricted LP relaxation back to an ILP by reintroducing the integrality constraints.\nCPLEX has several built-in primal heuristics, which we can apply to this ILP.\nMoreover, we can use CPLEX's own tree search to find an optimal integral solution.\nIn general, this tree search is much faster than our own.\nIf CPLEX finds an integral solution that matches the fractional upper bound at the root node, we are done.\nOtherwise, either no such integral solution exists, or we do not yet have the right combination of cycles in the restricted LP.\nFor kidney-exchange markets, it is usually the second reason that applies (see Sections 5.2.2 and 5.2.4).\nHence, at some point in the tree search, once more columns have been generated as a result of branching, the CPLEX heuristic will
find an optimal integral solution.\nAlthough CPLEX tree search is faster than our own, it is not so fast that we can apply it to every node in our search tree.\nHence, we make the following optimizations.\nFirstly, we add a constraint that requires the objective value of the ILP to be as large as the fractional target.\nIf this is not the case, we want to abort and proceed to generate more columns with our branch-and-price search.\nSecondly, we limit the number of nodes in CPLEX's search tree.\nThis is because we have observed that, when no integral solution exists, CPLEX can take a very long time to prove that.\nFinally, we only apply the CPLEX heuristic at a node if it has a sufficiently different set of cycles from its parent.\nUsing CPLEX as a primal heuristic has a large impact because it makes the search tree smaller, so all the computationally expensive pricing work is avoided at nodes that are not generated in this smaller tree.\n5.3.2 Cycle Brancher\nWe experimented with two branching strategies, both of which select one variable per node.\nThe first strategy, branching by certainty, randomly selects a variable from those whose LP value is closest to 1.\nThe second strategy, branching by uncertainty, randomly selects a variable whose LP value is closest to 0.5.\nIn either case, two children of the node are generated corresponding to two subtrees, one in which the variable is set to 0, the other in which it is set to 1.\nOur depth-first search always chooses to explore first the subtree in which the value of the variable is closest to its fractional value.\nFor our clearing problem with cycles of length at most 3, we found branching by uncertainty to be superior, rarely requiring any backtracking.\n6.\nEXPERIMENTAL RESULTS\nAll our experiments were performed in Linux (Red Hat 9.0), using a Dell PC with a 3GHz Intel Pentium 4 processor, and 1GB of RAM.\nWherever we used CPLEX (e.g., in solving the LP and as a primal heuristic, as discussed in the previous
sections), we used CPLEX 10.010.\nFigure 6 shows the runtime performance of four clearing algorithms.\nFor each market size listed, we randomly generated 10 markets, and attempted to clear them using each of the algorithms.\nThe first algorithm is CPLEX on the full cycle formulation.\nThis algorithm fails to clear any markets with 1000 patients or more.\nAlso, its running time on markets smaller than this is significantly worse than the other algorithms.\nThe other algorithms are variations of the incremental column generation approach described in Section 5.\nWe begin with all of the optimizations described above switched on.\nThe combination of these optimizations allows us to easily clear markets with over 10,000 patients.\nIn each of the next two algorithms, we turn one of these optimizations off to highlight its effectiveness.\nFirst, we restrict the seeder so that it only begins with 10,000 cycles.\nThis setting is faster for smaller instances, since the LP relaxations are smaller, and faster to solve.\nHowever, at 5000 vertices, this effect starts to be offset by the additional column generation that must be performed.\nFor larger instances, this restricted seeder is clearly worse.\nFinally, we restore the seeder to its optimized setting, but this time, remove the optimality prover described in Section 5.2.3.\nAs in many column generation problems, the tailing-off effect is substantial.\nBy taking advantage of the properties of our problem, we manage to clear a market with 10,000 patients in about the same time it would otherwise have taken to clear a 6000 patient market.\n7.\nFIELDING THE TECHNOLOGY\nOur algorithm and implementation replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges, in December 2006.\nWe conduct a match run every two weeks, and the first transplants based on our solutions have already been conducted.\nWhile there are (for political\/inter-personal reasons) at least four
kidney exchanges in the US currently, everyone understands that a unified, unfragmented national exchange would save more lives.\nWe are in discussions with additional kidney exchanges that are interested in adopting our technology.\nThis way our technology (and the processes around it) will hopefully serve as a substrate that will eventually help in unifying the exchanges.\nAt least computational scalability is no longer an obstacle.\n8.\nCONCLUSION AND FUTURE RESEARCH\nIn this work we have developed the most scalable exact algorithms for barter exchanges to date, with special focus on the upcoming national kidney-exchange market in which patients with kidney disease will be matched with compatible donors by swapping their own willing but incompatible donors.\nWith over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.\nOur work presents the first algorithm capable of clearing these markets on a nationwide scale.\nIt optimally solves the kidney exchange clearing problem with 10,000 donor-donee pairs.\nThus there is no need to resort to approximate solutions.\nThe best prior technology (vanilla CPLEX) cannot handle instances beyond about 900 donor-donee pairs because it runs out of memory.\nThe key to our improvement is incremental problem formulation.\nWe adapted two paradigms for the task: constraint generation and column generation.\nFor each, we developed a host of techniques that substantially improve both runtime and memory usage.\nSome of the techniques use domain-specific observations while others are domain independent.\nWe conclude that column generation scales dramatically better than constraint generation.\nFor column generation in the LP, our enhancements include pricing techniques, column seeding techniques, techniques for proving optimality without having to bring in all positive-price columns (and using another
column-generation process in a different formulation to do so), and column removal techniques.\nFor the branch-and-price search in the integer program that surrounds the LP, our enhancements include primal heuristics, and we also compared branching strategies.\nUndoubtedly, further parameter tuning and perhaps additional speed improvement techniques could be used to make the algorithm even faster.\nOur algorithm also supports several generalizations, as desired by real-world kidney exchanges.\nThese include multiple alternative donors per patient, weighted edges in the market graph (to encode differences in expected life years added based on degrees of compatibility, patient age and weight, etc., as well as the probability of last-minute incompatibility), \"angel-triggered chains\" (chains of transplants triggered by altruistic donors who do not have patients associated with them, each chain ending with a left-over kidney), and additional issues (such as different scores for saving different altruistic donors or left-over kidneys for future match runs based on blood type, tissue type, and likelihood that the organ would not disappear from the market by the donor getting second thoughts).\nBecause we use an ILP methodology, we can also support a variety of side constraints, which often play an important role in markets in practice [19].\nWe can also support forcing part of the allocation, for example, \"This acutely sick teenager has to get a kidney if possible.\"\nOur work has treated the kidney exchange as a batch problem with full information (at least in the short run, kidney exchanges will most likely continue to run in batch mode every so often).\nTwo important directions for future work are to explicitly address both online and limited-information aspects of the problem.\nThe online aspect is that donees and donors will be arriving into the system over time, and it may be best to not execute the myopically optimal exchange now, but rather save part of the
current market for later matches.\nIn fact, some work has been done on this in certain restricted settings [22, 24].\nThe limited-information aspect is that even in batch mode, the graph provided as input is not completely correct: a number of donor-donee pairs believed to be compatible turn out to be incompatible when more expensive last-minute tests are performed.\nTherefore, it would be desirable to perform an optimization with this in mind, such as outputting a low-degree \"robust\" subgraph to be tested before the final match is produced, or to output a contingency plan in case of failure.\nWe are currently exploring a number of questions along these lines but there is certainly much more to be done.","keyphrases":["barter","exchang","barter-exchang market","transplant","column gener","match","match","kidnei","market characterist","instanc gener","solut approach","edg formul","cycl formul","branch-and-price"],"prmu":["P","P","P","P","P","P","P","U","M","M","U","M","R","U"]} {"id":"H-19","title":"Analyzing Feature Trajectories for Event Detection","abstract":"We consider the problem of analyzing word trajectories in both time and frequency domains, with the specific goal of identifying important and less-reported, periodic and aperiodic words. A set of words with identical trends can be grouped together to reconstruct an event in a completely unsupervised manner. The document frequency of each word across time is treated like a time series, where each element is the document frequency - inverse document frequency (DFIDF) score at one time point. 
In this paper, we 1) first applied spectral analysis to categorize features for different event characteristics: important and less-reported, periodic and aperiodic; 2) modeled aperiodic features with Gaussian density and periodic features with Gaussian mixture densities, and subsequently detected each feature's burst by the truncated Gaussian approach; 3) proposed an unsupervised greedy event detection algorithm to detect both aperiodic and periodic events. All of the above methods can be applied to time series data in general. We extensively evaluated our methods on the 1-year Reuters News Corpus [3] and showed that they were able to uncover meaningful aperiodic and periodic events.","lvl-1":"Analyzing Feature Trajectories for Event Detection Qi He qihe@pmail.ntu.edu.sg Kuiyu Chang ASKYChang@ntu.edu.sg Ee-Peng Lim ASEPLim@ntu.edu.sg School of Computer Engineering Nanyang Technological University Block N4, Nanyang Avenue, Singapore 639798 ABSTRACT We consider the problem of analyzing word trajectories in both time and frequency domains, with the specific goal of identifying important and less-reported, periodic and aperiodic words.\nA set of words with identical trends can be grouped together to reconstruct an event in a completely unsupervised manner.\nThe document frequency of each word across time is treated like a time series, where each element is the document frequency - inverse document frequency (DFIDF) score at one time point.\nIn this paper, we 1) first applied spectral analysis to categorize features for different event characteristics: important and less-reported, periodic and aperiodic; 2) modeled aperiodic features with Gaussian density and periodic features with Gaussian mixture densities, and subsequently detected each feature's burst by the truncated Gaussian approach; 3) proposed an unsupervised greedy event detection algorithm to detect both aperiodic and periodic events.\nAll of the above methods can be applied to time series data in
general.\nWe extensively evaluated our methods on the 1-year Reuters News Corpus [3] and showed that they were able to uncover meaningful aperiodic and periodic events.\nCategories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval.\nGeneral Terms: Algorithms, Experimentation.\n1.\nINTRODUCTION\nThere are more than 4,000 online news sources in the world.\nManually monitoring all of them for important events has become difficult or practically impossible.\nIn fact, the topic detection and tracking (TDT) community has for many years been trying to come up with a practical solution to help people monitor news effectively.\nUnfortunately, the holy grail is still elusive, because the vast majority of TDT solutions proposed for event detection [20, 5, 17, 4, 21, 7, 14, 10] are either too simplistic (based on cosine similarity [5]) or impractical due to the need to tune a large number of parameters [9].\nThe ineffectiveness of current TDT technologies can be easily illustrated by subscribing to any of the many online news alerts services such as the industry-leading Google News Alerts [2], which generates more than 50% false alarms [10].\nAs further proof, portals like Yahoo take a more pragmatic approach by requiring all machine generated news alerts to go through a human operator for confirmation before sending them out to subscribers.\nInstead of attacking the problem with variations of the same hammer (cosine similarity and TFIDF), a fundamental understanding of the characteristics of news stream data is necessary before any major breakthroughs can be made in TDT.\nThus in this paper, we look at news stories and feature trends from the perspective of analyzing a time-series word signal.\nPrevious work like [9] has attempted to reconstruct an event with its representative features.\nHowever, in many predictive event detection tasks (i.e., retrospective event detection), there is a vast set of potential features only for a
fixed set of observations (i.e., the obvious bursts).\nOf these features, often only a small number are expected to be useful.\nIn particular, we study the novel problem of analyzing feature trajectories for event detection, borrowing a well-known technique from signal processing: identifying distributional correlations among all features by spectral analysis.\nTo evaluate our method, we subsequently propose an unsupervised event detection algorithm for news streams.\nFigure 1: Feature correlation (DFIDF:time) between a) Easter and April (aperiodic event) b) Unaudited and Ended (periodic event).\nAs an illustrative example, consider the correlation between the words Easter and April from the Reuters Corpus (the default dataset for all examples in this paper).\nFrom the plot of their normalized DFIDF in Figure 1(a), we observe the heavy overlap between the two words circa 04\/1997, which means they probably both belong to the same event during that time (Easter feast).\nIn this example, the hidden event Easter feast is a typical important aperiodic event over 1-year data.\nAnother example is given by Figure 1(b), where both the words Unaudited and Ended exhibit similar behaviour over periods of 3 months.\nThese two words actually originated from the same periodic event, net income-loss reports, which are released quarterly by publicly listed companies.\nOther observations drawn from Figure 1 are: 1) the bursty period of April is much longer than Easter, which suggests that April may exist in other events during the same period; 2) Unaudited has a higher average DFIDF value than Ended, which indicates Unaudited to be more representative for the underlying event.\nThese two examples are but the tip of the iceberg among all word trends and correlations hidden in a news stream like Reuters.\nIf a
large number of them can be uncovered, it could significantly aid TDT tasks.\nIn particular, it indicates the significance of mining correlating features for detecting corresponding events.\nTo summarize, we postulate that: 1) An event is described by its representative features.\nA periodic event has a list of periodic features and an aperiodic event has a list of aperiodic features; 2) Representative features from the same event share similar distributions over time and are highly correlated; 3) An important event has a set of active (largely reported) representative features, whereas an unimportant event has a set of inactive (less-reported) representative features; 4) A feature may be included by several events with overlaps in time frames.\nBased on these observations, we can either mine representative features given an event or detect an event from a list of highly correlated features.\nIn this paper, we focus on the latter, i.e., how correlated features can be uncovered to form an event in an unsupervised manner.\n1.1 Contributions\nThis paper has three main contributions:\n• To the best of our knowledge, our approach is the first to categorize word features for heterogeneous events.\nSpecifically, every word feature is categorized into one of the following five feature types based on its power spectrum strength and periodicity: 1) HH (high power and high\/long periodicity): important aperiodic events, 2) HL (high power and low periodicity): important periodic events, 3) LH (low power and high periodicity): unimportant aperiodic events, 4) LL (low power and low periodicity): non-events, and 5) SW (stopwords), a higher power and periodicity subset of LL comprising stopwords, which contains no information.\n• We propose a simple and effective mixture density-based approach to model and detect feature bursts.\n• We come up with an unsupervised event detection algorithm to detect both aperiodic and periodic events.\nOur algorithm has been evaluated on
a real news stream to show its effectiveness.\n2.\nRELATED WORK This work is largely motivated by a broader family of problems collectively known as Topic Detection and Tracking (TDT) [20, 5, 17, 4, 21, 7, 14, 10].\nMoreover, most TDT research so far has been concerned with clustering\/classifying documents into topic types, identifying novel sentences [6] for new events, etc., without much regard to analyzing the word trajectory with respect to time.\nSwan and Allan [18] first attempted using co-occuring terms to construct an event.\nHowever, they only considered named entities and noun phrase pairs, without considering their periodicities.\nOn the contrary, our paper considers all of the above.\nRecently, there has been significant interest in modeling an event in text streams as a burst of activities by incorporating temporal information.\nKleinberg``s seminal work described how bursty features can be extracted from text streams using an infinite automaton model [12], which inspired a whole series of applications such as Kumar``s identification of bursty communities from Weblog graphs [13], Mei``s summarization of evolutionary themes in text streams [15], He``s clustering of text streams using bursty features [11], etc..\nNevertheless, none of the existing work specifically identified features for events, except for Fung et al. [9], who clustered busty features to identify various bursty events.\nOur work differs from [9] in several ways: 1) we analyze every single feature, not only bursty features; 2) we classify features along two categorical dimensions (periodicity and power), yielding altogether five primary feature types; 3) we do not restrict each feature to exclusively belong to only one event.\nSpectral analysis techniques have previously been used by Vlachos et al. 
[19] to identify periodicities and bursts from query logs.\nTheir focus was on detecting multiple periodicities from the power spectrum graph, which were then used to index words for query-by-burst search.\nIn this paper, we use spectral analysis to classify word features along two dimensions, namely periodicity and power spectrum, with the ultimate goal of identifying both periodic and aperiodic bursty events.\n3.\nDATA REPRESENTATION Let T be the duration\/period (in days) of a news stream, and F represents the complete word feature space in the classical static Vector Space Model (VSM).\n3.1 Event Periodicity Classification Within T, there may exist certain events that occur only once, e.g., Tony Blair elected as Prime Minister of U.K., and other recurring events of various periodicities, e.g., weekly soccer matches.\nWe thus categorize all events into two types: aperiodic and periodic, defined as follows.\nDefinition 1.\n(Aperiodic Event) An event is aperiodic within T if it only happens once.\nDefinition 2.\n(Periodic Event) If events of a certain event genre occur regularly with a fixed periodicity P \u2264 T\/2 , we say that this particular event genre is periodic, with each member event qualified as a periodic event.\nNote that the definition of aperiodic is relative, i.e., it is true only for a given T, and may be invalid for any other T > T. 
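Definitions 1 and 2 can be sketched as a small helper. This is our own illustrative code, not part of the paper; the function name and the convention of passing None for a one-off event are our assumptions.

```python
import math

def classify_event_genre(period_days, window_days):
    """Classify an event genre within an observation window of
    `window_days` days (Definitions 1 and 2).

    period_days: fixed recurrence interval P of the genre (in days),
    or None if the event happens only once within the window.
    """
    # Periodic requires a fixed periodicity P <= ceil(T/2); anything
    # observed once, or recurring more slowly, is aperiodic for this T.
    if period_days is not None and period_days <= math.ceil(window_days / 2):
        return "periodic"
    return "aperiodic"

# Weekly soccer matches are periodic within a 1-year window;
# a one-off election is aperiodic.
print(classify_event_genre(7, 365))     # periodic
print(classify_event_genre(None, 365))  # aperiodic
```

Note how the classification depends on T: an annual event (P = 365) is aperiodic for a 365-day window but becomes periodic once the window covers two years.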
For example, the event Christmas feast is aperiodic for T ≤ 365 but periodic for T ≥ 730.

3.2 Representative Features

Intuitively, an event can be described very concisely by a few discriminative and representative word features, and vice versa; e.g., hurricane, sweep, and strike could be representative features of a Hurricane genre event. Likewise, a set of strongly correlated features could be used to reconstruct an event description, assuming that strongly correlated features are representative. The representation vector of a word feature is defined as follows.

Definition 3. (Feature Trajectory) The trajectory of a word feature f can be written as the sequence y_f = [y_f(1), y_f(2), ..., y_f(T)], where each element y_f(t) is a measure of feature f at time t, defined here as the normalized DFIDF score²

y_f(t) = (DF_f(t) / N(t)) × log(N / DF_f),

where DF_f(t) is the number of documents (local DF) containing feature f on day t, DF_f is the total number of documents (global DF) containing feature f over T, N(t) is the number of documents on day t, and N is the total number of documents over T.

² We normalize y_f(t) as y_f(t) = y_f(t) / Σ_{i=1}^{T} y_f(i) so that it can be interpreted as a probability.

4. IDENTIFYING FEATURES FOR EVENTS

In this section, we show how representative features can be extracted for (un)important or (a)periodic events.

4.1 Spectral Analysis for Dominant Period

Given a feature f, we decompose its feature trajectory y_f = [y_f(1), y_f(2), ..., y_f(T)] into the sequence of T complex numbers [X_1, ..., X_T] via the discrete Fourier transform (DFT):

X_k = Σ_{t=1}^{T} y_f(t) e^{-(2πi/T)(k-1)t},  k = 1, 2, ..., T.

The DFT represents the original time series as a linear combination of complex sinusoids, as illustrated by the inverse discrete Fourier transform (IDFT):

y_f(t) = (1/T) Σ_{k=1}^{T} X_k e^{(2πi/T)(k-1)t},  t = 1, 2, ..., T,

where the Fourier coefficient X_k denotes the amplitude of the sinusoid with frequency k/T.

The original trajectory can be reconstructed from just the dominant frequencies, which can be determined from the power spectrum using the popular periodogram estimator. The periodogram is the sequence of squared magnitudes of the Fourier coefficients, ||X_k||², k = 1, 2, ..., ⌈T/2⌉, which indicates the signal power at frequency k/T in the spectrum. From the power spectrum, the dominant period is chosen as the inverse of the frequency with the highest power, as follows.

Definition 4. (Dominant Period) The dominant period (DP) of a given feature f is P_f = T / argmax_k ||X_k||².

Accordingly, we have

Definition 5. (Dominant Power Spectrum) The dominant power spectrum (DPS) of a given feature f is S_f = ||X_k||², with ||X_k||² ≥ ||X_j||² for all j ≠ k.

4.2 Categorizing Features

The DPS of a feature trajectory is a strong indicator of its activeness at the specified frequency; the higher the DPS, the more likely the feature is to be bursty. Combining DPS with DP, we therefore categorize all features into four types:

• HH: high S_f, aperiodic or long-term periodic (P_f > ⌈T/2⌉);
• HL: high S_f, short-term periodic (P_f ≤ ⌈T/2⌉);
• LH: low S_f, aperiodic or long-term periodic;
• LL: low S_f, short-term periodic.

The boundary between long-term and short-term periodicity is set to ⌈T/2⌉. Distinguishing between a high and a low DPS, however, is not straightforward; we tackle this problem later.

Properties of Different Feature Sets. To better understand the properties of HH, HL, LH, and LL, we select four features, Christmas, soccer, DBS, and your, as illustrative examples. Since the boundary between high and low power spectrum is unclear, the chosen examples cover a relatively wide range of power spectrum values. Figure 2(a) shows the DFIDF trajectory
for Christmas with a distinct burst around Christmas day. For the 1-year Reuters dataset, Christmas is classified as a typical aperiodic event with P_f = 365 and S_f = 135.68, as shown in Figure 2(b). Clearly, the value of S_f = 135.68 is reasonable for a well-known bursty event like Christmas.

Figure 2: Feature Christmas with relatively high S_f and long-term P_f: (a) DFIDF over time; (b) power spectrum, with P = 365 and S = 135.68.

The DFIDF trajectory for soccer is shown in Figure 3(a), from which we can observe a regular burst every 7 days, verified by its computed value of P_f = 7, as shown in Figure 3(b). Using the domain knowledge that more soccer matches are played every Saturday, which makes soccer a typical and heavily reported periodic event, we consider its value of S_f = 155.13 to be high.

Figure 3: Feature soccer with relatively high S_f and short-term P_f: (a) DFIDF over time; (b) power spectrum, with P = 7 and S = 155.13.

From the DFIDF trajectory for DBS in Figure 4(a), we can immediately deduce that DBS is an infrequent word with a trivial burst on 08/17/1997 corresponding to DBS Land Raffles Holdings plans. This is confirmed by the long period P_f = 365 and low power S_f = 0.3084 shown in Figure 4(b). Moreover, since this aperiodic event is reported in only a few news stories over a very short time of a few days, we say that its low power value of S_f = 0.3084 is representative of unimportant events.

Figure 4: Feature DBS with relatively low S_f and long-term P_f: (a) DFIDF over time; (b) power spectrum, with P = 365 and S = 0.3084.

The most confusing example is shown in Figure 5 for the word feature your, which looks very similar to the graph for soccer in Figure 3. At first glance, we may be tempted to group both your and soccer into the same category of HL or LL, since both distributions look similar and have the same dominant period of approximately a week. However, further analysis indicates that the periodicity of your is due to the difference in document counts between weekdays (average 2,919 per day) and weekends³ (average 479 per day). One would have expected the periodicity of a stopword like your to be a day. Moreover, despite our DFIDF normalization, the weekday/weekend imbalance still prevailed; stopwords occur 4 times more frequently on weekends than on weekdays. Thus, the DPS remains the only distinguishing factor between your (S_f = 9.42) and soccer (S_f = 155.13). However, it would be very dangerous to simply conclude that a power value of S = 9.42 corresponds to a stopword feature.

³ The weekends here also include public holidays falling on weekdays.

Figure 5: Feature your, easily confused with feature soccer: (a) DFIDF over time; (b) power spectrum, with P = 7 and S = 9.42.

Before introducing our solution to this problem, let's look at another LL example, shown in Figure 6 for beenb, which is actually a confirmed typo. We therefore classify beenb as a noisy feature that does not contribute to any event. Clearly, the trajectory of your is very different from that of beenb, which means that the former has to be considered separately.

Figure 6: Feature beenb with relatively low S_f and short-term P_f: (a) DFIDF over time; (b) power spectrum, with P = 8 and S = 1.20E-05.

Stop Words (SW) Feature Set. Based on the above analysis, we realize that there must be another feature set between HL and LL that corresponds to the set of stopwords. Features from this set have a moderate DPS and a low but known dominant period. Since it is hard to distinguish this feature set from HL and LL on the basis of DPS alone, we introduce another factor, the average DFIDF. As shown in Figure 5, a feature like your usually has a lower DPS than an HL feature like soccer, but a much higher average DFIDF than an LL noisy feature such as beenb. Since such properties are usually characteristic of stopwords, we group features like your into the newly defined stopword (SW) feature set. Since setting the DPS and average-DFIDF thresholds for identifying stopwords is more of an art than a science, we propose a heuristic algorithm, HS (Algorithm 1). The basic idea is to use only news stories from weekdays to identify stopwords. The SW set is initially seeded with a small set of 29 popular stopwords utilized by the Google search engine.

Algorithm 1 Heuristic Stopword detection (HS)
Input: seed SW set; weekday trajectories of all words
1: From the seed set SW, compute the maximum DPS as UDPS, the maximum average DFIDF as UDFIDF, and the minimum average DFIDF as LDFIDF.
2: for f_i ∈ F do
3:   Compute the DFT of f_i.
4:   if S_{f_i} ≤ UDPS and the average DFIDF of f_i ∈ [LDFIDF, UDFIDF] then
5:     f_i → SW
6:     F = F − f_i
7:   end if
8: end for

Overview of Feature Categorization. After the SW set is generated, all stopwords are removed from F. We then set the boundary between high and low DPS to the upper bound of the SW set's DPS. An overview of all five feature sets is shown in Figure 7.

Figure 7: The five feature sets for events.

5. IDENTIFYING BURSTS FOR FEATURES

Since only features from HH, HL, and LH are meaningful and can potentially be representative of events, we prune all features classified as LL or SW. In this section, we describe how bursts can be identified from the remaining features. Unlike Kleinberg's burst identification algorithm [12], we can identify both significant and trivial bursts without the need to set any parameters.

5.1 Detecting Aperiodic Features' Bursts

For each feature in HH and HL, we truncate its trajectory by keeping only the bursty period, which is modeled with a Gaussian distribution. For example, Figure 8 shows the word feature Iraq with a burst circa 09/06/1996 being modeled as a Gaussian; its bursty period is defined by [μ_f − σ_f, μ_f + σ_f], as shown in Figure 8(b).

Figure 8: Modeling Iraq's time series as a truncated Gaussian with μ = 09/06/1996 and σ = 6.26: (a) original DFIDF over time; (b) the identified burst [μ − σ, μ + σ].

5.2 Detecting Periodic Features' Bursts

Since we have computed the DP of a periodic feature f, we can easily model its periodic feature trajectory y_f using a mixture of K = ⌊T/P_f⌋ Gaussians:

f(y_f = y_f(t) | θ_f) = Σ_{k=1}^{K} α_k · (1/√(2πσ_k²)) · e^{−(y_f(t) − μ_k)² / (2σ_k²)},

where the parameter set θ_f = {α_k, μ_k, σ_k}_{k=1}^{K} comprises:

• α_k, the probability of assigning y_f(t) to the k-th Gaussian, with α_k > 0 for all k ∈ [1, K] and Σ_{k=1}^{K} α_k = 1;
• μ_k and σ_k, the mean and standard deviation of the k-th Gaussian.

The well-known Expectation Maximization (EM) algorithm [8] is used to compute the mixing proportions α_k as well as the individual Gaussian density parameters μ_k and σ_k. Each Gaussian represents one periodic event and is modeled in the same way as described in Section 5.1.

6. EVENTS FROM FEATURES

After identifying and modeling bursts for all features, the next task is to paint a picture of each event with a potential set of representative features.

6.1 Feature Correlation

If two features f_i and f_j are representative of the same event, they must satisfy the following necessary conditions:
1. f_i and f_j are identically distributed: y_{f_i} ~ y_{f_j};
2. f_i and f_j have a high document overlap.

Measuring Feature Distribution Similarity. We measure the similarity between two features f_i and f_j using the discrete KL divergence, defined as follows.

Definition 6. (Feature Similarity) KL(f_i, f_j) = max(KL(f_i || f_j), KL(f_j || f_i)), where

KL(f_i || f_j) = Σ_{t=1}^{T} f(y_{f_i}(t) | θ_{f_i}) log [ f(y_{f_i}(t) | θ_{f_i}) / f(y_{f_j}(t) | θ_{f_j}) ].   (1)

Since the KL divergence is not symmetric, we define the similarity between f_i and f_j as the maximum of KL(f_i || f_j) and KL(f_j || f_i). The similarity between two aperiodic features can be computed using a closed form of the KL divergence [16]; the discrete formula of Eq. 1 is employed to compute the similarity between two periodic features. Next, we define the overall similarity among a set of features R using the maximum inter-feature KL divergence value, as follows.

Definition 7. (Set Similarity) KL(R) = max_{f_i, f_j ∈ R} KL(f_i, f_j).

Document Overlap. Let M_i be the set of all documents containing feature f_i. Given two features f_i and f_j, the set of documents containing both features is M_i ∩ M_j. Intuitively, the larger |M_i ∩ M_j| is, the more likely f_i and f_j are to be highly correlated. We define the degree of document overlap between two features f_i and f_j as follows.

Definition 8. (Feature DF Overlap) d(f_i, f_j) = |M_i ∩ M_j| / min(|M_i|, |M_j|).

Accordingly, the DF overlap among a set of features R is also defined.

Definition 9. (Set DF Overlap) d(R) = min_{f_i, f_j ∈ R} d(f_i, f_j).

6.2 Unsupervised Greedy Event Detection

We use features from HH to detect important aperiodic events, features from LH to detect less-reported/unimportant aperiodic events, and features from HL to detect periodic events. All of them share the same algorithm. Given a bursty feature f_i ∈ HH, the goal is to find highly correlated features from HH; the set of features similar to f_i can then collectively describe an event. Specifically, we need to find a subset R_i of HH that minimizes the following cost function:

C(R_i) = KL(R_i) / ( d(R_i) · Σ_{f_j ∈ R_i} S_{f_j} ),  R_i ⊂ HH.   (2)

The underlying event e (associated with the burst of f_i) can be represented by R_i as

y(e) = Σ_{f_j ∈ R_i} [ S_{f_j} / Σ_{f_u ∈ R_i} S_{f_u} ] · y_{f_j}.   (3)

The burst analysis for event e is exactly the same as for a feature trajectory.

The cost in Eq. 2 can be minimized using our unsupervised greedy (UG) event detection algorithm, described in Algorithm 2.

Algorithm 2 Unsupervised Greedy event detection (UG)
Input: HH; document index for each feature
1: Sort features in descending DPS order: S_{f_1} ≥ S_{f_2} ≥ ... ≥ S_{f_|HH|}.
2: k = 0.
3: for f_i ∈ HH do
4:   k = k + 1.
5:   Init: R_i ← {f_i}, C(R_i) = 1/S_{f_i}, and HH = HH − f_i.
6:   while HH is not empty do
7:     m = argmin_m C(R_i ∪ f_m).
8:     if C(R_i ∪ f_m) < C(R_i) then
9:       R_i ← R_i ∪ f_m and HH = HH − f_m.
10:    else
11:      break.
12:    end if
13:  end while
14:  Output e_k as in Eq. 3.
15: end for

The UG algorithm allows a feature to be contained in multiple events, so that we can detect several events happening at the same time. Furthermore, trivial events containing only year/month features (e.g., an event containing the single feature Aug could be identified over a 1-year news stream) can be removed, although such events have inherently high cost and should already be ranked very low. Note that our UG algorithm requires only one data-dependent parameter, the boundary between high and low power spectrum, to be set once, and this parameter can be easily estimated using the HS algorithm (Algorithm 1).

7. EXPERIMENTS

In this section, we study the performance of our feature categorization method and event detection algorithm. We first introduce the dataset and experimental setup, then subjectively evaluate the categorization of features into HH, HL, LH, LL, and SW, and finally study the (a)periodic event detection problem with Algorithm 2.

7.1 Dataset and Experimental Setup

The Reuters
Corpus contains 806,791 English news stories from 08/20/1996 to 08/19/1997 at a day resolution. Version 2 of the open-source Lucene software [1] was used to tokenize the news text content and generate the document-word vectors. In order to preserve the time-sensitive past/present/future tenses of verbs and the differences between lowercase nouns and uppercase named entities, no stemming was done. Since dynamic stopword removal is one of the functionalities of our method, no stopwords were removed. We did, however, remove non-English characters, after which the number of word features amounted to 423,433. All experiments were implemented in Java and conducted on a 3.2 GHz Pentium 4 PC running Windows 2003 Server with 1 GB of memory.

7.2 Categorizing Features

We downloaded 34 well-known stopwords utilized by the Google search engine as our seed training features: a, about, an, are, as, at, be, by, de, for, from, how, in, is, it, of, on, or, that, the, this, to, was, what, when, where, who, will, with, la, com, und, en, and www. We excluded the last five stopwords as they are uncommon in news stories. By analyzing news stories over the 259 weekdays only, we computed the upper bound of the power spectrum for stopwords as 11.18, with the corresponding average DFIDF ranging from 0.1182 to 0.3691. Any feature f satisfying S_f ≤ 11.18 and 0.1182 ≤ average DFIDF of f ≤ 0.3691 over weekdays is considered a stopword. In this manner, 470 stopwords were found and removed, as visualized in Figure 9. Some detected stopwords are A (P = 65, S = 3.36, avg. DFIDF = 0.3103), At (P = 259, S = 1.86, avg. DFIDF = 0.1551), GMT (P = 130, S = 6.16, avg. DFIDF = 0.1628), and much (P = 22, S = 0.80, avg. DFIDF = 0.1865). After the removal of these stopwords, the weekday and weekend news distributions are more or less matched, and in the ensuing experiments we make use of the full corpus (weekdays and weekends).

The upper-bound power spectrum value of 11.18 from stopword training was selected as the boundary between the high and low power spectrum. The boundary between high and low periodicity was set to ⌈365/2⌉ = 183. All 422,963 (423,433 − 470) word features were categorized into four feature sets, as shown in Figure 10: HH (69 features), HL (1,087 features), LH (83,471 features), and LL (338,806 features).

Figure 9: Distribution of SW (stopwords) over the HH, HL, LH, and LL regions of the S(f)-P(f) plane.

Figure 10: Distribution of categorized features over the four quadrants of the S(f)-P(f) plane (shading in log scale).

In Figure 10, each gray level denotes the relative density of features in a square region, measured by log10(1 + D_k), where D_k is the number of features within the k-th square region. From the figure, we can make the following observations:

1. Most features have low S and are easily distinguishable from features having a much higher S, which allows us to separate important (a)periodic events from trivial events by selecting features with high S.
2. Features in the HH and LH quadrants are aperiodic and are nicely separated (big horizontal gap) from the periodic features, which allows aperiodic and periodic events to be detected independently and reliably.
3. The (vertical) boundary between high and low power spectrum is not as clear-cut, and the exact value will be application specific.

By checking the scatter distribution of the SW features over HH, HL, LH, and LL, as shown in Figure 9, we found that 87.02% (409/470) of the detected stopwords originated from LL. The LL classification and high average DFIDF scores of stopwords agree with the generally accepted notion that stopwords are equally frequent over all time. Therefore, setting the boundary between high and low power spectrum using the upper bound S_f of the SW set is a reasonable heuristic.

7.3 Detecting Aperiodic Events

We evaluate our two hypotheses: 1) important aperiodic events can be defined by a set of HH features, and 2) less-reported aperiodic events can be defined by a set of LH features. Since no benchmark news streams exist for event detection (TDT datasets are not proper streams), we evaluate the quality of the automatically detected events by comparing them to events manually confirmed by searching through the corpus. Among the 69 HH features, we detected 17 important aperiodic events, shown in Table 1 (e1-e17). Note that the entire identification took less than 1 second, after removing events containing only the month feature. Among the 17 events, other than the overlaps between e3 and e4 (both describing the same hostage event) and between e11 and e16 (both about company reports), the identified events are extremely accurate and correspond very well to the major events of the period, for example, the defeat of Bob Dole, the election of Tony Blair, and the missile attack on Iraq. Recall that selecting the features for one event should minimize the cost in Eq. 2, so that 1) the selected features do not span different events, and 2) not all features relevant to an event will be selected; e.g., the feature Clinton is representative of e12, but since Clinton relates to many other events, its time-domain signal is far different from those of other representative features like Dole and Bob. The number of documents of a detected event is roughly estimated by the number of indexed documents containing the representative features. We can see that all 17 important aperiodic events are popularly reported events.

After 742 minutes of computation time, we detected 23,525 less-reported aperiodic events from the 83,471 LH features. Table 1 lists the top 5 detected aperiodic events (e18-e22) with respect to the cost. We found that these 5 events are actually very trivial events with only a few news reports, and are usually subsumed by some larger topics; for example, e22 is one of the rescue events in an airplane-hijack topic. One advantage of our UG algorithm for discovering less-reported aperiodic events is that we are able to precisely detect the true event period.

7.4 Detecting Periodic Events

Among the 1,087 HL features, 330 important periodic events were detected within 10 minutes of computing time. Table 1 lists the top 5 detected periodic events with respect to the cost (e23-e27). All of the detected periodic events are indeed valid and correspond to real-life periodic events. The GMM model is able to detect and estimate the bursty periods nicely, although it cannot distinguish the slight difference between every Monday-Friday and all weekdays, as shown in e23. We also notice that e26 is actually a subset of e27 (soccer games), which is acceptable since the Sheffield league results are announced independently every weekend.

8. CONCLUSIONS

This paper took a whole new perspective of analyzing feature trajectories as time-domain signals. By considering word document frequencies in both the time and frequency domains, we were able to derive many new characteristics about news streams that were
previously unknown, e.g., the different distributions of stopwords during weekdays and weekends. For the first time in the area of TDT, we applied a systematic approach to automatically detect important and less-reported, periodic and aperiodic events. The key idea of our work lies in the observations that (a)periodic events have (a)periodic representative features and that (un)important events have (in)active representative features, differentiated by their power spectra and time periods. To address the real event detection problem, a simple and effective mixture density-based approach was used to identify feature bursts and their associated bursty periods. We also designed an unsupervised greedy algorithm to detect both aperiodic and periodic events, which was successful in detecting real events, as shown in the evaluation on a real news stream. We have not made any benchmark comparison against other approaches, simply because there is no previous work on the addressed problem. Future work includes evaluating the recall of detected events on a labeled news stream, and comparing our model against the closest equivalent methods, which currently are limited to the methods of Kleinberg [12] (which can only detect certain types of bursty events depending on parameter settings), Fung et al. [9], and Swan and Allan [18]. Nevertheless, we believe our simple and effective method will be useful to all TDT practitioners, and will be especially useful for the initial exploratory analysis of news streams.

9. REFERENCES
[1] Apache Lucene-core 2.0.0, http://lucene.apache.org.
[2] Google News Alerts, http://www.google.com/alerts.
[3] Reuters corpus, http://www.reuters.com/researchandstandards/corpus/.
[4] J. Allan. Topic Detection and Tracking: Event-based Information Organization. Kluwer Academic Publishers, 2002.
[5] J. Allan, V. Lavrenko, and H. Jin. First story detection in TDT is hard. In CIKM, pages 374-381, 2000.
[6] J. Allan, C. Wade, and A. Bolivar. Retrieval and novelty detection at the sentence level. In SIGIR, pages 314-321, 2003.
[7] T. Brants, F. Chen, and A. Farahat. A system for new event detection. In SIGIR, pages 330-337, 2003.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38, 1977.
[9] G. P. C. Fung, J. X. Yu, P. S. Yu, and H. Lu. Parameter free bursty events detection in text streams. In VLDB, pages 181-192, 2005.
[10] Q. He, K. Chang, and E.-P. Lim. A model for anticipatory event detection. In ER, pages 168-181, 2006.
[11] Q. He, K. Chang, E.-P. Lim, and J. Zhang. Bursty feature representation for clustering text streams. In SDM, accepted, 2007.
[12] J. Kleinberg. Bursty and hierarchical structure in streams. In SIGKDD, pages 91-101, 2002.
[13] R. Kumar, J. Novak, P. Raghavan, and A. Tomkins. On the bursty evolution of blogspace. In WWW, pages 159-178, 2005.
[14] G. Kumaran and J. Allan. Text classification and named entities for new event detection. In SIGIR, pages 297-304, 2004.
[15] Q. Mei and C. Zhai. Discovering evolutionary theme patterns from text: an exploration of temporal text mining. In SIGKDD, pages 198-207, 2005.
[16] W. D. Penny. Kullback-Leibler divergences of normal, gamma, dirichlet and wishart densities. Technical report, 2001.
[17] N. Stokes and J. Carthy. Combining semantic and syntactic document classifiers to improve first story detection. In SIGIR, pages 424-425, 2001.
[18] R. Swan and J. Allan. Automatic generation of overview timelines. In SIGIR, pages 49-56, 2000.
[19] M. Vlachos, C. Meek, Z. Vagena, and D. Gunopulos. Identifying similarities, periodicities and bursts for online search queries. In SIGMOD, pages 131-142, 2004.
[20] Y. Yang, T. Pierce, and J. Carbonell. A study of retrospective and on-line event detection. In SIGIR, pages 28-36, 1998.
[21] Y. Yang, J. Zhang, J.
Carbonell, and C. Jin.\nTopic-conditioned novelty detection.\nIn SIGKDD, pages 688-693, 2002.\nTable 1: All important aperiodic events (e1 \u2212 e17), top 5 less-reported aperiodic events (e18 \u2212 e22) and top 5 important periodic events (e23 \u2212 e27).\nDetected Event and Bursty Period Doc # True Event e1(Sali,Berisha,Albania,Albanian,March) 02\/02\/199705\/29\/1997 1409 Albanian``s president Sali Berisha lost in an early election and resigned, 12\/1996-07\/1997.\ne2(Seko,Mobutu,Sese,Kabila) 03\/22\/1997-06\/09\/1997 2273 Zaire``s president Mobutu Sese coordinated the native rebellion and failed on 05\/16\/1997.\ne3(Marxist,Peruvian) 11\/19\/1996-03\/05\/1997 824 Peru rebels (Tupac Amaru revolutionary Movement) led a hostage siege in Lima in early 1997.\ne4(Movement,Tupac,Amaru,Lima,hostage,hostages) 11\/16\/1996-03\/20\/1997 824 The same as e3.\ne5(Kinshasa,Kabila,Laurent,Congo) 03\/26\/199706\/15\/1997 1378 Zaire was renamed the Democratic Republic of Congo on 05\/16\/1997.\ne6(Jospin,Lionel,June) 05\/10\/1997-07\/09\/1997 605 Following the early General Elections circa 06\/1997, Lionel Jospin was appointed Prime Minister on 06\/02\/1997.\ne7(Iraq,missile) 08\/31\/1996-09\/13\/1996 1262 U.S. 
fired missile at Iraq on 09\/03\/1996 and 09\/04\/1996.\ne8(Kurdish,Baghdad,Iraqi) 08\/29\/1996-09\/09\/1996 1132 Iraqi troop fought with Kurdish faction circa 09\/1996.\ne9(May,Blair) 03\/24\/1997-07\/04\/1997 1049 Tony Blair became the Primary Minister of the United Kingdom on 05\/02\/1997.\ne10(slalom,skiing) 12\/05\/1996-03\/21\/1997 253 Slalom Game of Alpine Skiing in 01\/1997-02\/1997.\ne11(Interim,months) 09\/24\/1996-12\/31\/1996 3063 Tokyo released company interim results for the past several months in 09\/1996-12\/1996.\ne12(Dole,Bob) 09\/09\/1996-11\/24\/1996 1599 Dole Bob lost the 1996 US presidential election.\ne13(July,Sen) 06\/25\/1997-06\/25\/1997 344 Cambodia``s Prime Minister Hun Sen launched a bloody military coup in 07\/1997.\ne14(Hebron) 10\/15\/1996-02\/14\/1997 2098 Hebron was divided into two sectors in early 1997.\ne15(April,Easter) 02\/23\/1997-05\/04\/1997 480 Easter feasts circa 04\/1997 (for western and Orthodox).\ne16(Diluted,Group) 04\/27\/1997-07\/20\/1997 1888 Tokyo released all 96\/97 group results in 04\/199707\/1997.\ne17(December,Christmas) 11\/17\/1996-01\/26\/1997 1326 Christmas feast in late 12\/1997.\ne18(Kolaceva,winter,Together,promenades,Zajedno, Slobodan,Belgrade,Serbian,Serbia,Draskovic,municipal, Kragujevac) 1\/25\/1997 3 University students organized a vigil on Kolaceva street against government on 1\/25\/1997.\ne19(Tutsi,Luvengi,Burundi,Uvira,fuel,Banyamulenge, Burundian,Kivu,Kiliba,Runingo,Kagunga,Bwegera) 10\/19\/1996 6 Fresh fighting erupted around Uvira between Zaire armed forces and Banyamulengs Tutsi rebels on 10\/19\/1996.\ne20(Malantacchi,Korea,Guy,Rider,Unions,labour, Trade,unions,Confederation,rammed,Geneva,stoppages, Virgin,hire,Myongdong,Metalworkers) 1\/11\/1997 2 Marcello Malantacchi secretary general of the International Metalworkers Federation and Guy Rider who heads the Geneva office of the International Confederation of Free Trade Unions attacked the new labour law of South Korea on 
1/11/1997.
e21 (DBS, Raffles) | 8/17/1997 | 9 | Listing plans of the unit of Singapore DBS Land Raffles Holdings on 8/17/1997.
e22 (preserver, fuel, Galawa, Huddle, Leul, Beausse) | 11/24/1996 | 3 | A woman and her baby were rescued after a hijacked Ethiopian plane ran out of fuel and crashed into the sea near Le Galawa beach on 11/24/1996.
e23 (PRICE, LISTING, MLN, MATURITY, COUPON, MOODY, AMT, FIRST, ISS, TYPE, PAY, BORROWER) | Monday-Friday/week | 7966 | Bond prices announced on all weekdays.
e24 (Unaudited, Ended, Months, Weighted, Provision, Cost, Selling, Revenues, Loss, Income, except, Shrs, Revs) | every season | 2264 | Net income-loss reports released by companies every season.
e25 (rating, Wall, Street, Ian) | Monday-Friday/week | 21767 | Stock reports from Wall Street on all weekdays.
e26 (Sheffield, league, scoring, goals, striker, games) | every Friday, Saturday and Sunday | 574 | Match results of the Sheffield soccer league were published 10 times more often on Friday, Saturday and Sunday than on the other 4 days.
e27 (soccer, matches, Results, season, game, Cup, match, victory, beat, played, play, division) | every Friday, Saturday and Sunday | 2396 | Soccer games were held 7 times more often on Friday, Saturday and Sunday than on the other 4 days.

Analyzing Feature Trajectories for Event Detection

ABSTRACT

We consider the problem of analyzing
word trajectories in both time and frequency domains, with the specific goal of identifying important and less-reported, periodic and aperiodic words. A set of words with identical trends can be grouped together to reconstruct an event in a completely unsupervised manner. The document frequency of each word across time is treated like a time series, where each element is the document frequency-inverse document frequency (DFIDF) score at one time point. In this paper, we 1) first applied spectral analysis to categorize features for different event characteristics: important and less-reported, periodic and aperiodic; 2) modeled aperiodic features with Gaussian density and periodic features with Gaussian mixture densities, and subsequently detected each feature's burst by the truncated Gaussian approach; 3) proposed an unsupervised greedy event detection algorithm to detect both aperiodic and periodic events. All of the above methods can be applied to time series data in general. We extensively evaluated our methods on the 1-year Reuters News Corpus [3] and showed that they were able to uncover meaningful aperiodic and periodic events.

1. INTRODUCTION

There are more than 4,000 online news sources in the world. Manually monitoring all of them for important events has become difficult or practically impossible. In fact, the topic detection and tracking (TDT) community has for many years been trying to come up with a practical solution to help people monitor news effectively. Unfortunately, the holy grail is still elusive, because the vast majority of TDT solutions proposed for event detection [20, 5, 17, 4, 21, 7, 14, 10] are either too simplistic (based on cosine similarity [5]) or impractical due to the need to tune a large number of parameters [9]. The ineffectiveness of current TDT technologies can be easily illustrated by subscribing to any of the many online news alert services, such as the industry-leading Google News Alerts [2], which generates more than 50% false alarms [10]. As further proof, portals like Yahoo take a more pragmatic approach by requiring all machine-generated news alerts to go through a human operator for confirmation before being sent out to subscribers.

Instead of attacking the problem with variations of the same hammer (cosine similarity and TFIDF), a fundamental understanding of the characteristics of news stream data is necessary before any major breakthroughs can be made in TDT. Thus, in this paper, we look at news stories and feature trends from the perspective of analyzing a time-series word signal. Previous work like [9] has attempted to reconstruct an event with its representative features. However, in many predictive event detection tasks (i.e., retrospective event detection), there is a vast set of potential features for only a fixed set of observations (i.e., the obvious bursts). Of these features, often only a small number are expected to be useful. In particular, we study the novel problem of analyzing feature trajectories for event detection, borrowing a well-known technique from signal processing: identifying distributional correlations among all features by spectral analysis. To evaluate our method, we subsequently propose an unsupervised event detection algorithm for news streams.

Figure 1: Feature correlation (DFIDF over time) between a) Easter and April, b) Unaudited and Ended.

As an illustrative example, consider the correlation between the words Easter and April from the Reuters Corpus (the default dataset for all examples in this paper). From the plot of their normalized DFIDF in Figure 1(a), we observe the heavy overlap between the two words circa 04/1997, which means they probably both belong to the same event during that time (the Easter feast). In this example, the hidden event Easter feast is a typical important aperiodic event over 1-year data. Another example is given by Figure 1(b), where both the words Unaudited and Ended exhibit similar
behaviour over periods of 3 months. These two words actually originated from the same periodic event, net income-loss reports, which are released quarterly by publicly listed companies. Other observations drawn from Figure 1 are: 1) the bursty period of April is much longer than that of Easter, which suggests that April may also appear in other events during the same period; 2) Unaudited has a higher average DFIDF value than Ended, which indicates that Unaudited is more representative of the underlying event. These two examples are but the tip of the iceberg among all the word trends and correlations hidden in a news stream like Reuters. If a large number of them can be uncovered, they could significantly aid TDT tasks. In particular, this indicates the significance of mining correlated features for detecting the corresponding events.

To summarize, we postulate that: 1) An event is described by its representative features. A periodic event has a list of periodic features and an aperiodic event has a list of aperiodic features; 2) Representative features from the same event share similar distributions over time and are highly correlated; 3) An important event has a set of active (largely reported) representative features, whereas an unimportant event has a set of inactive (less-reported) representative features; 4) A feature may be included in several events with overlapping time frames. Based on these observations, we can either mine representative features given an event or detect an event from a list of highly correlated features. In this paper, we focus on the latter, i.e., how correlated features can be uncovered to form an event in an unsupervised manner.

1.1 Contributions

This paper has three main contributions:
• To the best of our knowledge, our approach is the first to categorize word features for heterogeneous events. Specifically, every word feature is categorized into one of the following five feature types based on its power spectrum strength and periodicity: 1) HH (high power and high/long periodicity): important aperiodic events; 2) HL (high power and low periodicity): important periodic events; 3) LH (low power and high periodicity): unimportant aperiodic events; 4) LL (low power and low periodicity): non-events; and 5) SW (stopwords), a higher power and periodicity subset of LL comprising stopwords, which contains no information.
• We propose a simple and effective mixture density-based approach to model and detect feature bursts.
• We come up with an unsupervised event detection algorithm to detect both aperiodic and periodic events. Our algorithm has been evaluated on a real news stream to show its effectiveness.

2. RELATED WORK

This work is largely motivated by a broader family of problems collectively known as Topic Detection and Tracking (TDT) [20, 5, 17, 4, 21, 7, 14, 10]. Most TDT research so far has been concerned with clustering/classifying documents into topic types, identifying novel sentences [6] for new events, etc., without much regard to analyzing the word trajectory with respect to time. Swan and Allan [18] first attempted using co-occurring terms to construct an event. However, they only considered named entities and noun phrase pairs, without considering their periodicities; our paper, in contrast, considers all of the above.

Recently, there has been significant interest in modeling an event in text streams as a "burst of activities" by incorporating temporal information. Kleinberg's seminal work described how bursty features can be extracted from text streams using an infinite automaton model [12], which inspired a whole series of applications such as Kumar's identification of bursty communities from Weblog graphs [13], Mei's summarization of evolutionary themes in text streams [15], and He's clustering of text streams using bursty features [11]. Nevertheless, none of the existing work specifically identified features for events, except for Fung et al.
[9], who clustered bursty features to identify various bursty events. Our work differs from [9] in several ways: 1) we analyze every single feature, not only bursty features; 2) we classify features along two categorical dimensions (periodicity and power), yielding altogether five primary feature types; 3) we do not restrict each feature to belong exclusively to one event. Spectral analysis techniques have previously been used by Vlachos et al. [19] to identify periodicities and bursts from query logs. Their focus was on detecting multiple periodicities from the power spectrum graph, which were then used to index words for "query-by-burst" search. In this paper, we use spectral analysis to classify word features along two dimensions, namely periodicity and power spectrum, with the ultimate goal of identifying both periodic and aperiodic bursty events.

3. DATA REPRESENTATION

Let T be the duration/period (in days) of a news stream, and let F represent the complete word feature space in the classical static Vector Space Model (VSM).

3.1 Event Periodicity Classification

Within T, there may exist certain events that occur only once, e.g., Tony Blair elected as Prime Minister of the U.K., and other recurring events of various periodicities, e.g., weekly soccer matches. We thus categorize all events into two types: aperiodic and periodic, defined as follows. If the events of a certain genre occur regularly with a fixed periodicity P ≤ ⌈T/2⌉, we say that this particular event genre is periodic, with each member event qualified as a periodic event; otherwise, the events are aperiodic. Note that the definition of "aperiodic" is relative, i.e., it holds only for a given T, and may be invalid for any other T' > T.
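The periodic/aperiodic test above can be applied directly to a feature trajectory once its dominant period is known. A minimal sketch, assuming NumPy, with the dominant period and power computed from the periodogram as formalized in Section 4 (the synthetic weekly trajectory is a hypothetical example, not data from the Reuters corpus):

```python
import numpy as np

def dominant_period_and_power(y):
    """Return the dominant period P_f and dominant power spectrum S_f
    of a trajectory y of length T, via the periodogram."""
    T = len(y)
    X = np.fft.fft(y)
    # Periodogram: squared magnitudes at frequencies k/T, k = 1..floor(T/2)
    # (k = 0 is the DC component and is ignored).
    power = np.abs(X[1 : T // 2 + 1]) ** 2
    k = int(np.argmax(power)) + 1   # frequency index with the highest power
    P_f = T / k                     # dominant period (Definition 4)
    S_f = float(power[k - 1])       # dominant power spectrum (Definition 5)
    return P_f, S_f

def is_periodic(P_f, T):
    """An event genre is periodic if its dominant period fits at least
    twice into the observation window, i.e., P_f <= T/2."""
    return P_f <= T / 2

# Synthetic weekly trajectory over one year: a burst every 7 days.
T = 365
y = np.array([1.0 if t % 7 == 0 else 0.0 for t in range(T)])
y = y / y.sum()                     # normalize, like y'_f(t)
P_f, S_f = dominant_period_and_power(y)
# P_f comes out close to 7 days, so the genre is classified as periodic.
```

A one-off burst (e.g., a single spike in the year) would instead yield a dominant period near T, placing it on the aperiodic side of the boundary.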
For example, the event Christmas feast is aperiodic for T < 730 but periodic for T ≥ 730.

3.2 Representative Features

Intuitively, an event can be described very concisely by a few discriminative and representative word features, and vice-versa; e.g., "hurricane", "sweep", and "strike" could be representative features of a Hurricane genre event. Likewise, a set of strongly correlated features could be used to reconstruct an event description, assuming that strongly correlated features are representative. The representation vector of a word feature is defined as yf = [yf(1), yf(2), ..., yf(T)], where each element yf(t) is a measure of feature f at time t, which could be defined using the normalized DFIDF score2:

yf(t) = (DFf(t) / N(t)) × log(N / DFf),

where DFf(t) is the number of documents (local DF) containing feature f at day t, DFf is the total number of documents (global DF) containing feature f over T, N(t) is the number of documents for day t, and N is the total number of documents over T.

4. IDENTIFYING FEATURES FOR EVENTS

In this section, we show how representative features can be extracted for (un)important or (a)periodic events.

4.1 Spectral Analysis for Dominant Period

Given a feature f, we decompose its feature trajectory yf = [yf(1), yf(2), ..., yf(T)] into the sequence of T complex numbers [X1, ..., XT] via the discrete Fourier transform (DFT):

Xk = Σ_{t=1..T} yf(t) e^(−(2πi/T)kt), k = 1, 2, ..., T.

The DFT can represent the original time series as a linear combination of complex sinusoids, which is illustrated by the inverse discrete Fourier transform (IDFT):

yf(t) = (1/T) Σ_{k=1..T} Xk e^((2πi/T)kt), t = 1, 2, ..., T,

where the Fourier coefficient Xk denotes the amplitude of the sinusoid with frequency k/T. The original trajectory can be reconstructed with just the dominant frequencies, which can be determined from the power spectrum using the popular periodogram estimator. The periodogram is the sequence of squared magnitudes of the Fourier coefficients, ‖Xk‖², k = 1, 2, ..., ⌊T/2⌋, which indicates the signal power at frequency k/T in the spectrum. From the power spectrum, the dominant period is chosen as the inverse of the frequency with the highest power spectrum, as follows.

DEFINITION 4. (Dominant Period) The dominant period (DP) of a given feature f is Pf = T / arg max_k ‖Xk‖².

Accordingly, we have

DEFINITION 5. (Dominant Power Spectrum) The dominant power spectrum (DPS) of a given feature f is Sf = ‖Xk‖², with ‖Xk‖² ≥ ‖Xj‖², ∀ j ≠ k.

4.2 Categorizing Features

The DPS of a feature trajectory is a strong indicator of its activeness at the specified frequency; the higher the DPS, the more likely the feature is to be bursty. Combining DPS with DP, we therefore categorize all features into four types:

• HH: high Sf, aperiodic or long-term periodic (Pf > ⌊T/2⌋);
• HL: high Sf, short-term periodic (Pf ≤ ⌊T/2⌋);
• LH: low Sf, aperiodic or long-term periodic;
• LL: low Sf, short-term periodic.

(2We normalize yf(t) as y'f(t) = yf(t) / Σ_{i=1..T} yf(i) so that it can be interpreted as a probability.)

The boundary between long-term and short-term periodic is set to ⌊T/2⌋. However, distinguishing between a high and a low DPS is not straightforward; this will be tackled later.

Properties of Different Feature Sets

To better understand the properties of HH, HL, LH and LL, we select four features, Christmas, soccer, DBS and your, as illustrative examples. Since the boundary between high and low power spectrum is unclear, the chosen examples cover a relatively wide range of power spectrum values. Figure 2(a) shows the DFIDF trajectory for Christmas, with a distinct burst around Christmas day. For the 1-year Reuters dataset, "Christmas" is classified as a typical aperiodic event with Pf = 365 and Sf = 135.68, as shown in Figure 2(b). Clearly, the value of Sf = 135.68 is reasonable for a well-known bursty event like Christmas.

Figure 2: Feature "Christmas" with relatively high Sf and long-term Pf.

The DFIDF trajectory for soccer is shown in Figure 3(a), from which we can observe a regular burst every 7 days, which is again verified by its computed value of Pf = 7, as shown in Figure 3(b). Using the domain knowledge that more soccer matches are played every Saturday, which makes soccer a typical and heavily reported periodic event, we consider the value of Sf = 155.13 to be high.

Figure 3: Feature "soccer" with relatively high Sf and short-term Pf.

From the DFIDF trajectory for DBS in Figure 4(a), we can immediately deduce DBS to be an infrequent word with a trivial burst on 08/17/1997 corresponding to DBS Land Raffles Holdings plans. This is confirmed by the long period of Pf = 365 and the low power of Sf = 0.3084 shown in Figure 4(b). Moreover, since this aperiodic event is reported in only a few news stories over a very short time of a few days, we say that its low power value of Sf = 0.3084 is representative of unimportant events.

Figure 4: Feature "DBS" with relatively low Sf and long-term Pf.

The most confusing example is shown in Figure 5 for the word feature your, which looks very similar to the graph for soccer in Figure 3. At first glance, we may be tempted to group both your and soccer into the same category of HL or LL, since both distributions look similar and have the same dominant period of approximately a week. However, further analysis indicates that the periodicity of your is due to the differences in document counts between weekdays (average 2,919 per day) and weekends (average 479 per day). One would have expected the "periodicity" of a stopword like your to be a day. Moreover, despite our DFIDF normalization, the weekday/weekend imbalance still prevailed; stopwords occur 4 times more frequently on weekends than on weekdays. Thus, the DPS remains the only distinguishing factor between your (Sf = 9.42) and soccer (Sf = 155.13). However, it is very dangerous to simply conclude that a power value of S = 9.42 corresponds to a stopword feature. The SW set is
initially seeded with a small set of 29 popular stopwords utilized by the Google search engine.

Algorithm 1 Heuristic Stopwords detection (HS)
Input: Seed SW set, weekday trajectories of all words
1: From the seed set SW, compute the maximum DPS as UDPS, the maximum DFIDF as UDFIDF, and the minimum DFIDF as LDFIDF.
2: for fi ∈ F do
3: Compute DFT for fi.
4: if Sfi ≤ UDPS and

Periodic features are modeled with a mixture of K Gaussians, where:
• the mixing proportions satisfy αk > 0, ∀ k ∈ [1, K], and Σ_{k=1..K} αk = 1;
• μk / σk is the mean / standard deviation of the kth Gaussian.

The well-known Expectation Maximization (EM) [8] algorithm is used to compute the mixing proportions αk, as well as the individual Gaussian density parameters μk and σk. Each Gaussian represents one periodic event, and is modeled similarly as mentioned in Section 5.1.

6. EVENTS FROM FEATURES

After identifying and modeling bursts for all features, the next task is to paint a picture of each event with a potential set of representative features.

6.1 Feature Correlation

If two features fi and fj are representative of the same event, they must satisfy the following necessary conditions:
1. fi and fj are identically distributed: yfi ∼ yfj.
2. fi and fj have a high document overlap.

Measuring Feature Distribution Similarity

We measure the similarity between two features fi and fj using the discrete KL-divergence, defined as follows:

KL(fi ‖ fj) = Σ_{t=1..T} yfi(t) log( yfi(t) / yfj(t) ).    (1)

Since KL-divergence is not symmetric, we define the similarity between fi and fj as the maximum of KL(fi ‖ fj) and KL(fj ‖ fi). Further, the similarity between two aperiodic features can be computed using a closed form of the KL-divergence [16]. The same discrete KL-divergence formula of Eq. 1 is employed to compute the similarity between two periodic features. Next, we define the overall similarity among a set of features R using the maximum inter-feature KL-divergence value.

6.2 Unsupervised Greedy Event Detection

We use features from HH to detect important aperiodic events, features from LH to detect less-reported/unimportant aperiodic events, and features from HL to detect periodic events. All of them share the same algorithm. Given a bursty feature fi ∈ HH, the goal is to find highly correlated features from HH. The set of features similar to fi can then collectively describe an event. Specifically, we need to find a subset Ri of HH that minimizes the cost function C(Ri) of Eq. 2. The underlying event e (associated with the burst of fi) can then be represented by Ri. The burst analysis for event e is exactly the same as for a feature trajectory. The cost in Eq. 2 can be minimized using our unsupervised greedy (UG) event detection algorithm, which is described in Algorithm 2. The UG algorithm allows a feature

Algorithm 2 Unsupervised Greedy event detection (UG)
Input: HH, document index for each feature.
1: Sort and select features in descending DPS order: Sf1 ≥ Sf2 ≥ ... ≥ Sf|HH|.
2: k = 0.
3: for fi ∈ HH do
4: k = k + 1.
5: Init: Ri ← {fi}, C(Ri) = 1/Sfi and HH = HH − {fi}.
6: while HH not empty do
7: m = arg min_m C(Ri ∪ {fm}).
8: if C(Ri ∪ {fm}) < C(Ri) then

• The mutual information between the two terms should be positive (MI(tj, tk) > 0) [6]:

MI(tj, tk) = log( P(tj, tk) / (P(tj) P(tk)) )

• The probability of a relation should be higher than a threshold (0.0001 in our case).

Having a set of relations, the corresponding Knowledge model is defined as follows:

P_K(ti | θQ) = Σ_{(tj tk) ∈ Q} P(ti | (tj tk), θ0) P((tj tk) | θQ)
             = Σ_{(tj tk) ∈ Q} P(ti | (tj tk), θ0) P(tj | θQ) P(tk | θQ),    (10)

where (tj tk) ∈ Q means any combination of two terms in the query. This is a direct extension of the translation model proposed in [3] to our context-dependent relations. The score according to the Knowledge model is then defined as follows:

Score_K(Q, D) = Σ_{ti ∈ V} Σ_{(tj tk) ∈ Q} P(ti | (tj tk), θ0) P(tj | θQ) P(tk | θQ) log P(ti | θD).    (11)

Again, only the top 100 expansion terms are used.

6. MODEL PARAMETERS

There are several parameters in our model: λ in Equation (2) and αi (i ∈ {0, Dom, K, F}) in
Equation (3). As the parameter λ only affects the document model, we set it to the same value in all our experiments. The value λ = 0.5 is determined to maximize the effectiveness of the baseline models (see Section 7.2) on the training data: TREC queries 1-50 and documents on Disk 2. The mixture weights αi of the component models are trained on the same training data using the line search method of [11] to maximize the Mean Average Precision (MAP): each parameter is considered as a search direction. We start by searching in one direction, testing all the values in that direction while keeping the values in the other directions unchanged. Each direction is searched in turn, until no improvement in MAP is observed. To avoid being trapped at a local maximum, we start from 10 random points and select the best setting.

7. EXPERIMENTS

7.1 Setting

The main test data are those from the TREC 1-3 ad-hoc and filtering tracks, including queries 1-150 and documents on Disks 1-3. The choice of this test collection is due to the availability of a manually specified domain for each query. This allows us to compare with an approach using automatic domain identification. Below is an example of a topic:

Number: 103
Domain: Law and Government
Topic: Welfare Reform

We only use topic titles in all our tests. Queries 1-50 are used for training and 51-150 for testing. 13 domains are defined in these queries, and their distributions among the two sets of queries are shown in Fig. 1. We can see that the distribution varies strongly between domains and between the two query sets. We have also tested on TREC 7 and 8 data. For this series of tests, each collection is used in turn as training data while the other is used for testing. Some statistics of the data are given in Tab. 2. All the documents are preprocessed using the Porter stemmer in Lemur, and the standard stoplist is used. Some queries (4, 5 and 3 in the three query sets, respectively) contain only one word; for these queries, the knowledge model is not applicable.

On domain models, we examine several questions:
• When the query domain is specified manually, is it useful to incorporate the domain model?
• If the query domain is not specified, can it be determined automatically? How effective is this method?
• We described two ways to gather documents for a domain: either using documents judged relevant to queries in the domain, or using documents retrieved for these queries. How do they compare?

On the knowledge model, in addition to testing its effectiveness, we also want to compare the context-dependent relations with context-independent ones. Finally, we will see the impact of each component model when all the factors are combined.

7.2 Baseline Methods

Two baseline models are used: the classical unigram model without any expansion, and the model with Feedback. In all the experiments, document models are created using Jelinek-Mercer smoothing. This choice is made according to the observation in [36] that the method performs very well for long queries. In our case, as queries are expanded, they behave similarly to long queries. In our preliminary tests, we also found this method performed better than the other methods (e.g.
Dirichlet), especially for the main baseline method with the Feedback model. Table 3 shows the retrieval effectiveness on all the collections.

7.3 Knowledge Models

This model is combined with both baseline models (with or without feedback). We also compare the context-dependent knowledge model with traditional context-independent term relations (defined between two single terms), which are used to expand queries. The latter selects expansion terms with the strongest global relation to the query; this relation is measured by the sum of the relations to each of the query terms. This method is equivalent to [24], and is also similar to the translation model [3]. We call it the Co-occurrence model in Table 4.

Figure 1. Distribution of domains among queries 1-50 and 51-150 (Environment, Finance, Int. Economics, Int. Finance, Int. Politics, Int. Relations, Law & Gov., Medical & Bio., Military, Politics, Sci. & Tech., US Economics, US Politics).

Table 2. TREC collection statistics
Collection | Document | Size (GB) | Voc. | # of Doc. | Query
Training | Disk 2 | 0.86 | 350,085 | 231,219 | 1-50
Disks 1-3 | Disks 1-3 | 3.10 | 785,932 | 1,078,166 | 51-150
TREC7 | Disks 4-5 | 1.85 | 630,383 | 528,155 | 351-400
TREC8 | Disks 4-5 | 1.85 | 630,383 | 528,155 | 401-450

A t-test is also performed for statistical significance. As we can see, simple co-occurrence relations can produce relatively strong improvements, but context-dependent relations produce much stronger improvements in all cases, especially when feedback is not used. All the improvements over the Co-occurrence model are statistically significant (this is not shown in the table). The large differences between the two types of relation clearly show that context-dependent relations are more appropriate for query expansion. This confirms our hypothesis that, by incorporating context information into relations, we can better determine the appropriate relations to apply and thus avoid introducing inappropriate expansion terms. The following example further confirms this observation; we show the strongest expansion terms suggested by both types of relation for the query #384 space station moon:

Co-occurrence Relations: year 0.016552, power 0.013226, time 0.010925, 1 0.009422, develop 0.008932, offic 0.008485, oper 0.008408, 2 0.007875, earth 0.007843, work 0.007801, radio 0.007701, system 0.007627, build 0.007451, 000 0.007403, includ 0.007377, state 0.007076, program 0.007062, nation 0.006937, open 0.006889, servic 0.006809, air 0.006734, space 0.006685, nuclear 0.006521, full 0.006425, make 0.006410, compani 0.006262, peopl 0.006244, project 0.006147, unit 0.006114, gener 0.006036, dai 0.006029

Context-Dependent Relations: space 0.053913, mar 0.046589, earth 0.041786, man 0.037770, program 0.033077, project 0.026901, base 0.025213, orbit 0.025190, build 0.025042, mission 0.023974, call 0.022573, explor 0.021601, launch 0.019574, develop 0.019153, shuttl 0.016966, plan 0.016641, flight 0.016169, station 0.016045, intern 0.016002, energi 0.015556, oper 0.014536, power 0.014224, transport 0.012944, construct 0.012160, nasa 0.011985, nation 0.011855, perman 0.011521, japan 0.011433, apollo 0.010997, lunar 0.010898

In comparison with the baseline model with feedback (Tab. 3), we see that the improvements made by the Knowledge model alone are slightly lower. However, when both models are combined, there are additional improvements over the Feedback model, and these improvements are statistically significant in 2 cases out of 3. This demonstrates that the impacts produced by feedback and term relations are different and complementary.

7.4 Domain Models

In this section, we test several strategies to create and use domain models, exploiting the domain information of the query set in various ways.

Strategies for creating domain models:
C1 - With the relevant documents for the in-domain queries: this strategy simulates the case where we have an existing directory in which documents relevant to the domain are included.
C2 - With the top-100 documents retrieved with the
in-domain queries: this strategy simulates the case where the user specifies a domain for his queries without judging document relevance, and the system gathers related documents from his search history.

Strategies for using domain models:
U1 - The domain model is determined by the user manually.
U2 - The domain model is determined by the system.

7.4.1 Creating Domain Models

We test strategies C1 and C2. In this series of tests, each of the queries 51-150 is used in turn as the test query, while the other queries and their relevant documents (C1) or top-ranked retrieved documents (C2) are used to create the domain models. The same method is used on queries 1-50 to tune the parameters.

Table 3. Baseline models (unigram model)
Coll. | Measure | Without FB | With FB
Disks 1-3 | AvgP | 0.1570 | 0.2344 (+49.30%)
 | Recall /48 355 | 15 711 | 19 513
 | P@10 | 0.4050 | 0.5010
TREC7 | AvgP | 0.1656 | 0.2176 (+31.40%)
 | Recall /4 674 | 2 237 | 2 777
 | P@10 | 0.3420 | 0.3860
TREC8 | AvgP | 0.2387 | 0.2909 (+21.87%)
 | Recall /4 728 | 2 764 | 3 237
 | P@10 | 0.4340 | 0.4860

Table 4. Knowledge models
Coll. | Measure | Co-occurrence Without FB | Co-occurrence With FB | Knowledge Without FB | Knowledge With FB
Disks 1-3 | AvgP | 0.1884 (+20.00%)++ | 0.2432 (+3.75%)** | 0.2164 (+37.83%)++ | 0.2463 (+5.08%)**
 | Recall /48 355 | 17 430 | 20 020 | 18 944 | 20 260
 | P@10 | 0.4640 | 0.5160 | 0.5050 | 0.5120
TREC7 | AvgP | 0.1823 (+10.08%)++ | 0.2350 (+8.00%)* | 0.2157 (+30.25%)++ | 0.2401 (+10.34%)**
 | Recall /4 674 | 2 329 | 2 933 | 2 709 | 2 985
 | P@10 | 0.3780 | 0.3760 | 0.3900 | 0.3900
TREC8 | AvgP | 0.2519 (+5.53%) | 0.2926 (+0.58%) | 0.2724 (+14.12%)++ | 0.3007 (+3.37%)
 | Recall /4 728 | 2 829 | 3 279 | 3 090 | 3 338
 | P@10 | 0.4360 | 0.4940 | 0.4720 | 0.5000

(The Without FB columns are compared to the baseline model without feedback, while the With FB columns are compared to the baseline with feedback. ++ and + mean significant changes in the t-test with respect to the baseline without feedback, at the level of p < 0.01 and p < 0.05, respectively. ** and * are similar but compared to the baseline model with feedback.)

Table 5. Domain models with relevant documents (C1)
Coll. | Measure | Domain Without FB | Domain With FB | Sub-Domain Without FB | Sub-Domain With FB
Disks 1-3 (U1) | AvgP | 0.1700 (+8.28%)++ | 0.2454 (+4.69%)** | 0.1918 (+22.17%)++ | 0.2461 (+4.99%)**
 | Recall /48 355 | 16 517 | 20 141 | 17 872 | 20 212
 | P@10 | 0.4370 | 0.5130 | 0.4490 | 0.5150
TREC7 (U2) | AvgP | 0.1715 (+3.56%)++ | 0.2389 (+9.79%)* | 0.1842 (+11.23%)++ | 0.2408 (+10.66%)**
 | Recall /4 674 | 2 270 | 2 965 | 2 428 | 2 987
 | P@10 | 0.3720 | 0.3740 | 0.3880 | 0.3760
TREC8 (U2) | AvgP | 0.2442 (+2.30%) | 0.2957 (+1.65%) | 0.2563 (+7.37%) | 0.2967 (+1.99%)
 | Recall /4 728 | 2 796 | 3 308 | 2 873 | 3 302
 | P@10 | 0.4420 | 0.5000 | 0.4280 | 0.5020

Table 6. Domain models with top-100 documents (C2)
Coll. | Measure | Domain Without FB | Domain With FB | Sub-Domain Without FB | Sub-Domain With FB
Disks 1-3 (U1) | AvgP | 0.1718 (+9.43%)++ | 0.2456 (+4.78%)** | 0.1799 (+14.59%)++ | 0.2452 (+4.61%)**
 | Recall /48 355 | 16 558 | 20 131 | 17 341 | 20 155
 | P@10 | 0.4300 | 0.5140 | 0.4220 | 0.5110
TREC7 (U2) | AvgP | 0.1765 (+6.58%)++ | 0.2395 (+10.06%)** | 0.1785 (+7.79%)++ | 0.2393 (+9.97%)**
 | Recall /4 674 | 2 319 | 2 969 | 2 254 | 2 968
 | P@10 | 0.3780 | 0.3820 | 0.3820 | 0.3820
TREC8 (U2) | AvgP | 0.2434 (+1.97%) | 0.2949 (+1.38%) | 0.2441 (+2.26%) | 0.2961 (+1.79%)
 | Recall /4 728 | 2 772 | 3 318 | 2 734 | 3 311
 | P@10 | 0.4380 | 0.4960 | 0.4280 | 0.5020

We also compare the domain models created with all the in-domain documents (Domain) and with only the top-10 documents retrieved from the domain with the query (Sub-Domain). In these tests, we use manual identification of the query domain for Disks 1-3 (U1), but automatic identification for TREC7 and 8 (U2). First, it is interesting to notice that the incorporation of domain models generally improves retrieval effectiveness in all cases. The improvements on Disks 1-3 and TREC7 are statistically significant. However, the improvements are smaller in scale than those obtained with the Feedback and Relation models. Looking at the distribution of the domains (Fig.
1), this observation is not surprising: for many domains, we have only a few training queries, and thus few in-domain documents with which to create domain models. In addition, topics in the same domain can vary greatly, in particular in large domains such as science and technology, international politics, etc.

Second, we observe that the two methods of creating domain models perform equally well (Tab. 6 vs. Tab. 5). In other words, providing relevance judgments for queries does not add much advantage for the purpose of creating domain models. This may seem surprising. An analysis immediately shows the reason: a domain model (as we create it) only captures the term distribution in the domain. Relevant documents for all in-domain queries vary greatly; therefore, in some large domains, characteristic terms have variable effects on queries. On the other hand, as we only use the term distribution, even if the top documents retrieved for the in-domain queries are irrelevant, they can still contain domain-characteristic terms, similarly to relevant documents. Thus both strategies produce very similar effects. This result opens the door to a simpler method that does not require relevance judgments, for example using search history.

Third, without the Feedback model, the sub-domain models constructed with relevant documents perform much better than the whole-domain models (Tab. 5). However, once the Feedback model is used, the advantage disappears. On one hand, this confirms our earlier hypothesis that a domain may be too large to be able to suggest relevant terms for new queries in the domain. It indirectly validates our first hypothesis that a single user model or profile may be too large, so smaller domain models are preferred. On the other hand, sub-domain models capture characteristics similar to those of the Feedback model, so when the latter is used, sub-domain models become superfluous. However, if domain models are constructed with top-ranked documents (Tab. 6), sub-domain models make
much less difference. This can be explained by the fact that the domains constructed with top-ranked documents tend to be more uniform than relevant documents with respect to term distribution, as the top retrieved documents usually have a stronger statistical correspondence with the queries than the relevant documents do.

7.4.2 Determining the Query Domain Automatically

It is not realistic to always ask users to specify a domain for their queries. Here, we examine the possibility of automatically identifying query domains. Table 7 shows the results of this approach with both strategies for domain model construction. We can observe that the effectiveness is only slightly lower than that produced with manual identification of the query domain (Tab. 5 & 6, Domain models). This shows that automatic domain identification is as effective a way to select the domain model as manual identification, and demonstrates the feasibility of using domain models for queries when no domain information is provided. Looking at the accuracy of the automatic domain identification, however, it is surprisingly low: for queries 51-150, only 38% of the determined domains correspond to the manual identifications. This is much lower than the above-80% rates reported in [18]. A detailed analysis reveals that the main reason is the closeness of several domains in the TREC queries (e.g., International relations, International politics, Politics). However, in this situation, wrong domains assigned to queries are not always irrelevant and useless. For example, even when a query in International relations is classified in International politics, the latter domain can still suggest useful terms to the query. Therefore, the relatively low classification accuracy does not imply low usefulness of the domain models.

7.5 Complete Models

The results with the complete model are shown in Table 8. This model integrates all the components described in this paper: the original query model, the Feedback model, the Domain model and the Knowledge model. We have tested both strategies to create domain models, but the differences between them are very small, so we only report the results with the relevant documents. Our first observation is that the complete models produce the best results. All the improvements over the baseline model (with feedback) are statistically significant. This result confirms that the integration of contextual factors is effective. Compared to the other results, we see consistent, though sometimes small, improvements over all the partial models. Looking at the mixture weights, which may reflect the importance of each model, we observe that the best settings in all the collections vary in the following ranges: 0.1 ≤ α0 ≤ 0.2, 0.1 ≤ αDom ≤ 0.2, 0.1 ≤ αK ≤ 0.2 and 0.5 ≤ αF ≤ 0.6. We see that the most important factor is the Feedback model. This is also the single factor that produced the highest improvements over the original query model. This observation seems to indicate that this model has the highest capability to capture the information need behind the query. However, even with lower weights, the other models have strong impacts on the final effectiveness. This demonstrates the benefit of integrating more contextual factors in IR.

Table 7. Automatic query domain identification (U2)
Coll. | Measure | C1 Without FB | C1 With FB | C2 Without FB | C2 With FB
Disks 1-3 (U2) | AvgP | 0.1650 (+5.10%)++ | 0.2444 (+4.27%)** | 0.1670 (+6.37%)++ | 0.2449 (+4.48%)**
 | Recall | 16 343 | 20 061 | 16 414 | 20 090
 | P@10 | 0.4270 | 0.5100 | 0.4090 | 0.5140
(C1 = domain models with relevant documents; C2 = domain models with top-100 documents.)

Table 8. Complete models (C1)
Coll. | Measure | Man. dom. id. (U1) | Auto. dom. id. (U2)
Disks 1-3 | AvgP | 0.2501 (+6.70%)** | 0.2489 (+6.19%)**
 | Recall /48 355 | 20 514 | 20 367
 | P@10 | 0.5200 | 0.5230
TREC7 | AvgP | N/A | 0.2462 (+13.14%)**
 | Recall /4 674 | | 3 014
 | P@10 | | 0.3960
TREC8 | AvgP | N/A | 0.3029 (+4.13%)**
 | Recall /4 728 | | 3 321
 | P@10 | | 0.5020

8. CONCLUSIONS

Traditional IR approaches usually consider the query as the only element available for inferring the user's information need. Many previous studies have investigated the integration of contextual factors in IR models, typically by incorporating a user profile. In this paper, we argue that a single user profile (or model) can cover too large a variety of topics, so that new queries can be incorrectly biased. Similarly to some previous studies, we propose to model topic domains instead of the user. Previous investigations on context focused on factors around the query. We showed in this paper that factors within the query are also important - they help select the appropriate term relations to apply in query expansion. We have integrated the above contextual factors, together with the feedback model, in a single language model. Our experimental results strongly confirm the benefit of using contexts in IR. This work also shows that the language modeling framework is appropriate for integrating many contextual factors. This work can be further improved in several respects, including other methods to extract term relations, integrating more context words in conditions, and identifying query domains. It would also be interesting to test the method on Web search using user search history. We will investigate these problems in our future
research.

9. REFERENCES
[1] Bai, J., Nie, J.Y., Cao, G., Context-dependent term relations for information retrieval, EMNLP'06, pp. 551-559, 2006.
[2] Belkin, N.J., Interaction with texts: Information retrieval as information seeking behavior, Information Retrieval'93: Von der Modellierung zur Anwendung, pp. 55-66, Konstanz: Krause & Womser-Hacker, 1993.
[3] Berger, A., Lafferty, J., Information retrieval as statistical translation, SIGIR'99, pp. 222-229, 1999.
[4] Bouchard, H., Nie, J.Y., Modèles de langue appliqués à la recherche d'information contextuelle, Conf. en Recherche d'Information et Applications (CORIA), Lyon, 2006.
[5] Chirita, P.A., Paiu, R., Nejdl, W., Kohlschütter, C., Using ODP metadata to personalize search, SIGIR'05, pp. 178-185, 2005.
[6] Church, K.W., Hanks, P., Word association norms, mutual information, and lexicography, ACL, pp. 22-29, 1989.
[7] Croft, W.B., Cronen-Townsend, S., Lavrenko, V., Relevance feedback and personalization: A language modeling perspective, The DELOS-NSF Workshop on Personalization and Recommender Systems in Digital Libraries, pp. 49-54, 2006.
[8] Croft, W.B., Wei, X., Context-based topic models for query modification, CIIR Technical Report, University of Massachusetts, 2005.
[9] Dumais, S., Cutrell, E., Cadiz, J., Jancke, G., Sarin, R., Robbins, D.C., Stuff I've Seen: a system for personal information retrieval and re-use, SIGIR'03, pp. 72-79, 2003.
[10] Fang, H., Zhai, C., Semantic term matching in axiomatic approaches to information retrieval, SIGIR'06, pp. 115-122, 2006.
[11] Gao, J., Qi, H., Xia, X., Nie, J.-Y., Linear discriminant model for information retrieval, SIGIR'05, pp. 290-297, 2005.
[12] Google Personalized Search, http://www.google.com/psearch.
[13] Hipp, J., Guntzer, U., Nakhaeizadeh, G., Algorithms for association rule mining - a general survey and comparison, SIGKDD Explorations, 2(1), pp. 58-64, 2000.
[14] Ingwersen, P., Järvelin, K., Information retrieval in context: IRiX, SIGIR Forum, 39: pp. 31-39, 2004.
[15] Kim, H.-R., Chan, P.K., Personalized ranking of search results with learned user interest hierarchies from bookmarks, WEBKDD'05 Workshop at ACM-KDD, pp. 32-43, 2005.
[16] Lavrenko, V., Croft, W.B., Relevance-based language models, SIGIR'01, pp. 120-127, 2001.
[17] Lau, R., Bruza, P., Song, D., Belief revision for adaptive information retrieval, SIGIR'04, pp. 130-137, 2004.
[18] Liu, F., Yu, C., Meng, W., Personalized web search by mapping user queries to categories, CIKM'02, pp. 558-565, 2002.
[19] Liu, X., Croft, W.B., Cluster-based retrieval using language models, SIGIR'04, pp. 186-193, 2004.
[20] Morris, R.C., Toward a user-centered information service, JASIS, 45: pp. 20-30, 1994.
[21] Park, T.K., Toward a theory of user-based relevance: A call for a new paradigm of inquiry, JASIS, 45: pp. 135-141, 1994.
[22] Peng, F., Schuurmans, D., Wang, S., Augmenting naive Bayes classifiers with statistical language models, Inf. Retr., 7(3-4): pp. 317-345, 2004.
[23] Pitkow, J., Schütze, H., Cass, T., Cooley, R., Turnbull, D., Edmonds, A., Adar, E., Breuel, T., Personalized search, Communications of the ACM, 45: pp. 50-55, 2002.
[24] Qiu, Y., Frei, H.P., Concept based query expansion, SIGIR'93, pp. 160-169, 1993.
[25] Sanderson, M., Retrieving with good sense, Inf. Retr., 2(1): pp. 49-69, 2000.
[26] Schamber, L., Eisenberg, M.B., Nilan, M.S., A re-examination of relevance: Towards a dynamic, situational definition, Information Processing and Management, 26(6): pp. 755-774, 1990.
[27] Schütze, H., Pedersen, J.O., A cooccurrence-based thesaurus and two applications to information retrieval, Information Processing and Management, 33(3): pp. 307-318, 1997.
[28] Shen, D., Pan, R., Sun, J.-T., Pan, J.J., Wu, K., Yin, J., Yang, Q., Query enrichment for web-query classification, ACM TOIS, 24(3): pp. 320-352, 2006.
[29] Shen, X., Tan, B., Zhai, C., Context-sensitive information retrieval using implicit feedback, SIGIR'05, pp. 43-50, 2005.
[30] Teevan, J., Dumais, S.T., Horvitz, E., Personalizing search via automated analysis of interests and activities, SIGIR'05, pp. 449-456, 2005.
[31] Voorhees, E., Query expansion using lexical-semantic relations, SIGIR'94, pp. 61-69, 1994.
[32] Xu, J., Croft, W.B., Query expansion using local and global document analysis, SIGIR'96, pp. 4-11, 1996.
[33] Yarowsky, D., Unsupervised word sense disambiguation rivaling supervised methods, ACL, pp. 189-196, 1995.
[34] Zhou, X., Hu, X., Zhang, X., Lin, X., Song, I.-Y., Context-sensitive semantic smoothing for the language modeling approach to genomic IR, SIGIR'06, pp. 170-177, 2006.
[35] Zhai, C., Lafferty, J., Model-based feedback in the language modeling approach to information retrieval, CIKM'01, pp. 403-410, 2001.
[36] Zhai, C., Lafferty, J., A study of smoothing methods for language models applied to ad-hoc information retrieval, SIGIR'01, pp. 334-342, 2001.

Using Query Contexts in Information Retrieval

ABSTRACT

User query is an element that specifies an information need, but it is not the only one. Studies in the literature have found many contextual factors that strongly influence the interpretation of a query. Recent studies have tried to consider the user's interests by creating a user profile. However, a single profile for a user may not be sufficient for the variety of that user's queries. In this study, we propose to use query-specific contexts instead of user-centric ones, including the context around the query and the context within the query. The former specifies the environment of a query, such as the domain of interest, while the latter refers to context words within the query, which are particularly useful for the selection of relevant term relations. In this paper, both types of context are integrated in an IR model based on language modeling. Our experiments on
several TREC collections show that each of the context factors brings significant improvements in retrieval effectiveness.\n1.\nINTRODUCTION\nQueries, especially short queries, do not provide a complete specification of the information need.\nMany relevant terms can be absent from queries and terms included may be ambiguous.\nThese issues have been addressed in a large number of previous studies.\nTypical solutions include expanding either document or query representation [19] [35] by exploiting different resources [24] [31], using word sense disambiguation [25], etc. .\nIn these studies, however, it has been generally assumed that query is the only element available about the user's information need.\nIn reality, query is always formulated in a search context.\nAs it has been found in many previous studies [2] [14] [20] [21] [26], contextual factors have a strong influence on relevance judgments.\nThese factors include, among many others, the user's domain of interest, knowledge, preferences, etc. .\nAll these elements specify the\n2 Yahoo! Inc. 
.\nMontreal, Quebec, Canada\nbouchard@yahoo-inc.com\n2.\nCONTEXTS AND UTILIZATION IN IR\nDomain of interest and context around query\nKnowledge and context within query\nQuery profile and other factors\n3.\nGENERAL IR MODEL\n4.\nCONSTRUCTING AND USING DOMAIN MODELS\nDom Dom Ct \u2208 D\n5.\nEXTRACTING CONTEXT-DEPENDENT TERM RELATIONS FROM DOCUMENTS\n6.\nMODEL PARAMETERS\n7.\nEXPERIMENTS\n7.1 Setting\n7.2 Baseline Methods\n7.3 Knowledge Models\n7.4 Domain Models\nStrategies for creating domain models:\n7.4.1 Creating Domain models\n7.4.2 Determining Query Domain Automatically\n7.5 Complete Models\n8.\nCONCLUSIONS\nTraditional IR approaches usually consider the query as the only element available for the user information need.\nMany previous studies have investigated the integration of some contextual factors in IR models, typically by incorporating a user profile.\nIn this paper, we argue that a single user profile (or model) can contain a too large variety of different topics so that new queries can be incorrectly biased.\nSimilarly to some previous studies, we propose to model topic domains instead of the user.\nPrevious investigations on context focused on factors around the query.\nWe showed in this paper that factors within the query are also important--they help select the appropriate term relations to apply in query expansion.\nWe have integrated the above contextual factors, together with feedback model, in a single language model.\nOur experimental results strongly confirm the benefit of using contexts in IR.\nThis work also shows that the language modeling framework is appropriate for integrating many contextual factors.\nThis work can be further improved on several aspects, including other methods to extract term relations, to integrate more context words in conditions and to identify query domains.\nIt would also be interesting to test the method on Web search using user search history.\nWe will investigate these problems in our future 
research.","lvl-4":"Using Query Contexts in Information Retrieval\nABSTRACT\nUser query is an element that specifies an information need, but it is not the only one.\nStudies in literature have found many contextual factors that strongly influence the interpretation of a query.\nRecent studies have tried to consider the user's interests by creating a user profile.\nHowever, a single profile for a user may not be sufficient for a variety of queries of the user.\nIn this study, we propose to use query-specific contexts instead of user-centric ones, including context around query and context within query.\nThe former specifies the environment of a query such as the domain of interest, while the latter refers to context words within the query, which is particularly useful for the selection of relevant term relations.\nIn this paper, both types of context are integrated in an IR model based on language modeling.\nOur experiments on several TREC collections show that each of the context factors brings significant improvements in retrieval effectiveness.\n1.\nINTRODUCTION\nQueries, especially short queries, do not provide a complete specification of the information need.\nMany relevant terms can be absent from queries and terms included may be ambiguous.\nThese issues have been addressed in a large number of previous studies.\nIn these studies, however, it has been generally assumed that query is the only element available about the user's information need.\nIn reality, query is always formulated in a search context.\nThese factors include, among many others, the user's domain of interest, knowledge, preferences, etc. 
.\nAll these elements specify the\n8.\nCONCLUSIONS\nTraditional IR approaches usually consider the query as the only element available for the user information need.\nMany previous studies have investigated the integration of some contextual factors in IR models, typically by incorporating a user profile.\nSimilarly to some previous studies, we propose to model topic domains instead of the user.\nPrevious investigations on context focused on factors around the query.\nWe showed in this paper that factors within the query are also important--they help select the appropriate term relations to apply in query expansion.\nWe have integrated the above contextual factors, together with feedback model, in a single language model.\nOur experimental results strongly confirm the benefit of using contexts in IR.\nThis work also shows that the language modeling framework is appropriate for integrating many contextual factors.\nThis work can be further improved on several aspects, including other methods to extract term relations, to integrate more context words in conditions and to identify query domains.\nIt would also be interesting to test the method on Web search using user search history.","lvl-2":"Using Query Contexts in Information Retrieval\nABSTRACT\nUser query is an element that specifies an information need, but it is not the only one.\nStudies in literature have found many contextual factors that strongly influence the interpretation of a query.\nRecent studies have tried to consider the user's interests by creating a user profile.\nHowever, a single profile for a user may not be sufficient for a variety of queries of the user.\nIn this study, we propose to use query-specific contexts instead of user-centric ones, including context around query and context within query.\nThe former specifies the environment of a query such as the domain of interest, while the latter refers to context words within the query, which is particularly useful for the selection of relevant 
term relations. In this paper, both types of context are integrated in an IR model based on language modeling. Our experiments on several TREC collections show that each of the context factors brings significant improvements in retrieval effectiveness.

1. INTRODUCTION

Queries, especially short queries, do not provide a complete specification of the information need. Many relevant terms can be absent from queries, and the terms included may be ambiguous. These issues have been addressed in a large number of previous studies. Typical solutions include expanding either the document or the query representation [19] [35] by exploiting different resources [24] [31], using word sense disambiguation [25], etc. In these studies, however, it has been generally assumed that the query is the only element available about the user's information need. In reality, a query is always formulated in a search context. As has been found in many previous studies [2] [14] [20] [21] [26], contextual factors have a strong influence on relevance judgments. These factors include, among many others, the user's domain of interest, knowledge, and preferences. All these elements specify the contexts around the query, so we call them context around query in this paper.

2 Yahoo! Inc., Montreal, Quebec, Canada. bouchard@yahoo-inc.com

It has been demonstrated that a user's query should be placed in its context for a correct interpretation. Recent studies have investigated the integration of some contexts around the query [9] [30] [23]. Typically, a user profile is constructed to reflect the user's domains of interest and background. The user profile is used to favor the documents that are more closely related to it. However, a single profile for a user can group a variety of different domains, which are not always relevant to a particular query. For example, if a user working in computer science issues the query "Java hotel", documents on "Java language" will be incorrectly favored. A possible solution to this problem is to use query-related profiles or models instead of user-centric ones. In this paper, we propose to model topic domains, among which the related one(s) will be selected for a given query. This method allows us to select a more appropriate query-specific context around the query.

Another strong contextual factor identified in the literature is domain knowledge, or domain-specific term relations, such as "program → computer" in computer science. Using this relation, one would be able to expand the query "program" with the term "computer". However, domain knowledge is available only for a few domains (e.g.
\"Medicine\").\nThe shortage of domain knowledge has led to the utilization of general knowledge for query expansion [31], which is more available from resources such as thesauri, or it can be automatically extracted from documents [24] [27].\nHowever, the use of general knowledge gives rise to an enormous problem of knowledge ambiguity [31]: we are often unable to determine if a relation applies to a query.\nFor example, usually little information is available to determine whether \"program computer\" is applicable to queries \"Java program\" and \"TV program\".\nTherefore, the relation has been applied to all queries containing \"program\" in previous studies, leading to a wrong expansion for \"TV program\".\nLooking at the two query examples, however, people can easily determine whether the relation is applicable, by considering the context words \"Java\" and \"TV\".\nSo the important question is how we can serve these context words in queries to select the appropriate relations to apply.\nThese context words form a context within query.\nIn some previous studies [24] [31], context words in a query have been used to select expansion terms suggested by term relations, which are, however, context-independent (such as \"program computer\").\nAlthough improvements are observed in some cases, they are limited.\nWe argue that the problem stems from the lack of necessary context information in relations themselves, and a more radical solution lies in the addition of contexts in relations.\nThe method we propose is to add context words into the condition of a relation, such as \"{Java, program} computer\", to limit its applicability to the appropriate context.\nThis paper aims to make contributions on the following aspects: \u2022 Query-specific domain model: We construct more specific domain models instead of a single user model grouping all the domains.\nThe domain related to a specific query is selected (either manually or automatically) for each query.\n\u2022 
Context within query: We integrate context words in term relations so that only appropriate relations can be applied to the query.\n\u2022 Multiple contextual factors: Finally, we propose a framework\nbased on language modeling approach to integrate multiple contextual factors.\nOur approach has been tested on several TREC collections.\nThe experiments clearly show that both types of context can result in significant improvements in retrieval effectiveness, and their effects are complementary.\nWe will also show that it is possible to determine the query domain automatically, and this results in comparable effectiveness to a manual specification of domain.\nThis paper is organized as follows.\nIn section 2, we review some related work and introduce the principle of our approach.\nSection 3 presents our general model.\nThen sections 4 and 5 describe respectively the domain model and the knowledge model.\nSection 6 explains the method for parameter training.\nExperiments are presented in section 7 and conclusions in section 8.\n2.\nCONTEXTS AND UTILIZATION IN IR\nThere are many contextual factors in IR: the user's domain of interest, knowledge about the subject, preference, document recency, and so on [2] [14].\nAmong them, the user's domain of interest and knowledge are considered to be among the most important ones [20] [21].\nIn this section, we review some of the studies in IR concerning these aspects.\nDomain of interest and context around query\nA domain of interest specifies a particular background for the interpretation of a query.\nIt can be used in different ways.\nMost often, a user profile is created to encompass all the domains of interest of a user [23].\nIn [5], a user profile contains a set of topic categories of ODP (Open Directory Project, http:\/\/dmoz.org) identified by the user.\nThe documents (Web pages) classified in these categories are used to create a term vector, which represents the whole domains of interest of the user.\nOn the other 
hand, [9] [15] [26] [30], as well as Google Personalized Search [12], use the documents read by the user, stored on the user's computer or extracted from the user's search history. In all these studies, we observe that a single user profile (usually a statistical model or vector) is created for a user without distinguishing the different topic domains. The systematic application of the user profile can incorrectly bias the results for queries unrelated to the profile. This situation can often occur in practice, as a user can search for a variety of topics outside the domains that he has previously searched in or identified. A possible solution to this problem is the creation of multiple profiles, one for each separate domain of interest. The domains related to a query are then identified according to the query. This enables us to use a more appropriate query-specific profile instead of a user-centric one. This approach is used in [18], in which ODP directories are used; however, only a small-scale experiment has been carried out. A similar approach is used in [8], where domain models are created using ODP categories and user queries are manually mapped to them. However, the experiments showed variable results. It remains unclear whether domain models can be effectively used in IR. In this study, we also model topic domains. We will carry out experiments on both automatic and manual identification of query domains. Domain models will also be integrated with other factors. In the following discussion, we will call the topic domain of a query a context around query, to contrast with the context within query that we introduce next.

Knowledge and context within query

Due to the unavailability of domain-specific knowledge, general knowledge resources such as WordNet and term relations extracted automatically have been used for query expansion [27] [31]. In both cases, the relations are defined between two single terms, such as "t1 → t2". If a query contains term t1, then t2 is always considered as a candidate for expansion. As we mentioned earlier, we are faced with the problem of relation ambiguity: some relations apply to a query and some others should not. For example, "program → computer" should not be applied to "TV program" even if the latter contains "program". However, little information is available in the relation to help us determine whether an application context is appropriate. To remedy this problem, approaches have been proposed that select expansion terms after the application of relations [24] [31]. Typically, one defines some sort of global relation between the expansion term and the whole query, usually a sum of its relations to every query word. Although some inappropriate expansion terms can be removed because they are only weakly connected to some query terms, many others remain. For example, if the relation "program → computer" is strong enough, "computer" will have a strong global relation to the whole query "TV program" and will remain as an expansion term.

It is possible to integrate stronger control on the utilization of knowledge. For example, [17] defined strong logical relations to encode knowledge of different domains. If the application of a relation leads to a conflict with the query (or with other pieces of evidence), it is not applied. However, this approach requires encoding all the logical consequences, including contradictions, in the knowledge, which is difficult to implement in practice. In our earlier study [1], a simpler and more general approach was proposed to solve the problem at its source, i.e.
the lack of context information in term relations: by introducing stricter conditions into a relation, for example "{Java, program} → computer" and "{algorithm, program} → computer", the applicability of the relations is naturally restricted to the correct contexts. As a result, "computer" will be used to expand the queries "Java program" or "program algorithm", but not "TV program". This principle is similar to that of [33] for word sense disambiguation. However, we do not explicitly assign a meaning to a word; rather, we try to distinguish between word usages in different contexts. From this point of view, our approach is more similar to word sense discrimination [27]. In this paper, we use the same approach, and we integrate it into a more global model with other context factors. As the context words added to relations allow us to exploit the word context within the query, we call such factors context within query. Within-query context exists in many queries. In fact, users often do not use a single ambiguous word such as "Java" as a query (if they are aware of its ambiguity). Some context words are often used together with it. In these cases, contexts within query are created and can be exploited.

Query profile and other factors

Many attempts have been made in IR to create query-specific profiles. We can consider implicit feedback or blind feedback [7] [16] [29] [32] [35] in this family. A short-term feedback model is created for the given query from feedback documents; this has been proven effective at capturing some aspects of the user's intent behind the query. In order to create a good query model, such a query-specific feedback model should be integrated. There are many other contextual factors [26] that we do not deal with in this paper. However, it seems clear that many factors are complementary. As found in [32], a feedback model creates a local context related to the query, while the general knowledge or the whole corpus defines a global context. Both types of contexts have been proven useful [32]. The domain model specifies yet another type of useful information: it reflects a set of specific background terms for a domain, for example "pollution", "rain", "greenhouse", etc. for the domain "Environment". These terms are often presumed when a user issues a query such as "waste cleanup" in the domain, and it is useful to add them to the query. We see a clear complementarity among these factors, so it is useful to combine them in a single IR model. In this study, we integrate all the above factors within a unified framework based on language modeling. Each component contextual factor determines a different ranking score, and the final document ranking combines all of them. This is described in the following section.

3. GENERAL IR MODEL

In the language modeling framework, a typical score function is defined in KL-divergence as follows:

Score(Q, D) = Σ_{t∈V} P(t|θ_Q) log P(t|θ_D)   (1)

where θ_D is a (unigram) language model created for a document D, θ_Q a language model for the query Q, and V the vocabulary. Smoothing of the document model is recognized to be crucial [35], and one common smoothing method is Jelinek-Mercer interpolation smoothing:

P(t|θ_D) = (1 − λ) P_ML(t|D) + λ P(t|θ_C)   (2)

where λ is an interpolation parameter and θ_C the collection model. In the basic language modeling approaches, the query model is estimated by Maximum Likelihood Estimation (MLE) without any smoothing. In such a setting, the basic retrieval operation is still limited to keyword matching on the few words in the query. To improve retrieval effectiveness, it is important to create a more complete query model that better represents the information need. In particular, all the related and presumed words should be included in the query model. Several methods have been proposed to build a more complete query model using feedback documents [16] [35] or using term relations [1] [10]
[34]. In these cases, we construct two models for the query: the initial query model containing only the original terms, and a new model containing the added terms. They are then combined through interpolation. In this paper, we generalize this approach and integrate more models for the query. Let us use θ_Q^0 to denote the original query model, θ_Q^F the feedback model created from feedback documents, θ_Q^Dom a domain model, and θ_Q^K a knowledge model created by applying term relations. θ_Q^0 can be created by MLE. Given these models, we create the following final query model by interpolation:

P(t|θ_Q) = Σ_{i∈X} α_i P(t|θ_Q^i)   (3)

where X = {0, Dom, K, F} is the set of all component models and α_i (with Σ_{i∈X} α_i = 1) are their mixture weights. The document score in Equation (1) is then extended as follows:

Score(Q, D) = Σ_{i∈X} α_i Σ_{t∈V} P(t|θ_Q^i) log P(t|θ_D)

i.e., the final score is a weighted sum of the scores computed with each component model. Here we can see that our strategy of enhancing the query model with contextual factors is equivalent to document re-ranking, which is used in [5] [15] [30]. The remaining problems are to construct the domain models and the knowledge model, and to combine all the models (parameter setting). We describe these in the following sections.

4. CONSTRUCTING AND USING DOMAIN MODELS

As in previous studies, we exploit a set of documents already classified in each domain. These documents can be identified in two different ways: 1) One can take advantage of an existing domain hierarchy and the documents manually classified under it, such as ODP. In that case, a new query should be classified into the same domains, either manually or automatically. 2) A user can define his own domains. By assigning a domain to his queries, the system can automatically gather a set of answers to the queries, which are then considered to be in-domain documents. The answers could be those that the user has read, browsed through, or judged relevant to an in-domain query, or they can simply be the top-ranked retrieval results. An earlier study [4] compared the above two strategies using TREC queries 51-150, for which a domain has been manually assigned. These domains have been mapped to ODP categories. It was found that both approaches are equally effective and result in comparable performance. Therefore, in this study, we only use the second approach. This choice is also motivated by the possibility of comparing manual and automatic assignment of a domain to a new query. This will be explained in detail in our experiments.

Whatever the strategy, we obtain a set of documents for each domain, from which a language model can be extracted. If maximum likelihood estimation is used directly on these documents, the resulting domain model will contain both domain-specific terms and general terms, and the former do not emerge. Therefore, we employ an EM process to extract the specific part of the domain as follows: we assume that the documents in a domain are generated by a domain-specific model θ_Dom (to be extracted) and a general language model (the collection model θ_C). The likelihood of a document in the domain can then be formulated as follows:

P(D|θ'_Dom) = Π_{t∈D} [(1 − η) P(t|θ_Dom) + η P(t|θ_C)]^{c(t;D)}

where c(t;D) is the count of t in document D and η is a smoothing parameter (which will be fixed at 0.5 as in [35]). The EM algorithm is used to extract the domain model θ_Dom that maximizes the likelihood of the set Dom of documents in the domain, that is:

θ_Dom = arg max_{θ'_Dom} Π_{D∈Dom} P(D|θ'_Dom)

This is the same process as the one used to extract the feedback model in [35]. It is able to extract the most specific words of the domain from the documents while filtering out the common words of the language. This can be observed in Table 1, which shows some words in the domain model of "Environment" before and after EM iterations (50 iterations).

Table 1. Term probabilities before/after EM

Given a set of domain models, the related ones have to be assigned to a new query. This can be done manually by the user or automatically by the system using query classification. We will compare both approaches. Query classification has been investigated in several studies [18] [28]. In this study, we use a simple classification method: the selected domain is the one for which the query's KL-divergence score is the lowest, i.e.:

Dom* = arg min_{Dom} Σ_{t∈V} P(t|θ_Q^0) log [P(t|θ_Q^0) / P(t|θ_Dom)]

This classification method is an extension of Naïve Bayes, as shown in [22]. The score depending on the domain model is then as follows:

Score_Dom(Q, D) = Σ_{t∈V} P(t|θ_Dom) log P(t|θ_D)

Although the above equation ranges over all the terms in the vocabulary, in practice only the strongest terms in the domain model are useful, and the terms with low probabilities are often noise. Therefore, we only retain the top 100 strongest terms. The same strategy is used for the Knowledge model.

Although domain models are more refined than a single user profile, the topics in a single domain can still be very different, making the domain model too large. This is particularly true for large domains such as "Science and technology" defined in the TREC queries. Using such a large domain model as the background can introduce many noise terms. Therefore, we further construct a sub-domain model more closely related to the given query, using a subset of in-domain documents related to the query. These documents are the top-ranked documents retrieved with the original query within the domain. This approach is in fact a combination of the domain and feedback models. In our experiments, we will see that this further specification of a sub-domain is necessary in some cases, but not all, especially when the Feedback model is also used.

5. EXTRACTING CONTEXT-DEPENDENT TERM RELATIONS FROM DOCUMENTS

In this paper, we extract term relations from the document collection automatically. In general, a term relation can be represented as A → B.
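The pair-conditioned relations developed in this section, of the form {tj, tk} → ti with P(ti | tj, tk) estimated from fixed-size co-occurrence windows and condition pairs filtered by pointwise mutual information, can be sketched as follows. This is a minimal illustration on a toy corpus, not the authors' implementation: the function name is hypothetical, the windows are segmented without overlap for simplicity, and the thresholds are scaled down from the paper's settings (window of 10 words, minimum pair count of 10).

```python
from collections import defaultdict
from itertools import combinations
import math

def extract_relations(docs, window=10, min_pair=2):
    """Sketch of context-dependent relation extraction {tj, tk} -> t:
    estimate P(t | tj, tk) from fixed-size co-occurrence windows, keeping
    only condition pairs that are frequent and have positive pointwise
    mutual information."""
    pair_c = defaultdict(int)    # c(tj, tk): windows containing the pair
    triple_c = defaultdict(int)  # c(tj, tk, t): windows with all three terms
    term_c = defaultdict(int)    # windows containing a single term
    n_win = 0
    for doc in docs:
        toks = doc.lower().split()
        # Non-overlapping window segmentation is a simplifying assumption.
        for i in range(0, len(toks), window):
            win = set(toks[i:i + window])
            n_win += 1
            for t in win:
                term_c[t] += 1
            for tj, tk in combinations(sorted(win), 2):
                pair_c[(tj, tk)] += 1
                for t in win - {tj, tk}:
                    triple_c[(tj, tk, t)] += 1
    rels = {}
    for (tj, tk, t), c in triple_c.items():
        cp = pair_c[(tj, tk)]
        if cp < min_pair:
            continue  # condition pair must co-occur a minimum number of times
        # Pointwise mutual information of the condition pair; require MI > 0.
        mi = math.log((cp / n_win) / ((term_c[tj] / n_win) * (term_c[tk] / n_win)))
        if mi > 0:
            rels[(tj, tk, t)] = c / cp  # estimate of P(t | tj, tk)
    return rels
```

On such a toy corpus, the same ambiguous word receives different expansions depending on its pair context, e.g. the pair ("java", "program") suggests "computer" while ("program", "tv") suggests "guide"; a full version would additionally keep only the top-scoring expansion terms, as the paper does with its top-100 cutoff.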
Both A and B have been restricted to single terms in previous studies. A single term in A means that the relation is applicable to every query containing that term. As we explained earlier, this is the source of many wrong applications. The solution we propose is to add more context terms into A, so that the relation is applicable only when all the terms in A appear in a query. For example, instead of creating the context-independent relation "Java → program", we create "{Java, computer} → program", which means that "program" is selected only when both "Java" and "computer" appear in a query. The term added to the condition specifies a stricter context in which to apply the relation. We call this type of relation a context-dependent relation. In principle, the addition is not restricted to one term. However, we make this restriction for the following reasons:
• User queries are usually very short. Adding more terms to the condition would create many rarely applicable relations.
• In most cases, an ambiguous word such as "Java" can be effectively disambiguated by one useful context word such as "computer" or "hotel".
• The addition of more terms would also lead to higher space and time complexity for extracting and storing term relations.

The extraction of relations of the type "{tj, tk} → ti" can be performed using mining algorithms for association rules [13]. Here, we use a simple co-occurrence analysis. Windows of fixed size (10 words in our case) are used to obtain co-occurrence counts of three terms, and the probability P(ti | tj, tk) is determined as follows:

P(ti | tj, tk) = c(ti, tj, tk) / Σ_t c(t, tj, tk)

where c(ti, tj, tk) is the count of co-occurrences. In order to reduce the space requirement, we further apply the following filtering criterion: the two terms in the condition should appear together in the collection a minimum number of times (10 in our case), and they should be related. We use the following pointwise mutual information as a measure of relatedness (MI > 0) [6]:

MI(tj, tk) = log [ P(tj, tk) / (P(tj) P(tk)) ]

The selected relations are applied to all pairs of query terms, where (tj, tk) ∈ Q means any combination of two terms in the query. This is a direct extension of the translation model proposed in [3] to our context-dependent relations. The score according to the Knowledge model is then defined as follows:

Score_K(Q, D) = Σ_{t∈V} Σ_{(tj, tk)∈Q} P(t | tj, tk) log P(t|θ_D)

Again, only the top 100 expansion terms are used.

6. MODEL PARAMETERS

There are several parameters in our model: λ in Equation (2) and α_i (i ∈ {0, Dom, K, F}) in Equation (3). As the parameter λ only affects the document model, we set it to the same value in all our experiments. The value λ = 0.5 is determined so as to maximize the effectiveness of the baseline models (see Section 7.2) on the training data: TREC queries 1-50 and the documents on Disk 2. The mixture weights α_i of the component models are trained on the same training data using the method of line search [11] to maximize Mean Average Precision (MAP): each parameter is considered as a search direction. We start by searching in one direction, testing all the values in that direction while keeping the values in the other directions unchanged. Each direction is searched in turn, until no improvement in MAP is observed. In order to avoid being trapped at a local maximum, we start from 10 random points and select the best setting.

7. EXPERIMENTS

7.1 Setting

The main test data are those from the TREC 1-3 ad hoc and filtering tracks, including queries 1-150 and the documents on Disks 1-3. The choice of this test collection is due to the availability of a manually specified domain for each query. This allows us to compare with an approach using automatic domain identification. Below is an example of a topic:

<num> Number: 103
<dom> Domain: Law and Government
<title> Topic: Welfare Reform

We only use topic titles in all our tests. Queries 1-50 are used for training and 51-150 for testing. 13 domains are defined in these queries, and their
distributions among the two sets of queries are shown in Fig. 1.\nWe can see that the distribution varies strongly between domains and between the two query sets.\nWe have also tested on TREC 7 and 8 data.\nFor this series of tests, each collection is used in turn as training data while the other is used for testing.\nSome statistics of the data are described in Tab.\n2.\nFigure 1.\nDistribution of domains\nTable 2.\nTREC collection statistics\nAll the documents are preprocessed using Porter stemmer in Lemur and the standard stoplist is used.\nSome queries (4, 5 and 3 in the three query sets) only contain one word.\nFor these queries, knowledge model is not applicable.\nOn domain models, we examine several questions:\n\u2022 When query domain is specified manually, is it useful to incorporate the domain model?\n\u2022 If the query domain is not specified, can it be determined automatically?\nHow effective is this method?\n\u2022 We described two ways to gather documents for a domain:\neither using documents judged relevant to queries in the domain or using documents retrieved for these queries.\nHow do they compare?\nOn Knowledge model, in addition to testing its effectiveness, we also want to compare the context-dependent relations with context-independent ones.\nFinally, we will see the impact of each component model when all the factors are combined.\n7.2 Baseline Methods\nTwo baseline models are used: the classical unigram model without any expansion, and the model with Feedback.\nIn all the experiments, document models are created using Jelinek-Mercer smoothing.\nThis choice is made according to the observation in [36] that the method performs very well for long queries.\nIn our case, as queries are expanded, they perform similarly to long queries.\nIn our preliminary tests, we also found this method performed better than the other methods (e.g. 
Dirichlet), especially for the main baseline method with the Feedback model. Table 3 shows the retrieval effectiveness on all the collections.

7.3 Knowledge Models

This model is combined with both baseline models (with or without feedback). We also compare the context-dependent knowledge model with the traditional context-independent term relations (defined between two single terms), which are used to expand queries. The latter selects the expansion terms with the strongest global relation to the query, measured by the sum of their relations to each of the query terms. This method is equivalent to [24], and is also similar to the translation model [3]. We call it the Co-occurrence model in Table 4.

Table 3. Baseline models

Table 4. Knowledge models

A t-test is performed for statistical significance. As we can see, simple co-occurrence relations can produce relatively strong improvements, but context-dependent relations produce much stronger improvements in all cases, especially when feedback is not used. All the improvements over the Co-occurrence model are statistically significant (not shown in the table). The large differences between the two types of relation clearly show that context-dependent relations are more appropriate for query expansion. This confirms our hypothesis that, by incorporating context information into relations, we can better determine the appropriate relations to apply and thus avoid introducing inappropriate expansion terms. The following example further confirms this observation; we show the strongest expansion terms suggested by both types of relation for query #384 "space station moon":

Co-occurrence Relations: year 0.016552 power 0.013226 time 0.010925 1 0.009422 develop 0.008932 offic 0.008485 oper 0.008408 2 0.007875 earth 0.007843 work 0.007801 radio 0.007701 system 0.007627 build 0.007451 000 0.007403 includ 0.007377 state 0.007076 program 0.007062 nation 0.006937 open 0.006889 servic
0.006809 air 0.006734 space 0.006685 nuclear 0.006521 full 0.006425 make 0.006410 compani 0.006262 peopl 0.006244 project 0.006147 unit 0.006114 gener 0.006036 dai 0.006029 Context-Dependent Relations: space 0.053913 mar 0.046589 earth 0.041786 man 0.037770 program 0.033077 project 0.026901 base 0.025213 orbit 0.025190 build 0.025042 mission 0.023974 call 0.022573 explor 0.021601 launch 0.019574 develop 0.019153 shuttl 0.016966 plan 0.016641 flight 0.016169 station 0.016045 intern 0.016002 energi 0.015556 oper 0.014536 power 0.014224 transport 0.012944 construct 0.012160 nasa 0.011985 nation 0.011855 perman 0.011521 japan 0.011433 apollo 0.010997 lunar 0.010898 In comparison with the baseline model with feedback (Tab.\n3), we see that the improvements made by Knowledge model alone are slightly lower.\nHowever, when both models are combined, there are additional improvements over the Feedback model, and these improvements are statistically significant in 2 cases out of 3.\nThis demonstrates that the impacts produced by feedback and term relations are different and complementary.\n7.4 Domain Models\nIn this section, we test several strategies to create and use domain models, by exploiting the domain information of the query set in various ways.\nStrategies for creating domain models:\nC1 - With the relevant documents for the in-domain queries: this strategy simulates the case where we have an existing directory in which documents relevant to the domain are included.\nC2 - With the top-100 documents retrieved with the in-domain queries: this strategy simulates the case where the user specifies a domain for his queries without judging document relevance, and the system gathers related documents from his search history.\nStrategies for using domain models: U1 - The domain model is determined by the user manually.\nU2 - The domain model is determined by the system.\n7.4.1 Creating Domain models\nWe test strategies C1 and C2.\nIn this series of tests, each of the queries 
51-150 is used in turn as the test query, while the other queries and their relevant documents (C1) or top-ranked retrieved documents (C2) are used to create domain models. The same method is used on queries 1-50 to tune the parameters.

Table 6. Domain models with top-100 documents (C2)
(The column WithoutFB is compared to the baseline model without feedback, while WithFB is compared to the baseline with feedback. ++ and + mean significant changes in a t-test with respect to the baseline without feedback, at the levels of p < 0.01 and p < 0.05, respectively. ** and * are similar but compared to the baseline model with feedback.)

Table 5. Domain models with relevant documents (C1)

We also compare the domain models created with all the in-domain documents (Domain) and with only the top-10 documents retrieved from the domain with the query (Sub-Domain). In these tests, we use manual identification of the query domain for Disks 1-3 (U1), but automatic identification for TREC 7 and 8 (U2). First, it is interesting to note that incorporating domain models generally improves retrieval effectiveness in all cases. The improvements on Disks 1-3 and TREC 7 are statistically significant. However, the improvements are smaller in scale than those from the Feedback and Relation models. Looking at the distribution of the domains (Fig. 1), this observation is not surprising: for many domains, we have only a few training queries, and thus few in-domain documents with which to create domain models. In addition, topics in the same domain can vary greatly, in particular in large domains such as "science and technology" and "international politics". Second, we observe that the two methods of creating domain models perform equally well (Tab. 6 vs.
Tab. 5). In other words, providing relevance judgments for queries does not add much advantage for the purpose of creating domain models. This may seem surprising, but an analysis immediately shows the reason: a domain model (as we create it) only captures the term distribution of the domain. The relevant documents for the various in-domain queries vary greatly, so in some large domains the characteristic terms have variable effects on queries. On the other hand, as we only use the term distribution, even if the top documents retrieved for the in-domain queries are irrelevant, they can still contain domain-characteristic terms, similarly to relevant documents. Thus both strategies produce very similar effects. This result opens the door to a simpler method that does not require relevance judgments, for example using search history. Third, without the Feedback model, the sub-domain models constructed with relevant documents perform much better than the whole-domain models (Tab. 5). However, once the Feedback model is used, the advantage disappears. On one hand, this confirms our earlier hypothesis that a domain may be too large to suggest relevant terms for new queries in the domain. It indirectly validates our first hypothesis that a single user model or profile may be too large, so smaller domain models are preferred. On the other hand, sub-domain models capture characteristics similar to the Feedback model, so when the latter is used, sub-domain models become superfluous. However, if the domain models are constructed with top-ranked documents (Tab. 6), sub-domain models make much less difference. This can be explained by the fact that domains constructed with top-ranked documents tend to be more uniform in term distribution than those built from relevant documents, as the top retrieved documents usually have a stronger statistical correspondence with the queries than the relevant documents do.

7.4.2 Determining Query Domain Automatically

It is not realistic to
always ask users to specify a domain for their queries. Here, we examine the possibility of identifying query domains automatically. Table 7 shows the results of this strategy with both strategies for domain model construction. We can observe that the effectiveness is only slightly lower than that produced with manual identification of the query domain (Tab. 5 & 6, Domain models). This shows that automatic domain identification can select a domain model almost as effectively as manual identification, and demonstrates the feasibility of using domain models even when no domain information is provided for a query.

Table 7. Automatic query domain identification (U2)

Table 8. Complete models (C1)

The accuracy of the automatic domain identification, however, is surprisingly low: for queries 51-150, only 38% of the determined domains correspond to the manual identifications. This is much lower than the above-80% rates reported in [18]. A detailed analysis reveals that the main reason is the closeness of several domains in the TREC queries (e.g.
"International relations", "International politics", "Politics"). However, in this situation, the wrong domains assigned to queries are not always irrelevant and useless. For example, even when a query in "International relations" is classified into "International politics", the latter domain can still suggest useful terms for the query. Therefore, the relatively low classification accuracy does not imply low usefulness of the domain models.

7.5 Complete Models

The results with the complete model are shown in Table 8. This model integrates all the components described in this paper: the Original query model, Feedback model, Domain model and Knowledge model. We have tested both strategies for creating domain models, but the differences between them are very small, so we only report the results with the relevant documents. Our first observation is that the complete models produce the best results. All the improvements over the baseline model (with feedback) are statistically significant. This result confirms that the integration of contextual factors is effective. Compared to the other results, we see consistent, although in some cases small, improvements over all the partial models. Looking at the mixture weights, which may reflect the importance of each model, we observed that the best settings on all the collections vary in the following ranges: 0.1 ≤ α0 ≤ 0.2, 0.1 ≤ αDom ≤ 0.2, 0.1 ≤ αK ≤ 0.2 and 0.5 ≤ αF ≤ 0.6. We see that the most important factor is the Feedback model. This is also the single factor that produced the highest improvements over the original query model. This observation seems to indicate that this model has the highest capability to capture the information need behind the query. However, even with lower weights, the other models have strong impacts on the final effectiveness. This demonstrates the benefit of integrating more contextual factors in IR.

8. CONCLUSIONS

Traditional IR approaches usually consider the query as
the only element available for identifying the user information need. Many previous studies have investigated the integration of contextual factors into IR models, typically by incorporating a user profile. In this paper, we argue that a single user profile (or model) can cover too wide a variety of topics, so that new queries can be incorrectly biased. Similarly to some previous studies, we propose to model topic domains instead of the user. Previous investigations of context focused on factors around the query. We showed in this paper that factors within the query are also important: they help select the appropriate term relations to apply in query expansion. We have integrated the above contextual factors, together with the feedback model, into a single language model. Our experimental results strongly confirm the benefit of using contexts in IR. This work also shows that the language modeling framework is appropriate for integrating many contextual factors. This work can be further improved in several respects, including other methods of extracting term relations, integrating more context words into the conditions, and identifying query domains. It would also be interesting to test the method on Web search using user search history. We will investigate these problems in our future research.

Term Feedback for Information Retrieval with Language Models
Bin Tan†, Atulya Velivelli‡, Hui Fang†, ChengXiang Zhai†
Dept. of Computer Science†, Dept. of Electrical and Computer Engineering‡
University of Illinois at Urbana-Champaign
bintan@cs.uiuc.edu, velivell@ifp.uiuc.edu, hfang@cs.uiuc.edu, czhai@cs.uiuc.edu

ABSTRACT
In this paper we study term-based feedback for information retrieval in the language modeling approach. With term feedback a user directly judges the relevance of individual terms without interaction with feedback documents, taking full control of the query expansion process. We propose a cluster-based method for selecting terms to present to the user for judgment, as well as effective algorithms for constructing refined query language models from user term feedback. Our algorithms are shown to bring significant improvement in retrieval accuracy over a non-feedback baseline, and achieve comparable performance to relevance feedback. They are helpful even when there are no relevant documents in the top.

Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models
General Terms Algorithms

1. INTRODUCTION
In the language modeling approach to information retrieval, feedback is often modeled as estimating an improved query model or relevance model based on a set of feedback
documents [25, 13]. This is in line with the traditional way of doing relevance feedback: presenting a user with documents/passages for relevance judgment and then extracting terms from the judged documents or passages to expand the initial query. It is an indirect way of seeking the user's assistance for query model construction, in the sense that the refined query model (based on terms) is learned through feedback documents/passages, which are high-level structures of terms. It has the disadvantage that irrelevant terms, which occur along with relevant ones in the judged content, may be erroneously used for query expansion, causing undesired effects. For example, for the TREC query Hubble telescope achievements, when a relevant document talks more about the telescope's repair than its discoveries, irrelevant terms such as spacewalk can be added into the modified query. We can consider a more direct way to involve a user in query model improvement, without an intermediary step of document feedback that can introduce noise. The idea is to present a (reasonable) number of individual terms to the user and ask him/her to judge the relevance of each term, or to directly specify their probabilities in the query model. This strategy has been discussed in [15], but to our knowledge it has not been seriously studied in the existing language modeling literature. Compared to traditional relevance feedback, this term-based approach to interactive query model refinement has several advantages. First, the user has better control of the final query model through direct manipulation of terms: he/she can dictate which terms are relevant, irrelevant, and possibly to what degree. This avoids the risk of bringing unwanted terms into the query model, although sometimes the user introduces low-quality terms. Second, because a term takes less time to judge than a document's full text or summary, and as few as around 20 presented terms can bring significant improvement in
retrieval performance (as we will show later), term feedback makes it faster to gather user feedback. This is especially helpful for interactive ad hoc search. Third, if the topic is hard, sometimes there are no relevant documents in the top N of the initially retrieved results. This is often true when N is constrained to be small, which arises from the fact that the user is unwilling to judge too many documents. In this case, relevance feedback is useless, as there is no relevant document to leverage, but term feedback is still often helpful, as it allows relevant terms to be picked from irrelevant documents. During our participation in the TREC 2005 HARD Track and continued study afterward, we explored how to exploit term feedback from the user to construct improved query models for information retrieval in the language modeling approach. We identified two key subtasks of term-based feedback, i.e., pre-feedback presentation term selection and post-feedback query model construction, with effective algorithms developed for both. We imposed a secondary cluster structure on terms and found that a cluster view sheds additional insight into the user's information need, and provides a good way of utilizing term feedback. Through experiments we found that term feedback improves significantly over the non-feedback baseline, even though the user often makes mistakes in relevance judgment. Among our algorithms, the one with the best retrieval performance is TCFB, the combination of TFB, the direct term feedback algorithm, and CFB, the cluster-based feedback algorithm. We also varied the number of feedback terms and observed reasonable improvement even at low numbers. Finally, by comparing term feedback with document-level feedback, we found it to be a viable alternative to the latter, with competitive retrieval performance. The rest of the paper is organized as follows. Section 2 discusses some related work. Section 3 outlines our general approach to term feedback. We present our method for presentation term selection in Section 4 and algorithms for query model construction in Section 5. The experiment results are given in Section 6. Section 7 concludes this paper.

2. RELATED WORK
Relevance feedback[17, 19] has long been recognized as an effective method for improving retrieval performance. Normally, the top N documents retrieved using the original query are presented to the user for judgment, after which terms are extracted from the judged relevant documents, weighted by their potential of attracting more relevant documents, and added into the query model. The expanded query usually represents the user's information need better than the original one, which is often just a short keyword query. A second iteration of retrieval using this modified query usually produces a significant increase in retrieval accuracy. In cases where true relevance judgment is unavailable and all top N documents are assumed to be relevant, it is called blind or pseudo feedback[5, 16], and it usually still brings performance improvement. Because a document is a large text unit, when it is used for relevance feedback many irrelevant terms can be introduced into the feedback process. To overcome this, passage feedback has been proposed and shown to improve feedback performance[1, 23]. A more direct solution is to ask the user for relevance judgments of feedback terms. For example, in some relevance feedback systems such as [12], there is an interaction step that allows the user to add or remove expansion terms after they are automatically extracted from relevant documents. This is categorized as interactive query expansion, where the original query is augmented with user-provided terms, which can come from direct user input (free-form text or keywords)[22, 7, 10] or user selection of system-suggested terms (using thesauri[6, 22] or extracted from feedback documents[6, 22, 12, 4, 7]). In many cases term relevance feedback has been found to
effectively improve retrieval performance[6, 22, 12, 4, 10]. For example, the study in [12] shows that the user prefers to have explicit knowledge and direct control of which terms are used for query expansion, and the penetrable interface that provides this freedom is shown to perform better than other interfaces. However, in some other cases there is no significant benefit[3, 14], even if the user likes interacting with expansion terms. In a simulated study carried out in [18], the author compares the retrieval performance of interactive query expansion and automatic query expansion, and suggests that the potential benefits of the former can be hard to achieve: the user is found to be not good at identifying useful terms for query expansion when a simple term presentation interface is unable to provide sufficient semantic context for the feedback terms. Our work differs from the previous ones in two important aspects. First, when we choose terms to present to the user for relevance judgment, we not only consider single-term value (e.g., the relative frequency of a term in the top documents, which can be measured by metrics such as the Robertson Selection Value and the Simplified Kullback-Leibler Distance, as listed in [24]), but also examine the cluster structure of the terms, so as to produce a balanced coverage of the different topic aspects. Second, within the language modeling framework, we allow an elaborate construction of the updated query model, setting different probabilities for different terms based on whether a term is a query term, its significance in the top documents, and its cluster membership. Although techniques for adjusting query term weights exist for vector space models[17] and probabilistic relevance models[9], most of the aforementioned works do not use them, choosing to just append feedback terms to the original query (thus using equal weights for them), which can lead to poorer retrieval performance. The combination
of the two aspects allows our method to perform much better than the baseline. The usual way of presenting feedback terms is just to display them in a list. There have been some works on alternative user interfaces: [8] arranges terms in a hierarchy, and [11] compares three different interfaces (terms + checkboxes, terms + context (sentences) + checkboxes, and sentences + input text box). In both studies, however, there is no significant performance difference. In our work we adopt the simplest approach of terms + checkboxes. We focus on term presentation and query model construction from feedback terms, and believe that using contexts to improve feedback term quality should be orthogonal to our method.

3. GENERAL APPROACH
We follow the language modeling approach, and base our method on the KL-divergence retrieval model proposed in [25]. With this model, the retrieval task involves estimating a query language model θq from a given query, a document language model θd from each document, and calculating their KL-divergence D(θq||θd), which is then used to score the documents. [25] treats relevance feedback as a query model re-estimation problem, i.e., computing an updated query model θq' given the original query text and the extra evidence carried by the judged relevant documents. We adopt this view, and cast our task as updating the query model from user term feedback. There are two key subtasks here: first, how to choose the best terms to present to the user for judgment, in order to gather maximal evidence about the user's information need; second, how to compute an updated query model based on this term feedback evidence, so that it captures the user's information need and translates into good retrieval performance.

4. PRESENTATION TERM SELECTION
Proper selection of the terms to be presented to the user for judgment is crucial to the success of term feedback. If the terms are poorly chosen and there are few relevant
ones, the user will have a hard time looking for useful terms to help clarify his/her information need. If the relevant terms are plentiful but all concentrate on a single aspect of the query topic, then we will only be able to get feedback on that aspect and miss the others, resulting in a loss of breadth in the retrieved results. Therefore, it is important to carefully select the presentation terms to maximize the expected gain from user feedback, i.e., to choose those that can potentially reveal the most evidence of the user's information need. This is similar to active feedback[21], which suggests that a retrieval system should actively probe the user's information need, and that in the case of relevance feedback the feedback documents should be chosen to maximize learning benefits (e.g. diversely, so as to increase coverage). In our approach, the top N documents from an initial retrieval using the original query form the source of feedback terms: all terms that appear in them are considered candidates to present to the user. These documents serve as pseudo-feedback, since they provide a much richer context than the original query (usually very short), while the user is not asked to judge their relevance. For the latter reason, it is possible to make N quite large (e.g., in our experiments we set N = 60) to increase its coverage of the different aspects of the topic. The simplest way of selecting feedback terms is to choose the most frequent M terms from the N documents. This method, however, has two drawbacks. First, many common noisy terms will be selected due to their high frequencies in the document collection, unless a stop-word list is used for filtering. Second, the presentation list will tend to be filled with terms from the major aspects of the topic; those from a minor aspect are likely to be missed due to their relatively low frequencies. We solve these two problems with two corresponding measures. First, we introduce a background model θB that is estimated from
collection statistics and explains the common terms, so that they are much less likely to appear in the presentation list. Second, the terms are selected from multiple clusters in the pseudo-feedback documents, to ensure sufficient representation of the different aspects of the topic. We rely on the mixture multinomial model, which is used for theme discovery in [26]. Specifically, we assume the N documents contain K clusters {Ci | i = 1, 2, · · · , K}, each characterized by a multinomial word distribution (also known as a unigram language model) θi and corresponding to an aspect of the topic. The documents are regarded as sampled from a mixture of K + 1 components, including the K clusters and the background model:

p(w|d) = λB p(w|θB) + (1 − λB) Σ_{i=1}^{K} π_{d,i} p(w|θi)

where w is a word, λB is the mixture weight for the background model θB, and π_{d,i} is the document-specific mixture weight for the i-th cluster model θi. We then estimate the cluster models by maximizing the probability of the pseudo-feedback documents being generated from the multinomial mixture model:

log p(D|Λ) = Σ_{d∈D} Σ_{w∈V} c(w; d) log p(w|d)

where D = {di | i = 1, 2, · · · , N} is the set of the N documents, V is the vocabulary, c(w; d) is w's frequency in d, and Λ = {θi | i = 1, 2, · · · , K} ∪ {π_{di,j} | i = 1, 2, · · · , N, j = 1, 2, · · · , K} is the set of model parameters to estimate. The cluster models can be efficiently estimated using the Expectation-Maximization (EM) algorithm; for its details, we refer the reader to [26]. Table 1 shows the cluster models for the TREC query Transportation tunnel disasters (K = 3). Note that only the middle cluster is relevant.

Table 1: Cluster models for topic 363 Transportation tunnel disasters
Cluster 1           Cluster 2           Cluster 3
tunnel 0.0768       tunnel 0.0935       tunnel 0.0454
transport 0.0364    fire 0.0295         transport 0.0406
traffic 0.0206      truck 0.0236        toll 0.0166
railwai 0.0186      french 0.0220       amtrak 0.0153
harbor 0.0146       smoke 0.0157        train 0.0129
rail 0.0140         car 0.0154          airport 0.0122
bridg 0.0139        italian 0.0152      turnpik 0.0105
kilomet 0.0136      firefight 0.0144    lui 0.0095
truck 0.0133        blaze 0.0127        jersei 0.0093
construct 0.0131    blanc 0.0121        pass 0.0087
· · ·               · · ·               · · ·

From each of the K estimated clusters, we choose the L = M/K terms with the highest probabilities, to form a total of M presentation terms. If a term happens to be in the top L of multiple clusters, we assign it to the cluster where it has the highest probability, and let the other clusters take one more term each as compensation. We also filter out terms that appear in the original query text, because they tend to always be relevant when the query is short. The selected terms are then presented to the user for judgment. A sample (completed) feedback form is shown in Figure 1. In this study we only deal with binary judgment: a presented term is unchecked by default, and a user may check it to indicate relevance. We also do not explicitly exploit negative feedback (i.e., penalizing irrelevant terms), because with binary feedback an unchecked term is not necessarily irrelevant (maybe the user is unsure about its relevance). We could ask the user for finer-grained judgment (e.g., choosing from highly relevant, somewhat relevant, do not know, somewhat irrelevant and highly irrelevant), but binary feedback is more compact, taking less space to display and less user effort to judge.

5. ESTIMATING QUERY MODELS FROM TERM FEEDBACK
In this section, we present several algorithms for exploiting term feedback. The algorithms take as input the original query q, the clusters {θi} generated by the theme discovery algorithm, and the set of feedback terms T with their relevance judgments R, and output an updated query language model θq' that makes the best use of the feedback evidence to capture the user's
information need. First we describe our notation:
• θq: The original query model, derived from the query terms only: p(w|θq) = c(w; q) / |q|, where c(w; q) is the count of w in q, and |q| = Σ_{w∈q} c(w; q) is the query length.
• θq': The updated query model, which we need to estimate from term feedback.
• θi (i = 1, 2, ... K): The unigram language model of cluster Ci, as estimated using the theme discovery algorithm.
• T = {ti,j} (i = 1 ... K, j = 1 ... L): The set of terms presented to the user for judgment; ti,j is the j-th term chosen from cluster Ci.
• R = {δw | w ∈ T}: δw is an indicator variable that is 1 if w is judged relevant and 0 otherwise.

5.1 TFB (Direct Term Feedback)
This is a straightforward form of term feedback that does not involve any secondary structure. We give a weight of 1 to terms judged relevant by the user, a weight of μ to query terms, and zero weight to other terms, and then normalize:

p(w|θq') = (δw + μ c(w; q)) / (Σ_{w'∈T} δw' + μ|q|)

where Σ_{w'∈T} δw' is the total number of terms that are judged relevant. We call this method TFB (direct Term FeedBack). If we let μ = 1, this approach is equivalent to appending the relevant terms to the original query, which is what standard query expansion (without term reweighting) does. If we set μ > 1, we put more emphasis on the query terms than on the checked ones. Note that the resulting model will be biased more toward θq if the original query is long or the user feedback is weak, which makes sense, as we can place more trust in the original query in either case.

Figure 1: Filled clarification form for Topic 363, "transportation tunnel disasters" (the presented terms, each with a checkbox, and a submit button)

5.2 CFB (Cluster Feedback)
Here we exploit the cluster structure that played an important role when we selected the presentation terms. The clusters represent different aspects of the query topic, each of which may or may not be relevant. If we are able to identify the relevant clusters, we can combine them to generate a query model that is good at discovering documents belonging to these clusters (instead of the irrelevant ones). We could ask the user to directly judge the relevance of a cluster after viewing its representative terms, but this would sometimes be a difficult task: the user has to guess the semantics of a cluster from its set of terms, which may not be well connected to one another due to a lack of context. Therefore, we propose to learn cluster feedback indirectly, inferring the relevance of a cluster through the relevance of its feedback terms. Because each cluster has an equal number of terms presented to the user, the simplest measure of a cluster's relevance is the number of its terms that are judged relevant. Intuitively, the more terms are marked relevant in a cluster, the closer the cluster is to the query topic, and the more the cluster should participate in query modification. If we combine the cluster models using weights determined this way and then interpolate with the original query model, we get the following formula for query updating, which we call CFB (Cluster FeedBack):

p(w|θq') = λ p(w|θq) + (1 − λ) Σ_{i=1}^{K} [ (Σ_{j=1}^{L} δ_{ti,j}) / (Σ_{k=1}^{K} Σ_{j=1}^{L} δ_{tk,j}) ] p(w|θi)

where Σ_{j=1}^{L} δ_{ti,j} is the number of relevant terms in cluster Ci, and Σ_{k=1}^{K} Σ_{j=1}^{L} δ_{tk,j} is the total number of relevant terms. We note that when there is only one cluster (K = 1), the above formula degenerates to p(w|θq') = λ p(w|θq) + (1 − λ) p(w|θ1)
which is merely pseudo-feedback of the form proposed in [25].

5.3 TCFB (Term-Cluster Feedback)
TFB and CFB both have drawbacks. TFB assigns non-zero probabilities to the presented terms that are marked relevant, but completely ignores the (far more numerous) other terms, which may be left unchecked due to the user's ignorance, or simply not included in the presentation list; yet we should be able to infer their relevance from the checked ones. For example, in Figure 1, since as many as 5 terms in the middle cluster (the third and fourth columns) are checked, we should have high confidence in the relevance of the other terms in that cluster. CFB remedies TFB's problem by treating the terms in a cluster collectively, so that unchecked or unpresented terms receive weight when presented terms in their clusters are judged relevant, but it does not distinguish which terms in a cluster were presented or judged. Intuitively, the judged relevant terms should receive larger weights because they are explicitly indicated as relevant by the user. Therefore, we combine the two methods, hoping to get the best of both, by interpolating the TFB model with the CFB model. We call the result TCFB:

p(w|θq′) = α·p(w|θq′,TFB) + (1 − α)·p(w|θq′,CFB)

6. EXPERIMENTS
In this section we describe our experimental results. We first describe the experiment setup and present an overview of the performance of the various methods. Then we discuss the effects of varying the parameter settings in the algorithms, as well as the number of presentation terms. Next we analyze user term feedback behavior and its relation to retrieval performance. Finally, we compare term feedback to relevance feedback and show that it has a particular advantage.

6.1 Experiment Setup and Basic Results
We evaluated our algorithms in the TREC 2005 HARD Track [2]. The track used the AQUAINT collection, a 3GB corpus of English newswire text.
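The three query-updating rules above (TFB, CFB, TCFB) can be sketched as follows. This is a minimal illustrative sketch with hypothetical function names, using the near-optimal parameter settings reported later (μ = 4, λ = 0.1, α = 0.3) as defaults; it is not the implementation actually used in the experiments.

```python
from collections import Counter

def tfb(query_terms, checked, mu=4.0):
    """Direct term feedback: weight 1 for each checked term, mu for each
    query term occurrence, zero otherwise, then normalize."""
    weights = Counter()
    for w in checked:           # terms judged relevant by the user
        weights[w] += 1.0
    for w in query_terms:       # original query terms, emphasized by mu
        weights[w] += mu
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

def cfb(query_model, cluster_models, checked_per_cluster, lam=0.1):
    """Cluster feedback: interpolate the original query model with the
    cluster models, each weighted by its share of checked terms."""
    total_checked = sum(checked_per_cluster)
    if total_checked == 0:      # no feedback: fall back to the original model
        return dict(query_model)
    model = Counter({w: lam * p for w, p in query_model.items()})
    for theta_i, n_i in zip(cluster_models, checked_per_cluster):
        weight = (1 - lam) * n_i / total_checked
        for w, p in theta_i.items():
            model[w] += weight * p
    return dict(model)

def tcfb(tfb_model, cfb_model, alpha=0.3):
    """TCFB: interpolation of the TFB and CFB models."""
    terms = set(tfb_model) | set(cfb_model)
    return {w: alpha * tfb_model.get(w, 0.0) + (1 - alpha) * cfb_model.get(w, 0.0)
            for w in terms}
```

Because each input is a probability distribution, all three outputs again sum to one, so they can be plugged directly into a KL-divergence ranking function as updated query models.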
The topic set consisted of 50 topics previously known to be hard, i.e., to have low retrieval performance. It is for such hard topics that user feedback is most helpful, as it can provide information to disambiguate the queries; with easy topics the user may be unwilling to spend effort on feedback if the automatic retrieval results are already good enough. Participants in the track could submit custom-designed clarification forms (CF) to solicit feedback from human assessors provided by NIST.

We designed three types of clarification forms for term feedback, differing in the choice of K, the number of clusters, and L, the number of terms presented from each cluster: 1 × 48, a single big cluster with 48 terms; 3 × 16, 3 clusters with 16 terms each; and 6 × 8, 6 clusters with 8 terms each. The total number of presented terms (M) is fixed at 48, so by comparing the performance of the different clarification form types we can assess the effect of the degree of clustering. For each topic, an assessor completed the forms in the order 6 × 8, 1 × 48, 3 × 16, spending up to three minutes on each form. The sample clarification form shown in Figure 1 is of type 3 × 16. It is a simple and compact interface in which the user can check relevant terms. The form is self-explanatory; no extra user training on how to use it is needed.

Our initial queries are constructed from the topic title descriptions only, which are 2.7 words long on average. As our baseline we use the KL-divergence retrieval method implemented in the Lemur Toolkit (http://www.lemurproject.com) with 5 pseudo-feedback documents. We stem the terms, choose Dirichlet smoothing with a prior of 2000, and truncate query language models to 50 terms (these settings are used throughout the experiments). For all other parameters we use Lemur's default settings. The baseline turns out to perform above average among the track participants. After an initial run using this baseline retrieval method, we take the top 60 documents for each topic and apply the theme discovery algorithm to output the clusters (1, 3, or 6 of them), from which we generate the clarification forms. After user feedback is received, we run the term feedback algorithms (TFB, CFB, or TCFB) to estimate updated query models, which are then used for a second iteration of retrieval. We evaluate each retrieval method on its ranking of the top 1000 documents. The evaluation metrics we adopt are mean average (non-interpolated) precision (MAP), precision at top 30 (Pr@30), and total relevant retrieved (RR).

Table 2: Retrieval performance for different methods and CF types. The last row is the percentage of MAP improvement over the baseline. The parameter settings μ = 4, λ = 0.1, α = 0.3 are near optimal.

        Baseline  TFB1C  TFB3C  TFB6C  CFB1C  CFB3C  CFB6C  TCFB1C  TCFB3C  TCFB6C
MAP     0.219     0.288  0.288  0.278  0.254  0.305  0.301  0.274   0.309   0.304
Pr@30   0.393     0.467  0.475  0.457  0.399  0.480  0.473  0.431   0.491   0.473
RR      4339      4753   4762   4740   4600   4907   4872   4767    4947    4906
%       0%        31.5%  31.5%  26.9%  16.0%  39.3%  37.4%  25.1%   41.1%   38.8%

Table 3: MAP variation with the number of presented terms.

# terms  TFB1C  TFB3C  TFB6C  CFB3C  CFB6C  TCFB3C  TCFB6C
6        0.245  0.240  0.227  0.279  0.279  0.281   0.274
12       0.261  0.261  0.242  0.299  0.286  0.297   0.281
18       0.275  0.274  0.256  0.301  0.282  0.300   0.286
24       0.276  0.281  0.265  0.303  0.292  0.305   0.292
30       0.280  0.285  0.270  0.304  0.296  0.307   0.296
36       0.282  0.288  0.272  0.307  0.297  0.309   0.297
42       0.283  0.288  0.275  0.306  0.298  0.309   0.300
48       0.288  0.288  0.278  0.305  0.301  0.309   0.303

Table 2 shows the performance of the various methods and configurations of K × L. The suffixes (1C, 3C, 6C) after TFB, CFB, and TCFB stand for the number of clusters (K); for example, TCFB3C means the TCFB method on the 3 × 16 clarification forms. From Table 2 we can make the following observations:
1. All methods perform considerably better than the pseudo-feedback baseline, with TCFB3C achieving the highest MAP improvement of 41.1%, indicating a significant contribution of term feedback to the clarification of the user's information need. In other words, term feedback genuinely helps retrieval accuracy.
2. For TFB, performance is almost equal on the 1 × 48 and 3 × 16 clarification forms in terms of MAP (although the latter is slightly better in Pr@30 and RR), and a little worse on the 6 × 8 forms.
3. Both CFB3C and CFB6C perform better than their TFB counterparts on all three metrics, suggesting that feedback on a secondary cluster structure is indeed beneficial. CFB1C is actually worse, because it cannot adjust the weight of its (single) cluster from term feedback and is merely pseudo-feedback.
4. Although TCFB is just a simple interpolation of TFB and CFB, it outperforms both. This supports our speculation that TCFB overcomes the drawbacks of TFB (attending only to checked terms) and CFB (not distinguishing checked from unchecked terms within a cluster). Except for TCFB6C vs. CFB6C, the performance advantage of TCFB over TFB and CFB is significant at p < 0.05 under the Wilcoxon signed rank test. This is not the case for TFB vs. CFB, each of which is better than the other on nearly half of the topics.

6.2 Reduction of Presentation Terms
In some situations we may have to reduce the number of presentation terms due to limits on display space or user feedback effort. It is interesting to know whether our algorithms' performance deteriorates when the user is presented with fewer terms. Because the presentation terms within each cluster are generated in decreasing order of frequency, a reduced presentation list forms a subset of the original one (there are complexities arising from terms appearing in the top L of multiple clusters, but these are exceptions). We can therefore easily simulate what happens when the number of presentation terms decreases from M to M′: we keep all judgments of the top L′ = M′/K terms in each cluster and discard the rest. Table 3 shows the performance of the algorithms as the number of presentation terms ranges from 6 to 48. We find that the performance of TFB is more susceptible to presentation term reduction than that of CFB or TCFB. For example, at 12 terms the MAP of TFB3C is 90.6% of that at 48 terms, while the corresponding numbers for CFB3C and TCFB3C are 98.0% and 96.1%. We conjecture that this is because TFB's performance depends heavily on how many good terms are chosen for query expansion, whereas CFB only needs a rough estimate of cluster weights to work. Also, the 3 × 16 clarification forms appear more robust than the 6 × 8 ones: at 12 terms the MAP of TFB6C is 87.1% of that at 48 terms, lower than the 90.6% for TFB3C; similarly, for CFB it is 95.0% against 98.0%. This is natural: with as many as 6 clusters, it is easier to reach a situation where each cluster gets too few presentation terms for topic diversification to be useful. Overall, we are surprised to see that the algorithms still perform reasonably well when the number of presentation terms is small.
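The reduction procedure described above can be sketched as follows; the helper name and data layout are hypothetical, not the authors' code.

```python
def reduce_judgments(presented, checked, k_clusters, m_reduced):
    """Simulate a smaller clarification form. Because terms within each
    cluster are listed in decreasing frequency order, the reduced list is a
    subset of the original, so we keep judgments only for the top
    L' = M'/K terms of each cluster and discard the rest."""
    l_reduced = m_reduced // k_clusters
    kept = []
    for cluster_terms in presented:  # one frequency-ordered term list per cluster
        kept.extend(t for t in cluster_terms[:l_reduced] if t in checked)
    return kept
```

For instance, shrinking a 2-cluster, 8-term form to 4 terms keeps only the judgments on the top 2 terms of each cluster.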
For example, at only 12 terms CFB3C (a clarification form of size 3 × 4) still improves 36.5% over the baseline, dropping only slightly from 39.3% at 48 terms.

6.3 User Feedback Analysis
In this part we study several aspects of the user's term feedback behavior, and whether they are connected to retrieval performance.

Figure 2: Clarification form completion time distributions. [Histogram of the number of topics per 30-second completion-time bin, from 0 to 180 seconds, for the 1 × 48, 3 × 16, and 6 × 8 forms.]

Figure 2 shows the distribution of time needed to complete a clarification form (the maximum is 180 seconds, at which point the NIST assessor was forced to submit the form). We see that the user is usually able to finish term feedback within a reasonably short time: for more than half of the topics the clarification form is completed in just 1 minute, and only a small fraction of topics (less than 10% for 1 × 48 and 3 × 16) take more than 2 minutes. This suggests that term feedback is suitable for interactive ad-hoc retrieval, where a user usually does not want to spend much time providing feedback.

We find that a user often makes mistakes when judging term relevance. Sometimes a relevant term is left out because its connection to the query topic is not obvious to the user; other times a dubious term is included but turns out to be irrelevant. Take the topic in Figure 1 for example. There was a fire disaster in the Mont Blanc Tunnel between France and Italy in 1999, but the user failed to select keywords such as mont, blanc, french and italian due to his or her ignorance of the event. Indeed, without proper context it would be hard to make perfect judgments. What, then, is the extent to which the user is good at term feedback, and does it seriously impact retrieval performance? To answer these questions, we need a measure of the true relevance of individual terms. We adopt the Simplified KL Divergence metric used in [24] to decide query expansion terms as our term relevance measure:

σKLD(w) = p(w|R) · log [ p(w|R) / p(w|¬R) ]

where p(w|R) is the probability that a relevant document contains term w, and p(w|¬R) is the probability that an irrelevant document contains w, both of which are easily computed via maximum likelihood estimation given document-level relevance judgments. If σKLD(w) > 0, w is more likely to appear in relevant documents than in irrelevant ones. We consider a term relevant if its Simplified KL Divergence value is greater than a threshold σ0. We can then define the precision and recall of user term judgment accordingly: precision is the fraction of terms checked by the user that are relevant; recall is the fraction of presented relevant terms that are checked by the user.

Table 4: Term selection statistics (topic average).

CF Type               1 × 48  3 × 16  6 × 8
# checked terms       14.8    13.3    11.2
# rel. terms          15.0    12.6    11.2
# rel. checked terms  7.9     6.9     5.9
precision             0.534   0.519   0.527
recall                0.526   0.548   0.527

Table 4 shows the number of checked terms, relevant terms, and relevant checked terms when σ0 is set to 1.0, as well as the precision and recall of user term judgment. Note that when the clarification forms contain more clusters, fewer terms are checked: 14.8 for 1 × 48, 13.3 for 3 × 16, and 11.2 for 6 × 8. A similar pattern holds for relevant terms and relevant checked terms. There seems to be a trade-off between increasing topic diversity through clustering and losing relevant terms: when there are more clusters, each gets fewer terms to present, which can hurt a major relevant cluster containing many relevant terms. It is therefore not always helpful to have more clusters; e.g., TFB6C is actually worse than TFB1C. The major finding from Table 4 is that the user is not particularly good at identifying relevant terms, which echoes the finding in [18].
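The σKLD oracle and the resulting precision/recall of user judgments can be computed as below; a minimal sketch with hypothetical names, where a small epsilon guards the maximum likelihood estimates against zero probabilities.

```python
import math

def sigma_kld(p_rel, p_nonrel, eps=1e-9):
    """Simplified KL divergence of a term:
    sigma(w) = p(w|R) * log(p(w|R) / p(w|notR))."""
    return p_rel * math.log((p_rel + eps) / (p_nonrel + eps))

def judgment_quality(presented, checked, p_rel, p_nonrel, sigma0=1.0):
    """Precision and recall of the user's checked terms against the
    sigma_KLD oracle: a presented term counts as truly relevant iff its
    score exceeds sigma0."""
    truly_relevant = {t for t in presented
                      if sigma_kld(p_rel.get(t, 0.0), p_nonrel.get(t, 0.0)) > sigma0}
    checked_set = set(checked) & set(presented)
    hits = len(checked_set & truly_relevant)
    precision = hits / len(checked_set) if checked_set else 0.0
    recall = hits / len(truly_relevant) if truly_relevant else 0.0
    return precision, recall
```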
In the case of the 3 × 16 clarification forms, the average number of terms checked as relevant by the user is 13.3 per topic, while the average number of relevant terms whose σKLD value exceeds 1.0 is 12.6; yet the user recognizes only 6.9 of these terms on average. Indeed, the precision and recall of user feedback terms (as defined above) are far from perfect. On the other hand, if the user had correctly checked all such relevant terms, the performance of our algorithms would have increased considerably, as shown in Table 5. TFB improves greatly when an oracle checks all relevant terms, while CFB hits a bottleneck around a MAP of 0.325: all it does is adjust cluster weights, and once the learned weights are close to accurate it cannot benefit further from term feedback. Also note that TCFB fails to outperform TFB here, probably because TFB is already sufficiently accurate.

Table 5: Change of MAP when using all (and only) relevant terms (σKLD > 1.0) for feedback.

       original term feedback  relevant term feedback
TF1    0.288                   0.354
TF3    0.288                   0.354
TF6    0.278                   0.346
CF3    0.305                   0.325
CF6    0.301                   0.326
TCF3   0.309                   0.345
TCF6   0.304                   0.341

6.4 Comparison with Relevance Feedback
We now compare term feedback with document-level relevance feedback, in which the user is presented with the top N documents from an initial retrieval and asked to judge their relevance. The feedback process is simulated using document relevance judgments from NIST. We use the mixture-model feedback method proposed in [25], with mixture noise set to 0.95 and the feedback coefficient set to 0.9. Comparative evaluation of relevance feedback against other methods is complicated by the fact that some documents have already been viewed during feedback, so it makes no sense to include them in the retrieval results of the second run (this issue does not arise for term feedback). Thus, to make the comparison fair with respect to the user's information gain, relevant feedback documents should be kept at the top of the ranking, and irrelevant ones should be left out. Therefore, we use relevance feedback to produce a ranking of the top 1000 retrieved documents with every feedback document excluded, and then prepend the relevant feedback documents at the front. Table 6 shows the performance of relevance feedback for different values of N and compares it with TCFB3C.

Table 6: Performance of relevance feedback for different numbers of feedback documents (N).

N       MAP    Pr@30  RR
5       0.302  0.586  4779
10      0.345  0.670  4916
20      0.389  0.772  5004
TCFB3C  0.309  0.491  4947

We see that the performance of TCFB3C is comparable to that of relevance feedback with 5 documents. Although it is poorer than relevance feedback with 10 documents in terms of MAP and Pr@30, it retrieves more documents (4947) over the full ranked list. We also compare the quality of the automatically selected terms in relevance feedback with that of the manually selected terms in term feedback. This is done by truncating the query model modified by relevance feedback to a size equal to the number of checked terms for the same topic; we can then compare the terms in the truncated model with the checked terms. Figure 3 shows the distribution of the terms' σKLD scores. We find that term feedback tends to produce expansion terms of higher quality (those with σKLD > 1) than relevance feedback (with 10 feedback documents). This does not contradict the fact that the latter yields higher retrieval performance: when we use the truncated query model instead of the full one refined by relevance feedback, the MAP is only 0.304.

Figure 3: Comparison of expansion term quality between relevance feedback (with 10 feedback documents) and term feedback (with 3 × 16 CFs). [Histogram of the number of terms per σKLD range, from −1–0 up to 5–6, for the two methods.]
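The fair evaluation protocol for relevance feedback described above (exclude judged documents from the second run, then prepend the judged-relevant ones) can be sketched as a small helper; the function and variable names are hypothetical.

```python
def fair_rf_ranking(second_run, judged, relevant_judged, depth=1000):
    """Build the ranking evaluated for relevance feedback: drop every judged
    feedback document from the second-run ranking, then prepend the judged
    relevant ones, so viewed documents are credited exactly once."""
    rest = [d for d in second_run if d not in judged]
    return (list(relevant_judged) + rest)[:depth]
```

With this construction, a relevant judged document always ranks above any unseen document, while irrelevant judged documents never reappear in the evaluated list.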
The truth is that although there are many unwanted terms in the query model expanded from feedback documents, there are also more relevant terms than the user can possibly select from the presentation list generated from pseudo-feedback documents, and the positive effects often outweigh the negative ones.

We are interested in the circumstances under which term feedback has an advantage over relevance feedback. One such situation is when none of the top N feedback documents is relevant, rendering relevance feedback useless. This is more frequent than one might think: out of the 50 topics, there are 13 such cases for N = 5, 10 for N = 10, and still 3 for N = 20. When this happens, one can only back off to the original retrieval method; the power of relevance feedback is lost. Surprisingly, in 11 of the 13 cases where relevance feedback is thus impossible, the user is able to check at least 2 relevant terms on the 3 × 16 clarification form (we consider a term t relevant if σKLD(t) > 1.0). Furthermore, in 10 of these cases TCFB3C outperforms the pseudo-feedback baseline, increasing MAP from 0.076 to 0.146 on average (these are particularly hard topics). We see two possible explanations for term feedback remaining effective when relevance feedback does not work. First, even if none of the top N documents (for small N) is relevant, we may still find relevant documents in the top 60 (more inclusive, but usually unreachable when people do relevance feedback in interactive ad-hoc search), from which we can draw feedback terms. This is true for topic 367, "piracy", where the top 10 feedback documents are all about software piracy, yet there are documents ranked between 10 and 60 that are about piracy on the seas (the real information need), contributing terms such as pirate and ship for selection in the clarification form. Second, for some topics, a document needs to meet some special condition to be relevant. The top N documents may be related to the topic but nonetheless irrelevant. In this case, we may still extract useful terms from these documents, even though they do not qualify as relevant. For example, in topic 639, "consumer online shopping", a document needs to mention what contributes to shopping growth to really match the specified information need, so none of the top 10 feedback documents is regarded as relevant. Nevertheless, feedback terms such as retail and commerce are good for query expansion.

7. CONCLUSIONS
In this paper we studied the use of term feedback for interactive information retrieval in the language modeling approach. We proposed a cluster-based method for selecting presentation terms, as well as algorithms for estimating refined query models from user term feedback. We saw significant improvement in retrieval accuracy brought by term feedback, despite the fact that users often make relevance judgment mistakes that hurt performance. We found the best-performing algorithm to be TCFB, which combines the directly observed term evidence of TFB with the indirectly learned cluster relevance of CFB. When we reduced the number of presentation terms, term feedback retained much of its performance gain over the baseline. Finally, we compared term feedback to document-level relevance feedback and found TCFB3C's performance to be on a par with the latter using 5 feedback documents. We regard term feedback as a viable alternative to traditional relevance feedback, especially when there are no relevant documents in the top results.

We propose to extend our work in several ways. First, we want to study whether the use of various contexts can help the user better identify term relevance without sacrificing the simplicity and compactness of term feedback. Second, currently all terms are presented to the user in a
single batch. We could instead consider iterative term feedback: presenting a small number of terms first, then showing more terms after receiving user feedback, or stopping once the refined query is good enough. The presented terms should be selected dynamically to maximize the learning benefit at any moment. Third, we plan to incorporate term feedback into our UCAIR toolbar [20], an Internet Explorer plugin, to make it work for Web search. We are also interested in studying how to combine term feedback with relevance feedback or implicit feedback. We could, for example, allow the user to dynamically modify terms in a language model learned from feedback documents.

8. ACKNOWLEDGMENT
This work is supported in part by the National Science Foundation grants IIS-0347933 and IIS-0428472.

9. REFERENCES
[1] J. Allan. Relevance feedback with too much data. In Proceedings of the 18th annual international ACM SIGIR conference on research and development in information retrieval, pages 337-343, 1995.
[2] J. Allan. HARD track overview in TREC 2005 - High Accuracy Retrieval from Documents. In The Fourteenth Text REtrieval Conference, 2005.
[3] P. Anick. Using terminological feedback for web search refinement: a log-based study. In Proceedings of the 26th annual international ACM SIGIR conference on research and development in information retrieval, pages 88-95, 2003.
[4] P. G. Anick and S. Tipirneni. The paraphrase search assistant: terminological feedback for iterative information seeking. In Proceedings of the 22nd annual international ACM SIGIR conference on research and development in information retrieval, pages 153-159, 1999.
[5] C. Buckley, G. Salton, J. Allan, and A. Singhal. Automatic query expansion using SMART. In Proceedings of the Third Text REtrieval Conference, 1994.
[6] D.
Harman.\nTowards interactive query expansion.\nIn Proceedings of the 11th annual international ACM SIGIR conference on research and development in information retrieval, pages 321-331, 1988.\n[7] N. A. Jaleel, A. Corrada-Emmanuel, Q. Li, X. Liu, C. Wade, and J. Allan.\nUMass at TREC 2003: HARD and QA.\nIn TREC, pages 715-725, 2003.\n[8] H. Joho, C. Coverson, M. Sanderson, and M. Beaulieu.\nHierarchical presentation of expansion terms.\nIn Proceedings of the 2002 ACM symposium on applied computing, pages 645-649, 2002.\n[9] K. S. Jones, S. Walker, and S. E. Robertson.\nA probabilistic model of information retrieval: development and status.\nTechnical Report 446, Computer Laboratory, University of Cambridge, 1998.\n[10] D. Kelly, V. D. Dollu, and X. Fu.\nThe loquacious user: a document-independent source of terms for query expansion.\nIn Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval, pages 457-464, 2005.\n[11] D. Kelly and X. Fu.\nElicitation of term relevance feedback: an investigation of term source and context.\nIn Proceedings of the 29th annual international ACM SIGIR conference on research and development in information retrieval, 2006.\n[12] J. Koenemann and N. Belkin.\nA case for interaction: A study of interactive information retrieval behavior and effectiveness.\nIn Proceedings of the SIGCHI conference on human factors in computing systems, pages 205-212, 1996.\n[13] V. Lavrenko and W. B. Croft.\nRelevance-based language models.\nIn Research and Development in Information Retrieval, pages 120-127, 2001.\n[14] Y. Nemeth, B. Shapira, and M. Taeib-Maimon.\nEvaluation of the real and perceived value of automatic and interactive query expansion.\nIn Proceedings of the 27th annual international ACM SIGIR conference on research and development in information retrieval, pages 526-527, 2004.\n[15] J. 
Ponte. A Language Modeling Approach to Information Retrieval. PhD thesis, University of Massachusetts at Amherst, 1998.
[16] S. E. Robertson, S. Walker, S. Jones, M. Beaulieu, and M. Gatford. Okapi at TREC-3. In Proceedings of the Third Text REtrieval Conference, 1994.
[17] J. Rocchio. Relevance feedback in information retrieval. In The SMART retrieval system, pages 313-323, 1971.
[18] I. Ruthven. Re-examining the potential effectiveness of interactive query expansion. In Proceedings of the 26th annual international ACM SIGIR conference on research and development in information retrieval, pages 213-220, 2003.
[19] G. Salton and C. Buckley. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41:288-297, 1990.
[20] X. Shen, B. Tan, and C. Zhai. Implicit user modeling for personalized search. In Proceedings of the 14th ACM international conference on information and knowledge management, pages 824-831, 2005.
[21] X. Shen and C. Zhai. Active feedback in ad-hoc information retrieval. In Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval, pages 59-66, 2005.
[22] A. Spink. Term relevance feedback and query expansion: relation to design. In Proceedings of the 17th annual international ACM SIGIR conference on research and development in information retrieval, pages 81-90, 1994.
[23] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proceedings of the 19th annual international ACM SIGIR conference on research and development in information retrieval, pages 4-11, 1996.
[24] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S. Robertson. Microsoft Cambridge at TREC-13: Web and HARD tracks. In Proceedings of the 13th Text REtrieval Conference, 2004.
[25] C. Zhai and J.
Lafferty.\nModel-based feedback in the language modeling approach to information retrieval.\nIn Proceedings of the tenth international conference on information and knowledge management, pages 403-410, 2001.\n[26] C. Zhai, A. Velivelli, and B. Yu.\nA cross-collection mixture model for comparative text mining.\nIn Proceedings of the tenth ACM SIGKDD international conference on knowledge discovery and data mining, pages 743-748, 2004.","lvl-3":"Term Feedback for Information Retrieval with Language Models\nABSTRACT\nI n t hi s paper w e s t udy t er m - based f eedback f or i nf or mat i on r etrieval in the language modeling approach.\nWith term feedback a user directly judges the relevance of individual terms without interaction with feedback documents, taking full control of the query expansion process.\nWe propose a cluster-based method for selecting terms to present to the user for judgment, as well as effective algorithms for constructing refined query language models from user term feedback.\nOur algorithms are shown to bring significant improvement in retrieval accuracy over a non-feedback baseline, and achieve comparable performance to relevance feedback.\nThey are helpful even when there are no relevant documents in the top.\n1.\nINTRODUCTION\nIn the language modeling approach to information retrieval, feedback is often modeled as estimating an improved query model or relevance model based on a set of feedback documents [25, 13].\nThis is in line with the traditional way of doing relevance feedback - presenting a user with documents\/passages for relevance judgment and then extracting terms from the judged documents or passages to expand the initial query.\nIt is an indirect way of seeking user's assistance for query model construction, in the sense that the refined query model (based on terms) is learned through feedback documents\/passages, which are high-level structures of terms.\nIt has the disadvantage that irrelevant terms, which occur along with 
relevant ones in the judged content, may be erroneously used for query expansion, causing undesired effects.\nFor example, for the\nTREC query \"Hubble telescope achievements\", when a relevant document talks more about the telescope's repair than its discoveries, irrelevant terms such as \"spacewalk\" can be added into the modified query.\nWe can consider a more direct way to involve a user in query model improvement, without an intermediary step of document feedback that can introduce noise.\nThe idea is to present a (reasonable) number of individual terms to the user and ask him\/her to judge the relevance of each term or directly specify their probabilities in the query model.\nThis strategy has been discussed in [15], but to our knowledge, it has not been seriously studied in existing language modeling literature.\nCompared to traditional relevance feedback, this term-based approach to interactive query model refinement has several advantages.\nFirst, the user has better control of the final query model through direct manipulation of terms: he\/she can dictate which terms are relevant, irrelevant, and possibly, to what degree.\nThis avoids the risk of bringing unwanted terms into the query model, although sometimes the user introduces low-quality terms.\nSecond, because a term takes less time to judge than a document's full text or summary, and as few as around 20 presented terms can bring significant improvement in retrieval performance (as we will show later), term feedback makes it faster to gather user feedback.\nThis is especially helpful for interactive adhoc search.\nThird, sometimes there are no relevant documents in the top N of the initially retrieved results if the topic is hard.\nThis is often true when N is constrained to be small, which arises from the fact that the user is unwilling to judge too many documents.\nIn this case, relevance feedback is useless, as no relevant document can be leveraged on, but term feedback is still often helpful, by 
allowing relevant terms to be picked from irrelevant documents.\nDuring our participation in the TREC 2005 HARD Track and continued study afterward, we explored how to exploit term feedback from the user to construct improved query models for information retrieval in the language modeling approach.\nWe identified two key subtasks of term-based feedback, i.e., pre-feedback presentation term selection and post-feedback query model construction, with effective algorithms developed for both.\nWe imposed a secondary cluster structure on terms and found that a cluster view sheds additional insight into the user's information need, and provides a good way of utilizing term feedback.\nThrough experiments we found that term feedback improves significantly over the non-feedback baseline, even though the user often makes mistakes in relevance judgment.\nAmong our algorithms, the one with the best retrieval performance is TCFB, the combination of TFB, the direct term feedback algorithm, and CFB, the cluster-based feedback algorithm.\nWe also varied the number of feedback terms and observed reasonable improvement even at low numbers.\nFinally, by comparing term feedback with document-level feedback, we found it to be a viable alternative to the latter, with competitive retrieval performance.\nThe rest of the paper is organized as follows.\nSection 2 discusses related work.\nSection 3 outlines our general approach to term feedback.\nWe present our method for presentation term selection in Section 4 and algorithms for query model construction in Section 5.\nThe experimental results are given in Section 6.\nSection 7 concludes this paper.\n2.\nRELATED WORK\nRelevance feedback [17, 19] has long been recognized as an effective method for improving retrieval performance.\nNormally, the top N documents retrieved using the original query are presented to the user for judgment, after which terms are extracted from the judged relevant documents, weighted by their potential of attracting
more relevant documents, and added into the query model.\nThe expanded query usually represents the user's information need better than the original one, which is often just a short keyword query.\nA second iteration of retrieval using this modified query usually produces a significant increase in retrieval accuracy.\nIn cases where true relevance judgment is unavailable and all top N documents are assumed to be relevant, this is called blind or pseudo feedback [5, 16], and it usually still brings performance improvement.\nBecause a document is a large text unit, when it is used for relevance feedback many irrelevant terms can be introduced into the feedback process.\nTo overcome this, passage feedback has been proposed and shown to improve feedback performance [1, 23].\nA more direct solution is to ask the user for their relevance judgment of feedback terms.\nFor example, in some relevance feedback systems such as [12], there is an interaction step that allows the user to add or remove expansion terms after they are automatically extracted from relevant documents.\nThis is categorized as interactive query expansion, where the original query is augmented with user-provided terms, which can come from direct user input (free-form text or keywords) [22, 7, 10] or user selection of system-suggested terms (using thesauri [6, 22] or extracted from feedback documents [6, 22, 12, 4, 7]).\nIn many cases term relevance feedback has been found to effectively improve retrieval performance [6, 22, 12, 4, 10].\nFor example, the study in [12] shows that the user prefers to have explicit knowledge and direct control of which terms are used for query expansion, and the penetrable interface that provides this freedom is shown to perform better than other interfaces.\nHowever, in some other cases there is no significant benefit [3, 14], even if the user likes interacting with expansion terms.\nIn a simulated study carried out in [18], the author compares the retrieval performance of interactive
query expansion and automatic query expansion, and suggests that the potential benefits of the former can be hard to achieve.\nUsers are found to be poor at identifying useful terms for query expansion when a simple term presentation interface is unable to provide sufficient semantic context for the feedback terms.\nOur work differs from the previous ones in two important aspects.\nFirst, when we choose terms to present to the user for relevance judgment, we not only consider single-term value (e.g., the relative frequency of a term in the top documents, which can be measured by metrics such as the Robertson Selection Value and the Simplified Kullback-Leibler Distance listed in [24]), but also examine the cluster structure of the terms, so as to produce a balanced coverage of the different topic aspects.\nSecond, within the language modeling framework, we allow an elaborate construction of the updated query model, by setting different probabilities for different terms based on whether a term is a query term, on its significance in the top documents, and on its cluster membership.\nAlthough techniques for adjusting query term weights exist for vector space models [17] and probabilistic relevance models [9], most of the aforementioned works do not use them, choosing to just append feedback terms to the original query (thus using equal weights for them), which can lead to poorer retrieval performance.\nThe combination of the two aspects allows our method to perform much better than the baseline.\nThe usual way of presenting feedback terms is simply to display them in a list.\nThere has been some work on alternative user interfaces: [8] arranges terms in a hierarchy, and [11] compares three different interfaces, including terms + checkboxes, terms + context (sentences) + checkboxes, and sentences + input text box.\nIn both studies, however, there is no significant performance difference.\nIn our work we adopt the simplest approach of terms +
checkboxes.\nWe focus on term presentation and query model construction from feedback terms, and believe that using contexts to improve feedback term quality is orthogonal to our method.\n7.\nCONCLUSIONS\nIn this paper we studied the use of term feedback for interactive information retrieval in the language modeling approach.\nWe proposed a cluster-based method for selecting presentation terms, as well as algorithms to estimate refined query models from user term feedback.\nWe saw significant improvement in retrieval accuracy brought by term feedback, in spite of the fact that a user often makes mistakes in relevance judgment that hurt its performance.\nWe found the best-performing algorithm to be TCFB, which benefits from the combination of directly observed term evidence with TFB and indirectly learned cluster relevance with CFB.\nWhen we reduced the number of presentation terms, term feedback was still able to keep much of its performance gain over the baseline.\nFinally, we compared term feedback to document-level relevance feedback, and found that TCFB3C's performance is on a par with the latter using 5 feedback documents.\nWe regard term feedback as a viable alternative to traditional relevance feedback, especially when there are no relevant documents in the top results.\nWe propose to extend our work in several ways.\nFirst, we want to study whether the use of various contexts can help the user better identify term relevance, while not sacrificing the simplicity and compactness of term feedback.\nSecond, currently all terms are presented to the user in a single batch.\nWe could instead consider iterative term feedback, by presenting a small
number of terms first, and show more terms after receiving user feedback, or stop when the refined query is good enough.\nThe presented terms should be selected dynamically to maximize learning benefits at any moment.\nThird, we plan to incorporate term feedback into our UCAIR toolbar [20], an Internet Explorer plugin, to make it work for web search.\nWe are also interested in studying how to combine term feedback with relevance feedback or implicit feedback.\nWe could, for example, allow the user to dynamically modify terms in a language model learned from feedback documents.\n3.\nGENERAL APPROACH\nWe follow the language modeling approach, and base our method on the KL-divergence retrieval model proposed in [25].\nWith this model, the retrieval task involves estimating a query language model \u03b8q from a given query and a document language model \u03b8d from each document, and calculating their KL-divergence D (\u03b8q | | \u03b8d), which is then used to score the documents.\n[25]
treats relevance feedback as a query model re-estimation problem, i.e., computing an updated query model \u03b8q given the original query text and the extra evidence carried by the judged relevant documents.\nWe adopt this view, and cast our task as updating the query model from user term feedback.\nThere are two key subtasks here: first, how to choose the best terms to present to the user for judgment, in order to gather maximal evidence about the user's information need; second, how to compute an updated query model based on this term feedback evidence, so that it captures the user's information need and translates into good retrieval performance.\n4.\nPRESENTATION TERM SELECTION\nProper selection of the terms presented to the user for judgment is crucial to the success of term feedback.\nIf the terms are poorly chosen and there are few relevant ones, the user will have a hard time looking for useful terms to help clarify his\/her information need.\nIf the relevant terms are plentiful but all concentrate on a single aspect of the query topic, then we will only be able to get feedback on that aspect and miss the others, resulting in a loss of breadth in the retrieved results.\nTherefore, it is important to carefully select presentation terms to maximize the expected gain from user feedback, i.e., to choose those that can potentially reveal the most evidence of the user's information need.\nThis is similar to active feedback [21], which suggests that a retrieval system should actively probe the user's information need, and that in the case of relevance feedback, the feedback documents should be chosen to maximize learning benefits (e.g.
diversely so as to increase coverage).\nIn our approach, the top N documents from an initial retrieval using the original query form the source of feedback terms: all terms that appear in them are considered candidates to present to the user.\nThese documents serve as pseudo-feedback, since they provide a much richer context than the original query (usually very short), while the user is not asked to judge their relevance.\nDue to the latter reason, it is possible to make N quite large (e.g., in our experiments we set N = 60) to increase its coverage of different aspects in the topic.\nThe simplest way of selecting feedback terms is to choose the most frequent M terms from the N documents.\nThis method, however, has two drawbacks.\nFirst, a lot of common noisy terms will be selected due to their high frequencies in the document collection, unless a stop-word list is used for filtering.\nSecond, the presentation list will tend to be filled by terms from major aspects of the topic; those from a minor aspect are likely to be missed due to their relatively low frequencies.\nWe solve the above problems by two corresponding measures.\nFirst, we introduce a background model \u03b8B that is estimated from collection statistics and explains the common terms, so that they are much less likely to appear in the presentation list.\nSecond, the terms are selected from multiple clusters in the pseudo-feedback documents, to ensure sufficient representation of different aspects of the topic.\nWe rely on the mixture multinomial model, which is used for theme discovery in [26].\nSpecifically, we assume the N documents contain K clusters {Ci | i = 1, 2, \u00b7 \u00b7 \u00b7 K}, each characterized by a multinomial word distribution (also known as unigram language model) \u03b8i and corresponding to an aspect of the topic.\nThe documents are regarded as sampled from a mixture of K + 1 components, including the K clusters and the background model:\np (w | d) = \u03bbB p (w | \u03b8B) + (1 - \u03bbB) \u03a3i=1..K \u03c0d, i p (w | \u03b8i),\nwhere w is a word, \u03bbB is the
mixture weight for the background model \u03b8B, and \u03c0d, i is the document-specific mixture weight for the i-th cluster model \u03b8i.\nWe then estimate the cluster models by maximizing the probability of the pseudo-feedback documents being generated from the multinomial mixture model:\nlog p (D | \u039b) = \u03a3d\u2208D \u03a3w\u2208V c (w; d) log p (w | d),\nwhere D = {di | i = 1, 2, \u00b7 \u00b7 \u00b7 N} is the set of the N documents, V is the vocabulary, c (w; d) is w's frequency in d, and \u039b = {\u03b8i | i = 1, 2, \u00b7 \u00b7 \u00b7 K} \u222a {\u03c0di, j | i = 1, 2, \u00b7 \u00b7 \u00b7 N, j = 1, 2, \u00b7 \u00b7 \u00b7 K} is the set of model parameters to estimate.\nThe cluster models can be efficiently estimated using the Expectation-Maximization (EM) algorithm.\nFor its details, we refer the reader to [26].\nTable 1 shows the cluster models for the TREC query \"Transportation tunnel disasters\" (K = 3).\nNote that only the middle cluster is relevant.\nTable 1: Cluster models for topic 363 \"Transportation tunnel disasters\"\nCluster 1: tunnel 0.0768, transport 0.0364, traffic 0.0206, railwai 0.0186, harbor 0.0146, rail 0.0140, bridg 0.0139, kilomet 0.0136, truck 0.0133, construct 0.0131, \u00b7 \u00b7 \u00b7\nCluster 2: tunnel 0.0935, fire 0.0295, truck 0.0236, french 0.0220, smoke 0.0157, car 0.0154, italian 0.0152, firefight 0.0144, blaze 0.0127, blanc 0.0121, \u00b7 \u00b7 \u00b7\nCluster 3: tunnel 0.0454, transport 0.0406, toll 0.0166, amtrak 0.0153, train 0.0129, airport 0.0122, turnpik 0.0105, lui 0.0095, jersei 0.0093, pass 0.0087, \u00b7 \u00b7 \u00b7\nFrom each of the K estimated clusters, we choose the L = M\/K terms with the highest probabilities to form a total of M presentation terms.\nIf a term happens to be in the top L in multiple clusters, we assign it to the cluster where it has the highest probability and let the other clusters take one more term as compensation.\nWe also filter out terms in the original query text because they tend to always be relevant when the query is short.\nThe selected terms are then presented to the user for judgment.\nA sample
(completed) feedback form is shown in Figure 1. In this study we deal only with binary judgments: a presented term is unchecked by default, and the user may check it to indicate relevance. We do not explicitly exploit negative feedback (i.e., penalize irrelevant terms), because with binary feedback an unchecked term is not necessarily irrelevant (the user may simply be unsure about its relevance). We could ask the user for finer-grained judgments (e.g., choosing among highly relevant, somewhat relevant, do not know, somewhat irrelevant, and highly irrelevant), but binary feedback is more compact, taking less space to display and less user effort to judge.

Figure 1: Filled clarification form for Topic 363

5. ESTIMATING QUERY MODELS FROM TERM FEEDBACK

In this section we present several algorithms for exploiting term feedback. Each algorithm takes as input the original query q, the clusters {θ_i} generated by the theme discovery algorithm, and the set of feedback terms T with their relevance judgments R, and outputs an updated query language model θ̃_q that makes the best use of the feedback evidence to capture the user's information need. We first describe our notation:

• θ_q: the original query model, derived from the query terms only: p(w|θ_q) = c(w; q) / |q|, where c(w; q) is the count of w in q and |q| = Σ_{w∈q} c(w; q) is the query length.
• θ̃_q: the updated query model, which we need to estimate from term feedback.
• θ_i (i = 1, 2, ..., K): the unigram language model of cluster C_i, as estimated by the theme discovery algorithm.
• T = {t_{i,j}} (i = 1...K, j = 1...L): the set of terms presented to the user for judgment; t_{i,j} is the j-th term chosen from cluster C_i.
• R = {δ_w | w ∈ T}: δ_w is an indicator variable that is 1 if w is judged relevant and 0 otherwise.

5.1 TFB (Direct Term Feedback)

This is a straightforward form of term feedback that does not involve any secondary structure. We give a weight of 1 to terms judged relevant by the user, a weight of μ to query terms, and zero weight to all other terms, and then normalize:

p(w|θ̃_q) = (μ c(w; q) + δ_w) / (μ |q| + Σ_{w'∈T} δ_{w'})

where Σ_{w∈T} δ_w is the total number of terms judged relevant, and δ_w is taken to be 0 for terms outside T. We call this method TFB (direct Term FeedBack). If we let μ = 1, this approach is equivalent to appending the relevant terms to the original query, which is what standard query expansion (without term reweighting) does. If we set μ > 1, we put more emphasis on the query terms than on the checked ones. Note that the resulting model is more biased toward θ_q when the original query is long or the user feedback is weak, which makes sense, as we can place more trust in the original query in either case.

5.2 CFB (Cluster Feedback)

Here we exploit the cluster structure that played an important role when we selected the presentation terms. The clusters represent different aspects of the query topic, each of which may or may not be relevant. If we can identify the relevant clusters, we can combine them to generate a query model that is good at discovering documents belonging to those clusters (rather than the irrelevant ones). We could ask the user to judge the relevance of a cluster directly after viewing representative terms from it, but this would often be difficult: the user has to guess the semantics of a cluster from a set of terms that may not be well connected to one another, for lack of context. We therefore learn cluster feedback indirectly, inferring the relevance of a cluster from the relevance of its feedback terms. Because each cluster has an equal number of terms presented to the user, the simplest measure of a cluster's relevance is the number of its terms that are judged relevant. Intuitively, the more terms are marked relevant in a cluster, the closer the cluster is to the query topic, and the more the cluster should participate in query
modification. If we combine the cluster models using weights determined this way and then interpolate with the original query model, we get the following query-updating formula, which we call CFB (Cluster FeedBack):

p(w|θ̃_q) = λ p(w|θ_q) + (1 − λ) Σ_{i=1}^{K} [ (Σ_{j=1}^{L} δ_{t_{i,j}}) / (Σ_{k=1}^{K} Σ_{j=1}^{L} δ_{t_{k,j}}) ] p(w|θ_i)

where Σ_{j=1}^{L} δ_{t_{i,j}} is the number of relevant terms in cluster C_i, and Σ_{k=1}^{K} Σ_{j=1}^{L} δ_{t_{k,j}} is the total number of relevant terms. We note that when there is only one cluster (K = 1), the above formula degenerates to

p(w|θ̃_q) = λ p(w|θ_q) + (1 − λ) p(w|θ_1)

which is merely pseudo-feedback of the form proposed in [25].

5.3 TCFB (Term-Cluster Feedback)

TFB and CFB both have drawbacks. TFB assigns non-zero probabilities to the presented terms that are marked relevant, but completely ignores the (far more numerous) other terms, which may be left unchecked due to the user's ignorance, or simply not included in the presentation list; yet we should be able to infer their relevance from the checked ones. For example, in Figure 1, since as many as 5 terms in the middle cluster (the third and fourth columns) are checked, we should have high confidence in the relevance of the other terms in that cluster. CFB remedies TFB's problem by treating the terms in a cluster collectively, so that unchecked/unpresented terms receive weight when presented terms in their clusters are judged relevant, but it does not distinguish which terms in a cluster are presented or judged. Intuitively, the judged relevant terms should receive larger weights because they are explicitly indicated as relevant by the user. We therefore combine the two methods, hoping to get the best of both, by interpolating the TFB model with the CFB model; we call the result TCFB:

p(w|θ̃_{q,TCFB}) = α p(w|θ̃_{q,TFB}) + (1 − α) p(w|θ̃_{q,CFB})

6. EXPERIMENTS

In this section we describe our experimental results. We first describe the experiment setup and present an overview of the various methods' performance. Then we discuss the effects of varying the parameter settings in the algorithms, as well as the number of presentation terms. Next we analyze user term feedback
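The three query-model update rules of Section 5 (TFB, CFB, and TCFB) can be sketched as follows. This is an illustrative implementation, not the authors' code: the parameter names (mu for the query-term weight, lam for the interpolation weight on the original query model, alpha for the TFB/CFB mixing weight) and the exact direction of each interpolation are our assumptions.

```python
def tfb(query_counts, checked, mu=4.0):
    """TFB: weight mu per query-term occurrence, 1 per checked feedback
    term, zero elsewhere, then normalize into a distribution."""
    w = {t: mu * c for t, c in query_counts.items()}
    for t in checked:
        w[t] = w.get(t, 0.0) + 1.0
    z = sum(w.values())
    return {t: v / z for t, v in w.items()}

def cfb(query_model, cluster_models, checked_per_cluster, lam=0.1):
    """CFB: weight each cluster model by its share of checked terms,
    then interpolate the cluster mixture with the original query model."""
    total = sum(checked_per_cluster)
    out = {t: lam * p for t, p in query_model.items()}
    for n_i, theta_i in zip(checked_per_cluster, cluster_models):
        w_i = (1.0 - lam) * n_i / total if total else 0.0
        for t, p in theta_i.items():
            out[t] = out.get(t, 0.0) + w_i * p
    return out

def tcfb(tfb_model, cfb_model, alpha=0.3):
    """TCFB: interpolate the TFB model with the CFB model."""
    return {t: alpha * tfb_model.get(t, 0.0) + (1.0 - alpha) * cfb_model.get(t, 0.0)
            for t in set(tfb_model) | set(cfb_model)}
```

With mu = 1, tfb reduces to appending the checked terms to the query, matching the standard query-expansion interpretation in the text.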
behavior and its relation to retrieval performance. Finally we compare term feedback to relevance feedback and show that it has a particular advantage.

6.1 Experiment Setup and Basic Results

We used the TREC 2005 HARD Track [2] to evaluate our algorithms. The track used the AQUAINT collection, a 3GB corpus of English newswire text. The topic set consisted of 50 topics previously known to be hard, i.e., to have low retrieval performance. It is for such hard topics that user feedback is most helpful, as it can provide information to disambiguate the queries; for easy topics the user may be unwilling to spend effort on feedback if the automatic retrieval results are already good enough. Participants of the track could submit custom-designed clarification forms (CF) to solicit feedback from human assessors provided by NIST.

Table 2: Retrieval performance for different methods and CF types. The last row is the percentage of MAP improvement over the baseline. The parameter settings μ = 4, λ = 0.1, α = 0.3 are near-optimal.

Table 3: MAP variation with the number of presented terms.

We designed three sets of clarification forms for term feedback, differing in the choice of K, the number of clusters, and L, the number of presented terms from each cluster: 1 × 48, a single big cluster with 48 terms; 3 × 16, 3 clusters with 16 terms each; and 6 × 8, 6 clusters with 8 terms each. The total number of presented terms (M) is fixed at 48, so by comparing the performance of the different types of clarification forms we can measure the effects of different degrees of clustering. For each topic, an assessor completed the forms in the order 6 × 8, 1 × 48, 3 × 16, spending up to three minutes on each form. The sample clarification form shown in Figure 1 is of type 3 × 16. It is a simple and compact interface in which the user can check relevant terms. The form is self-explanatory; no extra user training is needed on how to use
it. Our initial queries are constructed using only the topic titles, which are on average 2.7 words long. As our baseline we use the KL-divergence retrieval method implemented in the Lemur Toolkit, with 5 pseudo-feedback documents. We stem the terms, choose Dirichlet smoothing with a prior of 2000, and truncate query language models to 50 terms (these settings are used throughout the experiments). For all other parameters we use Lemur's default settings. The baseline turns out to perform above average among the track participants. After an initial run using this baseline retrieval method, we take the top 60 documents for each topic and apply the theme discovery algorithm to output the clusters (1, 3, or 6 of them), from which we generate the clarification forms. After user feedback is received, we run the term feedback algorithms (TFB, CFB or TCFB) to estimate updated query models, which are then used for a second iteration of retrieval. We evaluate the different retrieval methods on their rankings of the top 1000 documents. The evaluation metrics we adopt are mean average (non-interpolated) precision (MAP), precision at top 30 (Pr@30), and total relevant retrieved (RR). Table 2 shows the performance of the various methods and configurations of K × L. The suffixes (1C, 3C, 6C) after TFB, CFB, and TCFB stand for the number of clusters (K); for example, TCFB3C means the TCFB method on the 3 × 16 clarification forms. From Table 2 we can make the following observations:

1. All methods perform considerably better than the pseudo-feedback baseline, with TCFB3C achieving the highest improvement in MAP, 41.1%, indicating a significant contribution of term feedback to the clarification of the user's information need. In other words, term feedback truly helps improve retrieval accuracy.

2. For TFB, performance is almost equal on the 1 × 48 and 3 × 16 clarification forms in terms of MAP (although the latter is slightly better
in Pr@30 and RR), and a little worse on the 6 × 8 forms.

3. Both CFB3C and CFB6C perform better than their TFB counterparts on all three metrics, suggesting that feedback on a secondary cluster structure is indeed beneficial. CFB1C is actually worse, because it cannot adjust the weight of its (single) cluster from term feedback and is thus merely pseudo-feedback.

4. Although TCFB is just a simple interpolation of TFB and CFB, it outperforms both. This supports our speculation that TCFB overcomes the drawbacks of TFB (paying attention only to checked terms) and of CFB (not distinguishing checked from unchecked terms within a cluster). Except for TCFB6C vs. CFB6C, the performance advantage of TCFB over TFB/CFB is significant at p < 0.05 using the Wilcoxon signed-rank test. This is not the case for TFB vs. CFB, each of which is better than the other on nearly half of the topics.

6.2 Reduction of Presentation Terms

In some situations we may have to reduce the number of presentation terms due to limits on display space or user feedback effort. It is interesting to know whether our algorithms' performance deteriorates when the user is presented with fewer terms. Because the presentation terms within each cluster are generated in decreasing order of probability, the reduced presentation list forms a subset of the original one (there are complications from terms appearing in the top L of multiple clusters, but these are exceptional cases). Therefore, we can easily simulate what happens when the number of presentation terms decreases from M to M': we keep all judgments of the top L' = M'/K terms in each cluster and discard the others. Table 3 shows the performance of the various algorithms as the number of presentation terms ranges from 6 to 48. We find that the performance of TFB is more susceptible to presentation-term reduction than that of CFB or TCFB. For example, at 12 terms the MAP of TFB3C is 90.6% of
that at 48 terms, while the numbers for CFB3C and TCFB3C are 98.0% and 96.1%, respectively. We conjecture the reason to be that while TFB's performance depends heavily on how many good terms are chosen for query expansion, CFB needs only a rough estimate of cluster weights to work. Also, the 3 × 16 clarification forms seem to be more robust than the 6 × 8 ones: at 12 terms the MAP of TFB6C is 87.1% of that at 48 terms, lower than the 90.6% for TFB3C; similarly, for CFB it is 95.0% against 98.0%. This is natural: with a cluster number as large as 6, it is easier to get into the situation where each cluster has too few presentation terms for topic diversification to be useful. Overall, we are surprised to see that the algorithms still perform reasonably well when the number of presentation terms is small. For example, at only 12 terms CFB3C (with a clarification form of size 3 × 4) still improves 36.5% over the baseline, dropping only slightly from 39.3% at 48 terms.

6.3 User Feedback Analysis

In this part we study several aspects of users' term feedback behavior, and whether they are connected to retrieval performance. Figure 2 shows the distribution of the time needed to complete a clarification form. We see that the user is usually able to finish term feedback within a reasonably short time: for more than half of the topics the clarification form is completed in just 1 minute, and only a small fraction of topics (less than 10% for 1 × 48 and 3 × 16) take more than 2 minutes. This suggests that term feedback is suitable for interactive ad hoc retrieval, where a user usually does not want to spend too much time providing feedback. We find that a user often makes mistakes when judging term relevance. Sometimes a relevant term is left out because its connection to the query topic is not obvious to the user; other times a dubious term is included but turns out to be irrelevant. Take the topic in Figure 1 for
example. There was a fire disaster in the Mont Blanc Tunnel between France and Italy in 1999, but the user failed to select such keywords as "mont", "blanc", "french" and "italian" due to his/her ignorance of the event. Indeed, without proper context it would be hard to make perfect judgments. What, then, is the extent to which users are good at term feedback? Does it have a serious impact on retrieval performance? To answer these questions, we need a measure of an individual term's true relevance. We adopt the Simplified KL Divergence metric used in [24] to decide query expansion terms as our term relevance measure:

σ_KLD(w) = p(w|R) log [ p(w|R) / p(w|¬R) ]

where p(w|R) is the probability that a relevant document contains term w, and p(w|¬R) is the probability that an irrelevant document contains w, both of which can easily be computed via maximum likelihood estimation given document-level relevance judgments. If σ_KLD(w) > 0, w is more likely to appear in relevant documents than in irrelevant ones. We consider a term relevant if its Simplified KL Divergence value is greater than a threshold σ_0. We can then define the precision and recall of user term judgments accordingly: precision is the fraction of terms checked by the user that are relevant; recall is the fraction of presented relevant terms that are checked by the user.

Table 4: Term selection statistics (topic average)

Table 4 shows the number of checked terms, relevant terms, and relevant checked terms when σ_0 is set to 1.0, as well as the precision/recall of user term judgments. Note that when the clarification forms contain more clusters, fewer terms are checked: 14.8 for 1 × 48, 13.3 for 3 × 16, and 11.2 for 6 × 8. A similar pattern holds for relevant terms and relevant checked terms. There seems to be a trade-off between increasing topic diversity through clustering and losing extra relevant terms: when there are more clusters, each gets fewer terms to present, which can hurt a major relevant cluster that contains
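The term relevance measure and the precision/recall of user judgments described above can be computed as in the sketch below. The function names are ours, the simplified-KL form shown is our reading of the metric, and we assume both presence probabilities are nonzero.

```python
import math

def sigma_kld(p_w_rel, p_w_nonrel):
    """Simplified KL divergence of term w: p(w|R) * log(p(w|R) / p(w|~R)),
    where the inputs are document-presence probabilities estimated by
    maximum likelihood.  Assumes both probabilities are nonzero."""
    return p_w_rel * math.log(p_w_rel / p_w_nonrel)

def judgment_quality(checked, presented, sigma, threshold=1.0):
    """Precision/recall of the user's checked terms against the
    sigma_KLD-based relevance criterion (sigma: term -> score)."""
    relevant = {t for t in presented if sigma.get(t, 0.0) > threshold}
    hit = set(checked) & relevant
    precision = len(hit) / len(checked) if checked else 0.0
    recall = len(hit) / len(relevant) if relevant else 0.0
    return precision, recall
```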
many relevant terms. Therefore it is not always helpful to have more clusters; e.g., TFB6C is actually worse than TFB1C. The main finding from Table 4 is that users are not particularly good at identifying relevant terms, which echoes the finding in [18]. In the case of the 3 × 16 clarification forms, the average number of terms checked as relevant by the user is 13.3 per topic, and the average number of relevant terms whose σ_KLD value exceeds 1.0 is 12.6; the user is able to recognize only 6.9 of these terms on average. Indeed, the precision and recall of user feedback terms (as defined previously) are far from perfect. On the other hand, if the user had correctly checked all such relevant terms, the performance of our algorithms would have increased considerably, as shown in Table 5. We see that TFB improves greatly when an oracle checks all relevant terms, while CFB hits a bottleneck around a MAP of 0.325: since all it does is adjust cluster weights, once the learned weights are close to accurate it cannot benefit further from term feedback. Also note that TCFB fails to outperform TFB here, probably because TFB is already sufficiently accurate.

Figure 2: Clarification form completion time distributions

Table 5: Change of MAP when using all (and only) relevant terms

6.4 Comparison with Relevance Feedback

Now we compare term feedback with document-level relevance feedback, in which the user is presented with the top N documents from an initial retrieval and asked to judge their relevance. The feedback process is simulated using document relevance judgments from NIST. We use the mixture-model-based feedback method proposed in [25], with the mixture noise set to 0.95 and the feedback coefficient set to 0.9. Comparative evaluation of relevance feedback against other methods is complicated by the fact that some documents have already been viewed during feedback, so it makes no sense to include them in the retrieval results of the second
run. However, this does not hold for term feedback. Thus, to be fair with respect to the user's information gain, relevant feedback documents should be kept at the top of the ranking, and irrelevant ones should be left out. Therefore, we use relevance feedback to produce a ranking of the top 1000 retrieved documents with every feedback document excluded, and then prepend the relevant feedback documents at the front. Table 6 shows the performance of relevance feedback for different values of N and compares it with TCFB3C.

Table 6: Performance of relevance feedback for different numbers of feedback documents (N).

We see that the performance of TCFB3C is comparable to that of relevance feedback using 5 documents. Although it is poorer than relevance feedback with 10 documents in terms of MAP and Pr@30, it does retrieve more documents (4947) when going down the ranked list. We also compare the quality of the automatically inserted terms in relevance feedback with that of the manually selected terms in term feedback. This is done by truncating the query model modified by relevance feedback to a size equal to the number of checked terms for the same topic; we can then compare the terms in the truncated model with the checked terms. Figure 3 shows the distribution of the terms' σ_KLD scores. We find that term feedback tends to produce expansion terms of higher quality (those with σ_KLD > 1) than relevance feedback (with 10 feedback documents). This does not contradict the fact that the latter yields higher retrieval performance: when we use the truncated query model instead of the intact one refined by relevance feedback, the MAP is only 0.304.

Figure 3: Comparison of expansion term quality between relevance feedback (with 10 feedback documents) and term feedback

The truth is, although there are many unwanted terms in the query model expanded from feedback documents, there are also more relevant terms than the user can
possibly select from the list of presentation terms generated from pseudo-feedback documents, and the positive effects often outweigh the negative ones. We are interested in knowing under what circumstances term feedback has an advantage over relevance feedback. One such situation is when none of the top N feedback documents is relevant, rendering relevance feedback useless. This is not as infrequent as one might think: out of the 50 topics, there are 13 such cases when N = 5, 10 when N = 10, and still 3 when N = 20. When this happens, one can only back off to the original retrieval method; the power of relevance feedback is lost. Surprisingly, in 11 of the 13 cases where relevance feedback is impossible, the user is able to check at least 2 relevant terms on the 3 × 16 clarification form (we consider a term t relevant if σ_KLD(t) > 1.0). Furthermore, in 10 of these cases TCFB3C outperforms the pseudo-feedback baseline, increasing MAP from 0.076 to 0.146 on average (these are particularly hard topics). We see two possible explanations for this phenomenon of term feedback working even when relevance feedback does not. First, even if none of the top N documents (for small N) is relevant, we may still find relevant documents in the top 60, from which we draw feedback terms; this range is more inclusive but usually out of reach when people do relevance feedback in interactive ad hoc search. This is true for topic 367 "piracy", where the top 10 feedback documents are all about software piracy, yet among documents 10-60 there are some about piracy on the seas (the real information need), contributing terms such as "pirate" and "ship" for selection in the clarification form. Second, for some topics, a document needs to meet some special condition in order to be relevant. The top N documents may be related to the topic but nonetheless irrelevant. In this case, we may
still extract useful terms from these documents, even though they do not qualify as relevant. For example, in topic 639 "consumer online shopping", a document needs to mention what contributes to shopping growth to really match the specified information need, so none of the top 10 feedback documents is regarded as relevant. Nevertheless, feedback terms such as "retail" and "commerce" are good for query expansion.

7. CONCLUSIONS

In this paper we studied the use of term feedback for interactive information retrieval in the language modeling approach. We proposed a cluster-based method for selecting presentation terms, as well as algorithms for estimating refined query models from user term feedback. We saw significant improvements in retrieval accuracy brought by term feedback, despite the fact that users often make judgment mistakes that hurt its performance. We found the best-performing algorithm to be TCFB, which benefits from combining directly observed term evidence (TFB) with indirectly learned cluster relevance (CFB). When we reduced the number of presentation terms, term feedback was still able to keep much of its performance gain over the baseline. Finally, we compared term feedback to document-level relevance feedback and found TCFB3C's performance to be on a par with the latter using 5 feedback documents. We regard term feedback as a viable alternative to traditional relevance feedback, especially when there are no relevant documents at the top of the ranking. We propose to extend our work in several ways. First, we want to study whether the use of various kinds of context can help the user better identify term relevance without sacrificing the simplicity and compactness of term feedback. Second, currently all terms are presented to the user in a single batch; we could instead consider iterative term feedback, presenting a small number of terms first and showing more terms after receiving user feedback, or stopping
when the refined query is good enough. The presented terms should be selected dynamically to maximize the learning benefit at any moment. Third, we plan to incorporate term feedback into our UCAIR toolbar [20], an Internet Explorer plugin, to make it work for Web search. We are also interested in studying how to combine term feedback with relevance feedback or implicit feedback; we could, for example, allow the user to dynamically modify terms in a language model learned from feedback documents.

A Study of Poisson Query Generation Model for Information Retrieval

Qiaozhu Mei, Hui Fang, Chengxiang Zhai
Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801
{qmei2,hfang,czhai}@uiuc.edu

ABSTRACT

Many variants of language models have been proposed for information retrieval. Most existing models are based on the multinomial distribution and score documents by query likelihood computed with a query generation probabilistic model. In this paper, we propose and study a new family of query generation models based on the Poisson distribution. We show that while in their simplest forms the new family of models and the existing multinomial models are equivalent, they behave differently under many smoothing methods. We show that the Poisson model has several advantages over the multinomial model, including naturally accommodating per-term smoothing and allowing for more accurate background modeling. We present several variants of the new model corresponding to different smoothing methods, and evaluate them on four representative TREC test collections. The results show that while the basic models perform comparably, the Poisson model can outperform the multinomial model with per-term smoothing, and performance can be further improved with two-stage smoothing.

Categories and Subject Descriptors: H.3.3 [Information Search and Retrieval]: Retrieval Models
General Terms: Algorithms

1. INTRODUCTION

As a new type of probabilistic retrieval model, language models have been shown to be effective for many retrieval tasks [21, 28, 14, 4]. Among the many variants of language models proposed, the most popular and fundamental one is the query-generation language model [21, 13], which leads to the query-likelihood scoring method for ranking documents. In such a model, given a query q and a document d, we compute the likelihood of generating query q with a model
estimated based on document d, i.e., the conditional probability p(q|d). We can then rank documents by the likelihood of generating the query. Virtually all existing query generation language models are based on either the multinomial distribution [19, 6, 28] or the multivariate Bernoulli distribution [21, 18]. The multinomial distribution is especially popular and has also been shown to be quite effective. Its heavy use is partly due to its success in speech recognition, where the multinomial distribution is a natural choice for modeling the occurrence of a particular word in a particular position in text. Compared with the multivariate Bernoulli, the multinomial distribution has the advantage of being able to model the frequency of terms in the query; in contrast, the multivariate Bernoulli only models the presence and absence of query terms, and thus cannot capture different frequencies of query terms. However, the multivariate Bernoulli also has one potential advantage over the multinomial from the viewpoint of retrieval: in a multinomial distribution the probabilities of all terms must sum to 1, making it hard to accommodate per-term smoothing, while in a multivariate Bernoulli the presence probabilities of different terms are completely independent of each other, easily accommodating per-term smoothing and weighting. (Note that term absence is still indirectly captured in a multinomial model through the constraint that all term probabilities sum to 1.) In this paper, we propose and study a new family of query generation models based on the Poisson distribution, in which we model the frequency of each term independently with a Poisson distribution. To score a document, we first estimate a multivariate Poisson model based on the document, and then score the document by the likelihood of the query under the estimated Poisson model. In some sense, the Poisson model combines the advantage of
multinomial in modeling term frequency and the advantage of the multivariate Bernoulli in accommodating per-term smoothing. Indeed, like the multinomial distribution, the Poisson distribution models term frequencies, but without the constraint that all the term probabilities must sum to 1; and like the multivariate Bernoulli, it models each term independently, and thus can easily accommodate per-term smoothing. As in existing work on multinomial language models, smoothing is critical for this new family of models. We derive several smoothing methods for the Poisson model in parallel to those used for multinomial distributions, and compare the corresponding retrieval models with those based on multinomial distributions. We find that with some smoothing methods the new model and the multinomial model lead to exactly the same formula, while with other smoothing methods they diverge, with the Poisson model offering more flexibility for smoothing. In particular, a key difference is that the Poisson model can naturally accommodate per-term smoothing, which is hard to achieve with a multinomial model without a heuristic twist of the semantics of the generative model. We exploit this potential advantage to develop a new term-dependent smoothing algorithm for the Poisson model, and show that it can improve performance over term-independent smoothing algorithms using either the Poisson or the multinomial model. This advantage is seen for both one-stage and two-stage smoothing. Another potential advantage of the Poisson model is that its corresponding background model for smoothing can be improved by using a mixture model that has a closed-form formula. This new background model is shown to outperform the standard background model and to reduce the sensitivity of retrieval performance to the smoothing parameter. The rest of the paper is organized as follows. In Section 2, we introduce the new family of query generation models with Poisson
distribution, and present various smoothing methods that lead to different retrieval functions. In Section 3, we analytically compare the Poisson language model with the multinomial language model from the perspective of retrieval. We then design empirical experiments to compare the two families of language models in Section 4. We discuss related work in Section 5 and conclude in Section 6.

2. QUERY GENERATION WITH POISSON PROCESS

In the query generation framework, a basic assumption is that a query is generated with a model estimated based on a document. In most existing work [12, 6, 28, 29], each query word is assumed to be sampled independently from a multinomial distribution. Alternatively, we assume that a query is generated by sampling the frequencies of words from a series of independent Poisson processes [20].

2.1 The Generation Process

Let V = {w_1, ..., w_n} be a vocabulary set. Let w be a piece of text composed by an author, represented by the frequency vector (c(w_1, w), ..., c(w_n, w)), where c(w_i, w) is the frequency count of term w_i in text w. In retrieval, w could be either a query or a document. We consider the frequency counts of the n unique terms in w as n different types of events, sampled from n independent homogeneous Poisson processes, respectively. Suppose t is the time period during which the author composed the text. With a homogeneous Poisson process, the frequency count of each event, i.e., the number of occurrences of w_i, follows a Poisson distribution with associated parameter λ_i t, where λ_i is a rate parameter characterizing the expected number of occurrences of w_i in a unit of time. The probability density function of this Poisson distribution is given by

P(c(w_i, w) = k | λ_i t) = e^{−λ_i t} (λ_i t)^k / k!

Without loss of generality, we set t to the length of the text w (the author writes one word per unit of time), i.e., t = |w|. With n such independent Poisson processes, each explaining the generation of one term in the vocabulary, the
likelihood of w to be generated from such Poisson processes can be written as

    p(w|Λ) = ∏_{i=1}^{n} p(c(w_i, w)|Λ) = ∏_{i=1}^{n} e^{−λ_i·|w|} (λ_i·|w|)^{c(w_i,w)} / c(w_i, w)!

where Λ = {λ_1, ..., λ_n} and |w| = Σ_{i=1}^{n} c(w_i, w). We refer to these n independent Poisson processes with parameters Λ as a Poisson language model. Let D = {d_1, ..., d_m} be an observed set of document samples generated from the Poisson processes above. The maximum likelihood estimate (MLE) of λ_i is

    λ̂_i = Σ_{d∈D} c(w_i, d) / Σ_{d∈D} Σ_{w′∈V} c(w′, d)

Note that this MLE is different from the MLE for the Poisson distribution without considering the document lengths, which appears in [22, 24]. Given a document d, we may estimate a Poisson language model Λ_d using d as a sample. The likelihood that a query q is generated from the document language model Λ_d can be written as

    p(q|d) = ∏_{w∈V} p(c(w, q)|Λ_d)    (1)

This representation is clearly different from the multinomial query generation model as (1) the likelihood includes all the terms in the vocabulary V, instead of only those appearing in q, and (2) instead of the appearance of terms, the event space of this model is the frequencies of each term. In practice, we have the flexibility to choose the vocabulary V. At one extreme, we can use the vocabulary of the whole collection; however, this may bring in noise and considerable computational cost. At the other extreme, we may focus on the terms in the query and ignore other terms, but some useful information may be lost by ignoring the non-query terms. As a compromise, we may conflate all the non-query terms into one single pseudo term; in other words, we may assume that for each query there is exactly one non-query term in the vocabulary. In our experiments, we adopt this pseudo non-query term strategy.

A document can be scored with the likelihood in Equation 1. However, if a query term is unseen in the document, the MLE of the
Poisson distribution would assign zero probability to that term, causing the probability of the whole query to be zero. As in existing language modeling approaches, the main challenge in constructing a reasonable retrieval model is to find a smoothed language model for p(·|d).

2.2 Smoothing in Poisson Retrieval Model
In general, we want to assign non-zero rates to the query terms that are not seen in document d. Many smoothing methods have been proposed for multinomial language models [2, 28, 29]; in those models, we have to discount the probabilities of some words seen in the text to leave extra probability mass for the unseen words. In Poisson language models, however, we do not have the multinomial constraint Σ_{w∈V} p(w|d) = 1. Thus we do not have to discount the probability of seen words in order to give a non-zero rate to an unseen word; we only need to guarantee that Σ_{k=0,1,2,...} p(c(w, d) = k|d) = 1 for each term. In this section, we introduce three different strategies for smoothing a Poisson language model, and show how they lead to different retrieval functions.

2.2.1 Bayesian Smoothing using Gamma Prior
Following the risk minimization framework in [11], we assume that a document is generated by the arrival of terms over a time period of |d| according to the document language model, which essentially consists of a vector of Poisson rates, one per term, i.e., Λ_d = (λ_{d,1}, ..., λ_{d,|V|}). Each document is assumed to be generated from a potentially different model. Given a particular document d, we want to estimate Λ_d. The rate of each term is estimated independently of the other terms. We use Bayesian estimation with the following Gamma prior, which has two parameters, α and β:

    Gamma(λ|α, β) = (β^α / Γ(α)) λ^{α−1} e^{−βλ}

For each term w, the parameters α_w and β_w are chosen to be α_w = μ·λ_{C,w} and β_w = μ, where
μ is a parameter and λ_{C,w} is the rate of w estimated from some background language model, usually the collection language model. The posterior distribution of Λ_d is given by

    p(Λ_d|d, C) ∝ ∏_{w∈V} e^{−λ_w(|d|+μ)} λ_w^{c(w,d)+μλ_{C,w}−1}

which is a product of |V| Gamma distributions, with parameters c(w, d) + μλ_{C,w} and |d| + μ for each word w. Given that the mean of a Gamma distribution is α/β, we have

    λ̂_{d,w} = ∫ λ_{d,w} p(λ_{d,w}|d, C) dλ_{d,w} = (c(w, d) + μλ_{C,w}) / (|d| + μ)

This is precisely the smoothed estimate of the multinomial language model with a Dirichlet prior [28].

2.2.2 Interpolation (Jelinek-Mercer) Smoothing
Another straightforward method is to decompose the query generation model into a mixture of two component models: one is the document language model estimated with the maximum likelihood estimator, and the other is a model estimated from the collection background, p(·|C), which assigns non-zero rate to w.
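Before turning to the interpolation details, the Gamma posterior-mean estimate above is simple to compute. The following minimal sketch (toy counts and a hypothetical three-word vocabulary of our own choosing, not data from the paper) computes λ̂_{d,w} = (c(w, d) + μλ_{C,w}) / (|d| + μ) and checks that, because the collection rates sum to one, the smoothed rates sum to one as well, matching the Dirichlet-smoothed multinomial estimate of [28]:

```python
def gamma_smoothed_rates(doc_counts, collection_rates, mu):
    """Posterior-mean Poisson rates under a Gamma(mu * lambda_{C,w}, mu) prior.

    doc_counts: term -> c(w, d); collection_rates: term -> lambda_{C,w},
    assumed to sum to 1 over the vocabulary (as for a collection model).
    """
    doc_len = sum(doc_counts.values())  # |d|
    return {w: (doc_counts.get(w, 0) + mu * collection_rates[w]) / (doc_len + mu)
            for w in collection_rates}

# Toy example (hypothetical counts): |d| = 10, mu = 100.
doc = {"poisson": 3, "model": 2, "the": 5}
coll = {"poisson": 0.1, "model": 0.2, "the": 0.7}  # rates sum to 1
rates = gamma_smoothed_rates(doc, coll, mu=100.0)

# Since the collection rates sum to 1, so do the smoothed rates; this is
# exactly the Dirichlet-smoothed multinomial estimate.
assert abs(sum(rates.values()) - 1.0) < 1e-9
```

The sketch makes the equivalence noted above concrete: with this prior, per-term Bayesian smoothing of Poisson rates reproduces Dirichlet smoothing term by term.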
For example, we may use an interpolation coefficient between 0 and 1 (i.e., δ ∈ [0, 1]). With this simple interpolation, we can score a document with

    Score(d, q) = Σ_{w∈V} log((1 − δ) p(c(w, q)|d) + δ p(c(w, q)|C))    (2)

Using the maximum likelihood estimator for p(·|d), we have λ_{d,w} = c(w, d)/|d|, and Equation 2 becomes

    Score(d, q) ∝ Σ_{w∈d∩q} [ log(1 + ((1−δ)/δ) · e^{−λ_{d,w}|q|} (λ_{d,w}|q|)^{c(w,q)} / (c(w, q)! · p(c(w, q)|C)))
                              − log(((1−δ)e^{−λ_{d,w}|q|} + δ p(c(w, q) = 0|C)) / (1 − δ + δ p(c(w, q) = 0|C))) ]
                  + Σ_{w∈d} log(((1−δ)e^{−λ_{d,w}|q|} + δ p(c(w, q) = 0|C)) / (1 − δ + δ p(c(w, q) = 0|C)))

We can also use a Poisson language model for p(·|C), or some other frequency-based model. In the retrieval formula above, the first summation can be computed efficiently. The second summation can actually be treated as a document prior, which penalizes long documents. As the second summation is difficult to compute efficiently, we conflate all non-query terms into one pseudo non-query term, denoted as N.
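To make Equation 2 concrete, here is a minimal scoring sketch, not the paper's implementation: function names and the toy counts are illustrative assumptions. It interpolates Poisson probabilities from the MLE document model and a Poisson collection model, over the query terms plus one pseudo non-query term as described above:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam); here lam = rate * |q|."""
    if lam == 0.0:
        return 1.0 if k == 0 else 0.0
    return math.exp(-lam) * lam ** k / math.factorial(k)

def jm_poisson_score(query_counts, doc_counts, coll_counts, delta):
    """Log-likelihood of Equation 2 with the query-terms-plus-pseudo-term vocabulary."""
    q_len = sum(query_counts.values())
    d_len = sum(doc_counts.values())
    c_len = sum(coll_counts.values())
    score = 0.0
    # Query terms: mix document and collection Poisson probabilities.
    for w, k in query_counts.items():
        lam_d = doc_counts.get(w, 0) / d_len    # MLE document rate
        lam_c = coll_counts.get(w, 0) / c_len   # collection rate
        score += math.log((1 - delta) * poisson_pmf(k, lam_d * q_len)
                          + delta * poisson_pmf(k, lam_c * q_len))
    # One pseudo non-query term N, whose frequency in the query is 0.
    lam_d_n = (d_len - sum(doc_counts.get(w, 0) for w in query_counts)) / d_len
    lam_c_n = (c_len - sum(coll_counts.get(w, 0) for w in query_counts)) / c_len
    score += math.log((1 - delta) * poisson_pmf(0, lam_d_n * q_len)
                      + delta * poisson_pmf(0, lam_c_n * q_len))
    return score

# Toy ranking check (hypothetical data): a document containing the query
# term should outrank one that does not.
query = {"poisson": 1}
doc_a = {"poisson": 2, "model": 3}
doc_b = {"model": 5}
collection = {"poisson": 2, "model": 8}
assert jm_poisson_score(query, doc_a, collection, delta=0.5) > \
       jm_poisson_score(query, doc_b, collection, delta=0.5)
```

Note that the pseudo-term contribution depends on how much of the document's mass falls outside the query terms, which is the document-length penalization discussed above.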
Using the pseudo-term formulation and a Poisson collection model, we can rewrite the retrieval formula as

    Score(d, q) ∝ Σ_{w∈d∩q} log(1 + ((1−δ)/δ) · (e^{−λ_{d,w}|q|} (λ_{d,w}|q|)^{c(w,q)}) / (e^{−λ_{C,w}|q|} (λ_{C,w}|q|)^{c(w,q)}))
                  + log(((1−δ)e^{−λ_{d,N}|q|} + δe^{−λ_{C,N}|q|}) / (1 − δ + δe^{−λ_{C,N}|q|}))    (3)

where λ_{d,N} = (|d| − Σ_{w∈q} c(w, d)) / |d| and λ_{C,N} = (|C| − Σ_{w∈q} c(w, C)) / |C|.

2.2.3 Two-Stage Smoothing
As discussed in [29], smoothing plays two roles in retrieval: (1) improving the estimation of the document language model, and (2) explaining the common terms in the query. In order to distinguish the content words from the non-discriminative words in a query, we follow [29] and assume that a query is generated by sampling from a two-component mixture of Poisson language models, with one component being the document model Λ_d and the other a query background language model p(·|U), which models the typical term frequencies in the user's queries. We may then score each document with the query likelihood computed using the following two-stage smoothing model:

    p(c(w, q)|Λ_d, U) = (1 − δ)p(c(w, q)|Λ_d) + δ p(c(w, q)|U)    (4)

where δ is a parameter roughly indicating the amount of noise in q. This looks similar to interpolation smoothing, except that p(·|Λ_d) should now be a smoothed language model rather than the MLE. With no prior knowledge of p(·|U), we can set it to p(·|C). Any smoothing method for the document language model can be used to estimate p(·|d), such as the Gamma smoothing discussed in Section 2.2.1. An empirical study of these smoothing methods is presented in Section 4.

3. ANALYSIS OF POISSON LANGUAGE MODEL
From the previous section, we notice that the Poisson language model has a strong connection to the multinomial language model. This is expected, since they both
belong to the exponential family [26]. However, there are many differences when these two families of models are applied with different smoothing methods. From the perspective of retrieval, will these two language models perform equivalently? If not, which model provides more benefits to retrieval, or provides flexibility which could lead to potential benefits? In this section, we analytically discuss the retrieval features of the Poisson language models, by comparing their behavior with that of the multinomial language models.

3.1 The Equivalence of Basic Models
Let us begin with the assumption that all the query terms appear in every document. Under this assumption, no smoothing is needed. A document can be scored by the log-likelihood of the query with the maximum likelihood estimate:

    Score(d, q) = Σ_{w∈V} log(e^{−λ_{d,w}|q|} (λ_{d,w}|q|)^{c(w,q)} / c(w, q)!)    (5)

Using the MLE, we have λ_{d,w} = c(w, d) / Σ_{w′∈V} c(w′, d). Thus

    Score(d, q) ∝ Σ_{c(w,q)>0} c(w, q) log(c(w, d) / Σ_{w′∈V} c(w′, d))

This is exactly the log-likelihood of the query if the document language model is a multinomial with maximum likelihood estimate. Indeed, even with Gamma smoothing, when plugging λ_{d,w} = (c(w, d) + μλ_{C,w}) / (|d| + μ) and λ_{C,w} = c(w, C)/|C| into Equation 5, it is easy to show that

    Score(d, q) ∝ Σ_{w∈q∩d} c(w, q) log(1 + c(w, d) / (μ · c(w, C)/|C|)) + |q| log(μ / (|d| + μ))    (6)

which is exactly the Dirichlet retrieval formula in [28]. Note that this equivalence holds only when the document length variation is modeled with the Poisson process. This derivation indicates the equivalence of the basic Poisson and multinomial language models for retrieval. With other smoothing strategies, however, the two models would be different. Nevertheless, with this equivalence in basic models, we could expect the Poisson language model to perform comparably to the multinomial language model in retrieval if only simple smoothing is
explored.

Based on this equivalence analysis, one may ask why we should pursue the Poisson language model at all. In the following sections, we show that despite the equivalence of the basic models, the Poisson language model brings extra flexibility for exploring advanced techniques on various retrieval features, which could not be achieved with multinomial language models.

3.2 Term Dependent Smoothing
One flexibility of the Poisson language model is that it provides a natural framework for term-dependent (per-term) smoothing. Existing work on language model smoothing has already shown that different types of queries should be smoothed differently according to how discriminative the query terms are. [7] also predicted that different terms should have different smoothing weights. With multinomial query generation models, a single smoothing coefficient is usually used to control the combination of the document model and the background model [28, 29]. This parameter can be made specific to different queries, but it always has to be constant across terms. This is mandatory because a multinomial language model has the constraint Σ_{w∈V} p(w|d) = 1. From a retrieval perspective, however, different terms may need to be smoothed differently even when they appear in the same query. For example, a non-discriminative term (e.g., "the", "is") should be explained mostly by the background model, while a content term (e.g., "retrieval", "bush") should be explained mostly by the document model. Therefore, a better way of smoothing would be to set the interpolation coefficient (i.e., δ in Formulas 2 and 3) specifically for each term. Since the Poisson language model does not have the sum-to-one constraint across terms, it can easily accommodate per-term smoothing without heuristically twisting the semantics of the generative model, as would be needed for multinomial language models. Below we present a possible way to explore term
dependent smoothing with Poisson language models. Essentially, we want to use a term-specific smoothing coefficient in the linear combination, denoted as δ_w. Intuitively, this coefficient should be larger if w is a common word and smaller if it is a content word. The key problem is to assign reasonable values to δ_w. Empirical tuning is infeasible for so many parameters. We may instead estimate the parameters Δ = {δ_1, ..., δ_{|V|}} by maximizing the likelihood of the query given the mixture model of p(q|Λ_Q) and p(q|U), where Λ_Q is the true query model that generates the query and p(q|U) is a query background model as discussed in Section 2.2.3. With the model p(q|Λ_Q) hidden, the query likelihood is

    p(q|Δ, U) = ∫_{Λ_Q} ∏_{w∈V} ((1 − δ_w)p(c(w, q)|Λ_Q) + δ_w p(c(w, q)|U)) P(Λ_Q|U) dΛ_Q

If we have relevant documents for each query, we can approximate the query model space with the language models of the relevant documents. Without relevant documents, we opt to approximate the query model space with the models of all the documents in the collection. Setting p(·|U) to p(·|C), the query likelihood becomes

    p(q|Δ, U) = Σ_{d∈C} π_d ∏_{w∈V} ((1 − δ_w)p(c(w, q)|Λ̂_d) + δ_w p(c(w, q)|C))

where π_d = p(Λ̂_d|U) and p(·|Λ̂_d) is an estimated Poisson language model for document d. If we have prior knowledge about p(Λ̂_d|U), such as which documents are relevant to the query, we can set π_d accordingly, because what we want is the Δ that maximizes the likelihood of the query given the relevant documents. Without such prior knowledge, we can leave π_d as free parameters and use the EM algorithm to estimate both π_d and Δ. The updating functions are

    π_d^{(k+1)} = π_d^{(k)} ∏_{w∈V} ((1 − δ_w^{(k)})p(c(w, q)|Λ̂_d) + δ_w^{(k)} p(c(w, q)|C))
                  / Σ_{d′∈C} π_{d′}^{(k)} ∏_{w∈V} ((1 − δ_w^{(k)})p(c(w, q)|Λ̂_{d′}) + δ_w^{(k)} p(c(w, q)|C))

and

    δ_w^{(k+1)} = Σ_{d∈C} π_d^{(k+1)} · δ_w^{(k)} p(c(w, q)|C) / ((1 − δ_w^{(k)})p(c(w, q)|Λ̂_d) + δ_w^{(k)} p(c(w, q)|C))

As discussed in [29], we only need to run the EM algorithm for a few iterations, so the computational cost is relatively low. We again assume a vocabulary containing all query terms plus one pseudo non-query term. Note that the updating functions do not give an explicit way of estimating the coefficient for the unseen non-query term; in our experiments, we set it to the average of δ_w over all query terms. With this flexibility, we expect Poisson language models to improve retrieval performance, especially for verbose queries, where the query terms have varied discriminative values. In Section 4, we test this hypothesis empirically.

3.3 Mixture Background Models
Another flexibility is the choice of the background (collection) model (i.e., p(·|U) or p(·|C)). One common assumption in language modeling for information retrieval is that the background model has the same form as the document models [28, 29]. Similarly, we can assume that the collection model is a single Poisson language model, with rates λ_{C,w} = Σ_{d∈C} c(w, d) / |C|. However, this assumption usually does not hold, since the collection is far more complex than a single document. Indeed, the collection usually consists of a mixture of documents with various genres, authors, topics, etc. Treating the collection model as a mixture of document models, instead of a single pseudo-document model, is more reasonable. Existing work on multinomial language modeling has already shown that better modeling of the background improves retrieval performance, e.g., with clusters [15, 10], neighboring documents [25], and aspects [8, 27]. All these approaches can easily be adopted for Poisson language models. However, a common problem of these approaches is that they
all require heavy computation to construct the background model. With Poisson language modeling, we show that it is possible to model the mixture background without paying this heavy computational cost. Poisson mixtures [3] have been proposed to model a collection of documents, and can fit the data much better than a single Poisson. The basic idea is to assume that the collection is generated from a mixture of Poisson models, which has the general form

    p(x = k|PM) = ∫_λ p(λ) p(x = k|λ) dλ

where p(·|λ) is a single Poisson model and p(λ) is an arbitrary probability density function. There are three well-known Poisson mixtures [3]: 2-Poisson, negative binomial, and Katz's K-Mixture [9]. Note that the 2-Poisson model has actually been explored in probabilistic retrieval models, which led to the well-known BM25 formula [22]. All these mixtures have closed forms and can be estimated efficiently from the collection of documents. This is an advantage over multinomial mixture models, such as PLSI [8] and LDA [1], for retrieval. For example, the probability density function of Katz's K-Mixture is given as

    p(c(w) = k|α_w, β_w) = (1 − α_w)η_{k,0} + (α_w/(β_w + 1)) (β_w/(β_w + 1))^k

where η_{k,0} = 1 when k = 0, and 0 otherwise. With the observation of a collection of documents, α_w and β_w can be estimated as

    β_w = (cf(w) − df(w)) / df(w)   and   α_w = cf(w) / (N·β_w)

where cf(w) and df(w) are the collection frequency and document frequency of w, and N is the number of documents in the collection. To account for different document lengths, we assume that β_w is a reasonable estimate for generating a document of the average length avdl, and use β′_w = β_w · |q|/avdl to generate the query. This Poisson mixture model can easily replace p(·|C) in retrieval functions 3 and 4.

3.4 Other Possible Flexibilities
In addition to term dependent smoothing and
efficient mixture background, the Poisson language model has some other potential advantages. For example, in Section 2 we saw that Formula 2 introduces a component that penalizes document length. Intuitively, a document with more unique words is penalized more; on the other hand, a document that is exactly n copies of another document is not over-penalized. This behavior is desirable, and is not achieved by the Dirichlet model [5]. Potentially, this component could penalize a document according to the types of terms it contains; with term-specific settings of δ, we could get even more flexibility for document length normalization. Pseudo-feedback is yet another interesting direction where the Poisson model might show an advantage. With model-based feedback, we could again relax the combination coefficients of the feedback model and the background model, allowing different terms to contribute differently to the feedback model. We could also utilize the relevant documents to learn better per-term smoothing coefficients.

4. EVALUATION
In Section 3, we analytically compared Poisson language models and multinomial language models from the perspective of query generation and retrieval. In this section, we compare the two families of models empirically. Experimental results show that the Poisson model with per-term smoothing outperforms the multinomial model, and that the performance can be further improved with two-stage smoothing. Using a Poisson mixture as the background model also improves retrieval performance.

4.1 Datasets
Since retrieval performance can vary significantly from one test collection to another, and from one query to another, we select four representative TREC test collections: AP, Trec7, Trec8, and Wt2g (Web). To cover different types of queries, we follow [28, 5] and construct short-keyword (SK, keyword title), short-verbose (SV, one-sentence description), and long-verbose (LV, multiple
sentences) queries. The documents are stemmed with Porter's stemmer, and we do not remove any stop words. For each parameter, we vary its value over a reasonably wide range.

4.2 Comparison to Multinomial
We compare the performance of the Poisson retrieval models and the multinomial retrieval models using interpolation (Jelinek-Mercer, JM) smoothing and Bayesian smoothing with conjugate priors. Table 1 shows that the two JM-smoothed models perform similarly on all data sets. Since Dirichlet smoothing for the multinomial language model and Gamma smoothing for the Poisson language model lead to the same retrieval formula, the performance of these two models is presented jointly. We see that the Dirichlet/Gamma smoothing methods outperform both Jelinek-Mercer smoothing methods. The parameter sensitivity curves for the two Jelinek-Mercer smoothing methods are shown in Figure 1; clearly, the two methods perform similarly in terms of both optimality and sensitivity.

[Figure 1: Poisson and multinomial perform similarly with Jelinek-Mercer smoothing (average precision vs. δ on Trec8, for SK, SV, and LV queries).]

Table 1: Performance comparison between Poisson and multinomial retrieval models: the basic models perform comparably; term dependent two-stage smoothing significantly improves Poisson. An asterisk (*) indicates that the difference between the performance of the term dependent two-stage smoothing and that of the Dirichlet/Gamma single smoothing is statistically significant according to the Wilcoxon signed rank test at the level of 0.05.

                  JM-Multinomial         JM-Poisson             Dirichlet/Gamma        Per-term 2-Stage Poisson
Data     Query  MAP   InitPr Pr@5d    MAP   InitPr Pr@5d    MAP   InitPr Pr@5d    MAP    InitPr Pr@5d
AP88-89  SK     0.203 0.585  0.356    0.203 0.585  0.358    0.224 0.629  0.393    0.226  0.630  0.396
         SV     0.187 0.580  0.361    0.183 0.571  0.345    0.204 0.613  0.387    0.217* 0.603  0.390
         LV     0.283 0.716  0.480    0.271 0.692  0.470    0.291 0.710  0.496    0.304* 0.695  0.510
Trec7    SK     0.167 0.635  0.400    0.168 0.635  0.404    0.186 0.687  0.428    0.185  0.646  0.436
         SV     0.174 0.655  0.432    0.176 0.653  0.432    0.182 0.666  0.432    0.196* 0.660  0.440
         LV     0.223 0.730  0.496    0.215 0.766  0.488    0.224 0.748  0.520    0.236* 0.738  0.512
Trec8    SK     0.239 0.621  0.440    0.239 0.621  0.436    0.257 0.718  0.496    0.256  0.704  0.468
         SV     0.231 0.686  0.448    0.234 0.702  0.456    0.228 0.691  0.456    0.246* 0.692  0.476
         LV     0.265 0.796  0.548    0.261 0.757  0.520    0.260 0.741  0.492    0.274* 0.766  0.508
Web      SK     0.250 0.616  0.380    0.250 0.616  0.380    0.302 0.767  0.468    0.307  0.739  0.468
         SV     0.214 0.611  0.392    0.217 0.609  0.384    0.273 0.693  0.508    0.292* 0.703  0.480
         LV     0.266 0.790  0.464    0.259 0.776  0.452    0.283 0.756  0.496    0.311* 0.759  0.488

This similarity of performance is expected, as discussed in Section 3.1. Although the Poisson and multinomial models are similar in their basic forms and with simple smoothing methods, the Poisson model has greater potential and flexibility for further improvement. As shown in the rightmost columns of Table 1, the term dependent two-stage Poisson model consistently outperforms the basic smoothing models, especially for verbose queries. This model is given in Formula 4, with Gamma smoothing for the document model p(·|d) and a term-dependent δ_w. The parameter μ of the first-stage Gamma smoothing is tuned empirically; the combination coefficients Δ are estimated with the EM algorithm of Section 3.2. The parameter sensitivity curves for Dirichlet/Gamma and the per-term two-stage smoothing model are plotted in Figure 2. The per-term two-stage smoothing method is less sensitive to the parameter μ than Dirichlet/Gamma, and yields better optimal performance.

[Figure 2: Term dependent two-stage smoothing of Poisson outperforms Dirichlet/Gamma (average precision vs. μ on AP, SV queries).]

In the following subsections, we conduct experiments to demonstrate how the flexibility of the Poisson model can be exploited to achieve performance that cannot be achieved with multinomial language models.

4.3 Term Dependent Smoothing
To test the effectiveness of term dependent smoothing, we conduct two experiments. In the first, we relax the constant coefficient in the simple Jelinek-Mercer smoothing formula (i.e., Formula 3) and use the EM algorithm proposed in Section 3.2 to find a δ_w for each unique term. Since the EM algorithm iteratively estimates the parameters, we usually do not want p(·|d) to be zero; we therefore use a simple Laplace method to slightly smooth the document model before it enters the EM iterations. The documents are then still scored with Formula 3, but using the learned δ_w. The results are labeled JM+L. in Table 2.

Table 2: Term dependent smoothing improves retrieval performance (MAP). An asterisk (*) in the JM+L. column indicates that the difference between the JM+L. method and the JM method is statistically significant; an asterisk (*) in the rightmost column means that the difference between the term dependent two-stage method and the query dependent two-stage method is statistically significant. PT stands for per-term.

Data    Q    JM      JM      JM+L.    2-Stage  2-Stage
        PT:  No      Yes     Yes      No       Yes
AP      SK   0.203   0.204   0.206    0.223    0.226*
        SV   0.183   0.189   0.214*   0.204    0.217*
Trec7   SK   0.168   0.171   0.174    0.186    0.185
        SV   0.176   0.147   0.198*   0.194    0.196
Trec8   SK   0.239   0.240   0.227*   0.257    0.256
        SV   0.234   0.223   0.249*   0.242    0.246*
Web     SK   0.250   0.236   0.220*   0.291    0.307*
        SV   0.217   0.232   0.261*   0.273    0.292*

With term dependent coefficients, the performance of the Jelinek-Mercer Poisson model is improved in most cases. However, in some
cases (e.g., Trec7/SV), it performs poorly. This may be caused by problems in the EM estimation with unsmoothed document models: once non-zero probability is assigned to all terms before entering the EM iterations, the performance on verbose queries improves significantly. This indicates that there is still room for better methods of estimating δ_w. Note that neither the per-term JM method nor the JM+L. method has a parameter to tune.

As shown in Table 1, term dependent two-stage smoothing can significantly improve retrieval performance. To understand whether the improvement comes from the term dependent smoothing or from the two-stage smoothing framework itself, we designed another experiment comparing per-term two-stage smoothing with the two-stage smoothing method proposed in [29]. Their method finds coefficients specific to each query, so a verbose query uses a higher δ; however, since their model is based on multinomial language modeling, it cannot produce per-term coefficients. We adapt their method to Poisson two-stage smoothing, estimating a per-query coefficient shared by all the terms, and compare such a model with the per-term two-stage smoothing model in the two rightmost columns of Table 2. Again, the per-term two-stage smoothing outperforms the per-query two-stage smoothing, especially for verbose queries. The improvement is not as large as that of per-term smoothing over Dirichlet/Gamma; this is expected, since per-query smoothing has already addressed the query discrimination problem to some extent. This experiment shows that even when smoothing is already per-query, making it per-term is still beneficial. In brief, per-term smoothing improved the retrieval performance of both the one-stage and the two-stage smoothing methods.

4.4 Mixture Background Model
In this section, we conduct experiments to examine the
benefits of using a mixture background model without extra computational cost, which cannot be achieved with multinomial models. Specifically, in retrieval formula 3, instead of using a single Poisson distribution to model the background p(·|C), we use Katz's K-Mixture model, which is essentially a mixture of Poisson distributions. As discussed in Section 3.3, p(·|C) can then be computed efficiently from simple collection statistics.

Table 3: K-Mixture background model improves retrieval performance (MAP). An asterisk (*) indicates a statistically significant improvement.

Data     Query  JM Poisson  JM K-Mixture
AP       SK     0.203       0.204
         SV     0.183       0.188*
Trec-7   SK     0.168       0.169
         SV     0.176       0.178*
Trec-8   SK     0.239       0.239
         SV     0.234       0.238*
Web      SK     0.250       0.250
         SV     0.217       0.223*

The performance of the JM retrieval model with a single Poisson background and with Katz's K-Mixture background is compared in Table 3. Clearly, the K-Mixture background model outperforms the single Poisson background model in most cases, especially for verbose queries, where the improvement is statistically significant. Figure 3 shows how performance changes over different parameter values for short verbose queries; the model using the K-Mixture background is less sensitive than the one using the single Poisson background.

[Figure 3: The K-Mixture background model reduces the parameter sensitivity for verbose queries (average precision vs. δ on Trec8, SV queries).]

Given that this type of mixture background model does not require any extra computational cost, it would be interesting to study whether other Poisson mixtures, such as the 2-Poisson and the negative binomial, could also help performance.

5. RELATED WORK
To the best of our knowledge, there has been no previous study of query generation models based on the Poisson distribution. Language models have been shown to be effective for many retrieval tasks [21, 28, 14, 4]; the most popular and fundamental one is the
query-generation language model [21, 13]. All existing query generation language models are based on either the multinomial distribution [19, 6, 28, 13] or the multivariate Bernoulli distribution [21, 17, 18]. We introduce a new family of language models based on the Poisson distribution. The Poisson distribution has previously been studied in document generation models [16, 22, 3, 24], leading to the development of one of the most effective retrieval formulas, BM25 [23]. [24] studies the parallel derivation of three different retrieval models, which is related to our comparison of Poisson and multinomial; however, the Poisson model in that work remains within the document generation framework, and it does not account for document length variation. [26] introduces a way to empirically search for an exponential model of the documents. Poisson mixtures [3] such as the 2-Poisson [22], the negative binomial, and Katz's K-Mixture [9] have been shown to be effective for modeling and retrieving documents. Once again, none of this work explores the Poisson distribution in the query generation framework. Language model smoothing [2, 28, 29] and background structures [15, 10, 25, 27] have been studied for multinomial language models. [7] shows analytically that term-specific smoothing could be useful. We show, both analytically and empirically, that the Poisson language model naturally accommodates per-term smoothing without heuristically twisting the semantics of the generative model, and that it can better model the mixture background efficiently.

6. CONCLUSIONS
We present a new family of query generation language models for retrieval based on the Poisson distribution. We derive several smoothing methods for this family of models, including single-stage and two-stage smoothing. We compare the new models with the popular multinomial retrieval models both analytically and experimentally. Our analysis shows that while the new models and the multinomial models are equivalent under some
assumptions, they generally differ in several important ways.\nIn particular, we show that Poisson has an advantage over multinomial in naturally accommodating per-term smoothing.\nWe exploit this property to develop a new per-term smoothing algorithm for Poisson language models, which is shown to outperform term-independent smoothing for both Poisson and multinomial models.\nFurthermore, we show that a mixture background model for Poisson can be used to improve the performance and robustness over the standard Poisson background model.\nOur work opens up many interesting directions for further exploration in this new family of models.\nFurther exploring the flexibilities over multinomial language models, such as length normalization and pseudo-feedback, could be good future work.\nIt is also appealing to find robust methods to learn the per-term smoothing coefficients without additional computation cost.\n7.\nACKNOWLEDGMENTS\nWe thank the anonymous SIGIR '07 reviewers for their useful comments.\nThis material is based in part upon work supported by the National Science Foundation under award numbers IIS-0347933 and 0425852.\n8.\nREFERENCES\n[1] D. Blei, A. Ng, and M. Jordan.\nLatent Dirichlet allocation.\nJournal of Machine Learning Research, 3:993-1022, 2003.\n[2] S. F. Chen and J. Goodman.\nAn empirical study of smoothing techniques for language modeling.\nTechnical Report TR-10-98, Harvard University, 1998.\n[3] K. Church and W. Gale.\nPoisson mixtures.\nNatural Language Engineering, 1(2):163-190, 1995.\n[4] W. B. Croft and J. Lafferty, editors.\nLanguage Modeling and Information Retrieval.\nKluwer Academic Publishers, 2003.\n[5] H. Fang, T. Tao, and C. Zhai.\nA formal study of information retrieval heuristics.\nIn Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 49-56, 2004.\n[6] D. 
Hiemstra.\nUsing Language Models for Information Retrieval.\nPhD thesis, University of Twente, Enschede, Netherlands, 2001.\n[7] D. Hiemstra.\nTerm-specific smoothing for the language modeling approach to information retrieval: the importance of a query term.\nIn Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 35-41, 2002.\n[8] T. Hofmann.\nProbabilistic latent semantic indexing.\nIn Proceedings of ACM SIGIR '99, pages 50-57, 1999.\n[9] S. M. Katz.\nDistribution of content words and phrases in text and language modelling.\nNatural Language Engineering, 2(1):15-59, 1996.\n[10] O. Kurland and L. Lee.\nCorpus structure, language models, and ad-hoc information retrieval.\nIn Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 194-201, 2004.\n[11] J. Lafferty and C. Zhai.\nDocument language models, query models, and risk minimization for information retrieval.\nIn Proceedings of SIGIR '01, pages 111-119, Sept 2001.\n[12] J. Lafferty and C. Zhai.\nProbabilistic IR models based on query and document generation.\nIn Proceedings of the Language Modeling and IR workshop, pages 1-5, May 31 - June 1 2001.\n[13] J. Lafferty and C. Zhai.\nProbabilistic relevance models based on document and query generation.\nIn W. B. Croft and J. Lafferty, editors, Language Modeling and Information Retrieval.\nKluwer Academic Publishers, 2003.\n[14] V. Lavrenko and B. Croft.\nRelevance-based language models.\nIn Proceedings of SIGIR '01, pages 120-127, Sept 2001.\n[15] X. Liu and W. B. Croft.\nCluster-based retrieval using language models.\nIn Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 186-193, 2004.\n[16] E. L. Margulis.\nModelling documents with multiple Poisson distributions.\nInformation Processing and Management, 29(2):215-227, 1993.\n[17] A. McCallum and K. 
Nigam.\nA comparison of event models for Naive Bayes text classification.\nIn Proceedings of the AAAI-98 Workshop on Learning for Text Categorization, 1998.\n[18] D. Metzler, V. Lavrenko, and W. B. Croft.\nFormal multiple-Bernoulli models for language modeling.\nIn Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 540-541, 2004.\n[19] D. H. Miller, T. Leek, and R. Schwartz.\nA hidden Markov model information retrieval system.\nIn Proceedings of the 1999 ACM SIGIR Conference on Research and Development in Information Retrieval, pages 214-221, 1999.\n[20] A. Papoulis.\nProbability, Random Variables and Stochastic Processes.\nMcGraw-Hill, New York, 2nd edition, 1984.\n[21] J. M. Ponte and W. B. Croft.\nA language modeling approach to information retrieval.\nIn Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 275-281, 1998.\n[22] S. Robertson and S. Walker.\nSome simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval.\nIn Proceedings of SIGIR '94, pages 232-241, 1994.\n[23] S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford.\nOkapi at TREC-3.\nIn D. K. Harman, editor, The Third Text REtrieval Conference (TREC-3), pages 109-126, 1995.\n[24] T. Roelleke and J. Wang.\nA parallel derivation of probabilistic information retrieval models.\nIn Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 107-114, 2006.\n[25] T. Tao, X. Wang, Q. Mei, and C. Zhai.\nLanguage model information retrieval with document expansion.\nIn Proceedings of HLT\/NAACL 2006, pages 407-414, 2006.\n[26] J. Teevan and D. R. 
Karger.\nEmpirical development of an exponential probabilistic model for text retrieval: using textual analysis to build a better model.\nIn Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, pages 18-25, 2003.\n[27] X. Wei and W. B. Croft.\nLDA-based document models for ad-hoc retrieval.\nIn Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 178-185, 2006.\n[28] C. Zhai and J. Lafferty.\nA study of smoothing methods for language models applied to ad-hoc information retrieval.\nIn Proceedings of ACM SIGIR '01, pages 334-342, Sept 2001.\n[29] C. Zhai and J. Lafferty.\nTwo-stage language models for information retrieval.\nIn Proceedings of ACM SIGIR '02, pages 49-56, Aug 2002.","lvl-2":"A Study of Poisson Query Generation Model for Information Retrieval\nABSTRACT\nMany variants of language models have been proposed for information retrieval.\nMost existing models are based on the multinomial distribution and score documents by the query likelihood computed from a query generation probabilistic model.\nIn this paper, we propose and study a new family of query generation models based on the Poisson distribution.\nWe show that while in their simplest forms, the new family of models and the existing multinomial models are equivalent, they behave differently for many smoothing methods.\nWe show that the Poisson model has several advantages over the multinomial model, including naturally accommodating per-term smoothing and allowing for more accurate background modeling.\nWe present several variants of the new model corresponding to different smoothing methods, and evaluate them on four representative TREC test collections.\nThe results show that while their basic models perform comparably, the Poisson model can outperform the multinomial model with per-term smoothing.\nThe performance can be further improved with two-stage smoothing.\n1.\nINTRODUCTION\nAs a new type of probabilistic retrieval model, language models have been shown to be effective for many retrieval tasks [21, 28, 14, 4].\nAmong the many variants of language models proposed, the most popular and fundamental one is the query-generation language model [21, 13], which leads to the query-likelihood scoring method for ranking documents.\nIn such a model, given a query q and a document d, we compute the likelihood of \"generating\" query q with a model estimated based on document d, i.e., the conditional probability p (q | d).\nWe can then rank documents based on the likelihood of generating the query.\nVirtually all the existing query generation language models are based on either the multinomial distribution [19, 6, 28] or the multivariate Bernoulli distribution [21, 18].\nThe 
multinomial distribution is especially popular and also shown to be quite effective.\nThe heavy use of multinomial distribution is partly due to the fact that it has been successfully used in speech recognition, where multinomial distribution is a natural choice for modeling the occurrence of a particular word in a particular position in text.\nCompared with multivariate Bernoulli, multinomial distribution has the advantage of being able to model the frequency of terms in the query; in contrast, multivariate Bernoulli only models the presence and absence of query terms, thus cannot capture different frequencies of query terms.\nHowever, multivariate Bernoulli also has one potential advantage over multinomial from the viewpoint of retrieval: in a multinomial distribution, the probabilities of all the terms must sum to 1, making it hard to accommodate per-term smoothing, while in a multivariate Bernoulli, the presence probabilities of different terms are completely independent of each other, easily accommodating per-term smoothing and weighting.\nNote that term absence is also indirectly captured in a multinomial model through the constraint that all the term probabilities must sum to 1.\nIn this paper, we propose and study a new family of query generation models based on the Poisson distribution.\nIn this new family of models, we model the frequency of each term independently with a Poisson distribution.\nTo score a document, we would first estimate a multivariate Poisson model based on the document, and then score it based on the likelihood of the query given by the estimated Poisson model.\nIn some sense, the Poisson model combines the advantage of multinomial in modeling term frequency and the advantage of the multivariate Bernoulli in accommodating per-term smoothing.\nIndeed, similar to the multinomial distribution, the Poisson distribution models term frequencies, but without the constraint that all the term probabilities must sum to 1, and similar to 
multivariate Bernoulli, it models each term independently, thus can easily accommodate per-term smoothing.\nAs in the existing work on multinomial language models, smoothing is critical for this new family of models.\nWe derive several smoothing methods for the Poisson model in parallel to those used for multinomial distributions, and compare the corresponding retrieval models with those based on multinomial distributions.\nWe find that while with some smoothing methods, the new model and the multinomial model lead to exactly the same formula, with some other smoothing methods they diverge, and the Poisson model brings in more flexibility for smoothing.\nIn particular, a key difference is that the Poisson model can naturally accommodate per-term smoothing, which is hard to achieve with a multinomial model without heuristically twisting the semantics of a generative model.\nWe exploit this potential advantage to develop a new term-dependent smoothing algorithm for the Poisson model and show that this new smoothing algorithm can improve performance over term-independent smoothing algorithms using either the Poisson or the multinomial model.\nThis advantage is seen for both one-stage and two-stage smoothing.\nAnother potential advantage of the Poisson model is that its corresponding background model for smoothing can be improved through using a mixture model that has a closed-form formula.\nThis new background model is shown to outperform the standard background model and reduce the sensitivity of retrieval performance to the smoothing parameter.\nThe rest of the paper is organized as follows.\nIn Section 2, we introduce the new family of query generation models with Poisson distribution, and present various smoothing methods which lead to different retrieval functions.\nIn Section 3, we analytically compare the Poisson language model with the multinomial language model, from the perspective of retrieval.\nWe then design empirical experiments to compare the two families of language 
models in Section 4.\nWe discuss the related work in Section 5 and conclude in Section 6.\n2.\nQUERY GENERATION WITH POISSON PROCESS\nIn the query generation framework, a basic assumption is that a query is generated with a model estimated based on a document.\nIn most existing work [12, 6, 28, 29], people assume that each query word is sampled independently from a multinomial distribution.\nAlternatively, we assume that a query is generated by sampling the frequency of words from a series of independent Poisson processes [20].\n2.1 The Generation Process\nLet V = {w1,..., wn} be a vocabulary set.\nLet w be a piece of text composed by an author and \u27e8c (w1, w),..., c (wn, w)\u27e9 be a frequency vector representing w, where c (wi, w) is the frequency count of term wi in text w.\nIn retrieval, w could be either a query or a document.\nWe consider the frequency counts of the n unique terms in w as n different types of events, sampled from n independent homogeneous Poisson processes, respectively.\nSuppose t is the time period during which the author composed the text.\nWith a homogeneous Poisson process, the frequency count of each event, i.e., the number of occurrences of wi, follows a Poisson distribution with associated parameter \u03bbit, where \u03bbi is a rate parameter characterizing the expected number of wi in a unit time.\nThe probability mass function of such a Poisson distribution is given by\np (c (wi, w) = k) = e ^ {\u2212\u03bbit} (\u03bbit) ^ k \/ k!\nWithout loss of generality, we set t to the length of the text w (people write one word in a unit time), i.e., t = | w |.\nWith n such independent Poisson processes, each explaining the generation of one term in the vocabulary, the likelihood of w to be generated from such Poisson processes can be written as\np (w | \u039b) = \u220f_ {i = 1} ^ n e ^ {\u2212\u03bbi | w |} (\u03bbi | w |) ^ {c (wi, w)} \/ c (wi, w)!\nwhere \u039b = {\u03bb1,..., \u03bbn} and | w | = \u2211_ {i = 1} ^ n c (wi, w).\nWe refer to these n independent Poisson processes with parameter \u039b as a Poisson Language Model.\nLet D = {d1,..., dm} be an observed set of document samples generated from the Poisson process above.\nThe 
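The generation process and its length-normalized maximum likelihood estimate can be sketched numerically as follows; `mle_rates` and `log_likelihood` are illustrative names of ours (not from the paper), assuming pre-tokenized texts and a vocabulary restricted to terms seen in the sample.

```python
import math
from collections import Counter

def mle_rates(docs):
    """Length-normalized MLE of the rates: lambda_i = total count of w_i / total length."""
    counts, total_len = Counter(), 0
    for d in docs:
        counts.update(d)
        total_len += len(d)
    return {w: c / total_len for w, c in counts.items()}

def log_likelihood(text, rates):
    """log p(text | Lambda): independent Poissons, each with mean lambda_i * |text|.

    Terms with rate > 0 but count 0 still contribute their -lambda_i * |text| term;
    an unsmoothed model would assign zero probability to a text term outside
    `rates`, which is exactly the problem smoothing (Section 2.2) addresses.
    """
    t, counts, ll = len(text), Counter(text), 0.0
    for w, lam in rates.items():
        k = counts.get(w, 0)
        mean = lam * t
        # Poisson log-pmf: k * log(mean) - mean - log(k!)
        ll += k * math.log(mean) - mean - math.lgamma(k + 1)
    return ll

rates = mle_rates([["poisson", "model", "poisson"], ["query", "model"]])
assert abs(rates["poisson"] - 2 / 5) < 1e-12
assert log_likelihood(["poisson", "model"], rates) < 0.0
```

Note that, unlike a multinomial model, the likelihood runs over every term with a non-zero rate, not just the terms appearing in the text.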
maximum likelihood estimate (MLE) of \u03bbi is\n\u03bbi = (\u2211_ {j = 1} ^ m c (wi, dj)) \/ (\u2211_ {j = 1} ^ m | dj |)\nNote that this MLE is different from the MLE for the Poisson distribution without considering the document lengths, which appears in [22, 24].\nGiven a document d, we may estimate a Poisson language model \u039bd using d as a sample.\nThe likelihood that a query q is generated from the document language model \u039bd can be written as\np (q | \u039bd) = \u220f_ {w \u2208 V} e ^ {\u2212\u03bbd, w | q |} (\u03bbd, w | q |) ^ {c (w, q)} \/ c (w, q)! (1)\nThis representation is clearly different from the multinomial query generation model, as (1) the likelihood includes all the terms in the vocabulary V, instead of only those appearing in q, and (2) instead of the appearance of terms, the event space of this model is the frequencies of each term.\nIn practice, we have the flexibility to choose the vocabulary V.\nAt one extreme, we can use the vocabulary of the whole collection.\nHowever, this may bring in noise and considerable computational cost.\nAt the other extreme, we may focus on the terms in the query and ignore other terms, but some useful information may be lost by ignoring the non-query terms.\nAs a compromise, we may conflate all the non-query terms as one single pseudo-term.\nIn other words, we may assume that there is exactly one \"non-query term\" in the vocabulary for each query.\nIn our experiments, we adopt this \"pseudo non-query term\" strategy.\nA document can be scored with the likelihood in Equation 1.\nHowever, if a query term is unseen in the document, the MLE of the Poisson distribution would assign zero probability to the term, causing the probability of the query to be zero.\nAs in existing language modeling approaches, the main challenge of constructing a reasonable retrieval model is to find a smoothed language model for p (\u00b7 | d).\n2.2 Smoothing in Poisson Retrieval Model\nIn general, we want to assign non-zero rates for the query terms that are not seen in document d.\nMany smoothing methods have been proposed for multinomial language models [2, 28, 29].\nIn general, we have to discount the 
probabilities of some words seen in the text to leave some extra probability mass to assign to the unseen words.\nIn Poisson language models, however, we do not have the same constraint as in a multinomial model (i.e., Σ_{w∈V} p(w | d) = 1).\nThus we do not have to discount the probability of seen words in order to give a non-zero rate to an unseen word.\nInstead, we only need to guarantee that Σ_{k=0,1,2,...} p(c(w, d) = k | d) = 1.\nIn this section, we introduce three different strategies to smooth a Poisson language model, and show how they lead to different retrieval functions.\n2.2.1 Bayesian Smoothing using Gamma Prior\nFollowing the risk minimization framework in [11], we assume that a document is generated by the arrival of terms in a time period of |d| according to the document language model, which essentially consists of a vector of Poisson rates, one for each term, i.e., Λd = (λd,1,..., λd,|V|).\nA query is assumed to be generated from a potentially different model.\nGiven a particular document d, we want to estimate Λd.\nThe rate of a term is estimated independently of the other terms.\nWe use Bayesian estimation with the following Gamma prior, which has two parameters, α and β:\np(λ | α, β) = (β^α / Γ(α)) λ^{α−1} e^{−βλ}.\nFor each term w, the parameters αw and βw are chosen to be αw = µ·λC,w and βw = µ, where µ is a parameter and λC,w is the rate of w estimated from some background language model, usually the "collection language model".\nThe posterior distribution of Λd is again a product of Gamma distributions, whose mean gives the smoothed estimate\nλ̂d,w = (c(w, d) + µ·λC,w) / (|d| + µ).\nThis is precisely the smoothed estimate of the multinomial language model with Dirichlet prior [28].\n2.2.2 Interpolation (Jelinek-Mercer) Smoothing\nAnother straightforward method is to decompose the query generation model as a mixture of two component models.\nOne is the document language model estimated with the maximum likelihood estimator, and the other is a model estimated from the collection background, p(· | C), which assigns 
non-zero rate to w. For example, we may use an interpolation coefficient between 0 and 1 (i.e., δ ∈ [0, 1]).\nWith this simple interpolation, we can score a document with\nWe can also use a Poisson language model for p(· | C), or use some other frequency-based models.\nIn the retrieval formula above, the first summation can be computed efficiently.\nThe second summation can actually be treated as a document prior, which penalizes long documents.\nAs the second summation is difficult to compute efficiently, we conflate all non-query terms into one pseudo "non-query term", denoted "N".\nUsing the pseudo-term formulation and a Poisson collection model, we can rewrite the retrieval formula as\nwhere λd,N = |d| − Σ_{w∈q} c(w, d) and λC,N = |C| − Σ_{w∈q} c(w, C).\n2.2.3 Two-Stage Smoothing\nAs discussed in [29], smoothing plays two roles in retrieval: (1) to improve the estimation of the document language model, and (2) to explain the common terms in the query.\nIn order to distinguish the content words from the non-discriminative words in a query, we follow [29] and assume that a query is generated by sampling from a two-component mixture of Poisson language models, with one component being the document model Λd and the other being a query background language model p(· | U).\np(· | U) models the "typical" term frequencies in the user's queries.\nWe may then score each document with the query likelihood computed using the following two-stage smoothing model:\nwhere δ is a parameter, roughly indicating the amount of "noise" in q.\nThis looks similar to the interpolation smoothing, except that p(· | Λd) now should be a smoothed language model, instead of one estimated with the MLE.\nWith no prior knowledge of p(· | U), we could set it to p(· | C).\nAny smoothing method for the document language model can be used to estimate p(· | d), such as the Gamma smoothing discussed in Section 
2.2.1.\nThe empirical study of the smoothing methods is presented in Section 4.\n3.\nANALYSIS OF POISSON LANGUAGE MODEL\nFrom the previous section, we notice that the Poisson language model has a strong connection to the multinomial language model.\nThis is expected since they both belong to the exponential family [26].\nHowever, there are many differences when these two families of models are applied with different smoothing methods.\nFrom the perspective of retrieval, will these two language models perform equivalently?\nIf not, which model provides more benefits to retrieval, or provides flexibility which could lead to potential benefits?\nIn this section, we analytically discuss the retrieval features of the Poisson language models, by comparing their behavior with that of the multinomial language models.\n3.1 The Equivalence of Basic Models\nLet us begin with the assumption that all the query terms appear in every document.\nUnder this assumption, no smoothing is needed.\nA document can be scored by the log likelihood of the query with the maximum likelihood estimate.\nUsing the MLE, we have λd,w = c(w, d)/|d|.\nThis is exactly the log likelihood of the query if the document language model is a multinomial with the maximum likelihood estimate.\nIndeed, even with Gamma smoothing, when plugging λd,w = (c(w, d) + µ·λC,w)/(|d| + µ) into the query likelihood, we obtain a retrieval formula which is exactly the Dirichlet retrieval formula in [28].\nNote that this equivalence holds only when the document length variation is modeled with a Poisson process.\nThis derivation indicates the equivalence of the basic Poisson and multinomial language models for retrieval.\nWith other smoothing strategies, however, the two models would be different.\nNevertheless, with this equivalence in basic models, we could expect the Poisson language model to perform comparably to the multinomial language model in retrieval, if only simple smoothing is explored.\nBased on this equivalence analysis, one may ask why we should pursue the 
Poisson language model.\nIn the following sections, we show that despite the equivalence in their basic models, the Poisson language model brings in extra flexibility for exploring advanced techniques on various retrieval features, which could not be achieved with multinomial language models.\n3.2 Term Dependent Smoothing\nOne flexibility of the Poisson language model is that it provides a natural framework to accommodate term dependent (per-term) smoothing.\nExisting work on language model smoothing has already shown that different types of queries should be smoothed differently according to how discriminative the query terms are.\n[7] also predicted that different terms should have different smoothing weights.\nWith multinomial query generation models, people usually use a single smoothing coefficient to control the combination of the document model and the background model [28, 29].\nThis parameter can be made specific for different queries, but always has to be a constant for all the terms.\nThis is mandatory since a multinomial language model has the constraint that Σ_{w∈V} p(w | d) = 1.\nHowever, from a retrieval perspective, different terms may need to be smoothed differently even if they are in the same query.\nFor example, a non-discriminative term (e.g., "the", "is") is expected to be explained more with the background model, while a content term (e.g., "retrieval", "bush") in the query should be explained with the document model.\nTherefore, a better way of smoothing would be to set the interpolation coefficient (i.e., δ in Formula 2 and Formula 3) specifically for each term.\nSince the Poisson language model does not have the "sum-to-one" constraint across terms, it can easily accommodate per-term smoothing without needing to heuristically twist the semantics of a generative model as in the case of multinomial language models.\nBelow we present a possible way to explore term dependent smoothing with Poisson language models.\nEssentially, we 
want to use a term-specific smoothing coefficient δ in the linear combination, denoted as δw.\nThis coefficient should intuitively be larger if w is a common word and smaller if it is a content word.\nThe key problem is to find a method to assign reasonable values to δw.\nEmpirical tuning is infeasible for so many parameters.\nWe may instead estimate the parameters Δ = {δ1,..., δ|V|} by maximizing the likelihood of the query given the mixture model of p(q | ΛQ) and p(q | U), where ΛQ is the "true" query model generating the query and p(q | U) is a query background model as discussed in Section 2.2.3.\nWith the model p(q | ΛQ) hidden, the query likelihood is obtained by marginalizing over the possible query models.\nIf we have relevant documents for each query, we can approximate the query model space with the language models of all the relevant documents.\nWithout relevant documents, we opt to approximate the query model space with the models of all the documents in the collection.\nSetting p(· | U) to p(· | C), the query likelihood becomes\np(q | Δ) = Σ_{d∈C} πd Π_{w∈V} ((1 − δw) p(c(w, q) | Λ̂d) + δw p(c(w, q) | C)),\nwhere πd = p(Λ̂d | U) and p(· | Λ̂d) is an estimated Poisson language model for document d.\nIf we have prior knowledge on p(Λ̂d | U), such as which documents are relevant to the query, we can set πd accordingly, because what we want is to find the Δ that maximizes the likelihood of the query given relevant documents.\nWithout this prior knowledge, we can leave the πd as free parameters, and use the EM algorithm to estimate πd and Δ.\nThe updating functions are given as\nπd ← πd Π_{w∈V} ((1 − δw) p(c(w, q) | Λ̂d) + δw p(c(w, q) | C)) / Σ_{d'∈C} πd' Π_{w∈V} ((1 − δw) p(c(w, q) | Λ̂d') + δw p(c(w, q) | C)) and\nδw ← Σ_{d∈C} πd · δw p(c(w, q) | C) / ((1 − δw) p(c(w, q) | Λ̂d) + δw p(c(w, q) | C)).\nAs discussed in [29], we only need to run the EM algorithm for several iterations, thus the computational cost is relatively low.\nWe again assume that our vocabulary contains all the query terms plus a pseudo non-query term.\nNote that the function does not give an explicit way of estimating the coefficient for the 
unseen non-query term.\nIn our experiments, we set it to the average of δw over all query terms.\nWith this flexibility, we expect that Poisson language models can improve the retrieval performance, especially for verbose queries, where the query terms have varying discriminative values.\nIn Section 4, we use empirical experiments to test this hypothesis.\n3.3 Mixture Background Models\nAnother flexibility is to explore different background (collection) models (i.e., p(· | U) or p(· | C)).\nOne common assumption made in language modeling information retrieval is that the background model is a homogeneous model of the same form as the document models [28, 29].\nSimilarly, we can also make the assumption that the collection model is a Poisson language model, with the rates λC,w = c(w, C)/|C|.\nHowever, this assumption usually does not hold, since the collection is far more complex than a single document.\nIndeed, the collection usually consists of a mixture of documents with various genres, authors, topics, etc. 
Treating the collection model as a mixture of document models, instead of as a single "pseudo-document model", is more reasonable.\nExisting work on multinomial language modeling has already shown that better modeling of the background improves the retrieval performance, e.g., with clusters [15, 10], neighbor documents [25], and aspects [8, 27].\nAll these approaches can be easily adapted to Poisson language models.\nHowever, a common problem of these approaches is that they all require heavy computation to construct the background model.\nWith Poisson language modeling, we show that it is possible to model the mixture background without paying the heavy computational cost.\nPoisson mixtures [3] have been proposed to model a collection of documents, and can fit the data much better than a single Poisson.\nThe basic idea is to assume that the collection is generated from a mixture of Poisson models, which has the general form\np(c(w, d) = k) = ∫ p(c(w, d) = k | λ) p(λ) dλ,\nwhere p(· | λ) is a single Poisson model and p(λ) is an arbitrary probability density function.\nThere are three well-known Poisson mixtures [3]: 2-Poisson, negative binomial, and Katz's K-Mixture [9].\nNote that the 2-Poisson model has actually been explored in probabilistic retrieval models, which led to the well-known BM25 formula [22].\nAll these mixtures have closed forms, and can be estimated from the collection of documents efficiently.\nThis is an advantage over the multinomial mixture models, such as PLSI [8] and LDA [1], for retrieval.\nFor example, the probability density function of Katz's K-Mixture is given as\np(c(w, d) = k) = (1 − αw) δk,0 + (αw / (βw + 1)) (βw / (βw + 1))^k,\nwhere δk,0 = 1 when k = 0, and 0 otherwise.\nFrom the observation of a collection of documents, αw and βw can be estimated as βw = (cf(w) − df(w)) / df(w) and αw·βw = cf(w) / N, where cf(w) and df(w) are the collection frequency and document frequency of w, and N is the number of documents in the collection.\nTo account for the different document lengths, we assume that βw is a reasonable estimate for generating a document of the average length avdl, and use βw·|q| / avdl to generate 
the query.\nThis Poisson mixture model can be easily used to replace p(· | C) in the retrieval functions 3 and 4.\n3.4 Other Possible Flexibilities\nIn addition to term dependent smoothing and an efficient mixture background, a Poisson language model also has some other potential advantages.\nFor example, in Section 2, we see that Formula 2 introduces a component which does document length penalization.\nIntuitively, when the document has more unique words, it will be penalized more.\nOn the other hand, if a document is exactly n copies of another document, it would not get over-penalized.\nThis feature is desirable and not achieved with the Dirichlet model [5].\nPotentially, this component could penalize a document according to what types of terms it contains.\nWith term-specific settings of δ, we could get even more flexibility for document length normalization.\nPseudo-feedback is yet another interesting direction where the Poisson model might be able to show its advantage.\nWith model-based feedback, we could again relax the combination coefficients of the feedback model and the background model, and allow different terms to contribute differently to the feedback model.\nWe could also utilize the "relevant" documents to learn better per-term smoothing coefficients.\n4.\nEVALUATION\nIn Section 3, we analytically compared the Poisson language models and multinomial language models from the perspective of query generation and retrieval.\nIn this section, we compare these two families of models empirically.\nExperimental results show that the Poisson model with per-term smoothing outperforms the multinomial model, and the performance can be further improved with two-stage smoothing.\nUsing a Poisson mixture as the background model also improves the retrieval performance.\n4.1 Datasets\nSince retrieval performance could significantly vary from one test collection to another, and from one query to another, we select four representative TREC test collections: AP, Trec7, 
Trec8, and Wt2g (Web).\nTo cover different types of queries, we follow [28, 5], and construct short-keyword (SK, keyword title), short-verbose (SV, one-sentence description), and long-verbose (LV, multiple sentences) queries.\nThe documents are stemmed with Porter's stemmer, and we do not remove any stop words.\nFor each parameter, we vary its value to cover a reasonably wide range.\n4.2 Comparison to Multinomial\nWe compare the performance of the Poisson retrieval models and multinomial retrieval models using interpolation (Jelinek-Mercer, JM) smoothing and Bayesian smoothing with conjugate priors.\nTable 1 shows that the two JM-smoothed models perform similarly on all data sets.\nSince Dirichlet smoothing for the multinomial language model and Gamma smoothing for the Poisson language model lead to the same retrieval formula, the performance of these two models is jointly presented.\nWe see that the Dirichlet\/Gamma smoothing methods outperform both Jelinek-Mercer smoothing methods.\nThe parameter sensitivity curves for the two Jelinek-Mercer smoothing methods are shown in Figure 1.\nFigure 1: Poisson and multinomial perform similarly with Jelinek-Mercer smoothing\nClearly, these two methods perform similarly in terms of both optimality and sensitivity.\nTable 1: Performance comparison between Poisson and multinomial retrieval models: basic models perform comparably; term dependent two-stage smoothing significantly improves Poisson\nAn asterisk (*) indicates that the difference between the performance of the term dependent two-stage smoothing and that of the Dirichlet\/Gamma single smoothing is statistically significant according to the Wilcoxon signed rank test at the level of 0.05.\nThis similarity of performance is expected, as we discussed in Section 3.1.\nAlthough the Poisson model and the multinomial model are similar in terms of the basic model and\/or with simple smoothing methods, the Poisson model has great potential and flexibility to be further improved.\nAs 
shown in the rightmost column of Table 1, the term dependent two-stage Poisson model consistently outperforms the basic smoothing models, especially for verbose queries.\nThis model is given in Formula 4, with Gamma smoothing for the document model p(· | d) and a term dependent δw.\nThe parameter µ of the first-stage Gamma smoothing is empirically tuned.\nThe combination coefficients (i.e., Δ) are estimated with the EM algorithm in Section 3.2.\nThe parameter sensitivity curves for Dirichlet\/Gamma and the per-term two-stage smoothing model are plotted in Figure 2.\nThe per-term two-stage smoothing method is less sensitive to the parameter µ than Dirichlet\/Gamma, and yields better optimal performance.\nFigure 2: Term dependent two-stage smoothing of Poisson outperforms Dirichlet\/Gamma\nIn the following subsections, we conduct experiments to demonstrate how the flexibility of the Poisson model can be utilized to achieve better performance, which we cannot achieve with multinomial language models.\n4.3 Term Dependent Smoothing\nTo test the effectiveness of the term dependent smoothing, we conduct the following two experiments.\nIn the first experiment, we relax the constant coefficient in the simple Jelinek-Mercer smoothing formula (i.e., Formula 3), and use the EM algorithm proposed in Section 3.2 to find a δw for each unique term.\nSince we are using the EM algorithm to iteratively estimate the parameters, we usually do not want the probability of p(· | d) to be zero.\nWe therefore use a simple Laplace method to slightly smooth the document model before it goes into the EM iterations.\nThe documents are then still scored with Formula 3, but using the learnt δw.\nThe results are labeled "JM+L." in Table 2.\nTable 2: Term dependent smoothing improves retrieval performance\nAn asterisk (*) in Column 3 indicates that the difference between the "JM+L." method and the JM method is statistically significant; an asterisk (*) in 
Column 5 means that the difference between the term dependent two-stage method and the query dependent two-stage method is statistically significant; PT stands for "per-term".\nWith term dependent coefficients, the performance of the Jelinek-Mercer Poisson model is improved in most cases.\nHowever, in some cases (e.g., Trec7\/SV), it performs poorly.\nThis might be caused by the problem of EM estimation with unsmoothed document models.\nOnce non-zero probability is assigned to all the terms before entering the EM iteration, the performance on verbose queries can be improved significantly.\nThis indicates that there is still room to find better methods to estimate δw.\nPlease note that neither the per-term JM method nor the "JM+L." method has a parameter to tune.\nAs shown in Table 1, the term dependent two-stage smoothing can significantly improve retrieval performance.\nTo understand whether the improvement is contributed by the term dependent smoothing or the two-stage smoothing framework, we design another experiment to compare the per-term two-stage smoothing with the two-stage smoothing method proposed in [29].\nTheir method managed to find coefficients specific to the query, thus a verbose query would use a higher δ.\nHowever, since their model is based on multinomial language modeling, they could not get per-term coefficients.\nWe adapt their method to the Poisson two-stage smoothing, and also estimate a per-query coefficient for all the terms.\nWe compare the performance of such a model with the per-term two-stage smoothing model, and present the results in the two rightmost columns of Table 2.\nAgain, we see that the "per-term" two-stage smoothing outperforms the "per-query" two-stage smoothing, especially for verbose queries.\nThe improvement is not as large as that of the per-term smoothing method over Dirichlet\/Gamma.\nThis is expected, since the per-query smoothing has already addressed the query discrimination problem to some extent.\nThis 
experiment shows that even if the smoothing is already per-query, making it per-term is still beneficial.\nIn brief, the per-term smoothing improved the retrieval performance of both the one-stage and the two-stage smoothing methods.\n4.4 Mixture Background Model\nIn this section, we conduct experiments to examine the benefits of using a mixture background model without extra computational cost, which cannot be achieved for multinomial models.\nSpecifically, in retrieval formula 3, instead of using a single Poisson distribution to model the background p(· | C), we use Katz's K-Mixture model, which is essentially a mixture of Poisson distributions.\np(· | C) can be computed efficiently with simple collection statistics, as discussed in Section 3.3.\nTable 3: K-Mixture background model improves retrieval performance\nThe performance of the JM retrieval model with a single Poisson background and with Katz's K-Mixture background model is compared in Table 3.\nClearly, using the K-Mixture to model the background outperforms the single Poisson background model in most cases, especially for verbose queries, where the improvement is statistically significant.\nFigure 3 shows how the performance changes over different parameter values for short verbose queries.\nThe model using the K-Mixture background is less sensitive than the one using the single Poisson background.\nFigure 3: K-Mixture background model reduces the parameter sensitivity for verbose queries\nGiven that this type of mixture background model does not require any extra computation cost, it would be interesting to study whether using other Poisson mixture models, such as 2-Poisson and negative binomial, could help the performance.\n5.\nRELATED WORK\nTo the best of our knowledge, there has been no study of query generation models based on the Poisson distribution.\nLanguage models have been shown to be effective for many retrieval tasks [21, 28, 14, 4].\nThe most popular and fundamental one is the query-generation language model [21, 
13].\nAll existing query generation language models are based on either the multinomial distribution [19, 6, 28, 13] or the multivariate Bernoulli distribution [21, 17, 18].\nWe introduce a new family of language models, based on the Poisson distribution.\nThe Poisson distribution has previously been studied in document generation models [16, 22, 3, 24], leading to the development of one of the most effective retrieval formulas, BM25 [23].\n[24] studies the parallel derivation of three different retrieval models, which is related to our comparison of Poisson and multinomial.\nHowever, the Poisson model in their paper is still under the document generation framework, and also does not account for the document length variation.\n[26] introduces a way to empirically search for an exponential model for the documents.\nPoisson mixtures [3] such as 2-Poisson [22], negative binomial, and Katz's K-Mixture [9] have been shown to be effective for modeling and retrieving documents.\nOnce again, none of this work explores the Poisson distribution in the query generation framework.\nLanguage model smoothing [2, 28, 29] and background structures [15, 10, 25, 27] have been studied with multinomial language models.\n[7] analytically shows that term-specific smoothing could be useful.\nWe show, both analytically and empirically, that the Poisson language model naturally accommodates per-term smoothing without heuristically twisting the semantics of a generative model, and is able to model the mixture background better and more efficiently.\n6.\nCONCLUSIONS\nWe present a new family of query generation language models for retrieval based on the Poisson distribution.\nWe derive several smoothing methods for this family of models, including single-stage smoothing and two-stage smoothing.\nWe compare the new models with the popular multinomial retrieval models both analytically and experimentally.\nOur analysis shows that while our new models and the multinomial models are equivalent under some assumptions, they are generally 
different, with some important differences.\nIn particular, we show that Poisson has an advantage over multinomial in naturally accommodating per-term smoothing.\nWe exploit this property to develop a new per-term smoothing algorithm for Poisson language models, which is shown to outperform term-independent smoothing for both Poisson and multinomial models.\nFurthermore, we show that a mixture background model for Poisson can be used to improve the performance and robustness over the standard Poisson background model.\nOur work opens up many interesting directions for further exploration in this new family of models.\nFurther exploring the flexibilities over multinomial language models, such as length normalization and pseudo-feedback, could be good future work.\nIt is also appealing to find robust methods to learn the per-term smoothing coefficients without additional computational cost.","keyphrases":["queri gener","languag model","multinomi distribut","queri gener probabilist model","poisson distribut","two-stage smooth","multivari bernoullu distribut","speech recognit","term frequenc","perterm smooth","new term-depend smooth algorithm","vocabulari set","homogen poisson process","singl pseudo term","poisson process","formal model","term depend smooth"],"prmu":["P","P","P","P","P","M","M","U","U","M","M","U","M","U","M","M","M"]} {"id":"J-1","title":"Generalized Trade Reduction Mechanisms","abstract":"When designing a mechanism there are several desirable properties to maintain such as incentive compatibility (IC), individual rationality (IR), and budget balance (BB). It is well known [15] that it is impossible for a mechanism to maximize social welfare whilst also being IR, IC, and BB. There have been several attempts to circumvent [15] by trading welfare for BB, e.g., in domains such as double-sided auctions [13], distributed markets [3] and supply chain problems [2, 4]. 
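The trade reduction idea for double-sided auctions referred to here ([13]) can be illustrated with a small sketch. The following is an illustrative McAfee-style trade reduction, not the GTR procedure of this paper; the function and variable names are our own.

```python
def mcafee_double_auction(bids, asks):
    """Illustrative McAfee-style trade reduction for a double-sided auction
    with unit-demand buyers and unit-supply sellers.
    Returns (number of trades, buyer price, seller price)."""
    b = sorted(bids, reverse=True)   # buyer bids, descending
    s = sorted(asks)                 # seller asks, ascending
    # Efficient trade size: the largest k such that b[k-1] >= s[k-1].
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None, None         # no profitable trade exists
    if k < min(len(b), len(s)):
        p = (b[k] + s[k]) / 2.0      # candidate single clearing price
        if s[k - 1] <= p <= b[k - 1]:
            # All k efficient pairs trade at one price: IR, IC, and BB
            # with no loss of welfare in this case.
            return k, p, p
    # Otherwise, reduce the least efficient trade: the remaining k-1 buyers
    # pay b[k-1] and the k-1 sellers receive s[k-1], so the mechanism keeps
    # a non-negative surplus (budget balanced, but not fully efficient).
    return k - 1, b[k - 1], s[k - 1]
```

For instance, with bids (10, 8, 6) and asks (2, 4, 9), both efficient pairs trade at a single price and no welfare is lost, while with bids (10, 9) and asks (1, 2) one trade is reduced and the mechanism retains a surplus of 9 − 2 = 7, illustrating the welfare-for-BB tradeoff discussed above.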
In this paper we provide a procedure called a Generalized Trade Reduction (GTR) for single-value players, which, given an IR and IC mechanism, outputs a mechanism which is IR, IC and BB with a loss of welfare. We bound the welfare achieved by our procedure for a wide range of domains. In particular, our results improve on existing solutions for problems such as double-sided markets with homogeneous goods, distributed markets and several kinds of supply chains. Furthermore, our solution provides budget balanced mechanisms for several open problems such as combinatorial double-sided auctions and distributed markets with strategic transportation edges.","lvl-1":"Generalized Trade Reduction Mechanisms Mira Gonen Electrical Engineering Dept. Tel Aviv University Ramat Aviv 69978, Israel gonenmir@post.tau.ac.il Rica Gonen∗ Yahoo! Research Yahoo! Sunnyvale, CA 94089 gonenr@yahoo-inc.com Elan Pavlov Media Lab MIT Cambridge MA, 02149 elan@mit.edu ABSTRACT When designing a mechanism there are several desirable properties to maintain such as incentive compatibility (IC), individual rationality (IR), and budget balance (BB).\nIt is well known [15] that it is impossible for a mechanism to maximize social welfare whilst also being IR, IC, and BB.\nThere have been several attempts to circumvent [15] by trading welfare for BB, e.g., in domains such as double-sided auctions [13], distributed markets [3] and supply chain problems [2, 4].\nIn this paper we provide a procedure called a Generalized Trade Reduction (GTR) for single-value players, which, given an IR and IC mechanism, outputs a mechanism which is IR, IC and BB with a loss of welfare.\nWe bound the welfare achieved by our procedure for a wide range of domains.\nIn particular, our results improve on existing solutions for problems such as double-sided markets with homogeneous goods, distributed markets and several kinds of supply chains.\nFurthermore, our solution provides budget balanced mechanisms for several open problems 
such as combinatorial double-sided auctions and distributed markets with strategic transportation edges.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Electronic Commerce]: Payment scheme General Terms Algorithms, Design, Economics, Theory 1.\nINTRODUCTION When designing a mechanism there are several key properties that are desirable to maintain.\nSome of the more important ones are individual rationality (IR) - to make it worthwhile for all players to participate, incentive compatibility (IC) - to give incentive to players to report their true value to the mechanism, and budget balance (BB) - not to run the mechanism at a loss.\nIn many of the mechanisms the goal function that a mechanism designer attempts to maximize is the social welfare1 - the total benefit to society.\nHowever, it is well known from [15] that any mechanism that maximizes social welfare while maintaining individual rationality and incentive compatibility perforce runs a deficit, i.e., is not budget balanced.\nOf course, for many applications of practical importance we lack the will and the capability to allow the mechanism to run a deficit, and hence one must balance the payments made by the mechanism.\nTo maintain the BB property in an IR and IC mechanism it is necessary to compromise on the optimality of the social welfare.\n1.1 Related Work and Specific Solutions There have been several attempts to design budget balanced mechanisms for particular domains2.\nFor instance, for double-sided auctions where both the buyers and sellers are strategic and the goods are homogeneous [13] (or when the goods are heterogeneous [5]).\n[13] developed a mechanism that, given valuations of buyers and sellers, produces an allocation (i.e., the trading players) and a matching between buyers and sellers such that the mechanism is IR, IC, and BB while retaining most of the social welfare.\nIn the distributed markets problem (and closely related problems) goods are 
transported between geographic locations while incurring some constant cost for transportation.\n[16, 9, 3] present mechanisms that approximate the social welfare while achieving an IR, IC and BB mechanism.\nFor supply chain problems, [2, 4] bound the loss of social welfare that is necessary to inflict on the mechanism in order to achieve the desired combination of IR, IC, and BB.\nDespite the works discussed above, the question of how to design a general mechanism that achieves IR, IC, and BB independently of the problem domain remains open.\nFurthermore, there are several domains where the question of how to design an IR, IC and BB mechanism which approximates the social welfare remains an open problem.\n1 Social Welfare is also referred to as efficiency in the economics literature.\n2 A brief reminder of all of the problems used in this paper can be found in Appendix B.\nFor example, in the important domain of combinatorial double-sided auctions there is no known result that bounds the loss of social welfare needed to achieve budget balance.\nAnother interesting example is the open question left by [3]: how can one bound the loss in social welfare that is needed to achieve budget balance in an IR and IC distributed market where the transportation edges are strategic?\nNaturally, an answer to the BB distributed market with strategic edges has vast practical implications, for example for transportation networks.\n1.2 Our Contribution In this paper we unify all the problems discussed above (both the solved as well as the open ones) into one solution concept procedure.\nThe solution procedure is called Generalized Trade Reduction (GTR).\nGTR accepts an IR and IC mechanism for single-valued players and outputs an IR, IC and BB mechanism.\nThe output mechanism may suffer some welfare loss as a tradeoff for achieving BB.\nThere are problem instances in which no welfare loss is necessary, but by [15] there are problem instances in which there is welfare loss.\nNevertheless 
for a wide class of problems we are able to bound the loss in welfare. A particularly interesting case is the one in which the input mechanism is an efficient allocation. In addition to unifying many of the BB problems under a single solution concept, the GTR procedure improves on existing results and solves several open problems in the literature. The existing solutions our GTR procedure improves on are homogeneous double-sided auctions, distributed markets [3], and supply chains [2, 4]. For homogeneous double-sided auctions, the GTR procedure improves on the well known solution of [13] by allowing some cases with no trade reduction at all. For distributed markets [3] and supply chains [2, 4], the GTR procedure improves the bounds on welfare loss, i.e., it achieves an IR, IC and BB mechanism with a smaller loss of social welfare. Recently we also learned that the GTR procedure allows one to turn the model newly presented in [6] into a BB mechanism. The open problems that are answered by GTR are distributed markets with strategic transportation edges and bounded paths, combinatorial double-sided auctions with bounded size of the trading group (i.e., a buyer and the sellers of the goods in its bundle), and combinatorial double-sided auctions with a bounded number of possible trading groups. In addition to the main contribution described above, this paper also defines an important classification of problem domains: we define class-based domains and procurement-class based domains. These definitions build on the different competition powers of players in a mechanism, called internal and external competition. Most of the studied problem domains are of the more restrictive procurement-class kind, and we believe that the more general setting will inspire further research.
2. PRELIMINARIES 2.1 The Model In this paper we design a method which, given any IR and IC mechanism, outputs a mechanism that maintains the IC and IR properties while achieving
BB. For some classes of mechanisms we bound the competitive approximation of welfare. In our model there are n players divided into sets of trade. The sets of trade are called procurement sets and are defined (following [2]) as follows: Definition 2.1. A procurement set s is the smallest set of players that is required for trade to occur. For example, in a double-sided auction a procurement set is a pair consisting of a buyer and a seller. In a combinatorial double-sided auction a procurement set can consist of a buyer and several sellers. We denote the set of all procurement sets by S and assume that any allocation is a disjoint union of procurement sets. Each player i, 1 ≤ i ≤ n, assigns a real value vi(s) to each possible procurement set s ∈ S. Namely, vi(s) is the valuation of player i for procurement set s. We assume that for each player i, vi(s) is i's private value, and that i is a single-value player, meaning that if vi(sj) > 0 then for every other sk, k ≠ j, either vi(sk) = vi(sj) or vi(sk) = 0. For ease of notation we will write vi for the value of player i for any procurement set s such that vi(s) > 0. The set Vi ⊆ R is the set of all possible valuations vi. The set of all possible valuations of all the players is denoted by V = V1 × ...
× Vn. Let v−i = (v1, ..., vi−1, vi+1, ..., vn) be the vector of valuations of all the players besides player i, and let V−i be the set of all possible vectors v−i. We denote by W(s) the value of a procurement set s ∈ S, where W(s) = Σi∈s vi(s) + F(s) and F is some function that assigns a constant to each procurement set. For example, F can be a (non-strategic) transportation cost in a distributed market problem. Let the size of a procurement set s be denoted |s|. Since any allocation is assumed to be a disjoint union of procurement sets, an allocation partitions the players into two sets: a set of players that trade and a set of players that do not trade. We denote by O the set of possible partitions of an allocation A into procurement sets. The value W(A) of an allocation A is the sum of the values of its most efficient partition into procurement sets, that is, W(A) = maxS∈O Σs∈S W(s). This means that W(A) = Σi∈A vi + maxS∈O Σs∈S F(s); in the case where F is identically zero, W(A) = Σi∈A vi. An optimal partition S*(A) is a partition that maximizes the above sum for an allocation A, so the value of A is W(S*(A)) (note that this value can depend on F). We say that the allocation A is efficient if there is no other allocation with a higher value. The efficiency of an allocation Â is W(Â)/W(A), where A is a maximal-valued allocation. We assume w.l.o.g.
that there are no two allocations with the same value3. A mechanism M defines an allocation rule and a payment rule, M = (R, P). A payment rule P decides i's payment pi, where P is a function P : V → RN.
3 Ties can be broken using the identities of the players.
We work with mechanisms in which players are required to report their values. An example of such a mechanism is the VCG mechanism [17, 8, 10]. The reported value bi ∈ Vi of player i is called a bid and might be different from his private value vi. Let b ∈ V be the bids of all players. An allocation rule R decides the allocation according to the reported values b ∈ V. We make the standard assumption that players have quasi-linear utility, so that when player i trades and pays pi his utility is ui(vi, b−i) = vi − pi, where ui : V → R. We also assume that players are rational utility maximizers. Mechanism M is Budget Balanced (BB) if Σi∈N pi ≥ 0 for any bids b ∈ V. M is Incentive-Compatible (IC) in dominant strategies if for any player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ ui(b), meaning that for any player i, bidding vi maximizes i's utility over all possible bids of the other players. M is (ex-post) Individually Rational (IR) if for any player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ 0, meaning that for all possible bids of the other players, player i's utility is non-negative. Note that since our mechanisms are normalized IR, if a player does not trade then the player pays 0 and has utility 0. The algorithm presented in the next section employs a commonly used payment scheme, the critical value payment scheme. Definition 2.2. Critical value payment scheme: a mechanism uses a critical value payment scheme if, given an allocation, it charges players the minimum value they need to report to the mechanism in order to remain allocated. We denote by Ci the critical value price computed for
player i. 2.2 Competitions and Domains In this paper we present two generalized trade reduction algorithms. The two algorithms are such that, given an IR and IC mechanism M that solves a problem in some domain (different domains are formally defined below), they turn M into an IR, IC and BB mechanism. Each algorithm finds procurement sets and removes them in iterations until the right conditions are fulfilled and the mechanism M is turned into a BB one. The right conditions that need to be met are conditions of competition among the players in the given problem. The following definitions lead us to the competition conditions we are looking for. Definition 2.3. For any player i ∈ N, we say that the set Ri ⊆ N \ {i} is a replacement set of i if for any procurement set s ∈ S such that i ∈ s and Ri ∩ s = ∅, s \ {i} ∪ Ri ∈ S. For example, in a (homogeneous) double-sided auction (see problem B.1), a replacement set for any buyer is simply any other buyer. In an auction for transportation slots (see problem B.7), a replacement set of an edge is a path between the endpoints of the edge. Note that a set can replace a single player. Furthermore, this relationship is transitive but not necessarily symmetric: if i is a replacement set for j, it is not necessarily true that j is a replacement set for i. Definition 2.4. For any allocation A, procurement set s ⊆ A, and any i ∈ s, we say that Ri(A, s) is an internal competition for i with respect to A and s if Ri(A, s) ⊆ N \ A is a replacement set for i s.t.
T = s \ {i} ∪ Ri(A, s) ∈ S and W(T) ≥ 0. Definition 2.5. For any allocation A, procurement set s ⊆ A, and any i ∈ s, we say that Ei(A, s) is an external competition for i with respect to A and s if Ei(A, s) ⊆ N \ A is a set s.t. T = {i} ∪ Ei(A, s) ∈ S and W(T) ≥ 0. We assume, without loss of generality, that there are no ties between the values of any allocations, and in particular no ties between values of procurement sets; in case of ties, these can be broken using the identities of the players4. So for any allocation A, procurement set s, and player i with external competition Ei(A, s), there exists exactly one set representing the maximally valued external competition. Definition 2.6. A set X ⊂ N is closed under replacement if for every i ∈ X, Ri ⊆ X. The following defines the competition required to maintain IC, IR and BB. The set X5 denotes this competition and is closed under replacement. In the remainder of the paper we assume that all sets which define competition in a mechanism are closed under replacement. Definition 2.7. Let X ⊂ N be a set that is closed under replacement. We say that a mechanism is an X-external mechanism if: 1. Each player i ∈ X has external competition. 2. Each player i ∉ X has internal competition. 3. For all players i1, ..., it ∈ s \ X there exist Ri1(A, s), ...
, Rit(A, s) such that for every iz ≠ iq, Riz(A, s) ∩ Riq(A, s) = ∅. 4. For every procurement set s ∈ S it holds that s ∩ X ≠ ∅. For general domains the choice of X can be crucial. In fact, even for the same domain the welfare (and revenue) can vary widely depending on how X is defined. In Appendix C we give an example where two possible choices of X yield greatly different results. Although we show that X should be chosen as small as possible, we do not give any characterization of the optimality of X, and this is an important open problem. Our two generalized trade reduction algorithms ensure that for any allocation we have the desired types of competition. So given a mechanism M that is IC and IR with allocation A, the goal of the algorithms is to turn M into an X-external mechanism. The two generalized trade reduction algorithms utilize a dividing function D which divides allocation A into disjoint procurement sets. The algorithms order the procurement sets defined by D in order of increasing value. For any procurement set there is a desired type of competition that depends only on the players who compose the procurement set. The generalized trade reduction algorithms go over the procurement sets in order (from the smallest to the largest) and remove any procurement set that does not have the desired competition when the set is reached. The reduction of procurement sets will also be referred to as a trade reduction.
4 The details of how to break ties in allocations are standard and are omitted.
5 We present some tradeoffs between the different possible sets in Appendix C.
Formally, Definition 2.8. D is a dividing function if for any allocation A and players' value vector v, D divides the allocation into disjoint procurement sets s1, ..., sk s.t. ∪sj = A, and for any player i with value vi, if i ∈ sj1 and t ∈ sj2 with j1 ≥ j2, then for any value v′i > vi of player i and division by D into s′1, ...
, s′k such that i ∈ s′j′1 and t ∈ s′j′2, it holds that j′1 ≥ j′2. The two generalized trade reduction algorithms presented accept problems in different domains. The formal domain definitions follow: Definition 2.9. A domain is a class domain if for all i ∈ N and all replacement sets Ri of i, |Ri| = 1, and for all i, j, i ≠ j, if j = Ri then i = Rj. Intuitively, this means that replacement sets are of size 1 and the replacement relationship is symmetric. We define the class of a player i as the set of the player's replacement sets and denote it by [i]. It is important to note that since the replacement relation is transitive, and since class domains also impose symmetry on replacement sets, the class [i] is actually an equivalence class for i. Definition 2.10. A domain is a procurement-class domain if the domain is a class domain and if for any player i such that there exist two procurement sets s1, s2 (not necessarily trading simultaneously in any allocation) with i ∈ s1 and i ∈ s2, there exists a bijection f : s1 → s2 such that for any j ∈ s1, f(j) is a replacement set for j in s2. Example 2.1. A (homogeneous) double-sided auction (see problem B.1) is a procurement-class domain: each procurement set consists of a buyer and a seller. The double-sided combinatorial auction consisting of a single multi-minded buyer and multiple sellers of heterogeneous goods (see problem B.9) is a class domain (as we have a single buyer) but not a procurement-class domain. In this case, the buyer is a class and each set of sellers of the same good is a class. However, for the buyer there is no bijection between the procurement sets corresponding to the different bundles of goods the buyer is interested in. The spatially distributed market with strategic edges (see problem B.6) is not a class domain (and therefore not a procurement-class
domain). For example, even for a fixed buyer and a fixed seller there are two different procurement sets consisting of different paths between the buyer and the seller. The next sections present two algorithms, GTR-1 and GTR-2. GTR-1 accepts problems in procurement-class domains; its properties are proved for a general dividing function D. The GTR-2 algorithm accepts problems in any domain; we prove GTR-2's properties for a specific dividing function D0, defined in section 4. Since the dividing function can have a large practical impact on welfare (and revenue), the generality of GTR-1 (albeit in special domains) can be an important practical consideration. 3. PROCUREMENT-CLASS BASED DOMAINS This section focuses on problems in procurement-class domains. For this setting we present an algorithm called GTR-1 which, given a mechanism that is IR and IC, outputs a mechanism with reduced welfare which is IR, IC and budget balanced. Although procurement-class domains appear to be a relatively restricted model, many domains studied in the literature are in fact procurement-class domains. Example 3.1. The following domains are procurement-class domains: • Double-sided auctions with homogeneous goods [13] (problem B.1). In this domain there are two classes, the class of buyers and the class of sellers. Each procurement set consists of a single buyer and a single seller. Since every (buyer, seller) pair is a valid procurement set (albeit possibly with negative value), this is a procurement-class domain. In this domain the constant assigned to the procurement sets is F = 0. • Spatially distributed markets with non-strategic edges [3, 9] (problem B.3). As in double-sided auctions with homogeneous goods, there are two classes in this domain, the class of buyers and the class of sellers, with procurement sets consisting of a single buyer and a single seller. The sellers and buyers are nodes in a graph and the
function F is the distance between the two nodes (the length of the edge), which represents transport costs. These costs differ between different (buyer, seller) pairs. • Supply chains [2, 4] (problem B.5). The assumption of a unique manufactory in [2, 4] can best be understood as turning general supply chains (which need not form a procurement-class domain) into a procurement-class domain. • Single-minded combinatorial auctions [11] (problem B.8). In this context each seller sells a single good and each buyer wants a set of goods. The classes are the sets of sellers selling the same good, as well as the sets of buyers desiring the same bundle. A procurement set consists of a single buyer together with a set of sellers who can satisfy that buyer. A definition of the mechanism follows: Definition 3.1. The GTR-1 algorithm: given a mechanism M, a set X ⊂ N which is closed under replacement, a dividing function D, and an allocation A, GTR-1 operates as follows: 1. Use the dividing function D to divide A into procurement sets s1, ..., sk ∈ S.
2. Order the procurement sets by increasing value. 3. For each sj, starting from the lowest-value procurement set: if every i ∈ sj ∩ X has external competition and every i ∈ sj \ X has internal competition, then keep sj. Otherwise reduce the trade sj (i.e., remove every i ∈ sj from the allocation).6 4. All trading players are charged their critical value for trading. All non-trading players are charged nothing. Remark 3.1. The special case where X = N has received attention under different guises in various special cases, such as [13, 3, 4]. 3.1 The GTR-1 Produces an X-external Mechanism that is IR, IC and BB In this subsection we prove that the GTR-1 algorithm produces an X-external mechanism that is IR, IC and BB. To prove GTR-1's properties we make use of theorem 3.1, a well known result (e.g., [14, 11]) which characterizes necessary and sufficient conditions for a mechanism for single-value players to be IR and IC: Definition 3.2. An allocation rule R is bid monotonic if for any player i, any bids of the other players b−i ∈ V−i, and any two possible bids of i, b̂i > bi, if i trades under the allocation rule R when reporting bi, then i also trades when reporting b̂i. Intuitively, a bid monotonic allocation rule ensures that no trading player can become a non-trading player by improving his bid. Theorem 3.1. An IR mechanism M with allocation rule R is IC if and only if R is bid monotonic and each trading player i pays his critical value Ci (pi = Ci). So for normalized IR7 and IC mechanisms, a bid monotonic allocation rule uniquely defines the critical values for all the players and thus the payments. Observation 3.1. Let M1 and M2 be two IR and IC mechanisms with the same allocation rule. Then M1 and M2 must have the same payment rule. In the following we prove that the X-external mechanism produced by the GTR-1 algorithm is IR, IC and BB, but first a subsidiary
lemma is shown. Lemma 3.1. For procurement-class domains, if there exists a procurement set sj s.t. i ∈ sj and i has external competition, then every t ∈ sj, t ≠ i, has internal competition. Proof. This follows from the definition of procurement-class domains. Suppose that i has external competition; then there exists a set of players Ei(A, s) such that {i} ∪ Ei(A, s) ∈ S. Denote s′j = {i} ∪ Ei(A, s). Since the domain is a procurement-class domain, there exists a bijection f between sj and s′j, and f defines the required internal competition. We start by proving IR and IC:
6 Although the definition of an X-external mechanism requires that X intersects every procurement set, this is not strictly necessary. It is possible to define an X that does not intersect every possible procurement set. In this case, any procurement set s ∈ S s.t. s ∩ X = ∅ will be reduced.
7 Note that this is not true for mechanisms which are not normalized, e.g., [7, 12].
Lemma 3.2. For any X, the X-external mechanism with a critical value pricing scheme produced by the GTR-1 algorithm is an IR and IC mechanism. Proof. By the definition of the critical value pricing scheme (definition 2.2) and the GTR-1 algorithm (definition 3.1), every trading player i pays his critical value Ci ≤ vi, and by the GTR-1 algorithm non-trading players have a payment of zero. Thus for every player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ 0, meaning the produced X-external mechanism is IR. As GTR-1 is IR and applies the critical value payment scheme, by theorem 3.1, in order to show that the produced X-external mechanism is IC it remains to show that its allocation rule is bid monotonic. Since GTR-1 orders the procurement sets according to increasing value, if player i increases his bid from bi to b′i > bi then, for any dividing function D of procurement
sets, the procurement set containing i appears at least as late with the bid b′i as with the bid bi. So the likelihood of competition can only increase when i appears in later procurement sets: GTR-1 may reduce more of the lower-value procurement sets, which results in more non-trading players. Therefore, if s has the required competition and is not reduced with bi, then it will have the required competition with b′i and will not be reduced. Finally we prove BB: Lemma 3.3. For any X, the X-external mechanism with critical value pricing scheme produced by the GTR-1 algorithm is a BB mechanism. Proof. In order to show that the produced mechanism is BB we show that each procurement set that is not reduced has a non-negative budget (i.e., the sum of payments is non-negative). Let s ∈ S be a procurement set that is not reduced, and let i ∈ s ∩ X. According to the definition of an X-external mechanism (definition 2.7) and the GTR-1 algorithm (definition 3.1), i has external competition. Assume w.l.o.g.8 that i is the only player with external competition in s and that all other players j ∈ s, j ≠ i, have internal competition. Let A be the allocation after the procurement set reductions by the GTR-1 algorithm. According to the definition of external competition (definition 2.5), there exists a set Ei(A, s) ⊂ N \ A such that {i} ∪ Ei(A, s) ∈ S and W({i} ∪ Ei(A, s)) ≥ 0. Since W({i} ∪ Ei(A, s)) = vi + W(Ei(A, s)), we have vi ≥ −W(Ei(A, s)). By the critical value pricing scheme (definition 2.2), if player i bids any less than −W(Ei(A, s)) he will not have external competition and therefore will be removed from trading; thus i pays no less than min −W(Ei(A, s)). Since all other players j ∈ s have internal competition, their critical price cannot be less than the value of their maximal-value internal competitor (set), i.e., max W(Rj(A, s)): if any player j ∈ s bids less than its maximal internal competitor (set), then he will not be in s
but his maximal internal competitor (set) will. As a possible Ei(A, s) is ∪j∈s\{i} Rj(A, s), one can bound the maximal value of i's external competition W(Ei(A, s)) by the sum of the maximal values of the internal competition of the rest of the players in s, i.e., Σj∈s\{i} max W(Rj(A, s)). Therefore min −W(Ei(A, s)) = −Σj∈s\{i} max W(Rj(A, s)).
8 Since the domain is a procurement-class domain we can use lemma 3.1.
As the function F is defined to be a positive constant, we get that the budget of s is at least min −W(Ei(A, s)) + Σj∈s\{i} max W(Rj(A, s)) + F(s) ≥ 0, and thus s is at least budget balanced. As each procurement set that is not reduced is at least budget balanced, it follows that the produced X-external mechanism is BB. The above two lemmas yield the following theorem: Theorem 3.2. For procurement-class domains and any X, the X-external mechanism with critical value pricing scheme produced by the GTR-1 algorithm is an IR, IC and BB mechanism. Remark 3.2. The proof of the theorem yields bounds on the payments any player has to make to the mechanism. 4. NON PROCUREMENT-CLASS BASED DOMAINS The main reason that GTR-1 works for procurement-class domains is that each player's possibility of being reduced is monotonic. By the definition of a dividing function, if a player i ∈ sj increases his value, i can only appear in a later procurement set and hence has a higher chance of having the desired competition; the chance of i lacking the requisite competition decreases. Since the domain is a procurement-class domain, all other players t ∈ sj, t ≠ i, are also more likely to have competition, since members of their classes continue to appear before i, and hence the likelihood that they will be reduced is also decreased. Since by theorem 3.1 bid monotonicity is a necessary and sufficient condition for the mechanism to be IC, GTR-1 is IC for procurement-class domains. However, for domains that are not procurement-class domains this does not
suffice, even if the domain is a class domain. Although all members of sj continue to have the required competition, it is possible that some members of i's new procurement set have no analogues in sj and thus lack competition. Hence i might be reduced after increasing his value, which by theorem 3.1 means the mechanism is not IC. We therefore define a different algorithm for non procurement-class domains. Our modified algorithm requires a special dividing function in order to maintain the IC property. Although the restriction to this special dividing function appears stringent, the dividing function we use is a generalization of the way that procurement sets are chosen in procurement-class domains, e.g., [13, 16, 9, 3, 2, 4]. For ease of presentation, in this section we assume that F = 0. The dividing function for general domains is defined by looking at all possible dividing functions. For each dividing function Di and each set of bids, the GTR-1 algorithm yields a welfare that is a function of the bids and the dividing function9. We denote by D0 the dividing function that divides the players into sets s.t. the welfare that GTR-1 finds is maximal10.
9 Note that for any particular Di this might not be IC, as GTR-1 is IC only for procurement-class domains and not for general domains.
10 In Appendix A we show how to calculate D0 in polynomial time for procurement-class domains. Calculating D0 in polynomial time for general domains is an important open problem.
Formally, let D be the set of all dividing functions D.
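The selection of D0 just described can be sketched in a few lines. This is only an illustration of the argmax step, not the paper's procedure: `candidate_dividers` and `gtr1_welfare` are hypothetical stand-ins for an enumeration of dividing functions and for a run of GTR-1 that reports the welfare of the surviving trade.

```python
def choose_d0(candidate_dividers, bids, gtr1_welfare):
    """Pick the dividing function under which GTR-1 retains maximal welfare.

    candidate_dividers -- iterable of candidate dividing functions (hypothetical)
    bids               -- the reported bid vector
    gtr1_welfare(d, b) -- welfare GTR-1 achieves with divider d on bids b
    """
    # D0(b) = argmax over all dividing functions of the welfare GTR-1 finds.
    return max(candidate_dividers, key=lambda d: gtr1_welfare(d, bids))
```

Enumerating all dividing functions is exponential in general, which is precisely why computing D0 in polynomial time outside procurement-class domains is left open.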
Denote the welfare achieved by the mechanism produced by GTR-1 when using dividing function D and a set of bids b̄ by GTR1(D, b̄), and let D0(b̄) = argmaxD∈D GTR1(D, b̄). For ease of presentation we write D0 for D0(b̄) when the dependence on b̄ is clear from the context. Remark 4.1. D0 is an element of the set of dividing functions, and therefore is itself a dividing function. The second generalized trade reduction algorithm, GTR-2, follows. Definition 4.1. The GTR-2 algorithm: given a mechanism M, an allocation A, and a set X ⊂ N closed under replacement, GTR-2 operates as follows: 1. Calculate the dividing function D0 as defined above. 2. Use the dividing function D0 to divide A into procurement sets s1, ..., sk ∈ S. 3. For each sj, starting from the lowest-value procurement set: if every i ∈ sj ∩ X has external competition and at most one i ∈ sj lacks internal competition, then keep sj. Otherwise, reduce the trade sj. 4. All trading players are charged their critical value for trading. All non-trading players are charged zero.11 We will prove that the mechanism produced by GTR-2 maintains the desired properties of IR, IC, and BB. The following lemma shows that the GTR-2 produced mechanism is IR and IC. Lemma 4.1. For any X, the X-external mechanism with critical value pricing scheme produced by the GTR-2 algorithm is an IR and IC mechanism. Proof. By theorem 3.1 it suffices to prove that the allocation rule produced by the GTR-2 algorithm is bid monotonic for every player i. Suppose that i was not reduced when bidding bi; we need to prove that i will not be reduced when bidding b′i > bi. Denote by D1 = D0(b) the dividing function used by GTR-2 when i reported bi and the rest of the players reported b−i, and by D′1 = D0(b′i, b−i) the dividing function used by GTR-2 when i reported b′i and the rest of the players reported b−i.
Denote by D̄1 a maximal-welfare dividing function under which GTR-1 reduces i when i reports b′i. Assume to the contrary that GTR-2 reduced i from the trade when i reported b′i; then GTR1(D′1, (b′i, b−i)) = GTR1(D̄1, (b′i, b−i)). Since D1 ∈ D it follows that GTR1(D1, (b′i, b−i)) > GTR1(D̄1, (b′i, b−i)), and therefore GTR1(D1, (b′i, b−i)) > GTR1(D′1, (b′i, b−i)). However, since D1 ∈ D, GTR-2 should not have reduced i with the dividing function D′1 while a greater welfare was attainable with D1. Thus a contradiction arises, and GTR-2 does not reduce i from the trade when i reports b′i > bi.
11 In the full version GTR-2 is extended such that it suffices that there exists some time at which the third step holds. That extension is omitted from the current version due to lack of space.
Lemma 4.2. For any X, the X-external mechanism with critical value pricing scheme produced by the GTR-2 algorithm is a BB mechanism. Proof. The proof is similar to the proof of lemma 3.3. Combining the two lemmas above we get: Theorem 4.1. For any X closed under replacement, the X-external mechanism with critical value pricing scheme produced by the GTR-2 algorithm is an IR, IC and BB mechanism. Appendix A shows how to calculate D0 for procurement-class domains in polynomial time; it is not known how to calculate D0 efficiently in general. Creating a general method for calculating the needed dividing function in polynomial time remains an open question. 4.1 Bounding the Welfare for Procurement-Class Based Domains and Other General-Domain Cases This section shows that in addition to producing a mechanism with the desired properties, GTR-2 also produces a mechanism that maintains high welfare. Since the GTR-2 algorithm finds a budget balanced mechanism in arbitrary domains, we are unable to bound the welfare in general. However, we can
bound the welfare for procurement-class based domains and for a wide variety of cases in general domains, which includes many cases previously studied. Definition 4.2. Denote by freqk([i], sj) the fact that class [i] appears in procurement set sj k times, i.e., there are k members of [i] in sj. Definition 4.3. Denote by freqk([i], S) the maximal k s.t. there are k members of [i] in some sj ∈ S, i.e., freqk([i], S) = maxsj∈S freqk([i], sj). Let ec be the set of equivalence classes in a procurement-class based domain mechanism and |ec| the number of those equivalence classes. Using the definition of class appearance frequency we can bound the welfare achieved by the GTR-2 produced mechanism for procurement-class domains12: Lemma 4.3. For procurement-class domains with F = 0, the number of procurement sets that are reduced by GTR-213 is at most |ec| times the maximal frequency of each class. Formally, the maximal number of procurement sets that is reduced is O(Σ[i]∈ec freqk([i], S)). Proof. Let D be an arbitrary dividing function. We note that by definition any procurement set sj will not be reduced if every i ∈ sj has both internal competition and external competition.
12 The welfare achieved by GTR-1 can also be bounded for the cases presented in this section. However, we focus on GTR-2 as it always achieves at least as much welfare.
13 Or GTR-1.
Every procurement set s that is reduced has at least one player i who has no competition. Once s is reduced, all players of [i] have internal competition. So by reducing |ec| procurement sets, one per equivalence class, we cover all the remaining players with internal competition. If the maximal frequency of every equivalence class were one, then each remaining player t in a procurement set sk would also have external competition, as the internal competitors of the players t̄ ∈ sk, t̄ ≠ t, form an external competition for t. If freqk([t], S) players from class [t] were reduced then there is
sufficient external competition for all players in sk.\nTherefore it suffices to reduce O(\u2211_{[i] \u2208 ec} freqk([i], S)) procurement sets in order to ensure that both the requisite internal and external competition exist.\nThe next theorem follows as an immediate corollary of Lemma 4.3: Theorem 4.2.\nGiven procurement-class based domain mechanisms with H procurement sets, the efficiency is at least a 1 \u2212 O(\u2211_{[i] \u2208 ec} freqk([i], S) / H) fraction of the optimal welfare.\nThe following corollaries are direct results of Theorem 4.2.\nAll of these corollaries either improve prior results or achieve the same welfare as prior results.\nCorollary 4.1.\nUsing GTR-2 for homogeneous double-sided auctions (Problem B.1), at most14 one procurement set must be reduced.\nSimilarly, for spatially distributed markets without strategic edges (Problem B.3), using GTR-2 improves the result of [3], where a minimum cycle including a buyer and a seller is reduced.\nCorollary 4.2.\nUsing GTR-2 for spatially distributed markets without strategic edges, at most one cycle per connected component15 will be reduced.\nFor supply chains (Problem B.5), using GTR-2 improves the result of [2, 4], similarly to Corollary 4.2.\nCorollary 4.3.\nUsing GTR-2 for supply chains, at most one cycle per connected component16 will be reduced.\nThe following corollary solves the open problem of [3].\nCorollary 4.4.\nFor distributed markets on n nodes with strategic agents and paths of bounded length K (Problem B.6) it suffices to remove at most K \u2217 n procurement sets.\nProof.\nSketch: These removals create at least K spanning trees, hence we can disjointly cover every remaining procurement set.\nThis improves on the naive algorithm of reducing n^2 procurement sets.\nWe provide results for two special cases of double-sided CA with single-value players (Problem B.8).\n14 It is possible that no reductions will be made, for instance when there is a non-trading player who provides the requisite external competition.\n15 Similar
to the double-sided auctions, sometimes there will be enough competition without a reduction.\n16 Similar to the double-sided auctions, sometimes there will be enough competition without a reduction.\nCorollary 4.5.\nIf there are at most M different kinds of procurement sets, it suffices to remove M procurement sets.\nCorollary 4.6.\nIf there are K types of goods and each procurement set consists of at most one of each type, it suffices to remove at most K procurement sets.\n5.\nCONCLUSIONS AND FUTURE WORK In this paper we presented a general solution procedure called Generalized Trade Reduction (GTR).\nGTR accepts an IR and IC mechanism as an input and outputs a mechanism that is IR, IC and BB.\nThe output mechanism achieves welfare that is close to optimal for a wide range of domains.\nThe GTR procedure improves on existing results such as homogeneous double-sided auctions, distributed markets, and supply chains, and solves several open problems such as distributed markets with strategic transportation edges and bounded paths, combinatorial double-sided auctions with bounded-size procurement sets, and combinatorial double-sided auctions with a bounded number of procurement sets.\nThe question of the quality of the welfare approximation, both in general domains and in class domains that are not procurement-class domains, is an important and interesting open question.\nWe also leave open the question of upper bounds on the quality of the welfare approximation.\nAlthough we know that it is impossible to have IR, IC and BB in an efficient mechanism, it would be interesting to have an upper bound on the approximation to welfare achievable in an IR, IC and BB mechanism.\nThe GTR procedure outputs a mechanism which depends on a set X \u2282 N.
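As a toy illustration of the kind of trade reduction discussed above, the following sketch simulates a McAfee-style [13] reduction in a homogeneous double-sided auction (Problem B.1): the lowest-value procurement set is removed and its buyer and seller values serve as prices for the remaining trades. This is a simplified stand-in, not the paper's GTR-1/GTR-2 procedure; the function name and the pairing rule are our own illustrative assumptions.

```python
# Hypothetical sketch (not the paper's GTR procedure): trade reduction in a
# homogeneous double-sided auction where procurement sets pair the highest
# remaining buyer with the lowest remaining seller (Problem B.1).

def trade_reduction(buyer_values, seller_values):
    """Return (trading pairs, buyer price, seller price) after reducing
    the least-valuable procurement set."""
    buyers = sorted(buyer_values, reverse=True)
    sellers = sorted(seller_values)
    # Efficient allocation: keep pairs with positive gain from trade.
    pairs = [(b, s) for b, s in zip(buyers, sellers) if b > s]
    if not pairs:
        return [], None, None
    # Reduce the lowest-value pair; its values become the trade prices, so
    # every remaining buyer pays at most her value and every remaining
    # seller receives at least his cost (IR), and the mechanism collects
    # buyer_price - seller_price >= 0 per trade (BB).
    reduced_buyer, reduced_seller = pairs[-1]
    return pairs[:-1], reduced_buyer, reduced_seller

trades, p_buy, p_sell = trade_reduction([10, 8, 5], [2, 4, 9])
# The efficient pairs are (10, 2) and (8, 4); the lowest-value pair (8, 4)
# is reduced, leaving one trade at buyer price 8 and seller price 4.
```

The welfare lost is exactly the value of the one reduced procurement set, matching the spirit of Corollary 4.1's "at most one" bound for this domain.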
Another interesting question is what the quality of approximation is when X is chosen randomly from N before valuations are declared.\nAcknowledgements The authors wish to thank Eva Tardos et al. for sharing their results with us.\nThe authors also wish to thank the anonymous reviewers for their helpful comments.\n6.\nREFERENCES\n[1] A. Archer and E. Tardos.\nFrugal path mechanisms.\nIn Proceedings of the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2002.\n[2] M. Babaioff and N. Nisan.\nConcurrent Auctions Across the Supply Chain.\nJournal of Artificial Intelligence Research, 2004.\n[3] M. Babaioff, N. Nisan and E. Pavlov.\nMechanisms for a Spatially Distributed Market.\nIn Proceedings of the 5th ACM Conference on Electronic Commerce, 2004.\n[4] M. Babaioff and W. E. Walsh.\nIncentive-Compatible, Budget-Balanced, yet Highly Efficient Auctions for Supply Chain Formation.\nIn Proceedings of the 4th ACM Conference on Electronic Commerce, 2003.\n[5] Y. Bartal, R. Gonen and P. La Mura.\nNegotiation-range mechanisms: exploring the limits of truthful efficient markets.\nIn Proceedings of the 5th ACM Conference on Electronic Commerce (EC '04), 2004.\n[6] L. Blume, D. Easley, J. Kleinberg and E. Tardos.\nTrading Networks with Price-Setting Agents.\nIn Proceedings of the 8th ACM Conference on Electronic Commerce, 2007.\n[7] R. Cavallo.\nOptimal decision-making with minimal waste: Strategyproof redistribution of VCG payments.\nIn Proceedings of the 5th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2006).\n[8] E. H. Clarke.\nMultipart Pricing of Public Goods.\nPublic Choice, 1971, vol. 2, pp. 17-33.\n[9] L. Y. Chu and Z.-J. M. Shen.\nAgent Competition Double Auction Mechanism.\nManagement Science, vol. 52(8), 2006.\n[10] T. Groves.\nIncentives in teams.\nEconometrica, 1973, vol. 41, pp. 617-631.\n[11] D. Lehmann, L. I. O'Callaghan, and Y.
Shoham.\nTruth Revelation in Approximately Efficient Combinatorial Auctions.\nJournal of the ACM, 2002, vol. 49(5), pp. 577-602.\n[12] H. Leonard.\nElicitation of Honest Preferences for the Assignment of Individuals to Positions.\nJournal of Political Economy, 1983.\n[13] R. P. McAfee.\nA Dominant Strategy Double Auction.\nJournal of Economic Theory, vol. 56, pp. 434-450, 1992.\n[14] A. Mu'alem and N. Nisan.\nTruthful Approximation Mechanisms for Restricted Combinatorial Auctions.\nIn Proceedings of AAAI 2002.\n[15] R. B. Myerson and M. A. Satterthwaite.\nEfficient Mechanisms for Bilateral Trading.\nJournal of Economic Theory, vol. 29, pp. 265-281, 1983.\n[16] R. Roundy, R. Chen, G. Janakiraman and R. Q. Zhang.\nEfficient Auction Mechanisms for Supply Chain Procurement.\nSchool of Operations Research and Industrial Engineering, Cornell University, 2001.\n[17] W. Vickrey.\nCounterspeculation, Auctions and Competitive Sealed Tenders.\nJournal of Finance, 1961, vol. 16, pp. 8-37.\nAPPENDIX A. CALCULATING THE OPTIMAL DIVIDING FUNCTION IN PROCUREMENT CLASS DOMAINS IN POLYNOMIAL TIME In this section we show how to calculate the optimal dividing function for procurement-class domains in polynomial time.\nWe first define a special dividing function D0 which is easy to calculate.\nWe define the dividing function D0 recursively as follows: At stage j, D0 divides the trading players into two sets Aj and \u00afAj s.t.
\u2022 Aj is a procurement set.\n\u2022 \u00afAj can be divided into a disjoint union of procurement sets.\n\u2022 Aj has minimal value among all possible such partitions.\nDefine sj = Aj and recursively invoke D0 on \u00afAj until \u00afAj = \u2205.\nWe now prove that D0 is the required dividing function.\nLemma A.1.\nFor procurement-class domains, D0 is an optimal dividing function.\nProof.\nSince the domain is a procurement-class domain, for every reduced procurement set the set of players which achieve competition (either internal or external) is fixed.\nTherefore, the number of procurement sets which are reduced is independent of the dividing function D.\nSince the number of reductions is fixed, welfare is optimized by reducing the procurement sets with the least value, and this is exactly what D0 does.\nB. PROBLEMS AND EXAMPLES For completeness we present in this section the formal definitions of the problems that we use to illustrate our mechanism.\nThe first problem that we define is the double-sided auction with homogeneous goods.\nProblem B.1.\nDouble-sided auction with homogeneous goods: There are m sellers, each of which has a single good (all goods are identical), and n buyers, each of which is interested in receiving a good.\nWe denote the set of sellers by S and the set of buyers by B.
Every player i \u2208 S \u222a B (both buyers and sellers) has a value vi for the good.\nIn this model a procurement set consists of a single buyer and a single seller, i.e., |s| = 2.\nThe value of a procurement set is W(s) = vj \u2212 vi where j \u2208 B and i \u2208 S, i.e., the gain from trade.\nIf procurement sets are created by matching the highest-value buyer to the lowest-value seller, then [13]'s deterministic trade reduction mechanism17 reduces the lowest-value procurement set.\nA related model is the pair related costs model [9].\nProblem B.2.\nThe pair related costs: A double-sided auction (Problem B.1) in which every pair of players i \u2208 S and j \u2208 B has a related cost F(i, j) \u2265 0 in order to trade.\nF(i, j) is a friction cost which should be minimized in order to maximize welfare.\n[9] defines two budget-balanced mechanisms for this case.\nOne of [9]'s mechanisms has the set of buyers B as the X set for the X-external mechanism and the other has the set of sellers S as the X set.\nA similar model is the spatially distributed markets (SDM) model [3], in which a graph imposes relationships on the cost.\nProblem B.3.\nSpatially distributed markets: There is a graph G = (V, E) such that each v \u2208 V has a set of sellers Sv and a set of buyers Bv.\nEach edge e \u2208 E has an associated cost, which is the cost of transporting a single unit of good along the edge.\nThe edges are non-strategic but all players are strategic.\n[3] defines a budget balanced mechanism for this case.\nOur paper improves on [3]'s result.\nAnother graph model is the model defined in [6].\nProblem B.4.\nTrading Networks: Given a graph and buyers and sellers who are situated on nodes of the graph, all trade must pass through a trader.\nIn this case procurement sets are of the form (buyer, seller, trader), where the possible sets of this form are defined by the graph.\nThe supply chain model [2, 4] can be seen as a generalization of [6] in which
procurement sets consist of the form (producer, consumer, trader1, ... , traderk).\n17 It is also possible to randomize the reduction of procurement sets so as to achieve an expected budget of zero, similarly to [13]; the details are straightforward and omitted.\nProblem B.5.\nSupply Chain: There is a set D of agents, a set G of goods, and a graph G = (V, E) which defines the possible trading relationships.\nAgents can require an input of multiple quantities of goods in order to output a single good.\nA producer-type player can produce goods out of nothing, a consumer has a valuation, and an entire chain of interim traders is necessary to create a viable procurement set.\n[2, 4] consider unique manufacturing technology, in which the graph defining the possible relationships is a tree.\nAll of the above problems are procurement-class domains.\nWe also consider several problems which are not procurement-class domains, where the questions of budget balance have generally been left as open problems.\nAn open problem raised in [3] is the SDM model in which the edges are strategic.\nProblem B.6.\nSpatially distributed markets with strategic edges: There is a graph G = (V, E) such that each v \u2208 V has a set of sellers Sv and a set of buyers Bv.\nEach edge e \u2208 E has an associated cost, which is the cost of transporting a single unit of good along the edge.\nEach buyer, seller and edge has a value for the trade, i.e., all entities are strategic.\n[2, 4] left open the question of budget balanced mechanisms for supply chains where there is no unique manufacturing technology.\nIt is easy to see that this problem is not a procurement-class domain.\nAnother interesting problem is transport networks.\nProblem B.7.\nTransport networks: A graph G = (V, E) where the edges are strategic players with costs, and the goal is to find a minimum-cost transportation route between a pair of privileged nodes Source, Target \u2208 V.\nIt was shown in [1] that the efficient allocation can have a budget deficit
that is linear in the number of players.\nClearly, this problem is not a procurement-class domain, and [1] left the question of a budget balanced mechanism open.\nAnother non procurement-class based domain is the double-sided combinatorial auction (CA) with single-value players.\nProblem B.8.\nDouble-sided combinatorial auction (CA) with single-value players: There exists a set S of sellers, each selling a single good.\nThere also exists a set B of buyers, each interested in bundles in 2^S18.\nThere are two variants of this problem.\nIn the single-minded case each buyer has a positive value for only a single subset, whereas in the multi-minded case each buyer can have multiple bundles with positive valuation but all of the values are the same.\nIn both cases we assume free disposal, so that all bundles containing the desired bundle have the same value for the buyer.\nWe also consider problems that are non-class domains.\nProblem B.9.\nDouble-sided combinatorial auction (CA) with general multi-minded players: Same as Problem B.8, but each buyer can have multiple bundles with positive valuation which are not necessarily the same.\n18 We abuse notation and identify the seller with the good.\nC.
COMPARING DIFFERENT CHOICES OF X The choice of X can have a large impact on the welfare (and revenue) of the reduced mechanism, and therefore the question arises of how one should choose the set X.\nAs the X-external mechanism is required to maintain IC, clearly the choice of X cannot depend on the values of the players, as otherwise the reduced mechanism would not be truthful.\nIn this section we motivate the choice of small X sets for procurement-class domains and give intuition that this may also be the case for some other domains.\nWe start by illustrating the effect of the set X on the welfare and revenue in the double-sided auction with homogeneous goods (Problem B.1).\nSimilar examples can be constructed for the other problems defined in Appendix B.\nThe following example shows an effect on the welfare.\nExample C.1.\nThere are two buyers and two sellers and two non-intersecting (incomparable) sets X = {buyers} and Y = {sellers}.\nIf the values of the buyers are 101, 100 and those of the sellers are 150, 1, then the X-external mechanism will yield a gain from trade of 0 and the Y-external mechanism will yield a gain from trade of 100.\nConversely, if the buyers' values are 100, 1 and the sellers' are 2, 3, the X-external mechanism will yield a gain from trade of 98 and the Y-external mechanism will yield a gain from trade of zero.\nThe example clearly shows that the difference between the X-external and the Y-external mechanism is unbounded, although, as shown above, the fraction each of them reduces can be bounded and therefore the multiplicative ratio between them can be bounded (as a function of the number of trades).\nOn the revenue side we cannot even bound the ratio, as seen from the following example: Example C.2.\nConsider k buyers with value 100 and k + 1 sellers with value 1.\nIf X = {buyers} then there is no need to reduce any trade; all of the buyers receive the good and pay 1, and k of the sellers sell, each receiving 1.\nThis yields a net revenue of zero.\nIf Y
= {sellers} then one must reduce a trade!\nThis means that all of the buyers pay 100 while all of the sellers still receive 1.\nThe revenue is then 99k.\nSimilarly, an example can be constructed that yields much higher revenue for the X-external mechanism as compared to the Y-external mechanism.\nThe above examples refer to sets X and Y which do not intersect and are incomparable.\nThe following theorem compares the X-external and Y-external mechanisms for procurement-class domains where X is a subset of Y.\nTheorem C.1.\nFor procurement-class domains, if X \u2282 Y and for any s \u2208 S, s \u2229 X \u2229 Y = \u2205, then: 1.\nThe efficiency of the X-external mechanism in GTR-1 (and hence GTR-2) is at least that of the Y-external mechanism.\n2.\nAny winning player that wins in both the X-external and Y-external mechanisms pays no less in the Y-external than in the X-external, and therefore the ratio of budget to welfare is no worse in the Y-external than in the X-external.\nProof.\n1.\nFor any dividing function D, if there is a procurement set sj that is reduced in the X-external mechanism there are two possible reasons: (a) sj lacks external competition in the X-external mechanism.\nIn this case sj lacks external competition in the internal mechanism.\n(b) sj has all the required external competition in the X-external mechanism.\nIn this case sj has all the required internal competition in the Y-external mechanism by Lemma 3.1, but might lack some external competition for sj \u222a {Y \\ X} and be reduced.\n2.\nThis follows from the fact that for any ordering D, any procurement set s that is reduced in the X-external mechanism is also reduced in the Y-external mechanism.\nTherefore, the critical value is no less in the Y-external mechanism than in the X-external mechanism.\nRemark C.1.\nFor any two sets X, Y it is easy to build an example in which the X-external and Y-external mechanisms reduce the same procurement sets, so the inequality is weak.\nTheorem C.1 shows an inequality in welfare as well
as for payments, but it is easy to construct an example in which the revenue can increase for X as compared to Y, as well as the opposite.\nThis suggests that in general we want X to be as small as possible, although in some domains it is not possible to compare different X's.","lvl-3":"Generalized Trade Reduction Mechanisms\nABSTRACT\nWhen designing a mechanism there are several desirable properties to maintain such as incentive compatibility (IC), individual rationality (IR), and budget balance (BB).\nIt is well known [15] that it is impossible for a mechanism to maximize social welfare whilst also being IR, IC, and BB.\nThere have been several attempts to circumvent [15] by trading welfare for BB, e.g., in domains such as double-sided auctions [13], distributed markets [3] and supply chain problems [2, 4].\nIn this paper we provide a procedure called Generalized Trade Reduction (GTR) for single-value players, which, given an IR and IC mechanism, outputs a mechanism which is IR, IC and BB with a loss of welfare.\nWe bound the welfare achieved by our procedure for a wide range of domains.\nIn particular, our results improve on existing solutions for problems such as double-sided markets with homogeneous goods, distributed markets and several kinds of supply chains.\nFurthermore, our solution provides budget balanced mechanisms for several open problems such as combinatorial double-sided auctions and distributed markets with strategic transportation edges.\n1.\nINTRODUCTION\nWhen designing a mechanism there are several key properties that are desirable to maintain.\nSome of the more important ones are individual rationality (IR) - to make it worthwhile for all players to participate, incentive compatibility (IC) - to give players an incentive to report their true value to the mechanism, and budget balance (BB) - not to run the mechanism at a loss.\nIn many mechanisms the goal function that a mechanism designer attempts to maximize is the social welfare -
the total benefit to society.\nHowever, it is well known from [15] that any mechanism that maximizes social welfare while maintaining individual rationality and incentive compatibility perforce runs a deficit, i.e., is not budget balanced.\nOf course, for many applications of practical importance we lack the will and the capability to allow the mechanism to run a deficit, and hence one must balance the payments made by the mechanism.\nTo maintain the BB property in an IR and IC mechanism it is necessary to compromise on the optimality of the social welfare.\n1.1 Related Work and Specific Solutions\nThere have been several attempts to design budget balanced mechanisms for particular domains.\nFor instance, consider double-sided auctions where both the buyers and sellers are strategic and the goods are homogeneous [13] (or heterogeneous [5]).\n[13] developed a mechanism that, given the valuations of buyers and sellers, produces an allocation (the trading players) and a matching between buyers and sellers such that the mechanism is IR, IC, and BB while retaining most of the social welfare.\nIn the distributed markets problem (and closely related problems) goods are transported between geographic locations while incurring some constant cost for transportation.\n[16, 9, 3] present mechanisms that approximate the social welfare while achieving an IR, IC and BB mechanism.\nFor supply chain problems, [2, 4] bound the loss of social welfare that is necessary to inflict on the mechanism in order to achieve the desired combination of IR, IC, and BB.\nDespite the works discussed above, the question of how to design a general mechanism that achieves IR, IC, and BB independently of the problem domain remains open.\nFurthermore, there are several domains where the question of how to design an IR, IC and BB mechanism which approximates the social welfare remains an open problem.\nFor example, in the important domain of combinatorial double-sided auctions there
is no known result that bounds the loss of social welfare needed to achieve budget balance.\nAnother interesting example is the open question left by [3]: how can one bound the loss in social welfare that is needed to achieve budget balance in an IR and IC distributed market where the transportation edges are strategic?\nNaturally, an answer to the BB distributed market with strategic edges has vast practical implications, for example for transportation networks.\n1.2 Our Contribution\nIn this paper we unify all the problems discussed above (both the solved and the open ones) into one solution concept procedure.\nThe solution procedure is called Generalized Trade Reduction (GTR).\nGTR accepts an IR and IC mechanism for single-valued players and outputs an IR, IC and BB mechanism.\nThe output mechanism may suffer some welfare loss as a tradeoff of achieving BB.\nThere are problem instances in which no welfare loss is necessary, but by [15] there are problem instances in which there is welfare loss.\nNevertheless, for a wide class of problems we are able to bound the loss in welfare.\nA particularly interesting case is one in which the input mechanism is an efficient allocation.\nIn addition to unifying many of the BB problems under a single solution concept, the GTR procedure improves on existing results and solves several open problems in the literature.\nThe existing solutions our GTR procedure improves on are homogeneous double-sided auctions, distributed markets [3], and supply chains [2, 4].\nFor the homogeneous double-sided auctions, the GTR solution procedure improves on the well-known solution of [13] by allowing for some cases with no trade reduction at all.\nFor the distributed markets [3] and the supply chains [2, 4], the GTR solution procedure improves the bound on the welfare loss, i.e., allows one to achieve an IR, IC and BB mechanism with a smaller loss of social welfare.\nRecently we also learned that the GTR procedure allows one to turn the model newly
presented in [6] into a BB mechanism.\nThe open problems that are answered by GTR are distributed markets with strategic transportation edges and bounded paths, combinatorial double-sided auctions with a bounded size of the trading group (i.e., a buyer and the sellers of the goods in its bundle), and combinatorial double-sided auctions with a bounded number of possible trading groups.\nIn addition to the main contribution described above, this paper also defines an important classification of problem domains.\nWe define class-based domains and procurement-class based domains.\nThese definitions build on the different competition "powers" of players in a mechanism, called internal and external competition.\nMost of the studied problem domains are of the more restrictive procurement-class domains, and we believe that the more general setting will inspire more research.\n2.\nPRELIMINARIES 2.1 The Model\n2.2 Competitions and Domains\n3.\nPROCUREMENT-CLASS BASED DOMAINS\n3.1 The GTR-1 Produces an X-external Mechanism that is IR, IC and BB\n4.\nNON PROCUREMENT-CLASS BASED DOMAINS\n4.1 Bounding the Welfare for Procurement-Class Based Domains and Other Cases in General Domains\n5.\nCONCLUSIONS AND FUTURE WORK\nAcknowledgements\n6.\nREFERENCES\nAPPENDIX A. CALCULATING THE OPTIMAL DIVIDING FUNCTION IN PROCUREMENT CLASS DOMAINS IN POLYNOMIAL TIME\nB. PROBLEMS AND EXAMPLES\nC.
COMPARING DIFFERENT CHOICES OF X","lvl-2":"Generalized Trade Reduction Mechanisms\nABSTRACT\nWhen designing a mechanism there are several desirable properties to maintain such as incentive compatibility (IC), individual rationality (IR), and budget balance (BB).\nIt is well known [15] that it is impossible for a mechanism to maximize social welfare whilst also being IR, IC, and BB.\nThere have been several attempts to circumvent [15] by trading welfare for BB, e.g., in domains such as double-sided auctions [13], distributed markets [3] and supply chain problems [2, 4].\nIn this paper we provide a procedure called a Generalized Trade Reduction (GTR) for single-value players, which given an IR and IC mechanism, outputs a mechanism which is IR, IC and BB with a loss of welfare.\nWe bound the welfare achieved by
our procedure for a wide range of domains.\nIn particular, our results improve on existing solutions for problems such as double-sided markets with homogeneous goods, distributed markets and several kinds of supply chains.\nFurthermore, our solution provides budget balanced mechanisms for several open problems such as combinatorial double-sided auctions and distributed markets with strategic transportation edges.\n1.\nINTRODUCTION\nWhen designing a mechanism there are several key properties that are desirable to maintain.\nSome of the more important ones are individual rationality (IR) - to make it worthwhile for all players to participate, incentive compatibility (IC) - to give players an incentive to report their true value to the mechanism, and budget balance (BB) - not to run the mechanism at a loss.\nIn many mechanisms the goal function that a mechanism designer attempts to maximize is the social welfare - the total benefit to society.\nHowever, it is well known from [15] that any mechanism that maximizes social welfare while maintaining individual rationality and incentive compatibility perforce runs a deficit, i.e., is not budget balanced.\nOf course, for many applications of practical importance we lack the will and the capability to allow the mechanism to run a deficit, and hence one must balance the payments made by the mechanism.\nTo maintain the BB property in an IR and IC mechanism it is necessary to compromise on the optimality of the social welfare.\n1.1 Related Work and Specific Solutions\nThere have been several attempts to design budget balanced mechanisms for particular domains.\nFor instance, consider double-sided auctions where both the buyers and sellers are strategic and the goods are homogeneous [13] (or heterogeneous [5]).\n[13] developed a mechanism that, given the valuations of buyers and sellers, produces an allocation (the trading players) and a matching between buyers and sellers such that the mechanism is IR,
IC, and BB while retaining most of the social welfare. In the distributed markets problem (and closely related problems) goods are transported between geographic locations while incurring some constant cost for transportation. [16, 9, 3] present mechanisms that approximate the social welfare while achieving an IR, IC and BB mechanism. For supply chain problems, [2, 4] bound the loss of social welfare that must be inflicted on the mechanism in order to achieve the desired combination of IR, IC, and BB. Despite the works discussed above, the question of how to design a general mechanism that achieves IR, IC, and BB independently of the problem domain remains open. Furthermore, there are several domains where the question of how to design an IR, IC and BB mechanism which approximates the social welfare remains an open problem. For example, in the important domain of combinatorial double-sided auctions there is no known result that bounds the loss of social welfare needed to achieve budget balance. Another interesting example is the open question left by [3]: how can one bound the loss in social welfare that is needed to achieve budget balance in an IR and IC distributed market where the transportation edges are strategic? Naturally, an answer to the BB distributed market with strategic edges has vast practical implications, for example for transportation networks.

1.2 Our Contribution

In this paper we unify all the problems discussed above (both the solved ones and the open ones) into one solution procedure, called Generalized Trade Reduction (GTR). GTR accepts an IR and IC mechanism for single-valued players and outputs an IR, IC and BB mechanism. The output mechanism may suffer some welfare loss as the tradeoff of achieving BB. There are problem instances in which no welfare loss is necessary, but by [15] there are problem instances in which there is welfare loss. Nevertheless, for a wide class of problems we are
able to bound the loss in welfare. A particularly interesting case is one in which the input mechanism is an efficient allocation. In addition to unifying many of the BB problems under a single solution concept, the GTR procedure improves on existing results and solves several open problems in the literature. The existing solutions our GTR procedure improves upon are homogeneous double-sided auctions, distributed markets [3], and supply chains [2, 4]. For homogeneous double-sided auctions the GTR procedure improves on the well known solution of [13] by allowing for some cases with no trade reduction at all. For distributed markets [3] and supply chains [2, 4] the GTR procedure improves the bound on the welfare loss, i.e., it allows one to achieve an IR, IC and BB mechanism with a smaller loss of social welfare. Recently we also learned that the GTR procedure allows one to turn the model newly presented in [6] into a BB mechanism. The open problems that are answered by GTR are distributed markets with strategic transportation edges and bounded paths, combinatorial double-sided auctions with a bounded size of the trading group (i.e., a buyer and the sellers of the goods in its bundle), and combinatorial double-sided auctions with a bounded number of possible trading groups. In addition to the main contribution described above, this paper also defines an important classification of problem domains. We define class based domains and procurement class based domains. These definitions build on the different competition "powers" of players in a mechanism, called internal and external competition. Most of the studied problem domains are of the more restrictive procurement class kind, and we believe that the more general setting will inspire more research.

2. PRELIMINARIES

2.1 The Model

In this paper we design a method which, given any IR and IC mechanism, outputs a mechanism that maintains the IC and IR properties while achieving BB. For some classes of mechanisms
we bound the competitive approximation of welfare. In our model there are N players divided into sets of trade. The sets of trade are called procurement sets and are defined (following [2]) as follows:

DEFINITION 2.1. A procurement set s is the smallest set of players that is required for trade to occur.

For example, in a double-sided auction, a procurement set is a pair consisting of a buyer and a seller. In a combinatorial double-sided auction a procurement set can consist of a buyer and several sellers. We mark the set of all procurement sets as S and assume that any allocation is a disjoint union of procurement sets. Each player i, 1 ≤ i ≤ n, assigns a real value vi(s) to each possible procurement set s ∈ S. Namely, vi(s) is the valuation of player i for procurement set s. We assume that for each player i, vi(s) is i's private value and that i is a single-value player, meaning that if vi(sj) > 0 then for every other sk, k ≠ j, either vi(sk) = vi(sj) or vi(sk) = 0. For ease of notation we will write vi for the value of player i on any procurement set s such that vi(s) > 0. The set Vi ⊆ ℝ is the set of all possible valuations vi. The set of all possible valuations of all the players is denoted by V = V1 × ... × Vn. Let v−i = (v1, ..., vi−1, vi+1, ..., vn) be the vector of valuations of all the players besides player i, and let V−i be the set of all possible such vectors. We denote by W(s) the value of a procurement set s ∈ S, defined as W(s) = Σ_{i∈s} vi(s) + F(s), where F is some function that assigns a constant to each procurement set. For example, F can be a (non-strategic) transportation cost in a distributed market problem. Let the size of a procurement set s be denoted by |s|. It is assumed that any allocation is a disjoint union of procurement sets, and therefore an allocation partitions the players into two sets: a set of players that trade and a set of players that do not trade. The
paper denotes by O the set of possible partitions of an allocation A into procurement sets. The value W(A) of an allocation A is the sum of the values of its most efficient partition into procurement sets, that is, W(A) = max_{S′∈O} Σ_{s∈S′} W(s). This means that W(A) = Σ_{i∈A} vi + max_{S′∈O} Σ_{s∈S′} F(s). In the case where F is identically zero, W(A) = Σ_{i∈A} vi. An optimal partition S*(A) is a partition that maximizes the above sum for an allocation A. Let the value of A be W(S*(A)) (note that the value can depend on F). We say that the allocation A is efficient if there is no other allocation with a higher value. The efficiency of an allocation Â is its value relative to that of the efficient allocation. We assume w.l.o.g. that there are no two allocations with the same value. A mechanism M defines allocation and payment rules, M = (R, P). A payment rule P decides i's payment pi, where P is a function P: V → ℝ^N. We work with mechanisms in which players are required to report their values. An example of such a mechanism is the VCG mechanism [17, 8, 10]. The reported value bi ∈ Vi of player i is called a bid and might be different from his private value vi. Let b ∈ V be the bids of all players. An allocation rule R decides the allocation according to the reported values b ∈ V. We make the standard assumption that players have quasi-linear utility, so that when player i trades and pays pi his utility is ui(vi, b−i) = vi − pi, ui: V → ℝ. We also assume that players are rational utility maximizers. Mechanism M is Budget Balanced (BB) if Σ_{i∈N} pi ≥ 0 for any bids b ∈ V. M is Incentive Compatible (IC) in dominant strategies if for any player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ ui(b), meaning that for any player i, bidding vi maximizes i's utility over all possible bids of the other players. M is (ex-post) Individually Rational (IR) if for any player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ 0, meaning that for all possible bids of
the other players, player i's utility is non-negative. Note that since our mechanisms are normalized IR, if a player does not trade then the player pays 0 and has utility 0. Our algorithm, presented in the next section, employs a commonly used payment scheme, the critical value payment scheme. We denote by Ci the critical value price computed for player i.

2.2 Competitions and Domains

In this paper we present two generalized trade reduction algorithms. The two algorithms are such that, given an IR and IC mechanism M that solves a problem in some domain (different domains are formally defined below), they turn M into an IR, IC and BB mechanism. The algorithms find procurement sets and remove them in iterations until the "right conditions" are fulfilled and the mechanism M is turned into a BB one. The "right conditions" that need to be met are conditions of competition among the players in the given problem. The following definitions lead us to the competition conditions we are looking for. For example, in a (homogeneous) double-sided auction (see problem B.1) the replacement set for any buyer is simply any other buyer. In an auction for transportation slots (see problem B.
7), the replacement set of an edge is a path between the endpoints of the edge. Note that a set can replace a single player. Furthermore, this relationship is transitive but not necessarily symmetric: if i is a replacement set for j, it is not necessarily true that j is a replacement set for i. We will assume, without loss of generality, that there are no ties between the values of any allocations, and in particular no ties between the values of procurement sets. In case of ties, these can be broken by using the identities of the players. So for any allocation A, procurement set s and player i with external competition Ei(A, s), there exists exactly one set representing the maximally valued external competition. The following defines the required competition needed to maintain IC, IR and BB. The set X denotes this competition and is closed under replacement. In the remainder of the paper we will assume that all of the sets which define competition in a mechanism are closed under replacement.

1. Each player i ∈ X has external competition.
2. Each player i ∉ X has internal competition.
3. For all players i1, ..., it ∈ s \ X there exist Ri1(A, s), ..., Rit(A, s) such that for every iz ≠ iq, Riz(A, s) ∩ Riq(A, s) = ∅.
4. For every procurement set s ∈ S it holds that s ∩ X ≠ ∅.

For general domains the choice of X can be crucial. In fact, even for the same domain the welfare (and revenue) can vary widely depending on how X is defined. In Appendix C we give an example where two possible choices of X yield greatly different results. Although we show that X should be chosen as small as possible, we do not give any characterization of the optimality of X, and this is an important open problem. Our two generalized trade reduction algorithms will ensure that for any allocation we have the desired types of competition. So given a mechanism M that is IC and IR with allocation A, the goal of the algorithms is to turn M into an X-external
mechanism. The two generalized trade reduction algorithms utilize a dividing function D which divides allocation A into disjoint procurement sets. The algorithms order the procurement sets defined by D in order of increasing value. For any procurement set there is a desired type of competition that depends only on the players who compose the procurement set. The generalized trade reduction algorithms go over the procurement sets in order (from the smallest to the largest) and remove any procurement set that does not have the desired competition when the set is reached. The reduction of procurement sets will also be referred to as a trade reduction. Formally,

DEFINITION 2.8. D is a dividing function if for any allocation A and the players' value vector v, D divides the allocation into disjoint procurement sets s1, ..., sk s.t. ∪sj = A and for any player i with value vi, if i ∈ sj1 and t ∈ sj2 s.t.

The two generalized trade reduction algorithms presented accept problems in different domains. The formal domain definitions follow. Intuitively, this means that replacement sets are of size 1 and the replacing relationship is symmetric. We define the class of a player i as the set of the player's replacement sets and denote the class of player i by [i]. It is important to note that since replacement is a transitive relation, and since class domains also impose symmetry on the replacement sets, the class [i] of a player i is actually an equivalence class for i. The double-sided combinatorial auction consisting of a single multi-minded buyer and multiple sellers of heterogeneous goods (see problem B.
9), is a class based domain (as we have a single buyer) but not a procurement-class based domain. In this case, the buyer is a class and each set of sellers of the same good is a class. However, for a buyer there is no bijection between the different procurement sets of the bundles of goods the buyer is interested in. The spatially-distributed market with strategic edges (see problem B.6) is not a class-based domain (and therefore not a procurement-class domain). For example, even for a fixed buyer and a fixed seller there are two different procurement sets consisting of different paths between the buyer and the seller. The next sections present two algorithms, GTR-1 and GTR-2. GTR-1 accepts problems in procurement-class based domains; its properties are proved with a general dividing function D. The GTR-2 algorithm accepts problems in any domain. We prove GTR-2's properties with a specific dividing function D0, which will be defined in section 4. Since the dividing function can have a large practical impact on welfare (and revenue), the generality of GTR-1 (albeit in special domains) can be an important practical consideration.

3. PROCUREMENT-CLASS BASED DOMAINS

This section focuses on problems in procurement-class based domains. For this domain, we present an algorithm called GTR-1 which, given a mechanism that is IR and IC, outputs a mechanism with reduced welfare which is IR, IC and budget balanced. Although procurement class domains appear to be a relatively restricted model, in fact many domains studied in the literature are procurement class domains.

• Double-sided auctions with homogeneous goods [13] (problem B.
1). In this domain there are two classes: the class of buyers and the class of sellers. Each procurement set consists of a single buyer and a single seller. Since every (buyer, seller) pair is a valid procurement set (albeit possibly with negative value), this is a procurement class domain. In this domain the constant assigned to the procurement sets is F = 0.

• Spatially distributed markets with non-strategic edges [3, 9] (problem B.3). As in double-sided auctions with homogeneous goods, there are two classes in this domain, the class of buyers and the class of sellers, with procurement sets consisting of a single buyer and a single seller. The sellers and buyers are nodes in a graph, and the function F is the distance between two nodes (the length of the edge), which represents transport costs. These costs differ between different (buyer, seller) pairs.

• Supply chains [2, 4] (problem B.5). The assumption of a unique manufacturer in [2, 4] can best be understood as turning general supply chains (which need not be a procurement class domain) into a procurement class domain.

• Single-minded combinatorial auctions [11] (problem B.8). In this context each seller sells a single good and each buyer wants a set of goods. The classes are the sets of sellers selling the same good, as well as the buyers who desire the same bundle. A procurement set consists of a single buyer together with a set of sellers who can satisfy that buyer.

A definition of the mechanism follows:

DEFINITION 3.1. The GTR-1 algorithm: given a mechanism M, a set X ⊆ N which is closed under replacement, a dividing function D, and an allocation A, GTR-1 operates as follows:

1. Use the dividing function D to divide A into procurement sets s1, ..., sk ∈ S.
2. Order the procurement sets by increasing value.

3. For each sj, starting from the lowest value procurement set: if for every i ∈ sj ∩ X there is external competition and for every i ∈ sj \ X there is internal competition, then keep sj; otherwise, reduce the trade sj.

4. All trading players are charged the critical value for trading. All non-trading players are charged zero.

3.1 The GTR-1 Produces an X-external Mechanism that is IR, IC and BB

In this subsection we prove that the GTR-1 algorithm produces an X-external mechanism that is IR, IC and BB. To prove GTR-1's properties we make use of theorem 3.1, which is a well known result (e.g., [14, 11]). Theorem 3.1 characterizes necessary and sufficient conditions for a mechanism for single value players to be IR and IC. So for normalized IR and IC mechanisms, a bid monotonic allocation rule uniquely defines the critical values for all the players, and thus the payments.

OBSERVATION 3.1. Let M1 and M2 be two IR and IC mechanisms with the same allocation rule. Then M1 and M2 must have the same payment rule.

In the following we prove that the X-external GTR-1 algorithm produces an IR, IC and BB mechanism, but first a subsidiary lemma is shown. We start by proving IR and IC:

PROOF. By the definition of the critical value pricing scheme 2.2 and the GTR-1 algorithm 3.1, it follows that for every trading player i, vi ≥ Ci. By the GTR-1 algorithm 3.1, non-trading players have a payment of zero. Thus for every player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ 0, meaning the produced X-external mechanism is IR. As the X-external GTR-1 algorithm is IR and applies the critical value payment scheme, according to theorem 3.1, in order to show that the produced X-external mechanism with the critical value payment scheme is IC it remains to show that the produced mechanism's allocation rule is bid monotonic. Since GTR-1 orders the procurement sets according to increasing value, if player i increases his bid from bi to b′i > bi then for any dividing function D the procurement set s containing i always appears later with the bid b′i than
with the bid bi. So the likelihood of competition can only increase if i appears in later procurement sets. This follows as GTR-1 can reduce more of the lower value procurement sets, which will result in more non-trading players. Therefore, if s has the required competition and is not reduced with bi, then it will have the required competition with b′i and will not be reduced.

Finally we prove BB:

PROOF. In order to show that the produced mechanism is BB, we show that each procurement set that is not reduced has a positive budget (i.e., the sum of payments is positive). Let s ∈ S be a procurement set that is not reduced. Let i ∈ s ∩ X; then according to the X-external definition 2.7 and the GTR-1 algorithm 3.1, i has external competition. Assume w.l.o.g. that i is the only player with external competition in s and all other players j ≠ i, j ∈ s have internal competition. Let A be the allocation after the procurement set reductions by the GTR-1 algorithm. According to the definition of external competition 2.5, there exists a set Ei(A, s) ⊆ N \ A such that i ∪ Ei(A, s) ∈ S and W(i ∪ Ei(A, s)) ≥ 0. Since W(i ∪ Ei(A, s)) = vi + W(Ei(A, s)), it follows that vi ≥ −W(Ei(A, s)). By the critical value pricing scheme definition 2.2, if player i bids any less than −W(Ei(A, s)) he will not have external competition and therefore will be removed from trading. Thus i pays no less than min −W(Ei(A, s)). Since all other players j ∈ s have internal competition, their critical price cannot be less than the value of their maximal internal competitor (set), i.e., max W(Rj(A, s)). If any player j ∈ s bids less than its maximal internal competitor (set), then he will not be in s but his maximal internal competitor (set) will. As a possible Ei(A, s) is ∪_{j∈s} Rj(A, s), one can bound the maximal value of i's external competition W(Ei(A, s)) by the sum of the maximal values of the internal competition of the rest of the players in s, i.e., Σ_{j∈s} max W(Rj(A, s)). Therefore min −W(Ei(A, s)) = −Σ_{j∈s} max W(Rj(A, s)). As the function F is defined to be a positive constant, we get that the sum of payments for s is at least min −W(Ei(A, s)) + Σ_{j∈s} max W(Rj(A, s)) + F(s) ≥ 0, and thus s is at least budget balanced. As each procurement set that is not reduced is at least budget balanced, it follows that the produced X-external mechanism is BB.

The above two lemmas yield the following theorem:

THEOREM. For any X closed under replacement, the X-external mechanism with the critical value pricing scheme produced by the GTR-1 algorithm is an IR, IC and BB mechanism.

4. NON PROCUREMENT-CLASS BASED DOMAINS

The main reason that GTR-1 works for procurement-class domains is that each player's possibility of being reduced is monotonic. By the definition of a dividing function, if a player i ∈ sj increases his value, i can only appear in a later procurement set and hence has a higher chance of having the desired competition. Therefore, the chance of i lacking the requisite competition is decreased. Since the domain is a procurement class domain, all other players t ≠ i, t ∈ sj are also more likely to have competition, since members of their classes continue to appear before i, and hence the likelihood that i will be reduced is decreased. Since by theorem 3.1 monotonicity is a necessary and sufficient condition for the mechanism to be IC, GTR-1 is IC for procurement-class domains. However, for domains that are not procurement class domains this does not suffice, even if the domain is a class based domain. Although all members of sj continue to have the required competition, it is possible that there are members of sj who do not have analogues and therefore do not have competition. Hence i might be reduced after increasing his value, which by lemma 3.1 means the mechanism is not IC. We therefore define a different algorithm for non procurement class domains. Our modified algorithm requires a special dividing function in order to maintain the IC property. Although our restriction to this special dividing function appears stringent, the dividing function we use is a generalization of the way that procurement sets
are chosen in procurement-class based domains, e.g., [13, 16, 9, 3, 2, 4]. For ease of presentation, in this section we assume that F = 0. The dividing function for general domains is defined by looking at all possible dividing functions. For each dividing function Di and each set of bids, the GTR-1 algorithm yields a welfare that is a function of the bids and the dividing function. We denote by D0 the dividing function that divides the players into sets s.t. the welfare that GTR-1 finds is maximal. Formally, let 𝔇 be the set of all dividing functions D, and denote by GTR1(D, b̄) the welfare achieved by the mechanism produced by GTR-1 when using dividing function D and a set of bids b̄. For ease of presentation we denote D0(b) by D0 when the dependence on b is clear from the context.

REMARK 4.1. D0 is an element of the set of dividing functions, and therefore is a dividing function.

The second generalized trade reduction algorithm, GTR-2, follows:

1. Calculate the dividing function D0 as defined above.
2. Use the dividing function D0 to divide A into procurement sets s1, ..., sk ∈ S.
3. For each sj, starting from the lowest value procurement set, do the following: if for every i ∈ sj ∩ X there is external competition and there is at most one i ∈ sj that does not have internal competition, then keep sj; otherwise, reduce the trade sj.
4. All trading players are charged the critical value for trading. All non-trading players are charged zero.

We will prove that the mechanism produced by GTR-2 maintains the desired properties of IR, IC, and BB. The following lemma shows that the GTR-2 produced mechanism is IR and IC.

PROOF. By theorem 3.1 it suffices to prove that the mechanism produced by the GTR-2 algorithm is bid monotonic for every player i.
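Bid monotonicity can be seen concretely in the simplest procurement-class domain. The following is a minimal sketch, assuming a homogeneous double-sided auction (problem B.1) where each procurement set is one (buyer, seller) pair and the required competition is supplied, in the style of [13], by reducing the lowest-value trading pair; the function name and the simplified competition test are illustrative assumptions, not the paper's GTR-2:

```python
# Illustrative sketch (not the paper's GTR-2): a homogeneous double-sided
# auction, where each procurement set is a single (buyer, seller) pair.
# The simplified "competition" condition here is that the lowest-value
# trading pair is reduced so the surviving pairs have external competition.

def reduce_trades(buyer_bids, seller_bids):
    """Return trading (buyer, seller) index pairs after the trade reduction."""
    buyers = sorted(range(len(buyer_bids)), key=lambda i: -buyer_bids[i])
    sellers = sorted(range(len(seller_bids)), key=lambda j: seller_bids[j])
    # Efficient allocation: match high-value buyers to low-value sellers
    # while the gain from trade W(s) = v_buyer - v_seller is positive.
    pairs = [(b, s) for b, s in zip(buyers, sellers)
             if buyer_bids[b] - seller_bids[s] > 0]
    # Reduce the lowest-value procurement set (last in the sorted matching);
    # on an empty list the slice simply returns an empty list.
    return pairs[:-1]
```

In this sketch, raising a trading buyer's bid only moves his pair earlier in the matching (toward a higher gain from trade), so a pair that survives the reduction with bid bi also survives with any b′i > bi, which is exactly the monotonicity property the proof establishes in general.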
Suppose that i was not reduced when bidding bi; we need to prove that i will not be reduced when bidding b′i > bi. Denote by D1 = D0(b) the dividing function used by GTR-2 when i reported bi and the rest of the players reported b−i. Denote by D′1 = D0(b′i, b−i) the dividing function used by GTR-2 when i reported b′i and the rest of the players reported b−i. Denote by D̄1(b) a maximal dividing function that results in GTR-1 reducing i when i reports bi. Assume to the contrary that GTR-2 reduced i from the trade when i reported b′i; then GTR1(D′1, (b′i, b−i)) = GTR1(D̄1, b). Since D1 ∈ 𝔇 it follows that GTR1(D1, b) ≥ GTR1(D̄1, b), and therefore GTR1(D1, b) ≥ GTR1(D′1, (b′i, b−i)). However, according to the definition D′1 ∈ 𝔇, GTR-2 should not have reduced i with the dividing function D′1 and gained a greater welfare than GTR1(D1, b). Thus a contradiction arises, and GTR-2 does not reduce i from the trade when i reports b′i > bi. (In the full version GTR-2 is extended such that it suffices that there exists some time at which the third step holds; that extension is omitted from the current version due to lack of space.)

LEMMA 4.2. For any X, the X-external mechanism with the critical value pricing scheme produced by the GTR-2 algorithm is a BB mechanism.

PROOF. This proof is similar to the proof of lemma 3.3.

Combining the two lemmas above we get:

THEOREM 4.1. For any X closed under replacement, the X-external mechanism with the critical value pricing scheme produced by the GTR-2 algorithm is an IR, IC and BB mechanism.

Appendix A shows how to calculate D0 for procurement class domains in polynomial time; it is not generally known how to easily calculate D0. Creating a general method for calculating the needed dividing function in polynomial time remains an open question.

4.1 Bounding the Welfare for
Procurement-Class Based Domains and Other General Domain Cases

This section shows that in addition to producing a mechanism with the desired properties, GTR-2 also produces a mechanism that maintains high welfare. Since the GTR-2 algorithm finds a budget balanced mechanism in arbitrary domains, we are unable to bound the welfare in the general case. However, we can bound the welfare for procurement-class based domains and for a wide variety of cases in general domains, which includes many cases previously studied.

DEFINITION 4.2. Denote by freqk([i], sj) the fact that class [i] appears in procurement set sj k times, i.e., there are k members of [i] in sj.

DEFINITION 4.3. Denote by freqk([i], S) the maximal k s.t. there are k members of [i] in some sj, i.e., freqk([i], S) = max_{sj∈S} freqk([i], sj).

Let the set of equivalence classes in a procurement class based domain mechanism be ec, and let |ec| be the number of those equivalence classes. Using the definition of class appearance frequency we can bound the welfare achieved by the mechanism produced by GTR-2 for procurement class domains:

LEMMA 4.3. For procurement class domains with F = 0, the number of procurement sets that are reduced by GTR-2 is at most |ec| times the maximal frequency of each class. Formally, the maximal number of procurement sets that is reduced is O(Σ_{[i]∈ec} freqk([i], S)).

PROOF. Let D be an arbitrary dividing function. We note that by definition any procurement set sj will not be reduced if every i ∈ sj has both internal competition and external competition. Every procurement set s that is reduced has at least one player i who has no competition. Once s is reduced, all players of [i] have internal competition. So by reducing |ec| procurement sets, one per equivalence class, we cover all the remaining players with internal competition. If the maximal frequency of every equivalence class were one, then each remaining player t in a procurement set sk would also have external competition, as the internal competitors of the players t̄ ≠ t, t̄ ∈ sk are external competition for t. If freqk([t], S) players from class [t] were reduced, then there is sufficient external competition for all players in sk. Therefore it suffices to reduce O(Σ_{[i]∈ec} freqk([i], S)) procurement sets in order to ensure that both the requisite internal and external competition exist.

The next theorem follows as an immediate corollary of lemma 4.3:

THEOREM 4.2. Given procurement-class based domain mechanisms with H procurement sets, the efficiency is at least a (H − Σ_{[i]∈ec} freqk([i], S))/H fraction.

The following corollaries are direct results of theorem 4.2. All of these corollaries either improve prior results or achieve the same welfare as prior results.

COROLLARY 4.1. Using GTR-2 for homogeneous double-sided auctions (problem B.1), at most one procurement set must be reduced.14

Similarly, for spatially distributed markets without strategic edges (problem B.3), using GTR-2 improves the result of [3], where a minimum cycle including a buyer and a seller is reduced.

COROLLARY 4.2. Using GTR-2 for spatially distributed markets without strategic edges, at most one cycle per connected component will be reduced.15

For supply chains (problem B.5), using GTR-2 improves the result of [2, 4] similarly to corollary 4.2.

COROLLARY 4.3. Using GTR-2 for supply chains, at most one cycle per connected component will be reduced.16

The following corollary solves the open problem of [3]:

COROLLARY 4.4. For distributed markets on n nodes with strategic agents and paths of bounded length K (problem B.
6), it suffices to remove at most K·n procurement sets.

PROOF. (Sketch) These removals create at least K spanning trees, hence we can disjointly cover every remaining procurement set. This improves on the naive algorithm of reducing n² procurement sets.

We provide results for two special cases of double-sided CAs with single value players (problem B.8).

14 It is possible that no reductions will be made, for instance when there is a non-trading player who provides the requisite external competition.
15 Similar to double-sided auctions, sometimes there will be enough competition without a reduction.
16 Similar to double-sided auctions, sometimes there will be enough competition without a reduction.

COROLLARY 4.5. If there are at most M different kinds of procurement sets, it suffices to remove M procurement sets.

COROLLARY 4.6. If there are K types of goods and each procurement set consists of at most one of each type, it suffices to remove at most K procurement sets.

5. CONCLUSIONS AND FUTURE WORK

In this paper we presented a general solution procedure called Generalized Trade Reduction (GTR). GTR accepts an IR and IC mechanism as input and outputs a mechanism that is IR, IC and BB. The output mechanism achieves welfare that is close to optimal for a wide range of domains. The GTR procedure improves on existing results such as homogeneous double-sided auctions, distributed markets, and supply chains, and solves several open problems such as distributed markets with strategic transportation edges and bounded paths, combinatorial double-sided auctions with bounded-size procurement sets, and combinatorial double-sided auctions with a bounded number of procurement sets. The question of the quality of welfare approximation, both in general domains and in class domains that are not procurement class domains, is an important and interesting open question. We also leave open the question of upper bounds on the quality of approximation of welfare. Although we know that
it is impossible to have IR, IC and BB in an efficient mechanism, it would be interesting to have an upper bound on the approximation to welfare achievable by an IR, IC and BB mechanism. The GTR procedure outputs a mechanism which depends on a set X ⊂ N. Another interesting question is what the quality of approximation is when X is chosen randomly from N before valuations are declared.

Acknowledgements

The authors wish to thank Eva Tardos et al. for sharing their results with us. The authors also wish to express their gratitude for the helpful comments of the anonymous reviewers.

6. REFERENCES

APPENDIX A. CALCULATING THE OPTIMAL DIVIDING FUNCTION IN PROCUREMENT CLASS DOMAINS IN POLYNOMIAL TIME

In this section we show how to calculate the optimal dividing function for procurement class domains in polynomial time. We first define a special dividing function D′0 which is easy to calculate. We define D′0 recursively as follows: at stage j, D′0 divides the trading players into two sets Aj and A′j s.t.

• Aj is a procurement set.
• A′j can be divided into a disjoint union of procurement sets.
• Aj has minimal value over all possible such partitions.

Define sj = Aj and recursively invoke D′0 on A′j until A′j = ∅. We now prove that D′0 is the required dividing function.

LEMMA A.1. For procurement class domains, D0 = D′0.

PROOF. Since the domain is a procurement class domain, for every reduced procurement set the set of players which achieve competition (either internal or external) is fixed. Therefore, the number of procurement sets which are reduced is independent of the dividing function D. Since the goal is to optimize welfare, reducing the procurement sets with the least value optimizes welfare. This is achieved by D′0.

B.
PROBLEMS AND EXAMPLES\nFor completeness we present in this section the formal definitions of the problems that we use to illustrate our mechanism.\nThe first problem that we define is the double-sided auction with homogeneous goods.\nPROBLEM 13.1.\nDouble-sided auction with homogeneous goods: There are m sellers each of which have a single good (all goods are identical) and n buyers each of which are interested in receiving a good.\nWe denote the set of sellers by S and the set of buyers by B. Every player i E S U B (both buyers and sellers) has a value vi for the good.\nIn this model a procurement set consists of a single buyer and a single seller, i.e., lsl = 2.\nThe value of a procurement set is W (s) = vj--vi where j E B and i E S, i.e., the gain from trade.\nIf procurement sets are created by matching the highest value buyer to the lowest value seller then [13]'s deterministic trade reduction mechanism17 reduces the lowest value procurement set.\nA related model is the pair related costs [9] model.\nPROBLEM 13.2.\nThe pair related costs: A double-sided auction B. 
1 in which every pair of players i E S and j E B has a related cost F (i, j)> 0 in order to trade.\nF (i, j) is a friction cost which should be minimized in order to maximize welfare.\n[9] defines two budget-balanced mechanisms for this case.\nOne of [9]'s mechanisms has the set of buyers B as the X set for the X-external mechanism and the other has the set of sellers S as the X set for the X-external mechanism.\nA similar model is the spatially distributed markets (SDM) model [3] in which there is a graph imposing relationships on the cost.\nPROBLEM 13.3.\nSpatially distributed markets: there is a graph G = (V, E) such that each v E V has a set of sellers S' and a set of buyers B'.\nEach edge e E E has an associated cost which is the cost to transport a single unit of good along the edge.\nThe edges are non strategic but all players are strategic.\n[3] defines a budget balanced mechanism for this case.\nOur paper improves on [3] result.\nAnother graph model is the model defined in [6].\nPROBLEM 13.4.\nTrading Networks: Given a graph and buyers and sellers who are situated on nodes of the graph.\nAll trade must pass through a trader.\nIn this case procurement sets are of the form (buyer, seller, trader) where the possible sets of this form are defined by a graph.\nThe supply chain model [2, 4] can be seen as a generalization of [6] in which procurement sets consist of the form (producer, consumer, trader1,..., traderk).\n17It is also possible to randomize the reduction of procurements sets so as to achieve an expected budget of zero similar to [13], details are obvious and omitted.\nPROBLEM 13.5.\nSupply Chain: There is a set D of agents and a set G of goods and a graph G = (V, E) which defines possible trading relationships.\nAgents can require an input of multiple quantities of goods in order to output a single good.\nThe producer type of player can produce goods out of nothing, the consumer has a valuation and an entire chain of interim traders is necessary to 
create a viable procurement set. [2, 4] consider unique manufacturing technology, in which the graph defining possible relationships is a tree.

All of the above problems are procurement-class domains. We also consider several problems which are not procurement-class domains and for which the questions of budget balance have generally been left as open problems. An open problem raised in [3] is the SDM model in which edges are strategic.

PROBLEM B.6. Spatially distributed markets with strategic edges: There is a graph G = (V, E) such that each v ∈ V has a set of sellers S_v and a set of buyers B_v. Each edge e ∈ E has an associated cost, which is the cost to transport a single unit of good along the edge. Each buyer, seller, and edge has a value for the trade, i.e., all entities are strategic.

[2, 4] left open the question of budget-balanced mechanisms for supply chains where there is no unique manufacturing technology. It is easy to see that this problem is not a procurement class domain. Another interesting problem is transport networks.

PROBLEM B.7. Transport networks: A graph G = (V, E) where the edges are strategic players with costs, and the goal is to find a minimum-cost transportation route between a pair of privileged nodes Source, Target ∈ V. It was shown in [1] that the efficient allocation can have a budget deficit that is linear in the number of players. Clearly, this problem is not a procurement class domain, and [1] left the question of a budget-balanced mechanism open.

Another non-procurement-class domain mechanism is the double-sided combinatorial auction (CA) with single-value players.

PROBLEM B.8. Double-sided combinatorial auction (CA) with single-value players: There exists a set S of sellers, each selling a single good. There also exists a set B of buyers, each interested in bundles from 2^S.^18 There are two variants of this problem. In the single-minded case each buyer has a positive value for only a single subset, whereas in the multi-minded case each buyer can have multiple bundles with positive valuation, but all of the values are the same. In both cases we assume free disposal, so that all bundles containing the desired bundle have the same value for the buyer.

We also consider problems that are non-class domains.

PROBLEM B.9. Double-sided combinatorial auction (CA) with general multi-minded players: Same as B.8, but each buyer can have multiple bundles with positive valuation which are not necessarily the same.

18 We abuse notation and identify the seller with the good.

C. COMPARING DIFFERENT CHOICES OF X

The choice of X can have a large impact on the welfare (and revenue) of the reduced mechanism, and therefore the question arises of how one should choose the set X. As the X-external mechanism is required to maintain IC, the choice of X clearly cannot depend on the values of the players, as otherwise the reduced mechanism would not be truthful. In this section we motivate the choice of small X sets for procurement class domains and give intuition that this may also be the case for some other domains.

We start by illustrating the effect of the set X on the welfare and revenue in the double-sided auction with homogeneous goods (Problem B.1). Similar examples can be constructed for the other problems defined in Appendix B. The following example shows an effect on the welfare.

EXAMPLE C.1. There are two buyers and two sellers, and two non-intersecting (incomparable) sets X = {buyers} and Y = {sellers}. If the values of the buyers are 101, 100 and those of the sellers are 150, 1, then the X-external mechanism will yield a gain from trade of 0 and the Y-external mechanism will yield a gain from trade of 100. Conversely, if the buyers' values are 100, 1 and the sellers' are 2, 3, the X-external mechanism will yield a gain from trade of 98 and the Y-external mechanism will yield a gain from trade of zero.

The example clearly shows that the difference between the X-external and the Y-external mechanism is unbounded, although, as shown above, the fraction each of them reduces can be bounded, and therefore the multiplicative ratio between them can be bounded (as a function of the number of trades). On the revenue side we cannot even bound the ratio, as seen from the following example:

EXAMPLE C.2. Consider k buyers with value 100 and k + 1 sellers with value 1. If X = {buyers} then there is no need to reduce any trade; all of the buyers receive the good and pay 1, and k of the sellers sell and each of them receives 1. This yields a net revenue of zero. If Y = {sellers} then one must reduce a trade! This means that all of the trading buyers pay 100 while all of the trading sellers still receive 1. The revenue is then 99k. Similarly, an example can be constructed that yields much higher revenue for the X-external mechanism as compared to the Y-external mechanism.

The above examples refer to sets X and Y which do not intersect and are incomparable. The following theorem compares the X-external and Y-external mechanisms for procurement class domains where X is a subset of Y.

PROOF. This follows from the fact that for any ordering D, any procurement set s that is reduced in the X-external mechanism is also reduced in the Y-external mechanism. Therefore, the critical value is no less in the Y-external mechanism than in the X-external mechanism.

REMARK C.1. For any two sets X, Y it is easy to build an example in which the X-external and Y-external mechanisms reduce the same procurement sets, so the inequality is weak.

Theorem C.1 shows an inequality in welfare as well as in payments, but it is easy to construct an example in which the revenue increases for X as compared to Y, as well as the opposite. This suggests that in general we want X to be as small as possible, although in some domains it is not possible to compare different X's.
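As a concrete illustration of Problem B.1, the deterministic trade reduction referenced there can be sketched in a few lines. This is a minimal sketch under our own naming (`trade_reduction` is not a name from the paper), not the full GTR procedure: match the highest-value buyer with the lowest-value seller, keep the pairs with positive gain from trade, and remove the lowest-value procurement set, whose values then serve as critical prices for the remaining trades.

```python
# Minimal sketch of deterministic trade reduction in a homogeneous
# double-sided auction (Problem B.1). Naming and structure are ours;
# this illustrates the idea, not the paper's full GTR procedure.

def trade_reduction(buyer_values, seller_values):
    """Return (trading_buyers, trading_sellers, buyer_price, seller_price)."""
    buyers = sorted(buyer_values, reverse=True)   # highest-value buyers first
    sellers = sorted(seller_values)               # lowest-value sellers first
    # Procurement sets (buyer, seller) with positive gain from trade.
    pairs = [(b, s) for b, s in zip(buyers, sellers) if b - s > 0]
    if not pairs:
        return [], [], None, None
    # Reduce the lowest-value procurement set (the last pair): its buyer
    # and seller values price the remaining trades, giving IC and BB.
    b_crit, s_crit = pairs[-1]
    kept = pairs[:-1]
    return [b for b, _ in kept], [s for _, s in kept], b_crit, s_crit

buyers, sellers, p_buy, p_sell = trade_reduction([10, 8, 5], [2, 4, 9])
# Positive-gain pairs are (10, 2) and (8, 4); the pair (8, 4) is reduced,
# so the one remaining trade has the buyer pay 8 and the seller receive 4,
# leaving a non-negative budget surplus of 4.
```

The budget surplus arises because each trading buyer pays the reduced buyer's value while each trading seller receives the reduced seller's value, and the reduced pair had positive gain from trade.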
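The welfare asymmetry of Example C.1 can be checked numerically. The sketch below is our own simplified reading of external competition for Problem B.1 (the name `external_gain` and the competition test are ours): pairs are formed by matching the highest-value buyers with the lowest-value sellers, and the lowest-gain pair is reduced unless a non-trading player outside X can still form a non-negative-gain trade, i.e., provides the requisite external competition.

```python
# Sketch of X-external gain from trade for Problem B.1, under a simplified
# reading of external competition; naming and the competition test are ours.

def external_gain(buyer_values, seller_values, x_side):
    """Gain from trade when X is the set of buyers or of sellers."""
    buyers = sorted(buyer_values, reverse=True)
    sellers = sorted(seller_values)
    # Positive-gain pairs form a prefix of the high-buyer/low-seller matching.
    pairs = [(b, s) for b, s in zip(buyers, sellers) if b - s > 0]
    if not pairs:
        return 0
    k = len(pairs)
    if x_side == 'buyers':
        # Competition must come from outside X: a spare seller whose value
        # still yields non-negative gain with the lowest kept buyer.
        comp = sellers[k] if k < len(sellers) else None
        ok = comp is not None and pairs[-1][0] - comp >= 0
    else:
        comp = buyers[k] if k < len(buyers) else None
        ok = comp is not None and comp - pairs[-1][1] >= 0
    kept = pairs if ok else pairs[:-1]   # otherwise reduce the lowest pair
    return sum(b - s for b, s in kept)

# Example C.1: buyers 101, 100 and sellers 150, 1.
# X = {buyers} yields gain 0; Y = {sellers} yields gain 100.
```

With buyers 101, 100 and sellers 150, 1, the only positive-gain pair is (101, 1); a spare seller of value 150 cannot compete, so the X-external mechanism reduces the trade, while the spare buyer of value 100 can, so the Y-external mechanism keeps it, matching Example C.1.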